More AI Plans


9 min read

Aug 12, 2025


PREVIOUSLY ON POLICYSPHERE:

Analysis: The US-South Africa Crisis, Explained

Exclusive: New Think Tank Wants To Make The US A Manufacturing Superpower Again

Exclusive: "Nature Is Nonpartisan" Wants To Unite Americans Around Conservation

Exclusive: Imagine Operation Warp Speed, But For Every Drug

SUBSCRIBE to the Sphere Podcast on Spotify, YouTube, or Apple Podcasts.

More AI Plans

The good folks at the Institute for Progress have published what they're calling "The Launch Sequence," that is to say, a list of essays with "concrete, ambitious ideas to accelerate AI for science and security."

This initiative, introduced in an essay by Tim Fist, Tao Burga, and Tim Hwang, represents a significant departure from reactive AI governance approaches, instead advocating for deliberate technological steering at a critical juncture in AI's evolution. In particular, they focus on two central topics: science and security.

The authors argue that AI capabilities are advancing along a "jagged frontier," with exponential improvements in certain domains while others lag behind. They note that AI software engineering performance has been doubling roughly every seven months, and that some companies like Google now generate more than a quarter of their production code through AI. Yet despite these advances, only five percent of US businesses were using AI as of early last year, highlighting the uneven distribution of AI's benefits.

The initiative emerges from a fundamental premise: technological trajectories are not predetermined. As the authors state, "AI doesn't automatically solve the most important problems first, and it won't neutralize the new risks it creates by default." They argue that the United States, given its dominant position in the AI supply chain and status as the world's most powerful democracy, has both the responsibility and capability to shape AI's development path. US private sector investment in AI exceeded $109 billion in 2024, more than ten times that of any other country, providing leverage to influence global AI development.

So, why the focus on science and security?

On the science side, the authors identify numerous medical technologies that could save millions of lives annually if developed, including malaria vaccines, tuberculosis vaccines for adults, and stroke-reducing drugs. They argue that AI could dramatically accelerate the development of these technologies, but market incentives alone are insufficient to prioritize such public goods.

The security dimension addresses the reality that advanced AI capabilities present asymmetric risks. The same coding agents that boost productivity could be weaponized to exploit infrastructure vulnerabilities continuously. AI systems capable of accelerating medical research might also facilitate biological weapon development. The authors warn that "there's no iron law of computer science or economics that says defensive capabilities must grow in tandem with offensive capabilities," making proactive defensive technology development essential.

There's a political angle as well. The authors note that the CHIPS Act succeeded by appealing to both national security stakeholders and public science advocates; the implication is that those two topics are useful for building a broad coalition of support for their initiatives, which is fair enough.

The Launch Sequence explicitly targets projects that meet three criteria: they are unlikely to occur through existing commercial incentives, they can be achieved or fully established by 2030, and they are particularly important given rapid AI advances. These proposals go beyond traditional regulatory frameworks or abstract principles, instead offering specific technological and institutional innovations.

The authors emphasize that their approach differs from both unfettered acceleration and heavy-handed control. They reject licensing regimes for models above certain compute thresholds as "brittle, top-down control," while also dismissing proposals that are "poorly targeted or sorely lacking in ambition." Instead, they advocate for targeted interventions that leverage AI's capabilities to solve specific problems while building defensive infrastructure against emerging risks.

One of the most interesting planks of their proposals is their focus on metascience and linking the idea of reforming science to AI. "To realize the benefits of AI, we should redesign how science works," they write. They note, very aptly, that the science funding ecosystem has become dangerously bureaucratic, with researchers facing wait times of up to 20 months for grant funding and principal investigators spending nearly half their time on paperwork. They propose that agencies like NIH and NSF should adopt flexible award mechanisms such as Other Transactions Authority to enable institutional block grants supporting organizations like Arc Institute and FutureHouse, which have invested heavily in infrastructure-driven "team science." All of this is well and good, and indeed worth pursuing on its own. One almost gets the sense that they want to get these reforms done for their own sake and are just sprinkling some AI fairy dust on them to make them more attractive. And bully for them.

Another interesting idea relates to governance and AI safety: the authors propose that, rather than attempting to predict specific AI developments, we should build measurement infrastructure to track the frontier of AI capabilities across different domains. This would provide what they call an "adaptation buffer," that is to say, a period during which society can develop appropriate tools, infrastructure, and policies before new capabilities become widely available. For example, evidence of new bio-offensive capabilities in early model versions could trigger the build-up and distribution of personal protective equipment and the rollout of wastewater surveillance systems.

These are all very good ideas, and there's much, much more.

Policy News You Need To Know

#BLS — President Trump has announced his selection of Heritage economist EJ Antoni as the new commissioner of BLS. This has been controversial, and not just on the left. The criticism is that Antoni is (bluntly) a partisan hack and doesn't have the credentials for the job. We don't know Antoni, but we have consumed and used his work for a long time (his charts have often been featured here as Chart of the Day), and while he's certainly pugnacious on X dot com, we have never seen any evidence of these alleged problems.

#HumanCapital — American Moment is one of the most interesting institutions in DC, preparing and credentialing young smart conservatives and then placing them on the Hill and in the Admin and elsewhere. They have been enormously influential in the Transition, we happen to know. Well, they are stepping up their fellowship program with something that a zoomer will appreciate: pay. From the release: "With a salary increase, the Fellowship for American Statecraft remains the highest-paid internship in Washington, D.C. On Tuesday, American Moment announced a pay raise for its flagship program, the Fellowship for American Statecraft. Fellows will now receive a stipend of $3,250 per month for three months, plus a $250 signing bonus, 401(k) matching, gym benefits, and a $100 monthly credit for networking coffees and lunches. All told, each Fellow receives nearly $15,000 in total value over the course of the 12-week program — an unprecedented investment in young talent on the right-of-center." This is a big deal. Kids from non-privileged backgrounds need money to live, especially in a city like DC, and the conservative movement needs to identify these kids from non-coastal areas of America and bring them up. It's also a sign of American Moment's fundraising success, which doesn't surprise us one bit.

#Nukes — DOE has announced that 11 projects have been selected for its New Reactor Pilot Program. The program is basically about taking new reactor designs, such as small modular reactors, and accelerating the construction of prototypes and then their deployment. These technologies are very promising but have always been held back by regulation, which makes this very, very exciting.

#DCLiberation — Manhattan Institute Senior Fellow Charles Fain Lehman has a very good piece in The Atlantic on the crime situation in DC.

#DCLiberation #LawAndOrder — Speaking of: according to a report, DoD is planning to create a "Domestic Civil Disturbance Quick Reaction Force" with hundreds of National Guard troops to swiftly deploy to US cities during protests or civil unrest. This seems like a very good idea, especially given past weaponization of civil unrest for political purposes.

#AI #Chips #Chyna — Yesterday we set the record straight on the Admin's decision to allow nVidia to export its H20 chips to China, pointing out that these are not top-of-the-line chips and therefore there are no problematic national security implications. We also reported on the rumors that these chips even include backdoors to US systems. Well, now Bloomberg is reporting that the Chinese government is urging firms not to use these chips, "particularly for government-related purposes."

#Education — The America First Policy Institute, which has proven very influential on the administration, has just announced big hires to their education team: Max Eden and James Paul. From the release: "Max Eden, a nationally respected education scholar, most recently served on the Domestic Policy Council in the White House. With more than a decade of experience in education research and policy development, Eden is well known for his ability to translate ideas into action. As Director of Federal Education Policy, he will play a central role in AFPI’s efforts to dismantle the U.S. Department of Education, empower states and parents, and shape common sense policies that reflect the values of American families across the nation. James Paul joins AFPI with over a decade of practical implementation and principled leadership in education policy. Paul served as the inaugural Executive Director of the West Virginia Professional Charter School Board, and has extensive experience in academic research, including debunking Diversity, Equity and Inclusion programs in schools, school choice, and parental rights. As Director of State Education Policy, Paul will drive state-level victories that expand educational freedom and empower American families." We will be keeping an eye on them!

#Education — Speaking of education: you may have seen a recent piece in WaPo which tried to use a particular school district in Arizona (a state that recently implemented some very audacious school choice policies) as a kind of case study on how school choice is the devil. At the Informed Choice Substack, Marty Lueken and John Kristof debunk the WaPo's report, which is, of course, fake news.

#Energy — Remember when Spain had this massive blackout? The official report on the cause of the blackout is out, and the FT has a good article on it. Basically, the culprit is solar. The intermittency of solar power causes issues with grid stability. You need baseload power!

#Immigration — Stunning numbers from the good folks at the Center for Immigration Studies: "Analysis of the raw data from the Bureau of Labor Statistics’ (BLS) household survey, officially called the Current Population Survey (CPS), shows an unprecedented 2.2 million decline in the total foreign-born or immigrant population (legal and illegal) between January and July of this year. We preliminarily estimate that the number of illegal immigrants has fallen by 1.6 million in just the last six months. This is likely due to increased out-migration in response to stepped-up enforcement." In other words, the illegals are self-deporting.

#GameTheory — Most often, economics theory-building is not interesting. This is an exception, however. A new NBER paper by John S. Becko, Gene M. Grossman, and Elhanan Helpman creates a stylized example of a world in which countries pursue trade policy not just to maximize economic welfare, but also geopolitical advantage (shocking). They find that in such a world, the optimal tariff level is higher than in a world without such geopolitical rivalries. In other words, in a world of geopolitical competition between states—that is to say, the real world—free trade is suboptimal.

Chart of the Day

The Treasury estimates that customs duties will amount to 2.7% of federal revenue in Fiscal 2025. This is higher than last year but, surprisingly, not particularly high relative to historical norms.

Meme of the Day
