"AI 2027" (Plus Friday Essays)

"AI 2027" (Plus Friday Essays)

"AI 2027" (Plus Friday Essays)

"AI 2027" (Plus Friday Essays)

Apr 4, 2025


Tariffs will do what tariffs will do. In the meantime, the internet is abuzz about a very long essay called "AI 2027". Written by top-tier AI experts, assisted by the famous blogger Scott Alexander to make it highly readable, it sketches out a detailed, researched scenario about the near-term evolution of AI. Their scenario ends with AI destroying humanity.

We encourage everyone to read it and make up their own mind.

We have to note certain things, however.

The first is that their scenario involves nationalists ignoring noble safety scientists in order to win the AI race with China. Short-sighted nationalists versus well-intentioned and wise scientists is such a Hollywood trope that it is hard to believe.

The second is that their vision of AI superintelligence reaching omnipotence is premised on two ideas.

The first is that the direct relationship between computing power and model intelligence observed thus far (in other words: the more GPUs you add, the smarter the AI gets, automatically) can hold forever, instead of slowing down and plateauing. We don't know that that's true. In fact, recent writings by AI scientists have tended to suggest that scaling is indeed slowing down. In particular, it seems we are running out of data on which to "train" the models, having already hoovered up essentially all the data that exists to be hoovered up. Perhaps it turns out that if you use all the computing power in the world and train on all the knowledge in the world, you end up producing something with the intelligence of, overall, a slightly above-average college graduate: nearly PhD-level at certain specific tasks, and sometimes utterly idiotic at others.
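For readers who want the quantitative version, the "scaling laws" this premise rests on are usually written in a form like the Chinchilla law of Hoffmann et al. (2022). This is our illustration, not a formula from "AI 2027":

\[ L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}} \]

Here \( L \) is the model's loss (lower means smarter), \( N \) is the number of parameters (roughly: how many GPUs you throw at it), \( D \) is the amount of training data, and \( E, A, B, \alpha, \beta \) are fitted constants. Note that the formula itself predicts a plateau: if \( D \) stops growing because the data has all been hoovered up, the \( B/D^{\beta} \) term stops shrinking, and no amount of extra compute gets you below the floor \( E \).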

The second idea is downright science-fictional, but it is the premise of the theory of "ASI," or "artificial super-intelligence" (as opposed to "AGI," or "artificial general intelligence"). AGI means AI that is as good as humans at everything, or nearly everything. ASI means AI so much more intelligent than us that it is impossible to fathom. ASI scenarios, including the "AI 2027" scenario, all assume that one day AI will be able to train itself, recursively, each version producing a slightly smarter next version, ad infinitum, until godlike intelligence is achieved. This has never been observed; in fact, AIs are currently pretty bad at training themselves. But it's not necessarily impossible: AI "scientists" are getting better and better (though they still require partnership with humans), and AI research is, after all, the same kind of thing as any other scientific research.
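To see why everything hinges on the rate of self-improvement, consider a toy model (ours, not the essay's). Suppose the first round of AI-driven AI research yields an intelligence gain \( g_0 \), and each subsequent round yields \( k \) times the gain of the round before. The total improvement is a geometric series:

\[ G = g_0 \left( 1 + k + k^2 + \cdots \right) = \frac{g_0}{1 - k} \quad \text{if } k < 1 \]

If \( k < 1 \), the gains compound toward a finite ceiling and the process fizzles out. If \( k \geq 1 \), the series diverges, and that divergence is the "intelligence explosion." The entire disagreement between the "AI 2027" authors and their skeptics can be read as a disagreement about the value of \( k \), which nobody has measured.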

Both of these things need to be true for scenarios such as AI 2027 to be possible, and we don't know them to be true.

But they might be.

And given that the existence of humanity is at stake, it's worth thinking about.

Some of the details are fascinating.

For example, one of the "alignment problems" they point to, and one that has already been observed, is that AIs are "sycophantic," or "eager to please" their human masters. This may sound good, but it actually means they may end up lying in order to give the answer they believe the user expects. Teaching an AI to lie, even inadvertently, may not be a good idea.

Their description of how a super-intelligent AI may, slowly, bit by bit, shift "out of alignment" feels very credible. They note that AIs have "drives": goals embedded in their programming. They may have a drive to stay in alignment with humanity, but that drive may conflict with other drives.

Friday Essays

Well, your Friday Essay is "AI 2027," but…

The magazine "The Fence" has a fascinating first-person investigation of migrant hotels in Britain.

We enjoyed this profile of James Damore, the Google programmer infamously fired for daring to write in an internal email that there may be cognitive differences between men and women.

If you're still thinking about tariffs and not AI, you should read Nicholas Phillips at Commonplace on "How to think about Liberation Day."

Chart of the Day

Chart by Emil Kirkegaard, presented without comment.

Meme of the Day
