• Authors: Daniel Kokotajlo, Scott Alexander, Thomas Larsen, Eli Lifland, Romeo Dean
  • Published 3 April 2025

AI 2027 report

We predict that the impact of superhuman AI over the next decade will be enormous, exceeding that of the Industrial Revolution.

Topics covered

  • AI Agents (Unimpressive and error-prone at first, but companies insert them into workflows regardless and they begin to replace some employee tasks. And they keep getting better)
  • Autonomous Coding
  • AI Research (AIs creating better AIs)

After being trained to predict internet text, the model is trained to produce text in response to instructions. This bakes in a basic personality and “drives.” For example, an agent that understands a task clearly is more likely to complete it successfully; over the course of training the model “learns” a “drive” to get a clear understanding of its tasks. Other drives in this category might be effectiveness, knowledge, and self-presentation (i.e. the tendency to frame its results in the best possible light).


“Unlike ordinary software, our models are massive neural networks. Their behaviors are learned from a broad range of data, not programmed explicitly. Though not a perfect analogy, the process is more similar to training a dog than to ordinary programming.” —OpenAI


Reviewed (with some counterarguments) in Alberto Romero’s newsletter, The Algorithmic Bridge.

YouTube interview - Scott Alexander and Daniel Kokotajlo interviewed by Dwarkesh Patel

Further reading