Why I don’t think AGI is right around the corner
🎯 Summary
This podcast episode features a narrated reading of a blog post from June 3rd, 2025, in which the host argues against the immediate arrival of Artificial General Intelligence (AGI), despite acknowledging the massive recent progress in Large Language Models (LLMs). The central thesis is that current LLMs fundamentally lack the capability for continual, organic, on-the-job learning, which is the key differentiator between today’s powerful tools and truly transformative, human-level general intelligence.
1. Focus Area: The discussion centers on the limitations of current LLMs in achieving AGI, specifically focusing on the lack of continual learning and the challenges in developing reliable, end-to-end computer use agents. The comparison is drawn between the static capabilities of current models and the adaptive nature of human employees.
2. Key Technical Insights:
- The Continual Learning Bottleneck: Current LLMs are stuck with their out-of-the-box abilities. While prompt engineering offers minor adjustments, it cannot replicate the way humans iteratively build context, interrogate their failures, and improve on the job (analogized to learning the saxophone).
- Context Window Brittleness: Long context windows (like Claude Code’s compaction feature) offer temporary session memory, but distilling rich, task-specific experience into a text summary is brittle, especially outside text-heavy domains like software engineering (a minimal sketch of the compaction idea follows this list).
- Computer Use Modality Gap: Developing reliable computer use agents (e.g., doing taxes end-to-end) is significantly harder than language tasks because models must process multimodal inputs (images/video), and no sufficiently large pre-training corpus of complex UI-interaction data exists.
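To make the compaction mechanism concrete, here is a minimal sketch, assuming a crude token counter and a placeholder summarize() step; it illustrates the concept only and is not Claude Code’s actual implementation.

```python
# Minimal sketch of context compaction: when the running transcript
# outgrows the context budget, older turns are folded into a text summary.
# TOKEN_BUDGET, KEEP_RECENT, and summarize() are illustrative assumptions.

TOKEN_BUDGET = 8_000   # assumed context limit, in rough tokens
KEEP_RECENT = 10       # the most recent turns are kept verbatim


def count_tokens(text: str) -> int:
    # Crude stand-in for a real tokenizer: ~4 characters per token.
    return max(1, len(text) // 4)


def summarize(turns: list[str]) -> str:
    # Placeholder for an LLM call that distills old turns into text.
    # The brittleness discussed above lives here: whatever the summary
    # omits is gone for good.
    return "[summary of %d earlier turns]" % len(turns)


def compact(history: list[str]) -> list[str]:
    """Fold older turns into one summary once the budget is exceeded."""
    if sum(count_tokens(t) for t in history) > TOKEN_BUDGET \
            and len(history) > KEEP_RECENT:
        old, recent = history[:-KEEP_RECENT], history[-KEEP_RECENT:]
        history = [summarize(old)] + recent
    return history


# A long session collapses to one summary line plus the last 10 turns.
history = ["turn %d: %s" % (i, "details " * 30) for i in range(500)]
print(compact(history)[0])   # -> "[summary of 490 earlier turns]"
```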
3. Business/Investment Angle:
- Limited Near-Term Transformation: The host disagrees with the view that current LLMs are transformative even if progress stalls; he estimates less than 25% of white-collar employment would be automated without continuous learning capabilities.
- Value Discontinuity Upon Solving Learning: The true economic discontinuity—a broadly deployed intelligence explosion—will only occur when continuous, organic learning is solved, making the next two decades potentially more impactful than the immediate future.
- Skepticism on Near-Term Agent Rollout: The host is skeptical that fully autonomous, week-long computer use agents (like end-to-end tax filing) will arrive by the end of 2026, suggesting the difficulty is comparable to the leap from GPT-2 to GPT-4 in language (see the agent-loop sketch after this list).
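The “GPT-2 era for computer use” framing (quoted under Key Insights below) maps onto a simple observe-act loop. The sketch below is purely illustrative; the Action type and the env/model interfaces are assumptions, not any vendor’s real agent API.

```python
# Hedged sketch of the observe-act loop behind end-to-end computer use.
# Every interface here (Action, capture_screen, choose_action, execute,
# task_succeeded) is hypothetical.

from dataclasses import dataclass


@dataclass
class Action:
    kind: str      # "click", "type", "scroll", or "done": the unfamiliar
    x: int = 0     # action primitives the quoted insight below mentions
    y: int = 0
    text: str = ""


def run_agent(model, env, max_steps: int = 10_000) -> bool:
    """One screenshot in, one UI primitive out, repeated for hours.

    Note the sparse reward: the only learning signal arrives at the
    very end, when the whole task has either succeeded or failed.
    """
    for _ in range(max_steps):
        screenshot = env.capture_screen()          # multimodal input (image)
        action = model.choose_action(screenshot)   # no large pre-training
        if action.kind == "done":                  # corpus exists for this
            break
        env.execute(action)
    return env.task_succeeded()                    # single end-of-episode signal
```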
4. Notable Companies/People:
- Rudiger Dornbusch: Quoted for the adage: “Things take longer to happen than you think they will, and then they happen faster than you thought they could.”
- Sholto Douglas and Trenton Bricken: Mentioned for holding the opposing view that even stalled AI progress would still automate 25% of white-collar work within five years, due to the ease of collecting data on job tasks.
- Claude Code/Claude 3 Opus/Gemini 2.5: Cited as examples of models demonstrating impressive, albeit static, reasoning capabilities.
- DeepSeek (R1 paper): Referenced to illustrate how even seemingly simple algorithmic innovations (like RL on verifiable problems) required significant engineering time (two years from GPT-4 to o1).
5. Future Implications: The industry is heading toward a massive inflection point, but it is delayed until the continual learning bottleneck is solved (predicted around 2032 for human-level on-the-job learning). Once solved, the ability of AI models to amalgamate learnings across all copies will lead to a rapid, broad deployment of intelligence, potentially resembling a “super intelligence” without requiring further fundamental algorithmic breakthroughs (a toy illustration follows).
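A toy illustration of the amalgamation idea, assuming each copy’s on-the-job experience can be captured as a weight delta and merged by a simple federated-style average; the episode describes the capability, not this particular mechanism.

```python
# Toy sketch of "amalgamating learnings across all copies": each deployed
# copy contributes a weight update from its own experience, and a central
# merge averages them into new base weights. Illustrative only.

def merge_updates(base_weights: list[float],
                  per_copy_deltas: list[list[float]]) -> list[float]:
    """Average the deltas learned by each copy and apply them to the base."""
    n = len(per_copy_deltas)
    merged = []
    for i, w in enumerate(base_weights):
        avg_delta = sum(delta[i] for delta in per_copy_deltas) / n
        merged.append(w + avg_delta)
    return merged


# One copy learns tax prep, another learns a client's codebase; after a
# merge, every copy starts from weights reflecting both experiences.
base = [0.5, -1.2, 3.0]
deltas = [[0.1, 0.0, -0.2],    # copy A's on-the-job update
          [0.3, -0.1, 0.0]]    # copy B's on-the-job update
print(merge_updates(base, deltas))   # -> [0.7, -1.25, 2.9]
```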
6. Target Audience: AI/ML Professionals, Tech Strategists, and Venture Capitalists focused on long-term AI timelines and the practical deployment challenges of current foundation models in enterprise settings.
💬 Key Insights
"I think we're in the GPT-2 era for computer use, but we have no pre-training corpus, and the models are optimizing for a much sparser reward over a much longer time horizon using action primitives that they're unfamiliar with."
"An AI that is capable of online learning might functionally become a super intelligence quite rapidly without any further algorithmic progress."
"When we do solve continuous learning, we'll see a huge discontinuity in the value of these models."
"The fundamental problem is that LLMs don't get better over time the way a human would. This lack of continual learning is a huge, huge problem."
"AGI timelines are very lognormal. It's either this decade or bust."
"An AI that learns on the job as easily, organically, seamlessly, and quickly as a human for any white-collar work...: 2032."