AGI is still 30 years away — Ege Erdil & Tamay Besiroglu
🎯 Summary
This 188-minute episode features Ege Erdil and Tamay Besiroglu, founders of the new company Mechanize (focused on work automation), discussing their contrarian view that Artificial General Intelligence (AGI) is much further away than the prevailing Silicon Valley sentiment suggests.
1. Focus Area
The discussion centers on Artificial General Intelligence (AGI) timelines, the nature of technological acceleration, the limitations of current AI capabilities, and the necessary prerequisites for achieving broad economic transformation. They contrast the concept of an “intelligence explosion” with a more complex, multi-faceted technological revolution akin to the Industrial Revolution.
2. Key Technical Insights
- The “Intelligence Explosion” Analogy is Misleading: The guests argue that focusing solely on an “intelligence explosion” is like focusing only on “horsepower” during the Industrial Revolution. True transformation requires numerous complementary innovations across various sectors (finance, law, infrastructure), not just one core capability leap.
- Capability Unlocks vs. Compute Scaling: Progress in AI appears rapid because major capabilities (like sophisticated reasoning or coding) are unlocked periodically, often coinciding with large steps in compute scaling (e.g., 9-10 orders of magnitude since AlexNet; see the back-of-the-envelope sketch after this list). However, they suggest the remaining necessary unlocks (e.g., long-horizon agency, full multi-modality) might require significantly more compute scaling than the economy can easily sustain, or fundamentally new innovations beyond current scaling laws.
- The “Unhobbling” Hypothesis vs. New Capabilities: They debate whether current models are “baby AGIs” that just need “unhobbling” (better context, agency scaffolding, post-training) or if entirely new, difficult capabilities must be engineered. They lean toward the latter, noting that while LLMs excel at knowledge retrieval (like answering Pokemon questions), they fail at executing complex, novel tasks within a dynamic environment (like playing an unknown Steam game).
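To make the "9-10 orders of magnitude" figure concrete, here is a minimal back-of-the-envelope sketch in Python. The FLOP values are illustrative assumptions chosen only to show how such a ratio is computed; they are not figures cited in the episode, and published training-compute estimates vary by source.

```python
import math

# Illustrative, assumed training-compute figures in FLOP; not values from the episode.
alexnet_training_flop = 5e17       # AlexNet (2012); rough published estimates are around this order
frontier_training_flop = 5e26      # a hypothetical frontier-scale training run

orders_of_magnitude = math.log10(frontier_training_flop / alexnet_training_flop)
print(f"Scale-up since AlexNet: ~{orders_of_magnitude:.0f} orders of magnitude")
# With these placeholder numbers the ratio comes out to about 9 orders of
# magnitude, in the range quoted in the discussion.
```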
3. Business/Investment Angle
- Longer Timelines for Full Automation: Tamay predicts a drop-in replacement for remote workers around 2045, while Ege is slightly more bullish but still suggests timelines extending significantly beyond the common 2027-2030 predictions. This implies the immediate economic disruption may be slower than the most bullish forecasters anticipate.
- Job Automation Nuance: Any single task a person performs is typically only a small fraction of their overall job. Automating one task (e.g., booking a flight) does not automate the entire job, which suggests the fraction of the economy currently automated by AI is very small.
- Revenue as a Proxy for Utility (Debated): The discussion touches on whether massive revenue (e.g., OpenAI hitting $100B) is evidence of transformative intelligence. They caution that people pay trillions for non-transformative things (like oil), suggesting high revenue alone isn’t definitive proof of AGI readiness.
4. Notable Companies/People
- Ege Erdil & Tamay Besiroglu: Founders of Mechanize, advocating for longer AGI timelines and a broader view of technological change.
- Robin Hanson: Mentioned as an example of someone whose extrapolation of current automation trends suggests centuries until full automation.
- OpenAI: Referenced regarding their potential $100B revenue milestone, which they see as a weak signal for AGI progress unless the figure is significantly higher (e.g., $500B).
- WorkOS: Mentioned in an ad break as a company helping software firms transition from consumer-grade products to enterprise-ready solutions (SSO, audit logs), highlighting the difficulty of building necessary infrastructure features.
5. Future Implications
The conversation suggests that the path to AGI is not a straight extrapolation of current LLM performance. Significant, hard-to-predict breakthroughs in areas like long-horizon agency, robust multi-modal integration, and general environmental interaction (not just text-based reasoning) are still required. If these breakthroughs prove as difficult as the compute-scaling constraints suggest, AGI remains decades away.
6. Target Audience
This episode is highly valuable for AI researchers, venture capitalists, technology strategists, and technical professionals interested in the fundamental drivers and realistic constraints shaping the AGI timeline, moving beyond hype cycles.
🏢 Companies Mentioned
Mechanize, OpenAI, WorkOS
💬 Key Insights
"for humans you like every human has to learn things from scratch basically like they are born and then they have a certain and a lifetime learning that they have to do so in human learning there is a ton of duplication well for an AI system it could just learn once you just have one huge train run which a tons of data and then that run could be deployed everywhere"
"the fact that I can like this is the skill I need or the set of skills I need and I can have a worker and just like I can have a thousand workers in parallel if there's something that has a high elasticity of demand I think is like probably along with the transformative AI the most underrated tangible thing that like you need to understand about what the future AI society will look like"
"firms right now have two of the three relevant criteria for evolution they have selection and they have variation but they don't have high fidelity replication and you could imagine a much more fast-paced and intense sequence of evolution for firms once you once you have this final piece click in"
"the crucial point we were making was that people tend to overemphasize and think of AI from the perspective of how smart individual copies will be and if you actually want to understand the ways in which they are superhuman you want to focus on their collective advantages which because of biology we are just precluded from"
"there's an argument about growth levels so we're saying we're gonna see 30% growth per year instead of 3% they responded at with an objection about levels so they say well how much more efficient how much more valuable can you make like hairdressing or like taking flights or whatever or going to a restaurant and like that is just fundamentally the wrong kind of objection"
"we can think about you know an h100 does about there are some estimates of how much computation the human brain does per second and it's about one e15 flopper so it's a bit a bit unclear but and then it turns out that an h100 roughly does on that order of computation so you can ask the question of how long does it take for an h100 to pay itself back if you run the software of the human brain if you run the software of the human brain you can then deploy that in the economy and earn say human wages on the order of 50 to 100k a year or whatever in the US and so then it pays itself back because it costs on the order of 30k per h100 and so you get a doubling time of maybe on the order of a year"