EP 559: ChatGPT’s Updated Custom GPTs: What’s New and How They Work
🎯 Summary
This episode of the Everyday AI Show focuses entirely on the significant recent updates to OpenAI’s Custom GPTs feature, highlighting how these changes dramatically increase their utility for business and individual workflows. The host argues that the initial 2023 launch was overhyped because the GPTs lacked access to the best underlying models, a limitation that has now been resolved.
1. Focus Area
The primary focus is the enhancement of Custom GPTs following OpenAI’s latest model releases. Key discussion points include the expanded model selection (especially the addition of OpenAI’s reasoning models), practical demonstrations of complex, multi-step GPTs, and the strategic value of using these customized tools for repetitive knowledge work.
2. Key Technical Insights
- Expanded Model Support: Creators can now select from the full lineup of ChatGPT models (GPT-4o, o4-mini, o3, etc.) when building or updating a GPT, letting them match the model to the task (e.g., a faster model for simple lookups, an advanced reasoning model for complex work).
- Action Limitation: A key technical caveat is that GPTs using Custom Actions (external APIs and webhooks) are currently restricted to the GPT-4o model, while GPTs without actions can use the full model picker; a minimal sketch of what an action definition looks like appears after this list.
- Agentic Capabilities of o3: The host notes that OpenAI’s o3 reasoning model exhibits impressive agentic behavior inside a GPT, dynamically switching between research, code execution (via Code Interpreter), and knowledge retrieval mid-task, making it a powerful workhorse comparable to models like Gemini 2.5 Pro.
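Custom Actions are configured by supplying an OpenAPI schema in the GPT builder’s Actions panel. The snippet below is a minimal sketch of such a schema, written as a Python dict and serialized to JSON for pasting into the builder; the API title, server URL, and endpoint are hypothetical placeholders, not anything described in the episode. Per the caveat above, a GPT wired to an action like this currently runs on GPT-4o.

```python
import json

# Minimal, illustrative OpenAPI 3.1 schema for a Custom GPT Action.
# The API title, server URL, and endpoint are hypothetical placeholders.
action_schema = {
    "openapi": "3.1.0",
    "info": {"title": "Meeting Notes API", "version": "1.0.0"},
    "servers": [{"url": "https://example.com/api"}],  # placeholder server
    "paths": {
        "/meetings/latest": {
            "get": {
                "operationId": "getLatestMeetingNotes",
                "summary": "Fetch the most recent meeting transcript",
                "responses": {
                    "200": {
                        "description": "Plain-text transcript of the meeting",
                        "content": {
                            "text/plain": {"schema": {"type": "string"}}
                        },
                    }
                },
            }
        }
    },
}

# Serialize to JSON for pasting into the GPT builder's Actions panel.
print(json.dumps(action_schema, indent=2))
```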
3. Business/Investment Angle
- Democratization of Custom AI: The updates let companies build customized AI assistants for specific roles without the heavy development costs previously required, effectively offering a no-code path to specialized internal LLM applications.
- ROI Acceleration: By saving time on repetitive knowledge work (research, summarization, synthesis), businesses can quickly realize ROI on their Gen AI investment by standardizing complex workflows into easily accessible GPTs.
- Strategic Model Selection: Businesses can now balance cost and performance by assigning the most appropriate model to each custom tool (e.g., a faster, cheaper model for routine tasks; a reasoning model for critical analysis); a rough sketch of that routing idea follows this list.
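The episode frames model choice as a setting inside the GPT builder, but the same cost-aware routing idea is easy to picture in code. Below is a minimal sketch assuming the OpenAI Python SDK and an API key in the environment; the task names, routing table, and specific model IDs are illustrative assumptions, not recommendations from the episode.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Illustrative routing table: a cheap, fast model for routine work and a
# reasoning model for deeper analysis. Model IDs are assumptions; check
# them against the live model list before relying on this.
MODEL_BY_TASK = {
    "summarize": "gpt-4o-mini",
    "draft_email": "gpt-4o",
    "deep_analysis": "o3",
}


def run_task(task: str, prompt: str) -> str:
    """Send the prompt to whichever model the routing table assigns to this task."""
    model = MODEL_BY_TASK.get(task, "gpt-4o")  # default to the general workhorse
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    print(run_task("summarize", "Summarize our Q3 planning notes in five bullets."))
```

The pattern is what matters here: send routine traffic to a cheap, fast model and reserve the reasoning model for work that justifies its extra cost and latency.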
4. Notable Companies/People
- OpenAI: The developer of the Custom GPTs feature and the underlying models (GPT-4o, etc.).
- Everyday AI (Host/Company): Described as a consultancy that trains teams on leveraging ChatGPT Enterprise and ChatGPT Team, positioning itself as a leader in practical AI implementation.
5. Future Implications
The conversation suggests the industry is moving toward highly specialized, modular AI agents accessible through simple interfaces. The ability to easily swap models within a single custom tool indicates a future where AI deployment is highly flexible, cost-aware, and tailored to specific performance envelopes rather than relying on a single, monolithic model. The host also hints at advanced topics like “context stacking” and “actions” for future deep dives.
6. Target Audience
This episode is highly valuable for AI Practitioners, Business Leaders, Knowledge Workers, and Technology Managers who are already using ChatGPT Plus/Team and are looking for practical, actionable ways to move beyond basic prompting into building scalable, repeatable AI workflows within their organizations.
🏢 Companies Mentioned
- OpenAI (ChatGPT, Custom GPTs, GPT-4o, o3)
- Google (Gemini)
- Anthropic (Claude)
💬 Key Insights
"I essentially said, 'Yo, like, yeah, go do the normal meeting analyzer stuff. You know, find, give me an executive summary... But what's cool here is the things that we talked about in this meeting that we're like, 'Oh, yeah, we should look into A, B, and C.'... it went and did this."
"The 3.5 model in from OpenAI and Gemini 1.5 from Google, they are agentic in how they work because they make decisions on their own."
"I spend so much of my time just orchestrating large language models, right? I'm not going to pretend that I can research better than Gemini. I'm not going to pretend that I can write code better than Claude. Right. I'm not going to pretend that I can synthesize information better than ChatGPT. I can't, right?"
"the gap in terms of what these things can accomplish, what a GPT can actually do with a GPT-4.0, a non-reasoning model, in the 3.5, it's night and day in terms of capabilities."
"Also, the domain expertise. All right. I think now that you can use these reasoning models that are agentic in nature—that's the key thing—is you can use if you look at the 3.5 model, you can use everything that that model can do."
"Number one, better guidance. So all the different models, whether you're talking about GPT-4o that has a very high EQ, you know, 4.0, which is a fast workhorse, you can go all the way up to 3.5 Pro in your GPTs, right?"