EP 549: Custom GPTs, Gems and Projects. What they are and why you need to use them
🎯 Summary
This episode of the Everyday AI Show focuses on the critical, yet often overlooked, shift from using generic “new chat” sessions in AI chatbots to leveraging Custom GPTs (OpenAI), Gems (Google), and Projects (OpenAI/Claude). The host argues that failing to utilize these persistent, customized environments leads to significant time waste due to constant context re-establishment and disorganized chat histories.
The core narrative arc centers on demonstrating the superior efficiency and output quality achieved when an LLM is pre-loaded with specific instructions, context, and proprietary data via these dedicated tools. The host conducts a live, unedited demonstration comparing the most powerful models available across these four tools (Gemini 2.5 Pro, GPT-4o, and Claude Opus 4) on a complex task requiring analysis of extensive proprietary business data.
1. Focus Area
The discussion centers on Advanced LLM Workflow Optimization using platform-specific persistent AI tools: OpenAI’s Custom GPTs and Projects, Google’s Gems, and Anthropic’s Projects. The main theme is moving beyond ad-hoc prompting to structured, context-aware AI interaction for professional use cases.
2. Key Technical Insights
- Model Access Parity: OpenAI recently enabled Custom GPTs to use its most powerful models (such as GPT-4o), a crucial update that initially flew under the radar and makes GPTs far more capable than previously assumed.
- Tool Differentiation: While GPTs/Gems are personalized assistants, Projects (OpenAI/Claude) offer access to advanced features like Canvas Mode (OpenAI) and Deep Research capabilities within the project context, features often unavailable in the simpler GPT/Gem interfaces.
- Data Handling Constraints: The host noted a practical limitation where Claude had a smaller file size limit (under 30MB) compared to Google Gems and OpenAI GPTs, preventing the upload of a very large newsletter data file.
3. Business/Investment Angle
- Productivity Trap: Relying solely on “new chat” sessions is framed as a productivity trap, as the time spent re-explaining context, brand voice, and uploading files negates the time saved by using AI.
- Context as Competitive Advantage: Properly configuring these persistent tools by uploading proprietary data (analytics, search console data, internal documents) and defining brand voice is presented as a way to create a personalized, high-value AI asset that rivals the initial insights provided by expensive external consultants (a programmatic sketch of this "configure once, reuse everywhere" idea follows this list).
- Domain Expertise Encapsulation: These tools allow users to effectively “unload their brain” and domain expertise into the AI, ensuring consistent, high-quality outputs across all future interactions, regardless of the specific task.
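The episode treats Custom GPTs, Gems, and Projects as no-code features, but the underlying pattern can be sketched programmatically. Below is a minimal, illustrative Python sketch using the OpenAI Python SDK; the BRAND_CONTEXT text and the ask helper are hypothetical examples, not anything shown in the episode. The point it demonstrates is the one the host makes: define context and voice once, then reuse it on every call instead of re-explaining it in each new chat.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical persistent instructions, analogous to what you would paste
# into a Custom GPT, Gem, or Project's custom-instructions field.
BRAND_CONTEXT = """You are the Everyday AI content assistant.
Voice: conversational, practical, aimed at business professionals.
Ground recommendations in the newsletter analytics you have been given."""

def ask(question: str, model: str = "gpt-4o") -> str:
    # The reused "system" message plays the role the GPT/Gem builder UI plays:
    # context is established once and travels with every request.
    response = client.chat.completions.create(
        model=model,
        messages=[
            {"role": "system", "content": BRAND_CONTEXT},
            {"role": "user", "content": question},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(ask("Suggest three subject lines for next week's newsletter."))
```

The design choice mirrors the episode's argument: the expensive part of a "new chat" workflow is re-establishing context, so pushing that context into a persistent layer (a system prompt here, a GPT/Gem/Project in the UI) pays off on every subsequent interaction.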
4. Notable Companies/People
- OpenAI: Mentioned for Custom GPTs and Projects, and the recent enabling of advanced model usage within GPTs.
- Google: Mentioned for Gems and the integration capabilities within Google Workspace apps (Drive, Gmail) when using Gems.
- Anthropic (Claude): Mentioned for Claude Projects and the use of the Model Context Protocol (MCP), which the host describes as roughly a web-based equivalent of an API (a minimal server sketch follows this list).
- Jordan Wilson (Host): Drives the demonstration by using his own company’s (Everyday AI) extensive operational data for the live test.
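The episode mentions MCP only in passing and shows no code. Purely as an illustration, and assuming the official Python MCP SDK (the mcp package) with its FastMCP helper, a minimal server exposing one tool might look like the sketch below; the newsletter-stats server name and the open_rate tool are made up for this example.

```python
# Minimal MCP server sketch (assumed API from the Python MCP SDK's FastMCP helper;
# the server name, tool, and data are hypothetical).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("newsletter-stats")

@mcp.tool()
def open_rate(issue_id: str) -> float:
    """Return the open rate for a newsletter issue (hypothetical data source)."""
    # A real server would query your analytics store here;
    # a fixed value keeps the sketch self-contained.
    return 0.42

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio so an MCP client can call it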
5. Future Implications
The industry is moving rapidly toward persistent, context-rich AI environments. The future of professional AI usage involves setting up these specialized agents once and using them continuously, transforming them from simple chat interfaces into integrated, knowledgeable digital colleagues that retain institutional memory and context.
6. Target Audience
This episode is highly valuable for AI Practitioners, Business Leaders, and Power Users who rely on LLMs daily for complex, recurring tasks and need to maximize efficiency and output quality beyond basic prompting.
🏢 Companies Mentioned
OpenAI, Google, Anthropic (Claude)
đź’¬ Key Insights
"Never try something once and say this is good enough, right? You don't have to go the full benchmarking route of doing things 50 times and finding the mean or the median, the average. You don't have to do that. But generative AI is generative, right? I would always encourage you, especially if it's something that you're going to do over and over and over and roll it out within your organization, you should at least be testing it a minimum of five times minimum."
"But as I tell you all every time, you need to be reading this chain of thought. This tells you because we're using these either reasoning models or hybrid models that go between reasoning and non-reasoning, but the quality of my prompt was very low, right? The custom instructions were not very great."
"Think of not just how much time you're saving, but how much better the outputs are going to be. Now, even for me, even if I don't necessarily have a data question for Everyday AI, I should probably still just use this, right? Because again, now this gives these models context, a huge amount of context, even if I'm not asking for specific stats, if I'm just using any of these AI chatbots as a strategy partner, as a brainstorming partner to help me plan things, why wouldn't I want to give it access to all of this information?"
"Will this, like, will this replace like hiring a big consultancy? Maybe. Bless me, honest, I couldn't afford to go hire a six-figure consultant, but I think right here, we have the making of it, right?"
"OpenAI actually didn't really announce it at first. They updated some projects... They enabled anyone to use any version of their model for GPTs, and this is why it's huge."
"The new Canvas mode in Google Gemini is so freaking good. Probably one of the most unique things of any AI"