EP 549: Custom GPTs, Gems and Projects. What they are and why you need to use them

Unknown Source June 18, 2025 48 min
generative-ai artificial-intelligence google openai anthropic meta
66 Companies
64 Key Quotes
2 Topics
1 Insight

🎯 Summary

Podcast Episode Summary: EP 549: Custom GPTs, Gems and Projects. What they are and why you need to use them

This episode of the Everyday AI Show focuses on the critical, yet often overlooked, shift from using generic “new chat” sessions in AI chatbots to leveraging Custom GPTs (OpenAI), Gems (Google), and Projects (OpenAI/Claude). The host argues that failing to utilize these persistent, customized environments leads to significant time waste due to constant context re-establishment and disorganized chat histories.

The core narrative arc centers on demonstrating the superior efficiency and output quality achieved when an LLM is pre-loaded with specific instructions, context, and proprietary data via these dedicated tools. The host conducts a live, unedited demonstration across the four tools (Custom GPTs, OpenAI Projects, Gems, and Claude Projects), running each platform's most powerful models (Gemini 2.5 Pro, GPT-4o, and Claude Opus 4) on a complex task requiring analysis of extensive proprietary business data.

1. Focus Area

The discussion centers on Advanced LLM Workflow Optimization using platform-specific persistent AI tools: OpenAI’s Custom GPTs and Projects, Google’s Gems, and Anthropic’s Projects. The main theme is moving beyond ad-hoc prompting to structured, context-aware AI interaction for professional use cases.

2. Key Technical Insights

  • Model Access Parity: OpenAI recently enabled Custom GPTs to utilize their most powerful models (like GPT-4o), a crucial update that initially flew under the radar and that makes GPTs far more capable than previously assumed.
  • Tool Differentiation: While GPTs/Gems are personalized assistants, Projects (OpenAI/Claude) offer access to advanced features like Canvas Mode (OpenAI) and Deep Research capabilities within the project context, features often unavailable in the simpler GPT/Gem interfaces.
  • Data Handling Constraints: The host noted a practical limitation where Claude had a smaller file size limit (under 30MB) compared to Google Gems and OpenAI GPTs, preventing the upload of a very large newsletter data file.
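To make the pre-loading idea above concrete, here is a minimal sketch (not from the episode) of what tools like Custom GPTs, Gems, and Projects do conceptually: pair persistent instructions with proprietary context so every request starts from the same baseline instead of a blank "new chat." The file name, instruction text, and model name are illustrative assumptions, and this uses the standard OpenAI Python SDK rather than any GPT-builder API.

```python
# Minimal sketch: reusable instructions plus pre-loaded context, emulating
# what a Custom GPT / Gem / Project provides. Assumes the `openai` package
# and an OPENAI_API_KEY in the environment; names below are placeholders.
from pathlib import Path
from openai import OpenAI

client = OpenAI()

# Persistent "custom instructions" -- written once, reused for every task.
BRAND_INSTRUCTIONS = """You are the Everyday AI content assistant.
Voice: direct, practical, aimed at business users.
Ground answers in the attached analytics context when relevant."""

# Proprietary context that would otherwise be re-uploaded in every new chat.
CONTEXT = Path("newsletter_analytics.csv").read_text()  # placeholder file

def ask(task: str) -> str:
    """Run a one-off task against the same instructions and context."""
    response = client.chat.completions.create(
        model="gpt-4o",  # placeholder; use whichever model your plan allows
        messages=[
            {"role": "system", "content": BRAND_INSTRUCTIONS},
            {"role": "user", "content": f"Context:\n{CONTEXT}\n\nTask: {task}"},
        ],
    )
    return response.choices[0].message.content

print(ask("Which newsletter topics drove the most engagement last quarter?"))
```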

3. Business/Investment Angle

  • Productivity Trap: Relying solely on “new chat” sessions is framed as a productivity trap, as the time spent re-explaining context, brand voice, and uploading files negates the time saved by using AI.
  • Context as Competitive Advantage: Properly configuring these persistent tools—by uploading proprietary data (analytics, search console data, internal documents) and defining brand voice—is presented as a way to create a personalized, high-value AI asset that rivals the initial insights provided by expensive external consultants.
  • Domain Expertise Encapsulation: These tools allow users to effectively “unload their brain” and domain expertise into the AI, ensuring consistent, high-quality outputs across all future interactions, regardless of the specific task.

4. Notable Companies/People

  • OpenAI: Mentioned for Custom GPTs and Projects, and the recent enabling of advanced model usage within GPTs.
  • Google: Mentioned for Gems and the integration capabilities within Google Workspace apps (Drive, Gmail) when using Gems.
  • Anthropic (Claude): Mentioned for Claude Projects and the use of the Model Context Protocol (MCP), described as a web-based API equivalent.
  • Jordan Wilson (Host): Drives the demonstration by using his own company’s (Everyday AI) extensive operational data for the live test.

5. Future Implications

The industry is moving rapidly toward persistent, context-rich AI environments. The future of professional AI usage involves setting up these specialized agents once and using them continuously, transforming them from simple chat interfaces into integrated, knowledgeable digital colleagues that retain institutional memory and context.

6. Target Audience

This episode is highly valuable for AI Practitioners, Business Leaders, and Power Users who rely on LLMs daily for complex, recurring tasks and need to maximize efficiency and output quality beyond basic prompting.

🏢 Companies Mentioned

GPT-4 (implied by o3/o3 Pro) ✅ ai_model
Copilot ✅ ai_application
Google Analytics ✅ unknown
Google Search Console ✅ unknown
Google AI Pro ✅ unknown

💬 Key Insights

"Never try something once and say this is good enough, right? You don't have to go the full benchmarking route of doing things 50 times and finding the mean or the median, the average. You don't have to do that. But generative AI is generative, right? I would always encourage you, especially if it's something that you're going to do over and over and over and roll it out within your organization, you should at least be testing it a minimum of five times minimum."
Impact Score: 10
"But as I tell you all every time, you need to be reading this chain of thought. This tells you because we're using these either reasoning models or hybrid models that go between reasoning and non-reasoning, but the quality of my prompt was very low, right? The custom instructions were not very great."
Impact Score: 10
"Think of not just how much time you're saving, but how much better the outputs are going to be. Now, even for me, even if I don't necessarily have a data question for Everyday AI, I should probably still just use this, right? Because again, now this gives these models context, a huge amount of context, even if I'm not asking for specific stats, if I'm just using any of these AI chatbots as a strategy partner, as a brainstorming partner to help me plan things, why wouldn't I want to give it access to all of this information?"
Impact Score: 10
"Will this, like, will this replace like hiring a big consultancy? Maybe. Bless me, honest, I couldn't afford to go hire a six-figure consultant, but I think right here, we have the making of it, right?"
Impact Score: 10
"OpenAI actually didn't really announce it at first. They updated some projects... They enabled anyone to use any version of their model for GPTs, and this is why it's huge."
Impact Score: 10
"The new Canvas mode in Google Gemini is so freaking good. Probably one of the most unique things of any AI"
Impact Score: 9

📊 Topics

#generativeai 136 #artificialintelligence 121

🧠 Key Takeaways

💡 You need to be using these AI chatbots, because if you're just going in there every single day and clicking that new chat button, you're actually probably not saving yourself very much time

🤖 Processed with true analysis

Generated: October 05, 2025 at 09:01 AM