5 Prompting Tricks to Make Your AI Less Average

The AI Daily Brief · October 19, 2025 · 21 min
artificial-intelligence generative-ai startup openai anthropic
41 Companies
60 Key Quotes
3 Topics

🎯 Summary

Podcast Summary: 5 Prompting Tricks to Make Your AI Less Average

This 20-minute episode of the AI Daily Brief addresses the pervasive issue the host terms “AI’s tyranny of the average,” where Large Language Models (LLMs) default to conventional wisdom and generic output because they are trained on the mean of human-generated data. The goal of the episode is to provide actionable prompting strategies to elevate AI output from merely “passable” to “unique” and “high quality.”

The discussion is framed by an essay from technology writer Alex Kantrowitz, who highlighted the “AI sameness problem,” noting that generative content (like Sora videos or common business writing) exhibits a uniformity that requires deliberate prompting to break.

The host then details five specific prompting techniques designed to overcome this averageness (minimal prompt sketches illustrating each follow the list):

  1. Negative Style Guide: Explicitly instructing the model on what not to do. This involves banning overused, hackneyed words (e.g., “telemetry,” “leverage,” “synergy”) and stylistic choices (e.g., avoiding titles with colons) that signal generic AI output.
  2. Forced Divergence in Choice: Counteracting the LLM’s pathological unwillingness to commit to a single path. The host recommends forcing the model to pick one option and argue vociferously for it, often preceded by a “steelman” exercise where it first builds the strongest case for all competing options before committing. This mimics strong human decision-making.
  3. Cliché Burn Down: Asking the model to first identify the most common analogies or turns of phrase (clichés) present in its initial draft for a given topic, and then explicitly instructing it to replace them with more original phrasing.
  4. Self-Critique (Iterative Refinement): Moving beyond the first pass by building a multi-step process into the prompt. A powerful example is: Draft V1, Red Team it by listing the top five ways it’s generic, Rewrite V2 fixing those issues, and then Explain the changes. This can be enhanced by using different models (e.g., GPT-5 thinking vs. GPT-4o) to critique each other’s output for added dimensionality.
  5. Exemplar Prompting with Rationale: Providing the LLM with an example of superior, non-average output. Crucially, the user must explain why the example is better and specifically detail how it breaks from the conventional wisdom the model typically aggregates (e.g., placing a strong revenue growth metric on the first slide of a pitch deck, contrary to standard templates).
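
Below are minimal sketches of each technique, written as Python prompt builders. Everything in them beyond what the host describes (helper names, extra style rules, placeholder model names, and the example tasks) is illustrative, not the episode's exact wording. First, the negative style guide: a system prompt that bans the hackneyed words and formatting tics the host calls out.

```python
# Technique 1: Negative Style Guide -- tell the model explicitly what NOT to do.
# The three banned terms come from the episode's examples; the remaining rules are
# illustrative additions you would tune to your own voice.
BANNED_WORDS = ["telemetry", "leverage", "synergy"]

STYLE_RULES = [
    "Never use these words or phrases: " + ", ".join(BANNED_WORDS) + ".",
    "Never write a title that contains a colon (e.g., 'X: Why It Matters').",
    "Never open with a rhetorical question or close with 'In conclusion'.",
]

def negative_style_guide() -> str:
    """Build a system prompt that bans generic, AI-sounding patterns."""
    return (
        "You are a writer with a distinct, specific voice. Hard constraints:\n- "
        + "\n- ".join(STYLE_RULES)
    )
```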
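
A sketch of forced divergence: the prompt steelmans every competing option first, then forces the model to commit to exactly one and defend it. The question and options in the usage comment are placeholders.

```python
# Technique 2: Forced Divergence -- steelman every option, then commit to exactly one.
def forced_divergence_prompt(question: str, options: list[str]) -> str:
    """Stop the model from equivocating across all the options."""
    option_lines = "\n".join(f"- {opt}" for opt in options)
    return (
        f"Question: {question}\n"
        f"Options:\n{option_lines}\n\n"
        "Step 1: Steelman each option -- build the strongest possible case for it.\n"
        "Step 2: Pick exactly ONE option. No hedging, no 'it depends'.\n"
        "Step 3: Argue vociferously for your pick and state the next concrete action."
    )

# Example (placeholder decision):
# forced_divergence_prompt("Which launch channel should we bet on first?",
#                          ["Product Hunt", "Cold outbound", "Founder-led content"])
```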
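
A sketch of the cliché burn down as a two-turn exchange: the first prompt asks the model to list the clichés its draft (and writing on the topic generally) leans on, and the second, sent in the same conversation, bans them in the rewrite. The templates are paraphrases, not the host's wording.

```python
# Technique 3: Cliché Burn Down -- surface the clichés first, then ban them in the rewrite.
FIND_CLICHES = (
    "Here is a draft about {topic}:\n\n{draft}\n\n"
    "List the ten most common analogies, turns of phrase, and clichés that writing "
    "on this topic (including this draft) tends to reuse."
)

# Sent as the follow-up turn in the same conversation, so the model sees its own list.
REWRITE_WITHOUT = (
    "Now rewrite the draft. Do not use any of the clichés you just listed; replace each "
    "one with a more original image or a plainer, more concrete sentence."
)
```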
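
A sketch of the self-critique loop. The single prompt follows the draft / red team / rewrite / explain structure quoted later in this summary; the cross-model variant assumes the OpenAI Python SDK, and the model names are placeholders for whichever pair of models you have access to.

```python
# Technique 4: Self-Critique -- draft, red team, rewrite, explain in one pass,
# then optionally have a second model critique the first model's work.
from openai import OpenAI  # assumes the OpenAI Python SDK; any chat API works the same way

client = OpenAI()

SELF_CRITIQUE = (
    "Task: {task}\n\n"
    "1. Draft a first version (V1).\n"
    "2. Red team V1: list the top five ways it is generic.\n"
    "3. Rewrite a V2 that fixes each issue.\n"
    "4. Explain why you changed what you changed."
)

def cross_model_critique(task: str, drafter: str = "gpt-5", critic: str = "o3") -> str:
    """Draft with one model, then ask a different model what the first one missed.
    Model names here are placeholders, not a recommendation from the episode."""
    draft = client.chat.completions.create(
        model=drafter,
        messages=[{"role": "user", "content": SELF_CRITIQUE.format(task=task)}],
    ).choices[0].message.content
    return client.chat.completions.create(
        model=critic,
        messages=[{"role": "user", "content":
                   "Review this draft. Make a set of critiques and changes, and argue "
                   f"for what the author is missing:\n\n{draft}"}],
    ).choices[0].message.content
```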
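
A sketch of exemplar prompting with rationale. The key move, per the episode, is not just pasting a better example but spelling out why it is better and where the conventional template it breaks from is limited; the pitch-deck framing and the "slide one vs. slide 6" proof point echo the host's own example, while the field names and example fill are placeholders.

```python
# Technique 5: Exemplar Prompting with Rationale -- show a superior example AND explain
# why it beats the conventional template the model would otherwise reproduce.
EXEMPLAR_PROMPT = """\
Task: write a 10-slide pitch deck outline for {company}.

Here is an outline I consider far better than average:
{exemplar}

Why it is better: it puts the company's single strongest proof point
({proof_point}) on slide one, instead of burying traction on slide 6
where the standard template puts it.

The conventional wisdom you normally aggregate (problem, solution, market,
product, team, traction) is not wrong, but it is limited: it assumes the
reader's attention survives to the middle of the deck. Do not assume that.

Follow the example's logic, not the standard template.
"""

# Example fill (placeholders):
# EXEMPLAR_PROMPT.format(company="Acme", exemplar=my_best_deck_outline,
#                        proof_point="revenue growing 41% month over month")
```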

Summary Analysis:

| Category | Detail |
| --- | --- |
| 1. Focus Area | Prompt-engineering strategies for Large Language Models (LLMs) to overcome generic output, specifically addressing the “AI sameness problem” in text, image, and video generation. |
| 2. Key Technical Insights | 1. LLMs are fundamentally optimized for the mean of human output, leading to predictable patterns. 2. Forcing commitment (Forced Divergence) is necessary because models naturally equivocate to cover all bases. 3. Multi-model critique (using different model architectures within the same ecosystem) introduces necessary cognitive diversity. |
| 3. Business/Investment Angle | 1. Generic AI output is insufficient for high-stakes production use cases, making advanced prompting skills a competitive advantage. 2. Companies relying on AI for external communication (e.g., PR pitches) risk sounding uniform and inauthentic. 3. Investment in AI tooling should prioritize platforms that facilitate complex, multi-step refinement processes (like Blitzy or Notion AI agents). |
| 4. Notable Companies/People | Alex Kantrowitz (technology writer, author of the “AI sameness problem” essay); Mike Krieger (Instagram co-founder, Anthropic CPO, commenting on Sora’s lack of long-term variety); sponsors: Super Intelligent, Robots and Pencils, Notion, Blitzy. |
| 5. Future Implications | The industry is moving toward a requirement for highly skilled prompt engineers or specialized agents capable of executing complex, multi-stage refinement workflows to extract value beyond baseline performance. The “sameness” problem must be solved for AI content to achieve long-term relevance in social media and professional contexts. |
| 6. Target Audience | AI practitioners, prompt engineers, content creators, product managers, and technology professionals who use generative AI daily for production-level work and need to differentiate their output. |

🏢 Companies Mentioned

AWS ✅ ai_infrastructure
bigtechnology.com ✅ media_or_platform
Blitzy ✅ unknown
Notion ✅ unknown
Sora ✅ unknown
Studio Ghibli ✅ unknown
Anthropic (Mike Krieger, Chief Product Officer) ✅ unknown

đź’¬ Key Insights

"This to me is a quintessential example of the LLM not doing anything wrong, but where its process of aggregating the collected and conventional wisdom of people who have built decks just makes for a generic product that is almost doomed to not do what the creator needs it to do."
Impact Score: 10
"Super Intelligent right now is growing 41% month over month when it comes to revenue. You better believe I'm not waiting till business slide 6 or whatever to show that. That is going on slide number one."
Impact Score: 10
"However, the important thing that I think to add, which many people miss, is to actually take the time to explain why that example is better, and in particular, why the consensus or conventional wisdom that it floats is wrong or at least limited."
Impact Score: 10
"I turned that whole thread into a link and shared it and flipped over to a new chat in the same app, toggled that new chat to the O3 model instead of the 5-thinking model, asked it to review and basically make a set of critiques and changes, and argue for what it thought we—which is me and GPT-5 thinking together—were missing as part of the whole conversation."
Impact Score: 10
"O3 is much more clinical. It's much more likely to give you lists and charts and tables. There's a certain concession and precision of thought that O3 goes for that GPT-5 thinking doesn't have in the same way, which is not to say that O3 is better for all use cases."
Impact Score: 10
"In a single prompt, you could, for example, say, 'Draft a first version... then red team it and list the top five ways it's generic. Rewrite a V2 that fixes each issue, and then explain why you changed what you changed.'"
Impact Score: 10

📊 Topics

#artificialintelligence 88 #generativeai 14 #startup 1

🤖 Processed with true analysis

Generated: October 20, 2025 at 01:11 AM