Responsible AI That Scales Across Customer Workflows - with Miranda Jones of Emprise Bank
🎯 Summary
This episode focuses on the critical challenge faced by regulated industries, particularly banking, in scaling responsible Artificial Intelligence (AI) while maintaining customer trust, data integrity, and regulatory compliance. Miranda Jones emphasizes a pragmatic approach to adopting generative AI, moving beyond initial hype toward robust, domain-specific implementation.
1. Focus Area
The discussion centers on Responsible AI scaling in regulated environments, specifically addressing the transition from generalized foundational models to bespoke, domain-specific AI solutions. Key themes include enabling safe employee experimentation with generative tools, understanding model limitations (like hallucinations), and the strategic shift toward tailored AI applications over one-size-fits-all models.
2. Key Technical Insights
- Generative Models as Word Guessers, Not Fact Disseminators: A core technical point is that LLMs are trained to generate plausible, human-sounding text, not to discern or verify facts, so subject matter experts must critically challenge their outputs.
- Value of Domain Specificity: Generalized models lack the necessary nuance for specific industries or even regional dialects (e.g., British vs. American English in financial contexts). Bespoke, domain-specific models, akin to specialized apps in an app store, are crucial for solving targeted problems effectively.
- Unstructured Data Insight: Working with generative AI helps employees appreciate the importance of structured data by revealing how verbose, poorly organized documents hinder effective prompting and information extraction.
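To make that prompting-and-extraction point concrete, here is a minimal sketch of schema-constrained extraction from a verbose internal document, with any non-conforming output routed to a human reviewer. The `call_llm` helper, the prompt wording, and the field names are illustrative assumptions, not anything described in the episode.

```python
import json


def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whichever model endpoint the bank has approved
    for experimentation; returns a canned response so the sketch runs as-is."""
    return '{"customer_name": "Example Co.", "loan_amount_usd": 250000, "decision_date": null}'


EXTRACTION_PROMPT = """\
You are extracting fields from an internal loan memo.
Return ONLY valid JSON with keys: customer_name, loan_amount_usd, decision_date.
If a field is not stated in the memo, use null -- do not guess.

Memo:
{memo}
"""


def extract_fields(memo_text: str) -> dict:
    """Force a fixed schema so a verbose, unstructured memo becomes a
    structured record that a subject matter expert can quickly verify."""
    raw = call_llm(EXTRACTION_PROMPT.format(memo=memo_text))
    try:
        return json.loads(raw)
    except json.JSONDecodeError:
        # The model returned prose instead of the schema: treat it as
        # unverified text and route it to a human rather than trusting it.
        return {"needs_human_review": True, "raw_output": raw}


if __name__ == "__main__":
    print(extract_fields("Three pages of verbose memo text would go here..."))
```

The design choice worth noting is the explicit "use null -- do not guess" instruction plus the fallback path: anything that does not match the schema is flagged for review instead of being treated as an answer.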
3. Business/Investment Angle
- Safe Experimentation as a Prerequisite for Adoption: Organizations must create protected, low-risk environments where employees can safely experiment with generative tools to build practical understanding of prompting, limitations, and appropriate data usage.
- The Shift from Foundational to Bespoke: The industry conversation has moved away from relying solely on massive foundational models toward prioritizing smaller, targeted models closer to the customer workflow, acknowledging that one-size-fits-all solutions fail in complex banking operations.
- Pace of Adoption in Regulated Industries: Financial services can leverage their naturally cautious pace to focus on alignment, transparency, and measurable outcomes, contrasting with the “gold rush” mentality seen in less regulated sectors.
4. Notable Companies/People
- Miranda Jones (SVP, Data and AI Strategy Leader, Emprise Bank): The primary expert providing insights on responsible scaling within the banking sector.
- Emerj AI Research / AI in Business Podcast: The platform hosting the discussion, which also features leaders such as the CIO of Goldman Sachs and the head of AI at Raytheon.
- Google DeepMind (Sponsor Mention): Highlighted Gemini 2.5 Flash as a model balancing intelligence, speed, and cost through features like “thinking budgets.”
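For readers curious how that thinking-budget knob is exposed, below is a minimal sketch assuming the google-genai Python SDK and its GenerateContentConfig/ThinkingConfig parameters. This illustrates the sponsor-mentioned feature rather than anything walked through in the episode, so check current documentation before relying on it.

```python
# Minimal sketch: capping Gemini 2.5 Flash's reasoning ("thinking") tokens
# per request. Assumes the google-genai SDK (pip install google-genai) and a
# GEMINI_API_KEY in the environment; parameter names may change over time.
from google import genai
from google.genai import types

client = genai.Client()  # picks up the API key from the environment

response = client.models.generate_content(
    model="gemini-2.5-flash",
    contents="Summarize the key risks of deploying chatbots in retail banking.",
    config=types.GenerateContentConfig(
        # Smaller budgets trade reasoning depth for speed and cost;
        # a budget of 0 disables thinking on 2.5 Flash entirely.
        thinking_config=types.ThinkingConfig(thinking_budget=512),
    ),
)
print(response.text)
```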
5. Future Implications
The industry is moving toward a web of smaller, targeted models integrated into specific workflows rather than monolithic systems. Furthermore, the sharp memory of recent AI hype cycles (unlike previous cycles like the dot-com boom) provides a unique opportunity for regulated industries to adopt AI thoughtfully, prioritizing human-in-the-loop processes, especially when deploying AI agents that could execute actions based on potentially flawed outputs.
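As a concrete illustration of that human-in-the-loop point, here is a minimal sketch of an approval gate that keeps an agent's proposed actions from reaching downstream systems without an expert's sign-off. The names (ProposedAction, sme_approves, execute) and the console-prompt review step are illustrative assumptions; a real deployment would use a review queue or case-management workflow.

```python
from dataclasses import dataclass


@dataclass
class ProposedAction:
    """An action drafted by an AI agent, e.g. 'waive the fee on account X'."""
    description: str


def sme_approves(action: ProposedAction) -> bool:
    """Hypothetical review hook: a subject matter expert confirms or rejects
    the drafted action before anything is executed."""
    answer = input(f"Approve '{action.description}'? [y/N] ")
    return answer.strip().lower() == "y"


def execute(action: ProposedAction) -> None:
    """Placeholder for the downstream system call (core banking, CRM, etc.)."""
    print(f"Executing: {action.description}")


def run_with_human_in_the_loop(actions: list[ProposedAction]) -> None:
    # Nothing the agent proposes reaches a customer-facing system until a
    # human has explicitly signed off; a hallucinated "action" gets stopped
    # here instead of being executed.
    for action in actions:
        if sme_approves(action):
            execute(action)
        else:
            print(f"Rejected pending review: {action.description}")


if __name__ == "__main__":
    run_with_human_in_the_loop([
        ProposedAction("Waive the overdraft fee on the customer's checking account"),
    ])
```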
6. Target Audience
This episode is most valuable for Enterprise AI Leaders, Data Science Executives, Compliance Officers, and CIOs within highly regulated industries (Financial Services, Healthcare, Life Sciences) who are responsible for guiding AI investment, strategy, and responsible deployment at scale.
🏢 Companies Mentioned
Emprise Bank, Google DeepMind, Goldman Sachs, Raytheon, Apple
💬 Key Insights
"Second, domain-specific AI delivers more value. Smaller bespoke models are better suited to customer service, compliance, and communication patterns in banking."
"First, safe experimentation matters. Employees need hands-on, low-risk environments to learn prompt engineering and recognize model limitations like hallucinations."
"...the hallucination wouldn't be an answer, the hallucination could be a series of actions, which is really what we stand at risk here with a lot of agent tech AI coming in."
"We shouldn't ignore the change in new ways of doing things, but we should look at it from a perspective of what is the problem and what are we trying to accomplish, and then is AI the right tool for that versus finding ways to use agents?"
"we know that generative AI can and will hallucinate. So as we roll out agents or other applications, I think the right thing to do is to start, always start with a human in the loop..."
"I think a good example of why I believe domain specific could be much better if you think about the Apple iPhone. They have an app store because specific apps designed for a specific purpose are going to perform one app design for many. It's the same on set."