Agentic Insecurities with Keren Katz
🎯 Summary
This 49-minute episode of Pondering AI, featuring Keren Katz (Senior Group Manager of Threat Detection, Product Management, and AI at Tenable and OWASP contributor), dives deep into the evolving threat landscape introduced by modern AI agents and Large Language Models (LLMs), emphasizing how these technologies fundamentally redefine organizational risk.
1. Focus Area
The primary focus is the security and governance challenges posed by Generative AI and AI Agents within enterprise environments. Key themes include:
- The shift from traditional breach-focused risk to risk defined as “success in the wrong context.”
- The dual threat of amplified insider risk and non-deterministic AI outputs.
- Specific attack vectors such as context injection, plus the risks associated with multi-agent systems (MAS) and agentic protocols like MCP (Model Context Protocol).
- The necessity for new governance models to manage the rapid, decentralized adoption of AI tools.
2. Key Technical Insights
- Non-Deterministic Outputs: LLMs are inherently inconsistent (non-deterministic), meaning automated workflows relying on them (e.g., updating financial or legal files) can produce unpredictable and potentially erroneous results, even without malicious intent.
- Context Injection in RAG Systems: A major enterprise threat where external, malicious data (like a prompt injection hidden in an email) can be stored as context in a Retrieval-Augmented Generation (RAG) system and later used by the AI to execute unintended actions (a minimal screening sketch follows this list).
- Agentic Protocol Vulnerabilities: Protocols like MCP, which enable agents to call other services or AIs, introduce new attack surfaces analogous to traditional network protocol attacks, where agents might be tricked into interacting with malicious or corrupted tools.
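To make the context-injection item above concrete, here is a minimal screening sketch: retrieved RAG chunks are checked for instruction-like content before they reach the model. The pattern list, function name, and logging below are illustrative assumptions, not anything described in the episode; a real deployment would add provenance checks on where each chunk came from and a tuned classifier rather than regexes.

```python
import re

# Hypothetical patterns that often signal an embedded instruction rather than
# plain reference content; a production system would use a tuned classifier.
SUSPICIOUS_PATTERNS = [
    r"ignore (all|any|previous) (instructions|rules)",
    r"you are now",
    r"system prompt",
    r"exfiltrate|send .* to http",
]

def screen_retrieved_chunks(chunks: list[str]) -> list[str]:
    """Drop retrieved RAG chunks that look like injected instructions.

    Anything flagged is withheld from the model's context and surfaced for
    human review instead of being silently passed through.
    """
    clean, flagged = [], []
    for chunk in chunks:
        if any(re.search(p, chunk, re.IGNORECASE) for p in SUSPICIOUS_PATTERNS):
            flagged.append(chunk)
        else:
            clean.append(chunk)
    if flagged:
        print(f"[rag-guard] withheld {len(flagged)} suspicious chunk(s) for review")
    return clean
```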
3. Business/Investment Angle
- Amplified Insider Threat: Accessible tools like Copilot significantly shorten the “dwell time” for legitimate users to access and expose sensitive data simply by asking questions, bypassing traditional access controls if monitoring is inadequate (a monitoring sketch follows this list).
- Trust vs. Safeguards Gap: A significant disconnect exists where 78% of organizations trust AI, but only 40% support it with responsible safeguards, indicating a major governance deficit impacting ROI potential.
- The “2025 Ransomware”: Attackers no longer need complex privilege escalation; they can leverage low-privilege user access combined with AI tools to extract core data, making data exfiltration easier than traditional ransomware entry.
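As a rough illustration of the dwell-time point above, the sketch below flags assistant queries that touch sensitive topics outside the asking user's role. The topic list, role map, and all names are hypothetical; real monitoring would draw on the organization's data classification and identity provider rather than hard-coded values.

```python
from dataclasses import dataclass

# Illustrative sensitivity categories and role entitlements (assumptions, not
# a real policy): a deployment would derive these from classification and the IdP.
SENSITIVE_TOPICS = {"forecasted earnings", "salaries", "m&a", "customer pii"}
ROLE_ENTITLEMENTS = {"finance_analyst": {"forecasted earnings"}, "hr_partner": {"salaries"}}

@dataclass
class AssistantQuery:
    user_id: str
    role: str
    prompt: str

def flag_out_of_role_query(q: AssistantQuery) -> list[str]:
    """Return the sensitive topics a prompt touches that fall outside the user's role."""
    asked = {t for t in SENSITIVE_TOPICS if t in q.prompt.lower()}
    allowed = ROLE_ENTITLEMENTS.get(q.role, set())
    return sorted(asked - allowed)

# Example: an engineer casually asking a Copilot-style tool about earnings.
hits = flag_out_of_role_query(
    AssistantQuery("u123", "software_engineer", "What are our forecasted earnings for Q3?")
)
if hits:
    print(f"[ai-audit] out-of-role topics requested: {hits}")
```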
4. Notable Companies/People
- Keren Katz: Expert perspective drawn from 12 years at the intersection of AI and security, including founder-level experience and a background in Special Operations.
- Tenable: Current employer, highlighting product management in threat detection.
- OWASP: Contributor, emphasizing the collaborative effort to define AI security standards (referenced the recent Gen AI Security Report).
- Apex: AI security company backed by Sam Altman and Sequoia, where Katz recognized the urgent security gap.
- Google Gemini: Mentioned specifically regarding the validation of context injection as a significant threat vector.
5. Future Implications
The industry is moving toward a state where AI is deeply embedded in core operations, making robust governance non-negotiable. The future requires a shift from reactive security (waiting for breaches) to proactive governance focused on intention detection and human-in-the-loop oversight for the most critical workflows. Organizations that fail to implement these safeguards risk severe operational failures and data exposure, even from non-malicious user curiosity.
6. Target Audience
This episode is highly valuable for Cybersecurity Professionals (CISOs, Threat Hunters, Security Architects), AI/ML Product Managers, and Technology Executives responsible for risk management and digital transformation strategy. It provides a necessary framework for understanding and mitigating emerging AI-native risks.
Comprehensive Narrative Summary
Keren Katz frames the current AI security challenge as a paradigm shift: risk is no longer just about breaches, but about unintended functionality—AI succeeding in the wrong context. She highlights two core differences from traditional security: the amplification of insider threats and the non-deterministic nature of LLMs.
Regarding insider threats, Katz notes that tools like Copilot make it trivially easy for employees to query vast amounts of sensitive internal data (like forecasted earnings) without triggering traditional alerts, as the activity appears “normal.” This exposes data that RBAC (Role-Based Access Control) might not adequately cover due to poor configuration or the sheer ease of access.
The non-determinism issue means that even when developers intend for an AI agent to perform a specific task (like updating a file), the output is variable and potentially incorrect, leading to operational errors (e.g., wrong financial data in reports, emails sent to the wrong leads).
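Since the argument here is that model output cannot be trusted to be consistent, a common mitigation is to validate every proposed change before it reaches a file of record. The following is an assumption-laden sketch (field names, the plausibility bound, and the review routing are invented for illustration), not a method described in the episode.

```python
import json

def apply_llm_update(raw_output: str, current_row: dict) -> dict:
    """Validate a model-proposed record update before it touches the file of record.

    The model output is treated as untrusted: it must parse, must only touch
    allowed fields, and must stay within a plausibility bound; otherwise the
    change is routed to a human instead of being applied.
    """
    allowed_fields = {"revenue", "units_sold"}
    try:
        proposed = json.loads(raw_output)
    except json.JSONDecodeError:
        raise ValueError("non-JSON model output; route to human review")

    unknown = set(proposed) - allowed_fields
    if unknown:
        raise ValueError(f"model touched unexpected fields: {sorted(unknown)}")

    # Plausibility bound: reject implausibly large jumps from a non-deterministic run.
    if "revenue" in proposed and proposed["revenue"] > 5 * current_row.get("revenue", 0):
        raise ValueError("revenue change exceeds plausibility bound; route to human review")

    return {**current_row, **proposed}
```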
Katz details findings from the recent OWASP Gen AI Security Report, emphasizing threats across different use cases:
- Enterprise AI: Dominated by insider risk and context injection attacks targeting RAG systems.
- Client-Facing Interfaces: Vulnerable to prompt injection and jailbreaking, potentially leading to data extraction or manipulation of customer-facing policies (e.g., unauthorized refunds).
- Multi-Agent Systems (MAS): Introduce new risks via agentic protocols like MCP, where trust in external tools called by the agent can be exploited (see the sketch below).
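One way to read the MAS/MCP point is as an allowlisting problem: an agent should only be able to dispatch calls to tools the organization has registered, with the riskiest tools gated on human approval. The registry, tool names, and approval flag below are hypothetical and are not taken from any MCP SDK; this is a minimal sketch of the gating idea.

```python
from typing import Any, Callable

# Illustrative allowlist: tool name -> (handler, whether a human must approve).
# A real deployment would pin tool servers by identity/signature, not name alone.
TOOL_REGISTRY: dict[str, tuple[Callable[..., Any], bool]] = {
    "search_knowledge_base": (lambda query: f"results for {query!r}", False),
    "issue_refund": (lambda order_id, amount: f"refunded {amount} on {order_id}", True),
}

def dispatch_tool_call(name: str, approved_by_human: bool = False, **kwargs: Any) -> Any:
    """Allow an agent to invoke only registered tools, gating risky ones on approval."""
    if name not in TOOL_REGISTRY:
        raise PermissionError(f"unregistered tool requested by agent: {name}")
    handler, needs_approval = TOOL_REGISTRY[name]
    if needs_approval and not approved_by_human:
        raise PermissionError(f"tool {name!r} requires human-in-the-loop approval")
    return handler(**kwargs)

# Example: a manipulated agent trying to trigger a refund is stopped at this gate.
# dispatch_tool_call("issue_refund", order_id="A-17", amount=40)  # raises PermissionError
```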
The solution, according to Katz, begins with Visibility: organizations must first inventory what AI is running and what data it touches. From there, governance must focus on intention detection: understanding the company-specific intent behind each interaction and applying human-in-the-loop oversight to the most critical workflows.
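A hedged sketch of what that visibility step could look like in practice: a minimal inventory record for each AI asset, ordered so that action-taking, high-criticality assets are governed first. The field names and ranking are illustrative assumptions, not a schema from the episode.

```python
from dataclasses import dataclass, field

@dataclass
class AIAssetRecord:
    """One entry in a minimal AI inventory: what runs, what it touches, who owns it."""
    name: str
    owner: str
    model_or_agent: str
    data_sources: list[str] = field(default_factory=list)
    can_take_actions: bool = False
    criticality: str = "low"  # low / medium / high, set by the business owner

def riskiest_first(inventory: list[AIAssetRecord]) -> list[AIAssetRecord]:
    """Order the inventory so action-taking, high-criticality assets get governed first."""
    rank = {"high": 2, "medium": 1, "low": 0}
    return sorted(
        inventory,
        key=lambda a: (a.can_take_actions, rank[a.criticality]),
        reverse=True,
    )
```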
💬 Key Insights
"And then of course, there needs to be adjustments depending on the use case because it's an entirely different use case if the user successfully exfiltrated the data to the network after they got it from the AI, or if the data that they got is super precious, or if they actually triggered a specific, you know, the most risky action ever in the organization, or just an internal email to something very not risky."
"You can't create, and you should create a template of, okay, so I know data exfiltration can happen. I know that output can be manipulated. What should I be doing in each of these cases? And then of course, there needs to be adjustments depending on the use case..."
"if that agent got jailbroken by the user, which data it's connected to, I want to look if this data was exfiltrated. I want to look also in the network to see if this data got out of my organization. I want to know the ransomware is behind my door, right? And then think about which actions does this agent can do."
"companies have haven't figured out yet what they do in case of an AI breach. It's very interesting because there is a use case, it's so funny to me, but one of my clients, they just had a jailbreak attempt in their environment, like a real jailbreak, and I helped them looking at it and helped them sorting into it, and they asked me, okay, that's bad, what do I do now? So they haven't sorted a plan yet..."
"the new user behavior analysis will be now the agent behavior analysis because we were talking about the vastness and volume of activities that happen."
"I will be putting lots of attention into mapping risky interactions, mapping risky users, and mapping risky applications or critical workflows that are using Generative AI, and then addressing them up front instead of being, I'd say, proactive instead of just being reactive to it."