AI ROI for Patient Insights and Better Eligibility Rates in Clinical Trials - with Michael Zaiac of Daiichi Sankyo
🎯 Summary
This 25-minute episode features Michael Zaiac, Head of Medical Oncology for Europe and Canada at Daiichi Sankyo, discussing the practical application of Artificial Intelligence (AI) within clinical trials, focusing specifically on accelerating recruitment and improving patient eligibility precision.
1. Focus Area
The discussion centers on the application of AI/ML technologies in clinical trial optimization, specifically patient recruitment, matching, and diversity enhancement in oncology studies. It clearly differentiates between deterministic AI (advanced analytics) for pattern matching in patient identification and generative AI for patient-facing communications (e.g., consent forms, trial summaries). A significant theme is the pharmaceutical industry's conservative, high-accuracy approach to AI deployment, heavily influenced by regulatory environments.
2. Key Technical Insights
- Deterministic AI for Recruitment: Advanced analytics (deterministic AI) is currently the primary driver for identifying trial-ready and diverse patient populations by leveraging pattern matching across Electronic Health Records (EHRs).
- Generative AI for Patient Engagement: Generative AI is being cautiously explored for adapting complex trial documentation into patient-friendly summaries, requiring significant restriction and validation to ensure regulatory compliance and prevent misinformation.
- Intentional Model Restriction: In pharma, the strategy for deploying powerful models (like LLMs) is often to make them less complex but more precise and restricted, ensuring high accuracy and adherence to safety standards, rather than maximizing general capability.
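To make the deterministic approach concrete, the pattern matching described above can be pictured as rule-based eligibility filtering over structured EHR records. The sketch below is a minimal illustration; every field name, code, and criterion is hypothetical and not drawn from any actual trial protocol or Daiichi Sankyo system.

```python
# Minimal sketch of deterministic eligibility matching over EHR-like records.
# All field names and criteria are hypothetical illustrations.

def is_eligible(patient, criteria):
    """Return True if a patient record satisfies every rule-based criterion."""
    checks = [
        criteria["min_age"] <= patient["age"] <= criteria["max_age"],
        patient["diagnosis_code"] in criteria["diagnosis_codes"],
        patient["ecog_status"] <= criteria["max_ecog"],
        # Exclusion rule: no overlap with prohibited concomitant medications.
        not (set(patient["medications"]) & set(criteria["excluded_medications"])),
    ]
    return all(checks)

patients = [
    {"age": 62, "diagnosis_code": "C50.9", "ecog_status": 1, "medications": ["letrozole"]},
    {"age": 45, "diagnosis_code": "C50.9", "ecog_status": 3, "medications": []},
]
criteria = {
    "min_age": 18,
    "max_age": 75,
    "diagnosis_codes": {"C50.9"},
    "max_ecog": 2,
    "excluded_medications": {"warfarin"},
}

eligible = [p for p in patients if is_eligible(p, criteria)]
print(len(eligible))  # → 1 (second patient fails the ECOG performance-status check)
```

The appeal of this style, as the episode notes, is that every inclusion or exclusion decision is auditable and explainable to regulators, unlike a black-box model.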
3. Business/Investment Angle
- Clear ROI for Deterministic AI: The ROI for deterministic AI in recruitment is immediate and quantifiable: faster enrollment leads to reduced study costs and earlier regulatory filing, resulting in longer market exclusivity.
- Transparency as a Prerequisite for Acceptance: Success in deploying AI requires extensive "pre-preparation": a lengthy process (sometimes the equivalent of nine months) of aligning legal, compliance, and regulatory stakeholders to ensure transparency and acceptance of the AI-generated outcomes.
- Data Collaboration as the Bottleneck: Despite technological readiness, the primary barrier to achieving future, faster trial timelines is the lack of sufficient, collaborative, high-quality data, particularly in Europe due to GDPR and competitive behavior among healthcare centers.
4. Notable Companies/People
- Michael Zaiac (Daiichi Sankyo): Guest expert, providing the perspective of a major global pharmaceutical company on AI adoption in clinical operations.
- Daiichi Sankyo: Highlighted as the second-largest pharmaceutical company in Japan, demonstrating leadership in applying advanced analytics in oncology trials.
- Sam Altman: Briefly referenced regarding the general observation that highly regulated industries like pharma are not the first adopters of cutting-edge AI technology.
5. Future Implications
The industry is moving toward leveraging AI to create trials that perfectly mirror real-world patient populations, enhancing scientific validity. However, the pace of this advancement is currently constrained by data infrastructure and collaboration, rather than the AI software itself. Future success hinges on overcoming competitive barriers to data sharing, especially in regions like Europe.
6. Target Audience
This episode is most valuable for Pharmaceutical Executives, Clinical Operations Leaders, AI Strategy Leaders in Healthcare, and Regulatory/Compliance Officers who are navigating the practical implementation, risk management, and ROI justification for AI technologies in regulated environments.
💬 Key Insights
"As soon as we move to a generative AI, you need to bring on board all your stakeholders. You're not in isolation, you will fail. They will block you for good reasons because they don't understand the black box, they will block you."
"We work very closely with experts on the AI Act, which of course has been implemented recently. A lot of it makes sense. I mean, it differentiates very clearly by risk levels, and we plan as an organization to always automatically apply the highest risk level to what we do. So we go in excess of what the current EU AI Act actually requires by choosing always automatically the highest risk level."
"What isn't there is the collaboration of data providers. We don't have the quality of the data for the quality of the software we have."
"I think it's the precision we deploy, which comes with restrictions. As I said, when we take a large language model, we make it less rather than more, but we make it less with intent, be restricted with intent."
"In advanced analytics, this seems to be less of an issue. In generative AI, we are very much aware of it, and that is what I meant earlier. We have built workbenches and filters in which restrict existing large language models to a much more basic, but hence much more precise machine."
"we are aware of it [bias] and we fear it. I think when we use the generative AI part, we haven't really seen it when we deploy our models or anything in patient selection, but we of course check for that, that the machine doesn't have a bias because of course it amplifies the human bias of the data it has been fed with..."