From Probabilistic to Trustworthy: Building Orion, an Agentic Analytics Platform
🎯 Summary
This 72-minute episode of the AI Engineering Podcast features an in-depth discussion with Lucas Talosen (CEO) and Drew Gilson of Gravity, the company behind Orion, an agentic analytics platform. The conversation centers on bridging the gap between raw data and actionable business insights using advanced AI, with a critical focus on achieving trust and accuracy in enterprise analytics.
1. Focus Area
The primary focus is the development and deployment of agentic analytics platforms (like Orion) designed to automate and enhance business intelligence (BI) and data analysis. Key themes include overcoming the limitations of traditional BI tools, leveraging LLMs for deep analytical reasoning, ensuring enterprise-grade accuracy, and shifting the paradigm from reactive dashboard monitoring to proactive, context-aware insight delivery.
2. Key Technical Insights
- Grounding for Accuracy: To achieve enterprise-level trust, Orion must connect to a grounded source of truth (e.g., fact tables, dbt models) that its quality-assurance agents can use to validate LLM-generated insights, moving beyond naive text-to-SQL approaches.
- Asynchronous Deep Analytics: Orion is designed to think asynchronously (e.g., overnight) about user roles, upcoming meetings, and organizational data context, enabling “deep analytics” such as preparing comprehensive pre-meeting briefings, which traditional BI cannot support (see the scheduling sketch after this list).
- Probabilistic Reasoning via Redundancy: While LLMs are probabilistic, high confidence in scalar answers can be achieved through parallel test-time compute (running the same query multiple times) and checking that the model consistently derives the same result from the grounded data source (see the consistency-check sketch after this list).
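The grounding and redundancy ideas above can be pictured as a simple consistency check. The sketch below is illustrative only: `agent_answer` (an LLM-driven text-to-SQL pipeline returning a scalar) and `fact_table_value` (a direct read of the governed metric from a fact table or dbt model) are hypothetical placeholders, not Orion's actual API.

```python
# A minimal sketch of "confidence via redundancy" plus a grounding check.
# agent_answer() and fact_table_value() are hypothetical stand-ins, not Orion APIs.
from collections import Counter


def agent_answer(question: str) -> float:
    """Probabilistic path: an LLM turns the question into SQL, runs it, returns a scalar."""
    raise NotImplementedError  # illustrative placeholder


def fact_table_value(metric: str) -> float:
    """Deterministic path: read the governed metric directly from the fact table / dbt model."""
    raise NotImplementedError  # illustrative placeholder


def trusted_answer(question: str, metric: str, runs: int = 5, tolerance: float = 0.01):
    """Accept an answer only if repeated runs agree with each other
    and with the grounded source of truth."""
    answers = [round(agent_answer(question), 2) for _ in range(runs)]
    modal_value, count = Counter(answers).most_common(1)[0]
    if count < runs:
        return None  # the model did not derive the same result every time: low confidence
    grounded = fact_table_value(metric)
    if abs(modal_value - grounded) > tolerance * max(abs(grounded), 1e-9):
        return None  # disagrees with the grounded source of truth: escalate rather than guess
    return modal_value  # consistent and grounded: safe to surface to the user
```

In practice the repeated calls would run in parallel (the "parallel test-time compute" mentioned above), and a failed check would trigger escalation rather than a silently returned guess.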
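The asynchronous "overnight thinking" model can likewise be sketched as a scheduled job that gathers each user's context and pushes a briefing into a channel they already use. Every helper below is a hypothetical placeholder chosen only to show the shape of the workflow, not a real Orion interface.

```python
# Hypothetical sketch of an overnight "deep analytics" pass: gather context per user,
# analyze the relevant grounded data, and deliver the briefing where the user works.
from datetime import date


def role_of(user: str) -> str: ...                                  # e.g., "VP of Sales"
def calendar_events(user: str, day: date) -> list[str]: ...         # tomorrow's meetings
def relevant_metrics(role: str, meetings: list[str]) -> dict: ...   # grounded data for role + agenda
def compose_briefing(role: str, meetings: list[str], metrics: dict) -> str: ...  # iterative agentic analysis
def deliver(user: str, briefing: str) -> None: ...                  # e.g., Slack DM or email


def overnight_briefings(users: list[str], day: date) -> None:
    """Run once per night, well before the working day starts."""
    for user in users:
        role = role_of(user)
        meetings = calendar_events(user, day)
        metrics = relevant_metrics(role, meetings)
        deliver(user, compose_briefing(role, meetings, metrics))
```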
3. Business/Investment Angle
- Bridging the Analyst Gap: Orion directly addresses the persistent gap between data experts and business users by embedding analytical best practices (cohort analysis, root cause analysis) directly into an agent that understands the user’s role and context.
- Action-Oriented Insights: The platform emphasizes connecting insights to actionable recommendations. It investigates both positive (upward) and negative (downward) metric changes, moving beyond the traditional BI tendency to only focus on “red numbers” (problems).
- Proactive Delivery: The business model relies on pushing insights directly into existing workflows (Email, Slack) where users already operate, removing the friction of forcing users to visit separate dashboards.
4. Notable Companies/People
- Lucas Talosen & Drew Gilson: Founders of Gravity, both with deep backgrounds in data and analytics, stemming from their time at Looker (acquired by Google) and subsequent roles within Google’s data and AI teams.
- Orion: The agentic analytics platform being built by Gravity.
- Looker: Mentioned as the foundation where the founders identified the core problem of the data-business gap.
- Prefect/FastMCP: Briefly mentioned in the podcast intro as an example of modern orchestration tools solving infrastructure challenges in ML workflows (though not the main topic of the interview).
5. Future Implications
The conversation suggests the future of BI is agentic and proactive. The industry is moving away from static dashboards where users must manually seek information toward intelligent systems that understand context, perform deep, iterative analysis, and deliver synthesized, actionable intelligence directly to the user, fundamentally changing organizational behavior regarding data interaction.
6. Target Audience
This episode is highly valuable for AI/ML Engineers, Data Leaders (CDOs, VPs of Data), Product Managers building data tools, and Investors focused on the enterprise AI and Business Intelligence landscape. It requires a professional understanding of LLM limitations, data warehousing concepts (dbt), and agentic system design.
💬 Key Insights
"I think also anything where latency is super important, like if split seconds are important, don't put an LLM there. It has different cycles of responding and thinking, and that's how it was built and how it's supposed to work."
"Because of the way these models are trained, reinforcement learning from human feedback makes it very likely that no matter what you ask, the model is going to attempt to satisfy you."
"It's not, 'I don't care what the question is, the answer is AI.' That's not the right approach. We have to be clear on what it's good for and the areas where you shouldn't necessarily use it for."
"Consistency is very, very difficult to achieve. That's it. And you build an enterprise tool; consistency is very important. And in some ways, AI is kind of the wrong tool for consistency."
"We're not yet getting into the territory where we have completely custom tools that would allow for actions to happen in somebody's systems of record or that sort of thing. I think that we'd certainly like to go there when we have the confidence with our customer, of course, that that would be the responsible thing to do. But that's going to be very interesting when we start to build custom tools that are not only ergonomic for doing analysis but also just to do a full end-to-end job in an organization, like an operational job. And I think that's the next frontier."
"The best way to improve a probabilistic system, as far as my team has concluded anyway, is to make as many parts of it deterministic as possible."