Steady hands, not knee-jerk bans: Brainbox’s approach to AI regulation

The Business of Tech October 15, 2025 38 min
artificial-intelligence startup investment generative-ai openai anthropic
52 Companies
60 Key Quotes
4 Topics

🎯 Summary

Podcast Summary: Steady Hands, Not Knee-Jerk Bans: Brainbox’s Approach to AI Regulation

This 37-minute episode of “The Business of Tech” features Peter Griffin in conversation with Tom Barraclough, a tech policy expert and co-founder of the Brainbox Institute, focusing on a pragmatic, measured approach to regulating artificial intelligence in New Zealand. Barraclough strongly advocates against the immediate introduction of new, AI-specific legislation, arguing instead for the effective coordination and application of existing statutes.


1. Focus Area

The primary focus is AI Regulation Policy, specifically contrasting the global trend toward rapid, specific AI legislation (like the EU AI Act) with a New Zealand-centric approach emphasizing leveraging existing legal frameworks (Privacy Act, Crimes Act, Health and Safety legislation) and developing practical, non-legislative tools like codes of practice. Secondary themes include the challenges of policy development (information asymmetry, economic barriers) and the role of independent think tanks.

2. Key Technical Insights

  • Existing Legal Coverage: Many potential AI harms (e.g., fraud via deepfakes, discrimination by chatbots) are already covered by existing, “open-textured” legislation like the Crimes Act and the Human Rights Act, though clarification and enforcement guidance are needed.
  • The Regulatory Cascade: Even the most stringent legislation (like the EU AI Act) is only the top layer; effective regulation relies on a lengthy cascade of delegated acts, guidance documents, and institutional infrastructure to define practical compliance.
  • Self-Regulation as a Proving Ground: Industry self-regulation or codes of practice (like the Biometric Code) can serve as a crucial, fast-moving “trial run” to test practical compliance mechanisms before formal legislation is enacted.

3. Business/Investment Angle

  • Competitive Advantage in Reliability: New Zealand has a potential competitive edge by aiming to become the world’s smartest and most reliable employer/user of AI systems, focusing on capability and digital literacy rather than just restrictive laws.
  • Risk of Hasty Policy: Knee-jerk regulation, exemplified by the proposed under-16 social media ban in Australia, risks implementing policies that cause unintended harm without addressing the core issues effectively.
  • Resource Imbalance in Policy Advocacy: There is a significant concern that policy discussions are dominated by well-resourced entities (Big Tech, large law firms), leaving under-resourced civil society and public interest groups marginalized.

4. Notable Companies/People

  • Tom Barraclough (Brainbox Institute): Central figure advocating for coordination, leveraging existing law, and addressing the “information problem” in policy development.
  • Brainbox Institute: A non-government think tank/consultancy focused on coordinating policy discussions at the interface of government, research, and business.
  • Sunkipate Lab: Barraclough’s company focused on building tools to translate law and regulation into code/data instructions.
  • Lenson McGavin, Chris, and Andrew: Mentioned as leaders behind the open letter urging immediate AI regulation, which Barraclough views as a positive catalyst for discussion.
  • Carnegie Endowment for International Peace / Brookings Institution: Used as examples of well-endowed, influential international think tanks.

5. Future Implications

The industry is heading toward a complex, multi-layered regulatory environment where international standards will heavily influence local compliance, even if specific national legislation is slow to materialize. The immediate future in New Zealand will likely involve more focus on codes of practice and guidance (like the Responsible AI Guidance for Businesses) to bridge the gap until formal legislation is deemed necessary. There is a critical need to solve the information coordination problem to ensure policy development is efficient and informed by existing legal analysis.

6. Target Audience

This episode is most valuable for Tech Policy Professionals, Government Regulators, Legal Counsel specializing in Technology, and Business Leaders navigating the compliance landscape for AI deployment in New Zealand and similar jurisdictions.

🏢 Companies Mentioned

Internet Society Foundation ai_research
Brookings Institution ai_research
Carnegie Endowment for International Peace ai_research
Oracle ai_infrastructure
Foodstuffs North Island ai_application
Internet New Zealand ai_research
Knowledge Wave unknown
Straker AI unknown
White House unknown
The US unknown
Ellen Strickland unknown
Washington D.C. unknown
Elon Musk unknown

💬 Key Insights

"they're only as good as your systems for making sure that what they're doing is good and reliable and not a hallucination and takes into account all the data that it needs to, and that all the code that it's produced can actually be tested for security purposes and works."
Impact Score: 10
"this isn't about having like NZGPT that's trained on all the data in New Zealand and speaks with a Kiwi accent and all that kind of thing. It can be as simple as talking about meaningful AI literacy, or it could be as simple as making sure that we do have resilient digital infrastructure for access and deployment of AI systems."
Impact Score: 10
"What I do think we are missing is a single coordinating vision for what New Zealand needs from AI. I think there is recognition that it's very, very important, and I think in fact it is important. Probably if you'd asked me maybe even a year ago, I would have been on the fence about that, but I'm not now. I think it's critically important."
Impact Score: 10
"But the ones that are really left out are civil society and those groups that are really poorly resourced... my concern is that we're going to miss increasingly in this discussion really important dialogue with the public and researchers who work on behalf of the public."
Impact Score: 10
"fraud through deepfakes is already a criminal offense. The other example of this is non-consensual sexual imagery as well. There's an argument that that is covered by the Harmful Digital Communications Act and the Crimes Act."
Impact Score: 10
"The information problem. If you can't coordinate and work more effectively with others, that's the coordination problem. And then today, I've just shared another one on what I'm calling the economic problem, which is effectively it takes time and energy to do this. It's pretty hard to do it for free."
Impact Score: 10

📊 Topics

#artificialintelligence 69 #investment 2 #startup 2 #generativeai 1

🤖 Processed with true analysis

Generated: October 16, 2025 at 10:16 AM