• AI SAFETY LAB 2026

    AI SAFETY WORKSHOP for executives & leaders

    MAY 7, 2026, 12:00-2:00 PM PACIFIC TIME (live online)

    At our AI Safety Lab workshop, you will learn simple rules for safe AI use, helping you stay in control while continuing to use and build with AI confidently.

  • ABOUT THIS EVENT:

    AI is powerful. But without the right safeguards, it can create significant risks inside your business. Most companies today are using AI, yet very few truly understand what they are exposing, where the risks are, and how to stay in control. This session is designed to give you control and confidence in how you use AI — as a leader, operator, or decision-maker.

    Join a Live Session on AI Safety:

    A focused, real-time workshop where you will learn how to use AI strategically, responsibly, and safely — without slowing down innovation.

    At our live online AI safety workshop, you will meet our invited speakers, AI safety experts from leading companies, who will teach you the fundamentals of AI safety: how to minimize privacy and security risks, how to use AI responsibly, and how to make it your trusted companion, employee, or friend.


    What You Will Walk Away With:

    • A clear framework for evaluating AI risks in your work and organization
    • Practical rules for using AI safely in everyday workflows
    • Confidence in when to trust AI — and when not to
    • Understanding of how data is handled when using AI tools
    • A sharper ability to detect errors, hallucinations, and flawed outputs
    • A personal approach to balancing automation and human judgment

    What You’ll Learn:

    • Core principles of AI safety and responsible use
    • The most common (and underestimated) risks in AI adoption
    • Real-world examples of AI failures and what caused them
    • How to approach AI risk mitigation in your company
    • How AI systems handle your data, prompts, and outputs
    • Where leaders lose control — and how to prevent it

    Live Q&A with AI Safety Expert:

    You will have the opportunity to:
    • Ask your specific questions
    • Discuss your AI use cases
    • Get clarity on your current approach

    Who This Session Is For:

    • CEOs and corporate executives
    • Founders and entrepreneurs
    • Investors and decision-makers
    • Product, innovation, and strategy leaders
    • Policy and governance professionals
    • All AI enthusiasts

    Some of the questions our participants often ask at our AI Safety workshops:

    • What are practical “rules” I can follow when using AI?
    • What should I never share with AI tools?
    • What risks are underestimated right now?
    • What mistakes are most people making with AI today?
    • Is using AI making me more effective—or more replaceable (where is the line between leverage and dependency)?
    • Am I outsourcing too much of my judgment to AI?
    • Is AI shaping my thinking without me noticing?
    • How do I know when I can trust AI—and when I shouldn’t?
    • How private are these tools?
    • Am I accidentally sharing data I shouldn’t, and how can I prevent it?
    • Can I actually trust what AI tells me?
    • How do I know when it’s wrong?
    • Can AI leak sensitive or confidential information?
    • Is everything I type being stored, reused, or seen by others?
    • What happens to the data I share with AI?
    • How do I prevent an AI agent from taking actions that I didn’t intend?
    • How much control do I actually have over AI tools and agents?
    • Where is it safe to draw the line between automation and control?
    • What can actually go wrong with AI today?
    • Is it safe to use AI for work, business, or sensitive information?
    • Who is actually controlling my AI systems—and should I be worried about that?
    • What are the main rules I should follow to use AI safely and smartly?

  • AI SAFETY WORKSHOP TUTORS:


    TANYA INDINA-MITCHELL (Ph.D.)

    CEO & Founder, Indina-Consulting

    & Mission2Mars Academy

    Tanya is a Silicon Valley innovation strategist and AI foresight expert working at the intersection of AI, leadership, and global innovation ecosystems.

    She has spent over 15 years advising executives, founders, and policymakers across the U.S., Europe, and Asia — helping them navigate technological disruption and make high-stakes decisions with clarity.

    Her work brings together perspectives from business, policy, and emerging technologies to help leaders think ahead — and act with confidence.

    Fulbright Scholar (Wilson Center, Washington DC)

    Former Harvard University (Berkman Klein Center) Fellow

    Advisor to founders, investors, and leadership teams globally.


    ALEXANDRE LAULHÉ

    Cybersecurity & AI Solutions Consultant, Palo Alto Networks

    Alexandre Laulhé, based in San Francisco, CA, is a Senior Solutions Consultant for Strategic AI Accounts at Palo Alto Networks, with prior roles at Palo Alto Networks and Cisco Meraki. He holds a Master's in Computer Science and Engineering, with expertise spanning security architecture design, cybersecurity, cloud security, and more.
    At Palo Alto Networks he is responsible for protecting AI models and infrastructure from threats such as prompt injection, data poisoning, model theft, and adversarial attacks.


    DUSTIN ALLEN

    CEO & Co-Founder, Trinitite

    Dustin Allen is the Co-Founder of Trinitite, an enterprise AI governance platform building the "autocorrect for AI agents."

    Over the past decade, Dustin has specialized in architecting deeply embedded, data-driven SDKs and applied AI systems for highly regulated and complex industries. His background includes serving as Technical Lead for the first telehealth SDK to market at Amwell prior to their $3B IPO, as well as Head of Product and Sales Engineering for the most widely installed enterprise location SDK globally at Infillion. Today, he and the Trinitite team help enterprises transition from unpredictable probabilistic AI deployments to mathematically secure, deterministic AI infrastructure.

  • AI SAFETY LAB ONLINE WORKSHOP ON MAY 7, 2026

    12:00 PM PACIFIC TIME, ONLINE (ZOOM)


    Book your spot to join this live session.

    A limited number of seats is available.

    Join a focused, high-value session designed to help you use AI with control and confidence.

    Secure your place in the session

  • ABOUT INDINA-CONSULTING INNOVATION LAB:

    Indina-Consulting Innovation Lab is a global foresight and innovation studio helping leaders navigate the next decade of AI and technological transformation.

    We operate at the intersection of AI, emerging technologies, business reinvention, and global innovation ecosystems, translating the frontier of Silicon Valley innovation into real strategic advantage for leaders around the world.

  • REQUEST CUSTOM PROGRAM ON AI SAFETY FOR YOUR TEAM:

    APPLY TO BE A SPEAKER: