About this event
As enterprises, governments, and society writ large race to unlock the transformational value of AI, the risks, from bias and security vulnerabilities to regulatory noncompliance, are growing just as quickly. Frameworks like the Databricks AI Governance Framework (DAGF) and standards like ISO 42001 are emerging to provide structure, standardize assessment, manage risk, and ultimately build trust in AI systems.
But how can enterprises translate frameworks and standards into practical, actionable guardrails and reduce friction between governance practitioners and AI stakeholders?
Join experts from Databricks, Schellman, and Trustible for an exploration of the Databricks AI Governance Framework, including its principles, architecture, and role in helping enterprises scale AI responsibly. We’ll examine how it aligns with emerging regulations and standards such as ISO 42001, the NIST AI RMF, and the EU AI Act. We’ll also discuss practical considerations for implementing governance controls and assurance programs across the AI lifecycle, and for creating a feedback loop between governance and compliance teams and internal AI stakeholders that can accelerate safe AI adoption.
This 45-minute session is designed to equip technology, risk, and compliance leaders with insights they can apply to operationalize AI governance in their organizations.
Hosted by
Trustible empowers AI governance leaders with insights and tools to drive responsible AI innovation, manage risks, and build trust in AI systems.