About this event
Join us for our next webinar on how to scan and test AI models to detect biases, performance issues, and errors across various types of models, from tabular models to LLMs.
In this webinar, you'll learn how to easily detect vulnerabilities that can affect your models, such as data leakage, lack of robustness, ethical biases, and overconfidence, and how to perform these assessments directly in your notebook.
Not only will we introduce our Python testing library (now in beta!), which can help you scan and test your models, but we will also show how to centralize your AI testing in a collaborative Hub using the Giskard platform.
Speakers
Jean-Marie John-Mathews, Chief Product Officer and Co-founder at Giskard
Matteo Dora, Machine Learning Researcher at Giskard
Agenda
Introduction: Understanding errors and biases in AI models
Detecting vulnerabilities in your AI model
Demo session: Scanning and testing with Giskard's Python library
Q&A
Thursday, June 22nd, 11 am (Paris time)
Free online event
Limited spaces. Register now to secure your spot!
Hosted by
Giskard provides a Quality Assurance platform for AI models, helping organizations increase the efficiency of their AI development workflow, eliminate the risk of AI bias, and ensure robust, reliable, and ethical AI models.