About this event
In April this year, the European Commission presented the first version of the EU AI Act, a Europe-wide framework that aims to make AI human-centric, trustworthy, and explainable.
Machine learning (ML) models are now widely and heavily used across every business area: predicting the probability that a client will leave a service (churn prediction), predicting a significant drop in sales over the coming weeks or months, or estimating the likelihood that a person will fail to repay the loan they are applying for.
All these predictions are useful and increasingly accurate. However, why the models arrive at these numbers often remains opaque, which directly affects the validity of the decisions made on top of them. What led these models and algorithms to their outputs? Can they really be trusted?
As ML becomes ubiquitous, it is now urgent to ensure that these models remain trustworthy and that their predictions can be explained.
Through this webinar, we will explain the importance and relevance of XAI (eXplainable Artificial Intelligence) and MLI (Machine Learning Intelligibility) approaches, as well as the keys to implementing them successfully and integrating them into existing data science pipelines and workflows. We will draw on real-world use cases we have implemented for our clients.
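To make the idea concrete, here is a minimal sketch (not the webinar's actual material) of what adding an explainability step to an existing churn-prediction pipeline might look like, using the open-source `shap` library. The feature names and toy data are hypothetical and only illustrate where the XAI step slots in.

```python
# Minimal sketch, assuming scikit-learn and the `shap` package are installed.
# Feature names and data below are hypothetical.
import numpy as np
import pandas as pd
import shap
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000
X = pd.DataFrame({
    "tenure_months": rng.integers(1, 72, n),
    "monthly_spend": rng.normal(60.0, 20.0, n),
    "support_tickets": rng.poisson(2, n),
})
# Toy churn label: customers with many tickets and short tenure churn more often.
y = ((X["support_tickets"] > 3) & (X["tenure_months"] < 24)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Existing modelling step: train the churn model as usual.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Added XAI step: SHAP values attribute each individual prediction to the
# input features (in log-odds units for this tree-based classifier).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)

# Global summary: average absolute contribution of each feature.
importance = pd.Series(np.abs(shap_values).mean(axis=0), index=X.columns)
print(importance.sort_values(ascending=False))
```

The same pattern extends to per-prediction explanations, which is typically what decision-makers need when acting on an individual churn or credit score.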
👋 Can't attend live? Register now to receive the recording and the slide deck within 24 hours after the live session.
We are a global tech company dedicated to making technology work for people. With over 2,600 experts across 21 countries, we believe in the collaboration between technology and people, each amplifying the other's potential. Every day, we help our clients achieve their ambitions with technology.