About this event
The organizers of this event, Teddy Furon and Caroline Fontaine, thank the INS2I (https://ins2i.cnrs.fr), the GdR-ISIS (http://www.gdr-isis.fr) and the GdR Sécurité Informatique (https://gdr-securite.irisa.fr) for their support.
9h - 10h Invited Talk
• Elvis Dohmatob (Criteo), Adversarial Examples: The Good, The Bad, and The Ugly!
10h - 11h Session #1
• Ihsen Alouani (INSA Hauts-de-France), Defensive Approximation: Securing CNNs through Approximate Computing
• Rémi Bernhard (CEA), Luring of transferable adversarial perturbations in the black-box paradigm
• Alexandre Araujo (Paris Dauphine), On Lipschitz Regularization of Convolutional Layers using Toeplitz Matrix Theory
11h - 11h20 Break
11h20 - 12h20 Session #2
• Thibault Maho (Inria Rennes), Fast, Real, Black and White Attacks
• Wassim Hamidouche (INSA Rennes), Detect and Defense Against Adversarial Examples in Deep Learning using Natural Scene Statistics and Adaptive Denoising
• Pierre-Yves Lagrave (Thales), Robustness and Vulnerability of Lie Group-Equivariant Neural Networks
12h20 - 14h Lunch
14h - 14h40 Session #3
• Ahmed Aldahdooh (INSA Rennes), SFAD: Selective and Feature based Adversarial Detection
• Rafael Pinot (Paris Dauphine), Randomization matters. How to defend against strong adversarial attacks?
14h40 - 15h40 Invited Talk
• Alexandre Sablayrolles (Facebook), Privacy and data tracing in machine learning models
15h40 - 16h Break
16h - 17h20 Session #4
• Adrien Chan Hon Tong (ONERA), Adversarial poisoning against deep models
• Samuel Tap (ZAMA.ai), Homomorphic inference of deep neural networks
• Arnaud Grivet-Sébert (LIST - CEA), SPEED: Secure, PrivatE, and Efficient Deep learning
• Katarzyna Kapusta (Thales), Model watermarking as a means of verifying intellectual property theft