About this event
In recent years, visual synthetic data has emerged as a powerful tool to radically transform and accelerate the development of computer vision systems. This fast-developing technology has the potential to fuel development across industries and applications, while protecting privacy, combating biases, and replacing outdated and time-consuming manual data collection and annotation methods.
We invite you to join us for a conversation with three industry experts about how synthetic data generation is unlocking the full potential of computer vision and expanding our ability to simulate the world in 3D. Providing perspectives from both industry and academia, these leaders will discuss some of the exciting advances in the field of synthetic data generation and how industry professionals can utilize, interact with, and contribute to cutting-edge development in this fast-changing area of AI.
With the ability to generate high-quality synthetic data, replacing the usual cycle of data gathering, annotation, and cleaning, we now have a powerful new tool for solving practical computer vision challenges. This tool provides two new capabilities: generating precise, fully labeled datasets, and iterating quickly on the datasets themselves. This new flexibility warrants new thinking about how we understand target visual domains, how we design synthetic datasets, and the methodology we use for iterative experimentation with synthetic data. The goal here is to dive into how we analyze target domains and what principles guide us when generating synthetic data.
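The iteration workflow described above can be sketched in a few lines. This is a hypothetical, minimal illustration (the config keys, parameter ranges, and the idea of recording scene parameters in place of a real renderer call are all assumptions for the sketch, not any speaker's actual pipeline): because a synthetic dataset is fully determined by its generation parameters, "editing the dataset" reduces to editing a config and regenerating.

```python
import random

def generate_sample(rng, config):
    """Produce one synthetic scene description and its exact label."""
    # A real pipeline would hand these parameters to a renderer;
    # here we just record them.
    obj = rng.choice(config["object_classes"])
    scene = {
        "object": obj,
        "scale": rng.uniform(*config["scale_range"]),
        "lighting": rng.uniform(*config["lighting_range"]),
    }
    # The label is known exactly because we placed the object ourselves:
    # no human annotation step, hence no annotation noise.
    return scene, obj

def generate_dataset(config, n, seed=0):
    rng = random.Random(seed)
    return [generate_sample(rng, config) for _ in range(n)]

# First iteration of the dataset.
config_v1 = {
    "object_classes": ["hand", "mug", "phone"],
    "scale_range": (0.8, 1.2),
    "lighting_range": (0.3, 1.0),
}
data_v1 = generate_dataset(config_v1, 1000)

# Suppose error analysis shows the model fails in dim lighting:
# widen the lighting range and regenerate, instead of re-collecting
# and re-annotating real images.
config_v2 = {**config_v1, "lighting_range": (0.05, 1.0)}
data_v2 = generate_dataset(config_v2, 1000)
```

The point of the sketch is the last two lines: a dataset revision is a one-line config change followed by regeneration, which is what makes rapid iterative experimentation practical.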
Nowadays, collecting the right dataset for machine learning is often more challenging than choosing the algorithm. We address this challenge with photorealistic synthetic training data – labeled images of humans made using computer graphics. With synthetic training data, we can generate clean labels without annotation noise or error, produce labels otherwise impossible to annotate by hand, and easily control variation and diversity in our datasets. I will show you how synthetics drives our work on understanding humans, including how it powers Fully Articulated Hand Tracking on HoloLens 2, providing users with a new way of interacting with virtual objects – simply reaching out and touching them.
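To make the "clean labels without annotation noise" point concrete, here is a toy sketch (the disc scene and the 1/0 shading stand-in are invented for illustration and have nothing to do with the actual HoloLens pipeline): when an image is rendered synthetically, a pixel-perfect segmentation mask comes for free, because the generator knows exactly which pixels each object covers.

```python
def render_scene(width, height, cx, cy, r):
    """Render a toy "image" of a disc and its exact segmentation mask."""
    image, mask = [], []
    for y in range(height):
        img_row, mask_row = [], []
        for x in range(width):
            inside = (x - cx) ** 2 + (y - cy) ** 2 <= r * r
            img_row.append(1.0 if inside else 0.0)  # stand-in for shading
            mask_row.append(1 if inside else 0)     # exact ground-truth label
        image.append(img_row)
        mask.append(mask_row)
    return image, mask

image, mask = render_scene(8, 8, cx=4, cy=4, r=2)
# Every pixel is labeled exactly, a label that would be tedious and
# error-prone to produce by hand-annotating real photographs.
```

The same idea extends to labels that are impossible to annotate by hand, such as dense 3D joint positions for a hand: the renderer placed the joints, so it can emit their coordinates alongside every frame.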
We approach the problem of understanding embodied human behavior through capture, modeling, and synthesis. First, we learn realistic and expressive 3D human avatars from 3D scans. We then train neural networks to estimate human pose and shape from images and video. Specifically, we focus on humans interacting with each other and the 3D world. By capturing people in action, we are able to model human movement and human-scene interaction. To validate our models, we synthesize virtual humans in novel 3D scenes. The goal is to produce realistic human avatars that interact with virtual worlds in ways that are indistinguishable from real humans.
Solve the data bottleneck for computer vision with human-focused datasets designed to meet your training needs.