This Thursday, Facebook announced a research project for which it collected 2,200 hours of first-person footage filmed around the world, in order to train next-generation artificial intelligence (AI) models.
A new form of AI
This project, called Ego4D, could prove crucial for Facebook's Reality Labs division, which is working on many projects likely to benefit from AI models trained directly on video footage filmed from a human point of view (in the first person, in other words). That includes smart glasses, like the Ray-Ban Stories that Facebook launched last month, as well as virtual reality, in which Facebook has invested heavily since its $2 billion acquisition of Oculus in 2014.
The recorded footage could allow an AI to learn to understand and identify what a person sees in the real or virtual world in first person, through a pair of glasses or an Oculus headset. Facebook said it would make all of the Ego4D data available to researchers as early as November.
Kristen Grauman, chief scientist at Facebook, told CNBC:
This release, which is an open dataset and research challenge, will catalyze progress for us internally, but also widely across the academic community, and [allow] other researchers to look at these new problems, now being able to do so in a more meaningful way and on a larger scale.
The dataset could be used to train AI models for technologies like robots, so that they can learn to understand our world more quickly, as Grauman explained. She continued:
Traditionally, a robot learns by doing things in the world or by literally being “taken by the hand” and shown how to do them. [But] there is an opportunity for them to learn from videos, just from our own experience.
Facebook and a consortium of 13 partner universities relied on more than 700 participants in nine countries to capture the first-person footage. According to Facebook, Ego4D contains more than 20 times more hours of video than any other dataset of its kind.
Facebook's partner universities include Carnegie Mellon in the United States, the University of Bristol in the United Kingdom, the National University of Singapore, the University of Tokyo in Japan, and the International Institute of Information Technology in India, among others.
The footage was captured in the US, UK, Italy, India, Japan, Singapore, and Saudi Arabia. Facebook said it hopes to expand the project to other countries, notably Colombia and Rwanda. Grauman added:
An important decision in the design of this project is that we wanted partners who are first and foremost leading experts in the field, interested in these issues and motivated to pursue them, but who also exhibit geographic diversity.
Interesting timing for Facebook
The company has been steadily stepping up its hardware efforts. Last month it launched the Ray-Ban Stories. In July, Facebook also announced the creation of a team dedicated to the “metaverse”, a concept in which the real and the virtual meet until they become almost indistinguishable.
However, over the past month, Facebook has been hit by a barrage of news stemming from a body of internal company research disclosed by Frances Haugen, a former product manager at Facebook. Among these studies, one in particular highlighted Instagram's harmful effects on adolescent mental health:
We make body image issues worse for one in three teenage girls
On the privacy front, Facebook said that Ego4D participants were instructed to avoid capturing personally identifying features while collecting footage. That includes people's faces, conversations, tattoos, and jewelry.
Facebook said it had deleted information that could identify the people in the videos, and had blurred the faces of passers-by as well as vehicle license plates. It also clarified that the audio had been removed from many videos. Grauman added: “The university partners who collected these videos all went through an intensive and important process to create an appropriate collection policy.”