
Meta confirms it may train its AI on any image you ask Ray-Ban Meta AI to analyze


We recently asked Meta whether the photos and videos that Ray-Ban Meta smart glasses users capture are used to train its AI. At first, the company didn’t say much.

Meta has provided TechCrunch with a little bit more color since then.

In a nutshell, any image you share with Meta AI can be used to train its AI models.

In an email to TechCrunch, Meta policy communications manager Emil Vazquez stated, “[i]n locations where multimodal AI is available (currently the US and Canada), images and videos shared with Meta AI may be used to improve it in accordance with our Privacy Policy.”

A spokesperson clarified in an earlier email that photos and videos taken with the Ray-Ban Meta are not used for AI training as long as the user does not submit them. However, once Meta AI is asked to look at those photos, they are subject to a completely different set of rules.

In other words, the company is using its first consumer AI device to accumulate a huge amount of data that could be used to build ever more powerful AI models. The only way to “opt out” of Meta’s multimodal AI features is to not use them at all.

Ray-Ban Meta users may not realize they are handing Meta a large number of images to train its new AI models, potentially including pictures of the inside of their homes, their loved ones, or personal files. Meta spokespersons say this is made abundantly clear in the Ray-Ban Meta’s user interface, but the company’s executives either initially didn’t know this or were unwilling to discuss it with TechCrunch.

We already knew that Meta trains its Llama AI models on everything Americans post publicly on Instagram and Facebook. But Meta has now effectively stretched that definition of “publicly available data” to include anything people view through its smart glasses and ask its AI chatbot to analyze.

This is especially relevant right now. On Wednesday, Meta began rolling out new AI features that make it easier for Ray-Ban Meta users to invoke Meta AI more naturally, increasing the likelihood that users will send it new data that can also be used for training. At its 2024 Connect conference last week, the company also discussed a new live video analysis feature for Ray-Ban Meta that essentially feeds Meta’s multimodal AI models a continuous stream of images. In a promotional video, Meta suggested you could use the feature to look through your closet, have the AI analyze everything, and pick out an outfit.

What the company doesn’t mention is that, in doing so, you are also sending those images to Meta for model training.

Spokespersons directed TechCrunch to Meta’s privacy policy, which states plainly that your interactions with AI features can be used to train its AI models. Despite Meta’s continued refusal to clarify, this appears to include images shared with Meta AI via the Ray-Ban smart glasses.

Additionally, spokespersons pointed TechCrunch to the terms of service for Meta AI, which state that “you agree that Meta will analyze those images, including facial features, using AI.”

Meta recently paid the state of Texas $1.4 billion to settle a case over its use of facial recognition software. At issue was a 2011 Facebook feature known as “Tag Suggestions.” By 2021, Facebook had deleted billions of people’s biometric information and made the feature explicitly opt-in. Notably, several of Meta AI’s image features are not being made available in Texas.

According to Meta’s privacy policies, the company also automatically stores transcriptions of your voice conversations with Ray-Ban Meta to train future AI models. For the actual voice recordings, there is a way to decline: when users first log in to the Ray-Ban Meta app, they can choose whether their voice recordings may be used to train the AI models.

Meta, Snap, and a number of other tech companies are unmistakably pushing smart glasses as a new computing form factor. Most of these devices feature face-mounted cameras and are powered by artificial intelligence, reviving many of the privacy concerns we first heard about during the Google Glass era. According to 404 Media, some college students have already hacked Ray-Ban Meta glasses to reveal the name, address, and phone number of anyone they look at.
