Meta’s AI Glasses Reportedly Send Sensitive Footage to Human Reviewers in Kenya
Nairobi, Kenya — Meta’s AI-powered smart glasses are sending videos of highly personal and intimate moments captured by users to human reviewers in Kenya, according to an investigation by Swedish news outlets Svenska Dagbladet and Göteborgs-Posten published last week.
The report claims that contractors in Nairobi have viewed footage showing “bathroom visits, sex and other intimate moments.” The glasses, which include cameras, microphones, and an always-listening “Hey Meta” AI assistant, have been heavily marketed by Meta as privacy-focused devices. The revelations have already triggered at least one proposed class-action lawsuit in the United States accusing Meta of false advertising and privacy violations.
Meta Acknowledges Limited Human Review
In response to the reporting, Meta confirmed that subcontracted workers may sometimes review content captured by its AI smart glasses, saying such reviews are conducted to refine the AI’s performance on images and videos and to improve the user experience.
According to the Swedish investigation, faces in the annotation data are supposed to be automatically blurred. However, a former Meta employee told Svenska Dagbladet that the blurring “does not always work as intended,” and reviewers in Kenya have reported that some faces remain visible. The investigation raises serious questions about whether users fully understand that footage from their wearable cameras could be viewed by humans halfway around the world.
Privacy Concerns and Regulatory Scrutiny
The report has drawn swift attention from privacy regulators. The UK’s Information Commissioner’s Office (ICO) said it has written to Meta over the “concerning” allegations. Privacy advocates argue the situation highlights a broader tension in the AI industry: the need for large volumes of real-world data to train and improve multimodal models versus growing expectations around user consent and data protection.
Meta’s Ray-Ban Meta smart glasses, developed in partnership with EssilorLuxottica, feature built-in cameras capable of recording video and photos that can be processed by on-device and cloud-based AI. The company has positioned the product as a convenient, hands-free way to interact with AI while emphasizing privacy safeguards, including a prominent LED light that activates when the camera is recording.
Impact on Users, Developers, and the Industry
For users, the report undermines confidence in Meta’s assurances about data handling for its consumer AI hardware. Wearable cameras that continuously listen and occasionally record present unique privacy risks compared to smartphones, as they can capture moments in bathrooms, bedrooms, and other private settings without users necessarily realizing footage may leave the device.
The controversy arrives as the broader AI industry races to gather diverse training data for next-generation multimodal models. Companies including Meta, Google, and OpenAI have increasingly relied on global workforces — often in lower-cost regions — to label and annotate sensitive content. The Meta case highlights the human-in-the-loop reality behind many “AI” products and the potential compliance gaps that can emerge.
The proposed class-action lawsuit filed in the U.S. cites Meta’s public claims that the smart glasses are designed with privacy in mind. Legal experts expect further litigation and regulatory investigations on both sides of the Atlantic.
What’s Next
Meta has not yet provided a detailed public timeline for any changes to its data review practices for the smart glasses. The company is likely to face increased pressure from privacy regulators in the EU and UK, where data protection rules are particularly strict regarding biometric and intimate personal data.
Industry observers anticipate that wearable AI devices will come under greater regulatory scrutiny in the coming months. Future iterations of Meta’s smart glasses or those from competitors may require clearer user consent mechanisms, improved on-device processing to minimize cloud transmission of raw footage, and stronger technical safeguards for face blurring and content filtering.
The Swedish investigation has since been covered by The Verge and the BBC, with Kenyan media outlets examining the local impact on reviewers in Nairobi. As of now, Meta has not issued a comprehensive statement addressing the specific examples of intimate content cited in the Swedish reporting.
