This story was originally published on MyNorthwest.com
You put on your Meta Ray-Ban glasses, say “Hey Meta,” and ask for help identifying a restaurant down the street. Routine enough. What you probably don’t know is that somewhere in Nairobi, Kenya, a human being may be watching that same footage.
That’s the finding of a bombshell investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten, now being widely circulated by U.S. and international tech outlets, including Mashable. The papers sent journalists to Kenya, where they interviewed workers employed by Sama, a data annotation subcontractor hired by Meta to help train its AI systems. What those workers described should give pause to anyone who owns a pair.
“We see everything, from living rooms to naked bodies,” one Nairobi-based worker told the Swedish journalists. “Meta has that type of content in its databases. People can record themselves in the wrong way and not even know what they are recording. They are real people like you and me.”
“Unless users choose to share media they’ve captured with Meta or others, that media stays on the user’s device,” a Meta spokesperson said in a statement to KIRO Newsradio. “When people share content with Meta AI, we sometimes use contractors to review this data for the purpose of improving people’s experience, as many other companies do. We take steps to filter this data to protect people’s privacy and to help prevent identifying information from being reviewed.”
Meta also said it has been in contact with Sama, which confirmed it “is not aware of any workflows where sexual or objectionable content is present, or where faces or sensitive content are continually unblurred.” Meta said it will continue to investigate.
The company added that its terms of service require users to comply with all applicable laws and use the glasses in a safe, respectful manner, and pointed to the glasses’ built-in LED light as a clear signal to bystanders when content is being captured.
That explanation may raise as many questions as it answers. The Swedish investigation found that some of the most sensitive footage appeared to have been captured unintentionally by users who didn’t realize their glasses were recording at all, not by people deliberately sharing content with Meta.
What workers in Kenya say they’ve seen through Meta Ray-Ban glasses
The workers described a steady stream of deeply private footage captured by unsuspecting glasses wearers in Western homes: people walking out of bathrooms naked, couples in bed, users watching pornography while wearing the glasses, bank cards accidentally filmed, text conversations about crimes, personal secrets, and explicit sexual commentary.
One worker recounted a man setting his glasses on a nightstand, then his partner walking into the frame and undressing, completely unaware she was being recorded and watched by a stranger thousands of miles away.
“In some videos, you can see someone going to the toilet, or getting undressed,” another worker said. “I don’t think they know, because if they knew they wouldn’t be recording.”
Many of the videos appear to be moments captured when users weren’t aware they were even recording, Mashable reported.
The workers operate under strict non-disclosure agreements. Offices are monitored by security cameras, and personal phones are banned. Anyone who asks questions, they told the journalists, risks losing their job.
“You understand that it is someone’s private life you are looking at, but at the same time, you are just expected to carry out the work,” one employee told the Swedish outlets. “You are not supposed to question it. If you start asking questions, you are gone.”
Meta says Ray-Ban glasses users have control; the investigation says otherwise
Meta markets the Ray-Ban glasses as a product built with privacy in mind, telling buyers they have full control over what gets shared. When the Swedish journalists bought a pair and tested them, they found the glasses require a constant internet connection to function, and that data is automatically routed through Meta’s servers, whether users want it to be or not.
Meta’s own terms of service include language allowing for “manual (human)” review of AI interactions. The Swedish newspapers contend that most users have no idea that the clause exists, let alone what it means in practice. When the Swedish newspapers sought comment, Meta’s spokesperson declined to directly answer their questions, instead pointing to that same policy. The company’s full response: “When live AI is being used, we process that media according to the Meta AI Terms of Service and Privacy Policy.”
The glasses have also drawn scrutiny for how they’re being used in public. In the months since launch, influencers and pickup artists have used Meta Ray-Ban glasses to secretly record strangers, finding ways to obscure the recording light intended to alert bystanders, Mashable reported. Privacy advocates warn the technology could eventually be used for mass surveillance, and Meta has acknowledged it is moving ahead with live AI features that could include facial recognition.
Meta says faces in annotation data are automatically blurred to protect identities. Workers told the journalists the system doesn’t always work.
“The algorithms sometimes miss,” a former Meta employee confirmed. “Especially in difficult lighting conditions, certain faces and bodies become visible.”
The contractor reviewing your footage is facing its own legal troubles
Sama, the Nairobi firm reviewing footage from Meta’s glasses, is no stranger to controversy. More than 140 of the company’s former Facebook content moderators in Kenya were diagnosed with PTSD and other mental health conditions after years of reviewing graphic content. A class action lawsuit filed against Meta and Sama alleged the workers were fired in retaliation for organizing and raising concerns about their working conditions. You can read more about that case in CNN’s reporting here.
Privacy experts say Meta Ray-Ban users have no idea what’s happening
Kleanthi Sardeli, a data privacy lawyer at the Vienna-based organization None Of Your Business, which has filed multiple legal cases against Meta, told the Swedish journalists the problem is fundamental.
“Once the material has been fed into the models, the user in practice loses control over how it is used,” she said.
Petter Flink, a security specialist at Sweden’s data privacy authority, put it plainly: “The user really has no idea what is happening behind the scenes.”
Meta sold seven million pairs of these glasses in 2025 alone. Millions of people are wearing them right now, in homes, offices, coffee shops, and public spaces across Seattle, the Eastside, and the broader Pacific Northwest, likely unaware of what happens the moment they say, “Hey Meta.”
What you can do right now to limit Meta’s access to your data
If you own a pair of Meta Ray-Ban glasses, there are steps you can take. In the Meta View app, go to Settings, then Privacy, and review the data sharing options you have enabled. When prompted during setup, decline the option to share additional data with Meta to improve its products. Be aware that even with those settings adjusted, some data processing through Meta’s servers is required for the AI assistant to function at all. For a full review of what Meta collects and how to manage it, visit Meta’s privacy policy and data settings here.
The investigation’s bottom line is hard to shake. As one Kenyan annotator told the Swedish journalists: “You think that if they knew about the extent of the data collection, no one would dare to use the glasses.”
Charlie Harger is the host of “Seattle’s Morning News” on KIRO Newsradio. You can read more of his stories and commentaries here. Follow Charlie on X and email him here.