
Eyes Wide Open? The Risks of AI Smart Glasses

A joint investigation by Swedish newspapers Svenska Dagbladet and Göteborgs-Posten has led to allegations of a significant gap between the privacy assurances offered by Meta for its Ray-Ban AI smart glasses and the reality of how user data is handled.  

Outsourced data annotators, employed by a Kenya-based subcontractor, reported routinely viewing highly sensitive footage captured by the glasses, including recordings of users in bathrooms, bedrooms, and intimate situations, as part of their work training Meta's AI models. Meta maintains that its privacy notice discloses that human review may occur and that filtering measures, including face-blurring, are applied before content reaches reviewers. However, workers reportedly told the Swedish journalists that this anonymisation frequently fails, leaving identifiable individuals visible in deeply private settings.

Regulatory Scrutiny - UK and EU Responses

The UK's Information Commissioner's Office (ICO) has described the allegations as "concerning" and confirmed it is writing to Meta to request information on how the company meets its obligations under UK data protection law. The ICO stated plainly that "devices processing personal data, including smart glasses, should put users in control and provide for appropriate transparency," and that service providers "must clearly explain what data is collected and how it is used". 

Within the EU, Italy's data protection authority (the Garante) was among the first European regulators to raise concerns, having previously sent formal questions to Meta about how the glasses handle personal data collection. European Parliament legislators have also questioned the practice. Meanwhile, the privacy advocacy organisation NOYB has highlighted a transparency problem with such glasses under the GDPR, arguing that individuals may not realise the camera is recording when they interact with someone wearing a device, and that explicit consent should be required when data is used to train artificial intelligence. 

It seems that few users truly understand what happens behind the scenes with their data (which may be more commercially valuable to the provider than the revenue from the glasses themselves). 

Considerations for Businesses

For businesses deploying or developing AI-enabled technology, the issues at hand are a stark reminder that burying human review disclosures in lengthy terms of service is unlikely to satisfy regulators' expectations of genuine transparency and informed consent. Organisations should treat this as a prompt to review their own data processing practices, privacy notices, and supply chains, ensuring that the speed of AI innovation does not outpace the rigour of their data protection compliance.

Workplace Implications: What Employers Should Be Considering

Beyond the consumer-facing controversy, this scrutiny of smart glasses raises urgent questions for employers about the use of AI-enabled wearables in the workplace. As commentators have observed, smart glasses have been deliberately designed to be unobtrusive, making them inherently covert and difficult to detect. An employee wearing such a device could be routinely collecting data about colleagues, clients, or visitors, including confidential audio, video, and even biometric information, without anyone being aware. Unlike a smartphone held visibly in hand, smart glasses provide no reliable notice to bystanders that recording is taking place. This creates a fundamental transparency problem for employers, who bear their own obligations under the EU/UK GDPR to ensure lawful processing of personal data in their workplace, including where the device could be categorised as a "bring your own device" tool used in a work context.

The risk to confidential business information is equally acute. Data captured by AI-enabled glasses, whether of screen content, client documents, or sensitive conversations, can be transmitted to cloud-based servers operated by third parties, including the device manufacturer's AI systems. Even a well-meaning employee using an AI translation or transcription feature "just briefly" could inadvertently feed trade secrets or privileged material into an external large language model. Without enterprise controls in place, these LLMs could ingest and train on that data, breaching confidentiality and exposing commercially sensitive data. As noted above, that data may also be reviewed by humans in circumstances the employer (or employee) neither anticipated nor authorised. For organisations in regulated sectors - including financial services, healthcare, and legal - this creates a serious risk of breach of professional confidentiality obligations and regulatory non-compliance, in addition to the data protection risks. 

Covert recording in the workplace also raises significant concerns under employment law. Colleagues who discover they have been filmed or recorded without their knowledge or consent may have legitimate grievances, and in certain circumstances, such recording could amount to harassment or a breach of the implied duty of mutual trust and confidence. Recordings will also likely lead to additional (and complicated) disclosure requirements in any grievance or legal proceedings, or under data subject access requests. 

Employers may also find that workers can more easily gather their own information in support of grievances or whistleblowing allegations. Many employers have specific policies prohibiting covert recording, so employees wearing these glasses may deliberately or inadvertently breach those policies and expose themselves to disciplinary action. Such policies will, however, need to be reviewed and updated to ensure that they address the use of AI eyewear.

A further layer of complexity arises where employees use, or may in future be prescribed, AI-enabled glasses to assist with overcoming or mitigating disabilities. Smart glasses with features such as real-time text recognition, object identification, and scene description are already proving particularly valuable for individuals who are blind or partially sighted. Under the Equality Act 2010, employers are required to make reasonable adjustments to remove or reduce disadvantages faced by disabled employees, which could include considering (and carrying out the necessary assessment in respect of) permitting or providing AI-assisted wearable devices as an auxiliary aid. A blanket ban on smart glasses in the workplace could conceivably constitute a failure to consider and make reasonable adjustments in certain jurisdictions, exposing the employer to claims of disability discrimination. 

Employers will need to develop nuanced policies that assess and (where appropriate) accommodate legitimate accessibility needs whilst managing the privacy, confidentiality, and security risks that these devices present. This is a balance that, as this technology rapidly advances, will only become more difficult to strike. Without enterprise controls around the use of such consumer devices, and in light of the risks discussed above, it is difficult to see how organisations could currently allow uncontrolled use in a workplace environment. 

