Originally published at: The Privacy Risks of Facial Recognition in Smart Glasses - TidBITS
In a 404 Media post titled “Someone Put Facial Recognition Tech onto Meta’s Smart Glasses to Instantly Dox Strangers,” Joseph Cox writes:
A pair of students at Harvard have built what big tech companies refused to release publicly due to the overwhelming risks and danger involved: smart glasses with facial recognition technology that automatically looks up someone’s face and identifies them. The students have gone a step further too. Their customized glasses also pull other information about their subject from around the web, including their home address, phone number, and family members.
The project is designed to raise awareness of what is possible with this technology, and the pair are not releasing their code, AnhPhu Nguyen, one of the creators, told 404 Media.
Despite the title, what’s going on here feels more like extreme Google stalking than doxing, which involves the public release of personal information. In 2023, Kashmir Hill wrote in the New York Times about how Google and Facebook had developed but decided not to ship such technology. Now, thanks to the likes of Clearview AI and PimEyes, facial recognition on consumer-grade hardware is within the reach of clever college students.
With luck, the awareness this project raises will highlight the woeful state of privacy regulation in the US, where fragmented state-level policies leave large gaps in protection. Although enforcement and compliance mechanisms won’t be fully active for several years, the recently passed EU Artificial Intelligence Act prohibits:
the placing on the market, the putting into service for this specific purpose, or the use of AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage
Smart glasses with this capability aren’t far off. Tech giants will likely block problematic uses of facial recognition (I can’t imagine Apple approving such an app), but fast-moving Asian electronics manufacturers, open source tools, and readily available services could soon put technology like this in the hands of anyone who wants it. Even if most people wouldn’t use it for malicious purposes, it opens the door to significant abuses like real-world stalking, identity theft, and kidnapping.
Corporate self-regulation and ethical guidelines are necessary, but it’s hard to see anything other than comprehensive regulation effectively curbing the privacy abuses enabled by instant facial recognition and real-time digital stalking.