Google Chrome, the widely used web browser developed by Google, has come under fire for silently installing a 4-gigabyte (GB) artificial intelligence (AI) model on users’ devices without their consent. The discovery was made by Alexander Hanff, a computer scientist and lawyer, who argues that the practice is both illegal and environmentally costly.
According to Hanff, who shared his findings on his blog, Chrome is “reaching into users’ machines and writing a 4GB on-device AI model file to disk without asking.” The file, named “weights.bin,” resides in the OptGuideOnDeviceModel directory and contains the weights for Gemini Nano, Google’s on-device large language model. It is downloaded to the user’s device when Chrome’s AI features are active, and those features are enabled by default in recent Chrome versions.
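Readers who want to verify whether the file is present on their own machine can search their Chrome profile directory for it. The sketch below is illustrative only: the location of the Chrome user-data directory varies by operating system and installation, so it is passed in as an argument, and the directory and file names are taken from the article as reported.

```python
from pathlib import Path

def find_on_device_model(user_data_dir: str):
    """Search a Chrome user-data directory for on-device model weights.

    Returns (path, size_in_bytes) tuples for every 'weights.bin' file
    found under a directory whose path contains 'OptGuideOnDeviceModel'
    (the location reported in the article; this may differ by version).
    """
    hits = []
    for path in Path(user_data_dir).rglob("weights.bin"):
        if any("OptGuideOnDeviceModel" in part for part in path.parts):
            hits.append((path, path.stat().st_size))
    return hits
```

On Linux, for example, one might call `find_on_device_model` with `~/.config/google-chrome` expanded; on other platforms the user-data directory lives elsewhere, so check your local installation before drawing conclusions.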
What’s even more concerning, says Hanff, is that there is no checkbox or prompt in the Chrome settings to indicate that the 4GB AI model will be installed. This lack of transparency and consent raises serious questions about data sovereignty and user control over their devices.
The silent installation also carries an environmental cost. As more devices worldwide run AI-powered applications, the energy required to power these models adds up. According to a report by the International Energy Agency, the data center sector is expected to account for 8% of global electricity consumption by 2030, and the widespread distribution of large language models such as the one bundled with Chrome would add to that load.
Hanff argues that the practice is also illegal, citing Article 6 of the European Union’s General Data Protection Regulation (GDPR), which requires that any processing of personal data rest on a lawful basis, such as the data subject’s consent. Google’s silent installation of the AI model without consent appears to contravene this requirement.
Google has yet to publicly address the issue or offer a clear explanation for its actions. In the meantime, Hanff’s discovery has sparked a heated debate about the limits of consent and user control in the digital age.
As the world becomes increasingly reliant on AI-powered applications, it is essential that tech companies prioritize transparency and user consent. Google must address these concerns and stop silently installing large AI models on users’ devices. The implications for data sovereignty, climate change, and user trust are too great to ignore.
