Google Chrome has begun installing a 4 GB [1] Gemini Nano AI model directly onto users' computers without explicit notification.

The move has sparked a privacy backlash because Google removed a previous disclosure stating that data processed by the model would stay off its servers. Dropping that language suggests a shift in how the company handles locally processed data, and it undercuts user trust in the browser's on-device privacy claims.

Reports indicate the browser silently downloads the model weights, a file named "weights.bin" [2], into a directory named "OptGuideOnDeviceModel" [3]. The installation runs in the background, so many users are unaware that several gigabytes of storage have been consumed for the AI's operation.
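The footprint can be checked manually. Below is a minimal sketch that searches a Chrome user-data directory for the reported "OptGuideOnDeviceModel" component and totals its size; the default Linux path used here is an assumption, and locations vary by OS and Chrome channel:

```python
from pathlib import Path

def dir_size_bytes(root: Path) -> int:
    """Sum the sizes of all regular files under root."""
    return sum(p.stat().st_size for p in root.rglob("*") if p.is_file())

def find_model_dirs(user_data_dir: Path, component: str = "OptGuideOnDeviceModel"):
    """Yield (path, size_in_bytes) for every matching component directory."""
    for candidate in user_data_dir.rglob(component):
        if candidate.is_dir():
            yield candidate, dir_size_bytes(candidate)

if __name__ == "__main__":
    # Assumed default Chrome user-data location on Linux; adjust for macOS/Windows.
    base = Path.home() / ".config" / "google-chrome"
    if base.exists():
        for path, size in find_model_dirs(base):
            print(f"{path}: {size / 2**30:.2f} GiB")
```

Deleting the directory should reclaim the space, though Chrome may re-download the component on a later update cycle.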

Google says the on-device model improves overall privacy and security [4]: processing information locally, the company argues, reduces the need to send sensitive data to the cloud for every request.

Critics said the removal of the privacy promise contradicts these claims. While the model lives on the local machine, the lack of a guarantee that data will not be transmitted back to Google servers creates a legal and ethical gray area regarding user consent.

The deployment of Gemini Nano is part of a broader push to integrate generative AI into the browsing experience. However, the method of delivery — a silent download of a 4 GB [1] file — has led to accusations that the company is bypassing user preference to force AI adoption.


This deployment highlights the tension between the technical benefits of edge computing and the transparency requirements of data privacy. By removing the explicit promise to keep data off its servers, Google is granting itself more flexibility in how it trains and optimizes AI models, but it does so at the cost of the 'local-only' privacy guarantee that originally made on-device AI attractive to security-conscious users.