Major U.S. AI services, including ChatGPT, Google Bard, Alexa, Siri, and Starlink, now let users disable data collection for model training.[1]
The change matters because companies routinely feed user inputs back into their models, exposing personal details that could be stored, analyzed, or shared without consent.[1] Privacy advocates warn that even casual queries can reveal health, financial, or location information.
Research shows about one‑third of AI app users have deeply personal conversations with chatbots, and six leading U.S. AI companies still feed user inputs into their models for future training.[1] Those numbers underscore the scale of data that could be harvested.
Each app provides a privacy toggle, usually found under Settings > Privacy, that stops the service from using chats to train its models.[2] For ChatGPT, users select "Do not use my data for training." Google Bard offers a similar "Opt-out of data sharing" switch, while Alexa and Siri place the option inside their voice-assistant settings. Starlink's AI tools require a separate privacy-policy acknowledgment, which can be adjusted in the account portal.[3]
Opting out may limit personalized features such as context-aware suggestions or improved response accuracy, but it does not affect core functionality. Users must weigh that convenience against the risk of their conversations becoming part of large-scale training sets.
Lawmakers and regulators are watching the practice closely, with several bills proposing stricter disclosure requirements for AI data usage. The new opt‑out options could influence future policy, as companies demonstrate a willingness to give users more control.
**What this means**
Consumers now have a concrete tool to curb the flow of personal data into AI training pipelines, reducing exposure to unintended profiling or data breaches. While the safeguards are not uniform across platforms, the availability of opt‑out settings marks a shift toward greater transparency and user agency in the rapidly expanding AI ecosystem.
The rollout of opt‑out controls gives users a practical way to protect personal conversations from being harvested for AI model improvement, signaling a growing industry response to privacy concerns and potentially shaping upcoming regulation.