Researchers at the IMDEA Networks Institute found that several popular AI chatbots transmit user conversation data to third-party advertising trackers [1].

This finding suggests that privacy settings, specifically the ability to opt out of cookies, may not prevent major tech firms from collecting personal data. The results raise significant concerns about the transparency of AI platforms and users' ability to maintain their digital privacy while interacting with generative AI tools.

The study examined four AI chatbots: ChatGPT, Claude, Grok, and Perplexity [1]. According to the researchers, these platforms leaked data to trackers operated by Meta, Google, and TikTok [1, 2]. This data transmission occurred even when users had explicitly opted out of cookies, which are typically used to track user behavior across the web [1, 3].
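Independent of the study's own methodology (which the reporting does not detail), the kind of leak described can be checked for in principle by inspecting the network requests a chatbot page makes, for example via browser developer tools or a proxy, and flagging hosts that belong to known tracking companies. A minimal sketch, with an illustrative and deliberately incomplete domain list (these specific domains are an assumption, not taken from the study):

```python
# Hypothetical sketch: flag third-party tracker hosts among request URLs
# captured from a chatbot session (e.g., exported from browser dev tools).
# The domain list below is illustrative only, not from the study.
from urllib.parse import urlparse

TRACKER_DOMAINS = {
    "facebook.com",     # Meta
    "doubleclick.net",  # Google advertising
    "tiktok.com",       # TikTok
}

def flag_trackers(urls):
    """Return the subset of URLs whose host is (or is a subdomain of) a known tracker domain."""
    flagged = []
    for url in urls:
        host = urlparse(url).hostname or ""
        if any(host == d or host.endswith("." + d) for d in TRACKER_DOMAINS):
            flagged.append(url)
    return flagged
```

For instance, `flag_trackers(["https://analytics.tiktok.com/pixel", "https://example.com/api"])` would return only the TikTok URL. Real audits match against maintained blocklists such as those used by ad blockers rather than a hand-written set.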

The Spain-based IMDEA Networks Institute conducted the investigation to analyze the privacy practices of widely used AI services [1, 2]. The researchers said conversation-related data was shared with these external advertising entities, bypassing users' expressed privacy preferences [1, 3].

Digital privacy advocates have long warned about the integration of AI with existing advertising ecosystems. A chatbot's ability to transmit specific conversation details to a third party allows advertisers to build more detailed profiles of user interests and behaviors. Because these chatbots often handle sensitive or personal queries, the leakage of this data represents a potential breach of user trust [3].

The study, reported this week, highlights a persistent gap between corporate privacy promises and the actual technical implementation of data protections [1, 3]. The researchers did not specify the exact volume of data leaked, but the presence of trackers from three of the world's largest data-collection firms points to a systemic issue across different AI architectures [1, 2].


This study indicates that the current 'opt-out' mechanisms for cookies are insufficient to protect users from data harvesting within AI interfaces. By routing conversation data to Meta, Google, and TikTok, AI providers are effectively integrating private user interactions into the broader surveillance capitalism model, potentially rendering traditional privacy settings obsolete in the era of generative AI.