Canadian privacy regulators found that OpenAI violated national privacy laws by launching ChatGPT without addressing known privacy issues [1].
The finding signals a tightening of regulatory oversight for artificial intelligence companies operating within Canada. Because the company failed to implement safeguards before the public release, the ruling may set a precedent for how data handling is policed in the generative AI era.
Privacy Commissioner Philippe Dufresne announced the findings during a press conference in Ottawa [4]. He said OpenAI launched ChatGPT without having fully addressed known privacy issues [4]. The investigation focused on the early versions of the model and how the company handled the training process [1, 2].
According to the commissioner, this lack of preparation created significant vulnerabilities for the public [4]. "This exposed Canadians to potential risks of harm such as breaches and discrimination on the basis of information about them," Dufresne said [4].
The report is the result of a joint effort between federal and provincial privacy watchdogs [1]. These agencies examined whether the company respected the privacy rights of Canadians while collecting and processing data to train its large language models [2, 3].
OpenAI has not yet provided a detailed public response to the specific findings of the commissioner's report [1]. The investigation highlights a tension between the rapid deployment of AI technology and the legal requirements for data protection [3].
“"OpenAI launched ChatGPT without having fully addressed known privacy issues."”
This ruling underscores a growing global trend of regulators demanding 'privacy by design' for AI developers. Rather than allowing companies to iterate and fix privacy flaws after a product is live, Canadian authorities are asserting that legal compliance must precede market entry. This may force OpenAI and similar firms to alter their data ingestion and training protocols to avoid further legal sanctions in the North American market.