OpenAI unlawfully collected massive amounts of Canadians’ data through ChatGPT, according to federal and provincial privacy regulators.
On Wednesday, the federal privacy commissioner and his counterparts in B.C., Quebec and Alberta revealed the results of a three-year investigation into OpenAI’s AI chatbot.
“The four regulators identified several privacy concerns and ultimately concluded that the way that OpenAI had initially trained ChatGPT was not compliant with their respective privacy laws,” writes the office of the privacy commissioner.
In their investigation, the regulators found “a number of privacy issues were present in the initial training and deployment of ChatGPT, including: overcollection of personal information; lack of valid consent and transparency; factual inaccuracies involving personal information; issues related to individuals’ ability to access, correct and delete their personal information; and a lack of accountability for the personal information under OpenAI’s control.”
In particular, the regulators say that “many users were likely unaware or lacked basic understanding of the implications of their personal information being used to train OpenAI’s models, including the potential review of their conversations by human trainers.” They added that a one-time notification during ChatGPT account creation or first-time use wasn’t sufficient to ensure users understood the full scope and consequences of OpenAI’s collection of their data.
OpenAI, for its part, argued to the regulators that it complied with Canadian privacy laws in “most respects,” though it didn’t specify which rules it met and which it fell short of. The privacy agencies do note, however, that OpenAI implemented measures to improve protections for personal information during the investigation, including limiting the amount of data it collects. Because of those changes, the report determined that the matter has been resolved in accordance with Canada’s Personal Information Protection and Electronic Documents Act (PIPEDA).
With all of that said, Privacy Commissioner Philippe Dufresne said more work needs to be done in general to “modernize Canada’s privacy laws” in the digital age. “As AI is increasingly being integrated into personal and professional applications, and while current privacy laws apply to AI, updated laws would help further support the safe deployment of new technologies to protect Canadians’ fundamental right to privacy,” wrote Dufresne.
While privacy law changes will no doubt take some time, OpenAI itself continues to face scrutiny in Canada. Notably, the company is facing a lawsuit potentially worth more than US$1 billion over the Tumbler Ridge, B.C. mass shooting in February.
Shortly after the tragedy, investigators learned that OpenAI had banned a ChatGPT account belonging to the shooter due to queries related to mass shootings, but opted not to notify Canadian authorities, despite the urging of some employees. In response, OpenAI pledged to tighten its safety measures, while CEO Sam Altman formally apologized to victims and their families in April.
Source: The Office of the Privacy Commissioner of Canada Via: The Toronto Star
