A recent report from the ChatGPT taskforce, established by the European Data Protection Board (EDPB), finds that OpenAI’s efforts to mitigate the risk of the AI generating factually inaccurate output fall short of compliance with EU data regulations.
While the EDPB acknowledged that the measures taken to adhere to the transparency principle help prevent misinterpretation of ChatGPT’s output, it deemed them insufficient to satisfy the data accuracy principle. Given the centrality of accuracy in EU data protection regulations, the report raised concerns that ChatGPT’s probabilistic nature can lead to biased or fictitious outputs. Moreover, end users are likely to perceive those outputs as factually accurate regardless of whether they actually are.
Published on May 24th, the report requires OpenAI to integrate appropriate measures, both when determining the means of processing and during the processing itself, to align with data protection policies, adhere to GDPR requirements, and safeguard data subjects’ rights.
The report emphasized that the burden of GDPR compliance should not be shifted onto data subjects through clauses in terms and conditions. It also underscored the taskforce’s role in exchanging information among supervisory authorities (SAs) about their engagement with OpenAI, facilitating enforcement actions concerning ChatGPT, and identifying areas that require a unified approach across the SAs’ various enforcement actions.
While investigations are ongoing and a comprehensive report has yet to be released, the positions presented in the current report reflect a consensus among SAs on how GDPR provisions should be interpreted for matters within the scope of their investigations.