ChatGPT Still Spreads Falsehoods, Says EU Data Watchdog
ChatGPT, the viral chatbot from OpenAI, still falls short of the European Union’s data accuracy standards, according to a new report by the EU’s privacy watchdog.
Also read: Europe’s AI Act Gets Final Approval With Up To $38M Fines
The European Union Data Protection Board’s (EDPB) report is not a legal document. However, it will inform a common approach for regulating ChatGPT across the 27 member states of the EU.
ChatGPT Fails to Comply With Data Standards
National watchdogs formed the ‘ChatGPT task force’ last year, prompted by ongoing cases against the AI model in several European countries. The task force falls under the EDPB.
In a report published May 24, the task force said:
“Although the measures taken to comply with the transparency principle are beneficial to avoid misinterpretation of the output of ChatGPT, they are not sufficient to comply with the data accuracy principle.”
Large language models, or LLMs, are notorious for “hallucinating” – tech speak for when AI chatbots spew out falsehoods, often with confidence.
Chatbots like ChatGPT and Google’s Gemini are powered by LLMs. Under Europe’s tough General Data Protection Regulation (GDPR), users can seek legal redress if they are misrepresented or denied the opportunity to correct inaccurate information about themselves.
According to the task force’s report, “Due to the probabilistic nature of the (AI) system, the current training approach leads to a model which may also produce biased or made-up outputs.”
“The outputs provided by ChatGPT are likely to be taken as factually accurate by end users, including information relating to individuals, regardless of their actual accuracy,” it added.
EU Doubles Down on Compliance
OpenAI has previously cited technical complexity as the reason it cannot correct misinformation, but the European Union watchdog is doubling down.
“In particular, technical impossibility cannot be invoked to justify non-compliance with these requirements,” noted the task force.
In a newly opened case, an Austrian public figure is suing OpenAI over ChatGPT’s failure to correct their date of birth. OpenAI has said doing so would not be technically feasible, but the new report will add pressure on the company.
Maartje de Graaf, a data protection lawyer connected to the case, said companies are currently unable to make AI chatbots comply with EU law when processing data about individuals.
“If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around,” de Graaf stated.
Safety Concerns Loom at OpenAI
Experts say ChatGPT will require ambitious changes to avoid running afoul of the GDPR, which regulates data protection in the European Union.
The rules also provide a working model for data laws in Argentina, Brazil, Chile, Japan, Kenya, Mauritius, South Africa, South Korea, Turkey, and the UK.
Also read: OpenAI Scraps ChatGPT Voice After Scarlett Johansson Controversy
Meanwhile, OpenAI’s commitment to user safety has come into question.
The company has been hit by a spate of high-profile resignations, with the departures of cofounder Ilya Sutskever, Gretchen Krueger, Jan Leike, and a half-dozen others pointing to a shift in the company’s culture.
Earlier this week, OpenAI announced the suspension of the ChatGPT voice ‘Sky’ after concerns that it resembled the voice of actress Scarlett Johansson, who voiced an AI assistant in the film Her.
Cryptopolitan Reporting by Jeffrey Gogo