
OpenAI Under Fire for ChatGPT False Information Issues
It has been over two years since ChatGPT burst onto the scene, and despite OpenAI's advancements, the problem of "hallucinations," in which the AI presents false information as fact, persists. Austrian advocacy group Noyb has filed a second complaint against OpenAI, citing an instance in which ChatGPT falsely accused a Norwegian man of being a murderer.
According to the complaint, when asked about the man, ChatGPT claimed he had been sentenced to 21 years in prison for killing two of his children and attempting to murder a third. The AI even wove in accurate details, such as the number of his children, their genders, and his hometown, further blurring the line between fact and fiction.
GDPR Violation Claim
Noyb argues that this response violates GDPR. Joakim Söderberg, a data protection lawyer at Noyb, stated, "The GDPR is clear. Personal data has to be accurate. And if it's not, users have the right to have it changed to reflect the truth. A small disclaimer about potential mistakes isn't sufficient justification for spreading false information."
This is not an isolated case: other documented ChatGPT hallucinations include false accusations of fraud, embezzlement, child abuse, and sexual harassment.
Noyb's initial complaint, filed in April 2024, concerned an inaccurate birthdate. OpenAI responded that it could not correct existing information, only block it from being used in response to certain prompts, and otherwise relied on ChatGPT's disclaimer about potential inaccuracies.
The question remains: is a simple disclaimer enough to excuse a widely used AI chatbot spreading potentially damaging false information? OpenAI has yet to respond to this latest complaint.
Source: Engadget