US Federal Trade Commission opens investigation into OpenAI over data mishandling

The US Federal Trade Commission (FTC) has launched an investigation into OpenAI, the company behind the popular ChatGPT chatbot. The investigation seeks to determine whether the service generates false information that could harm consumers and whether it mishandles user data.

OpenAI, which is backed by Microsoft, received a 20-page questionnaire from the FTC outlining the agency's specific concerns. The questionnaire requests information about instances in which users were subjected to false disparagement and asks OpenAI to disclose any measures taken to prevent such incidents in the future.

During a congressional committee hearing, FTC Chair Lina Khan acknowledged concerns about ChatGPT's potential to produce libelous output. While she did not explicitly confirm the investigation, Khan said the agency had received reports of sensitive information being disclosed in response to queries, as well as cases of libel and defamatory statements. She emphasized that the FTC is focused on addressing fraud and deception.

The FTC's main objective is to understand how these issues could harm users. It is also examining OpenAI's use of private data to train its advanced language model.

It is important to note that an FTC investigation does not automatically lead to further action, and the case can be closed if the company under scrutiny adequately addresses the concerns raised by the agency.

However, if the FTC identifies illegal or unsafe practices, it can require remedial action and potentially initiate legal proceedings against OpenAI. Both OpenAI and the FTC have declined to comment on the investigation.

ChatGPT gained significant attention for its impressive capabilities upon its release in November 2022. It is built on a large language model (LLM), a class of AI systems that can generate human-like text within seconds.

While the technology has drawn widespread admiration, the models have also been reported to generate offensive, false, or nonsensical content, failures often referred to as "hallucinations."
