Artificial intelligence (AI) has made great strides in recent years and is increasingly being used in various fields. One of the most well-known chatbots is ChatGPT, developed by OpenAI. Millions of people use this AI application every day to get information or have conversations. However, OpenAI is currently under pressure from an investigation by the US Federal Trade Commission (FTC) due to possible violations of consumer protection laws.

Privacy and reputational threats

The FTC has raised concerns about how the popular chatbot ChatGPT protects personal data and people’s reputations. According to a report in The Washington Post, the agency requested information from OpenAI in a 20-page letter to better understand the risks associated with artificial intelligence. Neither the FTC nor OpenAI has yet officially commented on the allegations.

Generative AI and the use of personal data

ChatGPT and similar AI models are based on so-called generative AI, which is trained on large amounts of data, including personal posts on social media. In addition, user input, known as “prompts,” is incorporated into further training of the AI. This practice has raised privacy concerns. Google is facing a similar lawsuit that accuses the company of using personal and proprietary information without authorization to train its AI applications.

Concerns from European regulators

European regulators have also raised concerns, particularly regarding the use of personal data in chatbots and other AI-based services. In Italy, ChatGPT was temporarily blocked, but later unblocked. However, the final decision on the admissibility of the service is still pending. A major concern of privacy advocates is the dissemination of false information and reputation-damaging statements by services such as ChatGPT.

The FTC’s demands and the challenges for OpenAI

The FTC is demanding that OpenAI provide detailed information about how ChatGPT was trained and what safeguards the company has in place to prevent potentially harmful false claims. While AI service providers emphasize that their models do not necessarily reflect the truth, these models are nonetheless gaining traction and being integrated into various applications. The apparent eloquence of chatbots, however, often misleads users. Even experts cannot always distinguish fact from fiction, as illustrated by the case of two lawyers in New York who were fined for including unverified claims from ChatGPT in their briefs.

Outlook for increased regulation of AI applications

The FTC’s investigation of OpenAI marks another step toward increased regulation of AI applications. Privacy and the protection of personal information are important concerns that must be addressed in the development and use of AI technologies. Hopefully, these investigations will lead to clear guidelines to ensure privacy protection and responsible use of AI systems.
