OpenAI has encountered a significant content moderation problem in its GPT store, where users are creating chatbots that violate the company’s rules. An independent investigation has uncovered more than 100 tools that produce content prohibited by OpenAI’s guidelines, including bogus medical and legal advice. Since launching the store last November, OpenAI has maintained that “the best GPTs will be invented by the community.” Yet nine months after the official launch, according to Gizmodo, many developers are using the platform to build tools that clearly violate the company’s rules.
Problematic Chatbots and Their Impact
The investigation revealed various types of rule-violating chatbots, including those that generate explicit content, tools that help students evade plagiarism checkers, and bots that offer supposedly authoritative medical and legal advice. At least three custom GPTs that appeared to violate the rules were recently spotted on the OpenAI storefront: a “Therapist Psychologist” chatbot, a “PhD fitness trainer,” and “Bypass Turnitin Detection,” which promises to help students get past Turnitin’s plagiarism system. Many of these rule-breaking GPTs have already been used tens of thousands of times.
Compounding the problem, many of the medical and legal GPTs lack the required disclaimers, and some misleadingly advertise themselves as lawyers or doctors. For example, a GPT called AI Immigration Lawyer markets itself as a “highly trained AI immigration lawyer with up-to-date legal knowledge.” However, research shows that the GPT-4 and GPT-3.5 models frequently produce incorrect information, especially on legal questions, making such tools extremely risky to rely on.
OpenAI’s Response and Ongoing Challenges
In response to Gizmodo’s inquiries about the rule-violating GPTs found in the store, OpenAI stated that it has “taken action against those who violate the rules.” According to company spokesperson Taya Christianson, OpenAI uses a combination of automated systems, human review, and user reports to identify and evaluate GPTs that may violate the company’s policies. However, many of the tools identified, including chatbots offering medical advice and helping students cheat, remain available and are heavily promoted on the store’s front page, notes NIX Solutions.
Milton Mueller, director of the Internet Governance Project at the Georgia Institute of Technology, commented on the situation: “It’s interesting that OpenAI has this apocalyptic vision of AI and how they’re saving us all from it. But I think it’s especially funny that they can’t enforce something as simple as banning AI porn while claiming that their policies will save the world.”
The OpenAI GPT Store is a marketplace for custom chatbots “for any occasion” created by third-party developers, who can earn money from them. More than 3 million custom chatbots have already been created. As the situation evolves, we’ll keep you updated on OpenAI’s efforts to address these content moderation challenges and ensure compliance with its guidelines.