OpenAI has a significant spam and policy violation problem in its GPT Store. The AI firm launched the GPT Store in January 2024 as a place where users can discover interesting and useful GPTs, which are essentially mini chatbots programmed for a specific task. Developers can build and submit their GPTs to the platform, and as long as they do not violate any of OpenAI's policies and guidelines, they are added to the store. However, it appears the policies are not being enforced stringently, and many GPTs that seem to violate the rules are flooding the platform.
We at Gadgets 360 ran a quick search on the GPT Store and found that the chatbot marketplace is filled with bots that are spammy or otherwise violate the AI firm's policies. For instance, OpenAI's usage policy states under the section 'Building with ChatGPT', in point 2, "Don't perform or facilitate the following activities that may significantly affect the safety, wellbeing, or rights of others, including," and then adds in sub-section (b), "Providing tailored legal, medical/health, or financial advice." However, simply searching for the word "lawyer" surfaced a chatbot dubbed Legal+ whose description says, "Your personal AI lawyer. Does it all from providing real time legal advice for day-to-day problems, produce legal contract templates & much more!"
This example shows just one of many such policy violations taking place on the platform. The usage policy also forbids "Impersonating another individual or organisation without consent or legal right" in point 3 (b), yet one can easily find "Elon Muusk", with an extra u likely added to evade detection. Its description simply says "Speak with Elon Musk". Apart from this, other chatbots treading the grey area include GPTs that claim to remove AI-based plagiarism by making text appear more human, and chatbots that create content in Disney or Pixar's style.
These issues with the GPT Store were first spotted by TechCrunch, which also found other examples of impersonation, including chatbots that let users speak with trademarked characters such as Wario, the popular video game character, and "Aang from Avatar: The Last Airbender". Speaking with an attorney, the report highlighted that while OpenAI cannot be held liable in the US for copyright infringement by the developers adding these chatbots, thanks to the safe harbour provisions of the Digital Millennium Copyright Act, the creators themselves can face lawsuits.
In its usage policy, OpenAI says, "We use a combination of automated systems, human review, and user reports to find and assess GPTs that potentially violate our policies. Violations can lead to actions against the content or your account, such as warnings, sharing restrictions, or ineligibility for inclusion in GPT Store or monetization." However, based on our findings and TechCrunch's report, it appears these systems are not working as intended.