# News

## Why Does David Mayer’s Name Crash ChatGPT?


Date: December 03, 2024

ChatGPT, the AI chatbot used worldwide, has been abruptly cutting off its responses when asked about certain people, including one named David Mayer.

AI chatbots are rapidly becoming a primary tool for research, creative ideation, content writing, and other tasks. However, they come with limitations, some built in by their developers from the start and others added as new demands emerge. One of these, reportedly, is ChatGPT’s refusal to complete a response whenever it is asked about a person named David Mayer.

ChatGPT has been crashing when asked to either name or describe a person called David Mayer. The response it begins to produce is suddenly cut short by a final ‘I’m unable to produce a response’ message. As more people grew curious about this peculiar behavior, more names emerged that triggered the same failure.

People quickly formed several controversial theories, the most viral being that the AI is trying to hide the names of top-secret individuals. The fact that ChatGPT freezes mid-response, no matter how the prompt is twisted, only strengthened that belief. But a blunt refusal would put any supposedly hidden people in the spotlight rather than conceal them. So, no!

Users who tried every angle to get ChatGPT to say David Mayer’s name also discovered other names that trigger the same behavior. These include, but are not limited to, Brian Hood, Jonathan Turley, Jonathan Zittrain, David Faber, and Guido Scorza. More names are being added to the list as you read this.

The actual reason is nothing out of the ordinary, and it emerged once people looked at the names beyond David Mayer. All of the listed individuals have either made requests to OpenAI or sent the company legal notices over responses that falsely described them or confused them with someone else.
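OpenAI has not explained the mechanism, but the observed behavior is consistent with a hard-coded filter applied on top of the model’s streamed output rather than a restriction inside the model itself. A minimal sketch of such a streaming blocklist filter, assuming a plain substring match and illustrative names only, might look like this:

```python
# Hypothetical sketch of a post-generation blocklist filter.
# OpenAI has not published its actual mechanism; this only illustrates
# why a response could stop mid-sentence instead of refusing up front.

BLOCKED_NAMES = {"David Mayer", "Brian Hood"}  # illustrative entries

def stream_with_filter(token_stream):
    """Yield tokens, but abort mid-response if a blocked name appears."""
    emitted = ""
    for token in token_stream:
        emitted += token
        # Once the accumulated text contains a blocked name, stop cold.
        # Everything yielded so far has already reached the user, which
        # matches the cut-off-mid-sentence behavior people reported.
        if any(name in emitted for name in BLOCKED_NAMES):
            yield "I'm unable to produce a response."
            return
        yield token

# Example: the reply halts the moment the filtered name completes.
for chunk in stream_with_filter(["The person ", "named David ", "Mayer is "]):
    print(chunk, end="")
```

Because the check runs on the output stream, the model itself happily starts answering; the filter only intervenes once the forbidden string materializes, which is why the freeze looks like a crash rather than a deliberate refusal.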

One of them is Brian Hood, a former Australian mayor whom ChatGPT falsely described as the perpetrator of a decade-old crime that he had in fact reported. His lawyers contacted OpenAI, and no lawsuit was filed. The end result, as Hood told an Australian morning newspaper, was that “the offending material was removed, and they released version 4, replacing version 3.5.”

The others may have been involved in similar incidents, or may simply have asked for their names and information to be removed from online directories and archives. Searching the Internet for them won’t help, either.

By Arpit Dubey
