ChatGPT talked nonsense for several hours

ChatGPT gone haywire? The wildly popular generative artificial intelligence (AI) interface that brought the technology to the mainstream malfunctioned for several hours on Tuesday, answering users’ questions with gibberish, a reminder that these systems are still in their infancy.

OpenAI, the start-up that launched the program in late 2022, indicated on its site Wednesday morning that ChatGPT was operating “normally” again.

Irregular or unpredictable responses

On Tuesday afternoon, San Francisco time, where it is based, the Silicon Valley company announced it was “investigating reports of unexpected responses from ChatGPT.” A few minutes later, it said it had “identified the problem” and was “in the process of resolving it.”

Many users posted screenshots showing erratic or unpredictable responses from the generative AI model. The technology can produce all kinds of content (text, audio, video) from simple prompts in everyday language, usually of astonishing quality.

“My GPT is haunted”

On a forum for developers who use OpenAI’s tools, a user named “IYAnepo” remarked on ChatGPT’s “strange” behavior. “It generates entirely nonexistent words, omits words, and produces sequences of small keywords that are unintelligible to me, among other anomalies,” he wrote. “One might think I had specified such instructions, but that is not the case. I feel like my GPT is haunted (…).”

Another user, “scott.eskridge”, complained on the same forum that all his conversations with the language model had been “rapidly devolving into nonsense for the past three hours.” He copied an excerpt of the interface’s responses: “Money for bits and lists is one of strangers and the Internet is where currency and spending person is one of friends and currency. The next time you see the system, exchange and fact, remember to give.”

OpenAI did not provide further details on the nature of the incident, which serves as a reminder that AI, even generative AI, has no awareness or understanding of what it is saying. Gary Marcus, an AI expert, hopes the episode will be seen as a “wake-up call.” “These systems have never been stable. No one has ever been able to engineer safety guarantees around these systems,” he wrote in his newsletter on Tuesday. In his view, “the need for altogether different technologies, less opaque, more interpretable, easier to maintain and debug, and therefore easier to implement, remains paramount.”
