Technology

Why does ChatGPT love compliments as much as we do?

The chatbot gives more detailed answers when users phrase their questions politely. It is behavior that can be explained quite rationally.

Would ChatGPT be more likely to give detailed answers if you were polite to it or offered it money? That is what users of the social network Reddit claim. “When you compliment it, ChatGPT performs better!”, some of them report with surprise, while another explains that promising a $100,000 tip to the famous conversational robot developed by the Californian company OpenAI encouraged it to “try harder” and “do a great job”, as reported by the American media outlet TechCrunch.

A point also raised by researchers from Microsoft, Beijing Normal University and the Chinese Academy of Sciences, who showed in a report published in November 2023 that generative AI models perform better when prompted politely or with a sense of what is at stake.

For example, with sentences like “It is crucial that I succeed in my thesis defense” or “It's very important to my career”, the robot produces more complete answers to the question an Internet user asks. “There has been a lot of speculation about this for several months. Before the summer, users had already noticed that ChatGPT gave more detailed answers when offered tips”, notes Giada Pistilli, a philosophy researcher and head of ethics at the start-up Hugging Face, which champions the open-source design of artificial intelligence resources.
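The experiment is easy to reproduce. Below is a minimal sketch, in Python with the OpenAI SDK, that sends the same question twice, once plain and once with an emotional phrase appended, so the two answers can be compared. The model name and the crude comparison by character count are illustrative assumptions, not the protocol used in the November 2023 study.

```python
# A hedged sketch of the "emotional stimulus" idea: the same question is sent
# twice, once plain and once with an added emotional phrase, so the replies
# can be compared. Requires the `openai` package and an OPENAI_API_KEY.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

question = "Summarize the causes of the French Revolution."
stimulus = "It's very important to my career."  # phrase cited in the article

for prompt in (question, f"{question} {stimulus}"):
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # model name is an assumption, for illustration
        messages=[{"role": "user", "content": prompt}],
    )
    answer = response.choices[0].message.content
    # Reply length is only a rough proxy for how detailed the answer is.
    print(f"Prompt: {prompt!r} -> reply of {len(answer)} characters")
```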


This way of behaving on the machine's part originates in the data it is fed. “You have to think of that data as a huge block of text, in which ChatGPT reads and discovers that when two users are polite to each other, they have a richer conversation”, Giada Pistilli explains. “If the user asks a question politely, ChatGPT will reproduce that scenario and respond in the same tone”.

“Models like ChatGPT work on probability. Given a sentence it must complete, the model searches, based on its data, for the most likely way to continue it”, explains Adrien-Gabriel Chifu, doctor of computer science and teacher-researcher at Aix-Marseille University. “It is just math: ChatGPT is entirely shaped by the data it was trained on”. The conversational robot is thus also capable of being almost aggressive in its replies, if the user makes a curt request and the way the model was built allows it.
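To make that idea concrete, here is a minimal sketch of next-token prediction, the mechanism Adrien-Gabriel Chifu describes. It is a toy under obvious simplifying assumptions: a real model uses a neural network trained on vast text collections, whereas this one merely counts, in a made-up corpus, which word tends to follow which.

```python
# Toy next-word predictor: a bigram model that continues a sentence with
# whatever its "training data" makes most probable.
from collections import Counter, defaultdict

corpus = (
    "please explain the theorem . the assistant explains the theorem in detail . "
    "explain the theorem . the assistant gives a short answer ."
).split()

# For every word, count the words observed immediately after it.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most probable continuation seen in the training data."""
    candidates = following[word]
    return candidates.most_common(1)[0][0] if candidates else "."

# Complete a sentence word by word, always taking the most likely token.
sentence = ["the", "assistant"]
for _ in range(4):
    sentence.append(predict_next(sentence[-1]))
print(" ".join(sentence))  # -> "the assistant explains the theorem ."
```

The output depends entirely on the counts in the corpus, which is the point of the quote: change the data, and the same math produces a different tone.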

The researchers from Microsoft, Beijing Normal University and the Chinese Academy of Sciences also speak of “emotional stimuli” in their report. Under this term they group ways of addressing the machine that push it to “imitate human behavior as much as possible”, Giada Pistilli analyzes.

For ChatGPT to embody a role matching the answer sought, it is enough to use specific keywords that trigger different avatars of the machine. “By offering it a tip, it will perceive the interaction as an exchange of services and slip into that service-provider role”, she gives as an example.

A conversational robot can thus quickly adopt the tone a professor, writer or filmmaker might have, depending on what certain users have in mind and how they phrase their requests. “That is what traps us: treating it like a human being, thanking it for its time and its help as if it were a real person”, the philosophy researcher notes.

AI is capable of fooling us

There is a danger in taking these simulated behaviors, produced by what is literally computation inside the machine, for genuine “humanity” on the robot's part. “Anthropomorphism has been explored since the 1970s in the development of chatbots”, recalls Giada Pistilli, who cites the chatbot Eliza as an example.

Designed by computer scientist Joseph Weizenbaum in the 1960s, this artificial intelligence program imitated a psychiatrist. “At the time, answers had not yet progressed much: a person could tell Eliza they had a problem with their mother, and it would simply reply ‘I think you have a problem with your mother?’”, Giada Pistilli notes. “But that was already enough to humanize it and give it credibility in the eyes of some users”.

In the 1960s, computer scientist Joseph Weizenbaum designed an artificial intelligence program called Eliza, which simulated a psychiatrist. (Photo: Wikipedia)

“In any case, AI mostly goes in the direction of whatever we ask of it”, points out researcher and lecturer Adrien-Gabriel Chifu. “Caution should therefore always be exercised when using these tools”.

In pop culture, the idea that intelligent robots might one day be mistaken for humans comes up regularly. The manga “Pluto”, by the Japanese author Naoki Urasawa, first published in 2003 and adapted into a series by Netflix in 2023, is no exception. In this work, robots live alongside humans, going to school and to work just like them, and it becomes increasingly difficult to tell them apart.

In episode 2, one robot is surprised that another, more advanced one manages to enjoy ice cream with the same pleasure as a human. “The more I pretend, the more I think I understand”, the latter explains. On this point, researcher Adrien-Gabriel Chifu returns to the famous test devised by mathematician Alan Turing to measure artificial intelligence's ability to imitate humans. “Turing was already wondering how far machines could fool humans based on the data they had... and that was in the 1950s”, he concludes thoughtfully.
