
ChatGPT won’t replace pediatricians: AI has low accuracy in childhood diagnoses

Experts say that, even as AI advances, pediatricians' clinical experience remains essential, given chatbots' low accuracy in diagnosing minors. (Illustrative image)

A recent study published in JAMA Pediatrics points out that using artificial intelligence chatbots such as ChatGPT to diagnose medical conditions in children is unreliable.

The research shows that this AI system achieved only 17% accuracy in diagnosing childhood diseases, a very low figure.

According to the researchers, this shows that the experience of pediatricians remains irreplaceable and highlights the importance of their clinical knowledge. Even so, many experts in the health sector acknowledge that the integration of AI into medical care is probably imminent.

Artificial intelligence is growing rapidly in the healthcare sector and is being used for a wide range of applications.

The researchers highlight that ChatGPT's responses are often too general, failing to capture the specifics of pediatric cases. (OpenAI)

These include analyzing large amounts of medical data to identify patterns that aid in the prevention and treatment of diseases, developing algorithms for more accurate diagnoses, personalizing treatment for patients, and automating administrative tasks to improve the efficiency of health services.

However, the recent study, conducted at Cohen Children's Medical Center in New York, found that even a new version of ChatGPT is not yet ready to diagnose diseases in children. Children differ from adults because they change considerably as they grow and often cannot clearly describe what is happening to them.

In the experiment, the scientists gave ChatGPT the texts of 100 real pediatric health cases and asked the system to determine what disease each child had. Two expert physicians then assessed whether the artificial intelligence's answers were correct, incorrect, or only partially correct.

A doctor provides medical care to a child in the hospital, where specialized care and pediatrics come together to ensure the well-being of young patients. (Illustrative image)

Sometimes ChatGPT named an illness that was related to the real one but too general to count as correct. For example, ChatGPT concluded that a child had a kind of lump in the throat, when in fact the child had a genetic disease that also affects the ears and kidneys and can cause such a lump to appear.

Of the 100 cases tested, ChatGPT gave the correct answer in only 17. In 11 cases its answer was related but not precise enough to count as correct, and in 72 it was completely wrong. Of the 83 answers judged incorrect, 47 at least involved the right body part, but the diagnosis was still wrong.

The researchers noted that the AI struggled to grasp connections that experienced doctors know well. For example, a child with autism may develop scurvy from not getting enough vitamin C.

This can happen because people with autism sometimes eat a restricted range of foods and develop vitamin deficiencies. Doctors know to look for these deficiencies even in children from countries where people generally eat well. The chatbot missed this connection and concluded that the child had a different, much rarer illness.

Common ChatGPT errors include overly general diagnoses and a failure to connect symptoms with the underlying medical conditions. (Illustrative image)

The chatbot did not perform well in this test, but the researchers said it could improve if it were trained on specialized medical literature rather than on internet information that is sometimes misleading.

They added that if the chatbot could draw on up-to-date medical data, it would make better diagnoses. They call this "tuning" the system so that it works in a more optimized way.

"This presents an opportunity for researchers to test whether specific training and tuning on medical data can improve the diagnostic accuracy of chatbots based on large language models," the study's authors conclude.
