
This AI image of a disproportionate rat penis was validated in a scientific study

A study was hastily retracted from a scientific journal two days after it was posted online. The problem? The authors' improper use of Midjourney: every image was AI-generated, with nonsensical captions and transparency shortcomings.

The study was entirely serious in intent. Accepted on December 28, 2023, it was published in the scientific journal Frontiers on February 14, 2024, after being examined by other scientific experts under the peer-review process. And yet: on February 16, the paper was retracted.

So what happened, in the space of two days, for the work of three Chinese researchers (Xinyu Guo, Dingjun Hao and Liang Dong) to be rejected? A methodological problem? Exaggerated results? Insufficiently representative samples? No: it was, quite literally, an image problem.

Midjourney to illustrate a scientific paper

It turns out the authors used artificial intelligence (AI) to generate all of the study's visuals. There are three of them, two of which stand out as collages of several images showing specific phases of what the study seeks to explain. The tool used to create these images is Midjourney.

The study, titled "Cellular functions of spermatogonial stem cells in relation to the JAK/STAT signaling pathway", is no longer available at Frontiers. Instead, two warning messages appear on the publisher's website.

Warnings on the article. // Source: Screenshot

However, a copy captured on February 15 by the Internet Archive can still be consulted. The use of AI was not hidden: it was clearly mentioned in the text ("Images in this article were created by Midjourney", reads a note just before the first image). On the other hand, this disclosure was not repeated in every caption.

It is unknown whether real images served as a basis for creating these scenes. Nor is it known whether the results were modified after generation with editing software such as Photoshop, for example to retouch one part or another. To a layman's eye, untrained in science or AI, these illustrations may look entirely credible.

None of this is to scale, of course. // Source: Screenshot

The type of prompt used — that is, the instruction given to produce the result — also remains a mystery, as does the number of attempts required to achieve these outputs. A certain skill in writing prompts can be assumed to arrive at such renderings, which, moreover, evidently made it past peer review.

Text that means nothing

However, closer observation reveals flaws in the images, especially in the text. Midjourney has historically struggled to produce meaningful writing in images, even with the latest version 6, although real progress has been made in recent months.

This may fool the general public, but some text makes no sense. // Source: Screenshot

Thus we find captions that make no sense, and that is an understatement: "iollotte sserotgomar cell", "retat", "dck", "disilced", "Tramioncation of zoepens", "Stats poflecation", etc. Some letters are malformed and have to be guessed at. Elsewhere, perfectly formed words mean nothing.

According to the notes at the bottom of the page, the study was edited by Arumugam Kumaresan, an Indian researcher working at the National Dairy Research Institute. It was received by Frontiers on November 17, prior to the alpha release of Midjourney v6.

The study was reviewed by Binsila B. Krishnan, an Indian researcher at the National Institute of Animal Nutrition and Physiology, and by Jingbo Dai of the American organization Northwestern Medicine. The problems with the captions were apparently not noticed, or not taken seriously.

As for the study's authors, Xinyu Guo and Dingjun Hao are in the Department of Spine Surgery at Hong Hui Hospital, which is attached to Xi'an Jiaotong University. Liang Dong is affiliated with the same hospital.

One has to concede a certain skill in producing such believable visuals. // Source: Screenshot

In an initial notice published on February 15, Frontiers said it was "aware of the problems (…). The article has been removed while an investigation is conducted and this notice will be updated accordingly after the conclusion of the investigation." On the 16th, it decided on an outright retraction.

"This article does not meet the standards of editorial and scientific rigor of Frontiers in Cell and Developmental Biology", the notice explains. It was specifically "concerns about the nature of the AI-generated figures" that led to this outcome, after part of the readership alerted the journal's editorial staff.

Rules on the use of AI in scientific publications

In its guidelines for authors, Frontiers lays out rules governing the use of AI technologies (ChatGPT, Jasper, Dall-E, Stable Diffusion, Midjourney, etc.) "to write and edit manuscripts" that scientists wish to publish. These include obligations of transparency, accuracy and anti-plagiarism.

Frontiers adds that the author is responsible for verifying the factual accuracy of any content created by generative AI. "Figures produced or edited using generative AI technology should be checked to ensure that they correctly reflect the data presented in the manuscript."

In such a case, "acknowledgment of this use should be made in the Acknowledgments section of the manuscript and in the Methods section if applicable. This explanation must list the name, version, model and source of the generative AI technology." Elements that were evidently not included on the page.

Finally, Frontiers also asks authors to provide all the prompts used, in order to establish how any artificial material appearing in a study was created. Here too, the study appears to ignore these requirements, unlike other, more transparent ones.

The study "Group trust dynamics during a risky driving experience in a Tesla Model", also published in Frontiers and peer reviewed, likewise uses AI images. But there, the disclosures are more numerous and clearer.

Each image is appropriately captioned, with a link to the official Midjourney guide. // Source: Screenshot


