ChatGPT listed as author on research papers: many scientists disapprove

ChatGPT, the chatbot that has taken the world by storm, has made its formal debut in the scientific literature, racking up at least four authorship credits on published papers and preprints.

Journal editors, researchers and publishers are now debating the place of such AI tools in the published literature, and whether it is appropriate to cite the bot as an author. Publishers are racing to create policies for the chatbot, which was released as a free-to-use tool in November 2022 by tech company OpenAI in San Francisco, California.

ChatGPT has recently been named as an author on several scientific articles. Many scientists object, arguing that crediting a language model as an author minimizes the contributions of the human researchers involved and risks devaluing scientific research and the labour that goes into it.

Because language models cannot comprehend the research or contribute meaningfully to it, several experts also question the validity of articles that list ChatGPT as an author. They contend that the practice could erode the field's credibility, and that researchers must be transparent about how language models were used in their work.

The debate underscores the importance of understanding both the strengths and the limitations of language models in scientific research, as well as the ethical choices involved in using them. It also highlights the need for clear standards and guidelines governing the use of language models in academic work.
