May 19, 2024

ChatGPT is making certain words surge in scientific studies

“Meticulously” is one of ChatGPT's favorite words. Image: imago/Watson

The scientific community is productive – it has to be, because career advancement and the awarding of research funding depend heavily on publishing output: “publish or perish!” No wonder the number of studies has been rising steadily for years. And no wonder more and more researchers are using AI tools like ChatGPT to write their research papers.

This also seems to be reflected in the language of these scholarly works. At least, that is what a discovery by Andrew Gray, a librarian at University College London, suggests: he analyzed more than five million studies published last year and found that certain words suddenly appeared much more frequently.

A conspicuous word inflation

Among them is the adverb “meticulously”, which saw a 137 percent increase, along with the corresponding adjective “meticulous” (59 percent). Adjectives such as “intricate” (117 percent) and “commendable” (83 percent) were also used far more often.

According to Gray's analysis, which has not yet been peer-reviewed, the word “intricate” appeared in about 109,000 studies in 2023; in previous years, the average was about 50,000, less than half that number. The adverb “meticulously” appeared in about 12,300 studies in 2022, but in more than 28,000 in 2023. “Commendable” rose from about 6,500 studies to nearly 12,000.
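The percentage jumps reported above can be sanity-checked against the raw counts. A minimal sketch, using the approximate figures quoted in the article (illustrative only; this is not Gray's methodology, and the function name is my own):

```python
# Illustrative sketch: percentage increase in the number of studies
# containing a word, computed from the approximate counts in the article.
# (Not Gray's actual code; percent_increase is a hypothetical helper.)

def percent_increase(recent: int, baseline: int) -> float:
    """Percentage increase of the recent count over the baseline count."""
    return (recent - baseline) / baseline * 100

# "intricate": ~109,000 studies in 2023 vs. a ~50,000 baseline
print(round(percent_increase(109_000, 50_000)))  # -> 118

# "meticulously": ~12,300 studies in 2022 vs. 28,000+ in 2023
print(round(percent_increase(28_000, 12_300)))   # -> 128

# "commendable": ~6,500 studies vs. nearly 12,000
print(round(percent_increase(12_000, 6_500)))    # -> 85
```

These roughly match the reported increases of 117, 137, and 83 percent; the deviations stem from the article's counts being rounded.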

ChatGPT likes to “delve”

Another word that ChatGPT and similar AI tools use excessively is “delve”. According to AI researcher Jeremy Nguyen of Swinburne University of Technology in Melbourne, Australia, the term now appears in more than 0.5 percent of medical studies, compared with less than 0.04 percent in the pre-ChatGPT era.

The official ChatGPT account on X has since responded to Nguyen's post.

Increased use of artificial intelligence tools

For Gray, there is only one explanation for this phenomenon, as he told the Spanish newspaper El País: the striking increase in these words is due to the fact that tens of thousands of researchers are now using ChatGPT or a similar AI tool to write their studies or to polish their wording.

El País cites two telling examples of studies in which an embarrassing editing slip exposed the use of a large language model (LLM) such as ChatGPT. A Chinese paper, published in February in a journal from the publisher Elsevier, begins with this introduction:

“Sure, here's a possible introduction to your topic: Lithium-metal batteries are a promising candidate for…”

The phrasing is typical of ChatGPT: the study's authors apparently asked the AI tool for an introduction and then mistakenly left its preamble in place. And a study published by Israeli scientists in March stands out with this passage:

“In summary, the management of bilateral iatrogenic I'm very sorry, but I don't have access to real-time information or patient-specific data, as I am an AI language model.”

The gray area of using artificial intelligence

ChatGPT was launched at the end of 2022 and triggered real hype around AI tools. According to Gray, at least 60,000 scientific papers, more than 1 percent of all studies analyzed in 2023, were likely written with the help of a large language model. Gray assumes that extreme cases, in which someone has an entire study written by ChatGPT, are rare. Typically, AI tools are probably used only to fix typos or to ease translation into English, the lingua franca of science.


However, there is a gray area in which ChatGPT's help is used more extensively without the results being checked. How large this gray area is cannot currently be determined, Gray said, because scientific journals do not require authors to be transparent about their use of ChatGPT.


Terms with positive connotations

As a Stanford University research team led by James Zou has shown, large language models tend to use certain words disproportionately often, mostly positive terms such as “commendable”, “meticulous”, “intricate”, “innovative”, and “versatile”.

These terms also pile up in the reports of the experts who evaluate studies in the peer-review process before publication. This was shown by an analysis of reviews submitted to two international conferences on artificial intelligence: the probability of the word “meticulous” appearing in them increased roughly 35-fold.

In peer reviews of studies in the renowned Nature journals, however, the Stanford researchers found no significant traces of ChatGPT or similar LLMs. The use of these AI tools also appears to go hand in hand with lower-quality reviews. Gray called this finding worrying: “If we know that using these tools to write reviews leads to lower-quality results, we need to think about how we use them to write studies and what that means.”

The vicious cycle of artificial intelligence tools

A year after ChatGPT's launch, one in three scientists said in a survey by the journal Nature that they use the tool to write studies. However, only a few scientific papers mention whether an AI tool was used. Gray sees the risk of a vicious circle if newer versions of ChatGPT are trained on scientific papers that were written by older versions of the same AI tool.

The vocabulary of the AI tools, in turn, rubs off on researchers: as Jeremy Nguyen recently noted, he now uses the word “delve” himself.


