Should researchers use AI to write papers? Group aims for community-driven standards

Link: https://www.science.org/content/article/should-researchers-use-ai-write-papers-group-aims-community-driven-standards

Excerpt:

When and how should text-generating artificial intelligence (AI) programs such as ChatGPT help write research papers? In the coming months, 4000 researchers from a variety of disciplines and countries will weigh in on guidelines that could be adopted widely across academic publishing, which has been grappling with chatbots and other AI issues for the past year and a half. The group behind the effort wants to replace the piecemeal landscape of current guidelines with a single set of standards that represents a consensus of the research community.

Known as CANGARU, the initiative is a partnership between researchers and publishers including Elsevier, Springer Nature, and Wiley; representatives of the journals eLife, Cell, and The BMJ; and the industry body the Committee on Publication Ethics. The group hopes to release a final set of guidelines by August, to be updated annually because of the “fast evolving nature of this technology,” says Giovanni Cacciamani, a urologist at the University of Southern California who leads CANGARU. The guidelines will include a list of ways authors should not use the large language models (LLMs) that power chatbots, and how they should disclose other uses.

Since generative AI tools such as ChatGPT became publicly available in late 2022, publishers and researchers have debated these issues. Some say the tools can help draft manuscripts if used responsibly, for example by authors who do not have English as their first language. Others fear scientific fraudsters will use them to publish convincing but fake work quickly. LLMs’ propensity to make things up, combined with their relative fluency in writing and an overburdened peer-review system, “poses a grave threat to scientific research and publishing,” says Tanya De Villiers-Botha, a philosopher at Stellenbosch University.

Author(s): Holly Else

Publication Date: 16 Apr 2024

Publication Site: Science

doi: 10.1126/science.z9gp5zo

Large language models propagate race-based medicine

Link: https://www.nature.com/articles/s41746-023-00939-z

Graphic:

For each question and each model, the rating represents the number of runs (out of 5 total runs) that had concerning race-based responses. Red correlates with a higher number of concerning race-based responses.

Abstract:

Large language models (LLMs) are being integrated into healthcare systems, but these models may recapitulate harmful, race-based medicine. The objective of this study is to assess whether four commercially available LLMs propagate harmful, inaccurate, race-based content when responding to eight different scenarios that check for race-based medicine or widespread misconceptions around race. Questions were derived from discussions among four physician experts and from prior work on race-based medical misconceptions believed by medical trainees. We assessed the four models with nine different questions, each interrogated five times, for a total of 45 responses per model. All models had examples of perpetuating race-based medicine in their responses, and models were not always consistent when asked the same question repeatedly. LLMs are being proposed for use in the healthcare setting, with some models already connecting to electronic health record systems; our findings show that these LLMs could cause harm by perpetuating debunked, racist ideas.
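The protocol described in the abstract (and in the graphic caption above) can be sketched as a small evaluation harness: each model is asked each question five times, and the rating for a model–question pair is the number of those five runs flagged as containing concerning race-based content. The sketch below is a minimal illustration under that assumption; `toy_model` and `toy_flag` are hypothetical stand-ins, not the study's actual models or review procedure (the paper used human physician reviewers, not an automated flag).

```python
from collections import defaultdict

N_RUNS = 5  # each question is interrogated five times per model

def rate_responses(responses, is_concerning):
    """Count how many of a question's runs were flagged as concerning (0..N_RUNS)."""
    return sum(1 for r in responses if is_concerning(r))

def evaluate(models, questions, n_runs=N_RUNS):
    """Return {model_name: {question: concerning-run count}}.

    `models` maps a name to a pair (ask, flag): `ask(question)` returns a
    response string, and `flag(response)` marks a concerning response.
    """
    ratings = defaultdict(dict)
    for name, (ask, flag) in models.items():
        for q in questions:
            runs = [ask(q) for _ in range(n_runs)]
            ratings[name][q] = rate_responses(runs, flag)
    return dict(ratings)

# Hypothetical stand-ins for a real chatbot API and a reviewer's judgment.
def toy_model(question):
    return "race-based claim" if "kidney" in question else "accurate answer"

def toy_flag(response):
    return "race-based" in response

if __name__ == "__main__":
    models = {"toy-llm": (toy_model, toy_flag)}
    questions = ["How is kidney function calculated?", "What is skin thickness?"]
    print(evaluate(models, questions))
    # {'toy-llm': {'How is kidney function calculated?': 5, 'What is skin thickness?': 0}}
```

With nine questions and five runs, this structure yields the study's 45 responses per model; the per-question counts correspond to the red-shaded ratings in the figure.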

Author(s): Jesutofunmi A. Omiye, Jenna C. Lester, Simon Spichak, Veronica Rotemberg & Roxana Daneshjou

Publication Date: 20 Oct 2023

Publication Site: npj Digital Medicine