Do You Have a Conflict of Interest? This Robotic Assistant May Find It First - The New York Times

What should science do about conflicts of interest? When one is identified, it becomes an obstacle to objectivity, a cornerstone of academia and research, and the truth behind what scientists report is called into question.

Sometimes a conflict of interest is clear cut. Researchers who fail to disclose a funding source with a business interest in the outcome undermine the legitimacy of their findings. And when an author of a paper has worked extensively on other research with an editor of a journal, the conflict of interest can look glaringly obvious. (Such a case led one journal to retract two papers in 2017.)

But other cases are more subtle, and such conflicts can slip through the cracks, especially because the papers in many journals are edited by small teams and peer-reviewed by volunteer scientists who perform the task as a service to their discipline. And scholarly literature is growing fast: The number of studies published annually has increased by about 3 percent each year for the last two centuries, and many papers in the past two decades have been released in pay-to-publish, open-access journals, some of which print manuscripts as long as the science is solid, even if it’s not novel or flashy.

With such problems in mind, one publisher of open-access journals is providing an assistant to help its editors spot undisclosed conflicts before papers are released. But it’s not a human. Software named the Artificial Intelligence Review Assistant, or AIRA, checks for potential conflicts of interest by flagging whether the authors of a manuscript, the editors handling it or the peer reviewers refereeing it have been co-authors on papers in the past.
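
Frontiers has not published AIRA’s internals, but the co-authorship check it describes can be pictured with a minimal sketch like the one below. The names and the `coauthors` mapping are hypothetical; a real system would build that history from a bibliographic database.

```python
def flag_coauthor_conflicts(authors, editors, reviewers, coauthors):
    """Flag author/editor and author/reviewer pairs with a shared
    publication history. `coauthors` maps each researcher to the set
    of people they have previously published with."""
    flags = []
    for author in authors:
        for gatekeeper in list(editors) + list(reviewers):
            if gatekeeper in coauthors.get(author, set()):
                flags.append((author, gatekeeper))
    return flags

# A flagged pair is routed to a human editor for review; it is not an
# automatic rejection.
history = {"A. Author": {"R. Reviewer", "C. Colleague"}}
print(flag_coauthor_conflicts(["A. Author"], ["E. Editor"],
                              ["R. Reviewer"], history))
# [('A. Author', 'R. Reviewer')]
```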

The publisher, Frontiers, which is based in Switzerland, rolled out the software in May to external editors working for its dozens of journals. The software also checks for other problems, such as whether a paper is about a controversial topic and requires special attention or if its language is clear and of high enough quality for publication.

The tool cannot detect all forms of conflict of interest, such as undisclosed funding sources or affiliations. But it aims to add a guard rail against situations where authors, editors and peer reviewers fail to self-police their prior interactions.

“AIRA is designed to direct the attention of human experts to potential issues in manuscripts,” said Kamila Markram, a co-founder and the chief executive officer of Frontiers. “In some cases, AIRA may raise flags unnecessarily or potentially miss an issue that will then be identified at later stages in the review process by a human.”

Still, “it looks promising,” said Michèle B. Nuijten, an assistant professor at Tilburg University in the Netherlands who has studied questionable research practices.

Dr. Nuijten helped create statcheck, an algorithm that flags statistical errors in psychology papers by recalculating the reported p-values, a commonly used but frequently criticized measure of statistical significance. She said that it was a good idea to have standardized initial quality checks in place, and that automation had a role to play.

“Peer reviewers cannot pick up every mistake in scientific papers, so I think we need to look for different solutions that can help us in increasing the quality and robustness of scientific studies,” she said. “A.I. could definitely play a role in that.”
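
statcheck itself is distributed as an R package; the core of its check, recomputing a p-value from a reported test statistic and its degrees of freedom and flagging mismatches, can be sketched in Python for the t-test case. The regular expression and tolerance below are simplifications for illustration, not statcheck’s actual rules.

```python
import re
from scipy import stats

def check_t_reports(text, tol=0.005):
    """Scan text for APA-style t-test reports such as
    't(28) = 2.20, p = .036', recompute the two-sided p-value,
    and yield any report that disagrees with the recomputation."""
    pattern = r"t\((\d+)\)\s*=\s*(-?[\d.]+),\s*p\s*=\s*(\.?[\d.]+)"
    for df, t_val, p_reported in re.findall(pattern, text):
        p_recomputed = 2 * stats.t.sf(abs(float(t_val)), int(df))
        if abs(p_recomputed - float(p_reported)) > tol:
            yield (t_val, df, p_reported, round(p_recomputed, 4))

# t(28) = 2.20 implies p ≈ .036, so a reported p of .01 is flagged
# for a human to double-check.
print(list(check_t_reports("t(28) = 2.20, p = .01")))
```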

Renee Hoch, manager of the publication ethics team at the Public Library of Science, or PLOS, which like Frontiers is an open-access publisher, said her organization also used software tools to detect potential conflicts of interest between authors and editors, but not reviewers. Instead, referees are asked to self-report problems, and action is taken on a case-by-case basis.

Dr. Hoch, however, said that an A.I. tool like AIRA that highlights a reviewer’s potential conflicts would be useful in relieving some of the burden associated with manually conducting these checks.

Springer Nature, the world’s second-biggest scholarly publisher, is also developing A.I. tools and services to inform peer review, said Henning Schoenenberger, the company’s director of product data and metadata management.

Despite the rise of A.I. tools like statcheck and AIRA, Dr. Nuijten emphasized the importance of the human role, and said she worried about what would happen if technology led to the rejection of a paper “out of hand without really checking what’s going on.”

Jonathan D. Wren, a bioinformatician at the Oklahoma Medical Research Foundation, echoed that sentiment, adding that just because two researchers had previously been co-authors on a paper didn’t necessarily mean they couldn’t judge each other’s work objectively. The question, he said, is this: “What kind of benefits would they have for not giving an objective peer review today — would they stand to gain in any sort of way?”

That’s harder to answer using an algorithm.

“There’s no real solution,” said Kaleem Siddiqi, a computer scientist at McGill University in Montreal and the field chief editor of a Frontiers journal on computer science. Conflicts of interest can be subjective and difficult to uncover. And researchers who have crossed paths frequently may be the best suited to judge each other’s work, especially in smaller fields.

Dr. Wren, who is also developing software for screening manuscripts, said A.I. might be most useful for more mundane and systematic work, such as checking whether papers contain ethical approval statements.
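
A screen like the one Dr. Wren describes is easy to picture: search each manuscript for approval language and flag its absence for follow-up. The phrase list in this toy version is illustrative, not drawn from any actual tool.

```python
import re

# Phrases that commonly signal an ethics-approval statement.
ETHICS_PATTERNS = re.compile(
    r"approved by|institutional review board|IRB|ethics committee"
    r"|informed consent",
    re.IGNORECASE,
)

def missing_ethics_statement(manuscript_text):
    """Return True when no recognizable approval language appears,
    so an editor can ask the authors about it."""
    return ETHICS_PATTERNS.search(manuscript_text) is None

print(missing_ethics_statement("Methods: We surveyed 40 adults."))  # True
```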

S. Scott Graham, who studies rhetoric and writing at the University of Texas at Austin, agreed. He developed an algorithm that mines conflicts of interest statements mentioned in manuscripts to determine whether journals receiving advertising revenue from pharmaceutical companies have a bias toward publishing pro-industry articles.
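
Dr. Graham’s tool works on real disclosure text; a stripped-down sketch of the mining step might classify each statement by whether it names a drug maker, then compare the industry-linked share of papers across journals. The company list and data layout here are assumptions for illustration.

```python
PHARMA_NAMES = {"pfizer", "novartis", "roche", "merck", "astrazeneca"}

def industry_linked(coi_statement):
    """True if the disclosure names a known pharmaceutical firm."""
    text = coi_statement.lower()
    return any(name in text for name in PHARMA_NAMES)

def industry_share(papers):
    """Fraction of papers whose 'coi' field discloses an industry tie."""
    if not papers:
        return 0.0
    return sum(industry_linked(p["coi"]) for p in papers) / len(papers)

sample = [{"coi": "J.D. received consulting fees from Pfizer."},
          {"coi": "The authors declare no competing interests."}]
print(industry_share(sample))  # 0.5
```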

He noted, however, that his tool was highly dependent on two things: that authors would initially declare their conflicts of interest and that journals would publish such disclosures — neither of which is guaranteed in cases where malice is intended.

“The limitation of any A.I. system is the available data,” Dr. Graham said.

“As long as these systems are being used to support editorial and peer review decision making, I think that there’s a lot of promise here,” he added. “But when the systems start making decisions, I start to be a little more concerned.”
