Schneider contributes to NISO Recommended Practice on retracted science

Jodi Schneider, Associate Professor

The National Information Standards Organization (NISO) has announced that its draft Communication of Retractions, Removals, and Expressions of Concern (CREC) Recommended Practice (NISO RP-45-202X) is now available for public comment. The Recommended Practice is the product of a working group made up of cross-industry stakeholders, including Associate Professor Jodi Schneider, that was formed in spring 2022. The Alfred P. Sloan Foundation provided funding for the Working Group as well as for the Reducing the Inadvertent Spread of Retracted Science (RISRS) project, which is led by Schneider and has informed Working Group deliberations and decisions.

Retracted publications are research outputs that have been withdrawn, removed, or otherwise marked as invalid in the scholarly record. Publications may be retracted for a number of reasons, but in all cases, correcting the record requires that these decisions be clearly communicated and broadly understood so that the research, whether retracted due to error, misconduct, or fraud, is not propagated. The goal of the NISO Recommended Practice is to detail how participants (publishers, aggregators, full-text hosts, libraries, and researchers) can ensure that retraction-related metadata is transmitted and usable by both humans and machines. Researchers who discover a publication can then readily identify the status of the research it reports.

"Developing a systematic cross-industry approach to ensure the public availability of consistent, standardized, interoperable and timely information about retractions was one of the recommendations of RISRS, and we could not be more delighted that CREC has been undertaken by the NISO Working Group," said Schneider.

NISO recently hosted a public webinar, which included Schneider and CREC Working Group co-chairs Caitlin Bakker and Rachael Lammey. The draft Recommended Practice is available for public comment through December 2.


Related News

Ocepek and Sanfilippo co-edit book on misinformation

Assistant Professor Melissa Ocepek and Assistant Professor Madelyn Rose Sanfilippo have co-edited a new book, Governing Misinformation in Everyday Knowledge Commons, which was recently published by Cambridge University Press. An open access edition of the book is available, thanks to support from the Governing Knowledge Commons Research Coordination Network (NSF 2017495). The new book explores the socio-technical realities of misinformation in a variety of online and offline everyday environments. 


Faculty receive support for AI-related projects from new pilot program

Associate Professor Yun Huang, Assistant Professor Jiaqi Ma, and Assistant Professor Haohan Wang have received computing resources from the National Artificial Intelligence Research Resource (NAIRR), a two-year pilot program led by the National Science Foundation in partnership with other federal agencies and nongovernmental partners. The goal of the pilot is to support AI-related research with particular emphasis on societal challenges. Last month, awardees presented their research at the NAIRR Pilot Annual Meeting.

iSchool participation in iConference 2025

The following iSchool faculty and students will participate in iConference 2025, which will be held virtually from March 11-14 and in person from March 18-22 in Bloomington, Indiana. The theme of this year's conference is "Living in an AI-gorithmic world."

Carboni joins the iSchool faculty

The iSchool is pleased to announce that Nicola Carboni has joined the faculty as an assistant professor. He previously served as a postdoctoral researcher and lecturer in digital humanities at the University of Geneva.


Youth-AI-Safety named a winning team in international hackathon

A team of researchers from the SALT (Social Computing Systems) Lab has been selected as a winner in an international hackathon hosted by the Berkeley Center for Responsible, Decentralized Intelligence. The LLM Agents MOOC Hackathon brought together over 3,000 students, researchers, and practitioners from 127 countries to build and showcase innovative work in large language model (LLM) agents, grow the AI agent community, and advance LLM agent technology.