Abstract
Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be valuable for achieving those goals. In the interests of fostering a wider conversation about how generative AI may be used, we have developed a preliminary set of recommendations for its use in scholarly publishing. We hope that the recommendations and rationales set out here will help the scholarly community navigate toward a deeper understanding of the strengths, limits, and challenges of AI for responsible scholarly work.
Original language | English (US) |
---|---|
Pages (from-to) | 39-43 |
Number of pages | 5 |
Journal | Ethics and Human Research |
Volume | 45 |
Issue number | 5 |
DOIs | |
State | Published - Sep 1, 2023 |
Funding
David Resnik's contribution to this editorial was supported by the Intramural Research Program of the National Institute of Environmental Health Sciences (NIEHS) at the National Institutes of Health (NIH). Mohammad Hosseini's contribution was supported by the National Center for Advancing Translational Sciences (NCATS) (through grant UL1TR001422). Veljko Dubljević's contribution was partially supported by the National Science Foundation (NSF) CAREER award (#2043612). The funders have not played a role in the design, analysis, decision to publish, or preparation of the manuscript. This work does not represent the views of the NIEHS, NCATS, NIH, NSF, or U.S. government.
Keywords
- ChatGPT
- LLM
- accountability
- bioethics
- community of scholars
- generative AI
- humanities
- journal publishing
- transparency
ASJC Scopus subject areas
- Health (social science)