Hat Tip - Law Technology Digest

It's the first hat tip of 2023 to Stephen Abram (and I know it won't be the last). I think one of the great side benefits of the current fascination with ChatGPT is the deeper dive we're getting into the transparency of AI. This paper by Catherine Gao, Frederick Howard, Nikolay Markov, Emma Dyer, Siddhi Ramesh, Yuan Luo and Alexander Pearson sets as its background, "Large language models such as ChatGPT can produce increasingly realistic text, with unknown information on the accuracy and integrity of using these models in scientific writing." The authors' conclusion? "ChatGPT writes believable scientific abstracts, though with completely generated data. These are original without any plagiarism detected but are often identifiable using an AI output detector and skeptical human reviewers. Abstract evaluation for journals and medical conferences must adapt policy and practice to maintain rigorous scientific standards; we suggest inclusion of AI output detectors in the editorial process and clear disclosure if these technologies are used. The boundaries of ethical and acceptable use of large language models to help scientific writing remain to be determined." Read more at infoDOCKET: New Preprint: "Comparing Scientific Abstracts Generated by ChatGPT to Original Abstracts Using an Artificial Intelligence Output Detector, Plagiarism Detector, and Blinded Human Reviewers"
