Computer Science Seminar by Eamon Duede: Tail Novelty, Knowledge Collapse, and Useful Frictions in Science
Speaker: Eamon Duede, assistant professor of philosophy, Purdue University
Title: Tail Novelty, Knowledge Collapse, and Useful Frictions in Science
Abstract:
Research activity within academic science is steered and channeled through an interplay of incentives that derive from the norms and institutions that govern and support the research ecosystem. Large language models (LLMs) represent a major shock to this system, in that they dramatically reduce the time and effort associated with numerous aspects of scientific knowledge production. Some of these costs have previously served as useful frictions in shaping research and publication strategies. Given the rapid pace at which scientists are integrating LLMs into their workflows, scientific norms and institutions are unlikely to adapt in a timely fashion. The same balance of costs and rewards that pushed effort in productive directions prior to LLMs can encourage inefficient or even detrimental activity once LLMs (even hypothetical ones that work perfectly) become available. To illustrate, we focus on the task of writing up research results for publication. Such labor is all too often viewed by scientists themselves as a form of busywork that detracts from the serious matter of “doing science.” There are numerous valid epistemic reasons to object to that view, and to resist allowing LLMs to write or assist in writing scientific texts. But even setting these aside and imagining a future generation of LLMs that are as good as or better than humans at scholarly writing, the introduction of mechanical communication surrogates into the publication pipeline is likely to create problems. We present a simple graphical modeling framework that reveals how, by changing the cost of reporting scientific activity, LLMs alter not only which results are reported but also which research strategies investigators choose to pursue in the first place.
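The intuition behind the abstract's final claim can be made concrete with a toy cost-benefit sketch. This is not the speaker's model; all strategies, payoff values, and costs below are hypothetical, chosen only to show how lowering the cost of writing up results can flip which research strategy maximizes total payoff.

```python
# Toy illustration (not the speaker's model): how the cost of writing up
# results can change which research strategy a rational investigator picks.
# All numbers below are hypothetical.

def payoff(result_values, write_up_cost):
    """Total payoff when each result is written up only if doing so pays."""
    return sum(max(v - write_up_cost, 0) for v in result_values)

# Two hypothetical strategies:
#   "incremental": many small results, each of modest value
#   "ambitious":   one large, tail-novel result
strategies = {
    "incremental": [1] * 20,
    "ambitious": [10],
}

def best_strategy(write_up_cost):
    return max(strategies, key=lambda s: payoff(strategies[s], write_up_cost))

# With costly write-ups, the friction filters out low-value reporting and
# favors the ambitious strategy; with near-zero cost (e.g. LLM-drafted
# papers), flooding the record with incremental results wins instead.
print(best_strategy(2.0))  # ambitious
print(best_strategy(0.0))  # incremental
```

Under these made-up numbers, a write-up cost of 2 makes the twenty small results not worth reporting (payoff 0 vs. 8 for the ambitious result), while a cost of 0 makes the incremental strategy dominate (20 vs. 10): the friction itself shapes the choice of research strategy, not just what gets reported.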