**AI slop and the destruction of knowledge**

This week I was looking for info on what cognitive scientists mean when they speak of ‘domain-general’ cognition. I was curious, because the nuances are relevant for something I am researching at the moment. To my surprise and dismay, I hit upon this ScienceDirect page that ‘defined’ the concept as follows:

[Screenshot of the ScienceDirect ‘definition’, which ends with the label “AI-generated definition”]
I thought, Huh?! This is definitely _not_ how we — cognitive scientists — use that term. Then I saw the last sentence, “AI-generated definition”, and I realised what went wrong. This was AI slop. Not only that. It was AI slop on _ScienceDirect_, a “_premier platform_ for scientific, health and technical literature” (emphasis in original).
I contemplated the possibility that the linked paper used a very idiosyncratic definition and that the AI-generated text reflected that. But that was _not_ the case. The AI-generated definition was a complete fraud in every conceivable way. Of course, we should not be surprised. Large Language Models (LLMs) cannot do anything but generate plausible-sounding text with no concern for truth (Bender, Gebru et al., 2021; Bender and Koller, 2020).
> We must protect and cultivate the ecosystem of human knowledge. AI models can mimic the appearance of scholarly work, but they are (by construction) unconcerned with truth—the result is a torrential outpouring of unchecked but convincing-sounding “information”. At best, such output is accidentally true, but generally citationless, divorced from human reasoning and the web of scholarship that it steals from. At worst, it is confidently wrong. Both outcomes are dangerous to the ecosystem. — Guest, van Rooij et al. (2025)
Troubled by my observations, I decided to post here on Bluesky. Check out the full thread and the quote-posts, and you will see I found more such troubling AI-generated ‘definitions’, even embedded via hyperlinks in scientific articles on ScienceDirect.
Now if you are a _novice_ in cognitive science, you may be puzzled, not knowing what is so wrong with that ‘definition’ in the screenshot above. This is the crux. For _experts_ it is obvious!
Here is a selection of responses to my post from experts: “Yikes!”, “Sigh”, “Epistemicide” (from my colleague Olivia Guest), “Imagine this at scale, unchecked: how is this not a fundamental threat to science?”, “A concrete example of what I mean when I say AI cannot summarize.”, and so on.
I decided to email ScienceDirect via their contact form. Their website invites us to let them know when we believe we have “identified a harmful error in an Elsevier product”. So I did. You can read the full exchange in the **Appendix** below. I am curious what ScienceDirect will do with my feedback and will be monitoring the situation.
> It is irresponsible if ScienceDirect continues to have these AI features. The AI generated texts on ScienceDirect spread misinformation and pollute the scientific knowledge infrastructure. This harms science, researchers, lecturers, students, and ultimately also the public. If your platform wants to be a reliable and trustworthy source of information for scientists, students, and public, it will need to remove this harmful AI feature. (van Rooij, 2025, Appendix)
With the start of the new academic year in sight, I worry about our students and PhD candidates, and the whole academic endeavour.
What can we do as scientists and academic teachers to protect our work from AI slop? How can we expect our students and mentees to navigate AI slop if we do not take a clear stand against it as teachers, researchers, and academic institutions (but see Avraamidou, 2024; Guest, van Rooij et al., 2025; Monett & Paquet, 2025; Reynoldson, Dusseau et al., 2025; Shaw, 2025)? How can we prevent the destruction of knowledge, violations of scientific integrity (van Rooij, 2022; Dingemanse, 2024), scientific deskilling and displacement by AI technologies (Guest, 2025)?
Are you worried too?
Consider reading and signing these two open letters: stop the uncritical adoption of AI in academia and educators who refuse the call to adopt GenAI in education.
Thank you for reading. Share if you can.
— Iris van Rooij
> Academics should be a voice of reason; uphold values such as scientific integrity, critical reflection, and public responsibility. Especially in this moment in history, it is vital that we provide our students with the critical thinking skills that will allow them to recognise misleading claims made by tech companies and understand the limits and risks of hyped and harmful technology that is made mainstream at a dazzling speed and on a frightening scale. (van Rooij, 2023)
**References** (+ recommended readings)
* Avraamidou, L. (2024). Can we disrupt the momentum of the AI colonization of science education? _Journal of Research in Science Teaching, 61_(10), 2570–2574.
* Bender, E. M., & Koller, A. (2020). Climbing towards NLU: On meaning, form, and understanding in the age of data. In _Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics_ (pp. 5185–5198).
* Bender, E. M., Gebru, T., et al. (2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜. In _Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency_ (FAccT ’21).
* Dingemanse, M. (2024). Generative AI and Research Integrity. _Policy document._
* Guest, O. (2025). What does ‘human-centred AI’ mean? _arXiv._
* Guest, O., van Rooij et al. (2025). Stop the Uncritical Adoption of AI Technologies in Academia. _Open Letter_.
* Monett, D., & Paquet, G. (2025). Against the Commodification of Education—If harms then not AI. _Journal of Open, Distance, and Digital Education, 2_(1).
* Reynoldson, Dusseau, et al. (2025). Educators who refuse the call to adopt GenAI in education. _Open Letter_.
* Shaw, S. R. (2025). Why AI shouldn’t be the future of academia. _Psychology Today._
* Suarez, M., Müller, B. C. N., Guest, O., & van Rooij, I. (2025). Critical AI Literacy: Beyond hegemonic perspectives on sustainability. https://doi.org/10.5281/zenodo.15677840
* van Rooij, I. (2022). Against automated plagiarism. https://doi.org/10.5281/zenodo.15866638
* van Rooij, I. (2023). Stop feeding the hype and start resisting. https://doi.org/10.5281/zenodo.16608308
* van Rooij, I., & Guest, O. (2024). Don’t believe the hype: AGI is far from inevitable. _Press release, Radboud University._
**Acknowledgements**
Thanks to Olivia Guest for suggesting I share the communication. The featured image is a photo by János Szüdi on Unsplash.
**APPENDIX:** _Communication with ScienceDirect_
Below I append the communication with ScienceDirect, redacted only to a) maintain the anonymity of the persons answering me, and b) remove irrelevant text and images to improve readability.
______________________________
**From:** Iris van Rooij
**Date:** Wednesday, 6 August 2025
While reading a paper on ScienceDirect I noticed hyperlinks that lead to AI summaries.
I have several questions:
1) Have authors consented to these hyperlinks in their scientific articles?
2) If I were to publish my work with Elsevier, do I risk that hyperlinks to AI summaries will be added to my papers without my consent?
3) Is ScienceDirect aware that these AI ‘summaries’ are confabulations, spreading misinformation, and polluting scientific knowledge?
4) Can you please remove the AI feature from your website, and allow researchers and students unbiased access to the original scientific sources?
I look forward to your answers. Thank you in advance.
Kind regards,
Prof. Iris van Rooij
Department of Cognitive Science and Artificial Intelligence
Radboud University, The Netherlands
[email protected]
______________________________
**From:** NL Info [email protected]
**Date:** Thursday, 7 August 2025
Dear Iris van Rooij,
Thank you for contacting Elsevier Helpdesk. I appreciate your patience in waiting for a response. I understand that you have query regarding a feature in ScienceDirect. Rest assured that I’ll be glad to assist you.
To further assist you, the hyperlinks in the article as shown below leads you a topic page where it is designed to help you get up to speed with new topics in your field of research or area of study. The extracts provided on ScienceDirect Topics are **written by subject matter experts and are drawn from foundational and reference materials.** You may find the full information here: What is a topic page? – ScienceDirect Support Center
Example of hyperlinks:
Link to article: Nicotinamide mononucleotide (NMN) as an anti-aging health product – Promises and safety concerns – ScienceDirect
Now to address your questions, please see details below.
1) Have authors consented to these hyperlinks in their scientific articles?
* Yes, it is included on the signed agreement between the author and Elsevier.
2) If I were to publish my work with Elsevier, do I risk that hyperlinks to AI summaries will be added to my papers without my consent?
* Yes, because you will need to sign an agreement with Elsevier.
3) Is ScienceDirect aware that these AI ‘summaries’ are confabulations, spreading misinformation, and polluting scientific knowledge?
* Please note that the summaries created on the topic pages are not AI generated. The extracts provided on ScienceDirect Topics are **written by subject matter experts and are drawn from foundational and reference materials.** We only use machine learning (ML) and natural language processing (NLP) techniques to identify relevant topic-based information from Elsevier’s reference materials to help provide bite-sized and easily digestible introductions to a new subject.
4) Can you please remove the AI feature from your website, and allow researchers and students unbiased access to the original scientific sources?
* May I ask for your use-case or your reasoning for wanting to delete the topic pages in ScienceDirect?
Looking forward to your response. Thank you.
Kind regards,
[redacted]
Customer Service Representative
______________________________
**From:** Iris van Rooij
**Date:** Thursday, 7 August 2025
**To:** NL Info [email protected]
Dear [redacted],
Thank you for your answers.
I was referring to hyperlinks that are embedded in articles and that lead to short texts with “AI generated definition” at the bottom. See screenshot below for an example link in an article (this one: https://www.sciencedirect.com/science/article/pii/S0010027723002172). (Question: Did these authors really consent to this? And did they approve the AI-generated text that is linked to, or does ScienceDirect generate these AI texts without authors knowing? It is hard for me to believe that scientists would approve of this.)
I see that the problem also extends to the ‘topics’ ‘introductions’ in general. Thank you for raising this. See the screenshot at the bottom of this message for an example. Note that this ‘definition’ is not only different from the previous one but also completely wrong. This is absolutely not how cognitive scientists use that term, not even the authors of the linked article (even if the superficial form suggests it does). The latter demonstrates nicely the core flaw of LLMs, i.e., they cannot read and summarize, because they have no understanding or knowledge. It is just synthetic, stochastically regurgitated text. I am an expert in cognitive science and AI and I can recognize that the AI-generated texts on ScienceDirect pages are false and misleading, but non-experts cannot (including students).
It is unsurprising that the AI ‘definitions’ are wrong: it is scientifically known that LLMs by design cannot generate reliable, meaningful text. This is not something ScienceDirect can solve. This is a fundamental limit of these types of (statistical) AI systems. The only solution is to remove the feature entirely. Please refer to scientific work on this topic, for instance by NLP experts like Prof. Emily Bender (e.g., this paper of hers, but more here).
May I kindly ask you for corrected answers to my 4 questions? Your answers did not yet apply to what I was asking about.
Specifically, I wonder:
a. Do I understand your answer to 2) correctly? Authors cannot say ‘no’ to their work being used for AI training and AI generated texts? And, they also cannot say ‘no’ to hyperlinks in their own articles to such AI slop?
b. Is ScienceDirect aware that these AI-generated texts are AI slop? Or was ScienceDirect perhaps misinformed by people who make money off selling systems that create AI slop? If the latter is the case, I am sure that scientific domain experts (like myself, but also many others) are willing to advise ScienceDirect not to believe false claims about AI capabilities, and to be especially wary if such claims come from companies selling such technologies.
Finally, to answer your (corrected) question:
“May I ask for (…) your reasoning for wanting to delete [the AI feature that generates definitions or summaries] in ScienceDirect?”
My reasoning is the following:
It is irresponsible if ScienceDirect continues to have these AI features. The AI generated texts on ScienceDirect spread misinformation and pollute the scientific knowledge infrastructure. This harms science, researchers, lecturers, students, and ultimately also the public. If your platform wants to be a reliable and trustworthy source of information for scientists, students, and public, it will need to remove this harmful AI feature.
I hope ScienceDirect finds this feedback helpful and changes its practices.
Thank you for your time and answering my questions.
Kind regards,
Iris
______________________________
**From:** NL Info [email protected]
**Date:** Thursday, 7 August 2025
**To:** Iris van Rooij
Dear Iris van Rooij,
I hope you are well and safe. To further assist you, allow me to forward this to our 2nd line support team for further assistance.
Thank you for your time and patience.
Kind regards,
[redacted]
Customer Service Representative
______________________________
**From:** NL Info [email protected]
**Date:** Thursday, 7 August 2025
**To:** Iris Van Rooij
Dear Prof Iris Van Rooij,
Thank you for your patience, I am [redacted] one of the customer experience champions for ScienceDirect. We appreciate you sending this feedback to improve our product.
Per your query below, I would like to apologize and clarify that the definition on the topic page was indeed created by LLM and not written by an expert. Hence, to your point some definitions may not be in line with other experts view.
I have now taken this feedback and have forwarded to our developers to review and remove the definition (if applicable) or the topic page itself to avoid confusion.
To answer your 4 questions.
1) Have authors consented to these hyperlinks in their scientific articles?
– The topic page is a functionality of ScienceDirect, to improve user experience, as rights of papers are owned by the publishers hence, there is no consent needed from authors.
2) If I were to publish my work with Elsevier, do I risk that hyperlinks to AI summaries will be added to my papers without my consent?
-This can be communicated with one of our journal managers as there is an agreement between the publisher and author. For more information about publishing you may reach out to https://service.elsevier.com/app/home/supporthub/publishing/
3) Is ScienceDirect aware that these AI ‘summaries’ are confabulations, spreading misinformation, and polluting scientific knowledge?
-ScienceDirect values user experience, hence we develop ways of improving our product. If there is a valid argument that misinformation is being spread, we are more than happy to review and take necessary actions.
4) Can you please remove the AI feature from your website, and allow researchers and students unbiased access to the original scientific sources?
-I have taken this feedback and have forwarded to our developers the disadvantages of AI feature as stated in your previous reply.
I hope this helps, Should you require further assistance please feel free to contact me.
Kind regards,
[redacted]
Customer Experience Champion