Leaving Academia.edu: ‘AI’, Coercion, and Information Loss [**Update** : The following text was first published under the title ‘Leaving Academia.edu: “AI” and Information Loss’ on 19 September 2025 and updated on 26 September 2025. The updates are marked and appear in square brackets.]
1. Yesterday I deleted my Academia.edu account. This has a direct impact on my work here, and I would like to reflect on it in this post. But first things first: what happened? Back in May, I noticed that some people were leaving the platform because it was allegedly scraping all of its users’ content for ‘AI’ training. At the time, I couldn’t verify this. Not much later, however, more and more screenshots were shared of Academia.edu offering unsolicited, unconsented ‘AI’-generated podcasts of articles that users had uploaded to the platform. Just a few weeks ago, scholars started sharing tips on how to opt out of Academia.edu’s ‘AI’ features. Yesterday, the whole thing took on a new dimension.
2. Several screenshots circulated of the platform’s new Terms of Service (ToS), which state the following: ‘By creating an Account with Academia.edu, you grant us a worldwide, irrevocable, non-exclusive, transferable license, permission, and consent for Academia.edu to use your Member Content and your personal information (including, but not limited to, your name, voice, signature, photograph, likeness, city, institutional affiliations, citations, mentions, publications, and areas of interest) in any manner, including for the purpose of advertising, selling, or soliciting the use or purchase of Academia.edu’s Services.’ The same can be found under section 3.3 of Academia.edu’s Privacy Policy. That was the immediate reason why I, along with many others working in fields related to my own, deleted our accounts yesterday. What does this mean for our work?
3. Quite a few people, myself included, were not happy about having to delete their accounts. I have used the platform almost daily over the past ten years or so. Since I started using it, it has been a very useful repository for quickly accessing the work of colleagues, discovering new research, and exchanging messages and ideas. Almost all of the posts on this blog were available as PDFs on my profile, and I link to articles on the site countless times in the Bibliography. The platform has given my work on the Sinai diaspora manuscripts an important level of visibility, and people have repeatedly contacted me about uploads there. It has also taken me quite some time over the years to craft my profile. All that effort has now gone in the bin.
4. Back in May, I decided to migrate my content, including PDFs of the blog posts published here, from Academia.edu to Knowledge Commons. The migration involved so much effort that I only worked on it sporadically, and I hadn’t yet finished it. I will build something new on Knowledge Commons or elsewhere. But for now, at least in the medium term (and perhaps also in the long term), some information has become inaccessible or been lost completely (e.g. messages, or material from conferences that was no longer available elsewhere). This also applies to links in the Bibliography, which may no longer work because the accounts they point to no longer exist. And it applies to networks and contacts that would have enabled further exchange of information.
5. Of course, we were all aware that Academia.edu was never entirely kosher. It is a for-profit company, after all, and its business model from the start consisted of ‘harvesting free labor from academics’, as Justin Lehmiller pointed out six years ago.1 [**Update** : As early as 2017, authors such as Sarah E. Bond had called for the scholarly community to leave the platform.2 The main reasons were that it reinforces class politics in academia and is not truly open, both in terms of access and in terms of deciding what happens with your work and data.] Since then, we have seen a number of experiments. One example was the introduction of a ‘premium’ version, which sold users services that had previously been free, such as certain functions of its Analytics plug-in, deliberately capitalising on academic vanity. Another reason that led some to leave the platform a few years ago was _Academia Letters_ , a format that essentially functioned like a predatory journal, as Aleksi Aaltonen described.3 He also pointed out that so-called ‘junior’ scholars in particular could be susceptible to the promise of quick publications and citations, but that Academia.edu would not only fail to help them in this regard, it could potentially even harm their academic reputation. In short, the platform is about money, not the needs and values of academia. Still, one could think that the benefits outweigh the downsides. I’m sure that many will continue to view it that way. For me, however, the point has now been reached where I must admit that using the platform contradicts the purpose of research itself and leads to a clear ethical conflict.
6. Some colleagues have suggested replacing uploads with documents that refer to other platforms or explain why their work is no longer on Academia.edu. However, this is not just a question of access to research, but also of whether using platforms that coerce users into providing their research for ‘AI’ training in order to monetise it is compatible with research integrity. That is, whether such use contradicts the purpose and ethics of research itself. I think it does. Let me first say something about coercion. As noted above, the platform started offering ‘AI’-generated products without asking authors for consent. ‘AI’ training was enabled by default and had to be actively turned off. Now, when you create a new account, you agree to the datafication and reuse of personal characteristics (even your name, voice, or signature!). I’ve also seen many posts from people complaining that they can’t delete their account without agreeing to the new ToS.
7. None of this is entirely surprising. It fits in with how ‘AI’ is currently being forced into all kinds of academic work. There is the well-known framing that this is simply how new, progressive technologies work: adapt or get left behind. I have heard from colleagues from time to time that this development is simply unstoppable, and that we must therefore try to position ourselves within it somehow. But ‘AI’ is much more than just a technology – this, and the fact that, strictly speaking, there is no ‘artificial intelligence’ (beyond the marketing phrase), is why I refuse to write the term outside of quotation marks. It consists of megalomaniacal (non-digital) infrastructures, ideas, and ideologies. Not to acknowledge this would be intellectually dishonest. Its ideological side includes the notion of unstoppability, another term for which is ‘technological determinism’. Some things _seem_ inevitable, even though, or rather because, they are forced upon us. This is accompanied by a certain kind of pressure that is already beginning to change research practices.
8. To give just one example: As I wrote in one of my last posts (here), I am planning to apply for a larger research project on Sinai’s diaspora manuscripts. For this reason, a few months ago I attended a workshop organised by my university’s research funding department, which specifically addressed the requirements and procedures of the funding format I’m interested in. The workshop had around 30 participants from all sorts of fields. At some point, the question arose as to whether ChatGPT could be used when writing the application. The bottom line was that the funding body sees no problem with using ChatGPT and that it should be considered a ‘useful tool’. The questions asked in this context suggested that many felt under pressure. After all, many of us would be competing directly with each other for the same funding. So if some applicants use ChatGPT and gain a clear advantage as a result, I would be putting myself at a disadvantage by not using it. This pressure is perhaps even greater when the language in which the application must be submitted, German or English, is not one’s native language. This is not just about a competitive academic system. It is about the livelihoods of individuals and families (some participants were simultaneously looking after small children during the online workshop).
9. I will not use ChatGPT to write my application. But this is not simply a matter of being for or against it. It is about the pressure suddenly being put on researchers in all disciplines to use a so-called ‘tool’ whose suitability for the purposes it is used for has not been critically examined at all. The topic of my application also includes a historical perspective on colonialism, how it enabled the extraction of cultural objects from the Global South, and how this colonial legacy persists today in Western collection contexts. The ‘AI’ industry not only perpetuates a colonial legacy itself, but also fuels neo-colonial practices.4 By using ‘AI’, in particular generative ‘AI’, I would find myself in a conflict of interest. [**Update** : This is now being explicitly addressed by major publishers, e.g. in this statement by De Gruyter Brill, which outright rejects certain uses of generative ‘AI’.] Incidentally, there seemed to be no awareness of this at all at the workshop, even though the sustainability declaration to be made in the application and ChatGPT were discussed in quick succession.
10. I have now talked about coercion, pressure, and the ideology of inevitability that we are currently facing. But what the developments around Academia.edu also highlight is the danger of knowledge and information loss as a result of these phenomena. I have not listened to any of Academia.edu’s ‘AI’-generated podcasts, but I’m sure they have the same problems as other, similar ‘AI’-generated products: they are guided not by academic considerations and interests, but solely by statistical models. These are completely opaque, non-participatory, and biased – not only in the selection, but also in the reproduction of the data they use. This has nothing to do with academic research anymore. But it is what Academia.edu will do with our research. ‘AI’ is sold as progress, but it leads to a potential loss of knowledge and information by letting idiosyncratic machines remix existing information and knowledge. This is already causing uncertainty. As author Courtney Milan wrote yesterday on Bluesky: ‘The main function of AI is to make all text on the internet untrustworthy unless you personally know the source’.5 In other words, it is not (and is not meant to be) sustainable in terms of preserving knowledge. That is one reason why it contradicts the principles of research. And it should be reason enough not to buy into the current hype and not to participate in it.
11. At least we can now observe a growing awareness of these issues. Olivia Guest, Iris van Rooij, and 16 other co-authors recently published a position paper entitled ‘Against the Uncritical Adoption of “AI” Technologies in Academia’. They argue in great detail the ways in which the use of ‘AI’ is incompatible with research integrity. I hope that the paper will be widely read and that it will strengthen critical awareness among many colleagues. One final word: anyone who works in manuscript studies or philology knows that these are thoroughly digital disciplines. Not all of us work with specialised technologies that fall under the umbrella of ‘AI’, but many do. Accusations of technophobia and illiteracy in this field would therefore be completely inappropriate. It is precisely our technical literacy and deep historical awareness in dealing with technology that must make us critical. [**Update** : We must not perpetuate the narrative of inevitability, nor allow ourselves to be intimidated by coercion and technological determinism. For the protest is being heard and can bring about change. The wave of account deletions prompted Richard Price, the founder of Academia.edu, to leave a comment on _Daily Nous_ , where Justin Weinberg also wrote about the situation.6 According to Price, the controversial paragraph has now been removed from the new ToS.]
1. Justin Lehmiller, “Why I Deleted My Academia.edu Account and Why You Should, Too”, _Sex & Psychology_ (16 September 2019), URL: https://www.sexandpsychology.com/blog/2019/9/16/why-i-deleted-my-academia-edu-account-and-why-you-should-too/ [↩]
2. See Sarah Bond, “Dear Scholars, Delete Your Account at Academia.edu”, _Forbes_ (23 January 2017, updated 10 December 2021), URL: https://www.forbes.com/sites/drsarahbond/2017/01/23/dear-scholars-delete-your-account-at-academia-edu/. Bond also has a very useful collection of articles on the topic, going back at least to 2015, on her website: Sarah E. Bond, “#DeleteAcademiaEdu: The Argument for Non-Profit Repositories”, _Sarah E. Bond_ (24 January 2017), URL: https://sarahemilybond.com/2017/01/24/deleteacademiaedu-the-argument-for-non-profit-repositories/. [↩]
3. See Aleksi Aaltonen, “I Deleted My Academia.edu Account and You Should Too”, _Medium_ (2 April 2021), URL: https://aleksiaaltonen.medium.com/i-deleted-my-academia-edu-account-and-you-should-too-681acbda2d9a. [↩]
4. [**Update** : The following two books describe very well why the ‘AI’ industry is neo-colonial: Kate Crawford, _Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence_ (New Haven and London: Yale University Press, 2021); Ingo Dachwitz and Sven Hilbig, _Digitaler Kolonialismus: Wie Tech-Konzerne und Großmächte die Welt unter sich aufteilen_ (Munich: C.H. Beck, 2025).] [↩]
5. https://bsky.app/profile/courtneymilan.com/post/3lz4ixqxzpk24 [↩]
6. Justin Weinberg, “What Will Academia.edu Do with Its New Rights to Your Name, Likeness, and Voice? (update)”, _Daily Nous_ (22 September 2025), URL: https://dailynous.com/2025/09/22/what-will-academia-edu-do-with-its-new-rights-to-your-name-likeness-and-voice/#comment-480049. [↩]
* * *
OpenEdition suggests that you cite this post as follows:
Peter Tarras (September 19, 2025). Leaving Academia.edu: ‘AI’, Coercion, and Information Loss. _Membra Dispersa Sinaitica_. Retrieved September 29, 2025 from https://doi.org/10.58079/14ptq
* * *