#ECIL The dimensions of AI literacy
I'm at the ECIL2025 conference in beautiful Bamberg where I (and Pam McKinney) will be liveblogging (and the wifi works!). In the conference introduction Serap Kurbanoğlu, Sonja Špiranec and Joumana Boustany welcomed us, including a reflection on how the information world and information literacy have developed since the first conference in 2013. The local chair Fabian Franke also welcomed us to the host institution, the University of Bamberg, and to the city.
The first keynote speaker is my colleague from the School of Information, Journalism & Communication, University of Sheffield, UK, Dr Andrew Cox, speaking on The dimensions of AI Literacy. Last month CILIP published this new report by Cox looking at librarians and AI, and this year this book was also published. I give my usual caveat that this is my live perception of what was said.
He started by identifying that people characterise AI in a lot of different ways, from spy to magician. Black box, echo chamber, funhouse mirror and map of society are four critical metaphors that are used. People also say things like "it's just a tool", although Cox felt it isn't just a tool (as a tool may have serious affordances). AI is also difficult to visualise except as a robot, and the tabloid media catastrophises it (bringing in popular distrust of scientists). It can also be seen as a buried infrastructure.
When it comes to feelings, people tend to be confused and ambivalent. Governments and organisations, however, have developed AI strategies and given it importance.
AI has proliferated in the library and information environment, in particular generative AI. A recent YouGov survey (in the UK) asked people what they trust AI to do, and finding information and doing simple maths came top. Surveys have also shown that young people use AI chatbots at least weekly, and by 2025 only 12% of students said they weren't using gen AI at all. There is, though, evidence of awareness that gen AI may give false or biased results, and 20% thought using AI was unfair to those who weren't using it.
An important question is how information behaviour is changing because of gen AI. Anecdotally, people are preferring AI chatbots to search engines and relying on AI overviews, and librarians are getting requests for hallucinated items; this could lead to a fall in use of library resources and in enquiries.
Turning to the question of AI Literacy: whilst governments say that AI should be "transparent", there are barriers (such as the speed of change, AI sinking into infrastructure, and the obscurity of how AI is trained). Therefore AI Literacy is definitely needed.
There are dilemmas in using gen AI. Firstly, the information dilemma: AI is both wonderful and dreadful at searching. It may translate, summarise and adapt to your prompts, all very quickly, and can help with everyday tasks. To illustrate the problem, Cox showed how he uploaded a picture of a dog, saying "this is a cat", and the AI sycophantically agreed that it was. There is also a lack of reproducibility (illustrated by multiple evaluations of the same poem). Cox also noted how the instructions to Copilot told it to always be warm and cheerful. Associated are information quality issues (including bias, lack of diversity, issues of privacy, and lack of knowledge of any resources outside its training data). Cox mentioned how AI constantly prompts you to avoid the "boring" task of close and extended reading.
Secondly, AI is both good and terrible at supporting learning. On the positive side, it can engage in dialogue and personalise learning, and it can support international students and people with disabilities. Potentially damaging aspects include "cognitive offloading": loss of critical thinking, loss of skills, and discouraging people from engaging with other people (peers, teachers) in order to learn. So there needs to be a conversation about these "superseded" skills, which may actually be important rather than "boring" and avoidable.
Thirdly there are the social aspects. Benefits, for example in diagnosing illness, have been identified. There are, too, arguments that AI "if managed effectively" can have a positive effect on climate (e.g. managing responses, forecasting improvements). On the other hand, the adverse impact of AI is debated, and Cox presented some evidence, e.g. that producing visuals has more impact than producing text. A key issue is that the AI companies do not release enough information to be able to judge impact. Cox felt that the wider "infrastructural harms" seem to be of more concern than individual prompts: for example, there is the severe environmental impact of mining rare materials. Cox showed a map, "Cartography of generative AI", showing the various processes and impacts. He also mentioned Chilean activists protesting about water rights: this brings up the point that these exploitation issues are not new, but are associated with other technologies too.
So, the professional dilemma: how can we deal with gen AI in a balanced way? AI literacy models have proliferated (Cox presented his own very briefly). There is a European Commission/OECD framework for primary and secondary education, Empowering learners for the age of AI (see e.g. here). Cox pointed out some of its strengths and weaknesses. He identified the UNESCO frameworks as more critical. Cox also mentioned the Good Things Foundation's material. "Critical thinking" is often mentioned in these frameworks: we need to think about what "critical" means. There is a gap between the models and how/what you teach in the classroom.
Cox finished by summing up some of his main points, including the need to integrate AI literacy with promotion of wider digital skills, and the need to speak up about information issues (whilst accepting that AI does have some benefits).
Questions raised at the end of the talk included the trend of people becoming "checkers" rather than "creators", and whether we would be better off elaborating existing frameworks (e.g. of Information Literacy) rather than creating new ones.
Photo by Sheila Webber: beautiful Bamberg, September 2025.