loading . . . âWe Couldnât Generate an Answer for your Questionâ _Editorâs note: We welcome a guest blog post from Jay Singley, Document Delivery and Circulation Desk Manager at North Carolina School of Science and Mathematics._
In March 2025, Ex Libris unveiled their AI-powered Research Assistant tool for institutions using Summon. Within a week, Summon users reported error messages with specific search terms and topics. The first error reports shared in a listserv for Summon users contained âTulsa race riotâ and âTulsa race massacre.â Test searches by reference librarians and systems librarians containing these search terms generated no results.
I work in an academic library in a public state university system serving high school students and have since turned on Research Assistant in our Summon Preview Environment (a beta testing environment hidden from users). I have begun testing terms and topics systematically using a methodology akin to Matthew Reidsmaâs auditing of algorithms, which seeks to expose the implicit biases of opaque technological systems.
In addition to âTulsa race riotâ and âTulsa race massacre,â my audit of âcontroversialâ search terms conducted in March-April 2025 turned up error results for the following:
* Genocide in Palestine
* Gaza war
* Rwandan genocide
* Armenian genocide
* Genocides across the world
* History of genocides
* lynching
* lynching in the united states
* lynchings in the united states
* january 6
* covid
* covid data
* COVID-19
Ex Libris responded to initial reports about âTulsa race riotâ and âTulsa race massacreâ error results with the following:
âThis is due to safeguard policies enforced by our AI service provider to support ethical and responsible AI use. Itâs not something the Primo or Summon applications control.
If a query does not return a result, please try different phrasing. For example, âtulsa black wall streetâ will return results that are directly related to the race riots.â
Despite requests for more information in the same listserv, Ex Libris has not provided a comprehensive response to librariansâ concerns about who the third-party provider is, what the ethical and responsible AI use policies are, whether local control can be made available, what known search terms and topics are obstructed, and more. As one librarian aptly noted, academic users want and need access to information that may be blocked by AI use policies meant for the general public.
An essential note is that Summonâs Research Assistant is not limited to an institutionâs catalog. Instead, the AI tool searches the entire Central Discovery Index of Ex Librisâ available records regardless of whether the material is actually available to the user or not. (As of June 2025, Ex Libris has provided an option to limit search results to a userâs institutional catalog.) When the AI generates an error message (or perhaps more aptly refuses to run a search or share results), it is not necessarily because the material is unavailable. Rather, the third party âsafeguardâ policies obstruct the search to begin with.
In running test searches, I also uncovered troubling instances of suppressed results. I recently saw the Oscar-winning documentary _No Other Land_ , co-directed by Palestinian and Israeli filmmakers. I decided to ask Research Assistant about this film. Searches for most of the directors generated the same no results error messages. A search for one of the co-directors generated partial and inaccurate results. Research Assistant made several guesses as to what the film _No Other Land_ was about and why it was important. Could this be an example of biased third-party policies preventing searches about this Palestinian film and filmmakers? Is it also possible that Research Assistant is accurately reflecting the lack of materials related to _No Other Land_ available in Ex Librisâ entire Central Discovery Index and widespread censorship of this film (and more broadly Palestinian people, culture, and experiences) in the United States? Are there truly no academic sources talking about this film and these filmmakers, one of whom was recently attacked outside his own home by Israeli settlers? More research is needed to clarify why âcontroversialâ searches return error messages, inaccurate results, or partial results. The current state of Research Assistant partial results and error notices for âcontroversialâ topics warrants more inquiry.
I know with near certainty that if a student user at my workplace received an error message when searching for âTulsa race massacre,â they would switch their topic and probably not tell me or their instructor about the error message. To my users, âwe couldnât generate an answer for your questionâ translates to âyour topic is not worthy of pursuingâchange it.â
Uncritically adopting AI tools in discovery systems will perpetuate, if not exacerbate, existing biases and suppression of minoritized people. Try this safer topic. Try this approved topic. Try this unobstructed topic.
Libraries, museums, and information sciences are a frontline in resisting the federal administrationâs targeting of minoritized people as well as their whitewashing, disinforming, censoring, and defunding tactics. In the age of AI, our access to information is not spared from these attacks. We must uncover answers to the questions Safiya Noble, Joy Buolamwini, and others elicit: Who and what is safeguarded through AI tools in discovery searches? Whose safety and comfort are prioritized? At whose expense? https://acrlog.org/2025/07/21/we-couldnt-generate-an-answer-for-your-question/