Artificial intelligence and library technology

NOTE: Product descriptions are not necessarily unbiased; they are taken from a mixture of vendor, library and other websites. If you would like to contribute to HELibTech, please send us your content via our content submission form. Alternatively, feel free to contact Paul Verlander (HELibTech Community Editor) to discuss further.

This section covers AI capabilities that form part of library technology products, such as library systems.

Discovery

Clarivate/Ex Libris  

Primo Research Assistant

“The Primo Research Assistant offers a new way of searching by combining the power of generative AI with intuitive conversational discovery. Say goodbye to clunky keyword searches and Boolean queries! Now, users can engage in natural language conversations with our Primo Research Assistant to uncover trusted library materials.

Based on a RAG (Retrieval Augmented Generation) architecture and grounded in the extensive Ex Libris CDI (Central Discovery Index), Primo Research Assistant provides a starting point to users for their research, including source referencing and appropriate attribution for the authors. Searching with the Primo Research Assistant surfaces an overview based on information from library sources. Those sources… making it easy for users to click and continue their research and learning journey, explore more results, refine their original query, or pursue a suggested related research question.”

TDNet

TDNet AI

“TDNet AI is an intuitive and intelligent search tool powered by Generative AI that delivers insights with unprecedented efficiency. TDNet AI draws on content included in TDNet Index, a very comprehensive index featuring over 550 million scholarly information resources from a variety of reputable publishers. By harnessing the power of Natural Language Processing and Artificial Intelligence, TDNet AI allows users to input search queries in everyday language without worrying about specific keywords or Boolean operators. Within the TDNet Discover search box, simply type your query naturally. TDNet AI then identifies relevant results from the TDNet Index, understanding the intent and meaning behind your search. Additionally, TDNet AI synthesizes key findings from the top search results into concise summaries, complete with references and links to relevant articles. This enhances the depth and speed of information retrieval to an unprecedented level”.

EBSCO

AI Insights

AI Insights has been available in EBSCOhost and EBSCO Discovery Service (EDS) since February 2025. It is a summarization feature available in search results: when the AI Insights button is clicked on a specific article, a prompt is sent to a Large Language Model (LLM), directing it to summarize the full-text article into 2-5 relevant insights. EBSCO grounds the AI response on the full text (with publisher permission) to reduce hallucinations. No AI training is done on the full-text article, and the user's query is not used in AI Insights.

AI Data Use: AI Insights does not use user data. The data provided to the AI is a prompt created by EBSCO; at the moment the user clicks the AI Insights button, that prompt directs the AI to summarize 2-5 insights from the specific article. No user information is stored during this process. The AI response is grounded on the full text during Retrieval Augmented Generation (RAG) to improve quality and reduce hallucinations.

Model Selection: Based on internal testing, Anthropic’s Claude Sonnet was selected as providing the most accurate scholarly prose, the most detailed responses, the highest quality, the lowest latency, and the best security guardrail options at a favourable cost. EBSCO built the AI Insights feature on Bedrock, an AWS platform for developing AI features, because Bedrock has robust quality, security, privacy, and environmental safeguards.

Quality Assessments: AI Insights uses a three-step quality assessment process: first, internal Subject Matter Expert (SME) librarians and clinicians review a sample of AI Insights; this is followed by beta and customer reviews, and then by end-user feedback reviews. These are done on a rolling basis to check for model degradation and identify model improvements.

EBSCO uses RAG-based AI to increase quality and reduce hallucinations, and also implements guardrails to limit profanity, slander, and unprofessional language in AI-generated text.
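The flow described above — a fixed, vendor-authored prompt grounded on an article's full text, with no user query involved — can be sketched as follows. This is an illustrative reconstruction, not EBSCO's actual implementation: the function name and prompt wording are invented, and the LLM call itself is omitted.

```python
def build_insights_prompt(full_text: str, min_insights: int = 2,
                          max_insights: int = 5) -> str:
    """Build a grounded summarization prompt (hypothetical reconstruction).

    The article's full text is embedded directly in the prompt -- the
    "grounding" step of RAG -- so the model summarizes only that text
    rather than drawing on its training data. Note that no user query
    appears anywhere in the prompt.
    """
    return (
        f"Summarize the following article into {min_insights} to "
        f"{max_insights} key insights. Base every insight strictly on "
        "the text provided; do not add outside information.\n\n"
        f"ARTICLE:\n{full_text}"
    )

prompt = build_insights_prompt("Open access correlates with higher citation rates.")
print(prompt)
```

The prompt string would then be sent to the hosted model; grounding works because the model is instructed to answer only from the supplied text.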

Metadata

Clarivate/Ex Libris

The AI Metadata Assistant

“The Alma AI Metadata Assistant streamlines cataloging by suggesting metadata for bibliographic records, which catalogers can review, refine, or dismiss. Using AI-driven insights from Large Language Models (LLMs) and vision-based tools, it makes cataloging faster and more accurate, allowing experts to focus on higher-level tasks. The AI Metadata Generator enriches bibliographic records in the Alma Community Zone. Currently enriching ProQuest™ Ebook Central titles, it aims to improve 100,000 records by the end of 2024, saving time and reducing costs for content providers while ensuring high-quality metadata for libraries.” Harnessing Academic AI – Insights from Clarivate

The AI Metadata Assistant in the Metadata Editor

“What is the AI Metadata Assistant? The AI Metadata Assistant uses a Large Language Model generative AI to process information about a library resource and suggest relevant metadata to the cataloger to help make the cataloging process quicker and more efficient. The cataloger can then review the suggested data and accept, correct or dismiss it, as well as add more complex, expert metadata and library-specific metadata. The AI Metadata Assistant can process images of a library resource along with other provided information, extract the text and meaning, and return it structured according to cataloging standards. It can be used for creating new bibliographic records, as well as enriching existing brief records. Phase I of Alma’s AI Metadata Assistant supports creating and enriching MARC 21 records in the English language – more cataloging and resource languages and formats will be added in future phases, as we work with the community to evaluate the AI’s capabilities and quality of metadata. The subjects provided are validated against Library of Congress vocabularies, with plans to increase the selection of authority vocabularies in future phases.”

Specto, a digital collection platform

“Helps institutions manage digital objects, including images and texts. It integrates AI to simplify metadata creation and automate classification, improving discovery and cataloging efficiency.” Harnessing Academic AI – Insights from Clarivate

OCLC

WorldCat

From: Implementing AI to further scale and accelerate WorldCat de-duplication. OCLC announcement 04 February 2025. https://www.oclc.org/en/news/announcements/2025/ai-worldcat-deduplication.html

“In August 2023, we implemented our first machine learning model for detecting duplicate bibliographic records as part of our ongoing efforts to mitigate and reduce their presence in WorldCat. In the lead up to this, we had invited the cataloging community to participate in data labeling exercises, from which we received feedback from over 300 users on approximately 34,000 duplicates to help validate our model’s understanding of duplicate records in WorldCat. This initiative led to the removal of ~5.4 million duplicates from WorldCat for printed book materials in English and other languages like French, German, Italian, and Spanish. We’ve now enhanced and extended our AI model to de-duplicate all formats, languages, and scripts in WorldCat. Leveraging the labeled data collected from community participation, we’ve tuned and optimized the AI machine learning algorithm, completed extensive internal testing, and engaged WorldCat Member Merge libraries to provide external verification of the algorithm’s performance.”

Research support, literature discovery and writing content for students and researchers

ChatGPT 

“ChatGPT is a generative artificial intelligence chatbot developed by OpenAI and launched in 2022. It is currently based on the GPT-4o large language model (LLM). ChatGPT can generate human-like conversational responses and enables users to refine and steer a conversation towards a desired length, format, style, level of detail, and language. It is credited with accelerating the AI boom, which has led to ongoing rapid investment in and public attention to the field of artificial intelligence (AI)” (Wikipedia: https://en.wikipedia.org/wiki/ChatGPT)

Consensus

(Information from UCL LibGuide) https://library-guides.ucl.ac.uk/generative-ai/genai-search-tools#s-lg-box-wrapper-19525449

“Consensus is an AI/LLM-powered academic search engine for scientific literature. It sources data from the Semantic Scholar dataset (which includes 200 million peer-reviewed documents across all domains of science). It analyses the most relevant papers and generates a summary of key findings. It also includes an integrated ChatGPT-style assistant, Consensus Copilot, which will answer questions, draft content, create lists, and more.”

Elicit

“Analyze research papers at superhuman speed. Automate time-consuming research tasks like summarizing papers, extracting data, and synthesizing your findings.”

(From University of Arizona website)  https://libguides.library.arizona.edu/ai-researchers/elicit

“Elicit, developed by Ought, is an AI tool for finding ‘seed articles’ and mining for keywords/subject headings. When you enter a question, it returns alternate questions that can lead to further “seed” articles.”

Enago Read

“Simplifies literature reviews by delivering smart AI-driven summaries, key insights, real-time discovery, and a copilot that empowers you to master the literature with superhuman speed.”

Grammarly

“Responsible AI that ensures your writing and reputation shine. Work with an AI writing partner that helps you find the words you need⁠—⁠to write that tricky email, to get your point across, to keep your work moving.”

Keenious

(Information from UCL LibGuide) https://library-guides.ucl.ac.uk/generative-ai/genai-search-tools#s-lg-box-wrapper-19525449

“Keenious is a tool that analyses the content of a document using large language models and generates a list of relevant documents. It can be embedded in Microsoft Word or Google Documents, or used with an uploaded file. The results are weighted by perceived relevance and similarity. Suggested papers can then be filtered by age, citation counts, or open access status, and the list can be exported for analysis or for import into a citation management tool. This can be used to find similar research to a paper you are reading, or to help identify potential works connected to your own ongoing research.”

OpenRead

“AI search: Gain valuable insights from a vast repository of over 300 million papers spanning nearly every academic discipline or tap into trillions of web sources for comprehensive research.”

PaperDigest

“Based in New York City, Paper Digest has been dedicated to helping people use the least time to stay current with the latest tech trends, generate tech contents & reason over tech data. Since 2018, millions of users from thousands of universities, companies and government agencies have been using our services to read, write, get answers and more.

No Hallucinations: For research, every word counts. To avoid generating unjustified results, we offer a “unique” AI literature review generator that does not rely on any large language models (including our own), as their tendency to produce unjustified results is surprisingly high. This solution offers a lot of literature review examples and gives citations for every sentence it generates.

Up-to-Date Data: Paper Digest platform builds on an industry-scale technology knowledge graph with real-time updates from hundreds of sources. All new papers are sorted by potential impact. Subscribers will receive daily updates on latest hot papers in their areas.

Under Your Control: Paper Digest covers papers, as well as patents, grants, clinical trials, software, venues, and domain experts. Users can precisely control what functions (e.g. literature review, research copilot, academic reading, academic writing, etc.), languages and sources (research areas, document genres, venues, time ranges, authors, etc.) to use.”

ResearchRabbit

(Information from Oxford Brookes University: https://www.brookes.ac.uk/library/how-to/use-ai-tools-for-research)

“This tool allows researchers to build collections of academic papers and visualize connections between them, facilitating comprehensive literature reviews.”

Scite

(from UCL LibGuide) https://library-guides.ucl.ac.uk/generative-ai/genai-search-tools

“Scite is a wide-ranging AI-supported tool. Its core feature is identifying and classifying citations based on whether the text surrounding them supports the cited work, is in contrast to it, or merely mentions it in passing. This allows it to factor this into citation-based searches and metrics, in a way that is not possible with most citation databases and may mean you get more relevant and useful results. Scite also offers an AI “search assistant”, which tries to generate an answer to a question with citations to supporting literature. This is generally of good quality, and it is good for a summary overview, but should be treated with caution - it may have omissions and inaccuracies, and we would not recommend using it as your only search method. AI assistant tools like this often select the papers to highlight in a very idiosyncratic way and may miss key papers.”

ScholarAI

“Find, analyze, and organize academic papers with ease. Streamline your research, boost productivity, and gain insights faster with ScholarAI. Find over 200M+ peer-reviewed papers in seconds with AI-driven search and personalized recommendations. ScholarAI makes discovering reliable research fast and easy.”

SciSpace

From Oxford Brookes University https://www.brookes.ac.uk/library/how-to/use-ai-tools-for-research: “Combining literature search capabilities with summarization features, SciSpace helps researchers quickly assess the relevance of academic papers.”

Semantic Scholar

“Semantic Scholar provides free, AI-driven search and discovery tools, and open resources for the global research community. We index over 200 million academic papers sourced from publisher partnerships, data providers, and web crawls. With Semantic Scholar, researchers can understand a paper at a glance. Our system extracts meaning and identifies connections from within papers, then surfaces these insights to help Scholars discover and understand research.”

AI for user insights

AkroNova: AI-Powered User Insights

The voice of the user is critical input into the process of sustaining and transforming services. However, current methods of gathering user insights can be:

  • Resource intensive
  • Time intensive
  • Expensive
  • Inadequate

There is now a smarter, AI-powered way to understand your users: an easier, immediate, risk-free approach. For example, design, set up and run a focus group and review insightful outputs in less than 60 minutes. It’s this simple…

  1. Define the ‘audiences’ that you want to engage with
  2. Automatically generate rich individual personas based on your audience
  3. Start asking questions
  4. Refine your questions depending on the user feedback

Understanding AI

To understand where AI should be used and will be most successful, one must understand what AI really is. AI, or machine learning, refers to a broad set of algorithms that can solve a specific set of problems, if trained properly.

The success of artificial intelligence depends on data.

The AI bucket consists of:

  • Big data
  • Analytics
  • Machine learning
  • Natural language processing
  • Data visualisation
  • Decision logic

Reproduced from a blog post by Nick Ismail, Information Age, 23 April 2018.

Components of AI

A composite including:

  • Big data
  • Analytics
  • Machine learning
  • Natural language processing
  • Data visualisation
  • Decision logic

Smith, A. (2016). Big Data Technology, Evolving Knowledge Skills and Emerging Roles. Legal Information Management, 16(4), 219-224.

Common AI terms

Large language models (LLM)

“A large language model (LLM) is a type of machine learning model designed for natural language processing tasks such as language generation. LLMs are language models with many parameters, and are trained with self-supervised learning on a vast amount of text. The largest and most capable LLMs are generative pretrained transformers (GPTs). Modern models can be fine-tuned for specific tasks or guided by prompt engineering. These models acquire predictive power regarding syntax, semantics, and ontologies inherent in human language corpora, but they also inherit inaccuracies and biases present in the data they are trained on.” (Wikipedia)

Retrieval-Augmented Generation 

Retrieval-augmented generation (RAG) is a technique that grants generative artificial intelligence models information retrieval capabilities. It modifies interactions with a large language model (LLM) so that the model responds to user queries with reference to a specified set of documents, using this information to augment information drawn from its own vast, static training data. This allows LLMs to use domain-specific and/or updated information. Use cases include providing chatbot access to internal company data or giving factual information only from an authoritative source. (Wikipedia)
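The mechanism can be illustrated with a deliberately minimal sketch: retrieval here is naive keyword overlap (production systems use vector embeddings), the final LLM call is omitted, and all function names and example documents are invented for illustration.

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query; return top k."""
    q_words = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def augmented_prompt(query: str, documents: list[str]) -> str:
    """Prepend the retrieved documents so the model answers from them,
    not from its static training data."""
    context = "\n".join(retrieve(query, documents))
    return (f"Answer using ONLY the context below.\n\n"
            f"Context:\n{context}\n\nQuestion: {query}")

docs = [
    "Interlibrary loan requests rose 12% in 2024.",
    "The library opens at 9am on weekdays.",
    "Pandas are native to China.",
]
print(augmented_prompt("When does the library open?", docs))
```

Sending this augmented prompt to an LLM is what lets the model answer from a specified, up-to-date document set.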

Natural language processing

What is NLP (natural language processing)? IBM [website], 2024.

“(NLP) is a subfield of computer science and artificial intelligence (AI) that uses machine learning to enable computers to understand and communicate with human language. NLP enables computers and digital devices to recognize, understand and generate text and speech by combining computational linguistics—the rule-based modeling of human language—together with statistical modeling, machine learning and deep learning. NLP research has helped enable the era of generative AI, from the communication skills of large language models (LLMs) to the ability of image generation models to understand requests. NLP is already part of everyday life for many, powering search engines, prompting chatbots for customer service with spoken commands, voice-operated GPS systems and question-answering digital assistants on smartphones such as Amazon’s Alexa, Apple’s Siri and Microsoft’s Cortana. NLP also plays a growing role in enterprise solutions that help streamline and automate business operations, increase employee productivity and simplify business processes.”
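As a concrete taste of the statistical modeling the passage mentions, the most elementary NLP step is tokenizing text and counting token frequencies. A sketch using only the Python standard library:

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase the text and split it into word tokens, dropping punctuation."""
    return re.findall(r"[a-z']+", text.lower())

# Token frequencies are the raw material for everything downstream,
# from search-engine ranking to the next-word statistics behind LLMs.
counts = Counter(tokenize("The library, the whole library, and nothing but the library."))
print(counts["library"], counts["the"])  # → 3 3
```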

The following are taken from:

AI in the UK: ready, willing and able? House of Lords Select Committee on Artificial Intelligence, Report of Session 2017–19, HL Paper 100, 16 April 2018.

Algorithm

A series of instructions for performing a calculation or solving a problem, especially with a computer. They form the basis for everything a computer can do, and are therefore a fundamental aspect of all AI systems.

Expert system

A computer system that mimics the decision-making ability of a human expert by following pre-programmed rules, such as ‘if this occurs, then do that’. These systems fuelled much of the earlier excitement surrounding AI in the 1980s, but have since become less fashionable, particularly with the rise of neural networks.

Machine learning

One particular form of AI, which gives computers the ability to learn from and improve with experience, without being explicitly programmed. When provided with sufficient data, a machine learning algorithm can learn to make predictions or solve problems, such as identifying objects in pictures or winning at particular games, for example.
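A toy illustration of learning from data without being explicitly programmed: a one-nearest-neighbour classifier whose behaviour comes entirely from labelled examples rather than hand-written rules. The data and labels here are invented for illustration.

```python
def predict(x: float, examples: list[tuple[float, str]]) -> str:
    """Return the label of the training example closest to x.

    No classification rule is coded by hand; the prediction is
    induced entirely from the labelled examples provided.
    """
    nearest = min(examples, key=lambda ex: abs(ex[0] - x))
    return nearest[1]

# Invented labelled data: loan period in days -> item type.
training = [(7, "dvd"), (5, "dvd"), (28, "book"), (21, "book")]

print(predict(6, training))   # → dvd
print(predict(25, training))  # → book
```

Providing more (and better) data improves the predictions — which is why, as the passage above notes, the success of machine learning depends on data.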

Neural network

Also known as an artificial neural network, this is a type of machine learning loosely inspired by the structure of the human brain. A neural network is composed of simple processing nodes, or ‘artificial neurons’, which are connected to one another in layers. Each node will receive data from several nodes ‘above’ it, and give data to several nodes ‘below’ it. Nodes attach a ‘weight’ to the data they receive, and attribute a value to that data. If the data does not pass a certain threshold, it is not passed on to another node. The weights and thresholds of the nodes are adjusted when the algorithm is trained until similar data inputs result in consistent outputs.
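The node behaviour described above — weighted inputs summed against a threshold — fits in a few lines. This sketch hard-codes the weights and threshold that training would normally adjust:

```python
def neuron(inputs: list[float], weights: list[float], threshold: float) -> float:
    """A single artificial neuron: weight each input, sum the results,
    and pass a signal on (1.0) only if the total clears the threshold."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1.0 if total >= threshold else 0.0

# With these (hand-picked) weights the neuron behaves like a logical AND:
# it fires only when both inputs are active. Training would instead
# adjust the weights and threshold automatically from example data.
print(neuron([1.0, 1.0], [0.6, 0.6], threshold=1.0))  # → 1.0
print(neuron([1.0, 0.0], [0.6, 0.6], threshold=1.0))  # → 0.0
```

A full network chains many such nodes in layers, with each layer's outputs feeding the next.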

Deep learning

A more recent variation of neural networks, which uses many layers of artificial neurons to solve more difficult problems. Its popularity as a technique increased significantly from the mid-2000s onwards, and it is behind much of the wider interest in AI today. It is often used to classify information from images, text or sound.

See the useful resources section for further information on AI from a range of authoritative sources.



Content on HELibTech is licensed under CC0 1.0 Universal. Please refer to re-use permissions on third party content linked to by HELibTech.