Library AI Evaluations goes public
“AI is not a done deal. We’re building the road as we walk it, and we can collectively decide what direction we want to go in, together.” Dr. Sasha Luccioni, AI & Climate Lead at Hugging Face
The Library has just soft-launched its new public-facing AI Evaluations Guide. The guide shares how we are critically engaging with emerging AI through a library lens to support informed decision-making, surface strategic risks, and demonstrate principled practice.
Why we made the guide public
Originally developed to support internal capability building, our growing collection of AI Evaluations has now evolved into a living public resource. As interest grew, both at Deakin and across the sector, it made sense to make the Library's work more visible, adaptable, and reusable.
The release supports our commitment to:
- Transparency in how the Library is assessing the implications of AI across learning, research, and digital services
- Advisory leadership that helps our community evaluate and engage with AI critically and constructively
- Open practice that allows others to see, consider, and potentially adapt our approach
As a Library interested in information and knowledge practices, we need to model the kind of evaluative judgement and transparency we want our students, educators, and professional peers to apply in their own contexts.
What’s in the guide?
The AI Evaluations Guide is designed to be clear, consistent, and practical. It is guided by and reflects Deakin’s GenAI Principles. Our evaluations are not endorsements. They are informed perspectives designed to support confident and principled engagement.
Each evaluation outlines:
- Key AI functions and intended use
- Relevance to scholarly, library, and information contexts
- Pedagogical, ethical, and data-related considerations
- Strategic reflections, limitations, and risks
- Advisory statements to support critical engagement
We focus on AI that our community is already encountering through freely accessible systems, subscription tools, or library-supported platforms. This means evaluating what’s visible and used in practice, not just the newest or most hyped technologies.
The evaluations are grounded in expertise across our Library, from expert searching and copyright to digital literacies instruction. The guide also documents our evaluation approach, including alignment with Deakin’s GenAI Principles and key digital capability frameworks.
Completed evaluations so far include:
- Consensus
- Elicit
- EndNote AI functions
- EBSCO AI functions
- ProQuest Research Assistant
Why does this matter now?
The AI landscape is expanding rapidly. From writing and search to transcription and visual generation, AI is becoming part of everyday academic work, even when users don’t always recognise it. The need for structured, credible, and values-based guidance has never been greater.
This guide is one way Deakin Library is stepping into the AI space. Our evaluations provide language to support AI-related conversations, models for thoughtful critique and informed uptake, and resources that bridge the gap between policy and day-to-day decisions.
How to use the guide in your work
The AI Evaluations Guide is a practical resource built for action, not just a collection of reports. You can use it to:
- Support conversations with your community
Refer to evaluations when asked about an AI system, especially in teaching or curriculum settings where AI is being trialled or queried.
- Build evaluations into learning and capability resources
Embed evaluations into student workshops, professional learning sessions, and consultation or enquiry support materials. They help scaffold practical discussions around responsible AI use, critical thinking, and digital capability development.
- Explore AI yourself and share what you learn
Try the AI functions we've evaluated, reflect on what works and where you feel uncertain, then share those experiences with your colleagues. This grassroots insight helps us all support the Deakin community more effectively.
What’s next?
This is a living initiative. New evaluations will be published approximately every few weeks, and feedback rounds will be built into our update cycle to incorporate insights from students, staff, and the wider community. The guide's usability and navigation will be shaped through iterative design, in collaboration with the Library's learning designers.
“As AI goes mainstream …, library workers are increasingly having to make day‑to‑day decisions about adoption, advice, and policies. Their stakeholder communities will be in various states of enthusiasm or resistance, interest or apathy, knowledge or learning.” Dr. Lorcan Dempsey, Professor of Practice and Distinguished Practitioner in Residence at the University of Washington