Leveraging AI to help with open knowledge
Welcome to our fourth post celebrating the lead-up to International Open Access Week (20–26 October). This week, we turn our minds towards the considerations presented by Artificial Intelligence (AI) as a disruptor, exploring its opportunities and interactions with open knowledge. Our guest author, Lisa Grbin, Open Education Librarian, together with input from our local AI experts, helps us navigate this.
In the meantime, remember to register for our Open Access Week event, Who owns our knowledge? A conversation on 23 October.
What is the state of play?
It can feel like new AI tools are arriving every day – chatbots, image, audio or multimedia generators, AI-powered search tools, summarisation or note-taking tools, research-specific tools, and so much more. It’s almost impossible to keep up! These tools vary enormously in their coverage and uses, and just as much in their levels of accessibility, sustainability, equity and ethics.
Many AI tools operate on a subscription or ‘freemium’ basis. Some are free, but the market is varied. Affordability can separate and isolate those who ‘have’ from those who ‘have not’ (or cannot). Interestingly, AI functionality is also being added to a lot of software and resources that we may use every day – for example, natural language searching in library databases and AI summarisation tools across different platforms, like Zoom and Teams. Hand-in-hand with this innovation come issues and questions around AI bias, output quality and environmental impact.
Another element to this is how the companies that provide or own AI tools use the content you upload or input (e.g. your prompts, content and questions), and what they do with the information these tools produce.
It’s important to remember that, in using these tools, you need to comply with copyright and privacy considerations in your AI prompts and uploads, especially with generalist AI Large Language Models (LLMs) like ChatGPT or Gemini.
There is a complex interplay between the benefits of these tools and the considerations around their use, with guidance and best practice for genAI tools still developing and evolving.
Does genAI make everything open?
The short answer is no.
The algorithms behind these tools can crawl the Internet, ingesting and learning from content that is not behind a paywall, and in some cases content that is. The content that is crawled or used for training is not necessarily open access. Additionally, many publishers are now on-selling content to AI companies, something technically allowed when authors have signed over their rights. Most authors never imagined their work being used this way and feel blindsided. At the same time, ensuring that AI systems are trained on high-quality, well-sourced content could improve the accuracy and usefulness of AI-generated outputs, making it all the more important that authors are part of the conversation.
When you use these tools, the content generated draws on all the training data ingested, but the tools are essentially creating or authoring a new output. Unlike what we discussed in our first blog post, this does not mean you own the content. If you check the fine print, you may be surprised by how your information is being used and who owns the outputs.
Opening AI: ways to use it
There are some things to consider when using AI tools in ways that are more open, transparent and ethical:
- Treat AI as a curator and assistant that makes knowledge more findable and useful. Check the sources and references provided, but be careful: AI sometimes hallucinates, creating fake (but convincing) resources.
- Think about the prompts you use, and be open and clear about how you report your use of AI.
- Check the outputs. What is included? What is missing? What could be improved?
- Be mindful of where AI is being used and consider how it may change your practices, decisions and thinking. See AI use in authorship and peer review.
- To support your considered use of AI, check out Deakin’s genAI principles.
Finally, read the curated list of Deakin AI guides and modules, which are designed to support your AI literacy.
AI towards open
Deakin Library has entered a development partnership with the AI-enabled Open Educational Resources (OER) discovery platform, Sylla. Sylla uses AI to align, compare and suggest OER alternatives to expensive, traditional educational and instructional materials.
Watch this space as we work together to build more open, sustainable and equitable pathways to your resources. If you are interested in learning more or participating, get in touch with your Librarians.
Authoring Open AI content
Some genAI tools do let you retain your copyright, but it is best to seek advice and further information before putting this into practice, so you know when you may do so and when you may not.
Resources and guidance from the Library
There are many resources available from Deakin to help you navigate and explore this dynamic space. Check them out via the Using Generative AI website.
Provocation
How are you navigating this rapidly changing technological landscape? Do you acknowledge any work you do with genAI? If you have OA works listed online, how do you feel about them being ingested?
Stay tuned for next week’s blog post when we take a look at open knowledge in action to support research, student and community engagement. To keep the conversation going, be sure to register for our Open Access Week event, Who owns our knowledge? A conversation.