
Do we need to change assessment design for AI?

In this post, CRADLE PhD candidate Mert Pekel reflects on the expert panel session CRADLE held as part of our International Symposium 2025. The session also doubled as webinar #2 in our New Directions in AI Research and Practice series and discussed changing university assessment practices for a world with AI.

Mert is a cotutelle PhD candidate at CRADLE and the Centre for Global Learning (GLEA) of Coventry University.


Last month, at the conclusion of the CRADLE International Symposium on Assessment Design in Higher Education: Changing practices for a world with AI, CRADLE hosted a public panel that featured reflections on the symposium discussions.

Chaired by Professor Phill Dawson, the panel featured three panellists: Associate Professor Nicole Pepperell (UTS), Dr Jess Luo (The Education University of Hong Kong), and Dr Zachari Swiecki (Monash). The discussion focused on current practices around GenAI in assessment and on what future assessment practices in higher education (HE) might look like. It was a fun and stimulating conversation, and in this blog post I will try to capture some of its main points.


The main topic of discussion, perhaps because it is so instrumental in driving both debate and research, was the increasing sense of urgency around redesigning assessment in HE, and the resulting pressure on university teachers to overhaul their assessment practices as a go-to universal solution. Jess Luo highlighted that assessment redesign is a collective responsibility, and that sharing it could ease the tension teachers face between labour-intensive assessment redesign and maintaining high-level research outputs. In a context where a healthy symbiosis of people, policies, tools, structures and assessment design processes may open the way for meaningful educational improvements, placing the entire responsibility for redesigning assessment on teachers risks turning it into, in Jess' words, a 'box ticking' activity, and missing out on the potential improvements to assessment in HE.

One of the challenges of changing assessment in the face of AI is ensuring that knee-jerk blanket decisions don't harm already disadvantaged students.

The nuanced needs of learners in an ecosystem where AI tools are ubiquitous were, unsurprisingly, another key point of the discussion. Nicole Pepperell highlighted the double-edged sword of AI in this context, reminding us of its potential usefulness in enabling learners to experience varying levels of difficulty. Nicole offered the term 'friction' to capture this idea: the necessarily challenging part of learning, which cannot be avoided but which, if too substantial, can equally harm learning. Phill Dawson expanded on this view by asking whether we might need a radically different understanding of assessment itself.

Zachari Swiecki explored this idea by considering what really needs to be assessed in higher education in a world with AI. After all, many simple questions may not be as simple as they once were. In this segment of the conversation, the panellists shared a common view on building and maintaining relationships and trust among learners, teachers and HE institutions.

How does this influence my research?

I left the deeply engaging discussion with a wealth of new ideas and perspectives on the role of GenAI in assessment at the learner, teacher and institutional levels, thanks to the panellists' in-depth discussion and the audience's stimulating questions on various aspects of learning and assessment in a world of AI. As a doctoral researcher working on a related topic, I found the panel especially timely. My research focuses on GenAI engagement in the context of written assessment tasks, particularly for learners with dyslexia. At the moment, very little is known about how dyslexic students engage with AI, the ways it might be used to support their learning or, equally, the ways it might cause harm.

The panel prompted me to think more about this gap in the broader context of the need for layered and contextually relevant approaches in assessment to foster inclusive physical and digital learning spaces where GenAI is omnipresent.

I suspect that no matter what aspect of AI and assessment interests us, it will turn out that we can’t expect to solve the AI and assessment challenge without making sure it’s a solution for everyone.

About Mert Pekel


Mert is exploring neurodivergent students' use of generative AI for academic literacies in a cotutelle project with CRADLE and the Centre for Global Learning (GLEA) of Coventry University, UK. He holds a Master's in English as a Second Language and has 14 years of ESL teaching experience, as well as experience teaching Turkish as a foreign language through the Fulbright FLTA Program.


Missed the webinar? Catch up on our YouTube channel or on our New Directions in AI Research and Practice page.


The next webinar, Secure assessment tasks in a time of GenAI, takes place on Thursday 23 October at 2pm.


Webinar | Title | Date
1 | Student perspectives on AI in higher education | 29 August 2025 (Watch now)
2 | Assessment design in higher education: Changing practices for a world with artificial intelligence | 17 September 2025 (Watch now)
3 | Secure assessment tasks in a time of GenAI | 23 October 2025 (Register now)

CRADLE has been busy researching GenAI – find our latest publications on our blog.







