Learning analytics and evaluative judgement at EARLI SIG1 2018
1 October 2018
I recently had the good fortune to participate in the EARLI SIG1 (Assessment and Evaluation) 2018 conference in Helsinki in Finland: “SIG 1 is interested in addressing the need for pedagogically driven models of assessment and electronic assessment that inform policy, support teachers in their work and that allow pupils and students to take more control of their own learning and become more reflective.”
I’ll describe two parts of the meeting – the learning analytics plenary panel, and the evaluative judgement symposium.
Phill Dawson (CRADLE, Deakin) chaired the closing plenary panel, which included me, Dragan Gašević (Monash), Samuel Greiff (Luxembourg) and Benő Csapó (Szeged). As part of this panel, I reflected on three days’ worth of discussing learning analytics, the core theme of the conference. Dragan Gašević’s opening keynote raised more challenges than solutions – and there seemed to be something of a consensus that we are still establishing how the power of big data can best inform assessment.
For me, one of the outstanding issues is how sense is made of trace data. Sometimes the data are clearly meaningful, such as strategies for completing problems. At other times, data such as mouse clicks or time spent watching videos are only loosely connected with learning. In the round table I facilitated, I noted that many within the assessment community are not familiar with the field of learning analytics, and vice versa. In particular, the value of analytics in offering overall cohort views could be explored further. I am looking forward to growing maturity within the field.
Another, quite different, experience was being in the audience for a symposium about evaluative judgement, chaired by Rola Ajjawi (CRADLE, Deakin). Joanna Tai (CRADLE, Deakin) kicked off with an overview of what evaluative judgement is, and why it is important to understand the quality of work – both your own and others’. Ernesto Panadero (Autónoma de Madrid) followed with a discussion of the inter-relationship between evaluative judgement and self-regulated learning. Finally, Jessica To (HKU) described an empirical study of the development of evaluative judgement.
Mien Segers (Maastricht) did a fabulous job as discussant. She asked about the context of evaluative judgement and invited us to consider how these judgements are socially situated. (This of course was music to my ears, as my chapter in the evaluative judgement book* explores what I believe is the strongly situated nature of evaluative judgement.) This sparked a very interesting exploration among participants of the situated nature of evaluative judgement, including its relationship to feedback literacy. I feel that further empirical work on evaluative judgement will really help to show how we can build learners’ understanding of quality work.