20 October 2017
CRADLE hosted a diverse group of international, national and local assessment/digital learning experts at the recent two-day International Symposium in Melbourne. In this blog post, four researchers reflect on the impact the Symposium has had on their thinking and future research directions.
CRADLE Fellow Dr Bhavani Sridharan (Deakin’s Faculty of Business and Law) sharpened her focus on the need for research around business curriculum and assessment reform in the context of the artificial intelligence (AI) and robotic revolutions, and the looming question: ‘will a robot take my job?’
There is currently a clear gap between industry stakeholders’ expectations and higher education stakeholders’ delivery of some of the key competencies that are immune to the AI revolution. There is also a compelling need to prepare students for the jobs of the future if providers are to survive the competitive, open, global higher education market. In fact, this symposium reminded me of the Nokia CEO’s statement during Microsoft’s acquisition of Nokia: “we didn’t do anything wrong, but somehow, we lost”. The reasons quoted for Nokia’s failure include: “the world changed too fast”; “the opponents were too powerful”; and “we failed to adapt and listen”.
To avoid this kind of predicament, especially given the foreboding prospect of giant companies entering the higher education market, I feel it is imperative that we explore how utility, understanding and interpretation of these competencies vary between multiple key stakeholders. For example, big data is becoming ubiquitous and is unique to each domain, but research into embedding literacy in making effective use of such data into the business curriculum remains under-explored. This literacy is crucial for making sense of, and taking decisions in, the dynamic, ambiguous and uncertain contemporary world of business. It is a competency clearly for humans rather than robots, owing to the complexity and dynamic nature of the business world. However, embedding such contextualised competencies in business curricula is yet to become a mainstream phenomenon. Hence, my approach would be to explore design thinking models and agile methods to identify the gaps; ascertain barriers and enablers; and then develop an implementation framework to realise the vision of developing the contextualised competencies currently missing from business curricula.
CRADLE Fellow Dr Matthew Dunn (Deakin’s Health Faculty) reflected on the timeliness of the Learning Analytics-focused sessions and posed a health-based “big-data” scenario:
Abelardo Pardo, Simon Knight and Dragan Gašević all gave fascinating talks about the pros and cons of using Learning Analytics (LA) in the higher education environment. The take-home message for me was that LA has the potential to generate data that can be used to design new types of assessment as well as to support existing assessment. In connection, LA can help identify or convey patterns across large numbers of students. Abelardo gave the example of an exercise science subject that might have 2,000 students. Imagine if they all wore a Fitbit®. We could use LA to collate that data, and suddenly we would have a real-world data set, not only for the students to practise analysing and interpreting, but for us to use in providing feedback as well.
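The wearable-data scenario above can be sketched in a few lines. This is a minimal, purely hypothetical illustration: the student IDs and step counts are invented, and a real LA pipeline would draw on the institution’s learning management system and device APIs rather than a hard-coded dictionary.

```python
from statistics import mean

# Hypothetical daily step counts for a handful of students
# (in the symposium example, this would be ~2,000 students).
step_counts = {
    "student_01": [4200, 8100, 9650, 7200],
    "student_02": [12000, 11500, 9800, 13400],
    "student_03": [2100, 1800, 2600, 3000],
}

# Collate the raw data into a class-level dataset that students
# themselves could analyse and interpret...
class_averages = {sid: mean(days) for sid, days in step_counts.items()}
cohort_average = mean(class_averages.values())

# ...and reuse the same dataset to drive simple feedback prompts.
for sid, avg in class_averages.items():
    band = "above" if avg > cohort_average else "below"
    print(f"{sid}: {avg:.0f} steps/day on average ({band} cohort average)")
```

The point of the sketch is the dual use of one dataset: the same collated figures serve both as student analysis material and as a basis for feedback.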
Another major point of discussion on the day focused on the need for us educators to ask: “What data do we have, and what data do we want?” Is LA just another tool that adds another layer of complexity? Data can inform a diagnosis (e.g. 15% of my students did not understand the assessment task requirements), but people need to act: how do I use the data I collect? The consensus seemed to be that LA could have useful implications for how we design assessments and provide feedback, but we need to start with a purpose for collecting the data.
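The diagnosis-to-action point can be made concrete with a toy example. The submission flags and the 15% threshold below are hypothetical illustrations, not taken from any real LA system; the code only produces the diagnosis, and deciding what the suggested action should be remains a human judgement.

```python
# One flag per submission: did the student meet the task requirements?
# Here, 3 of 20 hypothetical students missed them.
met_requirements = [True] * 17 + [False] * 3

share_missed = met_requirements.count(False) / len(met_requirements)

# The data informs the diagnosis; the teacher still has to act on it.
if share_missed >= 0.15:
    action = "revise the task brief and post a clarification"
else:
    action = "no intervention needed"

print(f"{share_missed:.0%} missed the requirements -> {action}")
```

The design point is that the analytics end at `share_missed`; everything after the `if` is pedagogy, not data.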
Chad Gladovic, a PhD researcher in work-integrated assessment, reflects on re-imagining more transparent and scalable assessment practices:
Assessment in digital and non-digital environments is not a new topic, but it is always a vocal, challenging and interesting one to discuss. We didn’t just re-imagine assessment; rather, we re-assessed our practices and our understanding of education. One of the first barriers we identified in our discussion was the notion that we are pre-conditioned by regulatory and institutional requirements that prevent us from incorporating into assessment one of its main ingredients: creativity. This led us to the realisation that we need more transparency in the design of assessment, and that students should be our partners. We also challenged the concept of scaffolding, arguing that scaffolding should create lifelong learners. The topic of the scalability of assessment raised questions such as: What is it possible to scale? Are there things that can’t be scaled? Can we measure skills like creativity, self-management and global citizenship? We didn’t find all the answers, but we identified areas for further research.
The symposium addressed many domains of my research, but one of the most thought-provoking was peer and self-assessment. This is closely aligned with my research area of evaluative judgement, and it challenged me to question whether the solution lies in technology or in people. Self-assessment seems to be one of the main drivers allowing students to embrace evaluative judgement as an integral part of their profession, enabling them to make judgements of their own work and the work of others.
Rachelle Esterhazy, visiting CRADLE Fellow and PhD researcher in feedback for learning, reflects on the socio-cultural dynamics of technology and feedback:
It was a great pleasure to join the discussions around assessment and technologies during this year’s CRADLE symposium. It left me with a better understanding of the complexity and diversity of the field and many thought-provoking ideas that resonated well with my own research agenda around feedback in higher education.
Taking a sociocultural perspective, I believe that these emerging digital tools have the power to change our learning and teaching practices in a far-reaching and substantial manner. This includes the way students interact with each other and their teachers, what roles the different participants take in different educational settings and what resources are made available. In the context of feedback, technology can give us the possibility to rearrange responsibilities and power dynamics in ways that enable more dialogical interactions and feedback for learning. Several contributions in the symposium gave an impressive account of how this could be done. One idea I am taking home is that of engaging digital learning environments in which assessment and feedback are embedded within the learning activities and become integral parts of the student’s learning experience. This seems to be a promising way to give students more agency in the assessment and feedback process, and support continuous feedback dialogues about the students’ learning together with the involved peers, teachers and technologies.