The federal government has released its discussion paper on performance funding for universities: http://www.deewr.gov.au/HigherEducation/Documents/HIEDPerformanceFunding.pdf
Overall, it raises many pertinent questions. Had it not been released just before Christmas Eve to a change-weary sector, it might have given the sector a genuine opportunity to shape how teaching performance is measured in Australia.
I haven’t spent many, many months thinking about this, as the government has. But I’ve read the paper and have a few comments.
There are some good points, I think. The 2011 facilitation payment will be welcome, and a step in the right direction in helping universities position themselves to meet targets later on. Well, let’s face it: any money is good money, or we’ll make it feel good somehow.
I agree with the focus on the first year because of its impact on the student experience and on learning. But for many pathway students from Vocational Education and Training (VET) and other ‘non-traditional’ avenues into higher education, the first year of study is effectively the second year, because these students receive credit (or ‘advanced standing’) for study done previously. This appears not to have been included in the thinking here.
The paper proposes using completion rate to measure attainment, with the method of calculating it ‘to be developed’. In the meantime, the proxies of retention and progress will be used, despite their very problematic nature. They are particularly problematic for students who take ‘non-linear’ pathways through university. Success can be defined in many ways, and the proposed definition is very narrow.
The proposal to use the self-report Course Experience Questionnaire (CEQ) generic skills scale as a measure of outcomes is very disappointing. The limitations of self-report, and of a post-university test with no matching pre-university test, make this a meaningless measure. How will we know whether students already had most or all of the generic skills they (think they) have on graduation before they started uni, or whether the university education they received facilitated the development of those skills? A pre-test/post-test design using a valid and reliable instrument is a pretty standard concept in measurement, but it appears to have been overlooked by the experts.
The proposal to use the proportion of staff with Graduate Certificates in Higher Education is silly: we have no clear evidence that completing these certificates improves the quality of teaching or learning. I have a bit of a rant about that in Campus Review this week: http://www.campusreview.com.au/pages/section/article.php?s=Comment&idArticle=13897 (you might have to subscribe first).
The graduate skills assessment (GSA) argument is based on work done by the Australian Council for Educational Research (ACER) a DECADE ago. And there is no recognition of the socio-emotional development of students, or of how this might be measured. There’s a great article on this in the latest edition of the American magazine Change: http://www.changemag.org/ (see the article on NSSE).
I hope that, despite the timing of this paper’s release, and despite cynicism in some quarters about the seeming inevitability of the ‘proposed’ indicators, individuals and institutions will take the opportunity to contribute to this important discussion.