A look at Google Scholar metrics

I remember that when I finished my PhD in 2015, Google Scholar (GS) was still quite new and not exactly accepted by academia. Its novelty was that it captured references to one’s work that weren’t otherwise indexed: reports, theses, and journals that weren’t part of the established databases. There were arguments that these were not “real” citations and that the numbers couldn’t be trusted. Of course, this broader coverage might be a better indicator of the impact of one’s research beyond academia, which is something governments seem keen on. GS individual profiles are now de rigueur – and they definitely make one feel better about one’s citation counts than, say, Scopus, where citations seem harder to come by because of what is indexed.

Five years on, GS is very much a part of the metrics landscape. The 2020 metrics were released on July 7 and, from what I can gather of the inclusion guidelines, as long as Google’s crawlers can access the relevant journal information (and the journal has published more than 100 papers over the relevant period), it will be included in the GS calculations. The 2020 metrics cover papers published in the period 2015–2019 inclusive, and citation counts are for all papers up to June 2020.

GS ranks journals on the basis of their “h5-index”, which is “the largest number h such that h articles published in 2015-2019 have at least h citations each”. So if a journal’s h5-index is 26, it published 26 articles in the period 2015–2019 that each received 26 or more citations, and the rest of its papers from that window didn’t have enough citations to push the number any higher. The total number of articles published doesn’t come into the calculation – just the well-cited ones. (So a journal could have published 1,000 articles, or just 26, and still end up with an h5-index of 26!)
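If it helps to see the arithmetic spelled out, here’s a minimal sketch in Python of how that calculation works – the function name and the citation counts are just made up for illustration, not anything GS publishes:

```python
def h5_index(citations):
    """Largest h such that h articles have at least h citations each."""
    ranked = sorted(citations, reverse=True)   # most-cited articles first
    h = 0
    for rank, count in enumerate(ranked, start=1):
        if count < rank:
            break
        h = rank
    return h

# Two hypothetical journals: one with a long tail of barely cited papers,
# one that published only 26 well-cited papers. Both score 26.
big_journal = [30] * 26 + [2] * 974    # 1,000 articles in total
small_journal = [30] * 26              # 26 articles in total
print(h5_index(big_journal), h5_index(small_journal))   # 26 26
```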

GS also reports an “h5-median”, which is “the median number of citations for the articles that make up its h5-index”. This gives a bit more of an idea of the distribution of citations across those 26 articles. If this number is close to the h5-index – say, 30 – then we know that half of those papers (13 of them) received between 26 and 30 citations, and the other 13 received at least 30. Similarly, if the h5-median were 75, we’d know that while 13 papers received between 26 and 75 citations, the other 13 received at least 75 citations each.
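A similar sketch (again with invented citation counts, reusing the idea from the snippet above) shows how two journals with the same h5-index of 26 can end up with very different h5-medians:

```python
from statistics import median

def h5_median(citations, h5):
    """Median citation count of the h5 most-cited articles (the h5 'core')."""
    core = sorted(citations, reverse=True)[:h5]
    return median(core)

# Both hypothetical journals have an h5-index of 26 (as computed earlier).
clustered = [32] * 13 + [28] * 13 + [1] * 100   # citations sit close to the index
skewed    = [123] * 13 + [27] * 13 + [1] * 100  # a handful of heavily cited papers
print(h5_median(clustered, 26), h5_median(skewed, 26))   # 30.0 75.0
```

In the first case the median sits just above the index (the “30” scenario); in the second, the top half of the core is cited far more heavily (the “75” scenario).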

So, how does this play out in higher education? Luckily for us, GS has a separate higher education category. You can click on the “h5-index” number for each journal to see the papers that contributed to the calculation of that number. You can check it out for yourself – Studies in Higher Education is #1 in Higher Education and #4 in the more general Education category, with an h5-index of 59. Ranked #2, Higher Education is close behind with an h5-index of 54, and is also #5 in Education. Assessment & Evaluation in Higher Education (AEHE) is #3 with an h5-index of 43 – and also comes up as #3 in “Academic & Psychological Testing” and #16 in Education.

[Image: Google Scholar h5-index ranking of higher education journals]

While the article I led on evaluative judgement comes up as #5 in Higher Education, it’s in AEHE that the CRADLE team have made an impact, with four of the ten most-cited articles since 2015 having CRADLE authors. Number 11 is also CRADLE-led. CRADLE’s Phill Dawson also noted in a tweet that there has been a big rise in impactful studies on feedback in AEHE:

[Image: Assessment & Evaluation in Higher Education - 2019 top-cited publications]

So, what’s the point of checking out GS metrics, or indeed any metric system? As a researcher looking to publish work, metrics can help in figuring out which journals are more likely to be perceived as a source of quality publications, and therefore read – prestige by association with other well-cited articles. As a researcher trying to present my best self in a grant, a speaker bio, or a promotion application, this information helps me construct an image of myself as someone who publishes in top journals, has their finger on the pulse, and whose work is read by others (even if it’s not my work that contributed to the ranking!).

As a consumer of research, metrics might also help me figure out whether an article I am reading is likely to be high quality – or not. Highly ranked journals generally receive more submissions, so it is more competitive to publish in them, which means editors have to be more stringent in their quality control. For instance, I’ve heard that at some journals, if any reviewer recommends anything more than “minor revisions”, the paper will be rejected, because the pipeline of quality submissions is so large. While this is again quality by association, it is a heuristic available to us in making a quality appraisal. I would, however, suggest that each piece of research be assessed on its own merits – a high citation count doesn’t mean that a piece of work is good, just that lots of people have found a need to cite it!

Articles from the CRADLE team referred to in this post:

Ajjawi, R., & Boud, D. (2017). Researching feedback dialogue: an interactional analysis approach. Assessment & Evaluation in Higher Education, 42(2), 252–265. DOI:10.1080/02602938.2015.1102863

Boud, D., & Soler, R. (2016). Sustainable assessment revisited. Assessment & Evaluation in Higher Education, 41(3), 400–413. DOI:10.1080/02602938.2015.1018133

Carless, D., & Boud, D. (2018). The development of student feedback literacy: enabling uptake of feedback. Assessment & Evaluation in Higher Education, 43(8), 1315–1325. DOI:10.1080/02602938.2018.1463354

Dawson, P. (2017). Assessment rubrics: towards clearer and more replicable design, research and practice. Assessment & Evaluation in Higher Education, 42(3), 347–360. DOI:10.1080/02602938.2015.1111294

Dawson, P., Henderson, M., Mahoney, P., Phillips, M., Ryan, T., Boud, D., & Molloy, E. (2019). What makes for effective feedback: staff and student perspectives. Assessment & Evaluation in Higher Education, 44(1), 25–36. DOI:10.1080/02602938.2018.1467877

Tai, J., Ajjawi, R., Boud, D., Dawson, P., & Panadero, E. (2018). Developing evaluative judgement: enabling students to make decisions about the quality of work. Higher Education, 76(3), 467–481. DOI:10.1007/s10734-017-0220-3




