By Cally Guerin

The International Academic Identities conference at Sydney University this week has been abuzz with discussions about “Academic life in the measured university: pleasures, paradoxes and politics”. Our identities as academics within universities are measured, monitored and surveyed in all sorts of ways, from the time allotted to various elements of our workloads, to the money we can generate through student enrolments and grants, and, of course, our research output.

A number of papers at this conference focused on research writing, and lots of them experimented with various presentation modes. In “Becoming ACADEMICWRITINGMACHINE”, Eileen Honan, Linda Henderson and Sarah Loch spoke about the unrelenting pressure to write and publish, with Linda’s poetic rendering providing a moving way into the lived experience of contemporary academic identities. Harry Rolfe mapped the intricate networks of co-authoring and collaboration at his university. Jeanette Fyffe and Susan Martin performed a poignant, heart-rending version of T. S. Eliot’s “The Love Song of J. Alfred Prufrock” in a satirical take on the experience of being a researcher whose life is “measured out in HERDC points” (for non-Australian scholars, this is the national system for counting university research output). Helen Sword and Marion Blumenstein presented their current work on measuring the emotional elements of writing in “Measures of Pleasure” – watch out for Sword’s new book coming out soon.

Doctoral writing itself is measured in multiple ways in the “measured university”. One of the big concerns for PhD candidates in Australia is the measurement of time to completion – to the day – as indicated by the successful examination of the written thesis. This comes down to the specific number of days of candidature.
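
To make that “to the day” measurement concrete, it boils down to simple date arithmetic between enrolment and successful examination. A minimal sketch, using invented dates purely for illustration:

```python
from datetime import date

# Hypothetical candidature dates, purely for illustration
enrolment = date(2012, 3, 1)
thesis_passed = date(2015, 9, 15)

days_of_candidature = (thesis_passed - enrolment).days
print(days_of_candidature)  # 1293 days
```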

Then, the scholarly publications by doctoral candidates are measured in several ways. Institutions care about where a publication appears, so journal impact factors and the reputation of scholarly publishers become important for authors. How those publications are taken up by other scholars is also measured through bibliometrics that analyse citation rates, such as the h-index and Google Scholar citations. Increasingly, altmetrics are regarded as legitimate ways of measuring the uptake of academic research beyond traditional citations, counting mentions of that research in news media stories, Twitter, Facebook, blogs, policy documents and webpages. (In a note of caution, Neil Hall jokingly warns researchers to be wary of the K-index – K for Kardashian – by which he refers to the dangers of being known for one’s celebrity status rather than for the quality of one’s actual research.) Understanding the significance of these measures is also relevant for authors in a tight job market.
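
For readers less familiar with the h-index mentioned above: an author’s h-index is the largest number h such that h of their papers have each been cited at least h times. A minimal sketch of that calculation, using invented citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for five papers
print(h_index([10, 8, 5, 4, 3]))  # 4: four papers each have at least 4 citations
```

Services such as Google Scholar compute this automatically over their own citation databases, which is why the same author can show different h-indexes in different systems.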

There is also now a greater focus on measuring the impact of research in terms of economic effects (including any commercialisation of research outcomes) and social effects (such as policy change or interventions in practice). Social impact is not easy to measure, as the UK’s experience of assessing impact through REF case studies has demonstrated.

As with any conference, I was able to attend only a fraction of the stimulating, provocative papers being presented, and heard lots of enthusiastic reports about what was going on in the other rooms during the parallel sessions. To find out more, check out the abstracts on the program. If you were there, it would be great to hear about the sessions you attended and what you learnt about research writing.
