Welcome to our first guest post by Dr Michelle Picard. Michelle is Director of Researcher Education & Development in the School of Education at the University of Adelaide. Amongst other things, she convenes the Integrated Bridging Program-Research for international doctoral students.
by Michelle Picard
School of Education, University of Adelaide
I started thinking about using some kind of marking grid for assessing doctoral theses when it became clear to me that students (and sometimes their supervisors) are often not really sure whether their theses are ready to be submitted for examination. Even having read completed theses by others that have been accepted for the award of PhD, it can be very difficult to measure one’s own writing in relation to such documents. Has the work reached an appropriate standard and is it now ready for submission? And even then, it’s perfectly possible to receive conflicting reports from examiners who have very different ideas of what makes the grade for a PhD.
This is an issue for us all, but the complications are perhaps highlighted in contexts where developing nations are attempting to increase research capacity and thus suddenly engaging with the international research community in much greater numbers than previously. With limited numbers of experienced supervisors, students and supervisors are sometimes relying on a certain amount of guesswork about what examiners are looking for. Consequently, assumptions can go either way – sometimes substandard theses are regarded as suitable for submission, and other times unrealistically high standards are imposed on students.
I have been working with my colleague Lalitha Velautham on developing marking rubrics and assessment matrices to see if we can establish a clearer and more uniform method for helping students and their supervisors decide when a thesis is ready for submission. As academic language and learning academics, we were looking for a way to interact effectively with our disciplinary colleagues, and began by developing a research proposal assessment matrix. We took as our starting point the Researcher Skills Development Framework developed at the University of Adelaide, since this document articulates the various skills and actions required to demonstrate autonomy in research, and also referred to the literature review assessment presented by Boote & Beile (2005).
The rubrics for assessment are organised under the following categories, and articulate the extent to which the document provides evidence that the student understands how to:
1. Embark on an enquiry and determine a need for knowledge;
2. Find/generate needed information using an appropriate theoretical framework and/or methodology;
3. Evaluate information/data and the process to find/generate this data;
4. Organise information and develop ideas;
5. Synthesise and apply new knowledge; and
6. Communicate knowledge effectively and ethically, using an appropriate format.
There are some substantial challenges in attempting to create a thesis assessment matrix, not least of which is the accommodation of disciplinary differences, particularly in relation to emerging forms of practice-based and professional doctorates. Also, the notion of academic independence remains a central pillar of much university work, and a document such as we are proposing is likely to bump up against resistance to anything perceived as 'managerialism' or as an attempt to instruct academics in how to do their job. As demonstrated by Kiley and Mullins (2002; 2004), experienced examiners often prefer to rely on a holistic approach to assessing a thesis. However, we hope to make the matrix available merely as a supplementary tool to inform decisions about when a thesis is ready for submission to examiners, rather than as a prescriptive set of marking criteria or an attempt to pre-empt examiners' decisions.
Perhaps there will always be an element of subjectivity and 'art' in the assessment of doctoral theses. In a spirit of transparency and inclusiveness, however, our hope is that by articulating at least some of the ways in which one might assess a thesis, we can minimise some of the unevenness of the system.