What was the activity?
On a module evaluation survey, first-year undergraduate students on a ‘Principles of Marketing’ module noted the challenges they face in understanding assessment criteria, expectations and marking rubrics. The students also highlighted unclear or inconsistent explanations of assessment language by module tutors as a contributing factor.
The teacher and student populations in higher education are increasingly diverse – culturally, socially and linguistically – which makes the transparency of assessment language increasingly important. Language is fundamental to students’ ability to access the curriculum and participate in disciplinary discourse; therefore it is fundamental to their academic performance. Assessment criteria and marking rubrics are commonly used to enhance transparency of assessment (Bearman and Ajjawi, 2021). However, students’ unfamiliarity with assessment language can mean criteria are opaque and valued knowledge remains implicit (Tierney and Simon, 2004). Accordingly, students often describe rubric language as ‘confusing’ and perceive rubrics to be less helpful in clarifying aspects of assessment than teachers do (Fang and Wang, 2011; Bell et al., 2013; Li and Lindsey, 2015). Fostering familiarity with assessment language and assessment literacy amongst all students can address inequity in students’ educational experience (Felten and Finley, 2019), contributing to reduced attrition and performance gaps (Butcher, 2020).
This intervention sought to embed a more inclusive approach to assessment at the module level by enhancing both students’ and teachers’ knowledge of language and, in turn, assessment literacy.
Firstly, 473 students of diverse academic and cultural backgrounds evaluated the module’s assessment criteria and marking rubrics. Students highlighted terms they considered to be opaque and suggested ways their understanding could be improved.
Secondly, the module leader created a glossary. Twenty-eight students across five focus groups refined the definitions and indicated preferences for layout and visual presentation, and for a downloadable document rather than a digital format within the virtual learning environment.
Thirdly, the teaching team were introduced to the module assessment brief, marking rubric, glossary and rationale at the start of the module. Teachers discussed the glossary definitions, mapped the glossary words to the assessment criteria to establish the link between the two documents and collectively outlined a plan to guide a discussion of the assessment criteria and glossary in class with students.
Fourthly, a new cohort of students was introduced to the assessment brief, criteria and marking rubric in the first week of the module. Then, students used the marking rubric and glossary in the third week of the module to assess peers and provided constructive written feedback.
How did it impact students?
Qualitative survey comments revealed that students found the glossary useful to understand assessment language and teachers’ use of assessment terms in class. Students noted increased confidence in using assessment language in discussion and to provide feedback, demonstrating the benefits of linguistic resources in student socialisation and participation in disciplinary discourse (Bond, 2020). Students noted that easy access to the glossary facilitated group discussions when members had differing ideas about how to translate assessment expectations into evidence of performance.
How did it impact teaching colleagues?
Anecdotal feedback at the end of the module revealed that colleagues perceived an enhanced understanding of assessment language and its role in inclusive and accessible assessment. Using the glossary as a benchmark allowed tutors to identify discrepancies and gaps in individual students’ understanding and fostered deeper discussion of assessment literacy.
Any advice for others?
- Define assessment terms, assumptions and skills or competencies.
- Dialogue to establish shared understanding is helpful but may not be the best use of class time. Collective codification provides a concrete reference and allows for immediate in-depth class discussion.
- Use glossaries as tools to support teaching and learning and engage all stakeholders in dialogue. They should be dynamic representations of the language relevant to a given discipline, assessment, educational context and set of learners.
References
Bearman, M. and Ajjawi, R. (2021) ‘Can a rubric do more than be transparent? Invitation as a new metaphor for assessment criteria’, Studies in Higher Education, 46(2), pp. 359-368. https://doi.org/10.1080/03075079.2019.1637842
Bell, A., Mladenovic, R. and Price, M. (2013) ‘Students’ perceptions of the usefulness of marking guides, grade descriptors and annotated exemplars’, Assessment & Evaluation in Higher Education, 38(7), pp. 769-788. https://doi.org/10.1080/02602938.2012.714738
Bond, B. (2020) Making language visible in the university. Bristol: Multilingual Matters.
Butcher, J. (2020) ‘Widening participation in higher education’, in Holiman, A. and Sheehy, K. (eds.) Overcoming Adversity in Education. London: Routledge, pp. 139-150.
Fang, Z. and Wang, Z. (2011) ‘Beyond rubrics: Using functional language analysis to evaluate student writing’, Australian Journal of Language and Literacy, 34(2), pp. 147-165.
Felten, P. and Finley, A. (2019) Transparent design in higher education teaching and leadership: A guide to implementing the transparency framework institution-wide to improve learning and retention. Stylus Publishing, LLC.
Li, J. and Lindsey, P. (2015) ‘Understanding variations between student and teacher application of rubrics’, Assessing Writing, 26, pp. 67-79. https://doi.org/10.1016/j.asw.2015.07.003
Tierney, R. and Simon, M. (2004) ‘What’s still wrong with rubrics: Focusing on the consistency of performance criteria across scale levels’, Practical Assessment, Research, and Evaluation, 9(2), pp. 1-7. https://doi.org/10.7275/jtvt-wg68