Your questions on meaningful assessment answered


A guest panel of staff and students responds to reader-submitted questions.

In this edition, our panel answers your questions on meaningful assessment. The questions centre on inclusion, the extent to which exams can be considered meaningful, what enjoyable assessment methods might look like, and how to use feedback in effective, meaningful ways. Sharing their perspectives are students Mike Warren (BSc Computer Science for Cyber Security) and Shir Grunebaum (MSc Occupational Therapy), and staff members Dr Andrea Macrae (English and Modern Languages), Dr Katherine Richardson (University of Leeds), Dr Mary Davis (Oxford Brookes Business School), Professor Berry O’Donovan (Oxford Brookes Business School), and Claire Jones (Oxford Brookes Business School).

1. What is the relationship between meaningful assessment and inclusion? How can assessment be made more inclusive? Adrian Wallbank (Oxford Brookes University)

Mike: “For assessment to be inclusive, it needs to be accessible to everyone. This could include adding a level of choice in what tasks a student completes and in how the assessment is taken (for example, offering online options). Meaningful assessment has to push students to think critically and creatively, understanding the information they have learned and applying it to solve problems.”

Shir: “Structuring assessments in a more inclusive manner ensures a better opportunity for success for students with diverse learning styles. Inclusive assessments are those that provide an optimal layout to ensure that students with perceptual or learning difficulties are able to access the appropriate information in a clear and engaging manner. In addition, through the use of images and diagrams, where suitable, students who benefit from visual learning opportunities are provided with an increased opportunity for academic success. For instance, as a visual learner, I learn best through reading or seeing pictures and will often develop mind maps or infographics to help me prepare for assessments. When assessments provide opportunities to engage with images or diagrams, I find that I am able to apply what I am learning more successfully. 

When developing assessments, it is imperative that examiners are mindful of the use of inclusive language. When providing students with case studies, it is valuable for evaluations to include a variety of names from different sociocultural backgrounds and lived experiences. By ensuring that assessments demonstrate diversity, students are better able to engage with them and feel that the assessment content is relevant to their lives. It is also valuable when assessments consider a variety of lived experiences and use generic language to accommodate the diverse needs of students.” 

Andrea: “The use of a variety of modes of assessment, when properly supported, decreases the risk of disadvantaging students for whom the tasks involved in conventional independent written work are disproportionately challenging. Additionally, diverse assessment appeals to a wider range of interests, and it supports students in developing a broader range of skills and experience, enabling more students to thrive beyond university. Assessment practices can also be decolonised, for example through deprioritising systems of knowledge and citation rooted in the Global North. A shift in assessment emphasis from individualism to collaboration, and from product to process, can also work to foster more sustainable and inclusive behaviours.”

Katherine: “For assessment to be meaningful to everyone, it must be inclusive. ‘Designing in’ inclusivity from the outset, such as through the Universal Design for Learning (UDL) approach (see the CAST introduction to UDL: CAST, 2018), makes assessment accessible and equitable for all, and avoids disadvantaging certain students. This can be achieved in a variety of ways. You might consider using an assessment menu to offer several options which meet the same assessment criteria but via a different medium (such as a podcast, video presentation or a poster).”

Mary: “There’s a strong relationship between meaningful assessment and inclusion. Arguably, for any assessment to be meaningful, it needs to be accessible and inclusive for all learners. Advance HE recommend using their transforming assessment framework and embedding change processes at programme level through ‘inclusive assessment labs’, where all assessments on a programme are discussed for the benefit of all. One consideration with inclusive assessment is to offer a diverse range of assessment methods across a programme, so that students are not disadvantaged by a particular method (for example, all essay-based assessment) (see the UCL toolkit). Assessment can also be made more inclusive by mapping the assignment brief to the UDL guidelines provided by CAST (2018).”

Berry: “It is important to try to create assessments that don’t disadvantage particular students.  There is quite a lot of advice on enhancing the relevance of assessment tasks to encourage student engagement and even on offering alternative assessments to improve accessibility.  However, whilst valuable, both of these methods require careful management and often, additional resources. In my view, the most effective and sustainable way to enhance the inclusivity of assessment is to improve students’ understanding of assessment and feedback.  To some extent, understanding can be improved by using accessible language and ensuring assessment briefs are laid out in a logical and clear manner.  To my mind, however, the intentional development of students’ assessment literacy is the most valuable thing we can focus on.  Any assessment task becomes more accessible to all students if they feel confident that they know how to go about its completion.”

2. Do students need to feel affiliation to an assessment task at inception for them to fully invest in the assessment process? Lisa Wakefield (University of Leicester)

Mike: “I think that it can be easier to complete a task when you properly understand the real-world reason for doing it. To me, understanding the point of doing something makes it easier to become invested in it. To be invested in the assessment process, I think it’s important to know what skills the assessment is designed to test.”

Shir: “Students do not necessarily need to feel an affiliation with an assessment task at inception for them to fully invest in the assessment process. However, this is indeed very beneficial. As a student, when provided with the opportunity to contribute to assessment structures, I have found myself engaging with the preparation for the assessment more diligently. However, this may not always be possible. It is also valuable for students to have the opportunity to ask critical questions, be provided with examples, and provide feedback on the evaluation structure to the examiner or lecturer prior to the assessment. This is particularly meaningful for assessments that are writing-based. When students are provided with opportunities to ask questions, seek clarification, and address any areas of uncertainty or disconnect, it affords a superior opportunity for student investment in assessments.”

Andrea: “No, but providing scaffolded support to help students engage with an assessment task is important, as is a clear explanation of what they will gain from it (including skills development). Giving students opportunities to practise helps to build their confidence. New or novel assessment tasks should ideally be introduced gently, with equity, accessibility and inclusivity in mind.”

Katherine: “Feeling an affiliation to an assessment task can be really beneficial and there are steps we can take to foster this. Authentic and meaningful assessment can support students to see the purpose and value of the assessment (Kay Sambell and Sally Brown’s website provides a wealth of examples and resources). Co-creation, working with Students as Partners (Healey and Healey, 2019) in the design of the task or through co-writing the assessment criteria can be very valuable.”

Berry: “As a student, I certainly didn’t feel initial engagement with assessment tasks! I still find that I engage more with any task, particularly writing, after I have started and ordered my thoughts and can see more clearly how my work is developing. Because of this, I think it is important to help students initially to see why a task is relevant and interesting and support their initial connection with the work.  In my own practice, I find the challenge here is to know when to stop and give students sufficient space to enable their own independent thinking and not coach them all the way through the process.”

Claire: “I would say absolutely yes, but the trick (challenge) is to ascertain how to make that connection. As an MBA student, I found it immensely motivating and engaging to be able to focus my assignments on my own organisation. As a member of staff, for some of my undergraduate modules, the assessment basically involves students applying course concepts (for example management theory, strategy analysis) to a chosen organisation. I tend to invite students to nominate as many organisations as they like in week 1 of the module; I then reduce that long list to a shortlist of 5-6 for them to choose from for the assignment. Students are strongly encouraged to choose their organisation early on, so that they can link their weekly learning back round to the specific context of that organisation. We would also link seminar activities to the shortlisted organisations. The benefit of the shortlist is that I retain some control over the chosen organisations and it enables me to provide meaningful support throughout the module (for example, by highlighting relevant news articles and by focusing on those organisations or industries in the examples I give in teaching sessions).”

3. Can exams be thought of as meaningful assessments? Anon.

Mike: “I think that exams can be a meaningful form of assessment. However, I feel that they have to allow you to demonstrate the thinking and skillsets you have developed, rather than just regurgitating information. Written exams also demonstrate students’ ability to communicate, both in understanding the question and answering coherently to a deadline.”

Shir: “Exams can undoubtedly be meaningful assessments. Often in academia, exam success relies on students being able to memorise and reproduce information. However, it is significantly more meaningful when exams draw on the knowledge gained from the course, harness critical thinking, and allow students to demonstrate their knowledge in a relevant and appropriate way. For example, in healthcare courses, students are often required to memorise facts and identify them in multiple choice exams. It is much more meaningful for students to be provided with case studies in which their critical thinking and ability to make appropriate decisions about a patient’s treatment are what is assessed and graded. When students have the opportunity to engage with case studies in exam assessments, they develop much deeper and more relevant skills that will foster their success in their future careers.”

Katherine: “It is helpful to look at assessment from a programme level to ensure coherence, progression and balance (including formative assessment), rather than viewing it as a series of tasks independent of each other. Asking questions such as: is this inclusive for all students? How does it support students in terms of employability? What is the progression from module to module and level to level? Considering the advantages and challenges or limitations of a range of methods can be really helpful.”

Mary: “I suggest that they are commonly not thought of as meaningful assessments, but some research suggests exams absolutely can be meaningful if presented with an emphasis on new educational opportunities (Remesal, Estrada and Corrial, 2022). When exams are created as assessment for learning, rather than of learning, a more meaningful association can be made. It is worth communicating clearly to students the purpose of exam assessment so that they are able to see its contribution to their learning.”

Berry: “Yes, I think exams can be meaningful, but it is far more difficult to enhance their real-world relevance and accessibility than for coursework – particularly within a traditional view of exams as timed, written tests and in programmes where, legitimately, exams make up the bulk of assessment methods.  The bottom line, in my view, is that creating and managing meaningful exams takes significant resourcing, and in the past, exams were used as a relatively inexpensive and less time-consuming assessment method. Meaningful exams might entail dialogic processes, flexibility in timing, and student choice over mode or even assessment methods. The design challenges are high.”

Claire: “We have all experienced exams that felt too much like a memory test and not nearly enough like a test of our ability to apply our knowledge in the context of meaningful analysis and evaluation. I would say that there are ways to make sure that exams are meaningful. My insights come from my experience in the Business School as a member of staff and also as a student (on the MBA programme). As an MBA student, open book exams were deployed on some modules. For example, on our marketing module we were allowed to study and prepare notes on a case study on Skoda (the case study was released a few weeks before the exam); we were also allowed to bring four pages of notes into the exam. This moved the exam experience along the spectrum from a memory test to a test of our analytical abilities, and was a much more satisfying experience too. From my experience as a programme lead and module leader, I have found time-controlled assessments (TCA) to be a good option, but this format needs to be discussed in detail with students so that they understand how to approach the TCA.”

4. As students what has been your most enjoyable assessment method so far in your studies and why? (Did this improve your marks too?) Jane Headley (Harper Adams University)

Mike: “I like practical coursework tasks. As a computing student, a lot of my assessments involve building a web page or program with a list of functions and expectations. It’s clear what you have to do, and the real-world applications are easy to see. Tasks often include a final report to explain the design or programming choices, benefits and limitations, which pushes you to review your work and think about whether you have met the requirements of the brief, and perhaps revise the project if not. These tend to be the assessments I do better in.”

Shir: “As a student, the most enjoyable assessment method thus far has been an objective structured clinical examination, otherwise known as an OSCE. While these exams are healthcare specific, their qualities can be widely applied to other disciplines. OSCE exams are used to test clinical skill performance and competence through case-study scenarios: students rotate around stations to demonstrate various competencies as they apply to specific cases. Through the OSCE examination, I was able to demonstrate many important skills that are essential for healthcare practitioners, including my clinical knowledge, decision-making abilities, professional reasoning, communication and interpersonal skills. Case-study-based examinations of this kind can be useful across many disciplines and academic programmes. Students today need to be able to demonstrate a variety of skills that surpass their memorisation abilities; as such, exams like OSCEs, which evaluate communication, organisation and critical thinking, are highly valuable.”

Andrea: “Anecdotally, students have told me they have enjoyed producing podcasts, infographics, and creative re-writing when encountered for the first time, and reported that this was because of the novelty, the opportunity to be creative, and the sense of accomplishment in trying something new and managing it better than they had initially expected.”

Katherine: “This was a group task: planning a lesson for an English primary school class (with very limited knowledge of French) about the water cycle and teaching it in French (using the Content and Language Integrated Learning methodology – teaching the content through the medium of another language). I remain amazed and inspired by the children – they were engaged, enthused, and learnt the key concept and some French too. This taught me about having high expectations of learners, the importance of planning and the benefits of working in a group.”

Claire: “As I reflect on my experiences as a student (MBA, MA and now EdD), I think overwhelmingly the assessment experiences which stand out are those which I enjoyed because I was clear about what was expected of me. That is partly down to how the assessment is constructed and explained, but also relates to having opportunities to have meaningful discussions with the module leader and also with my peers about what good work looks like. This can be particularly important in assessments which are more individual in nature, for example, many of my assessments on the MBA were linked to my own employment context.”

5. What can we do to encourage students to use assessment feedback to support their work? David Bryson (University of Derby)

Mike: “The depth of feedback is limited on a lot of assessments. Comments may be sparse and explanations of marking are often limited to the associated rubric checklist. Anonymised marking can also be an issue as it makes it a lot harder to seek out the individual marker to get more detailed feedback. I would personally like to see a lot more individual comments to explain where I have fallen short but also what I’ve done well.”

Shir: “Assessment feedback should be provided in a timely and relevant manner and should identify clear and applicable areas for improvement. Often as students, we receive feedback that is very specific to a submission or exam, but cannot be more widely applied to other assessments. For example, on a specific essay, students may be provided with feedback on the particular argument they have developed but not on their overall ability to communicate an argument or to use appropriate literature to support it. As such, it is very difficult to use assessment feedback to support future submissions.”

Andrea: “As assessors, we are often steered towards focussing on what the strengths or weaknesses of a piece of work are, without encouraging student reflection on why those strengths or weaknesses came about, or how they might relate to other contexts. That translating/transferring work is the key to the ongoing relevance and value of feedback, but that work is complex, and it can’t be achieved through comments within the margins of Turnitin alone. We can support students in doing that crucial translating/transferring work, by devoting class time to iterative dialogue and discussion of feedback, facilitating ‘feed forward’ style reflection. This is standard practice in class at A level and equivalent, and is critical to helping students understand the relevance and value of feedback across the much more disparate modules and assessments they encounter at university.”

Katherine: “Some of the steps we can take are to build feedback into the programme, make frequent reference to it to show how formative feedback/feedforward runs through it, and discuss the benefits of engaging with feedback. Drawing on students from previous cohorts to share their experiences can be really effective too, alongside practical steps such as: 

  • dedicating time in sessions to model how to use feedback to inform future work (videos can also be useful here) 
  • building in opportunities in sessions/tutorials/independent study to reflect on feedback and identify how to draw on it. 

Giving students agency (for example, they decide which aspect/s they would like formative feedback on) can also raise engagement, and considering assessment as dialogue through dialogic feedback (Rao and Norton, 2020) is well worth exploring.”

Mary: “Students have been found to be more likely to engage with feedback when it is relevant, there is an opportunity to engage with it, specific ways to improve are suggested and the feedback is limited to priority areas (Nicol and Macfarlane-Dick, 2006, cited in Wilson, 2012). One way to encourage engagement is to promote feedback dialogue, whereby students respond to tutor comments with their own reflections on how they will address the feedback and on any difficulties in following it.”

Claire: “As a tutor, I have mainly worked with undergraduates, and I would say that this needs to be built into subsequent teaching sessions. Mid-semester assessment feedback is likely to be developmental in nature and should be discussed in a subsequent teaching session; end-of-module feedback can be discussed in a subsequent module. In an L5 module that I used to run, I would set aside time in a seminar to discuss the feedback from the week 7 assessment – I would break it down into two distinct aspects:

a) what are the key areas for improvement? 

b) what are you actually going to do/what are your sources of support/information/who can you talk to? 

I did this because I found that students were actually quite good at identifying the key issues in their work, but they might not always know what to do with that or how to approach that developmental work. For example, I might advise students to be more critical in their analysis, but also point them to where they can go to discuss what that actually means and how they can work on it (for example, study skills books, centres and tutors)!

We can also encourage students to discuss their feedback with their academic adviser. I encourage my students to do a kind of feedback audit of the feedback they have received across all of the modules in that semester or even academic year, in order to identify the things that are going well (that’s always quite a nice thing to do!) as well as the key areas for improvement. This can reveal key issues which are cropping up across multiple modules, even in subjects like mine which are quite multidisciplinary. Not all students engage with this approach, but I would say that those who do find it useful and quite motivational, as that process of synthesis brings clarity.”

Would you like to submit a question or share your perspective as a panellist in our next issue? Please see our Contribute to Teaching Insights page for details on how to get involved.

Sincere thanks to our contributors

Andrea Macrae

Berry O’Donovan

Claire Jones

Mary Davis

Katherine Richardson

Shir Grunebaum

Mike Warren

References

CAST (2018) Universal Design for Learning Guidelines, version 2.2. http://udlguidelines.cast.org

Healey, M. and Healey, R.L. (2019) Student engagement through partnership: A guide and update to the Advance HE framework. Available at: https://www.researchgate.net/publication/338096919_Students_as_Partners_Guide_Student_Engagement_Through_Partnership_A_guide_to_the_Advance_HE_Framework (Accessed: 8 June 2022).

Nicol, D.J. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: a model and seven principles of good feedback practice’, Studies in Higher Education, 31(2), pp. 199-218. https://doi.org/10.1080/03075070600572090

Rao, N. and Norton, L. (2020) ‘Dialogic feedback: what we know and what remains to be known’, in On Your Marks: Learner-focused Feedback Practices and Feedback Literacy. Advance HE, pp. 160-168. Available at: https://hira.hope.ac.uk/id/eprint/3126/ (Accessed: 24 June 2022).

Remesal, A., Estrada, F.G. and Corrial, C.L. (2022) ‘Exams for the Purpose of Meaningful Learning: New Chances with Synchronous Self-Assessment’, in Gómez Ramos, J.L. and Gómez-Barreto, I.M. (eds.) Design and Measurement Strategies for Meaningful Learning. IGI Global, pp. 192-211.

Wilson, A. (2012) ‘Student engagement and the role of feedback in learning’, Journal of Pedagogic Development, 2(1), pp. 15-19. Available at: https://www.beds.ac.uk/jpd/volume-2-issue-1/student-engagement-and-the-role-of-feedback-in-learning/ (Accessed: 24 June 2022).


Author

  • Dr Adrian J. Wallbank

    Adrian joined Brookes in October 2021 as a part-time Lecturer in Educational Development. Adrian previously established, directed and led the Integrated Foundation Year and Academic Writing and Communication programmes at Royal Holloway and has taught across a spectrum of both staff and student-facing programmes since 2008 in areas as diverse as eighteenth-century poetry and philosophy, Romanticism, academic writing and literacies and educational development. Adrian has particular research and teaching interests in academic writing, dyslexia and inclusion, writing in the disciplines / writing across the curriculum pedagogies, neurodiversity, transition pedagogies and Universal Design for Learning, one-to-one pedagogies, the philosophy of Higher Education and didacticism / persuasion. As a successful, dyslexic academic, Adrian is passionate about inclusion and works tirelessly to help enable both students and staff to achieve their full academic and professional potential. Adrian is a Senior Fellow of the Higher Education Academy and Associate Fellow of the Dyslexia Guild.


How to cite

Wallbank, A. (2022) Your questions on meaningful assessment answered. Teaching Insights, Available at: https://teachinginsights.ocsld.org/your-questions-on-meaningful-assessment-answered/. (Accessed: 5 October 2024)
