Meaningful assessment: What is it and why does it matter?

Photo by AbsolutVision on Unsplash

The Oxford English Dictionary (OED) defines meaningful as something that has “a serious, important, or recognisable quality or purpose” (the Collins dictionary summarising the latter as “useful”) – which immediately starts to answer the second part of our question. We might also ask at this point, “meaningful to whom?” And if we start with meaningful to the student, this matters because we know that if students do not see an assessment task as “important” and/or “useful”, they are more likely to do it superficially and spend less time on the task, and even use it as an argument to justify cheating or plagiarising in some cases (Ashworth et al., 2006).

So, what makes an assessment meaningful to a student?

There are arguably three linked, but different, qualities that can help to make an assessment meaningful (Rust, 2020), as follows.

  • Validity – does the task truly assess what you claim it assesses? Writing a 500-word essay on “How to give safe injections” would not actually assess whether a nursing student could give safe injections.
  • Authenticity – is it a ‘real-world’ task? Does it look like something someone might ever be expected to do in a setting outside the university? Such tasks can be even more effective if they can actually be undertaken in a real-world setting, such as a placement, or by undertaking a ‘live’ project (so literally doing it ‘for real’).
  • Relevance – can the student see why this topic is important and why they need to know this? How does it fit with the rest of the subject and the bigger picture (the rest of their course, society, their intended career)? It’s even better if there can be personal relevance – if it can be something the student is personally interested in, wants to know more about, or wants to be able to do. So, can there be elements of choice about the task(s) undertaken?

Both authenticity and relevance should make the activity meaningful to the student, and there is strong research evidence that, if it is meaningful, the student is more likely to be motivated to engage with the task. This is the exact antithesis of what is known, in common parlance, as an ‘academic exercise’: something that is essentially pointless!

Regarding authenticity, Marilyn Lombardi (2007, pp. 3-4) provides a very useful checklist which distils ten design elements of the authentic learning experience as follows.

  1. Real-world relevance: Authentic activities match the real-world tasks of professionals in practice as nearly as possible. Learning rises to the level of authenticity when it asks students to work actively with abstract concepts, facts, and formulae inside a realistic—and highly social—context mimicking “the ordinary practices of the [disciplinary] culture”.
  2. Ill-defined problem: Challenges cannot be solved easily by the application of an existing algorithm; instead, authentic activities are relatively undefined and open to multiple interpretations, requiring students to identify for themselves the tasks and subtasks needed to complete the major task.
  3. Sustained investigation: Problems cannot be solved in a matter of minutes or even hours. Instead, authentic activities comprise complex tasks to be investigated by students over a sustained period of time, requiring significant investment of time and intellectual resources.
  4. Multiple sources and perspectives: Learners are not given a list of resources. Authentic activities provide the opportunity for students to examine the task from a variety of theoretical and practical perspectives, using a variety of resources, and require students to distinguish relevant from irrelevant information in the process.
  5. Collaboration: Success is not achievable by an individual learner working alone. Authentic activities make collaboration integral to the task, both within the course and in the real world.
  6. Reflection (metacognition): Authentic activities enable learners to make choices and reflect on their learning, both individually and as a team or community.
  7. Interdisciplinary perspective: Relevance is not confined to a single domain or subject matter specialization. Instead, authentic activities have consequences that extend beyond a particular discipline, encouraging students to adopt diverse roles and think in interdisciplinary terms.
  8. Integrated assessment: Assessment is not merely summative in authentic activities but is woven seamlessly into the major task in a manner that reflects real-world evaluation processes.
  9. Polished products: Conclusions are not merely exercises or substeps in preparation for something else. Authentic activities culminate in the creation of a whole product, valuable in its own right.
  10. Multiple interpretations and outcomes: Rather than yielding a single correct answer obtained by the application of rules and procedures, authentic activities allow for diverse interpretations and competing solutions.

There is, however, a health warning. Students can be very risk-averse; ill-defined, complex problems are very worrisome, and big projects are high-stakes assessment tasks. Collaborative assessment, where all receive the same mark for the final outcome, is considered unfair by most students. The tension is that what they want (clarity, safety, low stakes, certainty, fairness, equality and objectivity) can easily result in systems-driven, impersonal and inauthentic assessment tasks. The challenge, therefore, is to give sufficient guidance, support and scaffolding to overcome their fears and objections.

Meaningful feedback

A further aspect to making an assessment meaningful must be the effectiveness of the feedback the student gains from the experience in helping them see the strengths and weaknesses in what they’ve done, and subsequently what they need to do next time in order to do better. Nicol and Macfarlane-Dick (2006, p.205) distil good feedback practice into the following seven principles.

  1. Helps clarify what good performance is
  2. Facilitates the development of reflection and self-assessment in learning
  3. Delivers high quality information to students about their learning
  4. Encourages teacher and peer dialogue around learning
  5. Encourages positive motivational beliefs and self-esteem
  6. Provides opportunities to close the gap between current and desired performance
  7. Provides information to teachers that can be used to help shape the teaching

More succinctly, Val Shute (2008, p. 175) argues that for feedback to be effective there are three requisite conditions – motive, opportunity and means. The student needs to be motivated to engage with the feedback, and this is more likely if they can see they will have the opportunity to put the feedback into practice (and less likely if the feedback is accompanied by a mark or grade, which will become their main, and possibly only, focus of attention). They also need the means (help, support, guidance) to address their shortcomings.

And what about being meaningful to others?

Nicol and Macfarlane-Dick’s last feedback principle neatly brings us back to the question of meaningful to whom – in this case not just the student but also the teachers. In considering whether assessment is meaningful to teachers, and to the wider system beyond, there are two major concerns.

Firstly, there is the issue of reliability. There is considerable and consistent literature showing that the marking of student work is not very reliable, either between different markers or, even more worryingly, by an individual marker, with different marks given to the same piece of work marked at different times (Hartog and Rhodes, 1935; Laming, 1990; Wolf, 1995; Leach, Neutze and Zepke, 2001; Elander and Hardman, 2002; Newstead, 2002; Baume, Yorke and Coffey, 2004; Norton, 2004; Hanlon et al., 2004; Read et al., 2005; Price, 2005; Shay, 2004 and 2005; Brooks, 2012; O’Hagan and Wigglesworth, 2014; Bloxham et al., 2015) [this list is not exhaustive].

Secondly, there is the prevalence of recording the outcome of an assessment as a mark or grade – most commonly a percentage mark. “What does a score of 55%, for example, actually mean? Two students can have the same score while having very different strengths and weaknesses. The number ignores and obscures this detail … Aggregation of criteria is a further problem. If one criterion is inadequately met can that be mitigated by another criterion being met well? And could a student go on failing to meet that criterion but always manage to pass because the aggregated mark is a pass?” (Rust, 2011, p.2)

The combination and aggregation of marks has serious implications for the validity (and therefore meaning) of assessment.

But perhaps most worryingly of all when it comes to grading (and this brings us back to the question of meaningfulness for the student), Dahlgren et al. (2009) have shown that if a piece of work is to be graded – anything beyond a simple pass or fail – students are more likely to take a surface approach, and much less likely to see the task as a learning opportunity.


References
Baume, D., Yorke, M. and Coffey, M. (2004) ‘What is happening when we assess, and how can we use our understanding of this to improve assessment?’, Assessment and Evaluation in Higher Education, 29(4), pp. 451–477.

Bloxham, S., Hudson, J., den Outer, B. and Price, M. (2015) ‘External peer review of assessment: An effective approach to verifying standards?’, Higher Education Research and Development, 34(6), pp. 1069–1082.

Brooks, V. (2012) ‘Marking as judgment’, Research Papers in Education, 27(1), pp. 63–80.

Dahlgren, L.O., Fejes, A., Abrandt-Dahlgren, M. and Trowald, N. (2009) ‘Grading systems, features of assessment and students’ approaches to learning’, Teaching in Higher Education, 14(2), pp. 185–194.

Elander, J. and Hardman, D. (2002) ‘An application of judgement analysis to examination marking in psychology’, British Journal of Psychology, 93, pp.303–28.

Hanlon, J., Jefferson, M., Molan, M. and Mitchell, B. (2004) An examination of the incidence of ‘error variation’ in the grading of law assessments. United Kingdom Centre for Legal Education. Available at: (Accessed: 10 June 2022)

Hartog, P. and Rhodes, E.C. (1935) An Examination of Examinations. London: Macmillan.

Laming, D. (1990) ‘The reliability of a certain university examination compared with the precision of absolute judgments’, Quarterly Journal of Experimental Psychology, 42, pp. 239-254.

Leach, L., Neutze, G. and Zepke, N. (2001) ‘Assessment and empowerment: Some critical questions’, Assessment & Evaluation in Higher Education, 26(4), pp. 293–305.

Lombardi, M.M. (2007) ‘Authentic Learning for the 21st Century: An overview. ELI Paper 1’, Educause Learning Initiative. Available at: (Accessed: 10 June 2022)

Newstead, S.E. (2002) ‘Examining the examiners: Why are we so bad at assessing students?’, Psychology Learning & Teaching, 2, pp. 70-75.

Nicol, D. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: A model and seven principles of good feedback’, Studies in Higher Education, 31 (2), pp. 199-218.

Norton, L. (2004) ‘Using assessment criteria as learning criteria: A case study in psychology’, Assessment and Evaluation in Higher Education, 29 (6), pp. 687-702.

O’Hagan, S.R. and Wigglesworth, G. (2014) ‘Who’s marking my essay? The assessment of non-native-speaker and native-speaker undergraduate essays in an Australian higher education context’, Studies in Higher Education, 40(9), pp.1729–1747.

Price, M. (2005) ‘Assessment Standards: The Role of Communities of Practice and the Scholarship of Assessment’, Assessment and Evaluation in Higher Education, 30(3), pp. 215–230.

Read, B., Francis, B. and Robson, J. (2005) ‘Gender, bias, assessment and feedback: Analyzing the written assessment of undergraduate history essays’, Assessment and Evaluation in Higher Education, 30(3), pp. 241–260.

Rust, C. (2011) ‘The unscholarly use of numbers in our assessment practices; what will make us change?’, International Journal for the Scholarship of Teaching and Learning, 5(1).  

Rust, C. (2020, Revised) Re-thinking assessment – a programme leader’s guide. Available at: (Accessed: 10 June 2022)

Shay, S.B. (2004) ‘The assessment of complex performance: A socially situated interpretive act’, Harvard Educational Review, 74(3), pp.307–29. 

Shay, S. (2005) ‘The assessment of complex tasks: A double reading’, Studies in Higher Education, 30(6), pp. 663–79.

Shute, V. (2008) ‘Focus on formative feedback’, Review of Educational Research, 78(1), pp. 153–189.

Wolf, A. (1995) Competence-based assessment. Buckingham: Open University Press.



  • Chris Rust

    Chris Rust is Emeritus Professor of Higher Education at Oxford Brookes, where he worked for over 25 years. He was Head of the Oxford Centre for Staff and Learning Development, and Deputy Director of the Human Resource Directorate from 2001 to 2011. Between 2005 and 2010 he was also Deputy Director for two Centres for Excellence in Teaching and Learning – ASKe (Assessment Standards Knowledge Exchange) and the Reinvention Centre for undergraduate research (led by Warwick University). For his last three years he was Associate Dean (Academic Policy). He has researched and published on a wide range of pedagogic issues but, most especially, on assessment.

How to cite

Rust, C. (2022) ‘Meaningful assessment: What is it and why does it matter?’, Teaching Insights. Available at: (Accessed: 21 September 2023)


Posted in Edition 2, The Bigger Picture