
The Oxford English Dictionary (OED) defines meaningful as something that has “a serious, important, or recognisable quality or purpose” (the Collins dictionary summarising the latter as “useful”) – which immediately starts to answer the second part of our question. We might also ask at this point, “meaningful to whom?” And if we start with meaningful to the student, this matters because we know that if students do not see an assessment task as “important” and/or “useful”, they are more likely to approach it superficially and spend less time on it, and in some cases may even use this as an argument to justify cheating or plagiarising (Ashworth et al., 2006).
So, what makes an assessment meaningful to a student?
[From Rust, 2020] There are arguably three linked, but different, qualities that can help to make an assessment meaningful as follows.
- Validity – does the task truly assess what you claim it assesses? Writing a 500-word essay on “How to give safe injections” would not actually assess whether a nursing student could give safe injections.
- Authenticity – is it a ‘real-world’ task? Does it look like something someone might ever be expected to do in a setting outside the university? Such tasks can be even more effective if they can actually be undertaken in a real-world setting, such as a placement or a ‘live’ project (so literally doing it ‘for real’).
- Relevance – can the student see why this topic is important and why they need to know it? How does it fit with the rest of the subject and the bigger picture (the rest of their course, society, their intended career)? It’s even better if there is personal relevance – something the student is personally interested in, wants to know more about, or wants to be able to do. So, can there be elements of choice about the task(s) undertaken?
Both authenticity and relevance should make the activity meaningful to the student, and there is strong research evidence that a meaningful task increases the likelihood that the student will be motivated to engage with it. This is the exact antithesis of what is known, in common parlance, as an ‘academic exercise’ – namely, something that is essentially pointless!
Regarding authenticity, Marilyn Lombardi (2007, pp. 3–4) provides a very useful checklist that distils the authentic learning experience into ten design elements, as follows.
- Real-world relevance: Authentic activities match the real-world tasks of professionals in practice as nearly as possible. Learning rises to the level of authenticity when it asks students to work actively with abstract concepts, facts, and formulae inside a realistic – and highly social – context mimicking “the ordinary practices of the [disciplinary] culture”.
- Ill-defined problem: Challenges cannot be solved easily by the application of an existing algorithm; instead, authentic activities are relatively undefined and open to multiple interpretations, requiring students to identify for themselves the tasks and subtasks needed to complete the major task.
- Sustained investigation: Problems cannot be solved in a matter of minutes or even hours. Instead, authentic activities comprise complex tasks to be investigated by students over a sustained period of time, requiring significant investment of time and intellectual resources.
- Multiple sources and perspectives: Learners are not given a list of resources. Authentic activities provide the opportunity for students to examine the task from a variety of theoretical and practical perspectives, using a variety of resources, and require students to distinguish relevant from irrelevant information in the process.
- Collaboration: Success is not achievable by an individual learner working alone. Authentic activities make collaboration integral to the task, both within the course and in the real world.
- Reflection (metacognition): Authentic activities enable learners to make choices and reflect on their learning, both individually and as a team or community.
- Interdisciplinary perspective: Relevance is not confined to a single domain or subject matter specialization. Instead, authentic activities have consequences that extend beyond a particular discipline, encouraging students to adopt diverse roles and think in interdisciplinary terms.
- Integrated assessment: Assessment is not merely summative in authentic activities but is woven seamlessly into the major task in a manner that reflects real-world evaluation processes.
- Polished products: Conclusions are not merely exercises or substeps in preparation for something else. Authentic activities culminate in the creation of a whole product, valuable in its own right.
- Multiple interpretations and outcomes: Rather than yielding a single correct answer obtained by the application of rules and procedures, authentic activities allow for diverse interpretations and competing solutions.
There is, however, a health warning. Students can be very risk-averse: ill-defined, complex problems are very worrisome, and big projects are high-stakes assessment tasks. Collaborative assessment, where all receive the same mark for the final outcome, is considered unfair by most students. The tension is that what they want (clarity, safety, low stakes, certainty, fairness, equality and objectivity) can easily result in systems-driven, impersonal and inauthentic assessment tasks. The challenge, therefore, is to give sufficient guidance, support and scaffolding to overcome their fears and objections.
Meaningful feedback
A further aspect of making an assessment meaningful must be the effectiveness of the feedback the student gains from the experience in helping them see the strengths and weaknesses in what they’ve done, and what they consequently need to do next time in order to do better. Nicol and Macfarlane-Dick (2006, p. 205) distil good feedback practice into the following seven principles.
- Helps clarify what good performance is
- Facilitates the development of reflection and self-assessment in learning
- Delivers high quality information to students about their learning
- Encourages teacher and peer dialogue around learning
- Encourages positive motivational beliefs and self-esteem
- Provides opportunities to close the gap between current and desired performance
- Provides information to teachers that can be used to help shape the teaching
More succinctly, Val Shute (2008, p. 175) argues that for feedback to be effective, there are three requisite conditions – motive, opportunity and means. The student needs to be motivated to engage with the feedback, and this is more likely if they can see they will have the opportunity to put the feedback into practice (and less likely if the feedback is accompanied by a mark or grade, which will become their main, and possibly only, focus of attention). They also need the means (help, support, guidance) to address their shortcomings.
And what about being meaningful to others?
Nicol and Macfarlane-Dick’s last feedback principle neatly brings us back to the question of meaningful to whom – in this case not just the student but also the teachers. In considering whether assessment is meaningful to teachers, and to the wider system beyond, there are two major concerns.
Firstly, there is the issue of reliability. There is a considerable and consistent literature showing that the marking of student work is not very reliable, either between different markers or, even more worryingly, within an individual marker, with different marks given to the same piece of work at different times (Hartog and Rhodes, 1935; Laming, 1990; Wolf, 1995; Leach, Neutze and Zepke, 2001; Elander and Hardman, 2002; Newstead, 2002; Baume, Yorke and Coffey, 2004; Norton, 2004; Hanlon et al., 2004; Read et al., 2005; Price, 2005; Shay, 2004 and 2005; Brooks, 2012; O’Hagan and Wigglesworth, 2014; Bloxham et al., 2015) [this list is not exhaustive].
Secondly, there is the prevalence of recording the outcome of an assessment as a mark or grade – most commonly a percentage mark. “What does a score of 55%, for example, actually mean? Two students can have the same score while having very different strengths and weaknesses. The number ignores and obscures this detail … Aggregation of criteria is a further problem. If one criterion is inadequately met can that be mitigated by another criterion being met well? And could a student go on failing to meet that criterion but always manage to pass because the aggregated mark is a pass?” (Rust, 2011, p. 2)
The combination and aggregation of marks has serious implications for the validity (and therefore meaning) of assessment.
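To make the aggregation problem concrete, here is a minimal numerical sketch in Python (the criterion names, weights and the 40% pass line are invented purely for illustration, not taken from Rust): two students end up with an identical overall mark of 55%, even though one of them has clearly failed a criterion.

```python
# Minimal sketch of the aggregation problem described by Rust (2011).
# The criteria, weights and 40% pass line are illustrative assumptions.
weights = {"argument": 0.4, "evidence": 0.4, "referencing": 0.2}

# Two students with very different strengths and weaknesses.
students = {
    "Student A": {"argument": 55, "evidence": 55, "referencing": 55},
    "Student B": {"argument": 70, "evidence": 60, "referencing": 15},
}

for name, marks in students.items():
    overall = sum(weights[c] * marks[c] for c in weights)  # weighted mean
    failed = [c for c, m in marks.items() if m < 40]       # assumed pass line
    print(f"{name}: overall {overall:.0f}%, failed criteria: {failed or 'none'}")

# Both lines report an overall mark of 55%: the aggregated number hides
# that Student B never met the referencing criterion, yet still passes.
```

Both students pass on the aggregate, and the single number records nothing about the criterion that was never met – which is exactly the loss of meaning Rust describes.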
But perhaps most worryingly of all when it comes to grading – and this brings us back to the question of meaningfulness for the student – Dahlgren et al. (2009) have shown that if a piece of work is to be graded (anything beyond a simple pass or fail), students are more likely to take a surface approach, and are much less likely to see the task as a learning opportunity.
References
Baume, D., Yorke, M. and Coffey, M. (2004) ‘What is happening when we assess, and how can we use our understanding of this to improve assessment?’, Assessment and Evaluation in Higher Education, 29(4), pp. 451–477. https://doi.org/10.1080/02602930310001689037
Bloxham, S., Hudson, J., den Outer, B. and Price, M. (2015) ‘External peer review of assessment: An effective approach to verifying standards?’, Higher Education Research and Development, 34(6), pp. 1069–1082. https://doi.org/10.1080/07294360.2015.1024629
Brooks, V. (2012) ‘Marking as judgment’, Research Papers in Education, 27(1), pp. 63–80. https://doi.org/10.1080/02671520903331008
Dahlgren, L.O., Fejes, A., Abrandt-Dahlgren, M. and Trowald, N. (2009) ‘Grading systems, features of assessment and students’ approaches to learning’, Teaching in Higher Education, 14(2), pp. 185–194. https://doi.org/10.1080/13562510902757260
Elander, J. and Hardman, D. (2002) ‘An application of judgement analysis to examination marking in psychology’, British Journal of Psychology, 93, pp. 303–328. https://doi.org/10.1348/000712602760146233
Hanlon, J., Jefferson, M., Molan, M. and Mitchell, B. (2004) An examination of the incidence of ‘error variation’ in the grading of law assessments. United Kingdom Centre for Legal Education. Available at: http://www.law.uwa.edu.au/__data/assets/pdf_file/0006/1888611/Hanlon.pdf (Accessed: 10 June 2022)
Hartog, P. and Rhodes, E.C. (1935) An Examination of Examinations. London: Macmillan.
Laming, D. (1990) ‘The reliability of a certain university examination compared with the precision of absolute judgments’, Quarterly Journal of Experimental Psychology, 42, pp. 239–254. https://doi.org/10.1080/14640749008401220
Leach, L., Neutze, G. and Zepke, N. (2001) ‘Assessment and empowerment: Some critical questions’, Assessment and Evaluation in Higher Education, 26(4), pp. 293–305. https://doi.org/10.1080/02602930120063457
Lombardi, M.M. (2007) ‘Authentic Learning for the 21st Century: An overview’, ELI Paper 1, EDUCAUSE Learning Initiative. Available at: https://www.researchgate.net/publication/220040581_Authentic_Learning_for_the_21st_Century_An_Overview (Accessed: 10 June 2022)
Newstead, S.E. (2002) ‘Examining the examiners: Why are we so bad at assessing students?’, Psychology Learning & Teaching, 2, pp. 70–75. https://doi.org/10.2304/plat.2002.2.2.70
Nicol, D. and Macfarlane-Dick, D. (2006) ‘Formative assessment and self-regulated learning: A model and seven principles of good feedback’, Studies in Higher Education, 31(2), pp. 199–218. https://doi.org/10.1080/03075070600572090
Norton, L. (2004) ‘Using assessment criteria as learning criteria: A case study in psychology’, Assessment and Evaluation in Higher Education, 29(6), pp. 687–702. https://doi.org/10.1080/0260293042000227236
O’Hagan, S.R. and Wigglesworth, G. (2014) ‘Who’s marking my essay? The assessment of non-native-speaker and native-speaker undergraduate essays in an Australian higher education context’, Studies in Higher Education, 40(9), pp. 1729–1747. https://doi.org/10.1080/03075079.2014.896890
Price, M. (2005) ‘Assessment standards: The role of communities of practice and the scholarship of assessment’, Assessment and Evaluation in Higher Education, 30(3), pp. 215–230. https://doi.org/10.1080/02602930500063793
Read, B., Francis, B. and Robson, J. (2005) ‘Gender, bias, assessment and feedback: Analyzing the written assessment of undergraduate history essays’, Assessment and Evaluation in Higher Education, 30(3), pp. 241–260. https://doi.org/10.1080/02602930500063827
Rust, C. (2011) ‘The unscholarly use of numbers in our assessment practices: What will make us change?’, International Journal for the Scholarship of Teaching and Learning, 5(1). https://digitalcommons.georgiasouthern.edu/ij-sotl/vol5/iss1/4/
Rust, C. (2020, revised) Re-thinking assessment – a programme leader’s guide. Available at: https://archive.ocsld.org/re-thinking-assessment-a-programme-leaders-guide/ (Accessed: 10 June 2022)
Shay, S.B. (2004) ‘The assessment of complex performance: A socially situated interpretive act’, Harvard Educational Review, 74(3), pp. 307–329.
Shay, S. (2005) ‘The assessment of complex tasks: A double reading’, Studies in Higher Education, 30(6), pp. 663–679. https://doi.org/10.1080/03075070500339988
Shute, V. (2008) ‘Focus on formative feedback’, Review of Educational Research, 78(1), pp. 153–189. https://doi.org/10.3102/0034654307313795
Wolf, A. (1995) Competence-based Assessment. Buckingham: Open University Press.