Assessment: formative and classroom-based

How to use this guide

Formative assessment is a key process for improving teaching and learning. As such, this MESH Guide is suitable for all teachers of all age groups. It draws upon a wealth of research and practice-based evidence which can be read and applied directly in the busy classroom.

About this guide

There is over fifty years' worth of research evidence to suggest that formative assessment, when used effectively, can have a significant impact on pupil outcomes. Despite the wealth of research into, and exemplification of, good practice, formative assessment in some countries, England for example, has not had the impact it promised (Coe, 2013): it has become confused in some schools, is weak in practice, and therefore requires clarifying.

To support teachers in developing their understanding of formative assessment, this MESH Guide:

  • explains what formative assessment is;

  • offers concise information as to why it has not had the impact it promised in schools (using England as an example); and 

  • suggests how, using Leahy et al.’s (2005) short-cycle formative assessment framework, it can be made better in the busy classroom to improve both the quality of teaching and outcomes for all pupils.

Within Leahy et al.'s (2005) framework, the important strategies of sharing and clarifying learning intentions and success criteria, eliciting quality evidence of learning, providing feedback to move teaching and learning forward, and self- and peer-assessment are explored to support quality minute-by-minute, day-by-day assessment practice.

References

Assessment Reform Group (1999) Assessment for learning - Beyond the black box, Cambridge, UK: University of Cambridge School of Education.

Bjork, R. and Bjork, E. (1992) A new theory of disuse and an old theory of stimulus fluctuation, In: A. Healy, S. Kosslyn and R. Shiffrin (Eds.) From learning processes to cognitive processes: Essays in honor of William K. Estes, (Vol. 2, pp. 35-67), Hillsdale, NJ: Erlbaum.

Black, P. and Wiliam, D. (1998) Assessment and classroom learning, Assessment in Education: Principles, Policy & Practice, 5(1), pp. 7-74.

Black, P. and Wiliam, D. (2018) Classroom assessment and pedagogy, Assessment in Education: Principles, Policy & Practice, 25(6), pp. 1-24.

Bloom, B. (1969) Some theoretical issues relating to educational evaluation, In: R. Tyler, (Ed.) Educational evaluation: New roles, new means: The 68th yearbook of the National Society for the Study of Education (part 2), Vol. 68(2). Chicago, IL: University of Chicago Press, pp. 26-50.

Booth, N. (2017) What is formative assessment, why hasn't it worked in schools, and how can we make it better in the classroom? Impact - Journal for the Chartered College of Teaching, 1, pp. 27-29.

Booth, N. (2019) In-school summative and minute-by-minute formative assessment in the classroom, In: S. Capel, M. Leask and S. Younie (Eds.) Learning to teach in the secondary school (8th edition), Abingdon, UK: Routledge.

Broadfoot, P. (2008) An introduction to assessment, London, UK: Continuum International Publishing Group.

Brookhart, S., Stiggins, R., McTighe, J. and Wiliam, D. (2019) The future of assessment practices: Comprehensive and balanced assessment systems, Available at: https://www.dylanwiliamcenter.com/whitepapers/assessment/ [Last accessed 28 July 2020].

Butler, D. and Winne, P. (1995) Feedback and self-regulated learning: A theoretical synthesis, Review of Educational Research, 65(3), pp. 245-281.

Butler, R. (1988) Enhancing and undermining intrinsic motivation: The effects of task-involving and ego-involving evaluation on interest and performance, British Journal of Educational Psychology, 58(1), pp. 1-14.

Clarke, S. (2005) Formative assessment in the secondary classroom, London, UK: Hodder and Stoughton.

Coe, R. (2013) Improving education: A triumph of hope over experience, Durham, UK: Centre for Education and Monitoring, Durham University.

Cronbach, L. (1971) Test validation. In: R. Thorndike (Ed.), Educational measurement (2nd ed., pp. 443-507), Washington DC: American Council on Education.

Department for Education (DfE) (2015) Final report of the commission on assessment without levels, London, UK: The Stationery Office.

Elawar, M. and Corno, L. (1985) A factorial experiment in teachers' written feedback on student homework: Changing teacher behaviour a little rather than a lot, Journal of Educational Psychology, 77(2), pp. 162-173.

Fautley, M. and Savage, J. (2008) Assessment for learning and teaching in secondary schools, Exeter, UK: Learning Matters Ltd.

Fuchs, L. and Fuchs, D. (1986) Effects of systematic formative evaluation - a meta-analysis, Exceptional Children, 53(3), pp. 199-208.

Goertz, M., Oláh, L. N. and Riggan, M. (2009) From testing to teaching: The use of interim assessments in classroom instruction (Vol. 65), Philadelphia, PA: Consortium for Policy Research in Education.

Hattie, J. and Clarke, S. (2019) Visible Learning Feedback, Abingdon, UK: Routledge.

Kluger, A. and DeNisi, A. (1996) The effects of feedback interventions on performance: A historical view, a meta-analysis, and a preliminary feedback intervention theory, Psychological Bulletin, 119(2), pp. 254-284.

Leahy, S., Lyon, C., Thompson, M. and Wiliam, D. (2005) Classroom assessment: Minute-by-minute and day-by-day, Educational Leadership, 63(3), pp. 18-24.

Sadler, R. (1989) Formative assessment and the design of instructional systems, Instructional Science, 18, pp. 119-144.

Scriven, M. (1967) The methodology of evaluation. In: R. Tyler, R. Gagné and M. Scriven (Eds.) Perspectives of curriculum evaluation: Volume 1, Chicago, IL: Rand McNally, pp. 39-83.

Stewart, W. (2012) Think you've implemented assessment for learning? Times Educational Supplement, Available from: https://www.tes.com/news/tes-archive/tes-publication/think-youve-implemented-assessment-learning [Last accessed 28 July 2020].

Swaffield, S. (2011) Getting to the heart of authentic assessment for learning, Assessment in Education: Principles, Policy & Practice, 18(4), pp. 433-449.

Wiliam, D. (2007) Comparative analysis of assessment practice and progress in the UK and USA, Presentation at Westminster Education Forum Seminar on Assessment, 10 October 2007.

Wiliam, D. (2014) Formative assessment and contingency in the regulation of learning processes, Paper presented in a Symposium entitled Towards a Theory of Classroom Assessment as the Regulation of Learning at the annual meeting of the American Educational Research Association, Philadelphia, PA: April.

Wiliam, D. (2018a) Embedded formative assessment (2nd ed.), Bloomington, IN: Solution Tree Press.

Wiliam, D. (2018b) The handbook for embedded formative assessment, Bloomington, IN: Solution Tree Press.

Wiliam, D. (2018c) What do we mean by assessment for learning? Available from: https://www.dylanwiliamcenter.com/ [Last accessed 28 July 2020].

Wiliam, D. and Leahy, S. (2015) Embedding formative assessment, West Palm Beach, FL: Learning Sciences International.

Zimmerman, B. and Schunk, D. (2011) Handbook of self-regulation of learning and performance, New York, NY: Routledge.

From formative evaluation to formative assessment

It is widely accepted that Scriven was the first to use the term “formative” where its role was to evaluate the ‘on-going improvement of the curriculum’ (Scriven, 1967: 41). Two years later, Bloom applied Scriven’s definition to classroom-based testing: 

By formative evaluation we mean evaluation by brief tests used by teachers and students as aids in the learning process. While such tests may be graded and used as part of the judging and classificatory function of evaluation, we see much more effective use of formative evaluation if it is separated from the grading process and used primarily as an aid to teaching (1969: 48).

The term “formative assessment” was initially used within higher education contexts in the United Kingdom where it was used to describe any sort of assessment leading up to the main one (Wiliam, 2014). Wiliam (2014) states that during the 1970s and 1980s, the terms “formative evaluation” and “formative assessment” were not subject to much research and when they were (for example, Fuchs and Fuchs, 1986), the general consensus was that they referred to procedures such as tests for informing future teaching.

What is assessment?

Assessment, according to Cronbach, can be described as a ‘procedure for making inferences’ (1971: 447).

This is an important concept because, for many years, formative and summative assessment have been discussed as descriptions of assessment as opposed to inferences. As descriptions, summative assessment would often be labelled as providing learners with a score or grade on a piece of work (or test), and formative assessment would often be described as the giving of comments to improve the work in some way. When formative and summative are thought of as inferences (as suggested by Black and Wiliam, 2018), we begin to think more deeply about the types of conclusions we are making from the information gathered.

For example, when listening to a student play a piece on the piano, a teacher could infer that a learner's left-hand technique is secure but that they have particular difficulty with their right-hand scales. They might also give comments to improve this difficulty. Although comments have been given to the pupil, if this is the only inference made then this is probably better described as summative assessment (even though no score or grade has been given); it relates to the status of the pupil, since it has been "summed up" that they are good at one thing but not another. Formative assessment takes place, then, not only when a developmental point has been identified but, more importantly, when opportunities have been provided for the pupil to practise and get better. In other words, for formative assessment to be truly formative, the information elicited has to be acted upon.

Defining formative assessment

Sadler (1989) argued that the term “formative assessment” should be integrated within effective teaching. He stated:

[it] is concerned with how judgements about the quality of student responses (performances, pieces, or works) can be used to shape and improve the student’s competence by short-circuiting the randomness and inefficiency of trial-and-error learning (1989: 120).

He also makes us aware that formative assessment should not be the sole responsibility of the teacher, but also requires changes in learners, too:

The indispensable conditions for improvements are that the student comes to hold a concept of quality similar to that held by the teacher, is able to monitor continuously the quality of what is being produced during the act of production itself, and has a repertoire of alternative moves or strategies from which to draw at any given point. In other words, students have to be able to regulate what they are doing during the doing of it (1989: 121).    

Within the context of the United Kingdom, the term "formative assessment" tends to be built upon the work of Black and Wiliam (1998) as well as the Assessment Reform Group (1999). Having set out to research the effects of formative assessment practices, Black and Wiliam defined formative assessment as:

all those activities undertaken by teachers and/or their students, which provide information to be used as feedback to modify teaching and learning activities in which they are engaged (1998: 8).

Based on Black and Wiliam’s (1998) definition, it could be argued that for formative assessment to be effective two key ingredients are required: formative intention and formative action. Formative intention could relate to the strategies used by teachers in classrooms (for example, questioning, exit tickets, and the giving of comments to improve) with the intention that they will be used by the recipients, whereas formative action, then, is the actual active use of the information gathered or shared in order to make a difference to the teaching and learning cycle.

Differences between formative assessment and Assessment for Learning (AfL)

Although in the academic literature formative assessment and Assessment for Learning (AfL) are often used synonymously, it is important to note that, for some, the two terms do differ. For example, Wiliam (2018c) states that:

Paul Black and I have always used the term “formative assessment”, but the government adopted the term “Assessment for Learning”. It's a very attractive way of looking at it because we have “Assessment of Learning” bad, Assessment for Learning, good. It’s very easy to contrast those two things. The problem is that Assessment for Learning focuses on the intention rather than the action. It says “I’m collecting this information in order to…” and I go into classrooms and I see lots of formative intention, but I see very little formative action. I see very little use of that evidence actually to make a difference. Now, the word “formative” to me has a very clear etymology; formative experiences are the experiences that shaped us as individuals. In the same way, I think formative assessment should be assessment that actually shapes learning.  

Short-cycle formative assessment framework

Leahy et al. (2005) (cited in Wiliam and Leahy, 2015: 11) provide a useful framework (shown below) which crosses three phases (where the learner is going, where the learner currently is, and how to get there) with three types of classroom-based agents (teacher, peer, and learner).

 

Teacher

  • Where the learner is going: clarifying, sharing and understanding learning intentions and criteria for success

  • Where the learner currently is: engineering effective discussions, tasks, and activities that elicit quality evidence of learning

  • How to get there: providing feedback that moves learning forward

Peer

  • Activating students as learning resources for one another (peer-assessment)

Learner

  • Activating students as owners of their own learning (self-assessment)

 

In Wiliam and Leahy’s words:

Clarifying, sharing, and understanding learning intentions – deals with the joint responsibility of teachers, the learners themselves, and their peers to break this down into a number of criteria for success. The second strategy deals with the teacher’s role in finding out where learners are in their learning, once [they are] clear about the learning intentions (this sequence is deliberate – until you know what you want your students to learn, you do not know what evidence to collect). The third strategy emphasizes the teacher’s role in providing feedback to the students that tells them not only where they are but also what steps they need to take to move their learning forward. The fourth strategy emphasizes the role that peer assessment can play in supporting student learning and also makes clear that the purpose of peer assessment within a formative assessment framework is not to judge the work of a peer so much as to improve it. Finally, the fifth strategy emphasizes that the ultimate goal is always to produce independent learners (2015: 11).

These key formative assessment strategies are unpicked further in section 4 “Pedagogical Interventions”.  

Why hasn’t formative assessment had the impact it promised?

Despite the wealth of research into and exemplification of good formative assessment practices, Booth (2017), in his article for England's Chartered College of Teaching, cites some of the key reasons why formative assessment has not had the impact it promised in English schools.

First, we need to consider the definition issue. As the Assessment Reform Group have stated:

The term “formative” itself is open to a variety of interpretations and often means no more than that assessment is carried out frequently and is planned at the same time as teaching. It may be formative in helping the teacher identify areas where more explanation or practice is needed. But for the pupils, the marks or remarks in their work may tell them about their successes or failures but not how to make progress towards future learning (1999: 7).  

This point is particularly exemplified by Dylan Wiliam who, in an interview published in England’s Times Educational Supplement magazine, commented ‘the big mistake that Paul and I made was calling this stuff “assessment” … because when you use the word assessment, people think about tests and exams’ (Stewart, 2012). 

Secondly, the concept of feedback requires some unpacking. For example, in many schools it is common for teachers to give students a mark, level, or grade with comments to improve. This idea has been widely researched, along with other modalities of feedback. A pioneering study in this area was conducted by Butler (1988) who, in her oft-cited article, looked into the effects of different types of feedback on students in Israel across two lessons. The following key findings were reported:

  • students who were given marks only made no gain in attainment between the two lessons,

  • students who were given comments only improved, on average, by 30% compared to their previous performance, and

  • students who were given marks and comments made, surprisingly, no gain. 

This is an important consideration for teachers and school policy makers because, whilst the idea of giving marks and comments to improve might appear favourable, what seems likely to happen in practice is that students look at their score, compare it with the person nearest to them, and ignore the comments. As such, the time the teacher has spent writing the comments is largely wasted.

A third reason relates to Fautley and Savage's (2008) acknowledgement that, in some English schools, there is often pressure on teachers (and students), presumably from their senior leadership teams, to produce high levels of attainment in the form of marks or grades from assessments. With this in mind, we need to consider the fact that teachers may consciously be neglecting their formative practices and beliefs in favour of frequent summative assessments, albeit sometimes mini ones, to meet requests for data-tracking purposes.

Bearing these practices in mind, England's Department for Education found that 'formative assessment was not always being used as an integral part of effective teaching' (DfE, 2015: 13) and that '[i]nstead of using classroom assessment to identify strengths and gaps in pupils' knowledge and understanding of the programmes of study, some teachers were simply tracking pupils' progress towards target levels' (DfE, 2015: 13).

Validity and reliability in assessment

An assessment system which relies heavily on making inferences about individuals, teachers, and schools from end-of-course examination data alone can be seen as hugely problematic. To some, one of the principal issues is that, by doing so, a serious threat to validity occurs: construct under-representation. What this means is that, since only a small part of the entire taught curriculum can be assessed, other equally important learning outcomes go untested (Brookhart et al., 2019). Furthermore, when considering reliability, there are also issues because no single test or examination can ever be 100% reliable, and classifying learners (by grading them, for example) will always involve a certain amount of error.

Formative assessment can help address the notion of construct under-representation as a threat to validity, as well as reliability; instead of getting a “snapshot” of learning, what we are getting is a broader ‘photo album’ (Brookhart et al., 2019). What this means is that ‘it has the effect of lengthening the test’ (Wiliam, 2007: 1). 

For example, during the normal course of teaching and learning, teachers gather evidence from a range of regular activities covering a more varied, and more complete, set of learning goals, which can then be integrated into the normal day-to-day teaching and learning cycle. This is a particular strength of an effective assessment system; it includes important information about learning which externally-set tests cannot capture. In principle, what this means is that a much wider variety of learning outcomes can be assessed (for example, deeper assessment which includes creativity and problem solving in more "authentic" contexts) and not just those that can be easily tested.

Learning intentions and success criteria

Learning Intentions

In many English schools, it is quite common for teachers to begin the lesson by sharing the Learning Intention(s) (also commonly referred to as Learning Objectives) with students. Although wording may differ slightly, these tend to fall into the following three categories:

  • To know…

  • To understand… (sometimes know how…)

  • To be able to…

Writing learning intentions that are clear is hard, even for more experienced teachers, and Booth (2019) (based on Clarke’s (2005) work) provides some useful information for teachers to consider. 

First, teachers need to consider the context surrounding the Learning Intention(s). For example, in an English lesson, a typical Learning Intention could be "To be able to write a letter to your local council to keep your local swimming pool open." The context here would be "to your local council to keep your local swimming pool open". This presents a problem because we are probably not that interested in whether students can write a letter to their local council per se, but whether they can transfer the knowledge and understanding they have acquired into another, different, context, such as whether it is immoral to eat animals and birds. As such, "To be able to write a letter" becomes our Learning Intention.

This, too, is problematic. Second, then, we need to separate what students are going to do from what they are going to learn by doing it. For example, the more focused Learning Intention "To be able to write a letter" is actually activity-focused, not learning-focused. What is it students are going to learn by writing this letter in particular? Are they, for example, writing to argue or persuade? Are they learning about formal letter-writing techniques or informal ones? When teachers become clearer in their own minds about these issues, they can be clearer with their students as to what it is they are going to learn.

Success Criteria

Once teachers have shared what the intended journey is going to be, the next key aspect to consider (and share with the students) is what it is students need to do in order to get there.

Clarke (2005) evaluates two types of Success Criteria: product and process. Product Success Criteria merely tell students what the end product might look like. For example, in English, "the reader will be persuaded by your letter"; in music, "it will sound like a piece of Blues music"; or, in Biology, "I can explain how cells and tissues in the body are adapted to increase the rate of diffusion." These can be considered problematic when trying to communicate clearly what "success" looks like because learners are not likely to be sure about what exactly they need to do in order to persuade their readers, what exactly they need to do to compose in the Blues style, or how exactly to write an effective explanation.

Process Success Criteria, on the other hand, provide students with a step-by-step list of the key ingredients they need in order to "succeed" in the lesson's activity/activities towards the Learning Intention(s).

For example, as Booth (2019: 420) exemplifies:

Learning Intention: “To be able to write persuasively”

Process Success Criteria: You need to: 

  • State your point of view

  • Give reasons for your points of view, with evidence

  • Give at least one alternative point of view

  • Use subjective language (personal feelings)

  • Use rhetorical questions

  • Summary

The important point is that, by using Process Success Criteria, teachers are not simply giving students the answer in order to complete the task(s). Instead, teachers are sharing and clarifying what students need to focus on when completing the task(s).

More information and examples can be found in Booth (2019), Wiliam (2018a, 2018b), and Wiliam and Leahy (2015).

Gathering quality evidence of pupil learning

A frequent key question for teachers is: “Are we ready to move on yet?” In many classrooms, it might be the case that a teacher asks the class a question; several students put up their hands; the teacher picks one of these students; the student gets the answer correct; and the teacher feels confident that the class is ready to move on. The big problem here is that we are not sure as to whether the other, say, 30 or so students have understood the concept, too. In other words, we have not gathered quality evidence that we are indeed ready to move on. This is important because, without this information, we cannot be sure where our learners (whether whole class, small group, or individuals) are in their learning and, crucially, whether more time is needed to get learning back on track before the end-of-unit test.  

Allowing students to opt-in to answer a teacher’s question can be considered harmful because:

by allowing them [the students] to raise their hands to show they have the answer – they [teachers] are actually making the achievement gap worse, because those who are participating are getting smarter, while those avoiding engagement are foregoing the opportunities to increase their ability (Wiliam, 2018a: 93).

What teachers need to do more of, then, is broaden the evidence base so we can make better inferences about where learning is and what needs to be done next to enhance it further. 

One method of doing this is through what Wiliam calls "all-student response systems" (2018a: 100). Here, a teacher asks the class a question (this could also include a small number of multiple-choice questions); students write down their answer(s) on a mini-whiteboard (sometimes referred to as a "show me" board); and then hold it up so the teacher can see 100% of the students' responses. In terms of gathering quality information, whether they are used at the beginning, during, and/or at the end of a lesson, teachers are now able to scan quickly across the room to see whether students have understood the concept and whether it is safe to proceed or not. If a large number of students have not understood the concept, then more teaching (perhaps with carefully scaffolded modelling) is required. For a smaller group of students (or even individuals), it helps inform teachers that more practice is required. The important point is that, through collecting quality evidence in this way, teachers become more aware of where their students are in their learning and can respond accordingly.

More information and examples can be found in Wiliam (2018a, 2018b), and Wiliam and Leahy (2015).

Providing feedback to move teaching and learning forward

Feedback is key to reducing the gap between where the learner currently is and the desired outcome. It is, therefore, a central part of the formative assessment process. The important point, however, is that it is not only the giving of feedback which improves learning, but that the feedback received is acted upon by the learner and/or the teacher.

Feedback can be oral or written. Some researchers (for example, Hattie and Clarke) believe that in-lesson feedback is particularly beneficial because '[a]nything which happens after the lesson has questionable value compared to what happens in the moment' (2019: 123). That said, there is also evidence (for example, Bjork and Bjork, 1992) to suggest that slightly delayed feedback may be more beneficial for longer-term learning because it creates desirable difficulties. In other words, feedback that is delayed a little probably arrives at a point when students are starting to forget what they had learned. This, however, can be good; it helps to increase a learner's retrieval strength (how accessible, or retrievable, information is) and storage strength (how well learned something is). Whilst researchers may have different viewpoints regarding when to give feedback, the important point is not to favour one method of feedback over another; both have an important role within the formative assessment process.

Although feedback can be considered to improve learning, Kluger and DeNisi (1996) conducted a meta-analysis which found that some types of feedback (in approximately one-third of the studies included) can actually lower student achievement, particularly when the feedback is focused on the person rather than the task. Similarly, approximately one-third of the studies included in the meta-analysis made no difference to pupil outcomes at all. The remaining studies showed that when students are given feedback which is focused on specific aspects of the task, or gives clear guidance on how to improve, students' outcomes improved.

The notion of quality in feedback is shown by Elawar and Corno (1985) in their research on mathematics homework. One group of students (an experimental group) received detailed comments on any mistakes they made, suggestions on how to improve, and at least one positive comment. A second group was split into two sub-groups: one (experimental) received constructive feedback, and the other (control) received scores only. A third group (control) received their usual form of feedback, which was also scores only. The research found that students who received constructive feedback learned twice as much as those in the control groups. Furthermore, the attainment gap between boys and girls narrowed, and students developed a more positive attitude towards learning mathematics.

In addition to the research cited above, there are several strategies as to how feedback can be made more impactful in the classroom which also support teacher workload:

  • Oral group feedback: the teacher notices a common misconception among students and stops groups of students to explain further, perhaps detailing the thinking process through the use of scaffolded model examples.

  • Whole-class feedback: the teacher shares feedback with the whole class, including class strengths and areas for development.

  • Feedback to the teacher: the teacher uses the information gathered about students' learning to make decisions about where learning is heading next.

More information and examples can be found in Booth (2019), Wiliam (2018a, 2018b), and Wiliam and Leahy (2015).

Peer- and self-assessment

Research (for example, Butler and Winne, 1995; Black and Wiliam, 1998; Swaffield, 2011) suggests that when learners are given opportunities to be more active within the assessment process they are better able to develop, use, and apply their understanding to improve the quality of their own work with increasing autonomy. As a result, students become less dependent on their teacher and, therefore, owners of their own learning.

Self- and peer-assessment is much more than students simply ticking or crossing their own and/or each other’s work. It is a review process whereby inferences of current learning can be made based on:

  • reflecting upon past experiences;
  • evaluating and attempting to articulate what has been learned; and
  • identifying, in the light of this reflection, what still needs to be pursued.

(Broadfoot, 2008: 135)

A limitation here, though, is that this reflective thinking, valuable as it is, takes place after learning, not during it. To help support even better self- and peer-assessment, teachers have a responsibility to grow self-regulating learners.

Self-regulating learners are those who set their own learning goals during learning, and then monitor and regulate their learning behaviour, motivation and cognitive strategies to achieve the desired outcome (Zimmerman and Schunk, 2011). Growing self-regulating learners can be scaffolded by teachers where three questions become key:

  • Where am I going? (What is the learning intention?)
  • Where am I now? (Where am I currently within the success criteria?)
  • Where to next? (What do I still need to do to meet the learning intention?)

Again, the important point, however, is not just knowing this information (formative intention), but, crucially, that it is acted upon (formative action) to improve and develop the teaching and learning cycle. This, then, is formative assessment in action. 

More information and examples can be found in Wiliam (2018a, 2018b), and Wiliam and Leahy (2015).

Medium-cycle formative assessment

Although research studies (cited in this MESH Guide, for example) have shown that short-cycle formative assessment, when used effectively, can have a significant impact on learner outcomes, medium-cycle formative assessment (where evidence of student learning can be gathered across lessons and units of work) can also be beneficial.

According to Brookhart et al. (2019), medium-cycle formative assessment can involve quizzes or pre- or end of unit tests, for example, which cover a number of learning intentions. 

For instance, Goertz et al. (2009) show that, in Philadelphia, the year is broken down into six-week blocks. During weeks one to five, the essential curriculum is taught and students are tested at the end of week five. Using this test score evidence, in week six, teachers either review previously taught material where there are areas for improvement, or extend students’ current understanding and skills. To some, this might conjure up the thought of the formative use of summative assessment, and a concern that the spirit of formative assessment might be lost. The important point here, though, is that the evidence collected establishes whether students are retaining the taught curriculum and, crucially, where teachers may need to adjust their lesson planning, pacing, or grouping of students, or provide further practice of a previously learned concept, for example. Over time, it becomes possible to identify patterns in how students are performing holistically, and in which curriculum areas they are performing well and less well.

Long-cycle formative assessment

Long-cycle formative assessment might typically take place at the end of an academic year, when students complete an end-of-year examination or when nationally standardised tests take place. As with medium-cycle formative assessment, the information gathered can help draw conclusions about learners’ achievement in different areas of the curriculum. It can also help identify the teaching and learning needs of the cohort as they progress into their next year of education, as well as how teaching might be improved for the next cohort of students studying the same curriculum.

King’s-Medway-Oxfordshire Formative Assessment Project (KMOFAP)

The KMOFAP was successful in developing the implementation of formative practice in classrooms. It led to numerous other professional development projects in the area of classroom-based assessment. A follow-up project was undertaken in Jersey, which led to the removal of national testing at Key Stages 1, 2 and 3 in order to set up the culture for developing Assessment for Learning practice successfully.

More information about this project can be retrieved from:

Embedding Formative Assessment programme (Education Endowment Foundation)

From April 2015 to July 2018, 140 schools took part in the Embedding Formative Assessment professional development programme.

Key findings included:

  • Embedding Formative Assessment schools made the equivalent of two additional months’ progress.

  • The additional progress made by children in the lowest third for prior attainment was greater than that made by children in the highest third.

  • Teachers were positive about the Teacher Learning Communities. They felt that these improved their practice by allowing valuable dialogue between teachers, and encouraged experimentation with formative assessment strategies.

  • The process evaluation indicated it may take more time for improvements in teaching practices and pupil learning strategies to feed fully into pupil attainment. Many teachers thought that younger students were more receptive to the intervention than their older and more exam-minded peers.

More information about this case-study can be retrieved from:

https://educationendowmentfoundation.org.uk/projects-and-evaluation/projects/embedding-formative-assessment/ [last accessed 28 July 2020].

Strength of evidence

There is extensive research into forms of assessment in education and the ideas in this Guide are backed up by research across settings, ages and phases.

Transferability

Constructive feedback/formative assessment from a teacher provides the stepping stones for learners, giving them confidence that they are progressing. The principles for this, we suggest, apply across cultures, settings, ages and phases.

Areas for further research

Classroom-based formative assessment is a popular topic in educational discourse with a wide literature base. There is still work to be done, however, on subject-specific formative assessment practices and the contexts in which they occur to impact positively on the teaching and learning cycle. As such, we very much welcome examples of classroom-based formative assessment. These could be, for example: video or audio recordings; PowerPoint slides; student work; and so on.

Please email enquiries@meshguides.org if you would like to share any examples.    

Editor's comments

Formative feedback is one of the basic tools in a teacher’s toolkit: a positive word from a teacher, praising work done and constructively showing how a child can improve their work, may be remembered for a lifetime. Negative, unconstructive feedback can damage self-belief and motivation in some learners.

Online communities

At present, there is no dedicated online community that we can refer you to. If you wish to start one or inform us of a relevant community, please email admin@meshguides.org