Summative assessment – a final assessment (such as an exam or test), usually held at the end of a course. Students' results are scored and count toward their final grade.
Summative assessment is what most people think of when they hear the word ‘assessment’. It includes tests, exams, projects and other tasks that are submitted for marking or scoring at the end of a learning sequence. Summative assessment has 2 key aspects. First, the assessment is the final activity in the learning sequence. Unlike formative assessment, it is not designed to check that students are ‘on the right track’: there is generally no follow-up learning after a summative assessment to fill gaps in student knowledge or skills. Instead, it tests or examines the extent to which skills and knowledge have been acquired from a learning program, course or unit of work. Second, summative assessment is marked or scored against a set of pre-determined criteria, and the result contributes to the student’s total mark.
Hint: while pure summative assessment has no follow-up learning, many teachers like to review summative assessments with their students where possible. Assessment of any kind can be a teaching opportunity. At the very least, feedback that helps students in future courses or tasks is highly valued. Such feedback is especially useful for generic mistakes that are easily fixed and likely to be repeated (such as overusing passive sentences in a piece of writing).
Summative assessment may take the form of a test, exam, essay, project or other task submitted for marking.
Hint: experienced teachers know how their students will perform before they sit an assessment. Where possible, it is good practice not to let students sit an assessment when they are almost certain to fail. For example, pilot instructors do not allow their students to sit the final practical examination if the student isn’t ready – they keep training and encouraging the student to practise until passing is more likely.
Teachers and educational professionals have been debating various aspects of summative assessments for a very long time. One such debate is over whether summative assessment provides a true and fair indication of a student’s abilities. For example, exam stress is known to impact some students more than others, sometimes with disastrous results (including self-harm, poor scores, absenteeism).
Another contentious issue with exams and tests is that they are timed. Two students with the same abilities may take different amounts of time to complete a task, yet traditionally students must rush to finish the whole assessment. This means that ‘speed’ is an inbuilt, embedded component that accounts for a large percentage of the assessment’s score, even though speed is rarely listed in the rubric. The time limits placed on exams and tests are also arbitrary and unrelated to the topic or content – exam durations at a university or college are often uniform for logistical reasons. Teachers may use a best-guess, ad-hoc approach, padding the paper with additional questions so that the majority of students finish ‘just in time’. Exams are also artificial in that they do not reflect the way tasks are completed in the work environment.
Finally, no discussion about summative assessment is complete without touching on assessment benchmarks. When teachers decide to assess students, they need to think about the scoring system they intend to use. There are 2 basic types of assessment scoring systems: norm-referenced and criterion-referenced.
Norm-referencing is where students are compared to each other. This results in a small percentage of students receiving top marks, a small percentage receiving low marks, and the majority (the average) being somewhere in the middle. This is referred to as the ‘bell-curve distribution’. One of the issues with this method is that a student’s score depends on the performance of their peers. This is an issue because a ‘C’ grade in a class with high-performing students cannot be compared to a ‘C’ grade elsewhere.
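The rank-based logic of norm-referencing can be sketched in a few lines of code. The grade cut-offs below (top 15% receive an A, and so on) are illustrative assumptions, not figures from the text; the point is that a student's grade depends entirely on where they sit relative to their peers.

```python
def norm_referenced_grades(scores):
    """Assign grades by rank within the cohort (hypothetical cut-offs).

    A fixed share of students receives each grade regardless of raw marks.
    """
    ranked = sorted(scores)
    n = len(ranked)
    result = []
    for score in scores:
        # Percentile = share of the cohort scoring at or below this mark.
        percentile = sum(1 for s in ranked if s <= score) / n
        if percentile > 0.85:
            grade = "A"   # top ~15% of the class
        elif percentile > 0.60:
            grade = "B"
        elif percentile > 0.25:
            grade = "C"   # the broad middle of the bell curve
        else:
            grade = "D"   # bottom ~25%
        result.append(grade)
    return result

# The same raw score earns a different grade in a different cohort –
# exactly the comparability problem described above.
print(norm_referenced_grades([50, 60, 70, 80, 90]))
```

Note that a score of 70 earns only a ‘C’ in this cohort; in a weaker cohort the same 70 could be an ‘A’.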
The second system used to benchmark student achievement is criterion-referenced assessment. Students are not compared to each other in this method – they are compared to a pre-determined set of criteria instead. This can be a marking sheet, rubric or scale of some sort that the teacher creates before students begin the assessment. With criterion referencing, every student in the class could score high marks (or low marks). Most of the time, however, even criterion-referenced tests generate a bell-curve distribution of some type, with some students at the top, some at the bottom and the majority in the middle. This happens because of teacher bias: the teacher expects a range of scores approaching a typical bell curve, and consequently develops the assessment and scoring mechanisms based on this sub-conscious expectation. It is rare for a teacher to set a test where every student fails or scores perfect marks – the test is usually aimed at an average score of around 65%, meaning above-average students score 70 or more and below-average students score 60 or less.
A rubric is sometimes used when subjective judgement is required, such as for student essays and projects. A rubric is a table that helps the teacher score the work product against a specific set of criteria. Each criterion is given a weighting and is broken further into tiers or levels. For example, a project may be divided into 5 criteria: research, writing, presentation, analysis and overall impression. Each criterion can have equal weighting – 20% in this case, as there are 5 criteria – or a weighting based on perceived importance (such as 15%, 15%, 20%, 30% and 20%). Descriptors may be used to further guide the marking for each criterion. The teacher marks the work by assigning a score against each criterion; the criterion scores are then added up for a final total. Often the total is out of 100 to make percentages easy to calculate, but it may be out of 20, 30 or any other number – essays, for example, are often marked out of 30.
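The weighted-rubric arithmetic described above can be sketched as follows. The criterion names and the 15/15/20/30/20 weightings come from the example in the text; the per-criterion scores in the usage line are made up for illustration.

```python
# Illustrative weighted rubric from the text: 5 criteria, weights sum to 100%.
RUBRIC = {
    "research":           0.15,
    "writing":            0.15,
    "presentation":       0.20,
    "analysis":           0.30,
    "overall impression": 0.20,
}

def rubric_total(criterion_scores, rubric=RUBRIC):
    """Weighted total out of 100, given per-criterion scores each out of 100."""
    assert abs(sum(rubric.values()) - 1.0) < 1e-9, "weights must sum to 100%"
    return sum(rubric[c] * criterion_scores[c] for c in rubric)

# Hypothetical student: strong analysis (30% weight) lifts the final mark.
print(rubric_total({"research": 70, "writing": 80, "presentation": 60,
                    "analysis": 90, "overall impression": 75}))
```

The same function works for a total out of 30 (as with many essays): score each criterion out of 30 instead of 100, and the weighted sum stays on that scale.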
Adam Green is an advisor to government, a registered teacher, an instructional designer and a #1 best-selling author. He is completing a Doctor of Education and was previously head of department for one of the country’s largest SAER (students at educational risk) schools. Adam is managing director of FTTA, an accredited training provider for thousands of teacher aides every year.
Source: Teaching Skills and Strategies for the Modern Classroom: 100+ research-based strategies for both novice and experienced practitioners. Amazon #1 best seller in the category of Classroom Management.