When I was working as a teacher, the worst shock I received was a year before I designed and created my first trackers. This was the shock of the mock exam results impacting final grades. I realised that I’d hugely overestimated what grade each pupil was working at in the theory section of the course, and therefore grossly overestimated their forecast grade. This was because I hadn’t made the correct conversion from the mini-tests to get an accurate representation of the final grade. As a result, I spent too much time on the practical section and not enough on improving key sections of the theory papers.
In my experience, the best calculation includes using all relevant mini-tests and mock papers, scaling the total marks to match the unit’s maximum marks, and using the resulting mark to calculate the overall course marks.
Converting all these values also gives you a clear picture of areas of strength and weakness, and therefore where you and your students need to focus. You can’t rely on percentages, because a small test out of 15 marks doesn’t carry the same weight as a longer test out of 50 marks; averaging their percentages treats them as equals, over-weighting the short test and giving you a false picture.
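To make the difference concrete, here is a minimal sketch with made-up scores: a pupil scores 12/15 on a mini-test and 25/50 on a mock. Averaging the two percentages inflates the result compared with scaling total marks achieved against total marks available.

```python
# Hypothetical scores: (marks achieved, marks available) for two tests in one unit.
mini_test = (12, 15)   # a short recall test
mock_paper = (25, 50)  # a longer mock paper

# Averaging percentages treats both tests as equal, over-weighting the short one.
avg_pct = ((mini_test[0] / mini_test[1]) + (mock_paper[0] / mock_paper[1])) / 2 * 100

# Scaling total marks achieved against total marks available weights
# each test by its length, which is what the final grade calculation needs.
scaled_pct = (mini_test[0] + mock_paper[0]) / (mini_test[1] + mock_paper[1]) * 100

print(round(avg_pct, 1), round(scaled_pct, 1))  # 65.0 vs 56.9
```

The eight-point gap between the two figures is exactly the kind of overestimate described above.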
You can, of course, do this all on a spreadsheet, which means the task list might look like this:
- Make sure all assessments are linked to a specific unit
- Enter the raw data for a test
- Add the marks of all relevant mini-tests and mocks for a unit
- Divide by the maximum marks
- Multiply by the total marks for the unit
- Repeat for all units
- Change the formulas each time you do another test
- Find and include all grade boundaries for all units
- Use look-up tables to convert to grades
- Add scaling factors or UMS conversions if your course uses them
- Make sure the marks for all units are used to give you an overall total mark for the course
- Convert your overall total mark to a grade
- Re-check all your formulae every time you add another assessment
- Adapt the formula to only include the most relevant test data
- Correct any changes people have made that have affected the formulae, and so on…
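The core of the task list above can be sketched in a few lines of code. Everything here is hypothetical for illustration: the test scores, the 80-mark unit maximum, and the grade boundaries are invented, and real boundaries must come from your exam board’s published documents.

```python
def scale_to_unit(results, unit_max_marks):
    """Sum marks across all mini-tests and mocks for a unit, divide by the
    marks available, and multiply up to the unit's maximum marks."""
    achieved = sum(mark for mark, _ in results)
    available = sum(out_of for _, out_of in results)
    return achieved / available * unit_max_marks

def grade_for(mark, boundaries):
    """Return the highest grade whose boundary the mark meets,
    like a spreadsheet look-up table on grade boundaries."""
    for boundary, grade in sorted(boundaries.items(), reverse=True):
        if mark >= boundary:
            return grade
    return "U"

# Hypothetical theory unit: (mark, out_of) pairs for two mini-tests and a mock.
theory_results = [(12, 15), (25, 50), (30, 40)]
theory_mark = scale_to_unit(theory_results, unit_max_marks=80)

# Hypothetical boundaries for an 80-mark unit (minimum mark for each grade).
theory_boundaries = {64: "A", 56: "B", 48: "C", 40: "D", 32: "E"}
print(round(theory_mark), grade_for(theory_mark, theory_boundaries))  # 51 C
```

Repeating `scale_to_unit` for each unit and summing the results gives the overall course mark, which goes through the same kind of boundary look-up to produce the forecast grade. Doing this by hand in a spreadsheet is exactly where the formula-maintenance chores in the list above come from.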
Another cheeky plug: when you use Pupil Progress trackers, all of the above can be boiled down to three tasks:
- Choose the tracker that’s bespoke to and built for your course
- Add personalised assessments
- Enter data
Getting accurate grades involves lengthy calculations if you’re doing them manually, but it’s these calculations that keep mini-test results in line with the final grade for each assessment.
Now, obviously you shouldn’t use small assessments that don’t assess all core Assessment Objectives (AOs) of that component. For example, a pupil achieving 12 out of 15 in a mini-test that only tests their ability to recall 15 bones of the body is not representative of what the mock will be like, and therefore won’t be an accurate reflection of that pupil’s ability in the final exam. Any tests used in such a calculation should include past-paper questions, both long- and short-answer, marked against a mark scheme. Our guide to rigorous assessments looks at ways you can make sure your mini-tests give you meaningful data.
As a benchmark, I would only use smaller assessments to guide the potential grade if you have covered a minimum of 3-4 topic areas and the test is made up of published material, sat in exam conditions, and pupils have had enough time to revise. You may be several months into the course before your students have covered enough to start this process. The ultimate goal is that the tracker you have will do these calculations for you, so you don’t have to. (Spoiler alert – ours does!)