Arrangements for GCSE and A-level exams in 2021 will again be challenging – but the alternatives look worse. Flexible exams may create resilience to further disruption. Universities should boost efforts to consider the full context of applicants’ educational experience.
In the summer of 2020, in the face of great controversy, the UK government cancelled both GCSE and A-level exams and replaced them with ‘centre assessment grades’ based on teachers’ predictions. While the devolved government in Scotland has cancelled some exams for 2021, and its Welsh counterpart seems to have arranged for something akin to exams to take place in a classroom setting, the UK government remains adamant that exams in England will go ahead as planned.
This strategy is not without its problems, but with some important adjustments, on the balance of the evidence, it may still be the best and fairest way to assess school students’ academic attainment.
Learning losses from lockdown
Primary and secondary schools closed their doors in late March 2020, and only fully re-opened six months later in September. Schooling has continued to be disrupted for many: when classes or other ‘bubbles’ have to self-isolate due to suspected Covid-19 outbreaks, learning has to move online. This disruption is likely to result in further unequal ‘learning losses’, driven by inequalities in home learning environments, including access to the technology needed to follow online lessons reliably.
Recent work by Ofsted, the education regulator, reports widespread learning loss as a result of these closures, with younger children returning to school having forgotten basic skills, and older children losing reading ability. But the loss is not evenly distributed: Ofsted reports that children with good support structures at home were doing better than those whose parents were unable to work flexibly.
Several analyses (for example, Andrew et al, 2020 and Anders et al, 2020) back this up, reporting that children in better-off families spent more time on home learning, and were much more likely to have benefitted from online classes than those from poorer backgrounds. Work by the Sutton Trust finds that children in households with earnings over £60,000 per year were twice as likely to be receiving tutoring during school closures compared with those earning less than £30,000.
While steps have been put in place to help children catch up, such as the catch-up premium and the national tutoring programme, students facing this year’s exams will almost certainly be at a disadvantage compared with previous cohorts, and the severity of that disadvantage is likely to vary by family background. Newly collected data from colleagues at LSE show that the vast majority of parents and students support reforms to exams this year to take account of learning losses.
What are the alternatives to exams?
While all of this might be evidence enough that exams should be cancelled this year, it is worth first considering the alternatives.
Continuous teacher assessment
Perhaps the most obvious alternative to exams is continuous teacher assessment, through the use of coursework, in-class testing and so on. This would negate the need for exams, and it would mean that all students would receive a grade in the event that exams have to be cancelled due to a resurgence in the pandemic. The government in Scotland has already committed to using teacher assessment this year instead of exams for its ‘National 5s’ (the equivalent of GCSEs).
While this does seem like a safe choice to replace exams, research has shown that teacher assessment can contain biases. For example, Burgess and Greaves (2013) compared teacher assessment with exam performance at key stage 2, finding evidence that black and minority ethnic students were under-assessed by teachers relative to white students. Similarly, Campbell (2015) shows that teachers’ ratings of children’s reading and maths attainment at age 7 vary according to income, gender, ethnicity and special educational needs.
Using coursework to assess children (whether internally or externally marked and/or moderated) could also lead to further issues. On the one hand, without rigorously evidenced assessment, there is the risk of interference from parents and teachers, so that students’ eventual grades might be a reflection of the support that they have received rather than their own achievements. And levels of support are likely to vary by socio-economic status (SES), again putting those from poorer backgrounds at a disadvantage.
On the other hand, if the assessment were to be well evidenced in terms of standards, this would lead to a huge increase in teachers’ workload to ensure that the decisions stand up to external scrutiny.
But sticking with exams is not without its risks. It is, after all, a pandemic, and the government could be forced to cancel exams at the last minute. If they leave it too late to implement continuous teacher assessment or an alternative form of external assessment, then they will have to turn to more reactive measures – such as asking teachers to predict students’ grades (the method finally adopted for the 2020 GCSE and A-level cohorts).
This would at least have the advantage of being consistent with last year, but again it would be likely to result in biased measures of achievement. Predicted grades have been shown to be inaccurate, with the vast majority over-predicted (causing problems for university admissions). But work by Anders et al (2020) and Murphy and Wyness (2020) shows that among high-achieving students, those from low SES backgrounds and state schools are harder to predict, and end up with lower predictions than their more advantaged counterparts.
A school leaving certificate?
There are more radical possibilities to consider. One is for schools to abandon assessment this year altogether, and simply to issue students with school leaving certificates, similar to those received in the United States for graduating high school. This would certainly level the playing field among school leavers. But it could lead to some big problems for heads of university admissions, who would need to find alternative ways to choose between applicants.
Might there be an alternative approach to exams?
The alternatives to exams raise many concerns, particularly for those from poor backgrounds. A better solution may be to design A-level exams to take account of the learning losses and missed curricula experienced by students, and the fact that some of them will have experienced this to different degrees. Ofqual, the exams regulator, was dismissive of this suggestion in its report on exams for 2020/21, pointing to the burden on exam boards among other factors. But while we take seriously the considerations that they highlight, we think this underestimates the challenges of the status quo.
For all the headlines about the ‘cancelling’ of exams in Wales, at first glance this appears to be a rather simplistic summary. The Welsh government is still planning to hold some kind of exams, which will be both externally set and externally marked. But the timing is now more flexible, and they will happen in classrooms rather than exam halls – ironically, removing the in-built social distancing normally associated with exams. This kind of flexibility is needed in these difficult circumstances.
An alternative that has also been discussed for England is that exams could be redesigned so that the majority of questions are optional. In this way, they would look more like university finals, in which students are typically given a set of questions, and need only answer a subset of their choice – for example, two out of seven questions. This would take account of the fact that students may have covered different aspects of the curricula but not all of it, since they need only answer the questions for which they are prepared.
While appreciating that there are challenges with this approach, a carefully designed exam would at least provide students with a grade that they have earned – and it would provide universities and employers with the information needed to assess applicants.
Since redesigning exams may require additional public resources, one strategy could be to focus on A-levels, given their importance for university entry, and to replace GCSE exams with continuous assessment, for example. This is the strategy that the Scottish government has taken with National 5s and Highers (Scotland’s A-level equivalent).
What are the implications for universities?
The implications for universities will of course depend on the decisions taken by the UK government. If exams are cancelled, or even if they become less trustworthy than in previous years (which is probable, whatever the scenario), universities may be tempted to lean more heavily on other sources of information about candidates. For example, they may become increasingly reliant on ‘soft metrics’ such as personal statements, teacher references and interviews. This may also lead to more widespread use of university entry tests, which are already in place at some institutions.
All of this is likely to be bad news for social mobility since the use of soft metrics has been shown to induce bias (Wyness, 2017; Jones, 2016), while there is very little evidence about the implications for equity of using aptitude tests, except in highly specific settings (Anders, 2014; Anders, 2020) – so the potential for unintended consequences is substantial.
But in theory, universities should not need to use entry tests: these students already have grades in national examinations – their GCSEs. For this university entry cohort, GCSEs were sat before the pandemic, and they are high-stakes, externally marked assessments. Indeed, Kirkup et al (2010) find no evidence that the SAT (the most widely used aptitude test in the United States) provides any additional information about performance at university beyond that contained in GCSE results alone.
Many universities already use GCSE grades as part of their admissions decision along with predicted A-level grades. Yet these grades were measured two years ago now – and so will obviously miss any changes in performance since then.
Indeed, recent work by Anders et al (2020) suggests that GCSE performance is a poor predictor of students’ achievement by the end of their A-levels. Using administrative data and machine learning techniques, they predict A-level performance from GCSEs, finding that only one in three students could be accurately predicted, and that certain groups (those from state schools and low SES backgrounds) appeared to be ‘under-predicted’ by their GCSEs, going on to outperform at A-level.
Whatever the government chooses to do with exams this year, universities should be aware that students from different backgrounds will have experienced lockdown in very different ways, and that those lacking school and parental support may still struggle to do well, even in well-modified exams. This could and should be tackled with increased use of contextual admissions.
Universities often cite fears that students from contextual backgrounds are more likely to arrive underprepared for university and risk failing their courses. But this year, lack of preparation for university may well be the norm, forcing universities to provide extra tuition and other assistance to help students get ‘up to speed’. There has never been more need, and more opportunity, for widespread contextual admissions.
Where can I find out more?
- Minority report: the impact of predicted grades on university admissions of disadvantaged groups: Richard Murphy and Gill Wyness examine A-level prediction accuracy by student and school type.
- Grade Expectations: How well can we predict future grades based on past performance?: Jake Anders, Catherine Dilnot, Lindsey Macmillan and Gill Wyness show that predicting grades using statistical and machine learning methods based on students’ prior achievement can only modestly improve the accuracy of predictions compared with teachers.
- Test scores, subjective assessment, and stereotyping of ethnic minorities: Simon Burgess and Ellen Greaves assess whether ethnic minority students are subject to low teacher expectations, comparing differences in externally marked tests and ‘non-blind’ assessment methods between white and ethnic minority students. They find evidence that some ethnic groups are systematically under-assessed relative to their white peers, while some are over-assessed.
- How should universities select students? Jake Anders discusses the measures that UK universities use to select students, showing that aptitude testing may provide little information over and above that from existing prior attainment measures, and may discriminate against certain types of students.
Who are experts on this question?
- Gill Wyness, Centre for Education Policy and Equalising Opportunities, UCL, London
- Lindsey Macmillan, Centre for Education Policy and Equalising Opportunities, UCL, London
- Jake Anders, Centre for Education Policy and Equalising Opportunities, UCL, London
- Richard Murphy, Centre for Economic Performance, LSE
- Simon Burgess, University of Bristol, Bristol