The government is consulting on a system of post-qualification university admissions, where school students would receive and accept offers once they have their A-level or equivalent grades. An alternative plan would involve applications as well as admissions coming after results day.
Applications for entry to UK university courses for autumn 2021 close on Friday 29 January, after a brief extension was granted due to the closure of schools at the start of January and the cancellation of exams. While many will have applied (and had offers) prior to the announced closures and cancellations, this period of disruption will have led to further chaos and confusion for those looking to start university this year.
As it stands, all prospective students will be making their applications yet again based on the grades that their teachers predict they would have achieved at the end of this academic year. But some of those predictions will have been based on the assumption that students were going to be taking exams (as was the plan before January), while others, who have applied more recently, will have been based on the students’ likely outcomes from some form of teacher assessment, the nature of which remains unclear at this point.
This situation highlights the need for reform of an outdated system.
Simply put, applications this year are going to be based on teacher predictions that are even more inaccurate than in ‘normal’ years. And even in normal years, fewer than one in five students have results accurately predicted by their teachers. It is no surprise then that the majority of students and parents are now calling for reform to the way that students apply to university, and many leading bodies are also supporting these calls.
In November 2020, UCAS (the Universities and Colleges Admissions Service) led new calls for reform of the process, with support from Universities UK and a much-welcomed announcement from the Department for Education (DfE) that plans were underway to move to a post-qualification admissions (PQA) system. The DfE has since launched a consultation on these plans, with the stated aim of ‘levelling up the university admissions system’.
Based on our research, we explain why it is clear that the A in PQA must mean applications rather than admissions. The central reason is that all other options rely on the use of predicted grades, which are both inaccurate and unfair, and so should not be used at any point in the process.
Here we suggest how a post-qualification applications system could be achieved, central to which is reducing the time taken to mark exams. The model we propose is essentially Model 1 from the consultation document, to which we contributed through our previous blog post and follow-up discussions with the DfE.
What is crucially important about this model is that it removes the need for predicted grades at any stage in the process. This is a once-in-a-generation chance to redesign a flawed system. We should not waste this chance by recreating a system that continues to embed systematic inequalities by using predicted grades.
An outdated applications calendar?
While at first glance the UK’s centralised applications system, UCAS, might appear modern, the UK has in fact had a centralised applications system since 1961 (when it was known as UCCA – the Universities Central Council on Admissions).
Since then, there have been major advancements in technology, including the move from paper forms with traditional mail and hand processing, to online applications, and the computerisation of exam marking. Yet despite these changes the calendar of the applications and admissions process has remained fixed for the past 60 years. The inertia of this system has been widely accepted, but we believe there are clear opportunities for efficiency gains.
The acceptance of this inertia can be seen in discussions of what a new post-qualification system might look like. There has been a plethora of ideas debating the pros and cons of post-qualification decisions, offers and applications (PQD, PQO and PQA), all of which appear to take the current grading duration as given.
In much of the discussion that has followed, there is a frequent assumption that in order to move to post-qualification applications, A-levels will need to happen much earlier or university terms start much later. This acceptance of the status quo in terms of timing creates the very real possibility that this opportunity for change will be squandered.
What is the problem with PQO or PQD?
The crucial problem with the alternatives to post-qualification applications is that they still rely on predicted grades. As is clear from the DfE press release, the significant inaccuracies in predicted grades, and the systematic differences across students, are the key reasons for this reform in the first place.
Our recent study shows that only 16% of students receive accurate predictions. While the majority of students are over-predicted, high-achieving students from disadvantaged backgrounds are typically under-predicted.
In another recent study, we highlight the difficulty of predicting grades, showing that when relying on machine learning and advanced statistical techniques, only one in four students are accurately predicted. This research also shows that it is harder to predict grades accurately for high-achieving state school students, relative to their selective grammar or private school counterparts.
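The accuracy figures above can be illustrated with a minimal sketch of how such shares might be computed. The grade points and student data below are invented for demonstration and are not drawn from the studies cited:

```python
# Hypothetical sketch: measuring predicted-grade accuracy.
# Point values loosely follow A-level UCAS tariff points (e.g. A* = 56,
# A = 48, B = 40, C = 32, D = 24); the data are illustrative only.

def prediction_accuracy(predicted, actual):
    """Return the shares of students whose predictions were exactly
    accurate, over-predicted, or under-predicted."""
    n = len(actual)
    accurate = sum(p == a for p, a in zip(predicted, actual)) / n
    over = sum(p > a for p, a in zip(predicted, actual)) / n
    under = sum(p < a for p, a in zip(predicted, actual)) / n
    return {"accurate": accurate, "over": over, "under": under}

# Illustrative cohort: most students over-predicted, a minority spot-on.
predicted = [48, 40, 56, 32, 40, 48]
actual = [40, 40, 48, 24, 32, 48]
shares = prediction_accuracy(predicted, actual)
```

In this toy cohort only two of six predictions are exact and four are over-predictions, mirroring the qualitative pattern the studies describe: exact prediction is rare and over-prediction is the norm.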
The implications for relying on predicted grades are also clearly explained in the DfE press release: ‘Disadvantaged students are more likely to ‘under-match’ and enter courses below their ability than their advantaged peers. Under-matched students are then more likely to drop out of university, get a lower-class degree and earn less in employment.’
The phenomenon of under-match had not been formally documented in the UK until recently. Our research highlighted for the first time that students from more disadvantaged families systematically attend lower-tariff universities than they could given their A-level achievements, relative to their more advantaged counterparts. Predicted grades are a driver of this phenomenon – we found that disadvantaged students who were under-predicted ended up overqualified, or under-matched, both in their applications to university and in where they eventually went.
Reforms in which school students still make their applications on the basis of predicted grades (as is the case with many of the options being discussed) will not solve this under-match problem, since those predictions will remain inaccurate. It is therefore crucial that students can apply on the basis of actual rather than predicted grades.
So how could we deliver post-qualification applications?
Our proposed approach to PQA is based on using the efficiency gains that should be made from our archaic applications system. We argue that given the technological advancements made over the past 60 years, there is no obvious reason why we need to stick to the current timeline. These gains can be made both in terms of how quickly exams are marked, and in terms of how long it takes to process university applications.
Stage 1: Exams and student applications
We suggest virtually no change is needed to the timing of A-levels, allowing teachers the time to teach the full curricula. Examinations would still take place in early May, but instead of running through until late June as in a normal exam calendar, the schedule could be compressed, as was proposed for this year’s exam schedule, running over a three- or four-week period. This could be coupled with a review of the increasing number of papers that students are asked to sit for each subject, to minimise the burden on students’ timetables.
Once these exams are over, A-level students could return to school for a range of ‘forward-looking’ activities, including university and careers guidance, work experience, financial literacy training and, in the final week, an ‘applications week’. Keeping students in school for the full school year addresses concerns about disadvantaged students not receiving guidance.
At the start of their applications week, students would receive their grades, seven to eight weeks after the compressed exam schedule ends, adding little additional burden to exam board timetables. They would then apply to their chosen courses, with the support of their teachers.
This would have a broadly neutral impact on teacher workload: teachers would no longer have to predict grades during the year, their support for applications would simply shift to later in the year, and they would perhaps no longer have to help with personal statements (see below). Alternatively, schools could take on specialists to provide this guidance.
Given that the new marking timeline will fall during term time, it seems clear that exam boards would need to invest in more markers for quality assurance and to reduce the additional burden on teachers. Indeed, the condensing of the exam period could be complemented with an acceleration of the marking process if needed, through a combination of the previously discussed technological improvements, and further investment to employ more markers during this intense period to ensure that quality is unaffected.
Stage 2: University processes
Universities would receive all applications by the end of July and would have a fixed period (a month) to process these and make offers. Given that they would be receiving applications based on final grades, the entire application process could potentially be simplified, relying less on ‘soft metrics’ such as personal statements or teacher references, which can be affected by bias.
While this approach might lead to increased summer workload for universities that still rely on interviews, most application decisions are now largely automated and the number of institutions that actually interview is very small, so the impact would be minimal for most.
Where applicants have identical grades, universities could look at the students’ final grades relative to their schools’ performance, baking some ‘widening participation’ targets into the process. Students with the most ‘potential’ – those outperforming their school’s results – would be given priority among ties.
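The tie-break rule described above could be sketched as follows. This is a minimal illustration of one possible implementation, not a proposal from the consultation; all names, point scores and school averages are invented:

```python
# Hypothetical tie-break: among applicants with identical grades,
# prioritise those who most outperformed their school's average results.

def rank_tied_applicants(applicants, school_mean_points):
    """Sort applicants (all with identical grades) by how far their
    grade points exceed their school's mean points, highest first."""
    return sorted(
        applicants,
        key=lambda a: a["points"] - school_mean_points[a["school"]],
        reverse=True,
    )

# Two applicants with the same points, from schools with different averages.
applicants = [
    {"name": "Applicant A", "points": 48, "school": "School X"},
    {"name": "Applicant B", "points": 48, "school": "School Y"},
]
school_mean_points = {"School X": 46, "School Y": 34}  # X is higher-performing

ranked = rank_tied_applicants(applicants, school_mean_points)
```

Here Applicant B exceeds their school’s average by 14 points against Applicant A’s 2, so B would be prioritised – the ‘potential’ signal the paragraph above describes.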
Students would receive their offers at the end of August, only a few weeks later than in the current system, and accept their favoured choice. While this approach would introduce more uncertainty in some respects – for example, students would find out later where they can apply – it would bring more certainty in others, such as students knowing for sure that their actual grades are high enough for admission. As noted, the delay in finding out where they will end up would be minimal relative to the current system, so making arrangements for accommodation and finances should be relatively unaffected.
Creating a more agile system?
When setting out his rationale for considering post-qualification admissions, Gavin Williamson, the Secretary of State for Education, said he wanted to ‘remove the unfairness’ that some groups currently face due to inaccurate predicted grades.
The only way to remove this unfairness is to remove predicted grades from the application process altogether, and create a post-qualification applications system.
Rather than shifting the timing of examinations and university start dates, we argue that this can be achieved by harnessing technological efficiencies that have emerged over the 60 years in which the current system has been in place. This is a once-in-a-generation chance to redesign our system to make it fairer for all young people. Allowing them to make critical decisions based on their own merits, rather than inaccurate predictions, is a critical step.
Where can I find out more?
- Minority report: the impact of predicted grades on university admissions of disadvantaged groups: Richard Murphy and Gill Wyness examine A-level prediction accuracy by student and school type.
- Grade Expectations: How well can we predict future grades based on past performance?: Jake Anders, Catherine Dilnot, Lindsey Macmillan and Gill Wyness show that predicting grades using statistical and machine learning methods based on students’ prior achievement can only modestly improve the accuracy of predictions compared with teachers.
- Test scores, subjective assessment, and stereotyping of ethnic minorities: Simon Burgess and Ellen Greaves assess whether ethnic minority students are subject to low teacher expectations, comparing differences in externally marked tests and ‘non-blind’ assessment methods between white and ethnic minority students. They find evidence that some ethnic groups are systematically under-assessed relative to their white peers, while some are over-assessed.
- How should universities select students?: Jake Anders discusses the measures that UK universities use to select students, showing that aptitude testing may provide little information over and above that from existing prior attainment measures, and may discriminate against certain types of students.
Who are economic experts on this question?
- Gill Wyness, Centre for Education Policy and Equalising Opportunities, UCL, London
- Lindsey Macmillan, Centre for Education Policy and Equalising Opportunities, UCL, London
- Jake Anders, Centre for Education Policy and Equalising Opportunities, UCL, London
- Richard Murphy, Centre for Economic Performance, LSE
- Simon Burgess, University of Bristol, Bristol