Universities aim to admit the best pool of students possible, who are expected to achieve better labour market outcomes. Competition among universities has intensified over recent decades, and an institution’s ranking is affected by its selectivity.
New research by Pedro Luis Silva studies whether universities should select their students using only field-specific exams or a broader set of exams. The central finding is that, on average, universities with less specialised admission policies (e.g. those that combine specialised and non-specialised A-level requirements) admit a pool of students who obtain a higher final GPA (grade point average).
The author describes the study and results:
Most institutions rely on standardised tests (for instance, A-levels) as the primary mechanism for selecting their students. The purpose is to obtain a more comprehensive assessment of higher education (HE) applicants. But how broad the admission criteria should be is a question that has received little research attention.
Many universities operate under the assumption that candidates who perform better in a subject-specific test will perform better in that subject at university. In Physics, for instance, maths is often perceived as a relevant field-specific skill, while languages convey information about broader cognitive skills. I define general skills as those that are not directly rewarded by the field of study. A strong emphasis on field-specific skills rewards students who specialise early in high school rather than those who cultivate a more versatile portfolio of skills.
My approach combines theoretical modelling and empirical analysis. Although my empirical analysis uses Portugal as a case study, the results are generalisable to other HE systems that consider at least one of two different metrics in the admission process: a field-specific exam (e.g. an A-level in Mathematics for an economics degree) and another that conveys information of a more generic nature (e.g. an A-level in Portuguese for an economics degree).
In the Portuguese HE system, students may be admitted to the same programme (a university/degree pair) with different sets of exams, depending on their performance in those exams. Within each cohort, I observe students being admitted to the same programme with different exam sets, and I exploit the variation between subject-specific and non-specific exam sets. I rely on a novel administrative dataset covering eleven years (2008-2018), comprising each student’s application process and their performance and graduation in HE.
Within each programme, I define three categories of admitted students: the specialists (those admitted with a field-specific exam, whose general skills alone would not have been enough to gain entry to HE), the generalists (those admitted purely on their performance in the general admission exam, who would not have been admitted on the field-specific exam result alone), and the all-rounders (those who could have been admitted with either of the two types of exams).
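To make this grouping concrete, the sketch below shows one way the classification rule could be expressed. It is purely illustrative and not the study’s code or data: the single admission cutoff, the score scale and the example values are all assumptions for the sake of exposition.

```python
# Illustrative sketch only: hypothetical scores and cutoff, not the study's actual data or method.

def classify_student(field_score: float, general_score: float, cutoff: float) -> str:
    """Classify an admitted student by which exam score alone would have cleared the cutoff."""
    clears_field = field_score >= cutoff
    clears_general = general_score >= cutoff
    if clears_field and clears_general:
        return "all-rounder"   # either exam alone would have secured admission
    if clears_field:
        return "specialist"    # only the field-specific exam clears the cutoff
    if clears_general:
        return "generalist"    # only the general exam clears the cutoff
    return "not admitted"      # neither exam alone clears the cutoff

# Hypothetical programme with a cutoff of 15 on an assumed 0-20 scale
print(classify_student(field_score=17, general_score=12, cutoff=15))  # specialist
print(classify_student(field_score=13, general_score=16, cutoff=15))  # generalist
print(classify_student(field_score=18, general_score=17, cutoff=15))  # all-rounder
```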
I observe differences in students’ academic performance across these groups at distinct milestones of the programme. I find that: i) by the end of the first academic year, the generalists perform no worse than their specialist peers; and ii) by the end of the programme, the specialists are outperformed by both the all-rounders and the generalists.
My results have substantial implications for university admission practices. The differences in student performance across groups suggest that the field-specific exam is not always an effective test of the specialist skills associated with high achievement at university. Universities should rethink their admission practices, either by designing admission exams that evaluate field-specific skills more accurately or by introducing general admission requirements that distinguish among candidates more effectively. In short, diversity in admission criteria has a positive effect on student performance at university.

Pedro Luis Silva
Teaching Associate in Economics at University of Nottingham