At a time when so much of the education landscape is subject to reform, it is important to understand what is changing and what the likely impacts will be. The ability to interpret performance tables has become critical to that understanding.
Whether it’s coverage of the performance tables in the media, the information your school or college presents on its website or the questions that prospective parents ask, performance tables provide a key information source. Their introduction was linked to a public information initiative, and they do meet that objective: they provide data on attainment and progress and a mechanism for comparing performance. However, they are not context free. Because they measure the ‘performance’ of schools by virtue of student attainment and/or progress in assessments, they need some interpretation and understanding.
The critical factor for compiling performance tables is the ability to measure students’ attainment in assessments. Unlike production processes, which ensure the conditions and ingredients of products are consistent and uniform, individual students can never be uniform. Each cohort that takes national assessments is unique, with varying backgrounds, different levels of parental engagement, and different interests, experiences and a whole host of other characteristics and interactions that must be taken into account. This makes measuring school performance through assessment results alone a challenge.
Since Professor Alison Wolf’s Review of Vocational Education in 2011, the landscape of performance tables has been changing. This year’s performance tables will be the first in which all providers of pre-16 and 16-19 education will be judged against a new set of measures. Since that review, there has been much debate about what the measures mean for education, with frequent discussion of the potential for the curriculum offered to students to be narrowed to focus on subjects that ‘count’ in performance tables.
While it has been true for some time that the inclusion or exclusion of qualifications in performance tables has noticeably affected the volume of certain qualifications being delivered in schools, not all changes correlate directly with performance tables. According to data from the Joint Council for Qualifications (JCQ) for the 2015 examination series, the top 10 subjects included both Art & Design courses and Design & Technology despite the strong presence of EBacc subjects, with the number of students taking Art & Design courses increasing again from a relative low in 2013.
Of the subjects that saw the highest increases in student volumes, only one of the top three was an EBacc subject, and among subjects seeing decreases in take-up, the decline in numbers for French had accelerated since 2014. So while the EBacc will continue to be a performance table headline measure, there are other subjects that large numbers of students are taking and, therefore, more stories that our schools and colleges need to be able to tell about their performance.
We believe that education should be broader than the examination syllabus. The Education Select Committee is currently holding an inquiry into the purpose of education, and it will be interesting to see what conclusions are drawn. Those conclusions will also have implications for how we measure educational outcomes. While it is easier to focus on things that are relatively straightforward to measure, like exam results, education’s purpose is broader, and so our judgements of school performance need to rest on a broader view as well. Some of this may be built into performance tables in future, once destination data can be robustly measured, but in the meantime, the methods we use to judge schools and colleges should extend beyond how they perform in performance tables.