Papers 2019

Big data issues

Despite being ugly and uninformative, the term ‘big data’ has entered the language of science as well as that of the media. The meaning of the term ‘big data’ is explored in the introduction to this book. This chapter focuses on administrative data, meaning data related to services such as health and education that are attached to individual people and are generated through the administration of services in the public or private sector. We are especially concerned with the linking together of such data files.

In the UK, USA and elsewhere, school accountability systems increasingly compare schools using value-added measures of school performance derived from pupil scores in high-stakes standardised tests. Rather than naïvely comparing school average scores, which largely reflect school intake differences in prior attainment, these measures attempt to compare the average progress or improvement pupils make during a year or phase of schooling. Schools, however, also differ in terms of their pupil demographic and socioeconomic characteristics, and these factors also help explain why some schools subsequently score higher than others. Many therefore argue that value-added measures unadjusted for pupil background are biased in favour of schools with more ‘educationally advantaged’ intakes. But others worry that adjusting for pupil background entrenches socioeconomic inequities and excuses low-performing schools. In this article we explore these theoretical arguments and their practical importance in the context of the ‘Progress 8’ secondary school accountability system in England, which has chosen to ignore pupil background. We reveal how the reported low or high performance of many schools changes dramatically once adjustments are made for pupil background, and these changes also affect the reported differential performances of regions and of different school types. We conclude that accountability systems which choose to ignore pupil background are likely to reward and punish the wrong schools, and this will likely have detrimental effects on pupil learning. These findings, especially when coupled with more general concerns surrounding high-stakes testing and school value-added models, raise serious doubts about the use of such models in school accountability systems.
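The adjustment at issue can be illustrated with a minimal sketch: estimate each school's value added as its pupils' mean residual from a regression of outcomes on prior attainment, with and without a background covariate. This is not the Progress 8 methodology; the data are synthetic and all variable names are assumptions made for illustration.

```python
# Illustrative sketch only, not the Progress 8 calculation.
# Synthetic data: outcomes depend on prior attainment and a background
# indicator, so a measure unadjusted for background attributes the
# background effect to the school.
import numpy as np

rng = np.random.default_rng(0)
n = 1000
school = rng.integers(0, 20, n)          # 20 hypothetical schools
prior = rng.normal(0, 1, n)              # prior attainment (standardised)
disadvantaged = rng.binomial(1, 0.3, n)  # hypothetical background indicator
outcome = 0.8 * prior - 0.4 * disadvantaged + rng.normal(0, 1, n)

def school_value_added(covariates, y, school):
    """Mean least-squares residual per school: a simple value-added measure."""
    X = np.column_stack([np.ones(len(y))] + list(covariates))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return np.array([resid[school == s].mean() for s in np.unique(school)])

va_unadjusted = school_value_added([prior], outcome, school)
va_adjusted = school_value_added([prior, disadvantaged], outcome, school)
```

Comparing `va_unadjusted` with `va_adjusted` for schools with differing shares of disadvantaged pupils shows how rankings can shift once background is controlled, which is the phenomenon the article documents for real schools.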

Forecasting the repose between eruptions at a volcano is a key goal of volcanology for emergency planning and preparedness. Previous studies have used the statistical distribution of prior repose intervals to estimate the probability of a certain repose interval occurring in the future, and to offer insights into the underlying physical processes that govern eruption frequency. However, distributions are only decipherable after the eruption, when a full dataset is available, or not at all in the case of an incomplete time series. There is therefore value in an approach that does not assume an underlying distribution when forecasting likely repose intervals, and that can make use of additional information that may be related to the duration of repose. The use of a non-parametric survival model is novel in volcanology, as the size of eruption records is typically insufficient. Here, we apply a non-parametric Bayesian grouped-time Markov Chain Monte Carlo (MCMC) survival model to the extensive 58-year eruption record (1956 to 2013) of Vulcanian explosions at Sakura-jima volcano, Japan. The model allows for the use of multiple observed and recorded data sets, such as plume height or seismic amplitude, even if some of the information is incomplete. Thus, any relationships between explosion variables and subsequent or prior repose intervals can be investigated. The model successfully forecast future repose intervals for Sakura-jima using information about the prior plume height, plume colour and repose durations. For plume height, smaller plumes are followed by shorter repose intervals. This provides one of the first statistical models that uses plume height to quantitatively forecast explosion frequency.
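The non-parametric survival idea can be sketched in miniature. The example below is not the paper's Bayesian grouped-time MCMC model; it is a plain Kaplan-Meier estimate of the probability that a repose interval exceeds t days, which illustrates how censored intervals (a record that ends mid-repose) are handled without assuming a distribution. The repose data are synthetic.

```python
# Minimal sketch: Kaplan-Meier survival estimate for repose intervals.
# Not the paper's model; synthetic data, for illustration only.
import numpy as np

def kaplan_meier(durations, observed):
    """Return distinct event times and the survival estimates S(t).

    durations: repose lengths in days.
    observed:  1 if the next eruption was seen (interval complete),
               0 if the interval is right-censored (record ended mid-repose).
    """
    durations = np.asarray(durations, dtype=float)
    observed = np.asarray(observed, dtype=int)
    times = np.unique(durations[observed == 1])
    at_risk = np.array([(durations >= t).sum() for t in times])
    events = np.array([((durations == t) & (observed == 1)).sum() for t in times])
    # Product over event times of the conditional survival probabilities
    surv = np.cumprod(1.0 - events / at_risk)
    return times, surv

# synthetic repose intervals; the last one is censored at the end of the record
repose = [3, 5, 5, 8, 12, 20, 30]
seen = [1, 1, 1, 1, 1, 1, 0]
t, S = kaplan_meier(repose, seen)
```

The estimate steps down only at observed eruption times, while the censored interval still contributes to the at-risk counts; covariates such as plume height would enter through a regression-style hazard model rather than this simple estimator.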

PISA critique commentaries

I congratulate the guest editors of Assessment in Education, Caro and Kyriakides (2019), for this collection of papers on PISA: its achievements and possibilities. Some of these articles will serve as useful reference works for future research. As has been the case since the start of PISA in 2000, its implementation and published results have often been controversial, and some of this controversy is reflected in these papers. The editors themselves provide a brief summary of each paper and I will not attempt to replicate that. Rather, I will focus on how far the papers increase our understanding of what PISA does and how much it contributes to scientific knowledge, and also the extent to which each paper’s authors address some of the broad questions of validity and impact. I will start by offering critical comments on each paper and then provide a general review that will attempt to evaluate the global role that PISA continues to play.