The role of private school chains in Africa

The trend towards private schooling has largely been a phenomenon of industrialised-country education systems, starting with charter schools in the USA and spreading to other countries such as England, where Government policy announced in 2016 is to convert all schools into ‘academies’ run by so-called multi-academy chains. In these systems the commercial returns from privatisation are often indirect, being expressed through the letting of contracts for support services and the like. In developing countries, however, privatisation is often directly commercially driven: for-profit companies set up or take over schools and charge parents for the education provided. The following commentary looks at the case of one corporation that operates in several African countries and makes claims for the superiority of the education it provides. Specifically, Bridge International Academies (BIA) has recently published a report comparing its schools in Kenya with neighbouring state schools and claims greater learning gains. The report can be viewed at:

http://www.bridgeinternationalacademies.com/results/academic/

A detailed commentary and critique of this report has been compiled by Graham Brown-Martin and can be accessed at:

https://medium.com/learning-re-imagined/education-in-africa-1f495dc6d0af#.p1mpj67ed
Brown-Martin makes reference to some of my own remarks on the report, and what follows is a more detailed quantitative critique.

The study sampled 42 Bridge Academy schools and 42 ‘geographically close’ state schools, and carried out testing in the early grades of primary schooling on approximately 2,700 pupils, who were followed up one year later, with just under half being lost through attrition. On the basis of its analyses the report claims:

“a Bridge effect of .31 standard deviations in English. This is equivalent to 64 additional days of schooling in one academic year. In maths, the Bridge effect is .09 standard deviations, or 26 additional days of schooling in one academic year.”
Such effect sizes are large, but there are serious problems with the analysis carried out.
First, and most importantly, parents pay to send their children to Bridge schools: $6 a month per student, which represents a large percentage of the income of poor parents with several children, where daily income per household can fall below $2 a day. Some adjustment for ‘ability to pay’ is therefore needed, yet this is not attempted, presumably because such data are very difficult to obtain. Presumably those with higher incomes can also support out-of-school learning; does this go on? Instead the report uses factors such as whether the family has electricity or a TV, but these are relatively poor surrogates for income. Yet the report makes no mention of this problem.

Some of the state schools approached to participate refused and were replaced by others, but there is no comparison of the characteristics of the included schools and all non-Bridge schools. Likewise, we know little about the students who left the study (relatively more from the Bridge schools) after the initial assessment. Were these pupils who were ‘failing’? For example, did parents with children ‘failing’ at Bridge schools withdraw them more often, or did parents who could barely afford the school fee tend to withdraw their children more often? What is the policy of Bridge schools towards pupils who fall behind? Are they retained a year or otherwise treated so that they are not included in the follow-up? Such policies, if different in Bridge and state schools, would lead to potentially large biases. To be fair, section VII does look at whether differential attrition could affect results, suggests that it might, and recommends further work. In these circumstances one might expect to see, for example, some kind of propensity score analysis, whereby a model predicting the propensity to leave, using all available data including school characteristics, would yield individual probabilities of leaving that can be used as weights in the statistical modelling of outcomes (a sketch of such an analysis is given below). Without such an analysis, apart from other problems, it is difficult to place much reliance on the results.

The difference-in-differences (DiD) model is the principal model used throughout the report, yet it has serious flaws which are not mentioned. The first problem is that it is scale dependent: any monotone (order-preserving) transformation of the test scale will produce different estimates, so at the very least different scalings need to be tried (the second sketch below demonstrates this). Since all educational tests are on arbitrary scales anyway, this is an issue that needs to be addressed, especially where the treatment groups (Bridge and non-Bridge schools) have very different student test score distributions. Secondly, even ignoring scale dependency, the differences across time may in fact be (and usually are) a function of the initial test score, so the latter needs to be included in the model; otherwise the DiD will reflect the average difference, and if, as is the case here, the baseline score is higher for Bridge schools, and for the scale chosen the higher-baseline pupils tend to make more progress in Bridge schools, then the DiD will automatically favour the Bridge schools. Thirdly, the claim that DiD effectively adjusts for confounders is only true if there are no interactions between such confounders and treatment. This point does appear to be understood, but it nevertheless remains relevant and is not properly pursued in the report.
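As an illustration of the kind of attrition adjustment suggested above, here is a minimal sketch of inverse-probability weighting via a propensity model. Synthetic data stand in for the study’s; all variable names and numbers are invented for the example:

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2700  # roughly the reported sample size

# Invented pupil-level data standing in for the real study variables.
df = pd.DataFrame({
    "bridge": rng.integers(0, 2, n),
    "baseline_score": rng.normal(0, 1, n),
    "electricity": rng.integers(0, 2, n),
    "tv": rng.integers(0, 2, n),
})
# Illustrative attrition: weaker pupils and Bridge pupils drop out more.
p_follow = 1 / (1 + np.exp(-(0.5 + 0.6 * df.baseline_score - 0.3 * df.bridge)))
df["followed_up"] = rng.binomial(1, p_follow)
df["endline_score"] = df.baseline_score + 0.1 * df.bridge + rng.normal(0, 1, n)

# Step 1: model each pupil's probability of remaining in the study.
stay = smf.logit("followed_up ~ baseline_score + bridge + electricity + tv",
                 data=df).fit(disp=0)
df["p_stay"] = stay.predict(df)

# Step 2: weight the retested pupils by the inverse of that probability,
# so pupils who resemble the leavers count for more.
retested = df[df.followed_up == 1].copy()
retested["ipw"] = 1 / retested.p_stay

# Step 3: re-estimate the treatment effect with those weights.
weighted = smf.wls("endline_score ~ baseline_score + bridge",
                   data=retested, weights=retested.ipw).fit()
print(weighted.params["bridge"])

In practice the propensity model would draw on all the covariates the study collected, including school characteristics; the point is simply that such an analysis is feasible with the data the report describes.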
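The scale-dependence problem is easy to demonstrate directly. The following sketch constructs two groups with different baseline distributions, as in the report, and gives every pupil an identical gain, so the true ‘effect’ is zero; a monotone rescaling of the very same scores then produces a nonzero DiD estimate (all numbers are invented):

import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Two groups with different baseline distributions: "Bridge" starts higher.
base_bridge = rng.normal(0.5, 1.0, n)
base_state = rng.normal(0.0, 1.0, n)

# Every pupil gains the same amount on the raw scale: no true effect.
end_bridge = base_bridge + 1.0
end_state = base_state + 1.0

def did(b0, b1, s0, s1):
    """Difference in differences: (Bridge gain) minus (state gain)."""
    return (b1.mean() - b0.mean()) - (s1.mean() - s0.mean())

print(did(base_bridge, end_bridge, base_state, end_state))  # ~0.0

# An order-preserving rescaling (here exp, standing in for any arbitrary
# test-scaling choice) yields a clearly nonzero "effect", even though no
# pupil's rank on any test has changed.
f = np.exp
print(did(f(base_bridge), f(end_bridge), f(base_state), f(end_state)))  # ~1.8

Since test scales are arbitrary, nothing privileges the raw scale over the transformed one; the DiD estimate is partly an artefact of the scaling choice, which is why different scalings need to be tried.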
The report does carry out an analysis using a regression model which, in principle, is more secure than the DiD model. This requires allowing a nonlinear relationship with the baseline score, which is done, but also allowing possible interactions with covariates, which is not done. Even more important is that there needs to be an adjustment for test reliability, which is likely to be low for such early-years tests. If the baseline test reliability is low, say less than 0.8, then inferences will be greatly changed, and the common finding in other research around this age is that the treatment effect is weakened (Goldstein, 2015). A sketch illustrating this mechanism is given at the end of this section.

Table 15 is especially difficult to interpret. It essentially looks at what happens to the lower-achieving group at time 1, defined using a common cut-off score. Yet this group is, overall, even lower achieving in the control schools than in the Bridge schools, so it will be easier on average for those in this group in Bridge schools to move out of the category. The evidence from these comparisons is therefore even less reliable than the above analyses and can be discounted as providing anything useful. Surprisingly, this point appears to be understood, yet the comparison is still used as ‘evidence’.

There is a section in the report on cross-country comparisons. The problem is that country assessments are fundamentally different and comparability is a very slippery concept; this section’s results are highly unreliable and really should be ignored.
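Finally, on the reliability point above: a minimal simulation sketch, assuming a baseline reliability of 0.7 and no genuine Bridge effect (all numbers and variable names are invented). An error-prone baseline measure only partially adjusts for the fact that Bridge pupils start from higher ability, leaving a spurious positive coefficient:

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 2000
reliability = 0.7  # an assumed baseline test reliability, below 0.8

bridge = rng.integers(0, 2, n)
true_ability = rng.normal(0.5 * bridge, 1.0)     # Bridge pupils start higher
noise_sd = np.sqrt(1.0 / reliability - 1.0)      # so var(true)/var(observed) = 0.7
obs_baseline = true_ability + rng.normal(0.0, noise_sd, n)
endline = true_ability + rng.normal(0.0, 0.5, n)  # NO genuine Bridge effect

def bridge_coef(baseline):
    """OLS of endline score on a baseline measure and the Bridge indicator."""
    X = sm.add_constant(np.column_stack([baseline, bridge]))
    return sm.OLS(endline, X).fit().params[2]

print(bridge_coef(obs_baseline))   # spuriously positive, roughly +0.15
print(bridge_coef(true_ability))   # close to zero, as it should be

Correcting for the measurement error, for instance via an errors-in-variables model, pulls the estimate back towards zero; this is the sense in which adjusting for reliability tends to weaken the apparent treatment effect.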
In short, this report has such considerable weaknesses that its claims need to be treated with scepticism. It also appears to be authored by people associated with BIA, and hence presumably with a certain vested interest. The issue of whether private education can deliver ‘superior’ education remains an interesting and open question.

Reference

Goldstein, H. (2015). Jumping to the wrong conclusions. Significance, 12: 18–21. doi: 10.1111/j.1740-9713.2015.00853.

Harvey Goldstein, University of Bristol; 21 June 2016
