The latest edition of Prof Greg Brooks’ work provides important insights – and raises questions.
We’ve been reading with great interest the updated edition of emeritus Professor Greg Brooks’ ‘What works for children and young people with literacy difficulties? The effectiveness of intervention schemes’. Professor Brooks has a thorough overview of literacy interventions in reading, writing and spelling, having conducted this analysis five times since 1998.
The resource is well worth a read for a variety of reasons. Firstly, his overall conclusions (pp 15-16) about the types of support required for students with literacy difficulties are clear and well-evidenced. Here are a few selected quotes:
Ordinary teaching (‘no treatment’) does not enable children with literacy difficulties to catch up. For the evidence on this, see the third edition.
Implication: Although good classroom teaching is the bedrock of effective practice, most research suggests that children falling behind their peers need more help than the classroom normally provides. This help requires coordinated effort and training.
Large-scale schemes, though expensive, can give good value for money.
Implication: When establishing value for money, long-term impact and savings in future budgets for special needs must be considered, particularly when helping the lowest-attaining children.
Good impact – sufficient to at least double the standard rate of progress – can be achieved, and it is reasonable to expect it.
Implication: If the scheme matches the child’s needs, teachers and children should expect to achieve rapid improvement. High expectations are realistic expectations in most cases.
The section on boosting literacy at the transition between Year 6 and Year 7 is particularly interesting, not only because of the startling decline that is evident in many countries at the primary / secondary divide, but also because of the amount of money thrown at this area by government, with only token attempts to evaluate the impact (see pp. 138-9). It is also illuminating to see how many of the so-called ‘gold standard’ studies by the EEF were excluded from Professor Brooks’ review:
The reasons for not mentioning 14 of the RCT evaluations in this report varied: non-significant findings, implementation or sampling problems, small samples, high drop-out, … which all go to show how difficult it is to produce robust and reliable findings, even (or especially) when rigorous research designs are adopted.
The sections on literacy interventions at secondary level (p. 157, p. 255) are obviously most relevant to our work. The most striking element is how few interventions exist where pupils are expected to catch up completely. Thinking Reading appears to be the only one where the time frame is open-ended: students graduate from the programme once they are reading fluently at their age level, so the number of lessons depends on how far behind they were when they started and on their particular rate of progress.
It is of course pleasing to see the data from two secondary schools showing such a positive impact for Thinking Reading students (p. 187), but for our entry in the next edition we look forward to providing a larger sample, assessed with a range of standardised measures, so that impacts on decoding, spelling and comprehension can be identified separately. In the meantime, we have some interesting data on GCSE impact to consider for a blog post in the near future.
However, the report leaves us with a number of questions:
- Is the EEF producing sufficiently useful evidence, given the scale of the government’s investment?
- Given how little has changed with respect to literacy attainment, what do school leaders need to do to improve outcomes for lower attaining students?
- What beliefs or entrenched interests are preventing us from implementing effective interventions, when there is clear evidence that this is realistic and practicable?