Education Policy, Reading Interventions, Research, School-wide Literacy

What Works?

The latest edition of Prof Greg Brooks’ work provides important insights – and raises questions.

What works for children and young people with literacy difficulties?

We’ve been reading with great interest the updated edition of Emeritus Professor Greg Brooks’ ‘What works for children and young people with literacy difficulties? The effectiveness of intervention schemes’. Professor Brooks has a strong overview of literacy interventions in reading, writing and spelling, having completed this analysis five times since 1998.

The resource is well worth a read for a variety of reasons. Firstly, his overall conclusions (pp. 15–16) about the types of support required for students with literacy difficulties are clear and well-evidenced. Here are a few selected quotes:

Ordinary teaching (‘no treatment’) does not enable children with literacy difficulties to catch up. For the evidence on this, see the third edition.

Implication: Although good classroom teaching is the bedrock of effective practice, most research suggests that children falling behind their peers need more help than the classroom normally provides. This help requires coordinated effort and training.

Large-scale schemes, though expensive, can give good value for money.

Implication: When establishing value for money, long-term impact and savings in future budgets for special needs must be considered, particularly when helping the lowest-attaining children.

Good impact – sufficient to at least double the standard rate of progress – can be achieved, and it is reasonable to expect it.

Implication: If the scheme matches the child’s needs, teachers and children should expect to achieve rapid improvement. High expectations are realistic expectations in most cases.

The section on boosting literacy at the transition between Year 6 and Year 7 is particularly interesting, not only because of the startling decline that is evident in many countries at the primary/secondary divide, but also because of the amount of money thrown at this area by government, with only token attempts to evaluate the impact (see pp. 138–9). It is also illuminating to see how many of the so-called ‘gold standard’ studies by the EEF were excluded from Professor Brooks’ review:

The reasons for not mentioning 14 of the [24] RCT evaluations in this report varied: non-significant findings, implementation or sampling problems, small samples, high drop-out, … which all go to show how difficult it is to produce robust and reliable findings, even (or especially) when rigorous research designs are adopted.

The sections on literacy interventions at secondary level (pp. 157 and 255) are obviously most relevant for us. The most striking element is how few interventions exist where pupils are expected to catch up completely. Thinking Reading appears to be the only one where the time frame is open-ended: students graduate from the programme once they are reading fluently at their age level, so the number of lessons depends on how far behind they were when they started and on their particular rate of progress.

[Photo: Literacy Teacher Dianne Murphy]

You can read the data from two secondary schools showing ‘remarkable’ impact for Thinking Reading students on p.187.*

However, the report leaves us with a number of questions:

  • Is the EEF producing sufficiently useful evidence, given the scale of the government’s investment?
  • Given how little has changed with respect to literacy attainment, what do school leaders need to do to improve outcomes for lower attaining students?
  • What beliefs or entrenched interests are preventing us from implementing effective interventions, when there is clear evidence that this is realistic and practicable?

* Update from our July 2017 impact evaluation:

All students catch up to their chronological age in reading remarkably quickly. In the four schools that shared progress data for our July 2017 evaluation, the average gain was over five years per student, and the average rate of progress was two months per lesson. Students were in the programme for an average of seven months (including school holidays) and progressed at a rate of nearly nine months for each month of intervention.
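The figures above hang together arithmetically, and it can be worth checking: a gain of just over five years (60 months) achieved in roughly seven months of intervention works out at about 8.6 months of reading gain per month, i.e. “nearly nine months for each month of intervention”. A quick sketch of that check, using only the averages reported above (the variable names are ours):

```python
# Sanity check of the averages from the July 2017 evaluation paragraph.
# Inputs are the figures reported above; names are illustrative only.
gain_months = 5 * 12       # average gain: just over five years, in months
duration_months = 7        # average time in the programme (incl. holidays)

rate = gain_months / duration_months
print(f"{rate:.1f} months of reading gain per month of intervention")
```

This prints a rate of roughly 8.6, consistent with the “nearly nine months” claim in the evaluation.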

Visit our website.

You may also be interested in:

Is Tackling Literacy Like Wrestling an Octopus?

Pulling the Strands Together

There is Hope

Why is there a reading problem in secondary schools?


  1. Thanks, Dianne. Your post touches on many of the problems encountered in ascertaining which programmes work best. Of course, as you’ve remarked many times, it would be so much better if we didn’t have these problems in the first place.
    Although Sounds-Write is used as a catch-up intervention in many primary and secondary schools and has a very good track record in this area, we have never participated in Greg Brooks’s surveys.
    Initially, in 2003, when we first started, we did try, but we came up against the problem of being unable to make reasonable comparisons from the data we tried to collect. Among other things, teachers want to use different tests, they forget to pre-test, they don’t post-test at the end of an agreed period of intervention, and they don’t post-test six months after the end of intervention (to see if the effects have washed out). But, most important of all, the periods and frequency of intervention sessions and the attendance of each individual pupil are not usually carefully monitored.
    All of this makes the collection of data extremely problematical. It’s also the reason why we decided to collect data on KS1 and KS2 classes, which, we think, gives a more reliable indication of how successful a programme is over the period of a whole year, with follow-up year after year. Unfortunately, not even a comparison against the population against which the tests are normed and standardised is enough for some (Dorothy Bishop, for example). A truly rock-solid study would have to be an RCT – but who is going to undertake it?
    Best wishes, John


    • Very pertinent comments, John. There is certainly variability between schools with regard to how well they evaluate interventions. We ask schools to use a tracking spreadsheet that we have designed, which includes pre-test, monitoring and post-test scores, yearly follow-up, and the number of lessons. With respect to the use of randomisation, it does seem to be at risk of obscuring other ‘quasi-experimental’ approaches which can actually be more informative. Zig Engelmann’s post “Socrates on Gold Standard Experiments” raises some very telling questions about randomisation as the key to the ‘gold standard’.


