It’s the time of year when we farewell Year 11 students, with a mixture of relief, anticipation, and sometimes a tinge of regret. For some, the promise of what they will do with their lives is so beautiful it is almost intoxicating. For others, not so much: those students who strove, who struggled, who despaired, and sometimes gave up; the ones whom we instinctively feel should have done better, but we know are likely to end up with grades at 3 or even below. And it’s at this time that we most wonder – could we have done something different?
There are many potential reasons why students struggle. The learning that is being assessed at GCSE has accumulated over the years of their education, both inside and outside school walls. Skills that bear a single name – like ‘essay writing’ – are in fact a composite of many different skills, each of which is itself a combination of more basic skills. Achievement comes from acquiring knowledge, then practising its application to mastery, then combining it with other knowledge, ad infinitum.
Often the reasons for failure or slow progress are hidden below the surface. It is not the presenting weakness that is the problem, but fractures in the layers of learning that lie beneath – layers that we assume students have, but in reality are incomplete or even absent. And beneath all this lies the murky stratum of how well they can actually read.
Here are seven things that get in the way of effective help for students in the bottom third. Fortunately, they are all things that we can change:
1 Assumptions rather than objective data
We assume that the student has problems which will prevent them from learning. We sometimes call this making a professional judgement, but it is, more accurately, pre-judgement.
2 Misdiagnosis
We ‘diagnose’ students as having problems or disabilities which prevent them from learning. We do this through the use of inadequate data, preconceptions about what low test scores actually mean, or a disability mindset where we are looking for a label to apply to the child.
3 Misunderstanding the role of motivation
We attempt to build motivation in order to promote achievement, instead of ensuring success in order to build motivation.
4 Ineffective intervention
The two main problems here are either weak programmes, whose design can only ever have limited impact, or weak execution. In both cases student achievement and motivation can actually decline rather than improve.
5 Withdrawal from lessons
Sometimes students deemed to be ‘at risk’ have their subject choices and/or time in class reduced so that they can attend more intervention. Although this might appeal to frustrated classroom teachers, heads of subject and the senior managers responsible for GCSE grades, it is rarely profitable – and the students themselves miss out on vital learning.
6 Low expectations
Students who arrive at secondary school with low baseline scores – for example, from KS2 SATs or CATs testing – are usually allocated a place in the ‘bottom sets’. Setting can have a major effect, not just on students’ self-perceptions, but on what their teachers expect of them, and therefore what they attempt to teach. Add to this the problems caused when poor behaviour is a criterion for allocation to the bottom set, and we have an invisible but very firm ceiling through which students are unlikely to rise.
7 Insufficiently detailed assessment
Almost always overlooked, and yet it is the first step to actually solving students’ learning difficulties.
To see how these apply, let’s take an example. Suppose I note that a student only makes superficial references to a character. Inferences – even fairly obvious ones – are overlooked. The student may repeat some phrases we have discussed in class, either orally or in writing, but on probing they show little or no understanding. I might decide that the student has a disability that means that they cannot learn this material, but I choose to look instead at what they know and how I’m teaching it.
What to do? I could do some inference training, or work on comprehension strategies. But could the problem be deeper? What is the student’s vocabulary like? I may be explaining in terms that other students understand, but what if this student doesn’t know some of those terms? What if the student seemed to acquire them in class, but didn’t remember later? Was there enough practice for every student?
And of course limited comprehension could be related to gaps in background knowledge. This is often apparent in students who have arrived from a different culture, even if their language skills are good. But it can also be an issue for students who have not had the opportunities to develop such knowledge. One reason for this can be limited life experience. Another possibility is that they have limited reading experience: they simply haven’t encountered enough print to grow their repertoire of more formal, precise vocabulary.
So we need to drill deeper still into these layers of learning. Just how well can the student read? The school may have some reading data, but in many cases this data is only taken on arrival in Year 7 and not followed up thereafter. There may be other tracking data – most commonly, schools seem to rely on the STAR test associated with Accelerated Reader. While AR may provide pages of reports to show Ofsted, practising teachers often find that the scores tend to bounce around and are unhelpful for analysing individual progress.
Even if we take a good standardised test, like the New Group Reading Test, a single score cannot be relied upon as definitive. Not only are there confidence intervals, but with low motivation it is possible for quite able students to appear as if they are in need of help. Running a second standardised test on students who score in the bottom third nearly always yields a number of students – sometimes up to half – whose scores significantly improve. A repeat test can thus help to weed out those whose low performance reflects motivation rather than a reading problem. (Which is still very useful information.)
While standardised tests may help us to sort students into groups, they do not tell us what we need to actually teach those students. Two students might get an identical score but have quite different gaps in vocabulary, background knowledge and decoding skills. To identify these gaps, we need to engage in ‘fine-grain’ assessment – a level of analysis that is not common in the secondary school curriculum.
For example, we might analyse their oral reading by tracking every error in a passage of reading; we might use word lists to look at their whole word decoding; or we might complete a detailed sound-spelling assessment that identifies exactly where their gaps in decoding are.
Once we have completed this fine-grain assessment, we are in a position to precisely identify the gaps in the student’s learning, which makes addressing them much more efficient. It’s only at this point that we can confidently begin to plan how we will help this student to catch up to where he or she should be.
This is one of the main barriers to changing the trajectories of students in the bottom third – we don’t assess them closely enough. And of course, the classroom teacher rarely has the time or opportunity for such a task – it is a role that requires training and comprehensive assessment tools. But it has to be done if we are serious about helping these students to make the progress that they should.
If you would like to read more about helping these students in lessons, see this post: Six ways to help struggling readers in the secondary classroom. For more detailed discussion on screening, assessment and intervention see our book (below).
If you’d like to talk about screening and assessment systems to help pinpoint why some students are having difficulty, we offer a one-day consultation with school leaders and a two-day workshop on fine-grain assessment of reading skills.
Visit our website.