The best way to understand the proper use of data and assessment is to look at an intricately designed course that makes use of it faultlessly. In other words, look at a Direct Instruction course. In Expressive Writing, for example, a lot of data is generated within each lesson, but it is not the teacher who makes use of it; it is the students. That is, the students get the chance to self-correct when they find out, during the ‘Check Your Work’ section of an activity, that they have got some of it wrong. But there is no action for the teacher here, other than circulating and checking that the students are in fact self-correcting. There is no instruction in the DI script for the teacher to gather data about how many mistakes students have made and redo the activity if there are too many. Evidently, the authors of the programme have judged that this is not a useful time for gathering that sort of data. The focus here is not on data-gathering but on repeated practice with rapid feedback.
The authors of Expressive Writing have judged that the time for gathering data about mastery, and reteaching where necessary, is not lesson by lesson, but at regular, well-spaced intervals. It is done through the mastery tests, which come every 10-15 lessons or so. It is at this point that the teacher makes decisions about what to reteach, based on specifically identified gaps in student knowledge. These are the interim assessments of the programme. But they are not designed to look like any final assessment, such as American AP exams or English GCSEs. They don’t even look like the final mastery test at the end of Expressive Writing. Instead, these interim assessments are precisely designed to test the components of writing that the programme has taught up to that point.
What we have here is a highly effective programme for improving the accuracy and effectiveness of students’ writing that looks nothing like an English Language GCSE or an American AP exam. But there is no doubt that it will help students do better in such exams, because it will mean that they are able to punctuate correctly, write clearly and keep control of tense. It will mean that they are able to lay out speech correctly. It will mean that they are able to structure their writing effectively into paragraphs. Courses like this are the best antidote to the tedious and ineffective method that has become so common in England and America, whereby teachers believe that the best way to improve performance in summative tests is to repeatedly do mocked-up versions of such tests.
DI courses are also an excellent remedy for the idea that teachers have to be continually gathering data and acting on it in every lesson. When a course is well designed, with repeated practice of multiple strands built into it, there is no need for this sort of frenetic data gathering. It may well be that some students are getting some things wrong some of the time, but instead of trying to act on that in the moment by adjusting our lesson plan on the fly, we should have a robustly designed programme of instruction which takes this into account by building in plenty of practice, along with rapid feedback and opportunities for student self-correction.
And even when all students are getting everything right, that doesn’t necessarily mean it’s time to move on, because a well-designed programme of instruction will continue practice beyond this point in order to make things really stick through overlearning.
There are no DI programmes available for much of what we need to teach. For example, there is no DI English literature course. But we should be looking to DI as the gold standard for how to design our own programmes of instruction. Whatever question we have about designing instructional sequences — whether it is how often to assess, how to design assessments, how to analyse them, or how much practice students need — we will find answers if we carefully study the masters at work by considering how the authors of DI programmes do these things.