Is this expected progress for such students or less than what has been shown for effective early reading interventions?
Shanahan’s response:
This is a very complicated question. No wonder the field has largely ducked it. Research is very clear that amount of instruction matters in achievement (e.g., Sonnenschein, Stapleton, & Benson, 2010), and there are scads of studies showing that various ways of increasing the amount of teaching can have a positive impact on learning (e.g., preschool, full-day kindergarten, afterschool programs, summer school programs).
Although many think that within-the-school-day interventions are effective because the intervention teachers are better or the methodology is different, there is good reason to think that the effects are mediated by the amount of additional teaching that the interventions represent. (Title I programs have been effective when delivered after school and in the summer, but not so much when delivered during the school day (Weiss, Little, Bouffard, Deschenes, & Malone, 2009); there are concerns about RtI programs providing interventions during reading instruction instead of in addition to it (Balu, Zhu, Doolittle, Schiller, Jenkins, & Gersten, 2015)).
Research overwhelmingly has found that a wide range of reading interventions work — that is, the kids taught by them outperform similar control-group kids on some measure or other — but such research has been silent about the size of the gains that teachers can expect from them (e.g., Johnson & Allington, 1991). There are many reasons for such neglect:
- Even though various interventions “work,” there is a great deal of variation in effectiveness from study to study.
- There is a great deal of variation within studies too — just because an intervention works overall doesn’t mean it works with everybody who gets it, only that the treated kids did better on average.
- There is a great deal of variation in the measures used to evaluate learning in these studies — for example, if an early intervention does a good job improving decoding ability or fluency, should that be given as much credibility as one that evaluates success with a full-scale standardized test that includes comprehension, like the accountability tests schools are judged by?
- Studies have been very careful to document learning by some measure or other, but they have not been quite as rigorous when it comes to estimating the dosages provided. In my own syntheses of research, I have often had to make rough guesstimates as to the amounts of extra teaching that were actually provided to students (that is, how much intervention was delivered).
- Even when researchers have done a good job of documenting the numbers and lengths of lessons delivered, it has been the rare intervention that was evaluated across an entire school year — and I can’t think of any examples, offhand, of any such studies longer than that. That matters because it raises the possibility of diminishing returns. What I mean is that a program with a particular average effect size over a 3-month period may show a smaller effect size when carried out for 6 or 12 months. (Such a program may continue to increase the learning advantage over those longer periods, but the average size of the advantage might be smaller.)
Put simply? This is a hell of a thing to try to estimate — as useful as it would be for schools.
One interesting approach to this problem is the one put forth by Fielding, Kerr, and Rosier (2007). They estimated that the primary grade students in their schools were making an average gain of one year for 60-80 minutes per day of reading instruction. Given this, they figured that students who were behind and were given additional reading instruction through pullout interventions, etc., would require about that many extra minutes of teaching to catch up. So, they monitored kids’ learning and provided interventions, and over a couple of years of that effort, managed to pull their schools up from about 70% of third graders meeting or exceeding standards to about 95% — and then they maintained that for several years.
Fielding and company’s general claim is that the effects of an intervention should be in proportion to the effects of regular teaching … thus, if most kids get 90 minutes of teaching per day and, on average, gain a year’s worth on a standardized measure, then giving some of the kids an extra 30 minutes of teaching per day should move those kids an additional three to four months each year, as the sketch below shows. That would mean that they would pick up an extra grade level for every two to three years of intervention. I’m skeptical about the accuracy of that, but it is an interesting theory.
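To make that proportionality idea concrete, here is a minimal sketch in Python. The constants and the function name are mine, chosen to match the made-up figures above; they are not taken from Fielding’s book or any study.

```python
# A minimal sketch of the proportionality idea above. Assumption (from the
# rough figures in this post, not from Fielding, Kerr, & Rosier directly):
# annual reading growth is proportional to daily minutes of instruction.

BASELINE_MINUTES = 90  # minutes/day assumed to produce one year's growth
BASELINE_GAIN = 1.0    # grade-equivalent years gained per school year

def expected_annual_gain(daily_minutes):
    """Projected grade-equivalent gain for a given daily dose of teaching."""
    return BASELINE_GAIN * daily_minutes / BASELINE_MINUTES

print(expected_annual_gain(90))   # 1.0   -> the typical year's growth
print(expected_annual_gain(120))  # ~1.33 -> an extra 3-4 months per year
# At roughly +0.33 years per year, picking up a full extra grade level
# takes about three years of intervention, as claimed above.
```

On this model, catch-up time is just the size of the gap divided by the extra annual gain — which is exactly why the claim is so tidy, and why I’m skeptical of it.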
Meta-analyses have usually reported the average effect sizes for various reading interventions to be about 0.40 (e.g., Hattie, 2009). For example, one-to-one tutoring has a 0.41 effect (Elbaum, Vaughn, Tejero Hughes, & Watson Moody, 2000). However, effect sizes tend to run smaller when learning is measured with the kind of full-scale standardized tests described above. In a review of such studies with students in grades 4-12, the average effect size with standardized tests was only 0.21 (Scammacca, Roberts, Vaughn, & Stuebing, 2015); and in another sample of studies, the impact on standardized comprehension tests was 0.36 (Wanzek, Vaughn, Scammacca, Gatlin, Walker, & Capin, 2016).
You can see how rough these estimates are, but let’s just shoot in the middle someplace: 0.25-0.30 (a statistic I obviously just made up, but you can see the basis on which I made it up — relying most heavily on the best studies, the best and most appropriate measures).
What does that mean? As long as we are talking about primary grade kids and typical standardized reading tests, the usual size of a standard deviation is about one year. In other words, if you took a 3rd grade Gates-MacGinitie and tested an average group of second and third graders with it, you’d find about one standard deviation difference in scores between the grade-level groups. (Those connections between amount of time and standard deviation change as you move up the grades, so you can’t easily generalize what I am claiming here to the upper grades.)
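Given that rough equivalence, converting an effect size into grade-equivalent time is simple arithmetic. Here is a small sketch; the 10-month school year and the function name are my own illustrative assumptions.

```python
# Converting an effect size into grade-equivalent time, under the rough
# assumption (primary grades only) that one standard deviation on a
# standardized reading test is about one year of growth.

SD_IN_GRADE_YEARS = 1.0  # one SD ~= one grade-equivalent year (assumed)

def effect_size_to_months(d):
    """Translate an effect size d (in SD units) into months of extra growth,
    assuming a 10-month school year."""
    return d * SD_IN_GRADE_YEARS * 10

print(effect_size_to_months(0.25))  # 2.5 -> about a quarter of a year
print(effect_size_to_months(0.30))  # 3.0 -> about a third of a year
```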
Thus, if you have a second-grader who is one full year behind at the beginning of the year (that is, the class gets a 2.0 grade-equivalent score in reading, but this child gets a 1.0), and the student is in a good classroom program and an effective intervention, we should see the class accomplishing a 3.0 (that would be the year’s gain for the year’s instruction), and the laggard student should score at a 2.25-2.30 (the year’s classroom gain plus the quarter to a third of a year from the intervention).
All things equal, if we kept up this routine for three to four years, this child would be expected to close the gap (the sketch below runs the numbers). That sounds great, but think of all the assumptions behind it: (1) the student will make the same gain from classroom teaching that everyone else does; (2) the intervention will be effective; (3) the intervention will be equally effective each year — no one will back off on their diligence just because the gap is being closed, and what was helpful to a second-grader will be equally helpful to a third-grader; (4) the intervention will continue to be offered year to year; and (5) the tests will be equally representative of the learning elicited each year.
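Here is how that arithmetic plays out year by year, as a rough sketch. The 0.28 figure is just the midpoint of my made-up 0.25-0.30 range, and the one-SD-equals-one-year conversion is the primary-grades approximation from above — neither is an estimate from any particular study.

```python
# A rough projection of the gap-closing arithmetic above. Both parameters
# are the made-up figures from this post, not estimates from any one study.

INTERVENTION_EFFECT = 0.28  # extra SD of gain per year (midpoint of 0.25-0.30)
SD_IN_GRADE_YEARS = 1.0     # one SD ~= one grade-equivalent year (primary grades)

def project_gap(start_gap_years, max_years=5):
    """Track how far behind the student is at the end of each year,
    assuming the class keeps pace and only the intervention closes the gap."""
    gap = start_gap_years
    for year in range(1, max_years + 1):
        gap -= INTERVENTION_EFFECT * SD_IN_GRADE_YEARS
        print(f"End of year {year}: {max(gap, 0):.2f} years behind")
        if gap <= 0:
            break

project_gap(1.0)
# End of year 1: 0.72 years behind
# End of year 2: 0.44 years behind
# End of year 3: 0.16 years behind
# End of year 4: 0.00 years behind -> the gap closes in the fourth year
```

If any of the five assumptions above fails in a given year, the per-year decrement shrinks and the timeline stretches accordingly.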
That tells you how much gain the group should make. Your question doesn’t say how far behind the kids were when they started, nor does it say how much gain was made by the 56% who didn’t reach grade level … so moving 44% of them to grade level in two years may or may not be very good. I could set up the problem — plugging in some made-up numbers that would make the above estimates come out perfectly, which would suggest that their intervention is having average effectiveness … or I could plug in numbers that might lead you to think that this isn’t an especially effective intervention.
I have to admit, from all of this, I don’t know whether their intervention is a good one or not. However, this exercise suggests to me that I’d be seeking an intervention that provides at least, on average, a quarter to a third of a standard deviation in extra annual gain for students. And, that has some value.
References
Balu, R., Zhu, P., Doolittle, F., Schiller, E., Jenkins, J., & Gersten, R. (2015). Evaluation of response to intervention practices for elementary school reading. Washington, DC: U.S. Department of Education.
Fielding, L., Kerr, N., & Rosier, P. (2007). Annual growth for all students… Catch-up growth for those who are behind. Kennewick, WA: New Foundation Press.
Johnson, P., & Allington, R. (1991). Remediation. In R. Barr, M. L. Kamil, P. B. Mosenthal, & P.D. Pearson (Eds.), Handbook of reading research (vol. 3, pp. 1013-1046). New York: Longman.
Scammacca, N.K., Roberts, G., Vaughn, S., & Stuebing, K.K. (2015). A meta-analysis of interventions for struggling readers in grades 4-12: 1980-2011. Journal of Learning Disabilities, 48, 369-390.
Sonnenschein, S., Stapleton, L. M., & Benson, A. (2010). The relation between the type and amount of instruction and growth in children’s reading competencies. American Educational Research Journal, 47, 358-389.
Wanzek, J., Vaughn, S., Scammacca, N., Gatlin, B., Walker, M.A., & Capin, P. (2016). Meta-analyses of the effects of tier 2 type reading interventions in grades K-3. Educational Psychology Review, 28, 551-576.
Weiss, H.B., Little, P.M.D., Bouffard, S.M., Deschenes, S.N., & Malone, H.J. (2009). The federal role in out-of-school learning: After-school, summer school learning, and family instruction as critical learning supports. Washington, DC: Center on Education Policy.
About the Author
Literacy expert Timothy Shanahan shares best practices for teaching reading and writing. Dr. Shanahan is an internationally recognized professor of urban education and reading researcher who has extensive experience with children in inner-city schools and children with special needs. All posts are reprinted with permission from Shanahan on Literacy.