Approximately 20% of U.S. children have difficulty learning to read (Grossen, 1997). It is widely recognized that early reading difficulties tend to persist over time. Juel (1988) demonstrated that first-grade students who struggled with letter and word recognition and with phonological processing were highly likely to have reading difficulties in the later elementary grades. In response, there have been numerous efforts to develop early intervention programs for young children at risk for reading difficulties (e.g., Adams, Foorman, Lundberg & Beeler, 1998; Slavin, Madden, & Wasik, 1997; Torgesen & Bryant, 1993).
In a recent article related to the prevention of reading difficulties, Torgesen (2002) identified two types of skills that are required for successful reading comprehension. These include:
- General language comprehension
- Word recognition fluency
Torgesen emphasized that a prerequisite for recognizing and comprehending words is the acquisition of phonemic awareness skills. Increasingly, educators are recognizing the importance of phonological awareness as a building block to literacy. Therefore, it is important to include strategies that promote phonological awareness, in addition to strategies that promote letter and word recognition fluency and oral language skills, in programs aimed at helping young children at risk for reading difficulties.
Many school psychologists have a strong knowledge base in the areas of instructional intervention and outcome evaluation, enabling them to serve as a resource to educators in the design and evaluation of literacy development programs. The purpose of this article is to describe how school psychologists can partner with reading specialists and classroom teachers to evaluate the benefits of early intervention reading programs in their districts. This article will describe five domains to be considered in comprehensive outcome evaluations of early intervention reading programs:
- Instructional outcomes (e.g., phonemic awareness skills and letter recognition fluency)
- Process variables (e.g., amount of instruction and active engagement in reading)
- Procedural integrity
- Social validity
- Family involvement
Additionally, specific measures to evaluate outcomes in each of these domains will be discussed (see Table 1).
Table 1. Domains of assessment for early literacy programs
| Assessment Domain | Brief Description | Examples of Measures |
| --- | --- | --- |
| Instructional Outcomes (Alphabet Recognition) | Student's fluency in recognizing upper- and lower-case letters of the alphabet | DIBELS (letter naming); WIAT-II Word Reading subtest |
| Instructional Outcomes (Phonological Skills) | Student's fluency with a variety of phonological skills, including rhyming, blending, and segmenting | DIBELS (initial sounds, phoneme segmentation); CTOPP (Phonological Awareness Composite) |
| Academic Learning Time | Amount of time a student is engaged in instruction (i.e., on-task behavior) | Systematic direct observation (e.g., CISSAR) |
| Amount of Instruction | Amount of time a student is provided with reading instruction (i.e., the dose of the intervention) | Log of the number and length of reading sessions |
| Social Validity | The degree to which participants in the intervention find it acceptable, fair and appropriate | TEI (parents); IRP-15 (teachers); CIRP (children) |
| Procedural Integrity | The degree to which a program is implemented as intended | Integrity checklists completed from direct observation or taped sessions |
| Family Involvement | A wide range of activities that can include engaging in learning activities at home and in the community | FIQ; PTIQ |
Instructional outcomes
Dynamic Indicators of Basic Early Literacy Skills (DIBELS)
The DIBELS (Good & Kaminski, 1996) was designed to assess early literacy skills of emergent readers. This measure identifies students who are not making sufficient progress in the acquisition of important early literacy skills, and it is useful for monitoring the effectiveness of reading interventions (Kaminski & Good, 1996). The DIBELS measures were designed to assess phonological awareness, knowledge of the alphabet and fluency with text. The measures assess a broad range of important early literacy skills (i.e., initial sounds, letter naming, phoneme segmentation, nonsense word reading) that are predictive of later reading proficiency (see http://dibels.uoregon.edu).
Comprehensive Test of Phonological Processing (CTOPP)
The CTOPP (Wagner, Torgesen, & Rashotte, 1999) is a norm-referenced measure used to assess phonological awareness, phonological memory and rapid naming. The primary uses of this instrument are to:
- Identify individuals who are below their peers in phonological skills
- Document a student’s progress in response to intervention
- Determine strengths and weaknesses among phonological processes
- Validate systematic instruction programs through research
The Phonological Awareness Composite of the CTOPP is especially useful for evaluating early literacy instruction. This composite combines three subtests (Elision, Blending Words and Sound Matching) to obtain a measure of a student’s ability to segment and blend sounds. These skills are thought to be of primary importance in later word decoding. The subtests also contain practice items that enable the examinee to learn the task and receive feedback before scoring begins. However, the samples used to derive the norms do not closely match those populations typically found within urban schools (i.e., students from racially and ethnically diverse backgrounds). In addition, this measure does not have alternate forms; therefore, it has limited utility for repeated measurement.
Wechsler Individual Achievement Test-Second Edition (WIAT-II)
The Word Reading subtest of the WIAT-II (Wechsler, 2001) provides a norm-referenced measure of reading decoding. The examinee is required to name letters of the alphabet, identify and generate rhyming words, match similar beginning and ending sounds, match sounds with letters and letter blends, and read words in isolation. The discrete skills measured in this subtest can provide useful information to parents and teachers about which skills the student has developed and which should be targeted for supplemental instruction. However, as with the CTOPP, the sample used to obtain the norms for the WIAT-II does not closely match populations typically found within urban schools. Additionally, the test floor of this instrument can be inadequate for children below 6 years of age (Flanagan, Ortiz, Alfonso, & Mascolo, 2002).
Process variables
Academic learning time
Academic learning time (ALT), defined as the amount of time a student is actively, successfully and productively involved in learning, is strongly related to academic achievement (Gettinger & Siebert, 2002). ALT comprises a number of components, including allocated time, instructional time, engaged time, and successful and productive learning time. For further explanation of ALT and its components, the reader is directed to Gettinger and Siebert (2002). School psychologists can use systematic, direct observations to assess academic learning time during reading interventions. The Code for Instructional Structure and Student Academic Response (CISSAR; Stanley & Greenwood, 1981) is an example of a system that school psychologists can use to assess environmental instructional variables. The CISSAR can be used to measure students' active responses, including reading aloud, asking questions, answering questions and engaging in academic talk. Off-task behaviors and teacher behaviors can also be assessed with this system.
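Once an observation has been coded, engaged time is typically summarized as the percentage of observed intervals in which the student showed an active academic response. The sketch below is a minimal, hypothetical illustration of that tally, not the CISSAR coding scheme itself; the 10-second interval length and the sample codes are assumptions for demonstration only.

```python
# Minimal sketch of summarizing interval-based engagement codes.
# Assumes an observer recorded, for each 10-second interval, whether the
# student showed an active academic response (1) or was off-task (0).
# This is an illustration only, not the CISSAR coding system.

def percent_engaged(intervals):
    """Return the percentage of observation intervals coded as engaged."""
    if not intervals:
        raise ValueError("No observation intervals were recorded.")
    return 100 * sum(intervals) / len(intervals)

# Hypothetical 5-minute observation (30 ten-second intervals).
observation = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1, 1, 1, 0, 1, 1,
               1, 1, 1, 0, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1]

print(f"Academic engagement: {percent_engaged(observation):.0f}% of intervals")
```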
Amount of instruction
Torgesen (2002) argued that children with reading difficulties need more intense instruction in reading (i.e., more learning opportunities) compared with peers with average reading skills. Amount of instruction is a process variable that can be recorded quite simply by logging the number of reading sessions that made up the intervention.
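As a hypothetical illustration of how this dose information might be summarized, the short sketch below tallies the number of sessions and the total minutes of instruction each student received from a simple session log; the student names, session lengths and record layout are assumptions for demonstration, not part of any published measure.

```python
from collections import defaultdict

# Hypothetical session log: (student, minutes of reading instruction).
session_log = [
    ("Ana", 20), ("Ana", 20), ("Ana", 15),
    ("Ben", 20), ("Ben", 25),
]

# Tally the dose of the intervention for each student: the number of
# sessions attended and the total minutes of instruction received.
dose = defaultdict(lambda: {"sessions": 0, "minutes": 0})
for student, minutes in session_log:
    dose[student]["sessions"] += 1
    dose[student]["minutes"] += minutes

for student, totals in sorted(dose.items()):
    print(f"{student}: {totals['sessions']} sessions, {totals['minutes']} minutes")
```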
Procedural integrity
Procedural integrity refers to the degree to which a program is implemented as intended. With regard to interpreting results from literacy development programs, measures of procedural integrity provide an index of the degree to which discrete components of the program were implemented. Procedural integrity can be assessed through direct observations of instruction or by audio- or video-taping sessions and coding them at a later date (Ehrhardt, Barnett, Lentz, Stollar, & Reifin, 1996). Integrity checklists should be developed prior to the implementation of the program and should include the steps to follow in the lessons. Additionally, checklists can include process variables such as the amount of praise provided to children during the session and the level of student engagement.
Social validity
Social validity refers to the degree to which participants (e.g., students, teachers, parents and administrators) in behavioral and academic interventions find them acceptable, fair and appropriate.
Treatment acceptability is one aspect of social validity, referring to the perceived fairness and appropriateness of intervention procedures. Treatment acceptability is an important variable to assess in outcome evaluations of early literacy programs, as it helps determine the likelihood that the instructional procedures in the program will be implemented (Reimers, Wacker, & Koeppl, 1987). Several scales can be used to assess treatment acceptability. The Treatment Evaluation Inventory (TEI; Kazdin, 1980) for parents and the Intervention Rating Profile (IRP-15; Witt & Elliott, 1985) for teachers are two examples. The Children's Intervention Rating Profile (CIRP; Witt & Elliott, 1985) is a brief scale for assessing child perceptions of acceptability.
Family involvement
Both education researchers (Christenson & Sheridan, 2001; Miedel & Reynolds, 2000) and policymakers, such as the U.S. Department of Education (www.ed.gov/pubs/CompactforReading/kit_ack.html), encourage family involvement in early reading programs. Family involvement represents a wide range of activities, including meeting children's basic health and safety needs, communicating with teachers and administrators, serving as a parent volunteer or in school governance, and engaging in learning activities at home and in the community (Epstein & Dauber, 1991). In comparison to interventions based solely in the school, reading programs with a family involvement component have the potential for better academic outcomes because caregivers are able to support the acquisition of reading skills in the home (Hoover-Dempsey & Sandler, 1995). In addition, caregivers can support their child's education by ensuring that the child regularly attends school and by explicitly teaching the value of learning.
Although research on the assessment of family involvement in education is still in its infancy, two measurement tools show particular promise for evaluating early reading programs. The Family Involvement Questionnaire (FIQ; Fantuzzo, Tighe, & Childs, 2000) is administered to caregivers of children in preschool to first grade and measures three dimensions of family involvement: school-based involvement, home-school conferencing and home-based involvement. Another sound measure of family involvement is the Parent-Teacher Involvement Questionnaire (PTIQ; Kohl, Lengua, McMahon, & Conduct Problems Prevention Research Group, 2000). The PTIQ has both a parent and a teacher version and has been validated with children in kindergarten and first grade. Other methods of assessing family involvement in early reading programs include asking parents to estimate the level of their involvement in various education-related activities (e.g., Hoover-Dempsey, Bassler, & Brissie, 1992) and reviewing school records to calculate the frequency and purpose of home-school contact.
Conclusions
Early recognition of reading difficulties and effective intervention to promote literacy skills are important to prevent life-long educational and social struggles. Given their skills in assessment and outcome evaluation, school psychologists can play an important role in working with educators to assess the effectiveness of early intervention reading programs in their school districts. The purpose of this article was to describe domains to consider when developing an outcome evaluation plan, as well as specific measures that can be used to assess each of these domains. Information from outcome evaluations of early literacy programs can be used to monitor children's progress, determine whether the current literacy program in the school district is effective or needs to be modified, and provide a rationale to administrators for continued program funding.
This article has been posted with the permission of NASP as part of the NASP-Reading Rockets Partnership. NASP retains the copyright of these materials. All reprint or use permission should be directed to NASP via [email protected].