OSEP Grant Study


Maryanne Wolf






UCLA Graduate School of Education and Information Studies





Laura Rhinehart, Assistant Researcher


A new literacy program will be implemented at UCLA Lab School. Dr. Wolf's team will support the program's implementation and launch a research project through CONNECT to study its effectiveness. The school will provide early assessment to students in Kindergarten using a newly designed digital screener whose game-like tasks correspond to the four major areas of early reading predictors (Ozernov-Palchik et al., 2017). Participating teachers will be trained: 1) on the information available from the digital screener; 2) on evidence-based reading programs; 3) on how to match the profiles of the students with the emphases in the programs; and 4) on how to adapt components in the interventions to amplify the emphases needed by groups of children with dyslexia. Based on the four profiles of children that emerged from the Ozernov-Palchik et al. (2017) study, we project that there will be similar clusters of students in this study who will benefit from carefully adapting the components of each intervention to their literacy needs. Finally, we will examine the factors that might moderate intervention effectiveness (e.g., different populations of students, the poverty level of the school, the language of instruction model of the school, and public/private setting). We are conducting separate studies involving students in Kindergarten through Grade 3 at a total of four different schools in Los Angeles.


This project will address the following research questions:
i. Does the digital screener accurately and efficiently identify discrete groups of kindergarten students with or at-risk for dyslexia and/or reading failure?
ii. Are early evidence-based interventions more effective in improving student reading achievement when differentially targeting the particular characteristics of students with, or at-risk of, dyslexia and/or reading failure?
iii. Do the effects of the interventions persist through the end of second grade? If so, for whom do these effects persist or not persist?
iv. How effective is the model at helping teachers: 1) make better use of early and ongoing assessment; 2) use differential interventions and adapt them to the needs of students; and 3) collaborate with parents to support students’ reading development, especially for students with, or at-risk of, dyslexia?


The goals of this project are to:
i. determine the ability of a new, digital screener both to identify students with, or at-risk of, dyslexia and in the process to differentiate profiles of children with discrete learning characteristics in Kindergarten;
ii. determine the overall effectiveness of different systematic intervention programs for particular profiles of students identified in Kindergarten with, or at-risk of dyslexia and/or reading failure;
iii. determine whether there are added benefits to students who receive interventions targeted and adapted to their areas of weakness over time;
iv. improve the training of teachers in multiple, evidence-based interventions and in adapting emphases in interventions according to needs of students over time;
v. increase the frequency and efficacy of elementary school personnel to understand and implement assessment results and to communicate results to parents so as to collaborate with parents in establishing high expectations for each student with, or at risk for, dyslexia.


The first important contribution of this research is that a new digital screener will be used and tested in this project as a potential universal tool for Kindergarten children in California. Related benefits for teachers include learning how to use this information to provide evidence-based reading research and strategies for improving reading achievement for students with, or at-risk of, dyslexia. The second major contribution is that the educational community can use our results to provide earlier, more targeted, and appropriately differentiated instruction that can be adapted to the changing needs of the students. The same brain organization that makes these different types of individuals dyslexic in their early learning will give many of them significant advantages later in their lives, notably in areas that reward skills in pattern analysis and visual processing like art, architecture, and surgery; radiology and finance; and entrepreneurship. But first we must keep them on track in school during the time of their greatest challenges. Few discoveries are more important to those of us who study dyslexia than to be able to predict it before the child has to endure ignominious public failures before peers, parents, and teachers. Not only does this frequent pattern of failure increase the risk of dropout, delinquency, and imprisonment, but it prevents often singularly talented individuals from making their potential contributions to society.
We hope to demonstrate to policy-makers, teachers, and parents of children with dyslexia, or any form of reading challenge, that the earlier we can provide information in usable form, the more likely it is that we can help children reach their potential. Finally, we hope that this demonstration model will be testimony to the fact that whatever we learn to do for children with dyslexia benefits all children, all teachers, and the whole of society.


At the end of the project, we will create a Replication Guide for dissemination and replication in local education agencies (LEAs) and State education agencies (SEAs).

Findings will also be used to write manuscripts for publication in peer-reviewed journals.




Students must be in kindergarten in the first year of the study. Their teachers and parents are also eligible for this study.

Since the assessment battery is only available in English at this time, students who do not speak English will be excluded. Students with a disability that precludes their participation in standard academic assessments will also be excluded.

All teachers who teach or support students in our target grade levels are eligible to participate.

Parents who do not read English well enough to complete the consent form or background survey questions will be excluded.

The PI/coPI will determine student and parent eligibility by asking teachers if their students and their parents meet our eligibility criteria. We will ask teachers if any of their students do not typically participate in school-wide assessments because of a disability. In addition, we will ask teachers if there are students in their class who do not take assessments in English because they are still learning English. Students who have disabilities or English skills that prevent them from completing our assessment will not be screened, so they will not be eligible for our study.

To determine parent eligibility, the PI/coPI will ask participating teachers if there are any parents of students in their class who do not understand English. Students whose parents do not understand English will not be eligible for our study.


The three-year study outlined here will be performed at the UCLA Lab School. Kindergarteners attending the Lab School will be eligible for the study, and participating students will be followed for three years, until the end of 2nd grade. (We are conducting a similar study in three elementary schools in LAUSD.)

As part of its ongoing practices, the Lab School has begun a new literacy intervention program with the support of Dr. Maryanne Wolf, a noted UCLA literacy scholar. This research project is focused on studying the implementation and effectiveness of the program at the UCLA Lab School. As part of this research project, Dr. Wolf and her team will analyze the data collected from Lab School students' assessments and the interventions in which they participated. Additionally, the Wolf team hopes to collect data through observation of interventions as well as voluntary surveys of Lab School teachers and parents of participating students once COVID restrictions allow for in-person classrooms.

The literacy intervention program will yield results on students' reading performance, which Dr. Wolf's team will analyze as part of this research project. Student data collected in year one include assessments of all eligible kindergarteners, who are screened using a new, game-like digital literacy screener on an iPad (EarlyBird). EarlyBird is a 20-minute, tablet-based game that assesses children’s literacy skills while they play. This screener combines a number of evidence-based pre-literacy assessments, including both oral and print assessments, into a single screener. Students will complete the screener at three time points before they begin intervention: fall kindergarten, spring kindergarten, and fall 1st grade. Students will take the following EarlyBird subtests at each timepoint: Blending, Letter Names, Letter Sounds, Vocab Receptive Matching, Oral Sentence Comp, and Rapid Automatized Naming (RAN).

Based on students’ performance on the EarlyBird assessment at these time points, the Lab School, with the support of Dr. Wolf's research team, will assign students to one of six reading profiles: advanced, average, phonological risk, naming speed risk, multi-deficit risk, and letter knowledge risk. Students in each of the four risk groups will receive targeted intervention based on their risk group, or risk profile. These profiles are based on the risk profiles identified in the Ozernov-Palchik et al. (2017) study, on which the PI is a coauthor. Given that the base rate of children with dyslexia, or more prevalently, those at risk for reading failure, ranges from 8% to 40% (Elliot & Grigorenko, 2014), it is expected that between 5 and 25 Lab School students will be identified for intervention.

In first grade, year two of the study, students who have been identified as being in one of the four risk groups will participate in 100 contact hours of explicit treatment programs that have been selected to address their specific profiles of literacy need. Students will receive the intervention for one hour a day, four to five days a week for 20-25 weeks in small groups. All students will again be screened on EarlyBird in fall and spring of 1st grade. Students in the intervention will be tested at three additional targeted midpoint sessions for a total of five testing points at 0, 25, 50, 75 and 100 contact hours of treatment in order to best model change over the treatment period.

Additionally, in first grade, all students will be screened on a battery of widely used and validated measures of reading to refine the digital screener’s reliability and validity. Trained testers from our study will administer these assessments individually to students in a private location at each elementary school, or over a Zoom session. This battery will take approximately 30 minutes per student. Students will take DIBELS subtests including letter naming fluency, phonemic segmentation fluency, nonsense word fluency, word reading fluency, and oral reading fluency. Students who are identified at-risk on the DIBELS assessment will be compared to students identified at-risk by the EarlyBird assessment.

In second grade, students who remain in one of the four risk groups, as determined by outcomes from the EarlyBird reading screener, will again participate in 100 contact hours of explicit treatment programs that have been selected to address their specific profiles of deficits. All 2nd graders will be screened using EarlyBird. Students in the intervention will again be tested at three additional targeted midpoint sessions for a total of five testing points at 0, 25, 50, 75 and 100 contact hours of treatment.

Using a repeated measures design for each subject, an index of treatment response will be developed to identify level of response to their particular intervention. A subject-level database will provide ongoing data screening, data correction, and quality control monitoring to ensure that the project database is valid on an ongoing basis.

Growth over the 100 intervention hours will be modeled, with five repeated measurements used to increase the reliability of individual response estimates. Outcomes will be evaluated within a hierarchical linear modeling framework that allows explicit modeling of change over time, individual differences (child profile classification), and instructional group variances. This longitudinal analysis is designed as multilevel (i.e., hierarchical and nested) data, modeling individual (Level 1) variables (i.e., changes in a child’s performance over time), Level 2 fixed effects (i.e., time-invariant individual characteristics/deficit profile type of risk characteristics) that might explain inter-individual variability in growth curves, and Level 3 fixed effects (e.g., shared instructional group membership/class/school factors) that can help reduce standard errors for individual responses.
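As a concrete illustration, a multilevel growth model of this general shape could be sketched as follows. This is a minimal, hypothetical example on simulated data using statsmodels' MixedLM; the variable names (score, hours, profile, child) and all parameter values are assumptions for illustration, not the project's actual measures or estimates. It fits a random intercept and random slope per child, with profile type as a time-invariant fixed effect moderating growth.

```python
# Hypothetical sketch: growth over 0-100 contact hours with five repeated
# measurements per child, child-level random intercepts and slopes, and a
# profile-type fixed effect. All names and values are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)

rows = []
for child in range(24):
    intercept = 20 + rng.normal(0, 3)          # child-specific starting level
    slope = 0.10 + rng.normal(0, 0.03)         # child-specific growth per hour
    profile = ["phonological", "naming_speed"][child % 2]
    for hours in (0, 25, 50, 75, 100):         # the five testing points
        rows.append({"child": child, "profile": profile, "hours": hours,
                     "score": intercept + slope * hours + rng.normal(0, 1)})
df = pd.DataFrame(rows)

# Level 1: change over time (hours); Level 2: profile as a time-invariant
# fixed effect; random intercept and slope capture individual differences.
model = smf.mixedlm("score ~ hours * profile", df,
                    groups="child", re_formula="~hours")
fit = model.fit(method="lbfgs")
print(fit.summary())
```

A third level (shared instructional group or classroom) could be added via a variance-components formula, at the cost of a more complex specification.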

This study represents a complex cluster-randomized trial comparing multiple interventions. To estimate a design effect emerging from the cluster randomization, intraclass correlations will be calculated from our published data. Power considerations for the repeated measurement component of the design relate to the ability to detect growth parameter differences within each component of the treatment, but also across the whole duration of intervention. Preliminary growth curve models have demonstrated slope reliabilities of .65 - .80 from participants receiving a similar intervention. Based on these estimates and utilizing the simulation results from Hertzog et al. (2008), we anticipate a power from .67 to .83 (median = .79) for detecting differences in slopes.
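A power figure of this kind can also be approximated by simulation. The sketch below is a hypothetical Monte Carlo check, not the study's actual power analysis: it fits an ordinary least-squares slope per child across the five testing points (0-100 contact hours) and t-tests the slope difference between two arms. The slope means, slope variability, noise level, and sample sizes are all assumed values for illustration.

```python
# Hypothetical Monte Carlo power sketch for detecting a between-arm
# difference in growth slopes. All parameter values are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
hours = np.array([0.0, 25.0, 50.0, 75.0, 100.0])  # the five testing points
hc = hours - hours.mean()                          # centered time points
sxx = (hc ** 2).sum()

def arm_slopes(mean_slope, n, slope_sd=0.03, noise_sd=2.0):
    """OLS slope per child from five noisy repeated measurements."""
    true = rng.normal(mean_slope, slope_sd, size=n)
    y = 20 + true[:, None] * hours[None, :] + rng.normal(0, noise_sd, (n, 5))
    return (y - y.mean(axis=1, keepdims=True)) @ hc / sxx

def simulate_power(n_per_arm=15, slope_a=0.10, slope_b=0.14,
                   n_sims=1000, alpha=0.05):
    """Proportion of simulations where the arms' slopes differ significantly."""
    hits = 0
    for _ in range(n_sims):
        a = arm_slopes(slope_a, n_per_arm)
        b = arm_slopes(slope_b, n_per_arm)
        if stats.ttest_ind(a, b).pvalue < alpha:
            hits += 1
    return hits / n_sims

power = simulate_power()
print(f"Estimated power: {power:.2f}")
```

In practice the project's own intraclass correlations and slope reliabilities would replace the assumed values above.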

Students in the intervention will participate in one of four treatment programs: RAVE-O only, Wilson only, Empower and Wilson, and Empower and RAVE-O. The treatment will be provided by experienced teachers trained in the specific intervention programs with ongoing treatment integrity monitoring. Intervention teachers in our study will attend trainings for one or more interventions. RAVE-O and Empower require two days of training each, and Wilson requires one day of training. We will work with teachers and their principal to determine which teachers will provide each of the three intervention programs.

Teacher instruction during implementation of the interventions will be observed to determine fidelity of implementation. Program implementation data will be collected from teachers through observation of each of the four small-group interventions two times each year (1st grade and 2nd grade). The intervention fidelity instrument assesses adherence to the major components of the intervention and allows for the collection of both quantitative and qualitative data focusing on observable teacher behaviors for each lesson activity. (See attached Classroom Observation Form/Teacher Fidelity Form.) Personally identifiable information will not be listed on the form. Instead, a number identifying the teacher will be written in the blank on the form next to “Teacher.” Based on the entire observation period, the observers will provide an overall “percent of correct implementation” score. This score will be recorded in our research records, but it will not be linked to personally identifiable information. Intervention teachers will be asked to schedule observation times with research staff to conduct observations of two of their intervention sessions for each intervention, each year. Observations will last for the whole intervention session, but not more than 30 minutes each time. We will train all observers.

All of our observers will hold, minimally, a master’s degree or higher in education or a related field. Observers will attend the teacher training for the intervention program they are observing. In addition, they will be trained on the fidelity checklist. The training is one day in length and includes a review of the intervention to ensure familiarity with the curriculum and implementation, and completion of practice observations with videos of intervention sessions to establish interrater reliability. Observers must obtain a minimum interrater reliability of 90% prior to conducting observations in the field.

Two types of teachers will be part of our study: classroom teachers and intervention teachers. Intervention teachers will provide the interventions, and classroom teachers will be asked to allow their students to take the reading assessments that are part of our study and to allow selected students to participate in reading interventions. Intervention teachers will be asked to fill out short feedback forms on their satisfaction with the intervention training. (They will not put their names on these forms.) Both classroom teachers and intervention teachers will fill out surveys to determine their satisfaction with the intervention program. (Again, they will not put their names on these feedback forms.) In addition, both classroom teachers and intervention teachers will be asked to complete a short background survey, which will be linked to their participant number in our database. In year one, kindergarten classroom teachers will be asked to attend an hour-long training to learn about administering EarlyBird, so they can support administration during online learning, if necessary.

Parents of Lab School students will be asked to fill out a student background survey at the beginning of the year, and a parent satisfaction survey at the end of the year. Parent survey data on their child will be added to our database and linked to the child. Information from the parent satisfaction survey will not be added to our database, but will be used to improve and refine our program.

Student demographic data will be collected from the school. This data will include the child's name, gender, date of birth, race/ethnicity, language program at school, special services, and parent income.

School data on students will be linked to names to enable us to link it to the data we collect in the field. Once linking is complete, a project study ID will be assigned, and identifying information will be stripped from the database. The file linking the project study ID to identifying information will be encrypted and stored in a directory that is only accessible by the PI and coPI. Thus, all analysis databases for this project will be stored on a secure server and maintained without identifying information. In addition, only the PI, coPI and project data managers will have access to these databases.
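The linking and deidentification workflow described above could be sketched roughly as follows. This is a minimal, hypothetical illustration using pandas; the column names, sample records, and study-ID format are invented for the example and are not the project's actual files.

```python
# Hypothetical sketch of linking school data to field data, assigning a
# study ID, and splitting off the encrypted key file. All names invented.
import pandas as pd

# School roster with identifying info, and field data keyed by name.
roster = pd.DataFrame({"name": ["Ada", "Ben"],
                       "dob": ["2015-01-02", "2015-03-04"]})
field = pd.DataFrame({"name": ["Ada", "Ben"],
                      "screener_score": [41, 37]})

# 1) Link school and field data on the identifier.
linked = roster.merge(field, on="name")

# 2) Assign a project study ID to each child.
linked["study_id"] = ["S%03d" % i for i in range(1, len(linked) + 1)]

# 3) Split into a key file (kept encrypted, PI/coPI access only) and a
#    deidentified analysis file with identifying columns stripped.
key_file = linked[["study_id", "name", "dob"]]
analysis = linked.drop(columns=["name", "dob"])

print(analysis)
```

The key file would be stored encrypted and separately from the analysis database, as the protocol specifies.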

Parents at the Lab School will be notified via the parent newsletter about the study as part of their blanket consent review process. Additionally, we will collect signed consent forms for all participating teachers. Although Lab School teachers have been notified of the study through Lab School administration, we will distribute and collect a separate detailed consent form from Lab School teachers. Lab School teachers will not be compensated for participating in this study.

Note: Beginning next year, when our participants are in first grade, we are planning for students to take the assessments in person, teachers to attend intervention trainings in person, and teachers to deliver the interventions in person, but we will submit changes to the IRB if school buildings remain closed/closed to researchers. We are in discussion with intervention and assessment developers who are working to create online options for assessments, teacher professional development, and intervention programs.

Participants may withdraw from this research study at any time and for any reason, without penalty. To withdraw, participants (i.e., students' parents or teachers) can contact researchers. For evaluation and reporting purposes, researchers may ask a participant for their reason(s) for early withdrawal. If a participant withdraws from the study after giving consent, their data will not be retained or used. The data collected about the participant up to the point of withdrawal will be excluded from the database and any analysis.




EarlyBird digital screener, DIBELS assessments


Digital screener: The EarlyBird digital screener is a 20-minute, tablet-based game that assesses the child while they play. The digital screener combines a number of critical evidence-based pre-literacy assessments into a single screener. It is a comprehensive assessment that gives information about risk of developmental dyslexia and general reading risk. The digital screener can be given to a whole class or a large group of about 10-15 students, and should take about 20 minutes.

To validate the screener, we will be assessing students on a variety of paper and pencil assessments (DIBELS assessments) in first grade. In its current form, each subtest takes about 5 minutes to complete, and the entire DIBELS battery/screener takes approximately 30 minutes.

The following DIBELS assessments will be given in first grade:
1. Letter Naming Fluency
2. Phoneme Segmentation Fluency
3. Nonsense Word Fluency
4. DIBELS Oral Reading Fluency
5. Retell Fluency
6. Word Use Fluency


We chose our measures based on a research study by Ozernov-Palchik et al. (2017). Dr. Wolf is also an author of this study. The measures used in that study were shown to be the best ones to determine students’ subtype of reading challenge.

Additionally, a new digital screener, which is shorter and more engaging, will be piloted to determine if it is as effective as the longer battery of paper and pencil assessments.


No informed consent for students, but participating teachers will be asked to fill out a consent form.


The risks of participating in this study are no greater than the risks of everyday life for this subject population.




This study is confidential. Participants will be assigned a subject ID number that will identify all data collected. A key file linking the subject ID number with their actual identifying information will be kept confidential, accessible only by the data manager and key investigators. The key file will be destroyed at the completion of human subjects interaction. If data are shared outside the research team, they will be deidentified and the key will not be shared.


For fall 2020, we are requesting access to the screener that the Lab School will be administering in the fall. The remainder of the research (observations, interventions, etc.) will only begin once the UCLA Lab School is open to researchers.


Maryanne Wolf, the P.I., has an ongoing relationship with the Lab School.


Yes, see attached consent form


Kindergarten, 1st, and 2nd grade teachers, along with relevant support staff, will be involved. Hilary Dearth will also be involved.

Teachers will lead the intervention groups, beginning next fall. We imagine the same teacher will lead the RAVE-O, Wilson, and Empower groups, and we will work with administration at the Lab School to determine the best teacher, teachers, or support staff person to provide the reading interventions.




Student demographic information: gender, age, race/ethnicity, language(s) spoken, special services/complex learners eligibility, parent income range







UCLA IRB#20-001008


Approved by UCLA IRB (see attached)

