Universal Screening

Universal Screening for Reading Difficulties

Screening Tools - Selection and Use

What to Screen

Decoding Dyslexia CA has created a downloadable summary of Screening by Domain Area and Grade Level, based on recommendations from the National Center on Response to Intervention, the RtI Action Network, the International Dyslexia Association, and other state departments of education, for use in establishing best practices.


Screening by Domain Area and Grade Level


Since “dyslexia is strongly heritable, occurring in up to 50% of individuals who have a first-degree relative with dyslexia” (Gaab, 2017), initial screening should include family history. In addition, if a student shows signs of being at risk for dyslexia, it is important to closely monitor siblings as well. Teacher input on a child’s phonological, linguistic, and academic performance is also essential; teachers can complete screening tools that ask them to rate a child’s abilities on a scale to measure risk of reading disability.


When to Screen

Screening should begin in the fall of kindergarten (recommended within the first month of school) and occur at least three times a year (fall, winter and spring) through third grade. Continued annual screening for fourth grade and up is recommended. It is imperative for screening to occur for all children (including English Language Learners), not just the ones “at risk” or who have already been determined to have reading failure. Screeners should target skills that are relevant for both the grade level and the time in the school year when the screener is administered.

Sometimes teachers raise concerns about assessing students early in kindergarten, before skills have been taught. However, there is significant research on the benefits of screening for emergent literacy skills (prior to the start of reading instruction in elementary school) to identify students who may be at risk for later reading difficulty, so that additional support can be provided proactively. Early support reduces the likelihood that children will later receive a learning disability classification or experience significant academic difficulties (Wilson & Lonigan, 2010). The three emergent literacy skills that are most predictive of reading ability are phonological awareness, print knowledge, and oral language (Lonigan, 2006; Lonigan et al., 2008; Whitehurst & Lonigan, 1998).

When identification and interventions for young children struggling to read are delayed, the variance of individual differences in reading will inevitably increase, widening the gap between strong and struggling readers (McNamara et al., 2011).

Selection Criteria

Many commercially available screeners exist, but not all of them have been well researched. Few screeners for students at risk for dyslexia are comprehensive in all areas that need screening, so it is highly likely that a school will need to use more than one screening tool. Screening tools should be selected with caution. This section focuses on important criteria to consider when selecting a screening tool, as well as resources available for comparing screeners. The good news is that, while you should understand the selection criteria, a number of free resources have already rated universal screening tools against the criteria below.

Some of the Selection Criteria Factors to Be Considered Are:

Classification Accuracy – the extent to which a screening tool is able to accurately classify students into “at risk for reading disability” and “not at risk for reading disability” categories. Classification accuracy should be a primary area of importance and focus.

Generalizability – the extent to which results generated from one population can be applied to another population. A tool is considered more generalizable if studies have been conducted on larger, more representative samples.

Reliability – the consistency with which a tool classifies students from one administration to the next. A tool is considered reliable if it produces the same results when the test is administered under different conditions, at different times, or using different forms. Reliability should be 0.70 or greater. Reliability is necessary, but not sufficient, for a quality screener; to be of value, a screener must also be valid.

Validity – the extent to which a scale measures what it actually intends to measure, leaving nothing out and including nothing extra. Validity should be 0.60 or greater. In the case of a reading screener, it is validity that indicates how completely and accurately the assessment captures the reading performance of all students who take it. Validity is both much harder to achieve than reliability and far more important.
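As a rough illustration of the classification accuracy criterion, the sketch below compares a screener’s fall “at risk” flags with students’ actual reading outcomes measured later in the year. All data and the helper function here are hypothetical, invented purely for illustration; published tools charts report these same statistics for real screeners.

```python
# Hypothetical sketch of classification accuracy: how often a screener's
# "at risk" flags agree with students' actual reading outcomes measured
# later in the year. All data below are invented for illustration.

def classification_stats(flagged_at_risk, had_difficulty):
    """Return (sensitivity, specificity, overall accuracy).

    flagged_at_risk: True where the fall screener flagged the student
    had_difficulty:  True where the student actually struggled later
    """
    pairs = list(zip(flagged_at_risk, had_difficulty))
    tp = sum(1 for f, d in pairs if f and d)          # correctly flagged
    tn = sum(1 for f, d in pairs if not f and not d)  # correctly cleared
    fp = sum(1 for f, d in pairs if f and not d)      # flagged but fine
    fn = sum(1 for f, d in pairs if not f and d)      # missed struggler
    sensitivity = tp / (tp + fn)  # share of true strugglers caught
    specificity = tn / (tn + fp)  # share of non-strugglers cleared
    accuracy = (tp + tn) / len(pairs)
    return sensitivity, specificity, accuracy

flags    = [True, True, False, False, True, False, False, True]
outcomes = [True, True, False, False, False, False, True, True]
sens, spec, acc = classification_stats(flags, outcomes)
```

For universal screening, sensitivity (catching the students who truly need help) is generally weighted most heavily, since the cost of missing a struggling reader is higher than the cost of providing extra support to a student who did not need it.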

Screener tools that have been subjected to rigorous peer-review should be given greater attention than ones that have not.

Also, the cost of a screener should be weighed against the other criteria listed above to ensure the best value for your investment. Tools must be practical, brief, and simple enough to be implemented reliably on a wide scale under normal circumstances by trained personnel. School districts are encouraged to inventory and evaluate screening tools already in use, and to supplement as necessary, to minimize additional investment. A number of commercially available screeners are free or very low cost per student.


Commercial assessments have undergone psychometric analyses to determine reliability and validity. A “teacher-made” assessment cannot be described as reliable or valid if it has not been analyzed by a psychometrician.


Types of Scores

A norm-referenced score compares an individual’s performance with the performance of others within a relevant norm group (e.g., other first grade students or students of the same age). Norm-referenced scores are generally reported as percentile ranks and standard scores.

Screening tools that are norm-referenced based on a diverse, national sample allow teachers to compare scores to other norm-referenced formative and summative assessments, and to track individual students’ performance from year to year in a useful way. Norm-referencing should always be preferred if an assessment is otherwise equal or superior to the available options.

School district staff will want to remember that when a district uses cut scores based on national, aggregated norms, those scores will not always align with the resulting percentages for the district. A cut score at the 20th percentile may identify more or fewer than 20 percent of a district’s students, depending on the skill level of the class, grade, or school. In this situation, districts might consider choosing a cut score that reflects the performance of students enrolled in the district. This should be done only if there is sufficient data to warrant the change and the school has ready access to trained statisticians who are familiar with test development and cut score selection.
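The mismatch between national and local cut scores can be shown with a small sketch. All scores below, and the national cut value, are hypothetical numbers chosen for illustration:

```python
# Hypothetical sketch: a national cut score at the 20th percentile can flag
# more or fewer than 20% of one district's students. All scores invented.

NATIONAL_CUT = 30  # assume national norms put the 20th percentile at 30

district_scores = [12, 18, 22, 25, 27, 29, 31, 34, 36, 38,
                   40, 41, 43, 45, 47, 48, 50, 52, 55, 60]

# Share of this (lower-performing) district flagged by the national cut.
flagged = [s for s in district_scores if s < NATIONAL_CUT]
national_rate = len(flagged) / len(district_scores)  # 6 of 20 students: 30%

# A local cut at the district's own 20th percentile flags about 20% instead.
ranked = sorted(district_scores)
local_cut = ranked[int(0.20 * len(ranked))]  # 4 of 20 students score below this
local_flagged = [s for s in district_scores if s < local_cut]
```

Here the national cut flags 30% of the district, while a locally normed cut flags 20% by construction; which behavior is preferable depends on the district’s capacity to serve the flagged students.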

A criterion-referenced score is interpreted in terms of a set performance standard. Unlike a norm-referenced score, which targets a percentage of the population (e.g., the bottom 20 percent), a criterion-referenced score targets those students who are at or below a particular proficiency skill level based on a broader outcome measure. The criterion-referenced score reflects how well a student knows the expected skills or content in a particular curriculum. Some examples of screening assessments that use or offer criterion-referenced measures include DIBELS Next, DIBELS, and AIMSweb.

Schools and districts must also consider the value of having a consistent cut score across the district so that comparisons can be made among schools. District-wide cut scores are also advantageous because they allow district administrators to identify educational trends and compare the effects of intervention implementation against non-intervention schools.

Given that the goal of RTI is to prevent poor outcomes for students, most screening assessments use liberal cut scores that intentionally over-identify students who may be at risk.

In a typical RtI framework, it is assumed that approximately 80% of students’ needs can be met with Tier 1 instruction and that approximately 15% will need Tier 2 intervention, leaving about 5% of students needing intensive Tier 3 intervention.


If more than 20% of your school’s (or district’s) students consistently fail to meet cut scores on universal screening, you should examine Tier 1 instructional quality and consider investing in an improved instructional curriculum and further professional development. It is strongly recommended that a Structured Literacy™ approach be used as one strand of the existing English Language Arts curriculum in Tier 1 to improve prevention of reading failure.
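The “more than 20%” check above can be sketched as a simple rate comparison. The function name, threshold default, and example counts below are illustrative assumptions, not from any published guideline:

```python
# Hypothetical sketch of the "more than 20%" rule: when the school-wide
# screening failure rate exceeds the threshold, the concern shifts from
# individual students to the quality of Tier 1 core instruction.

def tier1_concern(num_below_cut: int, total_students: int,
                  threshold: float = 0.20) -> bool:
    """Return True if the school-wide failure rate exceeds the threshold."""
    return num_below_cut / total_students > threshold

# Example: 130 of 500 students (26%) fall below the screening cut score,
# which exceeds 20% and points to a Tier 1 instructional problem.
flag = tier1_concern(130, 500)
```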

Key Resources - Screening Tools

1) The research teams at the GaabLab at Boston Children’s Hospital and the Gabrieli Lab at MIT (thank you to Ola Ozernov-Palchik, Michelle Gonzalez, Lindsay Hillyer, Jeff Dieffenbach, John Gabrieli & Nadine Gaab) have provided a helpful summary of assessments/screeners for dyslexia risk and early literacy milestones that includes, among other features, information on grade level, skills assessed, and administration time. The summary is updated by the research teams as new assessments/screeners are identified.

Use Caution

Prior to using the GaabLab/Gabrieli Lab resource, it is important to read the authors’ disclaimer. Also be aware that not all of the assessments/screeners listed meet the criterion of having been peer-reviewed and validated. If your assessment/screener is listed as not having been peer-reviewed or validated, you should consider choosing a different screener.

2) The Center on Response to Intervention at American Institutes for Research (www.rti4success.org) conducts annual reviews of research studies on selected screening tools. It provides a free Screening Tools Chart that includes helpful rating information. The Screening Tools Chart also compares time to administer, cost, training & support and whether there are benchmarks/norms available for screening tools. The Center also provides some online self-paced training modules on Screening and related topics.

3) SEDL, a non-profit affiliate of the American Institutes for Research, has an online Reading Assessment Database that provides detailed overviews of various assessments with the ability to apply advanced search criteria (https://www.sedl.org), as does the National Center on Intensive Intervention (https://intensiveintervention.org/tools-charts/identifying-assessments).

4) The New Jersey Department of Education has developed a helpful checklist in its NJ Dyslexia Handbook entitled “Selecting A Universal Screener”.

5) Florida Center for Reading Research created a summary of standardized assessments that can be used for screening entitled “Pre-Kindergarten and Kindergarten Emergent Literacy Skills Assessments”.

Excerpts on Universal Screening from Other State Departments of Education Dyslexia Guidelines


Gaab, N. (2017, February). It’s a Myth That Young Children Cannot Be Screened for Dyslexia! Baltimore, MD: International Dyslexia Association.

International Dyslexia Association (2017). Universal Screening: K–2 Reading. Baltimore, MD: IDA.

Lonigan, C.J. (2006). Development, Assessment, and Promotion of Preliteracy Skills. Early Education and Development, 17(1), 91-114.

Lonigan, C.J., Schatschneider, C., & Westberg, L. (2008). Identification of Children’s Skills and Abilities Linked to Later Outcomes in Reading, Writing, and Spelling. Developing Early Literacy: Report of the National Early Literacy Panel, 55-106.

Mather, N., & Wendling, B.J. (2011). Essentials of Dyslexia Assessment and Intervention (Vol. 89). John Wiley & Sons.

McNamara, J.K., Scissons, M., & Gutknecht, N. (2011). A Longitudinal Study of Kindergarten Children at Risk for Reading Disabilities: The Poor Really Are Getting Poorer. Journal of Learning Disabilities, 44(5), 421-430.

National Center on Response to Intervention (2013, January). Screening Briefs Series— Brief #2: Cut Scores. Washington, DC: U.S. Department of Education, Office of Special Education Programs, National Center on Response to Intervention.

Wagner, R.K., Torgesen, J.K., Rashotte, C.A., & Pearson, N. (2013). Comprehensive Test of Phonological Processing Examiner’s Manual (2nd ed.), 62-63.

Whitehurst, G.J., & Lonigan, C.J. (1998). Child Development and Emergent Literacy. Child Development, 69(3), 848-872.

Wilson, S.B., & Lonigan, C.J. (2010). Identifying Preschool Children At-Risk of Later Reading Difficulties: Evaluation of Two Emergent Literacy Screening Tools. Journal of Learning Disabilities, 43(1). http://doi.org/10.1177/0022219409345007