Universal screening is the process of checking every student’s performance (taking their “temperature”) at regular intervals during the school year. It helps identify students who are responding well to core instruction (on track), as well as those who may require supplemental (some risk) or intensive (high risk) intervention and instructional support—an integral component of an effective MTSS program.
FastBridge Learning recommends three screening periods for all students—fall, winter, and spring—because student performance can change drastically across the school year. A student may need support in the fall, but no longer require the added resources in the winter. Likewise, a student may score well in the fall, but struggle later in the year. FAST™ uses three screening periods to help teachers make informed decisions about intervention throughout the entire year at the school, class, and individual student level.
FAST provides evidence-based CBM and CAT tools for reading, mathematics, and behavioral screening that are brief and highly predictive of future outcomes—thereby maximizing instructional time and resources. Our unique multi-source, multi-method approach is designed to more accurately identify instructional groupings and reduce “false positives” and “false negatives” regarding students’ levels of proficiency, risk, and future outcomes. FAST screening measures include:
When used consistently with fidelity as part of an MTSS model, FAST provides teachers with exceptional and timely data to identify students at risk for academic and behavioral difficulty, as well as supports to implement the appropriate research-based intervention and instruction at the right time and build capacity for data-based decision-making.
Two of the most useful ways to look at results of a FAST screening assessment are:
Benchmarks: These are the standards by which student scores are judged. They are used to determine whether students are on track to be successful or are at risk. In FAST, at-risk students are flagged with “!” (some risk) or “!!” (high risk).
FAST benchmarks are not based on the scores of students in your school or district. Rather, they compare a student’s level of achievement to a criterion or benchmark that is aligned with relevant outcomes (e.g., state-mandated achievement tests). Teachers can use these comparisons to identify how many students are falling behind. Within each assessment and grade, FAST provides fall, winter, and spring benchmarks.
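The benchmark comparison described above is straightforward to picture in code. The sketch below is a minimal illustration of the idea, not FastBridge’s actual logic, and the cutoff scores are hypothetical placeholders rather than real FAST benchmarks:

```python
def risk_flag(score, some_risk_cutoff, high_risk_cutoff):
    """Classify a student's score against benchmark cutoffs.

    Returns "!!" for high risk, "!" for some risk, or "on track".
    Cutoffs are hypothetical; real FAST benchmarks vary by
    assessment, grade, and screening period (fall/winter/spring).
    """
    if score < high_risk_cutoff:
        return "!!"      # high risk: intensive support indicated
    if score < some_risk_cutoff:
        return "!"       # some risk: supplemental support indicated
    return "on track"    # meeting or exceeding the benchmark

# Example with made-up cutoffs (some risk below 70, high risk below 40):
print(risk_flag(30, 70, 40))  # "!!"
print(risk_flag(55, 70, 40))  # "!"
print(risk_flag(85, 70, 40))  # "on track"
```

Because the cutoffs differ by season, the same student score can produce a different flag in fall than in winter, which is one reason FAST screens three times a year.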
Local Percentiles (aka Local Norms): These compare a student’s score to other student scores in your class, school, or district. In FAST, local percentiles are reported as percentile ranks and are color-coded.
Higher percentile ranks indicate better performance compared to lower percentile ranks. A percentile rank of 20 at the school level means that a student scored as well as or better than 20% of other students in the same grade at your school (and not as well as the other 80%). Local percentiles / percentile ranks provide complementary information to benchmark data.
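The percentile-rank arithmetic above can be sketched in a few lines. This is one common convention for computing a percentile rank (percent of peers scored at or below the student), offered as an illustration rather than FastBridge’s exact formula; the peer scores are made up:

```python
def local_percentile_rank(score, peer_scores):
    """Percent of peer scores at or below the given score.

    A result of 20 means the student scored as well as or better
    than 20% of peers. This is a common convention, not necessarily
    the exact computation FAST uses.
    """
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return round(100 * at_or_below / len(peer_scores))

# Hypothetical class of five peers:
peers = [10, 20, 30, 40, 50]
print(local_percentile_rank(20, peers))  # 40
print(local_percentile_rank(45, peers))  # 80
```

Note that the same raw score can yield very different percentile ranks at the class, school, and district level, because the peer group changes at each level.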
It might seem like these purposes are the same, but they are not.
Benchmark comparisons help teachers identify who is at, below, or above the expected level of performance for a particular grade.
Norm comparisons allow teachers to consider how their students match up to other students in the same class, school, or district. Normative comparisons complement benchmark comparisons.