By: Holly Windram, PhD
As we approach the middle of the school year, tier 2 and tier 3 intervention delivery and progress monitoring are in place, with at least one or two data review meetings having occurred. It’s not too soon to think about the plan for winter screening, which typically occurs during December or January. Practically, the first step is to confirm when winter benchmarking is happening in your school or district. An article in the FAST Knowledge Base titled “When will my next screening period become available?” quickly explains how to see the screening dates set by your District/School Manager (hint: it’s right at the beginning of the article). After confirming the dates, the steps for winter benchmarking with FAST are the same as those in the fall, including (a) selecting the assessments, (b) arranging a schedule, (c) making the assessments available for local users, (d) conducting screening, and (e) reviewing data (Brown, 2017). However, within these steps are some considerations for school teams that are unique to winter benchmarking.
Selecting the assessments
To complete this step, the first question must be for what purpose do we want to assess? That is, what are the instructional questions we want the assessment to answer? Within an MTSS framework, for winter benchmarking, the questions we want to answer are:
- Have all students made growth toward their end-of-year benchmark targets (i.e., is tier 1, core instruction, effective)?
With about 4-5 months of instruction having occurred, it makes sense to screen all learners to get a check on the overall effectiveness of core instruction. Benchmarking gives us an indicator of how all students are progressing in the instruction they receive every day. Winter benchmark data also tell us how well core instruction is working to ensure adequate growth for all learners. The anticipated scenario is that, having used fall benchmark data to identify how to best strengthen core instruction, winter benchmark data will show (a) maintained learning for students who met the fall benchmark (i.e., they meet the winter benchmark, suggesting growth is on track toward the end-of-year target); and (b) students who were below the fall benchmark now meeting the winter benchmark target (i.e., they have closed gaps in learning).
- Are there kids who met the fall benchmark target who do not meet the winter target?
This serves the same purpose as fall screening in that we need a mechanism to sort kids into “at-risk” and “not at-risk” groups at this point in the year. One difference is that at the fall benchmark we may have identified some learners who met the fall target, but just barely, and whom we wanted to “keep a close eye” on; so, we are intentional in focusing on those students. Further, we may have some learners who entered school after fall benchmarking, or who are unexpectedly showing other classroom indicators of slower educational growth. We expect winter benchmark data will validate and add instructional information to best meet their instructional needs.
- Are there students who did not meet the fall benchmark who we believe WILL meet the winter benchmark?
Progress monitoring data will give us an indicator of students who may not have been on track in the fall but are now. These are students who may be at the top of the list for transitioning out of intervention.
- Do progress monitoring data and benchmark data corroborate?
We expect that they will; but what if they don’t? It’s an interesting and important question for teams to examine for learners receiving intervention as part of a problem-solving process. We must understand what benchmark data and progress monitoring data can and cannot tell us about student learning, and how to apply those data to targeted and intensive instructional needs.
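The four questions above all boil down to comparing each student’s fall status with their winter status. As a minimal illustration (this is not a FAST feature; the student labels, scores, and cut points below are invented for the example), a team could sort a simple data file like this:

```python
# Hypothetical records: (student, fall_score, winter_score)
students = [
    ("Student A", 45, 60),  # met fall, met winter
    ("Student B", 30, 58),  # below fall, met winter
    ("Student C", 48, 40),  # met fall, below winter
    ("Student D", 25, 33),  # below fall, below winter
]

# Hypothetical benchmark cut scores for this measure and grade.
FALL_BENCHMARK = 40
WINTER_BENCHMARK = 55

# One group per question: on track (Q1), gap closed (Q3),
# newly at risk (Q2), and continued risk.
groups = {"on_track": [], "gap_closed": [], "new_risk": [], "continued_risk": []}

for name, fall, winter in students:
    met_fall = fall >= FALL_BENCHMARK
    met_winter = winter >= WINTER_BENCHMARK
    if met_fall and met_winter:
        groups["on_track"].append(name)
    elif not met_fall and met_winter:
        groups["gap_closed"].append(name)   # candidates to exit intervention
    elif met_fall and not met_winter:
        groups["new_risk"].append(name)     # met fall but missed winter
    else:
        groups["continued_risk"].append(name)

for label, names in groups.items():
    print(label, names)
```

Teams reviewing FAST reports do this sorting visually, of course; the sketch simply makes explicit the decision rule behind each question, and the last question (corroboration) is then a check of these groupings against progress monitoring trend lines.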
In selecting benchmark assessments, schools will use the same benchmark assessments used in the fall. Exceptions may include districts or schools that decide to add an additional content area. For example, a district or school that conducted only aReading and CBMreading in the fall might add the SAEBRS during the winter benchmark to screen social/behavior. Note that most FAST assessments are designed for benchmarking and screening purposes across PreK-12, and are described in the Assessment Quick Guide and Assessments by Level. Also note that when using earlyReading and earlyMath, the specific subtests included in the Composites change at each screening period, but still yield a Composite score.
Arranging a Schedule
This is the most labor-intensive part of planning seasonal benchmarking. There is no one “right” way to plan a seasonal benchmark schedule; however, beginning to plan 6-8 weeks in advance can help make the process as smooth as possible. Note that most schools/districts have an extended break from late December through early January, so include that in the winter benchmark planning timeline. Questions to consider when planning include:
- Should all students be screened in the winter?
- Are there high-achieving students for whom such data are not needed? See VanDerHeyden (2013) for more information.
- For computer-administered assessments,
- How and when will students access the technology?
- When are computer labs available OR when can students access devices to take the assessments? For example, will there need to be temporary schedule changes made to accommodate computer lab use by all classrooms in the building during benchmarking?
- How will those changes impact instruction typically happening outside of the benchmarking window?
- For 1:1 assessments (e.g., CBMreading),
- When and where will these take place?
- Will you utilize a “swat team” approach where several staff will assess whole classrooms or grade levels within a designated period of 1-2 hours?
- Or will students come to a single location – like a gym or library – that is set up with individual staff?
- Or will teachers be expected to conduct the screenings for their own students, so subs will be needed?
No matter who is conducting the screenings, plan make-up dates. Ensure there is a plan for students and staff who are absent, and for any invalid score or session that means a student will need additional time to complete the assessment. It is also very important to ensure everyone is trained. Are there new proctors or staff members who did not participate in fall benchmarking? Will a refresher training be required for staff who only assist with benchmarking (this is recommended)? Steps to assist with training include:
- Prior to the onset of winter benchmark data collection with students, schedule direct observation fidelity checks using the Observing & Rating Administrator Accuracy (ORAA) checklist available for every FAST assessment. Under the Training & Resources tab in the FAST system, once an assessment is selected, the ORAAs can be downloaded from Lesson 8 in each assessment training module.
Finally, include in the plan the dates/times/locations for when the winter benchmark data will be reviewed by school teams (e.g., grade level teams, PLCs, Administrators, etc.). Create a screening schedule that shows who, what, when, and where for each day of the benchmark period, and then get feedback. Letting teams and colleagues review the schedule in advance allows adjustments around other scheduled activities.
Making the assessments available for local users
For those using the FAST system, the District Manager or School Manager will set the dates during which the chosen benchmark assessments will be available for administration and scoring. It’s likely these dates were established at the beginning of the school year for all seasonal benchmarking windows, so the winter window will open automatically in the FAST system on the predetermined start date. That said, verify both the dates and that the assessment will be available during those dates in advance, so there’s not a computer lab full of squirrely 2nd graders, no winter benchmark assessment available, and a frustrated classroom teacher who has to create an impromptu plan B for that hour.
After developing a solid plan, the last two steps – conducting assessments and then reviewing the data – are what follows. While planning for any benchmarking is a steep time investment, the payoff is that resources and staff time are used most efficiently, and high-quality data are collected so high quality instructional decisions can be made for our kids. Yes, Murphy’s Law will always apply; but, with solid planning schools will be in an optimal position for a successful winter benchmarking process.
Brown, R. (2017, August 4). Getting started with FastBridge Learning™: Tips for new users [Blog post]. Retrieved from http://www.fastbridge.org/2017/08/getting-started/
Oxford Reference. (2017). Murphy’s Law. Retrieved from http://www.oxfordreference.com/view/10.1093/oi/authority.20110803100217459.
VanDerHeyden, A. M. (2013). Universal screening may not be for everyone: Using a threshold model as a smarter way to determine risk. School Psychology Review, 42(4), 402.