FastBridge Research Foundations

From research to results: The method behind our assessments

Before you ever see one of our tools, we conduct substantial research to design, evaluate, and validate how it works.

Our research base rests on a multi-step process used to review every measure in the system. These measures are designed to be used as part of a problem-solving process embedded within a multi-tiered system of supports (MTSS).

  1. Initial development
    All our content starts in a university research lab directed by leading educational researchers who:

    • Create ideas for new assessments and tools by reviewing research on what schools need.
    • Develop a detailed research plan that defines the materials, participants, procedures and metrics.
    • When applicable, conduct a small pilot study to check the plan and identify any changes needed before starting a major study.
  2. Controlled studies
    Once the research plan is verified, we conduct one or more controlled studies. In a “controlled” study, all materials and procedures are applied in a precisely defined way to test whether the specific method under study leads to the expected outcome. If the results are not as expected, we may design and conduct a new study that adjusts specific details. Once the results show that the new idea helps students and teachers, we draft an article and submit it to a peer-reviewed journal.
  3. Peer-reviewed articles
    Research articles submitted for peer review are sent to at least two expert researchers in the field, who read the article and give detailed feedback. Only articles that demonstrate high-quality research and significant outcomes are published. All our assessments have been published in one or more peer-reviewed articles, which are listed under References below.
  4. Lab status
    Once a new assessment has been deemed effective in peer review, it is submitted for possible inclusion in our online system. Not all innovations are included right away; some assessments are added in “Lab” status. Lab status means that the assessment has been researched and verified in controlled studies, but not yet with large numbers of students in everyday classroom settings. While a tool is in Lab status (usually one school year), it is available to customers for trial use. During that time, we collect data on how the tool works to decide when it is ready to be endorsed.
  5. Endorsed assessment
    After the data indicates that a new assessment works as intended, it is released from Lab status and becomes endorsed. Endorsed assessments have multiple sources of high-quality research that indicate they help teachers identify and solve student problems in schools.

References

Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38, 266-283.

Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1-18. doi:10.1016/j.jsp.2012.09.004

Ardoin, S. P., Eckert, T. L., Christ, T. J., White, M. J., Morena, L. S., January, S. A., & Hine, J. F. (2013). Examining variance in reading comprehension among developing readers: Words in context (curriculum-based measurement in reading) versus words out of context. School Psychology Review, 42, 243-261.

Ardoin, S. P., Williams, J. C., Christ, T. J., Klubnik, C., & Wellborn, C. (2010). Examining readability estimates’ predictions of students’ oral reading rate: Spache, Lexile, and FORCAST. School Psychology Review, 39, 277-285.

Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-133.

Christ, T. J., & Ardoin, S. P. (2009). Curriculum-based measurement of oral reading: Passage equivalence and probe-set development. Journal of School Psychology, 47, 55-75. doi:10.1016/j.jsp.2008.09.004

Christ, T. J., & Ardoin, S. P. (2015;2014;). Commentary on new metrics, measures, and uses for fluency data. Reading and Writing,28, 151-157. doi:10.1007/s11145-014-9513-4

Christ, T. J., & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36, 130-146.

Christ, T. J., Silberglitt, B., Yeo, S., & Cormier, D. (2010). Curriculum-based measurement of oral reading: An evaluation of growth rates and seasonal effects among students served in general and special education. School Psychology Review, 39, 447-462.

Christ, T. J., White, M. J., Ardoin, S. P., & Eckert, T. L. (2013). Curriculum-based measurement in reading: Consistency and validity across best, fastest, and question reading conditions. School Psychology Review, 42, 415-436.

Christ, T. J., Zopluoglu, C., Long, J. D., & Monaghen, B. D. (2012). Curriculum-based measurement of oral reading: Quality of progress monitoring outcomes. Exceptional Children, 78, 356-373.

Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51, 19-57. doi:10.1016/j.jsp.2012.11.001

Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33-, 204-217.

January, S.-A. A., Ardoin, S. P., Christ, T. J., Eckert, T. L., & White, M. J. (in press). Evaluating the interpretations and use of curriculum-based measurement in reading and word lists for universal screening in first and second grade. School Psychology Review.

Kendeou, P., McMaster, K. L., & Christ, T. J. (2016). Reading comprehension: Core components and processes. Policy Insights from the Behavioral and Brain Sciences. doi:10.1177/2372732215624707

Thornblad, S. C., & Christ, T. J. (2014). Curriculum-based measurement of reading: Is 6 weeks of daily progress monitoring enough? School Psychology Review, 43, 19-29.

Van Norman, E. R., Christ, T. J., & Zopluoglu, C. (2013). The effects of baseline estimation on the reliability, validity, and precision of CBM-R growth estimates. School Psychology Quarterly, 28, 239-255. doi:10.1037/spq0000023

Yeo, S., Fearrington, J. Y., & Christ, T. J. (2012). Relation between CBM-R and CBM-mR slopes: An application of latent growth modeling. Assessment for Effective Intervention, 37, 147-158. doi:10.1177/1534508411420129

Christ, T. J., Johnson‐Gros, K. N., & Hintze, J. M. (2005). An examination of alternate assessment durations when assessing multiple‐skill computational fluency: The generalizability and dependability of curriculum‐based outcomes within the context of educational decisions. Psychology in the Schools, 42, 615-622. doi:10.1002/pits.20107

Christ, T. J., Scullin, S., Tolbize, A., & Jiban, C. L. (2008). Implications of recent research: Curriculum-based measurement of math computation. Assessment for Effective Intervention, 33, 198-205. doi:10.1177/1534508407313480

Christ, T. J., & Schanding, G. T., Jr. (2007). Curriculum-based measures of computational skills: A comparison of group performance in novel, reward, and neutral conditions. School Psychology Review, 36, 147-158.

Christ, T. J., & Vining, O. (2006). Curriculum-based measurement procedures to develop multiple-skill mathematics computation probes: Evaluation of random and stratified stimulus-set arrangements. School Psychology Review, 35, 387-400.

Hintze, J. M., Christ, T. J., & Keller, L. A. (2002). The generalizability of CBM survey-level mathematics assessments: Just how many samples do we need? School Psychology Review, 31, 514-528.

Chafouleas, S. M., Briesch, A. M., Riley-Tillman, T. C., Christ, T. J., Black, A. C., & Kilgus, S. P. (2010). An investigation of the generalizability and dependability of direct behavior rating single item scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students. Journal of School Psychology, 48, 219-246. doi:10.1016/j.jsp.2010.02.001

Chafouleas, S. M., Christ, T. J., Riley-Tillman, T. C., Briesch, A. M., & Chanese, J. A. M. (2007). Generalizability and dependability of direct behavior ratings to assess social behavior of preschoolers. School Psychology Review, 36, 63-79.

Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., Welsh, M., & Christ, T. J. (2013). Direct behavior rating as a school-based behavior screener for elementary and middle grades. Journal of School Psychology, 51, 367-385. doi:10.1016/j.jsp.2013.04.002

Christ, T. J., Nelson, P. M., Van Norman, E. R., Chafouleas, S. M., & Riley-Tillman, T. C. (2014). Direct behavior rating: An evaluation of time-series interpretations as consequential validity. School Psychology Quarterly, 29, 157-170. doi:10.1037/spq0000029

Christ, T. J., Riley-Tillman, T. C., Chafouleas, S. M., & Boice, C. H. (2010). Direct behavior rating (DBR): Generalizability and dependability across raters and observations. Educational and Psychological Measurement, 70, 825-843. doi:10.1177/0013164410366695

Christ, T. J., Riley-Tillman, T. C., Chafouleas, S., & Jaffery, R. (2011). Direct behavior rating: An evaluation of alternate definitions to assess classroom behaviors. School Psychology Review, 40, 181-199.

Eklund, K., Kilgus, S., von der Embse, N. P., Beardmore, M., & Tanner, N. (in press). Use of universal screening scores to predict distal academic and behavioral outcomes: A multi-level approach. Psychological Assessment.

Fabiano, G. A., Vujnovic, R., Pelham, W. E., Waschbusch, D. A., Massetti, G. M., Yu, J., Pariseau, M. E., Naylor, J., Robins, M. L., Carnefix, T., Greiner, A. R., & Volker, M. (2010). Enhancing the effectiveness of special education programming for children with ADHD using a daily report card. School Psychology Review, 39, 219-239.

Fabiano, G. A., Vujnovic, R., Naylor, J., Pariseau, M., & Robins, M. L. (2009). An investigation of the technical adequacy of a daily behavior report card (DBRC) for monitoring progress of students with attention-deficit/hyperactivity disorder in special education placements. Assessment for Effective Intervention, 34, 231-241.

Kilgus, S. P., Chafouleas, S. M., & Riley-Tillman, T. C. (2013). Development and initial validation of the Social and Academic Behavior Risk Screener for elementary grades. School Psychology Quarterly, 28, 210-226. doi:10.1037/spq0000024

Kilgus, S. P., & Eklund, K. (2016). Consideration of base rates within universal screening for behavioral and emotional risk: A novel procedural framework. School Psychology Forum, 10, 120-130.

Kilgus, S. P., Eklund, K. R., von der Embse, N. P., Taylor, C. N., & Sims, W. A. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) Teacher Rating Scale and multiple gating procedure within elementary and middle school samples. Journal of School Psychology, 58, 21-39. doi:10.1016/j.jsp.2016.07.001

Kilgus, S. P., Kazmerski, J. S., Taylor, C. N., & von der Embse, N. P. (2016). Intervention Selection Profile—Function (ISP-Fx): A brief and direct method for functional behavioral assessment. School Psychology Quarterly. doi:10.1037/spq0000156

Kilgus, S. P., Riley-Tillman, T. C., Chafouleas, S. M., Christ, T. J., & Welsh, M. E. (2014). Direct behavior rating as a school-based behavior universal screener: Replication across sites. Journal of School Psychology, 52, 63-82. doi:10.1016/j.jsp.2013.11.002

Kilgus, S. P., Sims, W., von der Embse, N. P., & Riley-Tillman, T. C. (2015). Confirmation of models for interpretation and use of the Social and Academic Behavior Risk Screener. School Psychology Quarterly, 30, 335-352. doi:10.1037/spq0000087

Kilgus, S. P., Sims, W., von der Embse, N. P., & Taylor, C. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) Teacher Rating Scale. Assessment for Effective Intervention. doi:10.1177/1534508415623269

Kilgus, S. P., Bowman, N. A., Christ, T. J., & Taylor, C. N. (2017). Predicting academics via behavior within an elementary sample: An evaluation of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS). Psychology in the Schools, 54, 246-260.

Nelson, P.M. & Christ, T.J. (2016). Reliability and agreement in student ratings of the class environment. School Psychology Quarterly. Advanced online publication.

Nelson, P. M., Demers, J., & Christ, T. J. (2014). The Responsive Environmental Assessment for Classroom Teaching (REACT): Exploring the dimensionality of student perceptions of the instructional environment. School Psychology Quarterly, 29, 182-197.

Nelson, P.M. & Hall, G.E., & Christ, T.J. (2016). The consistency of student perceptions of the class environment. Journal of Applied School Psychology, 32, 254-267.

Nelson, P. M., Reddy, L., Dudek, C., & Lekwa, A. (in press). Student and observer ratings of the class environment: A preliminary investigation of convergence. School Psychology Quarterly.

Nelson, P. M., Ysseldyke, J. E., & Christ, T. J. (2015). Student perceptions of the classroom environment: Actionable feedback as a guide for improving core instruction. Assessment for Effective Intervention, 1-12.

Riley-Tillman, T. C., Christ, T. J., Chafouleas, S. M., Boice-Mallach, C. H., & Briesch, A. (2011). The impact of observation duration on the accuracy of data obtained from direct behavior rating (DBR). Journal of Positive Behavior Interventions, 13, 119-128. doi:10.1177/1098300710361954

Skaar, N. R., Christ, T. J., & Jacobucci, R. (2014). Measuring adolescent prosocial and health risk behavior in schools: Initial development of a screening measure. School Mental Health, 6, 137-149. doi:10.1007/s12310-014-9123-y

von der Embse, N. P., Iaccarino, S., Mankin, A., Kilgus, S., & Magen, E. (in press). Development and factor structure of the Social, Academic, and Emotional Behavior Risk Screener Student Rating Scale (SAEBRS-SRS). Assessment for Effective Intervention.

von der Embse, N. P., Pendergast, L., Kilgus, S. P., & Eklund, K. (2016). Evaluating the applied use of a mental health screener: Structural validity of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS). Psychological Assessment. doi:10.1037/pas0000253

von der Embse, N. P., Kilgus, S. P., Iaccarino, S., & Levi-Neilson, S. (invited revision and resubmission). Screening for student mental health risk: Diagnostic accuracy and predictive validity of the Social, Academic, and Emotional Behavior Risk Screener-Student Rating Scale (SAEBRS-SRS). School Mental Health.

Vujnovic, R. K., Fabiano, G. A., Pariseau, M. E., & Naylor, J. (2013). Parameters of adherence to a yearlong daily report card (DRC) intervention for students with Attention-Deficit Hyperactivity Disorder (ADHD). Journal of Educational and Psychological Consultation, 23, 140-163.

Request a copy of the FastBridge Technical Manual or the FastBridge Benchmarks and Norms Guide.

Calling all investigators!

Are you a doctoral student, faculty member or state department of education employee? We welcome the use of our measures in ongoing research. Please contact Senior Academic Officer Dr. Rachel Brown at rachel@fastbridge.org to learn if your study might be eligible for low- or no-cost access to our assessments.
