
Truth Telling in Education

By: Dr. Peter M. Nelson—Penn State University College of Education

Truth-telling is a topic that is frequently visited in research. Without too much effort, one can find a handful of researchers who fall short of the bar for honesty each year. Likewise, there are plenty of far-fetched claims for “research-based” products, diets, or fads.

Given the rate of falsified data and embellished product descriptions, it isn’t particularly surprising that these issues are also relevant in the context of education. In fact, truth-telling—both on the part of researchers and those who purport to disseminate research—is more important than ever in education.

As educators, we want to see students do well. We want to use assessments, curricula, and instructional strategies that help us create an ideal learning environment for students. However, our desire to deliver the best kind of service might also make us susceptible to dubious products or strategies.

The term “research-based” is the ultimate buzz word in education—maybe too much of a buzz word. For example, after a quick perusal of the product pavilion at most conferences I usually can’t help but think of Inigo Montoya’s famous line in The Princess Bride: “You keep using that word. I do not think it means what you think it means.”

Although there is no perfect way to guard against false claims in education, there are a few general guidelines that might help. In a 2012 article about pseudoscience in school psychology, Scott Lilienfeld and his colleagues describe some ways in which educators might protect themselves from adopting faulty tools or practices.

In particular, they warn against products with:

  • An over-reliance on testimonial and anecdotal evidence. Single-case testimonials are a powerful tool to describe the impact of a program or instructional strategy; however, a testimonial doesn’t provide much evidence. Remember that product developers are tasked with providing an argument for the use of a particular product or strategy. An argument based primarily on testimonials is a red flag. In those cases, it might be helpful to consider whether there are any peer-reviewed research articles in support of the product or strategy.
  • Extraordinary claims. Despite a lot of great work to evaluate specific assessments, interventions, and teaching strategies, there is no “perfect” test or fool-proof mechanism for instruction. If you come across claims expressed in absolute terms (e.g., always, never) or if there don’t seem to be boundary conditions (i.e., conditions under which a test or strategy might not be appropriate), that might be a cue to look a little closer.

Finally, it’s useful to recognize that we all might harbor some biases of our own. In their review, Lilienfeld et al. highlight over 20 “cognitive errors” that we might make as data-based decision-makers. In short, those errors are primarily related to the personal biases we have and the conditions that might engender those biases.

For example, if we already hold some beliefs about the quality of a particular test or instructional method, we might only pick up on information that supports those beliefs. That particular bias (confirmation bias) is likely exacerbated if we surround ourselves with people who think just like us.

So, in some ways, truth-telling in education is very much related to self-criticism. In fact, that is the nature of scientific thinking—rather than creating the perfect environment for a statistically significant effect, we generally seek to disprove our ideas via rigorous evaluation. It is not necessarily a bad thing to hold general beliefs that impact your view on educational research—in some ways, those beliefs are inevitable. However, it’s critical that we remind ourselves of those biases and seek out (rather than ignore) contradictory information. In practice, this might include a thorough and objective review of existing practices and decision-making teams that include members with a wide range of experiences.

Lilienfeld, S. O., Ammirati, R., & David, M. (2012). Distinguishing science from pseudoscience in school psychology: Science and scientific thinking as safeguards against human error. Journal of School Psychology, 50(1), 7-36.

Join Dr. Peter Nelson and Dr. Theodore J. Christ in our upcoming Ask the Experts webinar on Wednesday, December 2nd at 3 p.m. CT when they’ll be providing guidance and insights on effectively communicating data about achievement and growth to parents. Learn more and register today!

Dr. Nelson is an Assistant Professor of School Psychology at Penn State University. He completed his doctoral training in school psychology at the University of Minnesota after obtaining his M.A. in education from the University of Mississippi. A former high school teacher, his primary research interests focus on data-based decision-making, prevention, and intervention in the classroom setting. He has published and presented on issues related to effective math intervention, classroom environment assessment, teacher development, screening, and progress monitoring.
