Ed researchers fail reliability standard

(District of Columbia) Students of science learn that reliable experiments are those that produce the same results when replicated by others. It appears, however, that reliability may not be a priority in education research, according to a study published last week by the American Educational Research Association.

A review of the complete publication history of the current top 100 educational journals found that less than one half of one percent of the research studies published were replication studies that attempted to verify previously published research, according to the study’s authors, Matthew C. Makel of Duke University and Jonathan A. Plucker of the University of Connecticut.

“Replication is a cornerstone of science and based on our results, that cornerstone is missing from modern education research,” Makel said in an online video about the study, which appeared last week in AERA’s journal Educational Researcher.

The findings may raise concerns among educators and policymakers, given that the federal Elementary and Secondary Education Act requires schools to use programs, teaching methods and professional development strategies that are based on scientific research.

“In the education world I think we constantly struggle to get policymakers to pay attention to our research findings and use those findings to create evidence-based policy,” Makel said. “And because that’s such a struggle in the first place, once we do get policymakers’ attention we really want to be confident that what we’re telling them is actually accurate.”

Makel noted that the many elements at play in an educational setting can impact whether a reform shown to work well in North Carolina, for example, will also work in California. Student demographics, teaching methods, funding levels and state and local policies are just some of the variables that can impact the success of an educational intervention or reform.

“Replications will help uncover the precision with which we know the size of the effects, not to mention the extent to which they generalize across contexts,” the authors wrote. “As a field we need to weed out false and narrow findings and buttress findings that generalize across contexts.”

Similar concerns exist in other fields about the low numbers of studies verified through replication, the authors say. Psychology, advertising, economics, medicine, public health, genomics and computer science have all been criticized for a lack of replication research.

However, Makel and Plucker found that only 0.13 percent of education studies were replications of previous investigations, compared to 1.07 percent in psychology, a field for which Makel conducted a similar study in 2012.

The authors also found that of those replication studies in education, about half were conducted by the same research team that conducted the original study. Not surprisingly, the majority of these investigators – 67 percent – successfully replicated their previous work.

If that work was published in the same journal as the original study, the rate of successful replication was nearly 89 percent, raising concerns that both researchers and journals that publish their research may have a bias toward confirming their own previously published work.

In Makel and Plucker’s analysis, just half of the replication studies conducted by a completely new research team successfully replicated the original results, suggesting roughly even odds that a published finding will hold up under independent testing.

A primary reason education researchers have little interest in testing the results of previous research, Makel and Plucker say, is that replication studies in education tend to lack prestige.

Many journals have either explicit policies against publishing replication studies or known biases against publishing what they may consider “old news,” the authors say. Researchers, whose careers often hinge on publication, are unlikely to choose investigations that are apt to be rejected in the end.

Publisher bias also limits the availability of funding for replication studies, since funders generally want publicity for the results of their investments.

A predilection for “novel” research that goes untested, however, can be detrimental to the field of education, say Makel and Plucker. “Although potentially beneficial for the individual researchers, an overreliance on large effects from single studies drastically weakens the field as well as the likelihood of effective evidence-based policy.”

The authors cite the well-known 1998 study in the medical field that purported to show a causal link between childhood vaccinations and the development of autism, setting off an enormous public reaction and an anti-vaccination movement that continues today.

Although studies refuting the findings, originally published in The Lancet medical journal, emerged right away, a replication study disproving the findings wasn’t published until a decade later.

The failed replication, along with the discovery of academic misconduct, led to a retraction of the original study 12 years after publication. Makel and Plucker question whether public health may have been better served if a replication study had been required prior to publication.

The authors offer several suggestions for increasing the publication of replication studies in education journals:

  • Establish best practices for conducting replication research.
  • Develop a new article type, the registered replication report, which has already been launched in the field of psychology. Using a common, registered research protocol, independent labs would work to replicate research findings with results reported in aggregate.
  • Implement undergraduate and graduate training in the conduct of replications.
  • Reserve space in education journals for the publication of replication study results.
  • Revise editorial policies to encourage the submission of replication studies.

“If we want to know if an education intervention works or if what a teacher is doing in the classroom is effective or isn’t effective,” Makel said, “then replication is a key ingredient to the research. We don’t want to rely on just one individual study.”

For more information, read Makel and Plucker’s study: “Facts Are More Important Than Novelty: Replication in the Education Sciences.”
