Development of the Evaluative Method for Evaluating and Determining Evidence-Based Practices in Autism
by Brian Reichow, Fred R. Volkmar, Domenic V. Cicchetti
Reichow, B., Volkmar, F. R., & Cicchetti, D. V. (2008). Development of the Evaluative Method for Evaluating and Determining Evidence-Based Practices in Autism. Journal of Autism and Developmental Disorders, 38, 1311-1319.
Objective: The authors recognized a growing gap between research knowledge and the applicability of that research in real-world settings. In addition, a National Research Council committee (2001) determined that no single practice for young children with autism (ages 2-8) met the conventional, medical-model definitions of evidence-based practice. The authors discussed the shortcomings of existing evidence-based practice (EBP) criteria and the increasing demand for effective treatments for young children with autism, and highlighted the need for a new methodology for this type of evaluation.
Method: The authors set out to develop a multifaceted, stratified approach to determining whether a practice is truly evidence-based: the Evaluative Method for Determining EBP in Autism. The tool comprises three instruments: (1) two rubrics (one for group studies and one for single-subject research) to assess report quality (rigor), (2) guidelines to evaluate the strength of the research report, and (3) criteria for the determination of EBP. Not only does this tool provide a standardized process for evaluating research, it also accounts for the importance and proliferation of single-subject studies in this field by assigning them value within the rubric.
The first instrument, the rubric, rates primary and secondary indicators of quality within a particular research report; definitions of these indicators are included, and separate rubrics are provided for group research and single-subject studies. Next, the guidelines for evaluating research report strength (the second instrument) operationalize levels of report strength. Finally, the third instrument, the criteria for EBP, aggregates the strength ratings across numerous studies to determine the amount of empirical support for a specific practice.
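The aggregation step can be pictured as a simple tally: each research report receives a strength rating, and the counts of ratings at each level determine the practice's overall EBP status. The sketch below illustrates this logic only; the strength labels and numeric cutoffs here are hypothetical placeholders, not the published criteria from Reichow et al. (2008).

```python
from collections import Counter

def classify_ebp(ratings):
    """Aggregate per-study strength ratings into an overall EBP status.

    `ratings` is a list of strength labels ("strong", "adequate", or
    "weak"), one per research report on the practice. The thresholds
    below are illustrative assumptions, not the authors' actual cutoffs.
    """
    counts = Counter(ratings)
    # Hypothetical cutoffs: enough strong (or adequate) reports -> established.
    if counts["strong"] >= 2 or counts["adequate"] >= 4:
        return "established EBP"
    if counts["strong"] >= 1 or counts["adequate"] >= 2:
        return "probable EBP"
    return "not yet established"

# Example: three reports rated for one practice
print(classify_ebp(["strong", "adequate", "strong"]))  # -> established EBP
```

Because both group and single-subject reports reduce to the same strength scale before this tally, the two research traditions can contribute to a single determination, which is the integration the authors emphasize.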
Results: The reliability and validity of the evaluative method were measured. Rubric reliability was assessed through field trials, with inter-rater agreement in the good-to-excellent range; agreement in subsequent trials was even higher. Validity was supported in three areas: concurrent validity, content validity, and face validity, each of which was operationally defined.
Conclusion: Overall, this new method of determining EBP for young children with autism was found to be a reliable and valid way to review the relevant research. High agreement across applications and raters supports the evaluative method as a reliable tool, and the validity evidence fell within the good-to-excellent range. Noted limitations included the need to assess validity with a more diverse population of evaluators and the need to continually re-evaluate the instrument's reliability and validity as it is applied to additional research studies. According to the authors, potential assets of this novel process include: it is a practitioner-friendly method for bridging the research-to-practice gap, it can be generalized for use by professionals who work with other populations, and it successfully integrates results from both group and single-subject research designs.