#PEDroTacklesBarriers to evidence-based physiotherapy: statistical skills

The ‘#PEDroTacklesBarriers to evidence-based physiotherapy’ campaign will help you to tackle the four biggest barriers to evidence-based physiotherapy – lack of time, language, lack of access, and lack of statistical skills.

If you are new to the campaign, we suggest that you start at the beginning by looking at earlier posts on strategies to tackle the barriers of lack of time and language. These are available on the campaign webpage, blog, Twitter (@PEDro_database) or Facebook (@PhysiotherapyEvidenceDatabase.PEDro).

Over the next few months, we will discuss strategies to tackle the barrier of statistical skills in evidence-based physiotherapy. A lack of statistical skills is a common barrier to interpreting evidence and implementing evidence-based physiotherapy.

This month, three clinician-researchers, including the Scientific Editor of the Journal of Physiotherapy, tackle the barrier of lack of statistical skills by discussing the methods used to conduct, analyse, report, and interpret randomised controlled trials.

Aidan Cashin
Exercise Physiologist and researcher, University of New South Wales, Australia
Area of practice: Comparative effectiveness of interventions for people with chronic pain
Kate Scrivener
Physiotherapist, educator and researcher, Macquarie University, Australia
Area of practice: Post-stroke physiotherapy intervention and research
Mark Elkins
Scientific Editor of Journal of Physiotherapy
Area of practice: Physical and pharmacological therapies in respiratory disease, and improving the understanding and application of published research by clinicians

Interpreting comparative effects in trials
High-quality randomised controlled trials are a great source of evidence to support clinical decisions about which treatment may be best for the patients you work with. When interpreting the findings from trials, it is important to consider both how the outcomes are reported and what the treatment is being compared to.

Trial outcomes are often measured and reported as the ‘within-group’ change in outcomes or as the ‘between-group’ difference in outcomes. The distinction between within-group and between-group comparisons is critical when interpreting the results of trials. The between-group difference represents the treatment effect because, unlike the within-group change, it excludes natural history, regression to the mean, and the nonspecific effects of receiving care.
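The distinction can be seen with a small worked example. The numbers below are invented for illustration (a hypothetical two-arm trial measuring pain on a 0–10 scale); they are not taken from any real trial:

```python
# Hypothetical two-arm trial: mean pain scores (0-10) before and after
# treatment. All numbers are invented for illustration only.

treatment_pre, treatment_post = 7.0, 3.0   # treatment group means
control_pre, control_post = 7.0, 5.0       # control group means

# Within-group change: mixes the treatment effect with natural history,
# regression to the mean, and nonspecific effects of receiving care.
within_treatment = treatment_post - treatment_pre   # -4.0
within_control = control_post - control_pre         # -2.0

# Between-group difference: the treatment effect, because the control
# group experiences the same natural history and nonspecific effects.
between_group = within_treatment - within_control   # -2.0

print(f"Within-group change (treatment): {within_treatment}")
print(f"Within-group change (control):   {within_control}")
print(f"Between-group difference:        {between_group}")
```

Note that reporting only the within-group change of −4 points would overstate the treatment effect; half of that improvement also occurred in the control group.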

The treatment effect in trials is always comparative, meaning that the treatment benefit (or harm) is interpreted relative to the other treatment(s) in the trial. This is an important issue because the choice of comparison group will have a big influence on the interpretation of the size of the effect and whether the comparison was a fair test of the treatment.

Choosing the ideal comparison group is not straightforward and is heavily influenced by the research question (spanning the spectrum from efficacy to effectiveness research). For example, guideline-based care may be a suitable comparator if researchers are interested in investigating whether the treatment is better than current practice.

The choice of comparison group is also important when trials are synthesised in systematic reviews. Meta-analyses within systematic reviews should combine trials with similar treatments and with similar comparison groups.
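As a minimal sketch of what "combining trials" means numerically, the snippet below pools between-group differences from three hypothetical trials using fixed-effect inverse-variance weighting, one common meta-analytic approach. All effect sizes and standard errors are invented for illustration:

```python
# Hypothetical fixed-effect (inverse-variance) pooling of between-group
# differences from three trials with similar treatments and comparators.
# All numbers are invented for illustration only.

effects = [-2.0, -1.5, -2.5]   # between-group differences from each trial
ses = [0.5, 0.8, 0.6]          # standard errors of those differences

# Each trial is weighted by the inverse of its variance, so more
# precise trials (smaller SE) contribute more to the pooled estimate.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5

print(f"Pooled between-group difference: {pooled:.2f} (SE {pooled_se:.2f})")
```

The pooled estimate only makes sense if the trials' comparison groups are similar; pooling a trial compared against placebo with one compared against guideline-based care would mix two different treatment effects.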

Sign up to the PEDro Newsletter to receive the latest news