BIAS
A prejudice, preference or partiality that prevents objective consideration of an issue or situation. In statistics, bias is a tendency of an estimate to deviate in one direction from the true value, which can lead to the effects of an intervention being underestimated or overestimated. Usually there is more interest in showing that an intervention works than in showing that it does not, and this can lead to effects being overestimated, which is a form of bias. This may happen intentionally, but more often it is unintentional and even unrecognised by the researchers. Good systematic reviews and meta-analyses assess the potential for bias in the studies they include, which helps to provide a more accurate picture of the effects of an intervention.
CONTROL GROUP
In a controlled trial, the group that acts as a comparator for one or more experimental interventions. In a case-control study, the group without the outcome of interest.
EFFECT SIZE
A quantitative measure of the difference between two groups. In systematic reviews and meta-analyses of interventions, effect sizes are calculated based on the ‘standardised mean difference’ between two groups in a trial – very roughly, this is the difference between the average score of participants in the intervention group and the average score of participants in the control group, divided by the spread of scores. Effect sizes are usually reported using the label ‘d=’ and expressed as a decimal, such as d=0.2 or d=0.5.
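As an illustrative sketch (not part of the glossary itself), one common standardised mean difference, Cohen’s d, can be computed from two groups’ scores like this:

```python
import statistics

def cohens_d(intervention, control):
    """Cohen's d: the difference in group means divided by the
    pooled standard deviation of the two groups."""
    n1, n2 = len(intervention), len(control)
    m1, m2 = statistics.mean(intervention), statistics.mean(control)
    v1, v2 = statistics.variance(intervention), statistics.variance(control)
    # Pooled standard deviation, weighted by each group's degrees of freedom
    pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
    return (m1 - m2) / pooled_sd

# Example: intervention group scores one point higher on average
# cohens_d([5, 6, 7], [4, 5, 6]) gives d = 1.0
```

Note that other standardised mean differences (e.g. Hedges’ g) apply small-sample corrections; this is the simplest form.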
EVIDENCE-BASED PROGRAMME OR INTERVENTION
An evidence-based programme or intervention is one that has been proven effective in rigorous outcome evaluations, particularly randomised controlled trials (RCTs), and ideally in more than one high-quality RCT.
FOREST PLOT
A graphical representation of a meta-analysis. Typically, each included study is shown as a point estimate with a horizontal line indicating its confidence interval, and the pooled estimate from all studies is shown at the bottom, often as a diamond.
LITERATURE REVIEW
A non-systematic review of published or unpublished literature about a particular topic. This differs from a systematic review, which is a literature review that has a more focused research question and aims to identify all relevant studies on a particular topic.
For instance, a literature review of programmes to improve children’s reading might provide general information about studies of children’s reading programmes, including programmes which have been evaluated using simple measures, as well as those evaluated in randomised controlled trials. A systematic review would ask the question ‘How effective are reading programmes for children aged 6–10?’ The systematic review would seek to identify published and unpublished trials which tested reading programmes for children aged 6–10, using specific criteria, and would assess the results in a more precise way than a literature review.
META-ANALYSIS
A method for statistically combining the results of similar studies that are included in a systematic review, to come to a conclusion about the overall effects of an intervention.
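To make the idea of “statistically combining” concrete, here is a minimal sketch of one common approach, fixed-effect inverse-variance pooling, in which each study’s effect size is weighted by the precision of its estimate (the function name and interface are illustrative, not from any particular library):

```python
def fixed_effect_pooled(effects, variances):
    """Inverse-variance weighted average of study effect sizes.

    Studies with smaller variance (more precise estimates) receive
    more weight. Returns the pooled effect and its standard error.
    """
    weights = [1.0 / v for v in variances]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    standard_error = (1.0 / sum(weights)) ** 0.5
    return pooled, standard_error

# Example: two equally precise studies with d = 0.2 and d = 0.4
# pool to d = 0.3
```

Real meta-analyses often use random-effects models instead, which allow for variation in the true effect between studies; the weighting idea is the same.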
RANDOMISATION
The process of randomly allocating participants into one of the groups in a controlled trial (usually the groups are an ‘intervention’ group and a ‘control’ group). There are two components to randomisation: the generation of a random sequence, and its implementation, ideally such that the person entering participants into the study is not aware of the sequence. This significantly reduces the risk that the person assigning participants to the intervention or control group biases the allocation. See Randomise Me
STATISTICAL SIGNIFICANCE
A measure of whether a treatment had an effect that is unlikely to have occurred by chance, and is therefore likely to have been brought about by the intervention. Statistical significance tells you whether something had an effect on something else, but it does not tell you the magnitude of that effect – whether it was big or small (see Effect size).
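The logic of “could this have occurred by chance?” can be illustrated with a simple permutation test, which asks how often randomly re-shuffling the group labels produces a difference at least as large as the one observed (this is one of several ways to obtain a p-value; the function below is an illustrative sketch):

```python
import random
import statistics

def permutation_p_value(intervention, control, n_permutations=10000, seed=0):
    """Two-sided permutation test on the difference in group means.

    The p-value is the proportion of random label re-shufflings whose
    mean difference is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = abs(statistics.mean(intervention) - statistics.mean(control))
    combined = list(intervention) + list(control)
    n1 = len(intervention)
    at_least_as_extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(combined)
        diff = abs(statistics.mean(combined[:n1]) - statistics.mean(combined[n1:]))
        if diff >= observed:
            at_least_as_extreme += 1
    return at_least_as_extreme / n_permutations
```

A small p-value (conventionally below 0.05) is taken as evidence that the observed difference is unlikely to be due to chance alone; it says nothing about how large the difference is.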
SYSTEMATIC REVIEW
A comprehensive review of all relevant research about the efficacy of a treatment or intervention; involves systematic and transparent identification, selection, synthesis and appraisal of studies. Can be accompanied by a meta-analysis. A systematic review usually involves the synthesis of results from multiple studies.