A meta-analysis is a method for statistically combining the results of studies that are included in a systematic review, to come to a conclusion about the overall effects of an intervention.
A single research trial (or an ordinary literature review) can tell you whether an intervention had a statistically significant effect. In other words, it will report on whether a treatment had an effect that was unlikely to have occurred by chance, and therefore is likely to have been brought about by the intervention.
Meta-analysis, on the other hand, tells you not just whether there was a significant effect, but also about the direction of the effect (positive or negative) and its magnitude (how strong it was). This is reported numerically using effect sizes, and visually in the form of a forest plot.
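To make the idea of combining effect sizes concrete, here is a minimal sketch of the most common pooling method, fixed-effect inverse-variance weighting. The trial results below are entirely hypothetical (invented standardised mean differences and standard errors, for illustration only); real meta-analyses would use dedicated software and report the result in a forest plot.

```python
import math

# Hypothetical trial results: each tuple is (effect size, standard error).
# These are made-up standardised mean differences, for illustration only.
trials = [(0.30, 0.12), (0.45, 0.20), (0.25, 0.15)]

# Fixed-effect (inverse-variance) pooling: each trial is weighted by 1/SE^2,
# so larger, more precise trials contribute more to the pooled estimate.
weights = [1 / se**2 for _, se in trials]
pooled = sum(w * es for (es, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# A 95% confidence interval for the pooled effect; if it excludes zero,
# the combined evidence points to a statistically significant effect.
low, high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se

print(f"Pooled effect: {pooled:.2f} (95% CI {low:.2f} to {high:.2f})")
```

The pooled estimate sits between the individual trial results but closest to the most precise trial, which is exactly what the weighting is designed to achieve.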
WHY DO SOME SYSTEMATIC REVIEWS USE META-ANALYSIS, BUT OTHERS DON’T?
In any systematic review, the studies that are included will differ in a variety of ways. When studies differ substantially – for example, when some are randomised and others are non-randomised controlled trials – it is usually best not to combine them in a single meta-analysis. It is more meaningful and accurate to conduct two meta-analyses: one which includes all of the randomised trials, and another which includes the non-randomised trials.
This is also true when the studies in a systematic review are asking different questions. For example, let’s say there are three studies in a systematic review which are all reporting on trials of a single parenting intervention called the Parenting Programme. Two of the trials were undertaken to answer the question ‘Does the Parenting Programme reduce child abuse?’ The third trial was undertaken to find out ‘Does the Parenting Programme improve children’s reading abilities?’ In this case, the three trials cannot be combined in a single meta-analysis. Instead, the two trials measuring child abuse can be meta-analysed together, while the single trial measuring reading ability would not be meta-analysed (because a meta-analysis requires at least two trials).
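The decision rule described above – group trials by the question they ask, and only meta-analyse groups containing at least two trials – can be sketched as follows. The trial data are hypothetical (invented effect sizes and standard errors for the Parenting Programme scenario), and the pooling uses the same fixed-effect inverse-variance method as before.

```python
from collections import defaultdict

# Hypothetical trials of the Parenting Programme:
# (outcome measured, effect size, standard error). Values are invented.
trials = [
    ("child abuse", -0.40, 0.15),
    ("child abuse", -0.25, 0.20),
    ("reading ability", 0.30, 0.18),
]

# Group the trials by the question (outcome) they were designed to answer.
by_outcome = defaultdict(list)
for outcome, es, se in trials:
    by_outcome[outcome].append((es, se))

# Pool only outcomes with at least two trials, using fixed-effect
# inverse-variance weighting; a lone trial is reported but not pooled.
for outcome, results in by_outcome.items():
    if len(results) < 2:
        print(f"{outcome}: only one trial, not meta-analysed")
        continue
    weights = [1 / se**2 for _, se in results]
    pooled = sum(w * es for (es, _), w in zip(results, weights)) / sum(weights)
    print(f"{outcome}: pooled effect {pooled:.2f} from {len(results)} trials")
```

Here the two child-abuse trials are combined into one pooled estimate, while the single reading-ability trial is left out of the meta-analysis, mirroring the scenario in the text.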
Studies which have different designs (e.g. randomised and non-randomised trials) and those which ask different questions (as in the scenario above) are two clear examples of when not to meta-analyse trials together. In other cases, however, the decision about whether trials are similar enough to include within a single meta-analysis is largely a matter of judgement. High-quality systematic reviews often include a justification of the authors’ decision to meta-analyse or not.