Direct comparisons

Consider whether health actions were compared within a study rather than across studies.

Only very rarely are all the possible health actions for a condition compared in a single study, so it may be necessary to compare people in one study with people in another study (an indirect comparison). For indirect (between-study) comparisons to be reliable, the participants, as well as other features of the studies, must be similar.

For many conditions (e.g., depression) there are more than two possible treatments (for example, different medicines or types of psychotherapy), and there may be no direct comparisons of all the treatments in a single study, or only unreliable ones. When this is the case, indirect comparisons may provide the best available evidence to inform decisions. For example, there may be comparisons of drug A with a placebo and comparisons of drug B with a placebo, but no studies that compare drug A with drug B directly. In this case, indirect comparisons between studies may be needed to inform a decision about whether to use drug A or drug B.

When indirect comparisons are made, it is important to consider that, even when they are based on randomized trials, there can be important, possibly unknown, differences between the studies besides the health actions they assessed. Differences in the characteristics of the participants, the way the comparisons were done, or the outcome measures can result in misleading estimates of the effects of the health actions.

Informal indirect comparisons – e.g., assuming that drug A is more effective than drug B simply because drug A had a larger effect than drug B when each was compared with placebo – can be misleading and should be avoided.
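A more careful alternative to the informal comparison above is an adjusted indirect comparison (often attributed to Bucher and colleagues): the two placebo-controlled effects are subtracted from each other on the log scale, and the uncertainty from both comparisons is combined. The sketch below uses made-up numbers – the odds ratios, standard errors, and drug labels are all hypothetical, not from any real trial:

```python
import math

# Hypothetical placebo-controlled results, expressed as log odds ratios.
log_or_a_vs_placebo = math.log(0.60)   # hypothetical: drug A vs placebo, OR = 0.60
se_a = 0.15                            # hypothetical standard error
log_or_b_vs_placebo = math.log(0.75)   # hypothetical: drug B vs placebo, OR = 0.75
se_b = 0.20                            # hypothetical standard error

# Adjusted indirect estimate of A vs B: subtract the two placebo-controlled
# effects, rather than informally comparing the two odds ratios side by side.
log_or_a_vs_b = log_or_a_vs_placebo - log_or_b_vs_placebo

# The uncertainty from BOTH comparisons adds up, so the indirect estimate
# is less precise than either direct comparison on its own.
se_indirect = math.sqrt(se_a**2 + se_b**2)

# 95% confidence interval, back-transformed to the odds ratio scale.
lower = math.exp(log_or_a_vs_b - 1.96 * se_indirect)
upper = math.exp(log_or_a_vs_b + 1.96 * se_indirect)

print(f"Indirect OR (A vs B): {math.exp(log_or_a_vs_b):.2f}")
print(f"95% CI: {lower:.2f} to {upper:.2f}")
```

With these hypothetical numbers, drug A looks better (indirect odds ratio about 0.80), but the confidence interval is wide and crosses 1, so the indirect evidence alone would not establish that A is more effective than B. And even this adjusted calculation still depends on the two sets of trials being similar in every respect other than the drug tested.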

An increasing number of systematic reviews of multiple treatments for a condition use what is called “network meta-analysis”. As with any systematic review, the reliability of estimates of treatment effects from network meta-analyses depends on the methods used to find relevant studies, assess the trustworthiness of those studies, and put together the results. In addition, network meta-analyses need to assess the similarity of the included studies (apart from the health actions being compared).

In a systematic review of different doses of aspirin to prevent blockages of blood vessels after heart bypass surgery, researchers found five randomized trials that compared aspirin with placebo. Two trials tested medium-dose aspirin, and three trials tested low-dose aspirin. Based on the indirect comparison, the results suggested the possibility of a larger effect with medium-dose aspirin. However, there were other characteristics of the trials that might be responsible for the differences found between the effects of the different doses. The apparent difference in effectiveness of low-dose and medium-dose aspirin may have been because of differences in the patients included in the trials, health actions other than taking aspirin, or how outcomes were measured in the low-dose compared with the medium-dose trials.

Remember: If indirect comparisons (across studies) are needed to inform choices about health actions, consider carefully whether there are differences between the studies besides the health actions that were compared.
