Summary
A meta-analysis is essentially a fancy way to get an average effect size across multiple studies. Each study contributes its own mean difference to the average; usually this is the average difference between a treatment group and a control group. But not all studies should contribute equally.
As an overly simple example, imagine you wanted to get the average effect from two randomized controlled trials. One of those trials had 1,000 participants, and one trial had 10. The results from the trial with 10 participants are much less certain than the results from the trial with 1,000 participants. Thus, if we average the results of these two trials together, the former should contribute much more to the average than the latter. This can be done by taking the weighted average and weighting the smaller trial much less than the larger trial.
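The two-trial example above can be sketched in a few lines of Python. The effect sizes here are hypothetical numbers chosen for illustration, and the weights are simply each trial's share of the total sample size (a stand-in for the precision-based weights discussed below):

```python
# Hypothetical mean differences reported by two trials
effects = [2.0, 3.5]
# Participants per trial: one large trial, one small one
sizes = [1000, 10]

# Weight each trial by its share of the total sample size
weights = [n / sum(sizes) for n in sizes]

# Weighted average: dominated by the 1,000-participant trial
weighted_mean = sum(w * e for w, e in zip(weights, effects))

# Unweighted average, for comparison: treats both trials equally
naive_mean = sum(effects) / len(effects)

print(weighted_mean)  # close to the large trial's 2.0
print(naive_mean)     # halfway between the two trials: 2.75
```

Because the large trial holds about 99% of the total weight, the weighted mean sits very close to its result, whereas the naive average is pulled strongly toward the small, noisy trial.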
Thus, the weighted mean difference is the average of differences across trials, with each trial’s contribution weighted according to its precision. The precision of a trial depends on two factors: its sample size (i.e., how many participants it has) and how scattered the results within the trial were (i.e., whether many people had roughly the same size of effect, or each person had a wildly different effect). Figure 1 depicts the results of two simulated trials, in red and blue, that demonstrate what we mean by “scattered”. Both trials have the same average effect and the same sample size. The only difference is that the trial depicted in red had participants who experienced wildly different effects, so its results are more scattered. Even though both trials had the same number of participants and the same average effect, the blue trial would receive a higher weight because of its less-scattered results. In Figure 1’s example, the study in blue would have a weight of around 0.8, and the study in red a weight of around 0.2, even though the trials’ sample sizes are the same.
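One common way to turn precision into weights is inverse-variance weighting: each trial is weighted by one over the squared standard error of its mean effect, which folds together both sample size and scatter. The sketch below assumes hypothetical standard deviations for the Figure 1 scenario (same sample size, but the red trial's effects are twice as spread out), chosen so the weights land at roughly 0.8 and 0.2:

```python
import math

# Both simulated trials have the same number of participants
n = 50

# Assumed within-trial standard deviations: the "red" trial's
# effects are twice as scattered as the "blue" trial's
sd_blue, sd_red = 1.0, 2.0

# Standard error of each trial's mean effect: sd / sqrt(n)
se_blue = sd_blue / math.sqrt(n)
se_red = sd_red / math.sqrt(n)

# Inverse-variance weights: more scatter -> larger variance -> smaller weight
w_blue = 1 / se_blue**2
w_red = 1 / se_red**2

# Normalize so the weights sum to 1
total = w_blue + w_red
print(w_blue / total, w_red / total)  # → 0.8 0.2
```

Doubling the scatter quadruples the variance of the trial's mean, so the red trial ends up with a quarter of the blue trial's weight despite the identical sample sizes.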