Summary
In the most general sense, “effect size” refers to the magnitude of the effect produced by an intervention: for example, the number of mmHg by which blood pressure is reduced in response to beetroot supplementation. More commonly, however, effect size is a statistical concept used in meta-analyses to describe the magnitude of an effect across multiple studies.
Effect size is especially important in meta-analyses because several studies may report the same outcome, but they may not measure that outcome in the same way. For example, three studies might all investigate the effects of exercise on depression, but one might measure depression using the Beck Depression Inventory, another using the Hamilton Depression Rating Scale, and another using the EQ-5D. In this case, it’d be fair to want to include these studies in the same meta-analysis, but because they use different scales, a researcher couldn’t just average the change in depression score across them. Instead, they’d use an effect size calculation to come up with a numerical value that describes an effect independent of any particular measurement scale.
The two most common measures of effect size are standardized mean difference (SMD) and Cohen’s d. Technically, Cohen’s d is one specific way of calculating an SMD, but both can be interpreted similarly:[1]
- SMD/d=0.20: small effect, not likely to be noticed (e.g., a supplement adds 0.3 lbs to a deadlift max).
- SMD/d=0.50: medium effect, might be noticed by someone who is looking for a change or is trained to notice a change (e.g., a supplement reduces certain symptoms of ADHD in school children, as reported by a teacher trained in the condition).
- SMD/d=0.80: large effect, likely to be noticed by most people (e.g., 1 year of resistance training results in gaining 25 lbs of muscle).
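To make the calculation concrete, Cohen’s d is the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch in Python, using made-up depression scores (the data and the `cohens_d` helper are illustrative, not from any real study):

```python
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d: difference in means divided by the pooled standard deviation."""
    n_a, n_b = len(group_a), len(group_b)
    # Pool the two sample variances, weighting each by its degrees of freedom.
    pooled_sd = (((n_a - 1) * stdev(group_a) ** 2 +
                  (n_b - 1) * stdev(group_b) ** 2) / (n_a + n_b - 2)) ** 0.5
    return (mean(group_a) - mean(group_b)) / pooled_sd

# Hypothetical end-of-trial depression scores (lower = less depressed):
control = [16, 18, 15, 17, 19, 14]
treatment = [12, 10, 14, 9, 11, 13]

d = cohens_d(control, treatment)  # positive d: control scored worse
print(round(d, 2))  # prints 2.67, a (very) large effect on the scale above
```

Because d is expressed in standard-deviation units rather than scale points, the same calculation could be run on Beck, Hamilton, or EQ-5D data, and the resulting values would be directly comparable.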