Using metapower

library(metapower)

Computing Meta-analytic Power

Suppose that we plan to meta-analyze all published findings to compute a summary effect size estimate for the group difference between typically developing individuals and individuals with autism on a measure of face recognition ability. To plan the study, we must choose plausible values for the following:

  1. Expected effect size
  2. Expected sample size per group
  3. Expected number of studies

For our meta-analysis of face recognition deficits in autism:

  1. We expect that face recognition deficits in ASD are small (Cohen’s d = 0.25).
  2. Sample sizes in autism research are generally small. We expect the average group size to be 20.
  3. Face recognition is frequently studied in autism; therefore, we expect to find 30 studies.

To do this with metapower, we use the core function mpower():

my_power <- mpower(effect_size = .25, sample_size = 20, k = 30, es_type = "d")

Note that mpower() estimates power under both Fixed-Effects and Random-Effects models; results for both appear in the output below.

print(my_power)
#> 
#>  Estimated Meta-Analytic Power 
#> 
#>  Expected Effect Size:              0.25 
#>  Expected Sample Size (per group):  20 
#>  Expected Number of Studies:        30 
#>  Expected between-study sd:         
#> 
#>  Estimated Power: Main effect 
#> 
#>  Fixed-Effects Model                             0.990698 
#>  Random-Effects Model (Low Heterogeneity):       0.962092 
#>  Random-Effects Model (Moderate Heterogeneity):  0.8621495 
#>  Random-Effects Model (Large Heterogeneity):     0.57799 
#> 
#>  Estimated Power: Test of Homogeneity 
#> 
#>  Fixed-Effects Model                             NA 
#>  Random-Effects Model (Low Heterogeneity):       0.2926194 
#>  Random-Effects Model (Moderate Heterogeneity):  0.9782353 
#>  Random-Effects Model (Large Heterogeneity):     1

The first part of the output echoes the expected input values; the main results appear in the bottom portion under Estimated Power. Under this set of values, our power to detect a mean difference under a Fixed-Effects model is 99.07%. Furthermore, we can look at the power under a Random-Effects model at various heterogeneity levels (e.g., Low, Moderate, Large). The output regarding Estimated Power: Test of Homogeneity is discussed below.

Given that power analyses require many assumptions, it is generally advisable to examine power across a range of input values. To visualize the power curve for this set of input parameters, use power_plot() to generate a modifiable ggplot object that, by default, shows 5x as many studies as the user inputs.

power_plot(my_power)

For Fixed-Effects models, power curves are shown for a range of effect sizes, whereas for Random-Effects models, power curves are shown across a range of heterogeneity values, \(\tau^2\).
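
Because power_plot() returns a ggplot object, it can be customized with standard ggplot2 layers. A minimal sketch, assuming ggplot2 is installed (the title and theme below are illustrative choices, not package defaults):

library(ggplot2)

# Append standard ggplot2 layers to the plot returned by power_plot()
# (the title and theme here are illustrative, not package defaults)
power_plot(my_power) +
  labs(title = "Meta-analytic power for d = 0.25, n = 20 per group") +
  theme_minimal()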

For users wanting more flexibility in visualization, the mpower object contains a data frame $df holding all of the data populating the ggplot object:

str(my_power$df)
#> 'data.frame':    447 obs. of  9 variables:
#>  $ k_v           : int  2 3 4 5 6 7 8 9 10 11 ...
#>  $ es_v          : num  0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 0.125 ...
#>  $ effect_size   : num  0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 0.25 ...
#>  $ n_v           : num  20 20 20 20 20 20 20 20 20 20 ...
#>  $ variance      : num  0.1 0.1 0.1 0.1 0.1 ...
#>  $ fixed_power   : num  0.0864 0.1051 0.1239 0.143 0.1621 ...
#>  $ random_power_s: num  0.0772 0.0911 0.1051 0.1192 0.1334 ...
#>  $ random_power_m: num  0.068 0.0772 0.0864 0.0957 0.1051 ...
#>  $ random_power_l: num  0.059 0.0635 0.068 0.0726 0.0772 ...
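
As an illustration, here is a minimal sketch of a custom power curve built directly from my_power$df, assuming ggplot2 is installed; the column names k_v, fixed_power, and es_v are taken from the str() output above:

library(ggplot2)

# Fixed-effects power against number of studies, one line per
# effect size in the grid (columns shown in the str() output)
ggplot(my_power$df, aes(x = k_v, y = fixed_power, color = factor(es_v))) +
  geom_line() +
  labs(x = "Number of studies (k)", y = "Power", color = "Effect size")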

Power for the Test of Homogeneity

For Fixed-Effects models, the test of homogeneity examines whether the amount of variation among effect sizes is greater than expected from sampling error alone. To compute this, you must use the argument sd to assign a value representing the average difference between the effect sizes and the mean effect.

(homogen_power <- mpower(effect_size = .25, sample_size = 20, k = 30, es_type = "d", sd = 3))
#> 
#>  Estimated Meta-Analytic Power 
#> 
#>  Expected Effect Size:              0.25 
#>  Expected Sample Size (per group):  20 
#>  Expected Number of Studies:        30 
#>  Expected between-study sd:         3 
#> 
#>  Estimated Power: Main effect 
#> 
#>  Fixed-Effects Model                             0.990698 
#>  Random-Effects Model (Low Heterogeneity):       0.962092 
#>  Random-Effects Model (Moderate Heterogeneity):  0.8621495 
#>  Random-Effects Model (Large Heterogeneity):     0.57799 
#> 
#>  Estimated Power: Test of Homogeneity 
#> 
#>  Fixed-Effects Model                             0.2966788 
#>  Random-Effects Model (Low Heterogeneity):       0.2926194 
#>  Random-Effects Model (Moderate Heterogeneity):  0.9782353 
#>  Random-Effects Model (Large Heterogeneity):     1

For Random-Effects models, the test of homogeneity evaluates whether the variance component, \(\tau^2\), is different from zero. metapower automatically uses small, moderate, and large values of \(\tau^2\).
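
Because the expected between-study sd is itself an assumption, a quick sensitivity check can be informative. A minimal sketch (the sd values 1 through 3 are illustrative only):

# Re-run the power analysis across a few plausible between-study sd
# values (1 through 3 are illustrative) and print each result
for (s in 1:3) {
  print(mpower(effect_size = .25, sample_size = 20, k = 30,
               es_type = "d", sd = s))
}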

We can visualize this across a range of input values with homogen_power_plot():

homogen_power_plot(homogen_power)

Example 2: Power analysis for moderation analysis (categorical variables)

Although researchers are primarily interested in conducting a meta-analysis to quantify the main effect of a specific phenomenon, it is very common to evaluate moderation of this overall effect by study- and/or sample-related characteristics such as task paradigm or age group (e.g., children, adolescents, adults). To compute the statistical power for detecting categorical moderators, we use the function mod_power() with a few additional arguments, namely:

  1. Expected number of groups (n_groups)
  2. Expected effect size of each group (effect_sizes)

For our meta-analysis of face recognition deficits in autism:

We may expect that face recognition tasks have larger effect sizes than face perception tasks; therefore, we specify two groups and their respective expected effect sizes:

  1. n_groups = 2
  2. effect_sizes = c(.2,.5)

my_mod <- mod_power(n_groups = 2, 
                    effect_sizes = c(.2,.5), 
                    sample_size = 20,
                    k = 30,
                    es_type = "d")
print(my_mod)
#> 
#>  Power Analysis for Categorical Moderators: 
#> 
#>  Number of groups:                  2 
#>  Expected Effect Sizes:             0.2 0.5 
#>  Expected Sample Size (per group):  20 
#>  Expected Number of Studies:        30 
#> 
#>  Estimated Power 
#> 
#>  Fixed-Effects Model (Between-Group):                          0.4458675 
#>  Fixed-Effects Model (Within-Group):                           NA 
#>  Random-Effects Model (Between-Group, Small Heterogeneity):    0.02504419 
#>  Random-Effects Model (Between-Group, Moderate Heterogeneity): 0.02506628 
#>  Random-Effects Model (Between-Group, Large Heterogeneity):    0.02513259

Given this set of expected values, we have 44.59% power to detect between-group differences under a Fixed-Effects model. As expected, moderator effects are much harder to detect, and more studies are required, especially when heterogeneity is high.
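
One way to gauge how many studies would be needed is to re-run the moderator analysis over a grid of k values. A minimal sketch (the k values are illustrative; each result is printed rather than indexed, to avoid assuming the internal structure of the returned object):

# Re-run the moderator power analysis for increasing numbers of
# studies (the k values are illustrative)
for (k_i in c(30, 60, 120)) {
  print(mod_power(n_groups = 2, effect_sizes = c(.2, .5),
                  sample_size = 20, k = k_i, es_type = "d"))
}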