Automated Interpretation of Indices of Effect Size

Why?

The metrics used in statistics (indices of fit, model performance, or parameter estimates) can be very abstract. Long experience is required to intuitively “feel” the meaning of their values. To facilitate the understanding of the results they face, many scientists use (often implicitly) some set of rules of thumb. In order to validate and standardize such interpretation grids, some authors have published them in the form of guidelines.

One of the most famous interpretation grids was proposed by Cohen (1988) for a series of widely used indices, such as the correlation r (r = .10, small; r = .30, moderate; r = .50, large) or the standardized difference (Cohen’s d). However, there is now clear evidence that Cohen’s guidelines (which he himself later disavowed) are much too stringent and not particularly meaningful taken out of context (Funder and Ozer 2019). This led to the emergence of a literature discussing and creating new sets of rules of thumb.

Although everybody agrees that effect size interpretation in a study should be justified with a rationale (and depend on the context, the field, the literature, the hypothesis, etc.), these pre-baked rules can nevertheless be useful to give a rough idea or frame of reference for understanding scientific results.

The effectsize package implements such sets of rules of thumb for a variety of indices in a flexible and explicit fashion, helping you understand and report your results in a scientific yet meaningful way. Again, readers should keep in mind that these thresholds, although “validated”, remain arbitrary. Thus, their use should be discussed on a case-by-case basis, depending on the field, hypotheses, prior results, and so on, to avoid their crystallization, as happened with the infamous p < .05.
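For instance, each supported index comes with an interpret_*() function that takes a value and a set of rules and returns the corresponding label. A minimal sketch (the value 0.35 is arbitrary):

library(effectsize)

interpret_r(0.35, rules = "cohen1988")  # e.g., "moderate" under Cohen's (1988) thresholds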

Moreover, some authors suggest the counter-intuitive idea that a very large effect, especially in the context of psychological research, is likely to be a “gross overestimate that will rarely be found in a large sample or in a replication” (Funder and Ozer 2019). They suggest that smaller effect sizes are worth taking seriously (as they can be potentially consequential), as well as more believable.

Supported Indices

Coefficient of determination (R2)

Falk and Miller (1992)

interpret_r2(x, rules = "falk1992")

Cohen (1988)

interpret_r2(x, rules = "cohen1988")

Chin and others (1998)

interpret_r2(x, rules = "chin1998")

Hair, Ringle, and Sarstedt (2011)

interpret_r2(x, rules = "hair2011")

Correlation r

Funder and Ozer (2019)

interpret_r(x, rules = "funder2019")

Gignac and Szodorai (2016)

Gignac’s rules of thumb are among the few interpretation grids justified by actual data, in this case the distribution of effect magnitudes in the literature.

interpret_r(x, rules = "gignac2016")

Cohen (1988)

interpret_r(x, rules = "cohen1988")

Evans (1996)

interpret_r(x, rules = "evans1996")

Standardized Difference d (Cohen’s d)

The standardized difference can be obtained through the standardization of a linear model’s parameters or data, in which case it can be used as an index of effect size.
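For example, one can compute Cohen’s d for a two-group comparison and pass it to interpret_d(). A minimal sketch using the built-in mtcars data (assuming the cohens_d() output column is named Cohens_d, as in current versions of effectsize):

library(effectsize)

d <- cohens_d(mpg ~ am, data = mtcars)  # standardized difference between transmission groups
interpret_d(d$Cohens_d, rules = "funder2019")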

Funder and Ozer (2019)

interpret_d(x, rules = "funder2019")

Gignac and Szodorai (2016)

Gignac’s rules of thumb are among the few interpretation grids justified by actual data, in this case the distribution of effect magnitudes in the literature.

interpret_d(x, rules = "gignac2016")

Cohen (1988)

interpret_d(x, rules = "cohen1988")

Sawilowsky (2009)

interpret_d(x, rules = "sawilowsky2009")

Odds ratio

Odds ratios, and log odds ratios, are often found in epidemiological studies. However, they are also the parameters of logistic regressions, where they can be used as indices of effect size. Note that (log) odds ratios from logistic regression coefficients are unstandardized, as they depend on the scale of the predictor. In order to apply the following guidelines, make sure you standardize your predictors!
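A minimal sketch of this workflow (the model and predictor are purely illustrative):

# logistic regression with a standardized predictor
model <- glm(am ~ scale(mpg), data = mtcars, family = binomial)
odds_ratio <- exp(coef(model)[["scale(mpg)"]])  # back-transform the log odds ratio
interpret_odds(odds_ratio, rules = "chen2010")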

Chen, Cohen, and Chen (2010)

interpret_odds(x, rules = "chen2010")

Cohen (1988)

interpret_odds(x, rules = "cohen1988")

This converts the (log) odds ratio to a standardized difference d using the following formula (Cohen 1988; Sánchez-Meca, Marín-Martínez, and Chacón-Moscoso 2003):

d <- log_odds * (sqrt(3) / pi)
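For instance, under this conversion, an odds ratio of 9 corresponds to a fairly large standardized difference:

log(9) * (sqrt(3) / pi)  # approximately 1.21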

Omega Squared

Omega squared is a measure of effect size used in ANOVAs. It is an estimate of how much variance in the response variable is accounted for by the explanatory variables. Omega squared is widely viewed as a less biased alternative to eta squared, especially when sample sizes are small.
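A minimal sketch of computing and interpreting omega squared for a one-way ANOVA (assuming the omega_squared() output column is named Omega2):

library(effectsize)

model <- aov(mpg ~ factor(cyl), data = mtcars)
os <- omega_squared(model)
interpret_omega_squared(os$Omega2, rules = "field2013")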

Field (2013)

interpret_omega_squared(x, rules = "field2013")

Bayes Factor (BF)

Bayes factors (BF) are continuous measures of relative evidence, with a Bayes factor greater than 1 giving evidence in favor of one of the models (the numerator), and a Bayes factor smaller than 1 giving evidence in favor of the other model (the denominator). Yet, it is common to interpret the magnitude of relative evidence based on conventional intervals, such as those presented below, mapping the values of a BF10 (comparing the alternative to the null) to qualitative labels of evidence strength.

For human readability, it is recommended to report BFs so that the ratios are larger than 1. For example, it is harder to understand a BF10 = 0.07 (indicating the data are 0.07 times more probable under the alternative) than the equivalent BF01 = 1/0.07 = 14.3 (indicating the data are 14.3 times more probable under the null). Since BFs are strictly positive, values between 0 and 1, indicating evidence against the hypothesis in the numerator, can be converted via bf = 1 / bf.

One can report Bayes factors using the following sentence:

There is strong evidence against the null hypothesis (BF = 12.2).
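A minimal sketch, echoing the example above and the 1 / bf conversion (the resulting labels depend on the rules chosen below):

interpret_bf(12.2, rules = "jeffreys1961")
interpret_bf(1 / 0.07, rules = "jeffreys1961")  # flip a BF below 1 before interpreting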

Jeffreys (1961)

interpret_bf(x, rules = "jeffreys1961")

Raftery (1995)

interpret_bf(x, rules = "raftery1995")

Bayesian Convergence Diagnostics (Rhat and Effective Sample Size)

Experts have suggested threshold values to help interpret convergence and sampling quality. As such, Rhat should not be larger than 1.1 (Gelman, Rubin, and others 1992) or 1.01 (Vehtari et al. 2019), and an effective sample size (ESS) greater than 1,000 is sufficient for stable estimates (Bürkner and others 2017).
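A minimal sketch, assuming your version of effectsize exposes the interpret_rhat() and interpret_ess() helpers (the diagnostic values are arbitrary):

interpret_rhat(1.02, rules = "vehtari2019")  # checked against the 1.01 threshold
interpret_ess(800, rules = "burkner2017")    # checked against the 1,000 threshold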

Other Bayesian Indices (% in ROPE, pd)

The interpretation of Bayesian indices is detailed in this article.

References

Bürkner, Paul-Christian, and others. 2017. “brms: An R Package for Bayesian Multilevel Models Using Stan.” Journal of Statistical Software 80 (1): 1–28.

Chen, Henian, Patricia Cohen, and Sophie Chen. 2010. “How Big Is a Big Odds Ratio? Interpreting the Magnitudes of Odds Ratios in Epidemiological Studies.” Communications in Statistics—Simulation and Computation 39 (4): 860–64.

Chin, Wynne W, and others. 1998. “The Partial Least Squares Approach to Structural Equation Modeling.” Modern Methods for Business Research 295 (2): 295–336.

Cohen, Jacob. 1988. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum.

Evans, James D. 1996. Straightforward Statistics for the Behavioral Sciences. Thomson Brooks/Cole Publishing Co.

Falk, R Frank, and Nancy B Miller. 1992. A Primer for Soft Modeling. University of Akron Press.

Field, Andy. 2013. Discovering Statistics Using IBM SPSS Statistics. Sage.

Funder, David C, and Daniel J Ozer. 2019. “Evaluating Effect Size in Psychological Research: Sense and Nonsense.” Advances in Methods and Practices in Psychological Science 2 (2): 156–68.

Gelman, Andrew, Donald B Rubin, and others. 1992. “Inference from Iterative Simulation Using Multiple Sequences.” Statistical Science 7 (4): 457–72.

Gignac, Gilles E, and Eva T Szodorai. 2016. “Effect Size Guidelines for Individual Differences Researchers.” Personality and Individual Differences 102: 74–78.

Hair, Joe F, Christian M Ringle, and Marko Sarstedt. 2011. “PLS-Sem: Indeed a Silver Bullet.” Journal of Marketing Theory and Practice 19 (2): 139–52.

Jeffreys, Harold. 1961. Theory of Probability. 3rd ed. Oxford: Clarendon Press.

Raftery, Adrian E. 1995. “Bayesian Model Selection in Social Research.” Sociological Methodology 25: 111–64.

Sawilowsky, Shlomo S. 2009. “New Effect Size Rules of Thumb.” Journal of Modern Applied Statistical Methods 8 (2): 597–99.

Sánchez-Meca, Julio, Fulgencio Marín-Martínez, and Salvador Chacón-Moscoso. 2003. “Effect-Size Indices for Dichotomized Outcomes in Meta-Analysis.” Psychological Methods 8 (4): 448.

Vehtari, Aki, Andrew Gelman, Daniel Simpson, Bob Carpenter, and Paul-Christian Bürkner. 2019. “Rank-Normalization, Folding, and Localization: An Improved R-hat for Assessing Convergence of MCMC.” arXiv Preprint arXiv:1903.08008.