Introduction: Marginal Effects at Specific Values

Daniel Lüdecke

2020-07-27

Marginal effects at specific values or levels

This vignette shows how to calculate marginal effects at specific values or levels for the terms of interest. It is recommended to read the general introduction first, if you haven’t done so yet.

The terms-argument not only defines the model terms of interest; each model term can also be limited to certain values. This allows you to compute and plot marginal effects for (grouping) terms at specific values only, or to define values for the main effect of interest.

There are several options to define these values, which should always be placed in square brackets directly after the term name and can vary for each model term.

  1. Concrete values are separated by a comma: terms = "c172code [1,3]". For factors, you could also use factor levels, e.g. terms = "Species [setosa,versicolor]".

  2. Ranges are specified with a colon: terms = c("c12hour [30:80]", "c172code [1,3]"). This would plot all values from 30 to 80 for the variable c12hour. By default, the step size is 1, i.e. [1:4] would create the range 1, 2, 3, 4. You can choose different step sizes with by, e.g. [1:4 by=.5].

  3. Convenient shortcuts to calculate common values like mean +/- 1 SD (terms = "c12hour [meansd]"), quartiles (terms = "c12hour [quart]") or minimum and maximum values (terms = "c12hour [minmax]"). See values_at() for the different options.

  4. A function name. The function is then applied to all unique values of the indicated variable, e.g. terms = "hp [exp]". You can also define your own functions and pass their names to the terms-values, e.g. terms = "hp [own_function]".

  5. A variable name. The values of that variable are then used as terms-values: first, a vector is defined, e.g. v = c(1000, 2000, 3000), and then terms = "income [v]".

  6. If the first variable specified in terms is a numeric vector for which no specific values are given, a “pretty range” is calculated (see pretty_range()) to avoid memory allocation problems for vectors with many unique values. To select all values, use the [all]-tag, e.g. terms = "mpg [all]". If a numeric vector is specified as second or third variable in terms (i.e. if this vector represents a grouping structure), representative values (see values_at()) are chosen, which are typically mean +/- SD.

  7. To create a pretty range that should be smaller or larger than the default range (i.e. if no specific values would be given), use the n-tag, e.g. terms = "age [n=5]" or terms = "age [n = 12]". Larger values for n return a larger range of predicted values.

  8. The sample-option is especially useful for plotting group levels of random effects with many levels, e.g. terms = "Subject [sample=9]", which samples nine values from all possible values of the variable Subject.

Specific values and value range

library(ggeffects)
library(ggplot2)
data(efc)
fit <- lm(barthtot ~ c12hour + neg_c_7 + c161sex + c172code, data = efc)

mydf <- ggpredict(fit, terms = c("c12hour [30:80]", "c172code [1,3]"))
mydf
#> 
#> # Predicted values of Total score BARTHEL INDEX
#> # x = average number of hours of care per week
#> 
#> # c172code = low level of education
#> 
#>  x | Predicted |   SE |         95% CI
#> --------------------------------------
#> 30 |     67.15 | 1.59 | [64.04, 70.26]
#> 38 |     65.12 | 1.56 | [62.06, 68.18]
#> 47 |     62.84 | 1.55 | [59.81, 65.88]
#> 55 |     60.81 | 1.55 | [57.78, 63.85]
#> 63 |     58.79 | 1.56 | [55.72, 61.85]
#> 80 |     54.48 | 1.63 | [51.28, 57.68]
#> 
#> # c172code = high level of education
#> 
#>  x | Predicted |   SE |         95% CI
#> --------------------------------------
#> 30 |     68.58 | 1.62 | [65.42, 71.75]
#> 38 |     66.56 | 1.62 | [63.39, 69.73]
#> 47 |     64.28 | 1.63 | [61.08, 67.47]
#> 55 |     62.25 | 1.66 | [59.01, 65.50]
#> 63 |     60.23 | 1.69 | [56.91, 63.54]
#> 80 |     55.92 | 1.80 | [52.39, 59.45]
#> 
#> Adjusted for:
#> * neg_c_7 = 11.84
#> * c161sex =  1.76
ggplot(mydf, aes(x, predicted, colour = group)) + geom_line()

Defining value ranges is especially useful when variables are, for instance, log-transformed. In that case, ggpredict() typically uses only the range of the log-transformed variable, which in most cases is not what we want. In such situations, specify the range in the terms-argument.

data(mtcars)
mpg_model <- lm(mpg ~ log(hp), data = mtcars)

# x-values and predictions based on the log(hp)-values
ggpredict(mpg_model, "hp")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>    x | Predicted |   SE |         95% CI
#> ----------------------------------------
#> 3.80 |     58.27 | 4.38 | [49.69, 66.86]
#> 4.00 |     57.72 | 4.32 | [49.26, 66.18]
#> 4.40 |     56.69 | 4.20 | [48.46, 64.93]
#> 4.60 |     56.21 | 4.15 | [48.08, 64.34]
#> 4.80 |     55.76 | 4.10 | [47.73, 63.79]
#> 5.00 |     55.32 | 4.05 | [47.38, 63.25]
#> 5.20 |     54.89 | 4.00 | [47.05, 62.73]
#> 5.80 |     53.72 | 3.87 | [46.14, 61.30]

# x-values and predictions based on hp-values from 50 to 150
ggpredict(mpg_model, "hp [50:150]")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>  50 |     30.53 | 1.32 | [27.95, 33.11]
#>  63 |     28.04 | 1.07 | [25.94, 30.14]
#>  75 |     26.17 | 0.90 | [24.41, 27.93]
#>  87 |     24.57 | 0.77 | [23.07, 26.07]
#> 100 |     23.07 | 0.67 | [21.77, 24.37]
#> 113 |     21.75 | 0.60 | [20.57, 22.94]
#> 125 |     20.67 | 0.58 | [19.54, 21.80]
#> 150 |     18.71 | 0.59 | [17.54, 19.87]

By default, the step size for a range is 1, like 50, 51, 52, .... If you need a different step size, use by=<stepsize> inside the brackets, e.g. "hp [50:60 by=.5]". This would create a range from 50 to 60 in steps of .5.

# range for x-values with .5-steps
ggpredict(mpg_model, "hp [50:60 by=.5]")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>     x | Predicted |   SE |         95% CI
#> -----------------------------------------
#> 50.00 |     30.53 | 1.32 | [27.95, 33.11]
#> 51.50 |     30.21 | 1.29 | [27.69, 32.73]
#> 52.50 |     30.01 | 1.26 | [27.53, 32.48]
#> 53.50 |     29.80 | 1.24 | [27.36, 32.24]
#> 55.00 |     29.50 | 1.21 | [27.12, 31.88]
#> 56.50 |     29.22 | 1.19 | [26.89, 31.54]
#> 57.50 |     29.03 | 1.17 | [26.74, 31.31]
#> 60.00 |     28.57 | 1.12 | [26.37, 30.77]

Choosing representative values

Especially in situations where two continuous variables appear in an interaction term, or where the “grouping” variable is continuous, it is helpful to select representative values of the grouping variable; otherwise, predictions would be made for too many groups, which is no longer helpful when interpreting marginal effects.

You can use the value shortcuts described above, such as [meansd] or [quart], to select such representative values:

data(efc)
# short variable label, for plot
attr(efc$c12hour, "label") <- "hours of care"
fit <- lm(barthtot ~ c12hour * c161sex + neg_c_7, data = efc)

mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [meansd]"))
plot(mydf)


mydf <- ggpredict(fit, terms = c("c161sex", "c12hour [quart]"))
plot(mydf)
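
To inspect which values these shortcuts actually correspond to, you can call values_at() (the helper mentioned above) directly on the vector. This is a minimal sketch, assuming the efc data from above is still loaded; the returned values depend on your data.

# representative values used by the [meansd] shortcut
values_at(efc$c12hour, values = "meansd")

# representative values used by the [quart] shortcut
values_at(efc$c12hour, values = "quart")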

Transforming values with functions

The brackets in the terms-argument also accept the name of a valid function, to (back-)transform predicted values. In this example, an alternative would be to specify that values should be exponentiated, which is indicated by [exp] in the terms-argument:

# x-values and predictions based on exponentiated hp-values
ggpredict(mpg_model, "hp [exp]")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>  52 |     30.11 | 1.28 | [27.61, 32.61]
#>  66 |     27.54 | 1.02 | [25.54, 29.55]
#>  93 |     23.85 | 0.71 | [22.45, 25.25]
#> 105 |     22.54 | 0.64 | [21.30, 23.79]
#> 113 |     21.75 | 0.60 | [20.57, 22.94]
#> 150 |     18.71 | 0.59 | [17.54, 19.87]
#> 205 |     15.34 | 0.79 | [13.80, 16.89]
#> 335 |     10.06 | 1.28 | [ 7.55, 12.56]

It is possible to use any function, including custom functions:

# x-values and predictions based on doubled hp-values
hp_double <- function(x) 2 * x
ggpredict(mpg_model, "hp [hp_double]")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>     x | Predicted |   SE |         95% CI
#> -----------------------------------------
#>  7.90 |     50.39 | 3.49 | [43.54, 57.24]
#>  8.38 |     49.76 | 3.42 | [43.05, 56.47]
#>  9.07 |     48.91 | 3.33 | [42.39, 55.43]
#>  9.31 |     48.63 | 3.30 | [42.17, 55.09]
#>  9.45 |     48.46 | 3.28 | [42.04, 54.88]
#> 10.02 |     47.83 | 3.21 | [41.55, 54.12]
#> 10.65 |     47.18 | 3.13 | [41.04, 53.32]
#> 11.63 |     46.23 | 3.03 | [40.30, 52.17]

Using values from a variable (vector)

val <- c(100, 200, 300)
ggpredict(mpg_model, "hp [val]")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#> 100 |     23.07 | 0.67 | [21.77, 24.37]
#> 200 |     15.61 | 0.77 | [14.11, 17.11]
#> 300 |     11.24 | 1.16 | [ 8.97, 13.51]

Pretty value ranges

This section shows some examples of how the plotted output differs, depending on which value range is used. Some transformations, like polynomial or spline terms, but also quadratic or cubic terms, result in many predicted values. In such situations, predictions for some models can lead to memory allocation problems. That is why ggpredict() “prettifies” certain value ranges by default, at least for some model types (like mixed models).
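
To get a feeling for what this “prettifying” does, you can apply pretty_range() (see above) to a numeric vector yourself; a small sketch, using the efc data loaded earlier:

# number of unique values in the raw variable ...
length(unique(efc$c12hour))

# ... compared to the "prettified" value range that would be used by default
pretty_range(efc$c12hour)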

To see the difference in the “curvilinear” trend, we use a quadratic term on a standardized variable.

library(sjmisc)
library(sjlabelled)
library(lme4)
data(efc)

efc$c12hour <- std(efc$c12hour)
efc$e15relat <- as_label(efc$e15relat)

m <- lmer(
  barthtot ~ c12hour + I(c12hour^2) + neg_c_7 + c160age + c172code + (1 | e15relat), 
  data = efc
)

me <- ggpredict(m, terms = "c12hour")
plot(me)

Turn off “prettifying”

As mentioned above, ggpredict() “prettifies” the vector, resulting in a smaller set of unique values. This consumes less memory and may be necessary especially for more complex models.

You can turn off automatic “prettifying” by adding the "all"-shortcut to the terms-argument.

me <- ggpredict(m, terms = "c12hour [all]")
plot(me)

This results in a smooth plot, as all values from the term of interest are taken into account.

Using different ranges for prettifying

To modify the “prettifying”, add the "n"-shortcut to the terms-argument. This allows you to select a feasible range of values that is smaller (and hence less memory consuming) than terms = "... [all]", but still produces smoother plots than the default prettifying.

me <- ggpredict(m, terms = "c12hour [n=2]")
plot(me)

me <- ggpredict(m, terms = "c12hour [n=10]")
plot(me)

Marginal effects conditioned on specific values of the covariates

The typical-argument determines the function that is applied to the covariates to hold these terms at constant values. By default, this is the mean value, but other options (like median or mode) are possible as well.

Use the condition-argument to define other values at which covariates should be held constant. condition requires a named vector, with the name indicating the covariate.

data(mtcars)
mpg_model <- lm(mpg ~ log(hp) + disp, data = mtcars)

# "disp" is hold constant at its mean
ggpredict(mpg_model, "hp [exp]")
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>  52 |     25.61 | 1.87 | [21.94, 29.28]
#>  66 |     24.20 | 1.43 | [21.38, 27.01]
#>  93 |     22.16 | 0.85 | [20.50, 23.82]
#> 105 |     21.44 | 0.67 | [20.12, 22.76]
#> 113 |     21.01 | 0.59 | [19.85, 22.16]
#> 150 |     19.33 | 0.57 | [18.22, 20.44]
#> 205 |     17.48 | 0.99 | [15.53, 19.42]
#> 335 |     14.56 | 1.88 | [10.89, 18.24]
#> 
#> Adjusted for:
#> * disp = 230.72

# "disp" is hold constant at value 200
ggpredict(mpg_model, "hp [exp]", condition = c(disp = 200))
#> 
#> # Predicted values of mpg
#> # x = hp
#> 
#>   x | Predicted |   SE |         95% CI
#> ---------------------------------------
#>  52 |     26.30 | 1.70 | [22.97, 29.62]
#>  66 |     24.88 | 1.27 | [22.40, 27.36]
#>  93 |     22.85 | 0.72 | [21.45, 24.25]
#> 105 |     22.13 | 0.58 | [20.99, 23.27]
#> 113 |     21.69 | 0.54 | [20.65, 22.74]
#> 150 |     20.02 | 0.68 | [18.68, 21.35]
#> 205 |     18.16 | 1.17 | [15.87, 20.45]
#> 335 |     15.25 | 2.06 | [11.21, 19.29]
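
Similarly, instead of fixing specific values with condition, the typical-argument mentioned above can change the function used to hold covariates constant. A brief sketch; here, disp would be held at its median instead of its mean:

# hold covariates constant at their median instead of their mean
ggpredict(mpg_model, "hp [exp]", typical = "median")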

Marginal effects for each level of random effects

Marginal effects can also be calculated for each group level in mixed models. Simply add the name of the related random effects term to the terms-argument, and set type = "re".

In the following example, we fit a linear mixed model and first simply plot the marginal effects, not conditioned on the random effects.

library(sjlabelled)
library(lme4)
data(efc)
efc$e15relat <- as_label(efc$e15relat)
m <- lmer(neg_c_7 ~ c12hour + c160age + c161sex + (1 | e15relat), data = efc)
me <- ggpredict(m, terms = "c12hour")
plot(me)

Changing the type to type = "re" still returns population-level predictions by default. The major difference between type = "fe" and type = "re" is that the latter also takes the uncertainty in the variance parameters into account, which leads to larger confidence intervals for the marginal effects.

me <- ggpredict(m, terms = "c12hour", type = "re")
plot(me)

To compute marginal effects for each grouping level, add the related random term to the terms-argument. In this case, confidence intervals are not calculated, but marginal effects are conditioned on each group level of the random effects.

me <- ggpredict(m, terms = c("c12hour", "e15relat"), type = "re")
plot(me)

Marginal effects, conditioned on random effects, can also be calculated for specific levels only. Add the related values into brackets after the variable name in the terms-argument.

me <- ggpredict(m, terms = c("c12hour", "e15relat [child,sibling]"), type = "re")
plot(me)

If the group factor has too many levels, you can also take a random sample of all possible levels and plot the marginal effects for this subsample of group levels. To do this, use terms = "<groupfactor> [sample=n]".

data("sleepstudy")
m <- lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)
me <- ggpredict(m, terms = c("Days", "Subject [sample=8]"), type = "re")
plot(me)