To demonstrate how to use efficiency augmentation and the propensity score utilities available in the personalized package, we simulate data with two treatments. The treatment assignment depends on covariates and hence mimics an observational setting with no unmeasured confounders; as a result, we must model the propensity score \(\pi(x) = \Pr(T = 1 \mid X = x)\). Throughout this simulation we assume that larger values of the outcome are better.
library(personalized)
set.seed(1)
n.obs <- 1000
n.vars <- 50
x <- matrix(rnorm(n.obs * n.vars, sd = 3), n.obs, n.vars)
# simulate non-randomized treatment
xbetat <- 0.5 + 0.25 * x[,21] - 0.25 * x[,41]
trt.prob <- exp(xbetat) / (1 + exp(xbetat))
trt <- rbinom(n.obs, 1, prob = trt.prob)
# simulate the treatment-covariate interaction effect delta(X)
delta <- (0.5 + x[,2] - 0.5 * x[,3] - 1 * x[,11] + 1 * x[,1] * x[,12])
# simulate main effects g(X)
xbeta <- x[,1] + x[,11] - 2 * x[,12]^2 + x[,13] + 0.5 * x[,15] ^ 2
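# add the treatment effect term: trt in {0,1} is mapped to {-1,1}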
xbeta <- xbeta + delta * (2 * trt - 1)
# simulate continuous outcomes
y <- drop(xbeta) + rnorm(n.obs)
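For reference, the data-generating model above can be written as \(Y = g(X) + \Delta(X)\,(2T - 1) + \varepsilon\), where \(g(X)\) corresponds to xbeta before the treatment term is added, \(\Delta(X)\) corresponds to delta, \(T \in \{0, 1\}\) is the treatment indicator, and \(\varepsilon \sim N(0, 1)\). The contrast in expected outcomes between \(T = 1\) and \(T = 0\) is therefore \(2\Delta(X)\), which is the individualized treatment effect the methods below aim to recover.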
Estimation of the propensity score is a fundamental aspect of the estimation of individualized treatment rules (ITRs). The personalized package provides the create.propensity.function() utility for constructing the propensity score function used by fit.subgroup(). This tool allows for estimation of the propensity score in high dimensions via glmnet. In high dimensions it can be important to account for regularization bias via cross-fitting (https://arxiv.org/abs/1608.00060); create.propensity.function() offers a cross-fitting approach for high-dimensional propensity score estimation. A basic usage of this function with cross-fitting (with 10 folds) is as follows:
# arguments can be passed to cv.glmnet via `cv.glmnet.args`
prop.func <- create.propensity.function(crossfit = TRUE,
                                        nfolds.crossfit = 10,
                                        cv.glmnet.args = list(type.measure = "auc", nfolds = 5))
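For intuition, the sketch below illustrates the cross-fitting idea that create.propensity.function() automates: the data are split into folds, a lasso-penalized logistic regression is fit on the out-of-fold observations via cv.glmnet, and propensity scores are predicted for the held-out fold. This is a simplified illustration rather than the package's internal implementation; the name crossfit.prop.func, the number of folds K, and the truncation bounds 0.025/0.975 are choices made here for illustration. A function of this form, taking x and trt, could in principle be supplied directly as propensity.func.
library(glmnet)
crossfit.prop.func <- function(x, trt) {
    K     <- 10                                    # number of cross-fitting folds
    folds <- sample(rep(1:K, length.out = nrow(x)))
    pi.x  <- numeric(nrow(x))
    for (k in 1:K) {
        # fit the propensity model on all folds except the k-th
        fit <- cv.glmnet(x = x[folds != k, ], y = trt[folds != k],
                         family = "binomial", type.measure = "auc", nfolds = 5)
        # predict propensity scores for the held-out k-th fold
        pi.x[folds == k] <- drop(predict(fit, newx = x[folds == k, ],
                                         s = "lambda.min", type = "response"))
    }
    # truncate away from 0 and 1 to avoid extreme inverse-probability weights
    pmin(pmax(pi.x, 0.025), 0.975)
}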
prop.func can then be passed to fit.subgroup() as follows:
subgrp.model <- fit.subgroup(x = x, y = y,
                             trt = trt,
                             propensity.func = prop.func,
                             loss = "sq_loss_lasso",
                             nfolds = 10)    # option for cv.glmnet (for ITR estimation)
summary(subgrp.model)
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
## cutpoint: 0
## propensity
## function: propensity.func
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Outcomes:
## Recommended 0 Recommended 1
## Received 0 -10.5061 (n = 116) -17.5969 (n = 284)
## Received 1 -15.1773 (n = 371) -11.0091 (n = 229)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 4.6712 (n = 487)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 6.5878 (n = 513)
##
## NOTE: The above average outcomes are biased estimates of
## the expected outcomes conditional on subgroups.
## Use 'validate.subgroup()' to obtain unbiased estimates.
##
## ---------------------------------------------------
##
## Benefit score quantiles (f(X) for 1 vs 0):
## 0% 25% 50% 75% 100%
## -16.5151 -3.0487 0.2011 3.6675 14.8049
##
## ---------------------------------------------------
##
## Summary of individual treatment effects:
## E[Y|T=1, X] - E[Y|T=0, X]
##
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -33.0302 -6.0975 0.4022 0.7844 7.3349 29.6098
##
## ---------------------------------------------------
##
## 9 out of 50 interactions selected in total by the lasso (cross validation criterion).
##
## The first estimate is the treatment main effect, which is always selected.
## Any other variables selected represent treatment-covariate interactions.
##
## Trt1 V2 V3 V4 V11 V12 V21 V29
## Estimate 0.6624 0.6976 -0.3349 -0.003 -0.0212 -0.0435 -1.1087 -0.0102
## V36 V41
## Estimate 0.2934 0.7411
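As the summary indicates, the benefit score \(f(x)\) determines the treatment recommendation via the rule \(1 \cdot I(f(x) > c)\) with cutpoint \(c = 0\). A small example of recovering these quantities from the fitted model, assuming the predict() method for fitted subgroup models returns the benefit scores by default:
# benefit scores for the training data (assumes predict() returns f(x) by default)
bene.scores <- predict(subgrp.model, newx = x)
# treatment recommendations implied by the cutpoint c = 0
trt.recom <- 1 * (bene.scores > 0)
table(trt.recom)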
Efficiency in estimating ITRs can be improved by including an augmentation term. The optimal augmentation term is generally a function of the main effects of the full outcome regression model marginalized over the treatment. Especially in high dimensions, regularization bias can negatively impact performance, and cross-fitting is again a reasonable way to circumvent this issue. Augmentation functions can be constructed (with cross-fitting as an option) via create.augmentation.function(), which works similarly to create.propensity.function(). The basic usage is as follows:
aug.func <- create.augmentation.function(family = "gaussian",
                                         crossfit = TRUE,
                                         nfolds.crossfit = 10,
                                         cv.glmnet.args = list(type.measure = "mae", nfolds = 5))
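For intuition, the sketch below illustrates roughly what such an augmentation function does: fit a lasso-penalized outcome regression that includes the treatment and treatment-covariate interactions, then average the fitted values over the two possible treatment assignments to marginalize over treatment. This is a simplified, non-cross-fit illustration rather than the package's internal implementation; the name aug.func.manual and the 0/1 treatment coding are assumptions made here. A hand-written function along these lines could in principle be supplied as augment.func, though create.augmentation.function() additionally handles cross-fitting.
library(glmnet)
aug.func.manual <- function(x, y, trt) {
    # design matrix with treatment, main effects, and treatment-covariate interactions
    xm.obs <- cbind(trt, x, trt * x)
    fit    <- cv.glmnet(x = xm.obs, y = y, nfolds = 5)
    # predicted outcomes under a fixed treatment assignment t
    pred.under <- function(t) {
        xm.t <- cbind(t, x, t * x)
        drop(predict(fit, newx = xm.t, s = "lambda.min"))
    }
    # marginalize over treatment by averaging the two arms
    0.5 * (pred.under(1) + pred.under(0))
}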
aug.func can then be used for augmentation by passing it to fit.subgroup() as follows:
subgrp.model.aug <- fit.subgroup(x = x, y = y,
                                 trt = trt,
                                 propensity.func = prop.func,
                                 augment.func = aug.func,
                                 loss = "sq_loss_lasso",
                                 nfolds = 10)    # option for cv.glmnet (for ITR estimation)
summary(subgrp.model.aug)
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
## cutpoint: 0
## augmentation
## function: augment.func
## propensity
## function: propensity.func
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Outcomes:
## Recommended 0 Recommended 1
## Received 0 -6.4022 (n = 135) -19.8831 (n = 265)
## Received 1 -17.8593 (n = 202) -11.3133 (n = 398)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 11.4572 (n = 337)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 8.5698 (n = 663)
##
## NOTE: The above average outcomes are biased estimates of
## the expected outcomes conditional on subgroups.
## Use 'validate.subgroup()' to obtain unbiased estimates.
##
## ---------------------------------------------------
##
## Benefit score quantiles (f(X) for 1 vs 0):
## 0% 25% 50% 75% 100%
## -6.3155 -0.5672 0.9020 2.6326 9.1404
##
## ---------------------------------------------------
##
## Summary of individual treatment effects:
## E[Y|T=1, X] - E[Y|T=0, X]
##
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## -12.631 -1.134 1.804 1.886 5.265 18.281
##
## ---------------------------------------------------
##
## 1 out of 50 interactions selected in total by the lasso (cross validation criterion).
##
## The first estimate is the treatment main effect, which is always selected.
## Any other variables selected represent treatment-covariate interactions.
##
## Trt1 V2
## Estimate 0.9793 0.7474
To compare the two approaches, we first run the training/testing validation procedure via validate.subgroup() to assess the performance of the non-augmented estimator:
valmod <- validate.subgroup(subgrp.model, B = 5,
                            method = "training_test",
                            train.fraction = 0.75)
valmod
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
##
## validation method: training_test_replication
## cutpoint: 0
## replications: 5
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Test Set Outcomes:
## Recommended 0
## Received 0 -14.3377 (SE = 4.1142, n = 29.6)
## Received 1 -13.2299 (SE = 1.3556, n = 93.2)
## Recommended 1
## Received 0 -15.5863 (SE = 3.2318, n = 67.8)
## Received 1 -10.9153 (SE = 4.6092, n = 59.4)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## -1.1079 (SE = 4.6909, n = 122.8)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 4.671 (SE = 6.9031, n = 127.2)
##
## Est of
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
## 1.9651 (SE = 5.1967)
We then compare with the augmented estimator. Although the comparison is based on just 5 replications, the augmented estimator appears to discriminate much better between benefiting and non-benefiting patients, as evidenced by the larger treatment effects within the recommended subgroups below compared with the smaller conditional treatment effects for the non-augmented estimator above.
valmod.aug <- validate.subgroup(subgrp.model.aug, B = 5,
                                method = "training_test",
                                train.fraction = 0.75)
valmod.aug
## family: gaussian
## loss: sq_loss_lasso
## method: weighting
##
## validation method: training_test_replication
## cutpoint: 0
## replications: 5
##
## benefit score: f(x),
## Trt recom = 1*I(f(x)>c)+0*I(f(x)<=c) where c is 'cutpoint'
##
## Average Test Set Outcomes:
## Recommended 0
## Received 0 -10.2154 (SE = 4.8772, n = 39.8)
## Received 1 -16.7413 (SE = 2.5499, n = 54.6)
## Recommended 1
## Received 0 -18.1801 (SE = 2.9749, n = 59.8)
## Received 1 -11.1547 (SE = 1.9546, n = 95.8)
##
## Treatment effects conditional on subgroups:
## Est of E[Y|T=0,Recom=0]-E[Y|T=/=0,Recom=0]
## 6.5259 (SE = 6.4483, n = 94.4)
## Est of E[Y|T=1,Recom=1]-E[Y|T=/=1,Recom=1]
## 7.0254 (SE = 4.2785, n = 155.6)
##
## Est of
## E[Y|Trt received = Trt recom] - E[Y|Trt received =/= Trt recom]:
## 6.3519 (SE = 2.8961)
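The two validated models can also be compared graphically. A minimal example using the package's plotCompare() function (with its default plot type) is:
# side-by-side comparison of the validation results for the two models
plotCompare(valmod, valmod.aug)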