Package MKinfer includes a collection of functions for computing various confidence intervals (Altman et al. 2000; Hedderich and Sachs 2018), including bootstrapped versions (Davison and Hinkley 1997), as well as Hsu (Hedderich and Sachs 2018), permutation (Janssen 1997), bootstrap (Davison and Hinkley 1997), and multiple imputation (Barnard and Rubin 1999) t-tests.
We first load the package.
library(MKinfer)
There are several functions for computing confidence intervals. We can compute 12 different confidence intervals for binomial proportions (DasGupta, Cai, and Brown 2001); e.g.
## default: "wilson"
binomCI(x = 12, n = 50)
##
## wilson confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## prob 0.1429739 0.3741268
##
## sample estimate:
## prob
## 0.24
##
## additional information:
## standard error of prob
## 0.05896867
## Clopper-Pearson interval
binomCI(x = 12, n = 50, method = "clopper-pearson")
##
## clopper-pearson confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## prob 0.1306099 0.3816907
##
## sample estimate:
## prob
## 0.24
## identical to
binom.test(x = 12, n = 50)$conf.int
## [1] 0.1306099 0.3816907
## attr(,"conf.level")
## [1] 0.95
For the full list of implemented intervals, see the help page of function binomCI. One can also compute bootstrap intervals via function boot.ci of package boot (Davison and Hinkley 1997), as well as one-sided intervals.
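For instance, one-sided and bootstrap intervals can be requested directly; a minimal sketch (output omitted; the alternative and method arguments are assumed to work as described on the help page of binomCI):
## one-sided interval (upper confidence bound only)
binomCI(x = 12, n = 50, alternative = "less")
## bootstrap interval computed via boot.ci
binomCI(x = 12, n = 50, method = "boot")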
We can also compute several confidence intervals for the difference of two binomial proportions, for the independent (Newcombe 1998a) and the paired case (Newcombe 1998b); e.g.
## default: wilson
binomDiffCI(a = 5, b = 0, c = 51, d = 29)
## [1] "two.sided"
##
## wilson confidence interval (independent proportions)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## difference of independent proportions -0.03813715 0.19256
##
## sample estimate:
## difference of proportions
## 0.08928571
##
## additional information:
## proportion of group 1 proportion of group 2
## 0.08928571 0.00000000
## default: wilson with continuity correction
binomDiffCI(a = 212, b = 144, c = 256, d = 707, paired = TRUE)
## [1] "two.sided"
##
## wilson-cc confidence interval (paired data)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## difference of proportions (paired data) -0.1141915 -0.05543347
##
## sample estimate:
## difference of proportions
## -0.08491281
##
## additional information:
## proportion of group 1 proportion of group 2
## 0.2699014 0.3548143
For the full list of implemented intervals, see the help page of function binomDiffCI. One can also compute bootstrap intervals via function boot.ci of package boot (Davison and Hinkley 1997), as well as one-sided intervals.
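Analogously, a one-sided version of the paired example above can be sketched as (output omitted; the alternative argument is assumed to match the help page of binomDiffCI):
## one-sided interval for the paired difference of proportions
binomDiffCI(a = 212, b = 144, c = 256, d = 707, paired = TRUE,
            alternative = "less")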
We can compute confidence intervals for the mean and SD (Altman et al. 2000; Davison and Hinkley 1997).
x <- rnorm(50, mean = 2, sd = 3)
## mean and SD unknown
normCI(x)
##
## Exact confidence interval(s)
##
## 95 percent confidence intervals:
## 2.5 % 97.5 %
## mean 1.704110 3.325494
## sd 2.382851 3.554686
##
## sample estimates:
## mean sd
## 2.514802 2.852573
##
## additional information:
## SE of mean
## 0.4034147
meanCI(x)
##
## Exact confidence interval(s)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## mean 1.70411 3.325494
##
## sample estimates:
## mean sd
## 2.514802 2.852573
##
## additional information:
## SE of mean
## 0.4034147
sdCI(x)
##
## Exact confidence interval(s)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## sd 2.382851 3.554686
##
## sample estimates:
## mean sd
## 2.514802 2.852573
##
## additional information:
## SE of mean
## 0.4034147
## SD known
normCI(x, sd = 3)
##
## Exact confidence interval(s)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## mean 1.68326 3.346344
##
## sample estimate:
## mean
## 2.514802
##
## additional information:
## SE of mean
## 0.4242641
## mean known
normCI(x, mean = 2, alternative = "less")
##
## Exact confidence interval(s)
##
## 95 percent confidence interval:
## 0 % 95 %
## sd 0 3.428001
##
## sample estimate:
## sd
## 2.852573
## bootstrap
meanCI(x, boot = TRUE)
##
## Bootstrap confidence interval(s)
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level Normal Basic Studentized
## 95% ( 1.728, 3.297 ) ( 1.710, 3.280 ) ( 1.735, 3.365 )
##
## Level Percentile BCa
## 95% ( 1.749, 3.319 ) ( 1.781, 3.351 )
## Calculations and Intervals on Original Scale
##
## sample estimates:
## mean sd
## 2.514802 2.852573
We can compute confidence intervals for the difference of means (Altman et al. 2000; Hedderich and Sachs 2018; Davison and Hinkley 1997).
x <- rnorm(20)
y <- rnorm(20, sd = 2)
## paired
normDiffCI(x, y, paired = TRUE)
##
## Confidence interval (paired)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## mean of differences -0.7983274 1.204696
##
## sample estimates:
## mean of differences sd of differences
## 0.2031841 2.1399160
##
## additional information:
## SE of mean of differences
## 0.4784998
## compare
normCI(x-y)
##
## Exact confidence interval(s)
##
## 95 percent confidence intervals:
## 2.5 % 97.5 %
## mean -0.7983274 1.204696
## sd 1.6273856 3.125501
##
## sample estimates:
## mean sd
## 0.2031841 2.1399160
##
## additional information:
## SE of mean
## 0.4784998
## bootstrap
normDiffCI(x, y, paired = TRUE, boot = TRUE)
##
## Bootstrap confidence interval (paired)
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level Normal Basic Studentized
## 95% (-0.6959, 1.1197 ) (-0.7027, 1.1178 ) (-0.7671, 1.2333 )
##
## Level Percentile BCa
## 95% (-0.7114, 1.1091 ) (-0.6798, 1.1318 )
## Calculations and Intervals on Original Scale
##
## sample estimates:
## mean of differences sd of differences
## 0.2031841 2.1399160
## unpaired
y <- rnorm(10, mean = 1, sd = 2)
## classical
normDiffCI(x, y, method = "classical")
##
## Classical confidence interval (unpaired)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## difference in means -1.204658 0.6782862
##
## sample estimate:
## difference in means
## -0.2631862
##
## additional information:
## SE of difference in means Cohen's d (SMD)
## 0.4596119 -0.2217775
##
## mean of x SD of x mean of y SD of y
## 0.2664444 0.7538198 0.5296306 1.7837353
## Welch (default, as for function t.test)
normDiffCI(x, y, method = "welch")
##
## Welch confidence interval (unpaired)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## difference in means -1.564321 1.037948
##
## sample estimate:
## difference in means
## -0.2631862
##
## additional information:
## SE of difference in means Cohen's d (SMD)
## 0.5887133 -0.1359095
##
## mean of x SD of x mean of y SD of y
## 0.2664444 0.7538198 0.5296306 1.7837353
## Hsu
normDiffCI(x, y, method = "hsu")
##
## Hsu confidence interval (unpaired)
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## difference in means -1.594948 1.068576
##
## sample estimate:
## difference in means
## -0.2631862
##
## additional information:
## SE of difference in means Cohen's d (SMD)
## 0.5887133 -0.1359095
##
## mean of x SD of x mean of y SD of y
## 0.2664444 0.7538198 0.5296306 1.7837353
## bootstrap: assuming equal variances
normDiffCI(x, y, method = "classical", boot = TRUE, bootci.type = "bca")
##
## Bootstrap confidence interval (equal variances, unpaired)
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level BCa
## 95% (-1.2945, 0.9072 )
## Calculations and Intervals on Original Scale
##
## sample estimate:
## difference in means
## -0.2631862
##
## additional information:
## SE of difference in means Cohen's d (SMD)
## 0.4596119 -0.2217775
##
## mean of x SD of x mean of y SD of y
## 0.2664444 0.7538198 0.5296306 1.7837353
## bootstrap: assuming unequal variances
normDiffCI(x, y, method = "welch", boot = TRUE, bootci.type = "bca")
##
## Bootstrap confidence interval (unequal variances, unpaired)
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level BCa
## 95% (-1.2764, 0.9485 )
## Calculations and Intervals on Original Scale
##
## sample estimate:
## difference in means
## -0.2631862
##
## additional information:
## SE of difference in means Cohen's d (SMD)
## 0.5887133 -0.1359095
##
## mean of x SD of x mean of y SD of y
## 0.2664444 0.7538198 0.5296306 1.7837353
In the case of unequal variances and unequal sample sizes per group, the classical confidence interval may have poor coverage, i.e. the actual coverage probability may fall clearly below or above the nominal level, as indicated by the small Monte Carlo simulation study below.
M <- 100
CIhsu <- CIwelch <- CIclass <- matrix(NA, nrow = M, ncol = 2)
for(i in 1:M){
x <- rnorm(10)
y <- rnorm(30, sd = 0.1)
CIclass[i,] <- normDiffCI(x, y, method = "classical")$conf.int
CIwelch[i,] <- normDiffCI(x, y, method = "welch")$conf.int
CIhsu[i,] <- normDiffCI(x, y, method = "hsu")$conf.int
}
## coverage probabilities
## classical
sum(CIclass[,1] < 0 & 0 < CIclass[,2])/M
## [1] 0.78
## Welch
sum(CIwelch[,1] < 0 & 0 < CIwelch[,2])/M
## [1] 0.96
## Hsu
sum(CIhsu[,1] < 0 & 0 < CIhsu[,2])/M
## [1] 0.96
We provide 12 different confidence intervals for the (classical) coefficient of variation (Gulhar, Kibria, and Ahmed 2012; Davison and Hinkley 1997); e.g.
x <- rnorm(100, mean = 10, sd = 2) # CV = 0.2
## default: "miller"
cvCI(x)
##
## Miller (1991) confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## CV 0.2052464 0.2760637
##
## sample estimate:
## CV
## 0.240655
##
## additional information:
## standard error of CV
## 0.01806597
## Gulhar et al. (2012)
cvCI(x, method = "gulhar")
##
## Gulhar et al (2012) confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## CV 0.2112968 0.279563
##
## sample estimate:
## CV
## 0.240655
## bootstrap
cvCI(x, method = "boot")
##
## Bootstrap confidence interval
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level Normal Basic
## 95% ( 0.2064, 0.2784 ) ( 0.2050, 0.2771 )
##
## Level Percentile BCa
## 95% ( 0.2042, 0.2763 ) ( 0.2112, 0.2868 )
## Calculations and Intervals on Original Scale
##
## sample estimate:
## CV
## 0.240655
For the full list of implemented intervals, see the help page of function cvCI.
We next consider confidence intervals for quantiles (Hedderich and Sachs 2018; Davison and Hinkley 1997).
x <- rexp(100, rate = 0.5)
## exact
quantileCI(x = x, prob = 0.95)
##
## exact confidence interval
##
## 95.14464 percent confidence interval:
## lower upper
## 95 % quantile 4.650895 7.016497
##
## sample estimate:
## 95 % quantile
## 5.806478
## asymptotic
quantileCI(x = x, prob = 0.95, method = "asymptotic")
##
## asymptotic confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## 95 % quantile 4.650895 10.50095
##
## sample estimate:
## 95 % quantile
## 5.806478
## boot
quantileCI(x = x, prob = 0.95, method = "boot")
##
## bootstrap confidence interval
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level Normal Basic
## 95% ( 5.049, 7.288 ) ( 5.540, 7.567 )
##
## Level Percentile BCa
## 95% ( 4.046, 6.073 ) ( 4.653, 7.016 )
## Calculations and Intervals on Original Scale
##
## sample estimate:
## 95 % quantile
## 5.806478
Next, we consider the median.
## exact
medianCI(x = x)
##
## exact confidence interval
##
## 95.23684 percent confidence intervals:
## lower upper
## median 0.9814398 1.808118
## median 1.0749343 2.059112
##
## sample estimate:
## median
## 1.267268
## asymptotic
medianCI(x = x, method = "asymptotic")
##
## asymptotic confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## median 1.069589 1.812961
##
## sample estimate:
## median
## 1.267268
## boot
medianCI(x = x, method = "boot")
##
## bootstrap confidence interval
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level Normal Basic
## 95% ( 0.756, 1.627 ) ( 0.691, 1.462 )
##
## Level Percentile BCa
## 95% ( 1.072, 1.843 ) ( 1.070, 1.811 )
## Calculations and Intervals on Original Scale
##
## sample estimate:
## median
## 1.267268
Quantile confidence intervals are often not unique; in such cases the minimum-length interval may be of interest.
medianCI(x = x, minLength = TRUE)
##
## minimum length exact confidence interval
##
## 95.23684 percent confidence interval:
## lower upper
## median 0.9814398 1.808118
##
## sample estimate:
## median
## 1.267268
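The same option is available for other quantiles; a sketch (output omitted, assuming quantileCI accepts minLength as described on its help page):
## minimum length exact interval for the 95 % quantile
quantileCI(x = x, prob = 0.95, minLength = TRUE)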
Finally, we take a look at the MAD (median absolute deviation), where by default the standardized MAD is used (see function mad).
## exact
madCI(x = x)
##
## exact confidence interval
##
## 95.23684 percent confidence intervals:
## lower upper
## MAD 0.906061 1.507945
## MAD 1.165173 1.620682
##
## sample estimate:
## MAD
## 1.263843
## asymptotic
madCI(x = x, method = "asymptotic")
##
## asymptotic confidence interval
##
## 95 percent confidence interval:
## 2.5 % 97.5 %
## MAD 1.131868 1.556916
##
## sample estimate:
## MAD
## 1.263843
## boot
madCI(x = x, method = "boot")
##
## bootstrap confidence interval
##
## BOOTSTRAP CONFIDENCE INTERVAL CALCULATIONS
## Based on 9999 bootstrap replicates
##
## CALL :
## boot.ci(boot.out = boot.out, conf = 1 - alpha, type = bootci.type)
##
## Intervals :
## Level Normal Basic
## 95% ( 1.007, 1.445 ) ( 0.971, 1.366 )
##
## Level Percentile BCa
## 95% ( 1.162, 1.557 ) ( 1.147, 1.532 )
## Calculations and Intervals on Original Scale
##
## sample estimate:
## MAD
## 1.263843
## unstandardized
madCI(x = x, constant = 1)
##
## exact confidence interval
##
## 95.23684 percent confidence intervals:
## lower upper
## MAD 0.6111297 1.017095
## MAD 0.7858981 1.093135
##
## sample estimate:
## MAD
## 0.8524502
The Hsu two-sample t-test is an alternative to the Welch two-sample t-test that uses a different formula for the degrees of freedom of the respective t-distribution (Hedderich and Sachs 2018). The following code is taken and adapted from the help page of the t.test function.
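Concretely, Hsu's proposal is to use min(n1, n2) - 1 degrees of freedom in place of the Welch-Satterthwaite approximation; a small sketch with hypothetical helper functions (not the package internals) reproduces the df values seen below:
## Hsu: df = min(n1, n2) - 1
df.hsu <- function(x, y) min(length(x), length(y)) - 1
## Welch: Welch-Satterthwaite approximation
df.welch <- function(x, y){
  v1 <- var(x)/length(x); v2 <- var(y)/length(y)
  (v1 + v2)^2/(v1^2/(length(x) - 1) + v2^2/(length(y) - 1))
}
df.hsu(1:10, 7:20)   # 9, as in the hsu.t.test output below
df.welch(1:10, 7:20) # 21.982, as in the t.test output below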
t.test(1:10, y = c(7:20)) # P = .00001855
##
## Welch Two Sample t-test
##
## data: 1:10 and c(7:20)
## t = -5.4349, df = 21.982, p-value = 1.855e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -11.052802 -4.947198
## sample estimates:
## mean of x mean of y
## 5.5 13.5
t.test(1:10, y = c(7:20, 200)) # P = .1245 -- NOT significant anymore
##
## Welch Two Sample t-test
##
## data: 1:10 and c(7:20, 200)
## t = -1.6329, df = 14.165, p-value = 0.1245
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -47.242900 6.376233
## sample estimates:
## mean of x mean of y
## 5.50000 25.93333
hsu.t.test(1:10, y = c(7:20))
##
## Hsu Two Sample t-test
##
## data: 1:10 and c(7:20)
## t = -5.4349, df = 9, p-value = 0.0004137
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -11.329805 -4.670195
## sample estimates:
## mean of x mean of y SD of x SD of y
## 5.50000 13.50000 3.02765 4.18330
hsu.t.test(1:10, y = c(7:20, 200))
##
## Hsu Two Sample t-test
##
## data: 1:10 and c(7:20, 200)
## t = -1.6329, df = 9, p-value = 0.1369
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -48.740846 7.874179
## sample estimates:
## mean of x mean of y SD of x SD of y
## 5.50000 25.93333 3.02765 48.32253
## Traditional interface
with(sleep, t.test(extra[group == 1], extra[group == 2]))
##
## Welch Two Sample t-test
##
## data: extra[group == 1] and extra[group == 2]
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean of x mean of y
## 0.75 2.33
with(sleep, hsu.t.test(extra[group == 1], extra[group == 2]))
##
## Hsu Two Sample t-test
##
## data: extra[group == 1] and extra[group == 2]
## t = -1.8608, df = 9, p-value = 0.09569
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.5007773 0.3407773
## sample estimates:
## mean of x mean of y SD of x SD of y
## 0.750000 2.330000 1.789010 2.002249
## Formula interface
t.test(extra ~ group, data = sleep)
##
## Welch Two Sample t-test
##
## data: extra by group
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean in group 1 mean in group 2
## 0.75 2.33
hsu.t.test(extra ~ group, data = sleep)
##
## Hsu Two Sample t-test
##
## data: extra by group
## t = -1.8608, df = 9, p-value = 0.09569
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.5007773 0.3407773
## sample estimates:
## mean of x mean of y SD of x SD of y
## 0.750000 2.330000 1.789010 2.002249
Function boot.t.test computes one- and two-sample bootstrap t-tests, with equal or unequal variances in the two-sample case (Efron and Tibshirani 1993).
boot.t.test(1:10, y = c(7:20)) # without bootstrap: P = .00001855
##
## Bootstrapped Welch Two Sample t-test
##
## data: 1:10 and c(7:20)
## bootstrapped p-value = 2e-04
## 95 percent bootstrap percentile confidence interval:
## -10.771429 -5.242143
##
## Results without bootstrap:
## t = -5.4349, df = 21.982, p-value = 1.855e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -11.052802 -4.947198
## sample estimates:
## mean of x mean of y
## 5.5 13.5
boot.t.test(1:10, y = c(7:20, 200)) # without bootstrap: P = .1245
##
## Bootstrapped Welch Two Sample t-test
##
## data: 1:10 and c(7:20, 200)
## bootstrapped p-value = 0.0214
## 95 percent bootstrap percentile confidence interval:
## -46.3 -6.0
##
## Results without bootstrap:
## t = -1.6329, df = 14.165, p-value = 0.1245
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -47.242900 6.376233
## sample estimates:
## mean of x mean of y
## 5.50000 25.93333
## Traditional interface
with(sleep, boot.t.test(extra[group == 1], extra[group == 2]))
##
## Bootstrapped Welch Two Sample t-test
##
## data: extra[group == 1] and extra[group == 2]
## bootstrapped p-value = 0.08021
## 95 percent bootstrap percentile confidence interval:
## -3.15 0.00
##
## Results without bootstrap:
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean of x mean of y
## 0.75 2.33
## Formula interface
boot.t.test(extra ~ group, data = sleep)
##
## Bootstrapped Welch Two Sample t-test
##
## data: extra by group
## bootstrapped p-value = 0.08481
## 95 percent bootstrap percentile confidence interval:
## -3.16 -0.02
##
## Results without bootstrap:
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean in group 1 mean in group 2
## 0.75 2.33
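The examples above are all two-sample; a one-sample bootstrap t-test works analogously (a minimal sketch, output omitted, assuming boot.t.test mirrors the t.test interface including the mu argument):
## one-sample bootstrap t-test of H0: mu = 5
boot.t.test(1:10, mu = 5)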
Function perm.t.test computes one- and two-sample permutation t-tests, with equal (Efron and Tibshirani 1993) or unequal variances (Janssen 1997) in the two-sample case.
perm.t.test(1:10, y = c(7:20)) # without permutation: P = .00001855
##
## Permutation Welch Two Sample t-test
##
## data: 1:10 and c(7:20)
## (Monte-Carlo) permutation p-value < 2.2e-16
## 95 percent (Monte-Carlo) permutation percentile confidence interval:
## -12.400000 -3.657143
##
## Results without permutation:
## t = -5.4349, df = 21.982, p-value = 1.855e-05
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -11.052802 -4.947198
## sample estimates:
## mean of x mean of y
## 5.5 13.5
## permutation confidence interval sensitive to outlier!
perm.t.test(1:10, y = c(7:20, 200)) # without permutation: P = .1245
##
## Permutation Welch Two Sample t-test
##
## data: 1:10 and c(7:20, 200)
## (Monte-Carlo) permutation p-value < 2.2e-16
## 95 percent (Monte-Carlo) permutation percentile confidence interval:
## -36.866667 1.966667
##
## Results without permutation:
## t = -1.6329, df = 14.165, p-value = 0.1245
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -47.242900 6.376233
## sample estimates:
## mean of x mean of y
## 5.50000 25.93333
## Traditional interface
with(sleep, perm.t.test(extra[group == 1], extra[group == 2]))
##
## Permutation Welch Two Sample t-test
##
## data: extra[group == 1] and extra[group == 2]
## (Monte-Carlo) permutation p-value = 0.07581
## 95 percent (Monte-Carlo) permutation percentile confidence interval:
## -3.32 0.16
##
## Results without permutation:
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean of x mean of y
## 0.75 2.33
## Formula interface
perm.t.test(extra ~ group, data = sleep)
##
## Permutation Welch Two Sample t-test
##
## data: extra by group
## (Monte-Carlo) permutation p-value = 0.08121
## 95 percent (Monte-Carlo) permutation percentile confidence interval:
## -3.38 0.18
##
## Results without permutation:
## t = -1.8608, df = 17.776, p-value = 0.07939
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -3.3654832 0.2054832
## sample estimates:
## mean in group 1 mean in group 2
## 0.75 2.33
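Since the sleep data actually contain paired measurements on the same subjects, a paired permutation test may be preferable; a sketch (output omitted, assuming perm.t.test supports paired analogously to t.test):
## paired permutation t-test
with(sleep, perm.t.test(extra[group == 1], extra[group == 2], paired = TRUE))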
Function mi.t.test can be used to compute a multiple imputation t-test, applying the approach of Rubin (1987) in combination with the adjustment of Barnard and Rubin (1999).
## Generate some data
set.seed(123)
x <- rnorm(25, mean = 1)
x[sample(1:25, 5)] <- NA
y <- rnorm(20, mean = -1)
y[sample(1:20, 4)] <- NA
pair <- c(rnorm(25, mean = 1), rnorm(20, mean = -1))
g <- factor(c(rep("yes", 25), rep("no", 20)))
D <- data.frame(ID = 1:45, variable = c(x, y), pair = pair, group = g)
## Use Amelia to impute missing values
library(Amelia)
## Loading required package: Rcpp
## ##
## ## Amelia II: Multiple Imputation
## ## (Version 1.7.6, built: 2019-11-24)
## ## Copyright (C) 2005-2020 James Honaker, Gary King and Matthew Blackwell
## ## Refer to http://gking.harvard.edu/amelia/ for more information
## ##
res <- amelia(D, m = 10, p2s = 0, idvars = "ID", noms = "group")
## Per protocol analysis (Welch two-sample t-test)
t.test(variable ~ group, data = D)
##
## Welch Two Sample t-test
##
## data: variable by group
## t = -6.9214, df = 33.69, p-value = 5.903e-08
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -2.628749 -1.435125
## sample estimates:
## mean in group no mean in group yes
## -1.0862469 0.9456901
## Intention to treat analysis (Multiple Imputation Welch two-sample t-test)
mi.t.test(res$imputations, x = "variable", y = "group")
##
## Multiple Imputation Welch Two Sample t-test
##
## data: Variable variable: group no vs group yes
## t = -6.6457, df = 25.142, p-value = 5.63e-07
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -2.668011 -1.405862
## sample estimates:
## mean (no) SD (no) mean (yes) SD (yes)
## -1.0809584 0.9379070 0.9559786 1.0203219
## Per protocol analysis (Two-sample t-test)
t.test(variable ~ group, data = D, var.equal = TRUE)
##
## Two Sample t-test
##
## data: variable by group
## t = -6.8168, df = 34, p-value = 7.643e-08
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -2.637703 -1.426171
## sample estimates:
## mean in group no mean in group yes
## -1.0862469 0.9456901
## Intention to treat analysis (Multiple Imputation two-sample t-test)
mi.t.test(res$imputations, x = "variable", y = "group", var.equal = TRUE)
##
## Multiple Imputation Two Sample t-test
##
## data: Variable variable: group no vs group yes
## t = -6.5736, df = 26.042, p-value = 5.656e-07
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -2.673828 -1.400046
## sample estimates:
## mean (no) SD (no) mean (yes) SD (yes)
## -1.0809584 0.9379070 0.9559786 1.0203219
## Specifying alternatives
mi.t.test(res$imputations, x = "variable", y = "group", alternative = "less")
##
## Multiple Imputation Welch Two Sample t-test
##
## data: Variable variable: group no vs group yes
## t = -6.6457, df = 25.142, p-value = 2.815e-07
## alternative hypothesis: true difference in means is less than 0
## 95 percent confidence interval:
## -Inf -1.5135
## sample estimates:
## mean (no) SD (no) mean (yes) SD (yes)
## -1.0809584 0.9379070 0.9559786 1.0203219
mi.t.test(res$imputations, x = "variable", y = "group", alternative = "greater")
##
## Multiple Imputation Welch Two Sample t-test
##
## data: Variable variable: group no vs group yes
## t = -6.6457, df = 25.142, p-value = 1
## alternative hypothesis: true difference in means is greater than 0
## 95 percent confidence interval:
## -2.560374 Inf
## sample estimates:
## mean (no) SD (no) mean (yes) SD (yes)
## -1.0809584 0.9379070 0.9559786 1.0203219
## One sample test
t.test(D$variable[D$group == "yes"])
##
## One Sample t-test
##
## data: D$variable[D$group == "yes"]
## t = 4.5054, df = 19, p-value = 0.0002422
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.5063618 1.3850184
## sample estimates:
## mean of x
## 0.9456901
mi.t.test(res$imputations, x = "variable", subset = D$group == "yes")
##
## Multiple Imputation One Sample t-test
##
## data: Variable variable
## t = 4.6847, df = 18.494, p-value = 0.0001725
## alternative hypothesis: true mean is not equal to 0
## 95 percent confidence interval:
## 0.5280752 1.3838820
## sample estimates:
## mean SD
## 0.9559786 1.0203219
mi.t.test(res$imputations, x = "variable", mu = -1, subset = D$group == "yes",
alternative = "less")
##
## Multiple Imputation One Sample t-test
##
## data: Variable variable
## t = 9.5851, df = 18.494, p-value = 1
## alternative hypothesis: true mean is less than -1
## 95 percent confidence interval:
## -Inf 1.309328
## sample estimates:
## mean SD
## 0.9559786 1.0203219
mi.t.test(res$imputations, x = "variable", mu = -1, subset = D$group == "yes",
alternative = "greater")
##
## Multiple Imputation One Sample t-test
##
## data: Variable variable
## t = 9.5851, df = 18.494, p-value = 6.655e-09
## alternative hypothesis: true mean is greater than -1
## 95 percent confidence interval:
## 0.6026297 Inf
## sample estimates:
## mean SD
## 0.9559786 1.0203219
## paired test
t.test(D$variable, D$pair, paired = TRUE)
##
## Paired t-test
##
## data: D$variable and D$pair
## t = -1.3532, df = 35, p-value = 0.1847
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.6976921 0.1395993
## sample estimates:
## mean of the differences
## -0.2790464
mi.t.test(res$imputations, x = "variable", y = "pair", paired = TRUE)
##
## Multiple Imputation Paired t-test
##
## data: Variables variable and pair
## t = -1.0455, df = 39.974, p-value = 0.3021
## alternative hypothesis: true difference in means is not equal to 0
## 95 percent confidence interval:
## -0.5680741 0.1807237
## sample estimates:
## mean of difference SD of difference
## -0.1936752 1.2426515
We provide a simple wrapper function for computing a classical repeated-measures one-way ANOVA as well as the respective mixed-effects model. In addition, the non-parametric Friedman and Quade tests can be computed.
set.seed(123)
outcome <- c(rnorm(10), rnorm(10, mean = 1.5), rnorm(10, mean = 1))
timepoints <- factor(rep(1:3, each = 10))
patients <- factor(rep(1:10, times = 3))
rm.oneway.test(outcome, timepoints, patients)
##
## Repeated measures 1-way ANOVA
##
## data: outcome , timepoints and patients
## F = 6.5898, num df = 2, denom df = 18, p-value = 0.007122
rm.oneway.test(outcome, timepoints, patients, method = "lme")
##
## Mixed-effects 1-way ANOVA
##
## data: outcome , timepoints and patients
## F = 7.3674, num df = 2, denom df = 18, p-value = 0.004596
rm.oneway.test(outcome, timepoints, patients, method = "friedman")
##
## Friedman rank sum test
##
## data: x, g and id
## Friedman chi-squared = 12.8, df = 2, p-value = 0.001662
rm.oneway.test(outcome, timepoints, patients, method = "quade")
##
## Quade test
##
## data: x, g and id
## Quade F = 7.2906, num df = 2, denom df = 18, p-value = 0.004795
Volcano plots can be used to visualize the results when many tests have been applied. They show a measure of effect size in combination with (adjusted) p-values.
## Generate some data
x <- matrix(rnorm(1000, mean = 10), nrow = 10)
g1 <- rep("control", 10)
y1 <- matrix(rnorm(500, mean = 11.75), nrow = 10)
y2 <- matrix(rnorm(500, mean = 9.75, sd = 3), nrow = 10)
g2 <- rep("treatment", 10)
group <- factor(c(g1, g2))
Data <- rbind(x, cbind(y1, y2))
## compute Hsu t-test
pvals <- apply(Data, 2, function(x, group) hsu.t.test(x ~ group)$p.value,
group = group)
## compute log-fold change
logfc <- function(x, group){
res <- tapply(x, group, mean)
log2(res[1]/res[2])
}
lfcs <- apply(Data, 2, logfc, group = group)
volcano(lfcs, p.adjust(pvals, method = "fdr"),
effect.low = -0.25, effect.high = 0.25,
xlab = "log-fold change", ylab = "-log10(adj. p value)")
The function imputeSD can be used to impute standard deviations for changes from baseline, adopting the approach of the Cochrane Handbook (Higgins et al. 2019, Section 6.5.2.8).
SD1 <- c(0.149, 0.022, 0.036, 0.085, 0.125, NA, 0.139, 0.124, 0.038)
SD2 <- c(NA, 0.039, 0.038, 0.087, 0.125, NA, 0.135, 0.126, 0.038)
SDchange <- c(NA, NA, NA, 0.026, 0.058, NA, NA, NA, NA)
imputeSD(SD1, SD2, SDchange)
## SD1 SD2 SDchange Cor SDchange.min SDchange.mean SDchange.max
## 1 0.149 0.149 NA NA 0.04491608 0.05829769 0.06913600
## 2 0.022 0.039 NA NA 0.01915642 0.02050235 0.02176520
## 3 0.036 0.038 NA NA 0.01132754 0.01460887 0.01727787
## 4 0.085 0.087 0.026 0.9545639 0.02600000 0.02600000 0.02600000
## 5 0.125 0.125 0.058 0.8923520 0.05800000 0.05800000 0.05800000
## 6 NA NA NA NA NA NA NA
## 7 0.139 0.135 NA NA 0.04148755 0.05374591 0.06368696
## 8 0.124 0.126 NA NA 0.03773311 0.04894677 0.05803262
## 9 0.038 0.038 NA NA 0.01145511 0.01486787 0.01763200
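The imputed values follow the Cochrane Handbook formula SDchange = sqrt(SD1^2 + SD2^2 - 2 * Cor * SD1 * SD2), where Cor is taken as the minimum, mean, and maximum of the correlations observed in the complete rows (here rows 4 and 5). A quick check for row 1:
## correlations from rows 4 and 5 (Cor column above); larger correlations
## yield smaller imputed SDchange values
Cor <- c(0.8923520, mean(c(0.8923520, 0.9545639)), 0.9545639)
sqrt(0.149^2 + 0.149^2 - 2*Cor*0.149*0.149)
## matches SDchange.max, SDchange.mean, SDchange.min of row 1 above
## (up to rounding of the printed Cor values)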
Function pairwise.fun enables applying an arbitrary function to all pairwise comparisons.
pairwise.wilcox.test(airquality$Ozone, airquality$Month,
p.adjust.method = "none")
## Warning in wilcox.test.default(xi, xj, paired = paired, ...): cannot compute
## exact p-value with ties
## (this warning is repeated for each of the 10 pairwise comparisons)
##
## Pairwise comparisons using Wilcoxon rank sum test with continuity correction
##
## data: airquality$Ozone and airquality$Month
##
## 5 6 7 8
## 6 0.19250 - - -
## 7 3e-05 0.01414 - -
## 8 0.00012 0.02591 0.86195 -
## 9 0.11859 0.95887 0.00074 0.00325
##
## P value adjustment method: none
## To avoid the warnings
library(exactRankTests)
## Package 'exactRankTests' is no longer under development.
## Please consider using package 'coin' instead.
pairwise.fun(airquality$Ozone, airquality$Month,
fun = function(x, y) wilcox.exact(x, y)$p.value)
## 5 vs 6 5 vs 7 5 vs 8 5 vs 9 6 vs 7 6 vs 8
## 1.897186e-01 1.087205e-05 6.108735e-05 1.171796e-01 1.183753e-02 2.333564e-02
## 6 vs 9 7 vs 8 7 vs 9 8 vs 9
## 9.528659e-01 8.595683e-01 5.281796e-04 2.714694e-03
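The resulting vector of p-values can then be adjusted for multiple testing in the usual way; e.g.
## Holm adjustment of the exact pairwise p-values
p.adjust(pairwise.fun(airquality$Ozone, airquality$Month,
                      fun = function(x, y) wilcox.exact(x, y)$p.value),
         method = "holm")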
sessionInfo()
## R version 4.0.0 Patched (2020-04-24 r78288)
## Platform: x86_64-pc-linux-gnu (64-bit)
## Running under: Linux Mint 19.3
##
## Matrix products: default
## BLAS: /usr/lib/x86_64-linux-gnu/libf77blas.so.3.10.3
## LAPACK: /home/kohlm/RTOP/Rbranch/lib/libRlapack.so
##
## locale:
## [1] LC_CTYPE=de_DE.UTF-8 LC_NUMERIC=C
## [3] LC_TIME=de_DE.UTF-8 LC_COLLATE=C
## [5] LC_MONETARY=de_DE.UTF-8 LC_MESSAGES=de_DE.UTF-8
## [7] LC_PAPER=de_DE.UTF-8 LC_NAME=C
## [9] LC_ADDRESS=C LC_TELEPHONE=C
## [11] LC_MEASUREMENT=de_DE.UTF-8 LC_IDENTIFICATION=C
##
## attached base packages:
## [1] stats graphics grDevices utils datasets methods base
##
## other attached packages:
## [1] exactRankTests_0.8-31 Amelia_1.7.6 Rcpp_1.0.4.6
## [4] MKinfer_0.5
##
## loaded via a namespace (and not attached):
## [1] gmp_0.5-13.6 pillar_1.4.3 compiler_4.0.0 arrangements_1.1.8
## [5] tools_4.0.0 boot_1.3-24 digest_0.6.25 evaluate_0.14
## [9] lifecycle_0.2.0 tibble_3.0.1 gtable_0.3.0 nlme_3.1-147
## [13] lattice_0.20-41 pkgconfig_2.0.3 rlang_0.4.5 yaml_2.2.1
## [17] xfun_0.13 MKdescr_0.6 stringr_1.4.0 knitr_1.28
## [21] vctrs_0.2.4 grid_4.0.0 glue_1.4.0 R6_2.4.1
## [25] foreign_0.8-78 rmarkdown_2.1 ggplot2_3.3.0 farver_2.0.3
## [29] magrittr_1.5 scales_1.1.0 ellipsis_0.3.0 htmltools_0.4.0
## [33] colorspace_1.4-1 labeling_0.3 stringi_1.4.6 munsell_0.5.0
## [37] crayon_1.3.4
Altman, Douglas G., David Machin, Trevor N. Bryant, and Martin J. Gardner, eds. 2000. Statistics with Confidence: Confidence Intervals and Statistical Guidelines. BMJ Books.
Barnard, J., and D. B. Rubin. 1999. “Miscellanea. Small-Sample Degrees of Freedom with Multiple Imputation.” Biometrika 86 (4). Oxford University Press (OUP): 948–55. doi:10.1093/biomet/86.4.948.
DasGupta, Anirban, T. Tony Cai, and Lawrence D. Brown. 2001. “Interval Estimation for a Binomial Proportion.” Statistical Science 16 (2). Institute of Mathematical Statistics: 101–33. doi:10.1214/ss/1009213286.
Davison, A. C., and D. V. Hinkley. 1997. Bootstrap Methods and Their Applications. Cambridge: Cambridge University Press. http://statwww.epfl.ch/davison/BMA/.
Efron, Bradley, and Robert J Tibshirani. 1993. An Introduction to the Bootstrap (Chapman & Hall/Crc Monographs on Statistics and Applied Probability). Chapman; Hall/CRC.
Gulhar, Monika, B M Golam Kibria, and Nasar Ahmed. 2012. “A Comparison of Some Confidence Intervals for Estimating the Population Coefficient of Variation: A Simulation Study.” SORT 36 (January): 45–68.
Hedderich, Jürgen, and Lothar Sachs. 2018. Angewandte Statistik: Methodensammlung Mit R. Springer Berlin Heidelberg. doi:10.1007/978-3-662-56657-2.
Higgins, Julian P.T., James Thomas, Jacqueline Chandler, Miranda Cumpston, Tianjing Li, Matthew J. Page, and Vivian A. Welch, eds. 2019. Cochrane Handbook for Systematic Reviews of Interventions. Wiley. doi:10.1002/9781119536604.
Janssen, Arnold. 1997. “Studentized Permutation Tests for Non-I.i.d. Hypotheses and the Generalized Behrens-Fisher Problem.” Statistics & Probability Letters 36 (1). Elsevier BV: 9–21. doi:10.1016/s0167-7152(97)00043-6.
Newcombe, R. G. 1998a. “Interval Estimation for the Difference Between Independent Proportions: Comparison of Eleven Methods.” Statistics in Medicine 17 (8). Wiley: 873–90. doi:10.1002/(sici)1097-0258(19980430)17:8<873::aid-sim779>3.0.co;2-i.
———. 1998b. “Improved Confidence Intervals for the Difference Between Binomial Proportions Based on Paired Data.” Statistics in Medicine 17 (22): 2635–50.
Rubin, Donald B. 1987. Multiple Imputation for Nonresponse in Surveys (Wiley Series in Probability and Statistics). Wiley.