Introduction

The MXM R package, short for the Latin 'Mens ex Machina' (Mind from the Machine), is a collection of utility functions for feature selection, cross-validation and Bayesian networks. It is well known in the literature that learning the structure of a Bayesian network is a very hard problem because of its computational complexity. MXM offers many Bayesian network construction algorithms, taking advantage of different heuristics that are described in the published algorithms.

In this tutorial we will learn how to construct the skeleton of a Bayesian network. For this task, we are going to use three different algorithms.

  1. Max-Min Hill-Climbing (MMHC): This algorithm combines ideas from local learning, constraint-based, and search-and-score techniques in a principled and effective way. It first reconstructs the skeleton of a Bayesian network and then performs a Bayesian-scoring greedy hill-climbing search to orient the edges (Tsamardinos et al., 2006).

  2. PC: This structure learning/causal discovery algorithm was developed at Carnegie Mellon University by Peter Spirtes and Clark Glymour, and is implemented in the MXM package as proposed by Spirtes et al. (2001).

  3. Partial Correlation based on Forward Selection: In this approach, the network is constructed using partial-correlation-based forward regression.

For simplicity, in this tutorial we will use a dataset referred to as “The Wine Dataset”.

Loading Data

The Wine Dataset contains the results of a chemical analysis of wines grown in a specific area of Italy. Three types of wine are represented in the 178 samples, with the results of 13 chemical analyses recorded for each sample. Note that the “Type” variable was transformed into a categorical variable.

So, first of all, for this tutorial analysis we load the 'MXM' library, together with the 'dplyr' library for easier handling of the dataset; note that the latter is not necessary for the analysis.

### ~ ~ ~ Load Packages ~ ~ ~ ###
library(MXM) 
library(dplyr)

As a next step, we download and read the dataset, and also define the column names.

### ~ ~ ~ Load The Dataset ~ ~ ~ ###
wine.url <- "ftp://ftp.ics.uci.edu/pub/machine-learning-databases/wine/wine.data"
wine <- read.csv(wine.url,
                 check.names = FALSE,
                 header = FALSE) 
wine[, 1] <- as.factor(wine[, 1])
head(wine)
##   V1    V2   V3   V4   V5  V6   V7   V8   V9  V10  V11  V12  V13  V14
## 1  1 14.23 1.71 2.43 15.6 127 2.80 3.06 0.28 2.29 5.64 1.04 3.92 1065
## 2  1 13.20 1.78 2.14 11.2 100 2.65 2.76 0.26 1.28 4.38 1.05 3.40 1050
## 3  1 13.16 2.36 2.67 18.6 101 2.80 3.24 0.30 2.81 5.68 1.03 3.17 1185
## 4  1 14.37 1.95 2.50 16.8 113 3.85 3.49 0.24 2.18 7.80 0.86 3.45 1480
## 5  1 13.24 2.59 2.87 21.0 118 2.80 2.69 0.39 1.82 4.32 1.04 2.93  735
## 6  1 14.20 1.76 2.45 15.2 112 3.27 3.39 0.34 1.97 6.75 1.05 2.85 1450
str(wine)
## 'data.frame':    178 obs. of  14 variables:
##  $ V1 : Factor w/ 3 levels "1","2","3": 1 1 1 1 1 1 1 1 1 1 ...
##  $ V2 : num  14.2 13.2 13.2 14.4 13.2 ...
##  $ V3 : num  1.71 1.78 2.36 1.95 2.59 1.76 1.87 2.15 1.64 1.35 ...
##  $ V4 : num  2.43 2.14 2.67 2.5 2.87 2.45 2.45 2.61 2.17 2.27 ...
##  $ V5 : num  15.6 11.2 18.6 16.8 21 15.2 14.6 17.6 14 16 ...
##  $ V6 : int  127 100 101 113 118 112 96 121 97 98 ...
##  $ V7 : num  2.8 2.65 2.8 3.85 2.8 3.27 2.5 2.6 2.8 2.98 ...
##  $ V8 : num  3.06 2.76 3.24 3.49 2.69 3.39 2.52 2.51 2.98 3.15 ...
##  $ V9 : num  0.28 0.26 0.3 0.24 0.39 0.34 0.3 0.31 0.29 0.22 ...
##  $ V10: num  2.29 1.28 2.81 2.18 1.82 1.97 1.98 1.25 1.98 1.85 ...
##  $ V11: num  5.64 4.38 5.68 7.8 4.32 6.75 5.25 5.05 5.2 7.22 ...
##  $ V12: num  1.04 1.05 1.03 0.86 1.04 1.05 1.02 1.06 1.08 1.01 ...
##  $ V13: num  3.92 3.4 3.17 3.45 2.93 2.85 3.58 3.58 2.85 3.55 ...
##  $ V14: int  1065 1050 1185 1480 735 1450 1290 1295 1045 1045 ...
colnames(wine) <- c('Type', 'Alcohol', 'Malic', 'Ash', 
                    'Alcalinity', 'Magnesium', 'Phenols', 
                    'Flavanoids', 'Nonflavanoids',
                    'Proanthocyanins', 'Color', 'Hue', 
                    'Dilution', 'Proline')
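
As a quick check that the renaming worked and that Type is indeed a factor, we can tabulate the three wine types; the counts below are the well-known class sizes of this dataset.

### ~ ~ ~ Check The Renaming ~ ~ ~ ###
table(wine$Type)
## 
##  1  2  3 
## 59 71 48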

MMHC

We are going to use MMHC on the above dataset in order to construct the skeleton of the network. MMHC performs conditional independence tests on each variable: first, MMPC is run on every variable, with the backward phase (see Tsamardinos et al., 2006) taking place automatically. After all variables have been processed, the matrix is checked for inconsistencies, which are then corrected.

A trick mentioned in that paper to make the procedure faster is the following. When processing the k-th variable, the algorithm checks which of the previously scanned variables have an edge with the k-th variable and keeps only them, together with the not-yet-scanned variables (previously scanned variables without an edge are discarded).

This trick reduces the runtime, but can lead to different results. For example, if the i-th variable has been discarded, the algorithm might fail to remove the edge between the k-th and j-th variables, because the i-th variable that d-separates them is no longer available.
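
To make the idea concrete, here is a hypothetical sketch of the candidate-pruning step in R (for illustration only; this is not the actual MXM implementation, and the function and argument names are made up):

### ~ ~ ~ Hypothetical sketch of the MMHC speed-up trick ~ ~ ~ ###
# For the k-th variable: keep the already scanned variables found to share
# an edge with it, plus every variable not yet scanned; scanned variables
# with no edge are discarded from the candidate set.
candidates.for <- function(k, n.vars, edge.found) {
  scanned   <- seq_len(k - 1)
  unscanned <- setdiff(seq_len(n.vars), seq_len(k))
  c(scanned[edge.found[scanned, k]], unscanned)
}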

Whole skeleton

So, knowing how the algorithm works, it is time to look into the arguments of the function that constructs the whole skeleton of the network.

mmhc.skel : Arguments

dataset : A matrix with the variables. The user must know whether they are continuous or categorical. When the matrix includes categorical data (i.e. 0, 1, 2, 3, where each number indicates a category), the minimum number for each variable must be 0. A data.frame is also supported, as the dataset is then converted into a matrix. Here we use the first eight variables of the Wine dataset.

max_k : The maximum conditioning set size to use in the conditional independence tests (see the Details of SES or MMPC). Here we choose 3.

threshold : Threshold (suitable values in (0, 1)) for assessing p-value significance. Here we choose the default value of 0.05.

test : The conditional independence test to use. The default value is “testIndFisher”. This procedure allows for “testIndFisher” and “testIndSpearman” for continuous variables, and “gSquare” for categorical variables. In case the dataset is a data.frame with mixed types of data, we recommend setting the argument to NULL; an appropriate test will then be selected (see MMPC for the automatic choice of tests). Here we choose NULL, so that the appropriate test is automatically selected.

type : The type of variable selection to take place for each variable (or node). The default (and standard) is “MMPC”. You can also choose “SES” and thus allow for multiple signatures of variables to be connected to a variable. Here we choose to use MMPC.

backward : If TRUE, the backward (or symmetry correction) phase will be implemented. This removes any falsely included variables from the parents and children set of the target variable; the mmpcbackphase function is called for this purpose. Here we set the argument to TRUE.

symmetry : In order for an edge to be added, a statistical relationship must have been found in both directions. If you want this symmetry correction to take place, leave this boolean variable TRUE. When set to FALSE, an edge is added even if the relationship is detected in only one direction (e.g. between Y and X but not between X and Y). Here we set the argument to TRUE.

nc : How many cores to use. This plays an important role when there are many variables: if the function is run on a multicore machine, using several cores is strongly recommended. Here we use only one core, since the dataset is small.

ini.pvalue : A list with the matrix of the univariate p-values. In cases where mmhc.skel is run more than once, the univariate associations need not be calculated again. Here we do not provide an initial list.

hash : A boolean variable which indicates whether (TRUE) or not (FALSE) to store the statistics calculated during the MMPC or SES execution in a hash-type object. The default value is FALSE. If TRUE, a hashObject is produced. Here we set it to TRUE, because we would like to see the exact number of tests performed.

mmhc.skel : Run

### ~ ~ ~ Running MMHC skeleton with MMPC ~ ~ ~ ###
MmhcSkeleton <- MXM::mmhc.skel(dataset    = wine[, 1:8], 
                               max_k      = 3, 
                               threshold  = 0.05,
                               test       = NULL,
                               type       = "MMPC",
                               backward   = TRUE,
                               symmetry   = TRUE,
                               nc         = 1,
                               ini.pvalue = NULL,
                               hash       = TRUE)

mmhc.skel : Output

The main reason for running MMHC is to construct the network. Information about the edges between the nodes is stored in an adjacency matrix, returned as $G: a value of 1 in G[i, j] (and, by symmetry, in G[j, i]) indicates that nodes i and j are connected by an edge.

head(MmhcSkeleton$G)
##            Type Alcohol Malic Ash Alcalinity Magnesium Phenols Flavanoids
## Type          0       1     0   1          1         0       0          1
## Alcohol       1       0     0   0          0         0       0          0
## Malic         0       0     0   0          0         0       0          0
## Ash           1       0     0   0          1         1       0          0
## Alcalinity    1       0     0   1          0         0       0          0
## Magnesium     0       0     0   1          0         0       0          0
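
Since the matrix is symmetric, each edge is stored twice; here is a small base-R sketch to list every edge exactly once:

### ~ ~ ~ Extract the edge list from $G ~ ~ ~ ###
edges <- which(MmhcSkeleton$G == 1 & upper.tri(MmhcSkeleton$G), arr.ind = TRUE)
data.frame(from = rownames(MmhcSkeleton$G)[edges[, 1]],
           to   = colnames(MmhcSkeleton$G)[edges[, 2]])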

In order to see the graph, we may use the plotnetwork function, which constructs an interactive graph.

## MXM::plotnetwork(MmhcSkeleton$G, "MMHC with MMPC Network")

Summary statistics about the edges (minimum, maximum, mean and median number of edges per node) and the density of edges (#edges / (n(n-1)/2), where n is the number of variables) can be found in $info and $density, respectively.

MmhcSkeleton$info
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    0.00    1.00    1.50    1.75    2.25    4.00
MmhcSkeleton$density
## [1] 0.25
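
We can verify the density by hand from $G (a quick sanity check using only base R):

n <- ncol(MmhcSkeleton$G)
n.edges <- sum(MmhcSkeleton$G) / 2  # each edge is counted twice in the symmetric matrix
n.edges / (n * (n - 1) / 2)
## [1] 0.25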

The matrix with the final (logged) p-values is also returned:

head(MmhcSkeleton$pvalue)
##                  Type      Alcohol      Malic        Ash  Alcalinity
## Type         0.000000 -34.57419750 -1.2899274  -8.401684 -16.7240722
## Alcohol    -34.574197   0.00000000 -1.5602574  -1.144140  -0.9777993
## Malic       -1.289927  -1.56025742  0.0000000  -0.548314  -2.4545047
## Ash         -8.401684  -1.14414005 -0.5483140   0.000000 -26.0703528
## Alcalinity -16.724072  -0.97779927 -2.4545047 -26.070353   0.0000000
## Magnesium   -2.601032  -0.02754831 -0.7564002  -4.525253  -1.3136856
##              Magnesium     Phenols   Flavanoids
## Type       -2.60103201 -0.04979364 -52.94060381
## Alcohol    -0.02754831 -2.21132896  -0.33078218
## Malic      -0.75640022 -0.44128360  -0.08744842
## Ash        -4.52525285 -2.45121237  -2.07053875
## Alcalinity -1.31368558 -0.46776108  -0.29538868
## Magnesium   0.00000000 -1.49005503  -0.24959600

and the (logged) p-values of all pairwise univariate associations are stored in the $ini.pvalue matrix:

head(MmhcSkeleton$ini.pvalue)
##           [,1]       [,2]        [,3]       [,4]       [,5]        [,6]
## [1,]   0.00000 -77.252723 -26.3781997 -12.790326 -32.143710 -11.8691788
## [2,] -81.69325   0.000000  -1.5602574  -5.384322 -10.594634  -8.2696065
## [3,] -30.81859  -1.560257   0.0000000  -3.552083  -9.271166  -0.7564002
## [4,] -12.39241  -5.384322  -3.5520829   0.000000 -21.281510  -9.1596408
## [5,] -29.99076 -10.594634  -9.2711662 -21.281510   0.000000  -1.3138901
## [6,] -11.33496  -8.212376  -0.7586787  -9.090335  -1.313686   0.0000000
##            [,7]        [,8]
## [1,] -61.991352 -110.258167
## [2,]  -9.306339   -6.529742
## [3,] -12.246011  -18.232525
## [4,]  -2.451212   -2.070539
## [5,] -11.297231  -13.399195
## [6,]  -5.499187   -4.730965
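
Note that both matrices hold the p-values on the natural logarithmic scale (hence the negative values); if needed, they can be mapped back to the usual (0, 1) scale with exp():

### ~ ~ ~ Back-transform the logged p-values ~ ~ ~ ###
exp(MmhcSkeleton$pvalue[1:3, 1:3])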

The number of tests that SES or MMPC performed (here MMPC, since that selection method was used) is included in the $ntests variable:

MmhcSkeleton$ntests
## univariate tests             Type          Alcohol            Malic 
##                0               58               14               14 
##              Ash       Alcalinity        Magnesium          Phenols 
##               25               19               14               17 
##       Flavanoids 
##               19

and if SES had been used, a vector denoting which variables had more than one signature, i.e. more than one set of variables associated with them, would be returned in $ms.

Finally, a numeric vector whose first element is the user time, whose second element is the system time and whose third element is the elapsed time is stored in the $runtime variable:

MmhcSkeleton$runtime
##    user  system elapsed 
##    1.52    0.00    1.63

Specific node

It is clear from the above run that the whole skeleton of the network was constructed. If we are interested in only one node, local.mmhc.skel can be used instead. It has fewer arguments, since the backward phase takes place automatically, the symmetry cannot be examined, there is no need for parallel computations, etc. Here we use the first column (Type) of the dataset as the examined node (node = 1).

### ~ ~ ~ Running Local MMHC Skeleton ~ ~ ~ ###
MmhcLocalSkeleton <- MXM::local.mmhc.skel(dataset   = wine[, 1:5],
                                          node      = 1,
                                          max_k     = 3,
                                          threshold = 0.05,
                                          test      = NULL)
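
The returned object has a slightly different structure from the full-skeleton output; a simple way to explore its components without assuming specific names is:

### ~ ~ ~ Inspect the local skeleton object ~ ~ ~ ###
str(MmhcLocalSkeleton)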

PC

We are going to use PC on the above dataset in order to construct the skeleton of the network.

pc.skel : Arguments

dataset : A matrix with the variables. It has to be clear to the user whether the data are continuous or categorical. For categorical data, the user must transform the data.frame into a matrix whose values start from 0 (for example 0, 1, 2 instead of “A”, “B” and “C”). In the case of mixed variables (continuous, binary and ordinal), a data.frame must be provided and the non-continuous variables (including the binary ones) must be ordered factors. Here we again use the first eight variables of the Wine dataset.

method : For continuous data, the choices are “pearson”, “spearman” or “distcor”. The latter uses the distance correlation and should not be used with a large number of observations, as it is quite slow by default. For categorical data, this must be “cat”. For a mix of continuous, binary and ordinal data, “comb.fast” or “comb.mm” should be chosen. These two methods perform the symmetric test for mixed data (Tsagris et al., 2018). Here we choose “comb.fast”, since more than one type of data exists in the Wine dataset.

id : This is to be used in the glmm.pc.skel only. It is a vector for identifying the grouped data, the correlated observations, the subjects. Here we do not use this argument.

alpha : The significance level (suitable values in (0, 1)) for assessing the p-values. Here we choose the default value 0.01.

rob : This is for robust estimation of the Pearson correlation coefficient. Here we choose the default value FALSE.

R : The number of permutations to be conducted. This is taken into consideration for “pc.skel” only: the Pearson correlation coefficient is calculated and the p-value is assessed via permutations. Here we keep R = 1, meaning that no permutations are performed.

ncores : How many cores to use. Here we do not set this argument, since the dataset is small.

stat : If you have the initial test statistic values (univariate associations), supply them here. Here we do not have any, so we do not use this argument.

ini.pvalue : If you have the initial p-values of the univariate associations, supply them here. Here we do not have any, so we do not use this argument.

pc.skel : Run

### ~ ~ ~ Running PC skeleton ~ ~ ~ ###
PcSkeleton <- MXM::pc.skel(dataset = wine[, 1:8],
                           method  = "comb.fast",
                           alpha   = 0.01,
                           rob     = FALSE,
                           R       = 1)

pc.skel : Output

The main reason for running PC, just like MMHC, is to construct the network. Information about the edges between the nodes is again stored in an adjacency matrix: $G, just like in MMHC.

head(PcSkeleton$G)
##            Type Alcohol Malic Ash Alcalinity Magnesium Phenols Flavanoids
## Type          0       1     1   1          1         0       0          1
## Alcohol       1       0     0   0          0         0       0          0
## Malic         1       0     0   0          0         0       0          0
## Ash           1       0     0   0          1         0       0          0
## Alcalinity    1       0     0   1          0         0       0          0
## Magnesium     0       0     0   0          0         0       0          0
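
Since both skeletons were learned on the same eight variables, we can directly check where the two algorithms disagree; here is a small sketch using only the two $G matrices shown above:

### ~ ~ ~ Compare the MMHC and PC skeletons ~ ~ ~ ###
which(MmhcSkeleton$G != PcSkeleton$G, arr.ind = TRUE)

For instance, the Type-Malic edge appears in the PC skeleton but not in the MMHC one.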

The interactive network plot can again be created.

## MXM::plotnetwork(PcSkeleton$G, "PC Network")

The components info, density, pvalue, ini.pvalue and runtime contain the same kind of information as in the MMHC output. Here we print them only to compare the results.

PcSkeleton$info
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##    0.00    1.00    1.50    1.75    2.00    5.00
PcSkeleton$density
## [1] 0.25
head(PcSkeleton$pvalue)
##                  Type      Alcohol      Malic        Ash  Alcalinity
## Type         0.000000 -53.12342795 -7.6163785  -7.974090 -13.0122570
## Alcohol    -53.123428   0.00000000 -1.5602574  -1.705724  -0.9777993
## Malic       -7.616379  -1.56025742  0.0000000  -3.552083  -3.2387142
## Ash         -7.974090  -1.70572446 -3.5520829   0.000000 -21.2815102
## Alcalinity -13.012257  -0.97779927 -3.2387142 -21.281510   0.0000000
## Magnesium   -3.951545  -0.02776847 -0.7564002  -4.525253  -1.3138901
##              Magnesium    Phenols  Flavanoids
## Type       -3.95154463 -3.0910480 -52.9406038
## Alcohol    -0.02776847 -3.8416319  -0.3307822
## Malic      -0.75640022 -0.5788210  -0.2209009
## Ash        -4.52525285 -2.4512124  -2.0705388
## Alcalinity -1.31389013 -0.4677611  -3.2327091
## Magnesium   0.00000000 -1.4900550  -0.2495960
head(PcSkeleton$ini.pvalue)
##           [,1]       [,2]        [,3]       [,4]       [,5]        [,6]
## [1,]   0.00000 -81.693248 -30.8185851 -12.392410 -29.990762 -11.6223614
## [2,] -81.69325   0.000000  -1.5602574  -5.384322 -10.594634  -8.2696065
## [3,] -30.81859  -1.560257   0.0000000  -3.552083  -9.271166  -0.7564002
## [4,] -12.39241  -5.384322  -3.5520829   0.000000 -21.281510  -9.1596408
## [5,] -29.99076 -10.594634  -9.2711662 -21.281510   0.000000  -1.3138901
## [6,] -11.62236  -8.269607  -0.7564002  -9.159641  -1.313890   0.0000000
##            [,7]        [,8]
## [1,] -63.712666 -113.848714
## [2,]  -9.306339   -6.529742
## [3,] -12.246011  -18.232525
## [4,]  -2.451212   -2.070539
## [5,] -11.297231  -13.399195
## [6,]  -5.507681   -4.731090
PcSkeleton$runtime
##    user  system elapsed 
##    0.53    0.00    0.55

Furthermore, the pc.skel object includes information about the test statistics of the univariate associations …

head(PcSkeleton$stat)
##                 Type    Alcohol      Malic       Ash Alcalinity  Magnesium
## Type         0.00000 135.077624 36.9434250 13.312901  35.771637 12.4295843
## Alcohol    135.07762   0.000000  1.5823981  8.245177  18.743225 13.9277185
## Malic       36.94342   1.582398  0.0000000  4.867305  15.978879  0.5257716
## Ash         13.31290   8.245177  4.8673049  0.000000  43.061994 15.7486903
## Alcalinity  35.77164  18.743225 15.9788786 43.061994   0.000000  1.2307620
## Magnesium   12.42958  13.927719  0.5257716 15.748690   1.230762  0.0000000
##              Phenols Flavanoids
## Type       93.733010 233.925873
## Alcohol    16.051566  10.456738
## Malic      22.273425  35.774268
## Ash         2.977418   2.362009
## Alcalinity 20.234475  24.789646
## Magnesium   8.480164   7.015208

… the maximum value of k, the maximum cardinality of the conditioning set at which the algorithm stopped…

PcSkeleton$kappa
## [1] 3

… and a list with the separating sets for every value of k.

PcSkeleton$sepset[[1]][1:5,]
##      X Y SepVar 1        stat logged.p-value
## [1,] 6 8        7 0.078907336    -0.24959600
## [2,] 2 4        1 1.798509596    -1.70572446
## [3,] 6 7        8 1.480336516    -1.49005503
## [4,] 2 8        7 0.130488679    -0.33078218
## [5,] 2 6        1 0.001181986    -0.02776847
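
As a small sanity check, the logged p-value reported for the pair (6, 7), i.e. Magnesium and Phenols given Flavanoids, coincides with the corresponding entry of the final $pvalue matrix:

PcSkeleton$pvalue["Magnesium", "Phenols"]
## [1] -1.490055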

Partial Correlation based on Forward Selection

Finally, another network construction approach is through partial correlation based on forward selection. In this section we demonstrate an example using forward regression. In contrast to the MMHC algorithm, where MMPC or SES is run for every variable, here forward regression is used. Partial correlation forward regression can prove very efficient, since only correlations are computed. However, it can only be applied to continuous data.

The corfs.network function shares some arguments with mmhc.skel and pc.skel, such as:

x : A matrix with continuous data. We therefore exclude the first column of the dataset (Type), which is categorical.

threshold : Threshold (suitable values in (0, 1)) for assessing p-value significance. Here we choose the default value of 0.05.

symmetry : In order for an edge to be added, a statistical relationship must have been found in both directions. If you want this symmetry correction to take place, leave this boolean variable TRUE. When set to FALSE, an edge is added even if the relationship is detected in only one direction. Here we set the argument to TRUE.

nc : How many cores to use. Here we use only one core, since the dataset is small.

However, there are some unique arguments:

tolb : The difference in the BIC between two successive models. Here we use the default value 2, which means that if the BIC difference between two successive models is less than 2, the process stops and the last variable, even though significant, does not enter the model (see the small numeric illustration after this list).

tolr : The difference in the adjusted R2 between two successive models. Here we use the default value 0.02, which means that if the difference in adjusted R2 between two successive models is less than 0.02, the process stops and the last variable, even though significant, does not enter the model.

stopping : The stopping rule. “BIC” is the default value, but it can be changed to “ar2”, in which case the adjusted R2 is used. If you want both criteria to be satisfied, type “BICR2”. Here we choose “BICR2”.
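
To make the BIC stopping rule concrete, here is a small hypothetical illustration (the numbers are invented and are not produced by MXM):

### ~ ~ ~ Hypothetical illustration of the tolb rule ~ ~ ~ ###
bic.path <- c(520.4, 498.1, 489.7, 488.9)  # BIC after each forward step
diff(bic.path)
## [1] -22.3  -8.4  -0.8
# The last candidate improves the BIC by only 0.8 < 2,
# so it does not enter the model and the search stops.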

### ~ ~ ~  Running FS skeleton  ~ ~ ~ ###
FSSkeleton <- MXM::corfs.network(x         = as.matrix(wine[, -1]),
                                 threshold = 0.05,
                                 symmetry  = TRUE,
                                 tolb      = 2,
                                 tolr      = 0.02,
                                 stopping  = "BICR2",
                                 nc        = 1)
## Warning in 0:a: numerical expression has 2 elements: only the first used

## Warning in 0:a: numerical expression has 2 elements: only the first used
## Warning in 0:a: numerical expression has 3 elements: only the first used

## Warning in 0:a: numerical expression has 3 elements: only the first used
## Warning in 0:a: numerical expression has 2 elements: only the first used
## Warning in 0:a: numerical expression has 3 elements: only the first used
## Warning in 0:a: numerical expression has 4 elements: only the first used

## Warning in 0:a: numerical expression has 4 elements: only the first used
## Warning in 0:a: numerical expression has 2 elements: only the first used
## Warning in 0:a: numerical expression has 5 elements: only the first used

As far as the output is concerned, $G again contains the adjacency matrix, from which an interactive plot can be created.

## MXM::plotnetwork(FSSkeleton$G, "Partial Correlation based on FS Network")

The components runtime, density, info and ntests contain information similar to that of mmhc.skel.
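
For a quick comparison with the two previous approaches, these components can be printed in the same way:

FSSkeleton$info
FSSkeleton$density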

Conclusion

Now you are ready to construct the undirected skeleton of a Bayesian network from your data!
Thank you for your attention.
We hope that you found this tutorial helpful.

Session Info {.unnumbered}

All analyses have been applied on:

sessionInfo()
## R version 3.6.3 (2020-02-29)
## Platform: x86_64-w64-mingw32/x64 (64-bit)
## Running under: Windows 10 x64 (build 18363)
## 
## Matrix products: default
## 
## locale:
## [1] LC_COLLATE=C                  LC_CTYPE=Greek_Greece.1253   
## [3] LC_MONETARY=Greek_Greece.1253 LC_NUMERIC=C                 
## [5] LC_TIME=Greek_Greece.1253    
## 
## attached base packages:
## [1] stats     graphics  grDevices utils     datasets  methods   base     
## 
## other attached packages:
## [1] dplyr_0.8.5 MXM_1.4.8  
## 
## loaded via a namespace (and not attached):
##  [1] nlme_3.1-144        ordinal_2019.12-10  doParallel_1.0.15  
##  [4] RColorBrewer_1.1-2  R.cache_0.14.0      numDeriv_2016.8-1.1
##  [7] tools_3.6.3         backports_1.1.6     R6_2.4.1           
## [10] rpart_4.1-15        Hmisc_4.4-0         colorspace_1.4-1   
## [13] nnet_7.3-12         tidyselect_1.0.0    gridExtra_2.3      
## [16] compiler_3.6.3      coxme_2.2-16        cli_2.0.2          
## [19] quantreg_5.55       htmlTable_1.13.3    SparseM_1.78       
## [22] slam_0.1-47         scales_1.1.0        checkmate_2.0.0    
## [25] relations_0.6-9     stringr_1.4.0       digest_0.6.25      
## [28] foreign_0.8-75      minqa_1.2.4         R.utils_2.9.2      
## [31] base64enc_0.1-3     jpeg_0.1-8.1        pkgconfig_2.0.3    
## [34] htmltools_0.4.0     lme4_1.1-21         Rfast2_0.0.5       
## [37] htmlwidgets_1.5.1   rlang_0.4.5         rstudioapi_0.11    
## [40] visNetwork_2.0.9    generics_0.0.2      energy_1.7-7       
## [43] jsonlite_1.6.1      acepack_1.4.1       R.oo_1.23.0        
## [46] magrittr_1.5        Formula_1.2-3       Matrix_1.2-18      
## [49] Rcpp_1.0.4.6        munsell_0.5.0       fansi_0.4.1        
## [52] geepack_1.3-1       lifecycle_0.2.0     RcppZiggurat_0.1.5 
## [55] R.methodsS3_1.8.0   ucminf_1.1-4        stringi_1.4.6      
## [58] MASS_7.3-51.5       grid_3.6.3          parallel_3.6.3     
## [61] bdsmatrix_1.3-4     bigmemory.sri_0.1.3 crayon_1.3.4       
## [64] lattice_0.20-38     splines_3.6.3       knitr_1.28         
## [67] pillar_1.4.3        boot_1.3-24         markdown_1.1       
## [70] codetools_0.2-16    glue_1.4.0          evaluate_0.14      
## [73] latticeExtra_0.6-29 data.table_1.12.8   png_0.1-7          
## [76] vctrs_0.2.4         nloptr_1.2.2.1      foreach_1.5.0      
## [79] MatrixModels_0.4-1  gtable_0.3.0        purrr_0.3.3        
## [82] tidyr_1.0.2         assertthat_0.2.1    ggplot2_3.3.0      
## [85] xfun_0.12           Rfast_1.9.9         broom_0.5.5        
## [88] survival_3.1-12     tibble_3.0.0        iterators_1.0.12   
## [91] sets_1.0-18         cluster_2.1.0       bigmemory_4.5.36   
## [94] ellipsis_0.3.0      R.rsp_0.43.2

References {.unnumbered}

Tsagris M., Borboudakis G., Lagani V. and Tsamardinos I. (2018). Constraint-based Causal Discovery with Mixed Data. International Journal of Data Science and Analytics.

Tsamardinos I., Brown L.E. and Aliferis C.F. (2006). The max-min hill-climbing Bayesian network structure learning algorithm. Machine Learning, 65(1): 31-78. https://doi.org/10.1007/s10994-006-6889-7

Brown L. E., Tsamardinos I., & Aliferis C. F. (2004). A novel algorithm for scalable and accurate Bayesian network learning. Medinfo, 711-715.

Spirtes P., Glymour C. and Scheines R. (2001). Causation, Prediction, and Search. The MIT Press, Cambridge, MA, USA, 2nd edition.

Spirtes P., Glymour C. and Scheines R. (1993). Causation, Prediction, and Search. https://doi.org/10.1007/978-1-4612-2748-9