Classification Modeling

Choonghyun Ryu

2020-06-07

Preface

Once the data set is ready for model development, the model is fitted, predicted, and evaluated in the following ways: prepare and split the data, handle class imbalance, fit several binary classification models, predict the test set, evaluate and compare model performance, tune the cut-off, and predict new data.

The alookr package makes these steps fast and easy.
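As a preview, the following condensed sketch strings together the functions used throughout this document (the breastCancer data frame is prepared in the next section; the arguments mirror the examples below):

library(dplyr)
library(alookr)

# condensed sketch of the workflow covered in this document
sb     <- breastCancer %>% split_by(Class)               # split into train/test sets
train  <- sb %>%
  sampling_target(method = "ubSMOTE") %>%                # balance the classes
  cleanse()                                              # cleanse for modeling
test   <- sb %>% extract_set(set = "test")               # extract the test set
result <- train %>% run_models(target = "Class", positive = "malignant")
pred   <- result %>% run_predict(test)                   # predict the test set
perf   <- run_performance(pred)                          # performance metrics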

Data: Wisconsin Breast Cancer Data

The BreastCancer data set in the mlbench package contains breast cancer records. The objective is to classify each observation as benign or malignant.

It is a data frame with 699 observations on 11 variables: one character variable, nine ordered or nominal predictors, and one target class:

library(mlbench)
data(BreastCancer)

# class of each variable
sapply(BreastCancer, function(x) class(x)[1])
             Id    Cl.thickness       Cell.size      Cell.shape   Marg.adhesion 
    "character"       "ordered"       "ordered"       "ordered"       "ordered" 
   Epith.c.size     Bare.nuclei     Bl.cromatin Normal.nucleoli         Mitoses 
      "ordered"        "factor"        "factor"        "factor"        "factor" 
          Class 
       "factor" 

Preparation of the data

Perform data preprocessing as follows: fix missing values, split the data into a train set and a test set, check for missing levels, handle any class imbalance, cleanse the train set, and extract the test set.

Fix missing values with dlookr::imputate_na()

Find the variables that contain missing values, then impute them using imputate_na() from the dlookr package.

library(dlookr)
library(dplyr)

# variables that have missing values
diagnose(BreastCancer) %>%
  filter(missing_count > 0)
# A tibble: 1 x 6
  variables   types  missing_count missing_percent unique_count unique_rate
  <chr>       <chr>          <int>           <dbl>        <int>       <dbl>
1 Bare.nuclei factor            16            2.29           11      0.0157

# imputation of missing values
breastCancer <- BreastCancer %>%
  mutate(Bare.nuclei = imputate_na(BreastCancer, Bare.nuclei, Class,
                         method = "mice", no_attrs = TRUE, print_flag = FALSE))
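It is worth confirming that the imputation left no missing values; a quick check with diagnose(), the same function used above, should now return zero rows:

# confirm that no missing values remain after imputation
breastCancer %>%
  diagnose() %>%
  filter(missing_count > 0)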

Split data set

split_by() in the alookr package splits the dataset into a train set and a test set.

The ratio argument of split_by() specifies the proportion of observations allocated to the train set.

split_by() returns an object of class split_df.

library(alookr)

# split the data into a train set and a test set with the default arguments
sb <- breastCancer %>%
  split_by(target = Class)

# show the class name
class(sb)
[1] "split_df"   "grouped_df" "tbl_df"     "tbl"        "data.frame"

# split the data into a train set and a test set with ratio = 0.6
tmp <- breastCancer %>%
  split_by(Class, ratio = 0.6)

The summary() function displays the following useful information about the split_df object:

# summary() displays some information
summary(sb)
** Split train/test set information **
 + random seed        :  72691 
 + split data            
    - train set count :  489 
    - test set count  :  210 
 + target variable    :  Class 
    - minority class  :  malignant (0.344778)
    - majority class  :  benign (0.655222)

# summary() displays some information
summary(tmp)
** Split train/test set information **
 + random seed        :  82577 
 + split data            
    - train set count :  419 
    - test set count  :  280 
 + target variable    :  Class 
    - minority class  :  malignant (0.344778)
    - majority class  :  benign (0.655222)

Check missing levels in the train set

In the case of categorical variables, when a train set and a test set are separated, a specific level may be missing from the train set.

In this case, fitting the model is not a problem, but an error occurs when predicting with the fitted model. Therefore, preprocessing is performed to ensure that no levels are missing from the train set.

Fortunately, in the following example, no categorical variable in the train set is missing any levels.

# list of categorical variables in the train set that contain missing levels
nolevel_in_train <- sb %>%
  compare_target_category() %>% 
  filter(is.na(train)) %>% 
  select(variable) %>% 
  unique() %>% 
  pull

nolevel_in_train
character(0)

# if any of the categorical variables in the train set contain a missing level, 
# split them again.
while (length(nolevel_in_train) > 0) {
  sb <- breastCancer %>%
    split_by(Class)

  nolevel_in_train <- sb %>%
    compare_target_category() %>% 
    filter(is.na(train)) %>% 
    select(variable) %>% 
    unique() %>% 
    pull
}

Handling imbalanced class data with sampling_target()

Issue of imbalanced class data

Imbalanced class (level) data means that one level of the target variable occurs with relatively low frequency. In general, the positive class is the smaller one; for example, in a spam prediction model, the class of interest (spam) occurs less often than non-spam.

Imbalanced class data is a common problem in machine learning classification.

table() and prop.table() are the traditional tools for diagnosing imbalanced class data. However, alookr's summary() is simpler and provides more information.

# train set frequency table - imbalanced classes data
table(sb$Class)

   benign malignant 
      458       241 

# train set relative frequency table - imbalanced classes data
prop.table(table(sb$Class))

   benign malignant 
0.6552217 0.3447783 

# using summary function - imbalanced classes data
summary(sb)
** Split train/test set information **
 + random seed        :  72691 
 + split data            
    - train set count :  489 
    - test set count  :  210 
 + target variable    :  Class 
    - minority class  :  malignant (0.344778)
    - majority class  :  benign (0.655222)

Handling imbalanced class data

Most machine learning algorithms work best when the number of samples in each class is about equal, because most algorithms are designed to maximize accuracy and reduce error. So we need to handle the imbalanced class problem.

sampling_target() performs sampling to address an imbalanced class problem.

Resampling - oversample minority class

Oversampling can be defined as adding more copies of the minority class.

Oversampling is performed by specifying “ubOver” in the method argument of the sampling_target() function.

# balance the classes by oversampling
train_over <- sb %>%
  sampling_target(method = "ubOver")

# frequency table 
table(train_over$Class)

   benign malignant 
      307       307 

Resampling - undersample majority class

Undersampling can be defined as removing some observations of the majority class.

Undersampling is performed by specifying “ubUnder” in the method argument of the sampling_target() function.

# balance the classes by undersampling
train_under <- sb %>%
  sampling_target(method = "ubUnder")

# frequency table 
table(train_under$Class)

   benign malignant 
      182       182 

Generate synthetic samples - SMOTE

SMOTE (Synthetic Minority Oversampling Technique) uses a nearest-neighbors algorithm to generate new, synthetic samples of the minority class.

SMOTE is performed by specifying “ubSMOTE” in the method argument of the sampling_target() function.

# balance the classes with SMOTE
train_smote <- sb %>%
  sampling_target(seed = 1234L, method = "ubSMOTE")

# frequency table 
table(train_smote$Class)

   benign malignant 
      728       546 

Cleansing the dataset for classification modeling with cleanse()

The cleanse() function cleanses the dataset for classification modeling.

It is useful when fitting a classification model, and it does the following: removes variables whose value is constant (only one unique value), removes variables with a high unique rate (such as identifiers), and converts character variables to factors.

In this example, cleanse() removes the Id variable because of its high unique rate.

# clean the training set
train <- train_smote %>%
  cleanse
─ Checking unique value ────────────── unique value is one ─
No variables that unique value is one.

─ Checking unique rate ──────────────── high unique rate ─
remove variables with high unique rate
● Id = 436(0.342229199372057)

─ Checking character variables ──────────── categorical data ─
No character variables.

Extract the test set for model evaluation with extract_set()

# extract test set
test <- sb %>%
  extract_set(set = "test")
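extract_set() pulls the requested set out of the split_df object. As a quick sanity check, the test set should contain the 210 observations reported by summary(sb) above:

# quick check of the extracted test set size
nrow(test)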

Binary classification modeling with run_models()

run_models() fits several representative binary classification models to the train set.

run_models() fits the models in parallel. However, parallel processing is not supported on the MS-Windows operating system or in the RStudio environment.

The currently supported algorithms are as follows: logistic (logistic regression), rpart (recursive partitioning trees), ctree (conditional inference trees), randomForest (random forest), and ranger (a fast implementation of random forests).

run_models() returns an object of class model_df, which contains the following variables: step (the current stage, "1.Fitted" after fitting), model_id (the name of the algorithm), target (the name of the target variable), positive (the level treated as positive), and fitted_model (the fitted model object).

result <- train %>% 
  run_models(target = "Class", positive = "malignant")
result
# A tibble: 5 x 5
  step     model_id     target positive  fitted_model
  <chr>    <chr>        <chr>  <chr>     <list>      
1 1.Fitted logistic     Class  malignant <glm>       
2 1.Fitted rpart        Class  malignant <rpart>     
3 1.Fitted ctree        Class  malignant <BinaryTr>  
4 1.Fitted randomForest Class  malignant <rndmFrs.>  
5 1.Fitted ranger       Class  malignant <ranger>    

Evaluate the model

Evaluate the predictive performance of fitted models.

Predict test set using fitted model with run_predict()

run_predict() predicts the test set using the model_df object fitted by run_models().

run_predict() predicts each model in parallel. However, parallel processing is not supported on the MS-Windows operating system or in the RStudio environment.

The returned model_df object contains the variables described above plus predicted, a list column holding each model's predictions for the test set:

pred <- result %>%
  run_predict(test)
pred
# A tibble: 5 x 6
  step        model_id     target positive  fitted_model predicted  
  <chr>       <chr>        <chr>  <chr>     <list>       <list>     
1 2.Predicted logistic     Class  malignant <glm>        <fct [210]>
2 2.Predicted rpart        Class  malignant <rpart>      <fct [210]>
3 2.Predicted ctree        Class  malignant <BinaryTr>   <fct [210]>
4 2.Predicted randomForest Class  malignant <rndmFrs.>   <fct [210]>
5 2.Predicted ranger       Class  malignant <ranger>     <fct [210]>

Calculate the performance metric with run_performance()

run_performance() calculates the performance metrics for the model_df object predicted by run_predict().

run_performance() calculates the performance metrics in parallel. However, parallel processing is not supported on the MS-Windows operating system or in the RStudio environment.

The returned model_df object contains the variables described above plus performance, a list column holding each model's performance metrics:

# Calculate performance metrics.
perf <- run_performance(pred)
perf
# A tibble: 5 x 7
  step          model_id     target positive fitted_model predicted  performance
  <chr>         <chr>        <chr>  <chr>    <list>       <list>     <list>     
1 3.Performanc… logistic     Class  maligna… <glm>        <fct [210… <dbl [15]> 
2 3.Performanc… rpart        Class  maligna… <rpart>      <fct [210… <dbl [15]> 
3 3.Performanc… ctree        Class  maligna… <BinaryTr>   <fct [210… <dbl [15]> 
4 3.Performanc… randomForest Class  maligna… <rndmFrs.>   <fct [210… <dbl [15]> 
5 3.Performanc… ranger       Class  maligna… <ranger>     <fct [210… <dbl [15]> 

The performance variable contains a list object, which contains 15 performance metrics:

# Performance by analytics models
performance <- perf$performance
names(performance) <- perf$model_id
performance
$logistic
ZeroOneLoss    Accuracy   Precision      Recall Sensitivity Specificity 
 0.04285714  0.95714286  0.89062500  0.96610169  0.96610169  0.95364238 
   F1_Score Fbeta_Score     LogLoss         AUC        Gini       PRAUC 
 0.92682927  0.92682927  1.36484243  0.96767314  0.95106073  0.02451234 
    LiftAUC     GainAUC     KS_Stat 
 1.40435135  0.83627926 93.66932316 

$rpart
ZeroOneLoss    Accuracy   Precision      Recall Sensitivity Specificity 
 0.09523810  0.90476190  0.80000000  0.88135593  0.88135593  0.91390728 
   F1_Score Fbeta_Score     LogLoss         AUC        Gini       PRAUC 
 0.83870968  0.83870968  0.30219860  0.92917275  0.88977439  0.07855529 
    LiftAUC     GainAUC     KS_Stat 
 1.32040456  0.80859564 82.33247278 

$ctree
ZeroOneLoss    Accuracy   Precision      Recall Sensitivity Specificity 
  0.0952381   0.9047619   0.7746479   0.9322034   0.9322034   0.8940397 
   F1_Score Fbeta_Score     LogLoss         AUC        Gini       PRAUC 
  0.8461538   0.8461538   0.9634652   0.9622292   0.9387137   0.2203131 
    LiftAUC     GainAUC     KS_Stat 
  1.5391260   0.8323648  84.6110675 

$randomForest
ZeroOneLoss    Accuracy   Precision      Recall Sensitivity Specificity 
 0.04285714  0.95714286  0.86764706  1.00000000  1.00000000  0.94039735 
   F1_Score Fbeta_Score     LogLoss         AUC        Gini       PRAUC 
 0.92913386  0.92913386  0.15624744  0.98950499  0.97889774  0.64425125 
    LiftAUC     GainAUC     KS_Stat 
 1.96295058  0.85197740 97.35099338 

$ranger
ZeroOneLoss    Accuracy   Precision      Recall Sensitivity Specificity 
 0.01904762  0.98095238  0.93650794  1.00000000  1.00000000  0.97350993 
   F1_Score Fbeta_Score     LogLoss         AUC        Gini       PRAUC 
 0.96721311  0.96721311  0.14316118  0.98843866  0.97687732  0.82400080 
    LiftAUC     GainAUC     KS_Stat 
 2.11184262  0.85121065 97.35099338 

If you convert the list to a matrix, you can compare the models at a glance:

# Convert to a matrix to compare performance.
sapply(performance, "c")
               logistic       rpart      ctree randomForest      ranger
ZeroOneLoss  0.04285714  0.09523810  0.0952381   0.04285714  0.01904762
Accuracy     0.95714286  0.90476190  0.9047619   0.95714286  0.98095238
Precision    0.89062500  0.80000000  0.7746479   0.86764706  0.93650794
Recall       0.96610169  0.88135593  0.9322034   1.00000000  1.00000000
Sensitivity  0.96610169  0.88135593  0.9322034   1.00000000  1.00000000
Specificity  0.95364238  0.91390728  0.8940397   0.94039735  0.97350993
F1_Score     0.92682927  0.83870968  0.8461538   0.92913386  0.96721311
Fbeta_Score  0.92682927  0.83870968  0.8461538   0.92913386  0.96721311
LogLoss      1.36484243  0.30219860  0.9634652   0.15624744  0.14316118
AUC          0.96767314  0.92917275  0.9622292   0.98950499  0.98843866
Gini         0.95106073  0.88977439  0.9387137   0.97889774  0.97687732
PRAUC        0.02451234  0.07855529  0.2203131   0.64425125  0.82400080
LiftAUC      1.40435135  1.32040456  1.5391260   1.96295058  2.11184262
GainAUC      0.83627926  0.80859564  0.8323648   0.85197740  0.85121065
KS_Stat     93.66932316 82.33247278 84.6110675  97.35099338 97.35099338
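If a single metric is of interest, the matrix can be indexed directly; for example, a minimal sketch that ranks the models by AUC:

# rank the models by AUC using the metric matrix above
perf_mat <- sapply(performance, "c")
sort(perf_mat["AUC", ], decreasing = TRUE)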

compare_performance() returns a list holding the results of the model comparison. The list has the following components: recommend_model (the name of the recommended model), top_metric_count (how many metrics each model scored best on), mean_rank (each model's mean rank across the metrics), and top_metric (the metrics each model scored best on).

In this example, compare_performance() recommends the "ranger" model.

# Compare the performance metrics of each model
comp_perf <- compare_performance(pred)
comp_perf
$recommend_model
[1] "ranger"

$top_metric_count
    logistic        rpart        ctree randomForest       ranger 
           0            0            0            5           10 

$mean_rank
    logistic        rpart        ctree randomForest       ranger 
    3.153846     4.538462     4.076923     1.923077     1.307692 

$top_metric
$top_metric$logistic
NULL

$top_metric$rpart
NULL

$top_metric$ctree
NULL

$top_metric$randomForest
[1] "Recall"  "AUC"     "Gini"    "GainAUC" "KS_Stat"

$top_metric$ranger
 [1] "ZeroOneLoss" "Accuracy"    "Precision"   "Recall"      "Specificity"
 [6] "F1_Score"    "LogLoss"     "PRAUC"       "LiftAUC"     "KS_Stat"    

Plot the ROC curve with plot_performance()

plot_performance() plots the ROC curve of each model.

# Plot ROC curve
plot_performance(pred)

Tuning the cut-off

In general, a binary classification model predicts the positive class when the predicted probability is greater than 0.5; in other words, 0.5 is used as the cut-off value. This applies to most algorithms. However, in some cases performance can be improved by tuning the cut-off value.

plot_cutoff() visualizes a plot for selecting the cut-off value and returns the chosen cut-off.

pred_best <- pred %>% 
  filter(model_id == comp_perf$recommend_model) %>% 
  select(predicted) %>% 
  pull %>% 
  .[[1]] %>% 
  attr("pred_prob")

cutoff <- plot_cutoff(pred_best, test$Class, "malignant", type = "mcc")

cutoff
[1] 0.57

cutoff2 <- plot_cutoff(pred_best, test$Class, "malignant", type = "density")

cutoff2
[1] 0.5108

cutoff3 <- plot_cutoff(pred_best, test$Class, "malignant", type = "prob")

cutoff3
[1] 0.57

Performance comparison between prediction and tuned cut-off with performance_metric()

Compare the performance of the original prediction with that of the tuned cut-off, using the best-performing model, comp_perf$recommend_model.

comp_perf$recommend_model
[1] "ranger"

# extract predicted probability
idx <- which(pred$model_id == comp_perf$recommend_model)
pred_prob <- attr(pred$predicted[[idx]], "pred_prob")

# or, extract predicted probability using dplyr
pred_prob <- pred %>% 
  filter(model_id == comp_perf$recommend_model) %>% 
  select(predicted) %>% 
  pull %>% 
  "[["(1) %>% 
  attr("pred_prob")

# predicted probability
pred_prob  
  [1] 0.0159333333 0.4463444444 0.0001056410 0.0000000000 0.9909515873
  [6] 0.0000000000 0.0001723077 0.9980000000 0.0000000000 0.4412536075
 [11] 0.5934333333 0.0159333333 0.7166396825 0.9234833333 0.8174357143
 [16] 0.8336484127 0.8866119048 0.9619753968 0.0000000000 0.4888087302
 [21] 0.1275253968 0.0186087302 0.3840238095 0.0000000000 0.2729964646
 [26] 0.9502222222 0.0000000000 0.0000000000 0.0000000000 0.0000000000
 [31] 0.0000000000 0.0000000000 0.0000000000 0.8832769841 1.0000000000
 [36] 0.5710436508 0.0000000000 0.8677785714 0.0285182540 0.4205940115
 [41] 0.1139833333 0.0148849206 0.9992000000 0.0000000000 1.0000000000
 [46] 0.9926698413 0.1426349206 0.0000000000 0.0000000000 0.9453896825
 [51] 0.0000000000 0.0001723077 0.0000000000 1.0000000000 0.0148849206
 [56] 0.0000000000 0.0001056410 0.0000000000 0.0000000000 0.0836413553
 [61] 0.9535841270 0.0000000000 0.9952857143 0.9948698413 0.0000000000
 [66] 0.0001056410 0.0318515374 0.9937841270 0.9993333333 0.9995555556
 [71] 0.9988571429 0.0000000000 0.0000000000 0.0000000000 0.9975000000
 [76] 1.0000000000 0.9997777778 0.9945976190 0.9596571429 0.2781753968
 [81] 0.0380063492 0.3422595238 0.9724658730 0.8600365079 0.0000000000
 [86] 0.0000000000 0.0000000000 0.7355349206 0.9993333333 0.0000000000
 [91] 0.0000000000 0.9704730159 0.0000000000 0.9826158730 0.9947857143
 [96] 0.0000000000 0.9992000000 0.0952012987 0.0000000000 0.9813111111
[101] 0.0406325397 0.9969785714 0.0000000000 0.0001723077 0.9957277778
[106] 0.0000000000 0.0285182540 0.0000000000 0.9952857143 0.9952857143
[111] 0.0000000000 0.0001056410 0.0000000000 1.0000000000 0.1627190476
[116] 0.0000000000 0.9992777778 0.0000000000 0.0001056410 0.0052777778
[121] 0.0000000000 0.0000000000 0.0000000000 0.9985500000 0.0000000000
[126] 0.0946261905 0.0000000000 0.0000000000 0.1402055556 0.3280420635
[131] 0.9979428571 0.9913063492 0.0000000000 0.0000000000 0.7024698413
[136] 0.0000000000 0.0138547619 0.9868420635 0.0110825397 0.9984698413
[141] 0.9946722222 0.0000000000 0.0485943945 0.0000000000 1.0000000000
[146] 0.9935500000 0.0000000000 0.0351127339 0.0040723077 0.6119555556
[151] 0.0001723077 0.0001056410 0.0000000000 0.9990000000 0.0000000000
[156] 0.0006666667 0.0000000000 0.0001056410 0.0001056410 1.0000000000
[161] 0.0000000000 0.0000000000 0.2031111111 0.2927802309 0.3834067100
[166] 0.0000000000 0.0000000000 0.0008333333 0.9392126984 1.0000000000
[171] 0.0000000000 0.0000000000 0.0000000000 0.0000000000 0.0098492063
[176] 0.9930698413 0.9966666667 0.0000000000 0.0000000000 0.8850404762
[181] 0.0000000000 0.0000000000 0.0000000000 0.0001056410 0.0000000000
[186] 0.0000000000 0.2889143579 0.1071650794 0.0485841270 0.0186087302
[191] 0.0004000000 0.0163000000 0.0002000000 0.0019000000 0.0000000000
[196] 0.0000000000 0.4942992063 0.0000000000 0.0000000000 0.0000000000
[201] 0.0036722222 0.0106087302 0.0000000000 0.0000000000 0.0000000000
[206] 0.0285182540 0.0002000000 0.0000000000 0.0009166667 0.0000000000

# compare Accuracy
performance_metric(pred_prob, test$Class, "malignant", "Accuracy")
[1] 0.9809524
performance_metric(pred_prob, test$Class, "malignant", "Accuracy",
                   cutoff = cutoff)
[1] 0.9809524

# compare Confusion Matrix
performance_metric(pred_prob, test$Class, "malignant", "ConfusionMatrix")
           actual
predict     benign malignant
  benign       147         0
  malignant      4        59
performance_metric(pred_prob, test$Class, "malignant", "ConfusionMatrix", 
                   cutoff = cutoff)
           actual
predict     benign malignant
  benign       147         0
  malignant      4        59

# compare F1 Score
performance_metric(pred_prob, test$Class, "malignant", "F1_Score")
[1] 0.9672131
performance_metric(pred_prob, test$Class,  "malignant", "F1_Score", 
                   cutoff = cutoff)
[1] 0.9672131
performance_metric(pred_prob, test$Class,  "malignant", "F1_Score", 
                   cutoff = cutoff2)
[1] 0.9672131

If the tuned cut-off performs well, use it as the cut-off when predicting positives.
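For illustration, the tuned cut-off can also be applied to the predicted probabilities by hand; a minimal sketch, assuming the pred_prob and cutoff objects created above and a >= comparison at the threshold:

# apply the tuned cut-off to the predicted probabilities manually
pred_cut <- factor(ifelse(pred_prob >= cutoff, "malignant", "benign"),
                   levels = levels(test$Class))
table(pred_cut, test$Class)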

Predict

If you have selected a good model from several models, then perform the prediction with that model.

Create a data set for prediction

Create sample data for prediction by extracting 50 observations from the undersampled data set created earlier.

data_pred <- train_under %>% 
  cleanse 
─ Checking unique value ────────────── unique value is one ─
No variables that unique value is one.

─ Checking unique rate ──────────────── high unique rate ─
remove variables with high unique rate
● Id = 348(0.956043956043956)

─ Checking character variables ──────────── categorical data ─
No character variables.

set.seed(1234L)
data_pred <- data_pred %>% 
  nrow %>% 
  seq %>% 
  sample(size = 50) %>% 
  data_pred[., ]

Predict with alookr and dplyr

Perform the prediction using the dplyr package. The final factor() call strips unnecessary attributes from the result.

pred_actual <- pred %>%
  filter(model_id == comp_perf$recommend_model) %>% 
  run_predict(data_pred) %>% 
  select(predicted) %>% 
  pull %>% 
  "[["(1) %>% 
  factor()

pred_actual
 [1] malignant benign    benign    malignant benign    benign    benign   
 [8] malignant benign    malignant malignant benign    benign    malignant
[15] benign    benign    malignant malignant malignant benign    malignant
[22] malignant malignant malignant benign    benign    malignant benign   
[29] benign    malignant malignant malignant benign    malignant malignant
[36] benign    malignant benign    malignant benign    malignant benign   
[43] benign    malignant benign    malignant malignant benign    malignant
[50] benign   
Levels: benign malignant

If you want to predict with a cut-off, specify the cutoff argument of run_predict() as follows:

In this example, the predictions with and without the cut-off are identical.

pred_actual2 <- pred %>%
  filter(model_id == comp_perf$recommend_model) %>% 
  run_predict(data_pred, cutoff) %>% 
  select(predicted) %>% 
  pull %>% 
  "[["(1) %>% 
  factor()

pred_actual2
 [1] malignant benign    benign    malignant benign    benign    benign   
 [8] malignant benign    malignant malignant benign    benign    malignant
[15] benign    benign    malignant malignant malignant benign    malignant
[22] malignant malignant malignant benign    benign    malignant benign   
[29] benign    malignant malignant malignant benign    malignant malignant
[36] benign    malignant benign    malignant benign    malignant benign   
[43] benign    malignant benign    malignant malignant benign    malignant
[50] benign   
Levels: benign malignant

sum(pred_actual != pred_actual2)
[1] 0