missing_values_handling

  • Available in: Deep Learning, GLM
  • Hyperparameter: yes

Description

This option specifies how the algorithm treats missing values. In H2O, the Deep Learning and GLM algorithms will either skip or mean-impute rows with NA values. This option defaults to MeanImputation. Note that in Deep Learning, unseen categorical levels are imputed by adding an extra “missing” level. In GLM, unseen categorical levels are replaced by the most frequent level present in training (the mode). Optionally, either algorithm can skip all rows with any missing values.
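As a quick illustration (a minimal sketch, not part of the original example; train, predictors, and response are placeholder names), the option is passed the same way to both algorithms:

# GLM: drop any row that contains a missing value
glm_skip <- h2o.glm(x = predictors, y = response, training_frame = train,
                    missing_values_handling = "Skip")

# Deep Learning: mean-impute missing values (the default)
dl_impute <- h2o.deeplearning(x = predictors, y = response, training_frame = train,
                              missing_values_handling = "MeanImputation")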

The fewer NA values in your training data, the better. Always check the degrees of freedom in the output model: the degrees of freedom is the number of observations used to train the model minus the size of the model (i.e., the number of features). If this number is much smaller than expected, too many rows were likely excluded due to missing values (a quick way to run this check is sketched after the list below):

  • If you have a few columns with many NAs, you might accidentally be losing all your rows, so it's better to exclude (skip) those columns.
  • If you have many columns with a small fraction of uniformly distributed missing values, every row will likely have at least one missing value. In this case, impute the NAs (e.g., substitute the NAs with the column means) before modeling.
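A minimal sketch of both checks, using the h2o R accessors h2o.nacnt() (per-column NA counts) and h2o.residual_dof() (residual degrees of freedom of a fitted GLM); the frame df and the model model are placeholders:

# per-column NA counts: shows whether NAs are concentrated in a few columns
# (consider skipping those columns) or spread thinly across many (impute instead)
h2o.nacnt(df)

# residual degrees of freedom of a fitted GLM; a value far below the number of
# training rows suggests that many rows were dropped due to missing values
h2o.residual_dof(model)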

Example

library(h2o)
h2o.init()
# import the boston dataset:
# this dataset looks at features of the boston suburbs and predicts median housing prices
# the original dataset can be found at https://archive.ics.uci.edu/ml/datasets/Housing
boston <- h2o.importFile("https://s3.amazonaws.com/h2o-public-test-data/smalldata/gbm_test/BostonHousing.csv")

# set the predictor names and the response column name
predictors <- colnames(boston)[1:13]
# set the response column to "medv", the median value of owner-occupied homes in $1000's
response <- "medv"

# convert the chas column to a factor (chas = Charles River dummy variable (= 1 if tract bounds river; 0 otherwise))
boston["chas"] <- as.factor(boston["chas"])

# insert missing values at random (this method modifies the frame in place)
h2o.insertMissingValues(boston)

# check the number of missing values
print(paste("missing:", sum(is.na(boston))))

# split into train and validation sets (after inserting NAs, so that both
# splits actually contain missing values)
boston.splits <- h2o.splitFrame(data = boston, ratios = 0.8)
train <- boston.splits[[1]]
valid <- boston.splits[[2]]

# try using the `missing_values_handling` parameter:
boston_glm <- h2o.glm(x = predictors, y = response, training_frame = train,
                      missing_values_handling = "Skip",
                      validation_frame = valid)

# print the mse for the validation data
print(h2o.mse(boston_glm, valid = TRUE))
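
# a hypothetical addition to the original example: refit the same GLM with the
# default MeanImputation setting for a direct side-by-side comparison
boston_glm_mi <- h2o.glm(x = predictors, y = response, training_frame = train,
                         missing_values_handling = "MeanImputation",
                         validation_frame = valid)
print(h2o.mse(boston_glm_mi, valid = TRUE))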

# grid over `missing_values_handling`
# select the values for `missing_values_handling` to grid over
hyper_params <- list(missing_values_handling = c("Skip", "MeanImputation"))

# this example uses cartesian grid search because the search space is small
# and we want to see the performance of all models. for a larger search space,
# use random grid search instead: list(strategy = "RandomDiscrete")

# build a grid search over the GLM algorithm using the hyperparameters above
grid <- h2o.grid(x = predictors, y = response, training_frame = train, validation_frame = valid,
                 algorithm = "glm", grid_id = "boston_grid", hyper_params = hyper_params,
                 search_criteria = list(strategy = "Cartesian"))

# sort the grid models by mse
sortedGrid <- h2o.getGrid("boston_grid", sort_by = "mse", decreasing = FALSE)
sortedGrid
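
From here, the best model can be pulled out of the sorted grid for further use (a small follow-up sketch; model_ids is the standard slot on an H2OGrid object):

# retrieve the top model after sorting by mse
best_glm <- h2o.getModel(sortedGrid@model_ids[[1]])
print(h2o.mse(best_glm, valid = TRUE))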