AutoML: Automatic Machine Learning¶
In recent years, the demand for machine learning experts has outpaced the supply, despite the surge of people entering the field. To address this gap, there have been big strides in the development of user-friendly machine learning software that can be used by non-experts. The first steps toward simplifying machine learning involved developing simple, unified interfaces to a variety of machine learning algorithms (e.g. H2O).
Although H2O has made it easy for non-experts to experiment with machine learning, a fair bit of knowledge and background in data science is still required to produce high-performing machine learning models. Deep Neural Networks in particular are notoriously difficult for a non-expert to tune properly. In order for machine learning software to truly be accessible to non-experts, we have designed an easy-to-use interface which automates the process of training a large selection of candidate models. H2O's AutoML can also be a helpful tool for the advanced user, by providing a simple wrapper function that performs a large number of modeling-related tasks that would typically require many lines of code, and by freeing up their time to focus on other aspects of the data science pipeline, such as data preprocessing, feature engineering, and model deployment.
H2O’s AutoML can be used for automating the machine learning workflow, which includes automatic training and tuning of many models within a user-specified time-limit. The user can also use a performance metric-based stopping criterion for the AutoML process rather than a specific time constraint. Stacked Ensembles will be automatically trained on collections of individual models to produce highly predictive ensemble models which, in most cases, will be the top performing models in the AutoML Leaderboard.
AutoML Interface¶
The H2O AutoML interface is designed to have as few parameters as possible, so that all the user needs to do is point to their dataset, identify the response column, and optionally specify a time limit, a limit on the number of models trained, and early stopping parameters.
In both the R and Python API, AutoML uses the same data-related arguments, `x`, `y`, `training_frame`, `validation_frame`, as the other H2O algorithms. Most of the time, all you'll need to do is specify the data arguments. You can then configure values for `max_runtime_secs` and/or `max_models` to set explicit time or number-of-model limits on your run, or you can set those values high and configure the early stopping arguments to take care of the rest.
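For instance, a minimal Python call might look like the following sketch (assuming `train` is an H2OFrame with a `"response"` column; full runnable examples appear in the Code Examples section below):

from h2o.automl import H2OAutoML

# Limit the run by model count and/or wall-clock time
aml = H2OAutoML(max_models = 20, max_runtime_secs = 600, seed = 1)
aml.train(y = "response", training_frame = train)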
Required Parameters¶
Required Data Parameters¶
- y: This argument is the name (or index) of the response column.
- training_frame: Specifies the training set.
Required Stopping Parameters¶
One of the following stopping strategies (time, model or metric-based) must be specified. Note: When multiple options are set to control the stopping of the AutoML run (e.g. `max_models` and `max_runtime_secs`), whichever limit is reached first will stop the AutoML run.
Time-based:
- max_runtime_secs: This argument specifies the maximum time that the AutoML run will execute. Defaults to 3600 seconds (1 hour).
Model-based:
- max_models: Specify the maximum number of models to build in an AutoML run. (Does not include the Stacked Ensemble models.)
Metric-based (used as a group):
- stopping_metric: Specifies the metric to use for early stopping. Defaults to `"AUTO"`. The available options are:
  - `AUTO`: This defaults to `logloss` for classification and `deviance` for regression
  - `deviance`
  - `logloss`
  - `mse`
  - `rmse`
  - `mae`
  - `rmsle`
  - `auc`
  - `lift_top_group`
  - `misclassification`
  - `mean_per_class_error`
- stopping_tolerance: This option specifies the relative tolerance for the metric-based stopping criterion: the AutoML run stops when the improvement in the stopping metric is less than this value. Defaults to 0.001 if the dataset is at least 1 million rows; otherwise it defaults to a larger value determined by the size of the dataset and the non-NA-rate. In that case, the value is computed as 1/sqrt(nrows * non-NA-rate) (see the sketch after this list).
- stopping_rounds: This argument stops training new models in the AutoML run when the `stopping_metric` doesn't improve for this number of models, based on a simple moving average. Defaults to 3 and must be a non-negative integer. To disable this feature, set it to 0.
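As a rough illustration of the `stopping_tolerance` default, here is a minimal Python sketch (not AutoML's internal code) of the formula above; `nrows` and `non_na_rate` are hypothetical placeholders for your dataset's dimensions:

import math

def default_stopping_tolerance(nrows, non_na_rate=1.0):
    # Datasets of at least 1 million rows get the flat 0.001 default;
    # smaller datasets get 1/sqrt(nrows * non-NA-rate)
    if nrows >= 1e6:
        return 0.001
    return 1.0 / math.sqrt(nrows * non_na_rate)

print(default_stopping_tolerance(10000))  # 0.01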
Optional Parameters¶
Optional Data Parameters¶
- x: A list/vector of predictor column names or indexes. This argument only needs to be specified if the user wants to exclude columns from the set of predictors. If all columns (other than the response) should be used in prediction, then this does not need to be set.
- validation_frame: This argument is used to specify the validation frame used for early stopping within the training of the individual models and grid searches, and for the AutoML run itself (unless a model or time limit is set).
- leaderboard_frame: This argument allows the user to specify a particular data frame to use to score and rank models on the leaderboard. This frame will not be used for anything besides leaderboard scoring. If a leaderboard frame is not specified by the user, then the leaderboard will use cross-validation metrics instead; or, if cross-validation is turned off by setting `nfolds = 0`, then a leaderboard frame will be generated automatically from the validation frame (if provided) or the training frame.
- fold_column: Specifies a column with cross-validation fold index assignment per observation. This is used to override the default, randomized, 5-fold cross-validation scheme for individual models in the AutoML run.
- weights_column: Specifies a column with observation weights. Giving an observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed. A sketch showing how to pass `fold_column` and `weights_column` follows this list.
- ignored_columns: (Optional, Python only) Specify the column or columns to be excluded from the model.
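Here is a hedged Python sketch of passing these columns to AutoML (assuming `train` is an H2OFrame that already contains hypothetical `fold_id` and `obs_weight` columns):

from h2o.automl import H2OAutoML

aml = H2OAutoML(max_models = 10, seed = 1)
# fold_column overrides the default randomized 5-fold scheme;
# weights_column scales each observation's contribution to training
aml.train(y = "response",
          training_frame = train,
          fold_column = "fold_id",
          weights_column = "obs_weight")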
Optional Miscellaneous Parameters¶
- nfolds: Number of folds for k-fold cross-validation of the models in the AutoML run. Defaults to 5. Use 0 to disable cross-validation; this will also disable Stacked Ensembles (thus decreasing the overall best model performance).
- seed: Integer. Set a seed for reproducibility. AutoML can only guarantee reproducibility if `max_models` or early stopping is used, because `max_runtime_secs` is resource limited, meaning that if the resources are not the same between runs, AutoML may be able to train more models on one run than on another. Defaults to `NULL/None`.
- project_name: Character string to identify an AutoML project. Defaults to `NULL/None`, which means a project name will be auto-generated based on the training frame ID. More models can be trained on an existing AutoML project by specifying the same project name in multiple calls to the AutoML function (as long as the same training frame is used in subsequent runs), as shown in the sketch after this list.
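A minimal Python sketch of that reuse pattern (the project name "automl_demo" is a hypothetical label; both calls use the same `train` frame):

from h2o.automl import H2OAutoML

# First run trains an initial batch of models under the project name
aml = H2OAutoML(max_models = 5, project_name = "automl_demo", seed = 1)
aml.train(y = "response", training_frame = train)

# A second run with the same project name and training frame adds its
# models to the existing project's leaderboard
aml2 = H2OAutoML(max_models = 5, project_name = "automl_demo", seed = 1)
aml2.train(y = "response", training_frame = train)
print(aml2.leaderboard)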
Auto-Generated Frames¶
If the user doesn't specify a `validation_frame`, then one will be created automatically by randomly partitioning the training data. The validation frame is required for early stopping of the individual algorithms, the grid searches and the AutoML process itself.
By default, AutoML uses cross-validation for all models, and therefore we can use cross-validation metrics to generate the leaderboard. If the `leaderboard_frame` is explicitly specified by the user, then that frame will be used to generate the leaderboard metrics (See JIRA: this is currently not working unless `nfolds = 0`).
For cross-validated AutoML, when the user specifies:
- training: The `training_frame` is split into training (80%) and validation (20%).
- training + leaderboard: The `training_frame` is split into training (80%) and validation (20%).
- training + validation: Leave frames as-is.
- training + validation + leaderboard: Leave frames as-is.
If not using cross-validation (by setting `nfolds = 0`) in AutoML, then we need to make sure there is a test frame (aka. the "leaderboard frame") to score on because cross-validation metrics will not be available. So when the user specifies:
- training: The `training_frame` is split into training (80%), validation (10%) and leaderboard/test (10%); see the sketch after this list.
- training + leaderboard: The `training_frame` is split into training (80%) and validation (20%). Leaderboard frame as-is.
- training + validation: The `validation_frame` is split into validation (50%) and leaderboard/test (50%). Training frame as-is.
- training + validation + leaderboard: Leave frames as-is.
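To make the first case concrete, here is a hedged Python sketch (not AutoML's internal code) of an equivalent manual 80/10/10 partition using H2O's frame-splitting API:

# split_frame with two ratios returns three frames; the last gets the remainder
train_part, valid_part, lb_part = train.split_frame(ratios = [0.8, 0.1], seed = 1)
print(train_part.nrows, valid_part.nrows, lb_part.nrows)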
Code Examples¶
Here's an example showing basic usage of the `h2o.automl()` function in R and the `H2OAutoML` class in Python. For demonstration purposes only, we explicitly specify the `x` argument, even though on this dataset that's not required: the set of predictors is all columns other than the response. Like other H2O algorithms, the default value of `x` is "all columns, excluding `y`", so omitting it would produce the same result.
R:

library(h2o)
h2o.init()
# Import a sample binary outcome train/test set into H2O
train <- h2o.importFile("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
test <- h2o.importFile("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")
# Identify predictors and response
y <- "response"
x <- setdiff(names(train), y)
# For binary classification, response should be a factor
train[,y] <- as.factor(train[,y])
test[,y] <- as.factor(test[,y])
aml <- h2o.automl(x = x, y = y,
training_frame = train,
leaderboard_frame = test,
max_runtime_secs = 30)
# View the AutoML Leaderboard
lb <- aml@leaderboard
lb
# model_id auc logloss
# 1 StackedEnsemble_AllModels_0_AutoML_20171121_012135 0.788321 0.554019
# 2 StackedEnsemble_BestOfFamily_0_AutoML_20171121_012135 0.783099 0.559286
# 3 GBM_grid_0_AutoML_20171121_012135_model_1 0.780554 0.560248
# 4 GBM_grid_0_AutoML_20171121_012135_model_0 0.779713 0.562142
# 5 GBM_grid_0_AutoML_20171121_012135_model_2 0.776206 0.564970
# 6 GBM_grid_0_AutoML_20171121_012135_model_3 0.771026 0.570270
# [10 rows x 3 columns]
# The leader model is stored here
aml@leader
# If you need to generate predictions on a test set, you can make
# predictions directly on the `"H2OAutoML"` object, or on the leader
# model object directly
pred <- h2o.predict(aml, test) # predict(aml, test) also works
# or:
pred <- h2o.predict(aml@leader, test)
Python:

import h2o
from h2o.automl import H2OAutoML
h2o.init()
# Import a sample binary outcome train/test set into H2O
train = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_train_10k.csv")
test = h2o.import_file("https://s3.amazonaws.com/erin-data/higgs/higgs_test_5k.csv")
# Identify predictors and response
x = train.columns
y = "response"
x.remove(y)
# For binary classification, response should be a factor
train[y] = train[y].asfactor()
test[y] = test[y].asfactor()
# Run AutoML for 30 seconds
aml = H2OAutoML(max_runtime_secs = 30)
aml.train(x = x, y = y,
training_frame = train,
leaderboard_frame = test)
# View the AutoML Leaderboard
lb = aml.leaderboard
lb
# model_id auc logloss
# ---------------------------------------------------- -------- ---------
# StackedEnsemble_AllModels_0_AutoML_20171121_010846 0.786063 0.555833
# StackedEnsemble_BestOfFamily_0_AutoML_20171121_010846 0.783367 0.558511
# GBM_grid_0_AutoML_20171121_010846_model_1 0.779242 0.562157
# GBM_grid_0_AutoML_20171121_010846_model_0 0.778855 0.562648
# GBM_grid_0_AutoML_20171121_010846_model_3 0.769666 0.572165
# GBM_grid_0_AutoML_20171121_010846_model_2 0.769147 0.572064
# XRT_0_AutoML_20171121_010846 0.744612 0.593885
# DRF_0_AutoML_20171121_010846 0.733039 0.608609
# GLM_grid_0_AutoML_20171121_010846_model_0 0.685211 0.635138
# [9 rows x 3 columns]
# The leader model is stored here
aml.leader
# If you need to generate predictions on a test set, you can make
# predictions directly on the `"H2OAutoML"` object, or on the leader
# model object directly
preds = aml.predict(test)
# or:
preds = aml.leader.predict(test)
AutoML Output¶
The AutoML object includes a "leaderboard" of models that were trained in the process, including the performance of each model on the `leaderboard_frame` test set. If the user did not specify the `leaderboard_frame` argument, then a frame will be automatically partitioned, as explained in the Auto-Generated Frames section above. In the future, the leaderboard will be created using cross-validation metrics, unless a scoring frame is provided explicitly by the user.
The models are ranked by a default metric based on the problem type (the second column of the leaderboard). In binary classification problems, that metric is AUC; in multiclass classification problems, it is mean per-class error; and in regression problems, the default sort metric is deviance. Some additional metrics are also provided for convenience.
Here is an example leaderboard for a binary classification task:
| model_id | auc | logloss |
| --- | --- | --- |
| StackedEnsemble_AllModels_0_AutoML_20171121_012135 | 0.788321 | 0.554019 |
| StackedEnsemble_BestOfFamily_0_AutoML_20171121_012135 | 0.783099 | 0.559286 |
| GBM_grid_0_AutoML_20171121_012135_model_1 | 0.780554 | 0.560248 |
| GBM_grid_0_AutoML_20171121_012135_model_0 | 0.779713 | 0.562142 |
| GBM_grid_0_AutoML_20171121_012135_model_2 | 0.776206 | 0.564970 |
| GBM_grid_0_AutoML_20171121_012135_model_3 | 0.771026 | 0.570270 |
| DRF_0_AutoML_20171121_012135 | 0.734653 | 0.601520 |
| XRT_0_AutoML_20171121_012135 | 0.730457 | 0.611706 |
| GBM_grid_0_AutoML_20171121_012135_model_4 | 0.727098 | 0.666513 |
| GLM_grid_0_AutoML_20171121_012135_model_0 | 0.685211 | 0.635138 |
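The leaderboard is returned as an H2OFrame, so it can be inspected like any other frame. A minimal Python sketch (assuming `aml` is the H2OAutoML object from the example above):

# Print every row of the leaderboard (by default only the head is shown)
lb = aml.leaderboard
print(lb.head(rows = lb.nrows))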
FAQ¶
- Which models are trained in the AutoML process?
The current version of AutoML trains and cross-validates a default Random Forest, an Extremely-Randomized Forest, a random grid of Gradient Boosting Machines (GBMs), a random grid of Deep Neural Nets, a fixed grid of GLMs, and then trains two Stacked Ensemble models. A list of the hyperparameters searched over for each algorithm in the AutoML process is included in the appendix below.
One ensemble contains all the models, and the second ensemble contains just the best performing model from each algorithm class/family, so it's an ensemble of five base models. The second "Best of Family" ensemble is optimized for production use, since it only contains five constituent models. More details about the hyperparameter settings for the models will be added to this page at a later date.
- How do I save AutoML runs?
Rather than saving an AutoML object itself, currently, the best thing to do is to save the models you want to keep, individually, as shown in the sketch below. A utility for saving all of the models at once will be added in a future release.
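For example, a minimal Python sketch for persisting and reloading the leader model; the path "/tmp/automl_models" is a hypothetical location:

# Save the leader model to disk; h2o.save_model returns the saved path
model_path = h2o.save_model(model = aml.leader, path = "/tmp/automl_models", force = True)
# Reload it later
loaded_model = h2o.load_model(model_path)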
Appendix: Grid Search Parameters¶
AutoML performs hyperparameter search over a variety of H2O algorithms in order to deliver the best model. The following hyperparameters are searched over by grid search for each algorithm in AutoML.
GBM Hyperparameters
- score_tree_interval
- histogram_type
- ntrees
- max_depth
- min_rows
- learn_rate
- sample_rate
- col_sample_rate
- col_sample_rate_per_tree
- min_split_improvement
GLM Hyperparameters
- alpha
- missing_values_handling
Deep Learning Hyperparameters
- epochs
- adaptive_rate
- activation
- rho
- epsilon
- input_dropout_ratio
- hidden
- hidden_dropout_ratios
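AutoML's internal grid definitions aren't shown here, but the following hedged Python sketch illustrates the same idea using H2O's grid search API: a random search over two of the GBM hyperparameters listed above (reusing `x`, `y`, and `train` from the Code Examples section). The value ranges are illustrative assumptions, not AutoML's actual search space.

from h2o.estimators.gbm import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch

# Illustrative (not AutoML's actual) value ranges for two GBM hyperparameters
hyper_params = {"max_depth": [3, 5, 7, 9],
                "sample_rate": [0.6, 0.8, 1.0]}

# RandomDiscrete samples combinations at random rather than trying all of them
search_criteria = {"strategy": "RandomDiscrete", "max_models": 5, "seed": 1}

grid = H2OGridSearch(model = H2OGradientBoostingEstimator(ntrees = 50, seed = 1),
                     hyper_params = hyper_params,
                     search_criteria = search_criteria)
grid.train(x = x, y = y, training_frame = train)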