Modeling In H2O
H2OEstimator
class h2o.estimators.estimator_base.H2OEstimator
Bases: h2o.model.model_base.ModelBase
H2O Estimators.
H2O Estimators implement the following methods for model construction:
- start - top-level user-facing API for asynchronous model builds
- join - top-level user-facing API for blocking on an asynchronous model build
- train - top-level user-facing API for model building
- fit - used by scikit-learn
Because H2OEstimator instances are instances of ModelBase, these objects can use the H2O model API.
-
fit
(x, y=None, **params)[source]¶ Fit an H2O model as part of a scikit-learn pipeline or grid search.
A warning will be issued if a caller other than sklearn attempts to use this method.
Parameters: x : H2OFrame
An H2OFrame consisting of the predictor variables.
- y : H2OFrame, optional
An H2OFrame consisting of the response variable.
- params : optional
Extra arguments.
Returns: The current instance of H2OEstimator for method chaining.
-
get_params
(deep=True)[source]¶ Obtain parameters for this estimator.
Used primarily for sklearn Pipelines and sklearn grid search.
Parameters: deep – If True, return parameters of all sub-objects that are estimators.
Returns: A dict of parameters.
-
set_params
(**parms)[source]¶ Used by sklearn for updating parameters during grid search.
Parameters: parms : dict
A dictionary of parameters that will be set on this model.
Returns: self, the current estimator object with the parameters set as desired.
-
start
(x, y=None, training_frame=None, offset_column=None, fold_column=None, weights_column=None, validation_frame=None, **params)[source]¶ Train the model asynchronously.
To block for results, call join.
Parameters: x : list
A list of column names or indices indicating the predictor columns.
y : str
An index or a column name indicating the response column.
training_frame : H2OFrame
The H2OFrame having the columns indicated by x and y (as well as any additional columns specified by fold, offset, and weights).
offset_column : str, optional
The name or index of the column in training_frame that holds the offsets.
fold_column : str, optional
The name or index of the column in training_frame that holds the per-row fold assignments.
weights_column : str, optional
The name or index of the column in training_frame that holds the per-row weights.
validation_frame : H2OFrame, optional
H2OFrame with validation data to be scored on while training.
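Example
A minimal sketch of an asynchronous build with start() and join(), assuming a locally running H2O cluster; the CSV path and column names below are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.gbm import H2OGradientBoostingEstimator
>>> h2o.init()
>>> frame = h2o.import_file("train.csv")             # hypothetical path
>>> frame["response"] = frame["response"].asfactor()
>>> model = H2OGradientBoostingEstimator(ntrees=20)
>>> model.start(x=["x1", "x2", "x3"], y="response", training_frame=frame)
>>> # other client-side work can happen here while H2O builds the model
>>> model.join()                                     # block until the build finishes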
-
train
(x=None, y=None, training_frame=None, offset_column=None, fold_column=None, weights_column=None, validation_frame=None, max_runtime_secs=None, ignored_columns=None, **ignored)[source]¶ Train the H2O model.
Parameters: x : list, None
A list of column names or indices indicating the predictor columns.
y : str, int
An index or a column name indicating the response column.
training_frame : H2OFrame
The H2OFrame having the columns indicated by x and y (as well as any additional columns specified by fold, offset, and weights).
offset_column : str, optional
The name or index of the column in training_frame that holds the offsets.
fold_column : str, optional
The name or index of the column in training_frame that holds the per-row fold assignments.
weights_column : str, optional
The name or index of the column in training_frame that holds the per-row weights.
validation_frame : H2OFrame, optional
H2OFrame with validation data to be scored on while training.
max_runtime_secs : float
Maximum allowed runtime in seconds for model training. Use 0 to disable.
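Example
A minimal sketch of train() with a validation frame and a runtime cap, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.deeplearning import H2ODeepLearningEstimator
>>> h2o.init()
>>> data = h2o.import_file("train.csv")
>>> data["label"] = data["label"].asfactor()
>>> train_fr, valid_fr = data.split_frame(ratios=[0.8], seed=42)
>>> model = H2ODeepLearningEstimator(hidden=[50, 50], epochs=5)
>>> model.train(x=["f1", "f2", "f3"], y="label", training_frame=train_fr,
...             validation_frame=valid_fr, max_runtime_secs=60)
>>> model.logloss(valid=True)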
H2ODeepLearningEstimator
class h2o.estimators.deeplearning.H2ODeepLearningEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
Deep Learning
Build a Deep Neural Network model using CPUs. Builds a feed-forward multilayer artificial neural network on an H2OFrame.
Examples
>>> import h2o
>>> from h2o.estimators.deeplearning import H2ODeepLearningEstimator
>>> h2o.connect()
>>> rows = [[1,2,3,4,0], [2,1,2,4,1], [2,1,4,2,1], [0,1,2,34,1], [2,3,4,1,0]] * 50
>>> fr = h2o.H2OFrame(rows)
>>> fr[4] = fr[4].asfactor()
>>> model = H2ODeepLearningEstimator()
>>> model.train(x=range(4), y=4, training_frame=fr)
-
activation
¶ Enum[“tanh”, “tanh_with_dropout”, “rectifier”, “rectifier_with_dropout”, “maxout”, “maxout_with_dropout”]: Activation function. (Default: “rectifier”)
-
adaptive_rate
¶ bool: Adaptive learning rate. (Default: True)
-
autoencoder
¶ bool: Auto-Encoder. (Default: False)
-
average_activation
¶ float: Average activation for sparse auto-encoder. #Experimental (Default: 0.0)
-
balance_classes
¶ bool: Balance training data class counts via over/under-sampling (for imbalanced data). (Default: False)
-
categorical_encoding
¶ Enum[“auto”, “enum”, “one_hot_internal”, “one_hot_explicit”, “binary”, “eigen”]: Encoding scheme for categorical features (Default: “auto”)
-
checkpoint
¶ str: Model checkpoint to resume training with.
-
class_sampling_factors
¶ List[float]: Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
-
classification_stop
¶ float: Stopping criterion for classification error fraction on training data (-1 to disable). (Default: 0.0)
-
col_major
¶ bool: (Deprecated) Use a column-major weight matrix for the input layer; can speed up forward propagation, but might slow down backpropagation. (Default: False)
-
diagnostics
¶ bool: Enable diagnostics for hidden layers. (Default: True)
-
distribution
¶ Enum[“auto”, “bernoulli”, “multinomial”, “gaussian”, “poisson”, “gamma”, “tweedie”, “laplace”, “quantile”, “huber”]: Distribution function (Default: “auto”)
-
elastic_averaging
¶ bool: Elastic averaging between compute nodes can improve distributed model convergence. #Experimental (Default: False)
-
elastic_averaging_moving_rate
¶ float: Elastic averaging moving rate (only if elastic averaging is enabled). (Default: 0.9)
-
elastic_averaging_regularization
¶ float: Elastic averaging regularization strength (only if elastic averaging is enabled). (Default: 0.001)
-
epochs
¶ float: How many times the dataset should be iterated (streamed), can be fractional. (Default: 10.0)
-
epsilon
¶ float: Adaptive learning rate smoothing factor (to avoid divisions by zero and allow progress). (Default: 1e-08)
-
export_weights_and_biases
¶ bool: Whether to export Neural Network weights and biases to H2O Frames. (Default: False)
-
fast_mode
¶ bool: Enable fast mode (minor approximation in back-propagation). (Default: True)
-
fold_assignment
¶ Enum[“auto”, “random”, “modulo”, “stratified”]: Cross-validation fold assignment scheme, if fold_column is not specified. The ‘Stratified’ option will stratify the folds based on the response variable, for classification problems. (Default: “auto”)
-
fold_column
¶ str: Column with cross-validation fold index assignment per observation.
-
force_load_balance
¶ bool: Force extra load balancing to increase training speed for small datasets (to keep all cores busy). (Default: True)
-
hidden
¶ List[int]: Hidden layer sizes (e.g. [100, 100]). (Default: [200, 200])
-
hidden_dropout_ratios
¶ List[float]: Hidden layer dropout ratios (can improve generalization), specify one value per hidden layer, defaults to 0.5.
-
huber_alpha
¶ float: Desired quantile for Huber/M-regression (threshold between quadratic and linear loss, must be between 0 and 1). (Default: 0.9)
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
initial_biases
¶ List[str]: A list of H2OFrame ids to initialize the bias vectors of this model with.
-
initial_weight_distribution
¶ Enum[“uniform_adaptive”, “uniform”, “normal”]: Initial weight distribution. (Default: “uniform_adaptive”)
-
initial_weight_scale
¶ float: Uniform: -value...value, Normal: stddev. (Default: 1.0)
-
initial_weights
¶ List[str]: A list of H2OFrame ids to initialize the weight matrices of this model with.
-
input_dropout_ratio
¶ float: Input layer dropout ratio (can improve generalization, try 0.1 or 0.2). (Default: 0.0)
-
keep_cross_validation_fold_assignment
¶ bool: Whether to keep the cross-validation fold assignment. (Default: False)
-
keep_cross_validation_predictions
¶ bool: Whether to keep the predictions of the cross-validation models. (Default: False)
-
l1
¶ float: L1 regularization (can add stability and improve generalization, causes many weights to become 0). (Default: 0.0)
-
l2
¶ float: L2 regularization (can add stability and improve generalization, causes many weights to be small). (Default: 0.0)
-
loss
¶ Enum[“automatic”, “cross_entropy”, “quadratic”, “huber”, “absolute”, “quantile”]: Loss function. (Default: “automatic”)
-
max_after_balance_size
¶ float: Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes. (Default: 5.0)
-
max_categorical_features
¶ int: Max. number of categorical features, enforced via hashing. #Experimental (Default: 2147483647)
-
max_confusion_matrix_size
¶ int: Maximum size (# classes) for confusion matrices to be printed in the Logs. (Default: 20)
-
max_hit_ratio_k
¶ int: Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable). (Default: 0)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
max_w2
¶ float: Constraint for squared sum of incoming weights per unit (e.g. for Rectifier). (Default: 3.4028235e+38)
-
mini_batch_size
¶ int: Mini-batch size (smaller leads to better fit, larger can speed up and generalize better). (Default: 1)
-
missing_values_handling
¶ Enum[“skip”, “mean_imputation”]: Handling of missing values. Either Skip or MeanImputation. (Default: “mean_imputation”)
-
momentum_ramp
¶ float: Number of training samples for which momentum increases. (Default: 1000000.0)
-
momentum_stable
¶ float: Final momentum after the ramp is over (try 0.99). (Default: 0.0)
-
momentum_start
¶ float: Initial momentum at the beginning of training (try 0.5). (Default: 0.0)
-
nesterov_accelerated_gradient
¶ bool: Use Nesterov accelerated gradient (recommended). (Default: True)
-
nfolds
¶ int: Number of folds for N-fold cross-validation (0 to disable or >= 2). (Default: 0)
-
offset_column
¶ str: Offset column. This will be added to the combination of columns before applying the link function.
-
overwrite_with_best_model
¶ bool: If enabled, override the final model with the best model found during training. (Default: True)
-
pretrained_autoencoder
¶ str: Pretrained autoencoder model to initialize this model with.
-
quantile_alpha
¶ float: Desired quantile for Quantile regression, must be between 0 and 1. (Default: 0.5)
-
quiet_mode
¶ bool: Enable quiet mode for less output to standard output. (Default: False)
-
rate
¶ float: Learning rate (higher => less stable, lower => slower convergence). (Default: 0.005)
-
rate_annealing
¶ float: Learning rate annealing: rate / (1 + rate_annealing * samples). (Default: 1e-06)
-
rate_decay
¶ float: Learning rate decay factor between layers (N-th layer: rate * rate_decay ^ (n - 1)). (Default: 1.0)
-
regression_stop
¶ float: Stopping criterion for regression error (MSE) on training data (-1 to disable). (Default: 1e-06)
-
replicate_training_data
¶ bool: Replicate the entire training dataset onto every node for faster training on small datasets. (Default: True)
-
reproducible
¶ bool: Force reproducibility on small data (will be slow - only uses 1 thread). (Default: False)
-
response_column
¶ str: Response variable column.
-
rho
¶ float: Adaptive learning rate time decay factor (similarity to prior updates). (Default: 0.99)
-
score_duty_cycle
¶ float: Maximum duty cycle fraction for scoring (lower: more training, higher: more scoring). (Default: 0.1)
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
score_interval
¶ float: Shortest time interval (in seconds) between model scoring. (Default: 5.0)
-
score_training_samples
¶ int: Number of training set samples for scoring (0 for all). (Default: 10000)
-
score_validation_samples
¶ int: Number of validation set samples for scoring (0 for all). (Default: 0)
-
score_validation_sampling
¶ Enum[“uniform”, “stratified”]: Method used to sample validation dataset for scoring. (Default: “uniform”)
-
seed
¶ int: Seed for random numbers (affects sampling) - Note: only reproducible when running single threaded. (Default: -1)
-
shuffle_training_data
¶ bool: Enable shuffling of training data (recommended if training data is replicated and train_samples_per_iteration is close to #nodes x #rows, or if using balance_classes). (Default: False)
-
single_node_mode
¶ bool: Run on a single node for fine-tuning of model parameters. (Default: False)
-
sparse
¶ bool: Sparse data handling (more efficient for data with lots of 0 values). (Default: False)
-
sparsity_beta
¶ float: Sparsity regularization. #Experimental (Default: 0.0)
-
standardize
¶ bool: If enabled, automatically standardize the data. If disabled, the user must provide properly scaled input data. (Default: True)
-
stopping_metric
¶ Enum[“auto”, “deviance”, “logloss”, “mse”, “auc”, “lift_top_group”, “r2”, “misclassification”, “mean_per_class_error”]: Metric to use for early stopping (AUTO: logloss for classification, deviance for regression) (Default: “auto”)
-
stopping_rounds
¶ int: Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable) (Default: 5)
-
stopping_tolerance
¶ float: Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much) (Default: 0.0)
-
target_ratio_comm_to_comp
¶ float: Target ratio of communication overhead to computation. Only for multi-node operation and train_samples_per_iteration = -2 (auto-tuning). (Default: 0.05)
-
train_samples_per_iteration
¶ int: Number of training samples (globally) per MapReduce iteration. Special values are 0: one epoch, -1: all available data (e.g., replicated training data), -2: automatic. (Default: -2)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
tweedie_power
¶ float: Tweedie power for Tweedie regression, must be between 1 and 2. (Default: 1.5)
-
use_all_factor_levels
¶ bool: Use all factor levels of categorical variables. Otherwise, the first factor level is omitted (without loss of accuracy). Useful for variable importances and auto-enabled for autoencoder. (Default: True)
-
validation_frame
¶ str: Id of the validation data frame.
-
variable_importances
¶ bool: Compute variable importances for input features (Gedeon method) - can be slow for large networks. (Default: False)
-
weights_column
¶ str: Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed.
H2OAutoEncoderEstimator
class h2o.estimators.deeplearning.H2OAutoEncoderEstimator(**kwargs)
Bases: h2o.estimators.deeplearning.H2ODeepLearningEstimator
Examples
>>> import h2o as ml
>>> from h2o.estimators.deeplearning import H2OAutoEncoderEstimator
>>> ml.init()
>>> rows = [[1,2,3,4,0]*50, [2,1,2,4,1]*50, [2,1,4,2,1]*50, [0,1,2,34,1]*50, [2,3,4,1,0]*50]
>>> fr = ml.H2OFrame(rows)
>>> fr[4] = fr[4].asfactor()
>>> model = H2OAutoEncoderEstimator()
>>> model.train(x=range(4), training_frame=fr)
H2ORandomForestEstimator
class h2o.estimators.random_forest.H2ORandomForestEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
Distributed Random Forest
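Example
A minimal sketch of training a Distributed Random Forest classifier, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.random_forest import H2ORandomForestEstimator
>>> h2o.init()
>>> data = h2o.import_file("train.csv")
>>> data["label"] = data["label"].asfactor()
>>> drf = H2ORandomForestEstimator(ntrees=100, max_depth=20, seed=1234)
>>> drf.train(x=["f1", "f2", "f3"], y="label", training_frame=data)
>>> drf.auc(train=True)    # training AUC for a binomial response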
-
balance_classes
¶ bool: Balance training data class counts via over/under-sampling (for imbalanced data). (Default: False)
-
binomial_double_trees
¶ bool: For binary classification: Build 2x as many trees (one per class) - can lead to higher accuracy. (Default: False)
-
build_tree_one_node
¶ bool: Run on one node only; no network overhead but fewer cpus used. Suitable for small datasets. (Default: False)
-
categorical_encoding
¶ Enum[“auto”, “enum”, “one_hot_internal”, “one_hot_explicit”, “binary”, “eigen”]: Encoding scheme for categorical features (Default: “auto”)
-
checkpoint
¶ str: Model checkpoint to resume training with.
-
class_sampling_factors
¶ List[float]: Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
-
col_sample_rate_change_per_level
¶ float: Relative change of the column sampling rate for every level (from 0.0 to 2.0) (Default: 1.0)
-
col_sample_rate_per_tree
¶ float: Column sample rate per tree (from 0.0 to 1.0) (Default: 1.0)
-
fold_assignment
¶ Enum[“auto”, “random”, “modulo”, “stratified”]: Cross-validation fold assignment scheme, if fold_column is not specified. The ‘Stratified’ option will stratify the folds based on the response variable, for classification problems. (Default: “auto”)
-
fold_column
¶ str: Column with cross-validation fold index assignment per observation.
-
histogram_type
¶ Enum[“auto”, “uniform_adaptive”, “random”, “quantiles_global”, “round_robin”]: What type of histogram to use for finding optimal split points (Default: “auto”)
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
keep_cross_validation_fold_assignment
¶ bool: Whether to keep the cross-validation fold assignment. (Default: False)
-
keep_cross_validation_predictions
¶ bool: Whether to keep the predictions of the cross-validation models. (Default: False)
-
max_after_balance_size
¶ float: Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes. (Default: 5.0)
-
max_confusion_matrix_size
¶ int: Maximum size (# classes) for confusion matrices to be printed in the Logs (Default: 20)
-
max_depth
¶ int: Maximum tree depth. (Default: 20)
-
max_hit_ratio_k
¶ int: Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable) (Default: 0)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
min_rows
¶ float: Fewest allowed (weighted) observations in a leaf (in R called ‘nodesize’). (Default: 1.0)
-
min_split_improvement
¶ float: Minimum relative improvement in squared error reduction for a split to happen (Default: 1e-05)
-
mtries
¶ int: Number of variables randomly sampled as candidates at each split. If set to -1, defaults to sqrt{p} for classification and p/3 for regression (where p is the # of predictors). (Default: -1)
-
nbins
¶ int: For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best point (Default: 20)
-
nbins_cats
¶ int: For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher values can lead to more overfitting. (Default: 1024)
-
nbins_top_level
¶ int: For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease by factor of two per level (Default: 1024)
-
nfolds
¶ int: Number of folds for N-fold cross-validation (0 to disable or >= 2). (Default: 0)
-
ntrees
¶ int: Number of trees. (Default: 50)
-
offset_column
¶ str: Offset column. This will be added to the combination of columns before applying the link function.
-
r2_stopping
¶ float: r2_stopping is no longer supported and will be ignored if set - please use stopping_rounds, stopping_metric, and stopping_tolerance instead. Previous versions of H2O would stop making trees when the R^2 metric equaled or exceeded this value. (Default: 1.79769313486e+308)
-
response_column
¶ str: Response variable column.
-
sample_rate
¶ float: Row sample rate per tree (from 0.0 to 1.0) (Default: 0.632000029087)
-
sample_rate_per_class
¶ List[float]: Row sample rate per tree per class (from 0.0 to 1.0)
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
score_tree_interval
¶ int: Score the model after every so many trees. Disabled if set to 0. (Default: 0)
-
seed
¶ int: Seed for pseudo random number generator (if applicable) (Default: -1)
-
stopping_metric
¶ Enum[“auto”, “deviance”, “logloss”, “mse”, “auc”, “lift_top_group”, “r2”, “misclassification”, “mean_per_class_error”]: Metric to use for early stopping (AUTO: logloss for classification, deviance for regression) (Default: “auto”)
-
stopping_rounds
¶ int: Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable) (Default: 0)
-
stopping_tolerance
¶ float: Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much) (Default: 0.001)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
validation_frame
¶ str: Id of the validation data frame.
-
weights_column
¶ str: Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed.
H2OGradientBoostingEstimator
class h2o.estimators.gbm.H2OGradientBoostingEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
Gradient Boosting Machine
Builds gradient boosted trees on a parsed data set, for regression or classification. The default distribution function will guess the model type based on the response column type. Otherwise, the response column must be an enum for “bernoulli” or “multinomial”, and numeric for all other distributions.
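Example
A minimal sketch of a GBM regression model with metric-based early stopping, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.gbm import H2OGradientBoostingEstimator
>>> h2o.init()
>>> data = h2o.import_file("houses.csv")
>>> train_fr, valid_fr = data.split_frame(ratios=[0.8], seed=1)
>>> gbm = H2OGradientBoostingEstimator(ntrees=500, learn_rate=0.05, max_depth=5,
...                                    stopping_rounds=3, stopping_metric="deviance",
...                                    stopping_tolerance=1e-3)
>>> gbm.train(x=["sqft", "beds", "baths"], y="price",
...           training_frame=train_fr, validation_frame=valid_fr)
>>> gbm.mse(valid=True)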
-
balance_classes
¶ bool: Balance training data class counts via over/under-sampling (for imbalanced data). (Default: False)
-
build_tree_one_node
¶ bool: Run on one node only; no network overhead but fewer cpus used. Suitable for small datasets. (Default: False)
-
categorical_encoding
¶ Enum[“auto”, “enum”, “one_hot_internal”, “one_hot_explicit”, “binary”, “eigen”]: Encoding scheme for categorical features (Default: “auto”)
-
checkpoint
¶ str: Model checkpoint to resume training with.
-
class_sampling_factors
¶ List[float]: Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
-
col_sample_rate
¶ float: Column sample rate (from 0.0 to 1.0) (Default: 1.0)
-
col_sample_rate_change_per_level
¶ float: Relative change of the column sampling rate for every level (from 0.0 to 2.0) (Default: 1.0)
-
col_sample_rate_per_tree
¶ float: Column sample rate per tree (from 0.0 to 1.0) (Default: 1.0)
-
distribution
¶ Enum[“auto”, “bernoulli”, “multinomial”, “gaussian”, “poisson”, “gamma”, “tweedie”, “laplace”, “quantile”, “huber”]: Distribution function (Default: “auto”)
-
fold_assignment
¶ Enum[“auto”, “random”, “modulo”, “stratified”]: Cross-validation fold assignment scheme, if fold_column is not specified. The ‘Stratified’ option will stratify the folds based on the response variable, for classification problems. (Default: “auto”)
-
fold_column
¶ str: Column with cross-validation fold index assignment per observation.
-
histogram_type
¶ Enum[“auto”, “uniform_adaptive”, “random”, “quantiles_global”, “round_robin”]: What type of histogram to use for finding optimal split points (Default: “auto”)
-
huber_alpha
¶ float: Desired quantile for Huber/M-regression (threshold between quadratic and linear loss, must be between 0 and 1). (Default: 0.9)
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
keep_cross_validation_fold_assignment
¶ bool: Whether to keep the cross-validation fold assignment. (Default: False)
-
keep_cross_validation_predictions
¶ bool: Whether to keep the predictions of the cross-validation models. (Default: False)
-
learn_rate
¶ float: Learning rate (from 0.0 to 1.0) (Default: 0.1)
-
learn_rate_annealing
¶ float: Scale the learning rate by this factor after each tree (e.g., 0.99 or 0.999) (Default: 1.0)
-
max_abs_leafnode_pred
¶ float: Maximum absolute value of a leaf node prediction (Default: 1.79769313486e+308)
-
max_after_balance_size
¶ float: Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes. (Default: 5.0)
-
max_confusion_matrix_size
¶ int: Maximum size (# classes) for confusion matrices to be printed in the Logs (Default: 20)
-
max_depth
¶ int: Maximum tree depth. (Default: 5)
-
max_hit_ratio_k
¶ int: Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable) (Default: 0)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
min_rows
¶ float: Fewest allowed (weighted) observations in a leaf (in R called ‘nodesize’). (Default: 10.0)
-
min_split_improvement
¶ float: Minimum relative improvement in squared error reduction for a split to happen (Default: 1e-05)
-
nbins
¶ int: For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best point (Default: 20)
-
nbins_cats
¶ int: For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher values can lead to more overfitting. (Default: 1024)
-
nbins_top_level
¶ int: For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease by factor of two per level (Default: 1024)
-
nfolds
¶ int: Number of folds for N-fold cross-validation (0 to disable or >= 2). (Default: 0)
-
ntrees
¶ int: Number of trees. (Default: 50)
-
offset_column
¶ str: Offset column. This will be added to the combination of columns before applying the link function.
-
pred_noise_bandwidth
¶ float: Bandwidth (sigma) of Gaussian multiplicative noise ~N(1,sigma) for tree node predictions (Default: 0.0)
-
quantile_alpha
¶ float: Desired quantile for Quantile regression, must be between 0 and 1. (Default: 0.5)
-
r2_stopping
¶ float: r2_stopping is no longer supported and will be ignored if set - please use stopping_rounds, stopping_metric, and stopping_tolerance instead. Previous versions of H2O would stop making trees when the R^2 metric equaled or exceeded this value. (Default: 1.79769313486e+308)
-
response_column
¶ str: Response variable column.
-
sample_rate
¶ float: Row sample rate per tree (from 0.0 to 1.0) (Default: 1.0)
-
sample_rate_per_class
¶ List[float]: Row sample rate per tree per class (from 0.0 to 1.0)
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
score_tree_interval
¶ int: Score the model after every so many trees. Disabled if set to 0. (Default: 0)
-
seed
¶ int: Seed for pseudo random number generator (if applicable) (Default: -1)
-
stopping_metric
¶ Enum[“auto”, “deviance”, “logloss”, “mse”, “auc”, “lift_top_group”, “r2”, “misclassification”, “mean_per_class_error”]: Metric to use for early stopping (AUTO: logloss for classification, deviance for regression) (Default: “auto”)
-
stopping_rounds
¶ int: Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable) (Default: 0)
-
stopping_tolerance
¶ float: Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much) (Default: 0.001)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
tweedie_power
¶ float: Tweedie power for Tweedie regression, must be between 1 and 2. (Default: 1.5)
-
validation_frame
¶ str: Id of the validation data frame.
-
weights_column
¶ str: Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed.
H2OGeneralizedLinearEstimator
class h2o.estimators.glm.H2OGeneralizedLinearEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
Generalized Linear Modeling
Fits a generalized linear model, specified by a response variable, a set of predictors, and a description of the error distribution.
Returns: A subclass of ModelBase. The specific subclass depends on the machine learning task at hand (for binomial classification an H2OBinomialModel is returned; for regression an H2ORegressionModel is returned). The default print-out of the models is shown, but further GLM-specific information can be queried out of the object. Upon completion of the GLM, the resulting object has coefficients, normalized coefficients, residual/null deviance, AIC, and a host of model metrics including MSE, AUC (for logistic regression), degrees of freedom, and confusion matrices.
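Example
A minimal sketch of a binomial GLM (logistic regression) with a lambda search, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.glm import H2OGeneralizedLinearEstimator
>>> h2o.init()
>>> data = h2o.import_file("credit.csv")
>>> data["default"] = data["default"].asfactor()
>>> glm = H2OGeneralizedLinearEstimator(family="binomial", alpha=[0.5],
...                                     lambda_search=True, nlambdas=50)
>>> glm.train(x=["income", "age", "balance"], y="default", training_frame=data)
>>> glm.coef()             # fitted coefficients
>>> glm.auc(train=True)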
-
Lambda
¶ [DEPRECATED] Use self.lambda_ instead
-
alpha
¶ List[float]: distribution of regularization between L1 and L2.
-
balance_classes
¶ bool: Balance training data class counts via over/under-sampling (for imbalanced data). (Default: False)
-
beta_constraints
¶ str: beta constraints
-
beta_epsilon
¶ float: Converge if beta changes less (using L-infinity norm) than beta epsilon; ONLY applies to the IRLSM solver. (Default: 0.0001)
-
class_sampling_factors
¶ List[float]: Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
-
compute_p_values
¶ bool: Request p-values computation; p-values work only with the IRLSM solver and no regularization. (Default: False)
-
early_stopping
¶ bool: stop early when there is no more relative improvement on train or validation (if provided) (Default: True)
-
family
¶ Enum[“gaussian”, “binomial”, “multinomial”, “poisson”, “gamma”, “tweedie”]: Family. Use binomial for classification with logistic regression, others are for regression problems. (Default: “gaussian”)
-
fold_assignment
¶ Enum[“auto”, “random”, “modulo”, “stratified”]: Cross-validation fold assignment scheme, if fold_column is not specified. The ‘Stratified’ option will stratify the folds based on the response variable, for classification problems. (Default: “auto”)
-
fold_column
¶ str: Column with cross-validation fold index assignment per observation.
-
static
getGLMRegularizationPath
(model)[source]¶ Extract the full regularization path explored during a lambda search from a GLM model.
@param model - source lambda search model
-
gradient_epsilon
¶ float: Converge if objective changes less (using L-infinity norm) than this, ONLY applies to L-BFGS solver. Default indicates: If lambda_search is set to False and lambda is equal to zero, the default value of gradient_epsilon is equal to .000001, otherwise the default value is .0001. If lambda_search is set to True, the conditional values above are 1E-8 and 1E-6 respectively. (Default: -1.0)
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
interactions
¶ List[str]: A list of predictor column indices to interact. All pairwise combinations will be computed for the list.
-
intercept
¶ bool: include constant term in the model (Default: True)
-
keep_cross_validation_fold_assignment
¶ bool: Whether to keep the cross-validation fold assignment. (Default: False)
-
keep_cross_validation_predictions
¶ bool: Whether to keep the predictions of the cross-validation models. (Default: False)
-
lambda_
¶ float, List[float]: Regularization strength. (The capitalized alias Lambda is deprecated; use lambda_ instead.)
-
lambda_min_ratio
¶ float: Min lambda used in lambda search, specified as a ratio of lambda_max. Default indicates: if the number of observations is greater than the number of variables then lambda_min_ratio is set to 0.0001; if the number of observations is less than the number of variables then lambda_min_ratio is set to 0.01. (Default: -1.0)
-
lambda_search
¶ bool: use lambda search starting at lambda max, given lambda is then interpreted as lambda min (Default: False)
-
link
¶ Enum[“family_default”, “identity”, “logit”, “log”, “inverse”, “tweedie”]: (Default: “family_default”)
-
static
makeGLMModel
(model, coefs, threshold=0.5)[source]¶ Create a custom GLM model using the given coefficients. A source model trained on the dataset must be passed in so that dataset information can be extracted from it.
@param model - source model, used for extracting dataset information
@param coefs - dictionary containing model coefficients
@param threshold - (optional, only for binomial) decision threshold used for classification
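Example
A hedged sketch of getGLMRegularizationPath() and makeGLMModel(), assuming glm is a GLM trained with lambda_search=True (as in the earlier sketch) and that the regularization path is returned as a dictionary-like object; the coefficient names and values passed to makeGLMModel are hypothetical placeholders.
>>> from h2o.estimators.glm import H2OGeneralizedLinearEstimator
>>> path = H2OGeneralizedLinearEstimator.getGLMRegularizationPath(glm)
>>> path.keys()            # inspect what the returned path object contains
>>> custom = H2OGeneralizedLinearEstimator.makeGLMModel(
...     model=glm,
...     coefs={"Intercept": -1.0, "income": 0.002, "age": 0.03},  # hypothetical coefficients
...     threshold=0.5)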
-
max_active_predictors
¶ int: Maximum number of active predictors during computation. Use as a stopping criterion to prevent expensive model building with many predictors. Default indicates: If the IRLSM solver is used, the value of max_active_predictors is set to 7000 otherwise it is set to 100000000. (Default: -1)
-
max_after_balance_size
¶ float: Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes. (Default: 5.0)
-
max_confusion_matrix_size
¶ int: Maximum size (# classes) for confusion matrices to be printed in the Logs (Default: 20)
-
max_hit_ratio_k
¶ int: Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable) (Default: 0)
-
max_iterations
¶ int: Maximum number of iterations (Default: -1)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
missing_values_handling
¶ Enum[“skip”, “mean_imputation”]: Handling of missing values. Either Skip or MeanImputation. (Default: “mean_imputation”)
-
nfolds
¶ int: Number of folds for N-fold cross-validation (0 to disable or >= 2). (Default: 0)
-
nlambdas
¶ int: Number of lambdas to be used in a search. Default indicates: If alpha is zero, with lambda search set to True, the value of nlamdas is set to 30 (fewer lambdas are needed for ridge regression) otherwise it is set to 100. (Default: -1)
-
non_negative
¶ bool: Restrict coefficients (not intercept) to be non-negative (Default: False)
-
objective_epsilon
¶ float: Converge if objective value changes less than this. Default indicates: If lambda_search is set to True the value of objective_epsilon is set to .0001. If the lambda_search is set to False and lambda is equal to zero, the value of objective_epsilon is set to .000001, for any other value of lambda the default value of objective_epsilon is set to .0001. (Default: -1.0)
-
offset_column
¶ str: Offset column. This will be added to the combination of columns before applying the link function.
-
prior
¶ float: prior probability for y==1. To be used only for logistic regression iff the data has been sampled and the mean of response does not reflect reality. (Default: -1.0)
-
remove_collinear_columns
¶ bool: in case of linearly dependent columns remove some of the dependent columns (Default: False)
-
response_column
¶ str: Response variable column.
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
seed
¶ int: Seed for pseudo random number generator (if applicable) (Default: -1)
-
solver
¶ Enum[“auto”, “irlsm”, “l_bfgs”, “coordinate_descent_naive”, “coordinate_descent”]: AUTO will set the solver based on given data and the other parameters. IRLSM is fast on on problems with small number of predictors and for lambda-search with L1 penalty, L_BFGS scales better for datasets with many columns. Coordinate descent is experimental (beta). (Default: “auto”)
-
standardize
¶ bool: Standardize numeric columns to have zero mean and unit variance (Default: True)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
tweedie_link_power
¶ float: Tweedie link power (Default: 1.0)
-
tweedie_variance_power
¶ float: Tweedie variance power (Default: 0.0)
-
validation_frame
¶ str: Id of the validation data frame.
-
weights_column
¶ str: Column with observation weights. Giving some observation a weight of zero is equivalent to excluding it from the dataset; giving an observation a relative weight of 2 is equivalent to repeating that row twice. Negative weights are not allowed.
H2OGeneralizedLowRankEstimator
class h2o.estimators.glrm.H2OGeneralizedLowRankEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
Generalized Low Rank Modeling
Builds a generalized low rank model of an H2O dataset.
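Example
A minimal sketch of a rank-5 GLRM decomposition with quadratic loss and L2 regularization on both factors, assuming a running H2O cluster; the file path is a hypothetical placeholder.
>>> import h2o
>>> from h2o.estimators.glrm import H2OGeneralizedLowRankEstimator
>>> h2o.init()
>>> data = h2o.import_file("ratings.csv")
>>> glrm = H2OGeneralizedLowRankEstimator(k=5, loss="quadratic",
...                                       regularization_x="l2", regularization_y="l2",
...                                       gamma_x=0.1, gamma_y=0.1, max_iterations=500)
>>> glrm.train(training_frame=data)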
-
expand_user_y
¶ bool: Expand categorical columns in user-specified initial Y (Default: True)
-
gamma_x
¶ float: Regularization weight on X matrix (Default: 0.0)
-
gamma_y
¶ float: Regularization weight on Y matrix (Default: 0.0)
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
impute_original
¶ bool: Reconstruct original training data by reversing transform (Default: False)
-
init
¶ Enum[“random”, “svd”, “plus_plus”, “user”]: Initialization mode (Default: “plus_plus”)
-
init_step_size
¶ float: Initial step size (Default: 1.0)
-
k
¶ int: Rank of matrix approximation (Default: 1)
-
loading_name
¶ str: Frame key to save resulting X
-
loss
¶ Enum[“quadratic”, “absolute”, “huber”, “poisson”, “hinge”, “logistic”, “periodic”]: Numeric loss function (Default: “quadratic”)
-
loss_by_col
¶ List[Enum[“quadratic”, “absolute”, “huber”, “poisson”, “hinge”, “logistic”, “periodic”, “categorical”, “ordinal”]]: Loss function by column (override)
-
loss_by_col_idx
¶ List[int]: Loss function by column index (override)
-
max_iterations
¶ int: Maximum number of iterations (Default: 1000)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
max_updates
¶ int: Maximum number of updates (Default: 2000)
-
min_step_size
¶ float: Minimum step size (Default: 0.0001)
-
multi_loss
¶ Enum[“categorical”, “ordinal”]: Categorical loss function (Default: “categorical”)
-
period
¶ int: Length of period (only used with periodic loss function) (Default: 1)
-
recover_svd
¶ bool: Recover singular values and eigenvectors of XY (Default: False)
-
regularization_x
¶ Enum[“none”, “quadratic”, “l2”, “l1”, “non_negative”, “one_sparse”, “unit_one_sparse”, “simplex”]: Regularization function for X matrix (Default: “none”)
-
regularization_y
¶ Enum[“none”, “quadratic”, “l2”, “l1”, “non_negative”, “one_sparse”, “unit_one_sparse”, “simplex”]: Regularization function for Y matrix (Default: “none”)
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
seed
¶ int: RNG seed for initialization (Default: -1)
-
svd_method
¶ Enum[“gram_s_v_d”, “power”, “randomized”]: Method for computing SVD during initialization (Caution: Power and Randomized are currently experimental and unstable) (Default: “randomized”)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
transform
¶ Enum[“none”, “standardize”, “normalize”, “demean”, “descale”]: Transformation of training data (Default: “none”)
-
user_x
¶ str: User-specified initial X
-
user_y
¶ str: User-specified initial Y
-
validation_frame
¶ str: Id of the validation data frame.
H2OKMeansEstimator
class h2o.estimators.kmeans.H2OKMeansEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
K-means
Performs k-means clustering on an H2O dataset.
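Example
A minimal sketch of k-means clustering on standardized numeric columns, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.kmeans import H2OKMeansEstimator
>>> h2o.init()
>>> data = h2o.import_file("measurements.csv")
>>> km = H2OKMeansEstimator(k=3, standardize=True, seed=99)
>>> km.train(x=["height", "weight", "age"], training_frame=data)
>>> clusters = km.predict(data)    # per-row cluster assignments
>>> clusters.head()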
-
categorical_encoding
¶ Enum[“auto”, “enum”, “one_hot_internal”, “one_hot_explicit”, “binary”, “eigen”]: Encoding scheme for categorical features (Default: “auto”)
-
estimate_k
¶ bool: Whether to estimate the number of clusters (<=k) iteratively and deterministically. (Default: False)
-
fold_assignment
¶ Enum[“auto”, “random”, “modulo”, “stratified”]: Cross-validation fold assignment scheme, if fold_column is not specified. The ‘Stratified’ option will stratify the folds based on the response variable, for classification problems. (Default: “auto”)
-
fold_column
¶ str: Column with cross-validation fold index assignment per observation.
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
init
¶ Enum[“random”, “plus_plus”, “furthest”, “user”]: Initialization mode (Default: “furthest”)
-
k
¶ int: The max. number of clusters. If estimate_k is disabled, the model will find k centroids, otherwise it will find up to k centroids. (Default: 1)
-
keep_cross_validation_fold_assignment
¶ bool: Whether to keep the cross-validation fold assignment. (Default: False)
-
keep_cross_validation_predictions
¶ bool: Whether to keep the predictions of the cross-validation models. (Default: False)
-
max_iterations
¶ int: Maximum training iterations (if estimate_k is enabled, then this is for each inner Lloyds iteration) (Default: 10)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
nfolds
¶ int: Number of folds for N-fold cross-validation (0 to disable or >= 2). (Default: 0)
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
seed
¶ int: RNG Seed (Default: -1)
-
standardize
¶ bool: Standardize columns before computing distances (Default: True)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
user_points
¶ str: User-specified points
-
validation_frame
¶ str: Id of the validation data frame.
H2ONaiveBayesEstimator
class h2o.estimators.naive_bayes.H2ONaiveBayesEstimator(**kwargs)
Bases: h2o.estimators.estimator_base.H2OEstimator
Naive Bayes
The naive Bayes classifier assumes independence between predictor variables conditional on the response, and a Gaussian distribution of numeric predictors with mean and standard deviation computed from the training dataset. When building a naive Bayes classifier, every row in the training dataset that contains at least one NA will be skipped completely. If the test dataset has missing values, then those predictors are omitted in the probability calculation during prediction.
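Example
A minimal sketch of a naive Bayes classifier with Laplace smoothing and 5-fold cross-validation, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.naive_bayes import H2ONaiveBayesEstimator
>>> h2o.init()
>>> data = h2o.import_file("emails.csv")
>>> data["spam"] = data["spam"].asfactor()
>>> nb = H2ONaiveBayesEstimator(laplace=1, nfolds=5, seed=7)
>>> nb.train(x=["word_count", "has_link", "sender_score"], y="spam", training_frame=data)
>>> nb.logloss(xval=True)          # cross-validated log loss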
-
balance_classes
¶ bool: Balance training data class counts via over/under-sampling (for imbalanced data). (Default: False)
-
class_sampling_factors
¶ List[float]: Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
-
compute_metrics
¶ bool: Compute metrics on training data (Default: True)
-
eps_prob
¶ float: Cutoff below which probability is replaced with min_prob (Default: 0.0)
-
eps_sdev
¶ float: Cutoff below which standard deviation is replaced with min_sdev (Default: 0.0)
-
fold_assignment
¶ Enum[“auto”, “random”, “modulo”, “stratified”]: Cross-validation fold assignment scheme, if fold_column is not specified. The ‘Stratified’ option will stratify the folds based on the response variable, for classification problems. (Default: “auto”)
-
fold_column
¶ str: Column with cross-validation fold index assignment per observation.
-
ignore_const_cols
¶ bool: Ignore constant columns. (Default: True)
-
ignored_columns
¶ List[str]: Names of columns to ignore for training.
-
keep_cross_validation_fold_assignment
¶ bool: Whether to keep the cross-validation fold assignment. (Default: False)
-
keep_cross_validation_predictions
¶ bool: Whether to keep the predictions of the cross-validation models. (Default: False)
-
laplace
¶ float: Laplace smoothing parameter (Default: 0.0)
-
max_after_balance_size
¶ float: Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes. (Default: 5.0)
-
max_confusion_matrix_size
¶ int: Maximum size (# classes) for confusion matrices to be printed in the Logs (Default: 20)
-
max_hit_ratio_k
¶ int: Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable) (Default: 0)
-
max_runtime_secs
¶ float: Maximum allowed runtime in seconds for model training. Use 0 to disable. (Default: 0.0)
-
min_prob
¶ float: Min. probability to use for observations with not enough data (Default: 0.001)
-
min_sdev
¶ float: Min. standard deviation to use for observations with not enough data (Default: 0.001)
-
nfolds
¶ int: Number of folds for N-fold cross-validation (0 to disable or >= 2). (Default: 0)
-
response_column
¶ str: Response variable column.
-
score_each_iteration
¶ bool: Whether to score during each iteration of model training. (Default: False)
-
seed
¶ int: Seed for pseudo random number generator (only used for cross-validation and fold_assignment=”Random” or “AUTO”) (Default: -1)
-
training_frame
¶ str: Id of the training data frame (Not required, to allow initial validation of model parameters).
-
validation_frame
¶ str: Id of the validation data frame.
H2OGridSearch
This module implements the grid search class. All grid search objects inherit from this class.
license: Apache License Version 2.0 (see LICENSE for details)
class h2o.grid.grid_search.H2OGridSearch(model, hyper_params, grid_id=None, search_criteria=None)
Bases: h2o.utils.backward_compatibility.BackwardsCompatibleBase
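Example
A minimal sketch of a GBM hyperparameter grid search, assuming a running H2O cluster; the file path and column names are hypothetical placeholders.
>>> import h2o
>>> from h2o.estimators.gbm import H2OGradientBoostingEstimator
>>> from h2o.grid.grid_search import H2OGridSearch
>>> h2o.init()
>>> data = h2o.import_file("train.csv")
>>> data["label"] = data["label"].asfactor()
>>> hyper_params = {"max_depth": [3, 5, 7], "learn_rate": [0.01, 0.1]}
>>> gs = H2OGridSearch(H2OGradientBoostingEstimator(ntrees=100), hyper_params=hyper_params)
>>> gs.train(x=["f1", "f2", "f3"], y="label", training_frame=data)
>>> gs.get_grid(sort_by="logloss", decreasing=False)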
-
aic
(train=False, valid=False, xval=False)[source]¶ Get the AIC(s). If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the AIC value for the training data.
- valid – If valid is True, then return the AIC value for the validation data.
- xval – If xval is True, then return the AIC value for the cross-validation data.
Returns: The AIC.
-
auc
(train=False, valid=False, xval=False)[source]¶ Get the AUC(s). If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the AUC value for the training data.
- valid – If valid is True, then return the AUC value for the validation data.
- xval – If xval is True, then return the AUC value for the cross-validation data.
Returns: The AUC.
-
biases
(vector_id=0)[source]¶ Return the frame for the respective bias vector.
:param vector_id: an integer, ranging from 0 to the number of layers, that specifies the bias vector to return.
:return: an H2OFrame representing the bias vector identified by vector_id
-
deepfeatures
(test_data, layer)[source]¶ Obtain a hidden layer’s details on a dataset.
Parameters: test_data: H2OFrame
Data to create a feature space on
layer: int
index of the hidden layer
Returns: A dictionary of hidden layer details for each model.
-
failed_params
¶
-
failed_raw_params
¶
-
failure_details
¶
-
failure_stack_traces
¶
-
get_grid
(sort_by=None, decreasing=None)[source]¶ Retrieve an H2OGridSearch instance. Optionally specify a metric by which to sort models and a sort order.
Parameters: sort_by : str, optional
A metric by which to sort the models in the grid space. Choices are “logloss”, “residual_deviance”, “mse”, “auc”, “r2”, “accuracy”, “precision”, “recall”, “f1”, etc.
decreasing : bool, optional
Sort the models in decreasing order of metric if true, otherwise sort in increasing order (default).
Returns: A new H2OGridSearch instance, optionally sorted on the specified metric.
-
get_hyperparams
(id, display=True)[source]¶ Get the hyperparameters of a model explored by grid search.
Parameters: id: str
The model id of the model with hyperparameters of interest.
display: boolean
Flag to indicate whether to display the hyperparameter names.
Returns: A list of the hyperparameters for the specified model.
-
get_hyperparams_dict
(id, display=True)[source]¶ Derive and return the model parameters used to train this particular grid search model.
Parameters: id: str
The model id of the model with hyperparameters of interest.
display: boolean
Flag to indicate whether to display the hyperparameter names.
Returns: A dict of model parameters derived from the hyperparameters used to train this particular model.
-
get_xval_models
(key=None)[source]¶ Return a Model object.
Parameters: key : str
If None, return all cross-validated models; otherwise return the model that the key points to.
Returns: A model or list of models.
-
gini
(train=False, valid=False, xval=False)[source]¶ Get the Gini Coefficient(s).
If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the Gini Coefficient value for the training data.
- valid – If valid is True, then return the Gini Coefficient value for the validation data.
- xval – If xval is True, then return the Gini Coefficient value for the cross validation data.
Returns: The Gini Coefficient for this binomial model.
-
grid_id
¶ A key that identifies this grid search object in H2O.
-
hyper_names
¶
-
logloss
(train=False, valid=False, xval=False)[source]¶ Get the log loss value(s). If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the Log Loss value for the training data.
- valid – If valid is True, then return the Log Loss value for the validation data.
- xval – If xval is True, then return the Log Loss value for the cross validation data.
Returns: The Log Loss for this binomial model.
-
mean_residual_deviance
(train=False, valid=False, xval=False)[source]¶ Get the mean residual deviance(s). If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the Mean Residual Deviance value for the training data.
- valid – If valid is True, then return the Mean Residual Deviance value for the validation data.
- xval – If xval is True, then return the Mean Residual Deviance value for the cross validation data.
Returns: The Mean Residual Deviance for this regression model.
-
model_ids
¶
-
model_performance
(test_data=None, train=False, valid=False, xval=False)[source]¶ Generate model metrics for this model on test_data.
Parameters: - test_data – Data set against which model metrics shall be computed. All three of the train, valid, and xval arguments are ignored if test_data is not None.
- train – Report the training metrics for the model.
- valid – Report the validation metrics for the model.
- xval – Report the cross-validation metrics for the model.
Returns: An object of class H2OModelMetrics.
-
mse
(train=False, valid=False, xval=False)[source]¶ Get the MSE(s). If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the MSE value for the training data.
- valid – If valid is True, then return the MSE value for the validation data.
- xval – If xval is True, then return the MSE value for the cross validation data.
Returns: The MSE for this regression model.
-
null_degrees_of_freedom
(train=False, valid=False, xval=False)[source]¶ Retrieve the null degrees of freedom if this model has the attribute, or None otherwise.
Parameters: - train – Get the null dof for the training set. If both train and valid are False, then train is selected by default.
- valid – Get the null dof for the validation set. If both train and valid are True, then train is selected by default.
Returns: Return the null dof, or None if it is not present.
-
null_deviance
(train=False, valid=False, xval=False)[source]¶ Retrieve the null deviance if this model has the attribute, or None otherwise.
Parameters: - train – Get the null deviance for the training set. If both train and valid are False, then train is selected by default.
- valid – Get the null deviance for the validation set. If both train and valid are True, then train is selected by default.
Returns: Return the null deviance, or None if it is not present.
-
pprint_coef
()[source]¶ Pretty print the coefficients table (includes normalized coefficients). :return: None
-
predict
(test_data)[source]¶ Predict on a dataset.
Parameters: test_data : H2OFrame
Data to be predicted on.
Returns: H2OFrame filled with predictions.
-
r2
(train=False, valid=False, xval=False)[source]¶ Return the R^2 for this regression model.
The R^2 value is defined to be 1 - MSE/var, where var is computed as sigma*sigma.
If all are False (default), then return the training metric value. If more than one option is set to True, then return a dictionary of metrics where the keys are “train”, “valid”, and “xval”.
Parameters: - train – If train is True, then return the R^2 value for the training data.
- valid – If valid is True, then return the R^2 value for the validation data.
- xval – If xval is True, then return the R^2 value for the cross validation data.
Returns: The R^2 for this regression model.
-
residual_degrees_of_freedom
(train=False, valid=False, xval=False)[source]¶ Retrieve the residual degrees of freedom if this model has the attribute, or None otherwise.
Parameters: - train – Get the residual dof for the training set. If both train and valid are False, then train is selected by default.
- valid – Get the residual dof for the validation set. If both train and valid are True, then train is selected by default.
Returns: Return the residual dof, or None if it is not present.
-
residual_deviance
(train=False, valid=False, xval=False)[source]¶ Retrieve the residual deviance if this model has the attribute, or None otherwise.
Parameters: train : boolean, optional, default=True
Get the residual deviance for the training set. If both train and valid are False, then train is selected by default.
valid: boolean, optional
Get the residual deviance for the validation set. If both train and valid are True, then train is selected by default.
xval : boolean, optional
Get the residual deviance for the cross-validated models.
Returns: Return the residual deviance, or None if it is not present.
-
sort_by
(metric, increasing=True)[source]¶ Sort the models in the grid space by a metric.
Parameters: metric: str
A metric (‘logloss’, ‘auc’, ‘r2’) by which to sort the models. If additional arguments are desired, they can be passed to the metric, for example ‘logloss(valid=True)’.
increasing: boolean, optional
Sort the metric in increasing (True) (default) or decreasing (False) order.
Returns: An H2OTwoDimTable of the sorted models showing model id, hyperparameters, and metric value. The best model can
be selected and used for prediction.
Examples
>>> grid_search_results = gs.sort_by('F1', False)
>>> best_model_id = grid_search_results['Model Id'][0]
>>> best_model = h2o.get_model(best_model_id)
>>> best_model.predict(test_data)
-
sorted_metric_table
()[source]¶ Retrieve the summary table of an H2O grid search.
Returns: The summary table as an H2OTwoDimTable or a Pandas DataFrame.
-
start
(x, y=None, training_frame=None, offset_column=None, fold_column=None, weights_column=None, validation_frame=None, **params)[source]¶ Asynchronous model build by specifying the predictor columns, response column, and any additional frame-specific values.
To block for results, call join.
Parameters: x : list
A list of column names or indices indicating the predictor columns.
- y : str
An index or a column name indicating the response column.
- training_frame : H2OFrame
The H2OFrame having the columns indicated by x and y (as well as any additional columns specified by fold, offset, and weights).
- offset_column : str, optional
The name or index of the column in training_frame that holds the offsets.
- fold_column : str, optional
The name or index of the column in training_frame that holds the per-row fold assignments.
- weights_column : str, optional
The name or index of the column in training_frame that holds the per-row weights.
- validation_frame : H2OFrame, optional
H2OFrame with validation data to be scored on while training.
-
train
(x, y=None, training_frame=None, offset_column=None, fold_column=None, weights_column=None, validation_frame=None, **params)[source]¶
-
varimp
(use_pandas=False)[source]¶ Pretty print the variable importances, or return them in a list/pandas DataFrame
Parameters: use_pandas: boolean, optional
If True, then the variable importances will be returned as a pandas data frame.
Returns: A dictionary of lists or Pandas DataFrame instances.
-