Appendix A - Parameters
This Appendix provides detailed descriptions of the parameters that can be specified in the H2O algorithms. Each parameter description also lists the algorithms that support the parameter, whether the parameter is a hyperparameter (i.e., whether it can be used in grid search), links to any related parameters, and R and Python examples showing the parameter in use.
Notes:
This Appendix is a work in progress.
For parameters that are supported by multiple algorithms, the included example uses the GBM or GLM algorithm (a minimal sketch of this example format follows these notes).
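The sketch below illustrates the format of these examples, assuming the standard H2O-3 Python API: a GBM model configured with several of the parameters listed in this Appendix, followed by a small grid search over two of its hyperparameters. The dataset path and column names are placeholders, not values from the H2O documentation.

```python
import h2o
from h2o.estimators import H2OGradientBoostingEstimator
from h2o.grid.grid_search import H2OGridSearch

h2o.init()

# Placeholder dataset path and column names -- substitute your own data.
train = h2o.import_file("path/to/train.csv")
predictors = ["col1", "col2", "col3"]
response = "target"

# A GBM model configured with several parameters described in this Appendix.
gbm = H2OGradientBoostingEstimator(
    ntrees=50,            # ntrees
    max_depth=5,          # max_depth
    learn_rate=0.1,       # learn_rate
    sample_rate=0.8,      # sample_rate
    col_sample_rate=0.8,  # col_sample_rate
    seed=1234             # seed
)
gbm.train(x=predictors, y=response, training_frame=train)

# Hyperparameters such as max_depth and learn_rate can also be searched over a grid.
grid = H2OGridSearch(
    model=H2OGradientBoostingEstimator(ntrees=50, seed=1234),
    hyper_params={"max_depth": [3, 5, 7], "learn_rate": [0.05, 0.1]}
)
grid.train(x=predictors, y=response, training_frame=train)
```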
alpha
balance_classes
base_models
beta_constraints
beta_epsilon
binomial_double_trees
blended_avg
blending_frame
build_tree_one_node
calibrate_model
calibration_frame
categorical_encoding
check_constant_response
checkpoint
class_sampling_factors
cluster_size_constraints
col_sample_rate
col_sample_rate_change_per_level
col_sample_rate_per_tree
compute_metrics
compute_p_values
custom_distribution_func
custom_metric_func
distribution
early_stopping
eps_prob
eps_sdev
estimate_k
exclude_algos
export_checkpoints_dir
family
fold_assignment
fold_column
gainslift_bins
gradient_epsilon
HGLM
histogram_type
holdout_type
huber_alpha
ignore_const_cols
ignored_columns
impute_missing
include_algos
inflection_point
init (GLRM, K-Means)
init (CoxPH)
interaction_pairs
interactions
intercept
k
keep_cross_validation_fold_assignment
keep_cross_validation_models
keep_cross_validation_predictions
lambda
lambda_min_ratio
lambda_search
laplace
learn_rate
learn_rate_annealing
link
lre_min
max_abs_leafnode_pred
max_active_predictors
max_after_balance_size
max_depth
max_iterations
max_models
max_runtime_secs
max_runtime_secs_per_model
metalearner_algorithm
metalearner_params
min_prob
min_rows
min_sdev
min_split_improvement
missing_values_handling
model_id
monotone_constraints
mtries
nbins
nbins_cats
nbins_top_level
nfolds
nlambdas
noise
non_negative
ntrees
objective_epsilon
offset_column
pca_impl
pca_method
plug_values
pred_noise_bandwidth
prior
quantile_alpha
rand_family
random_columns
rate
rate_annealing
rate_decay
remove_collinear_columns
sample_rate
sample_rate_per_class
sample_size
score_each_iteration
score_tree_interval
seed
single_node_mode
smoothing
solver
sort_metric
standardize
start_column
stop_column
stopping_metric
stopping_rounds
stopping_tolerance
stratify_by
theta
ties
training_frame
transform
tweedie_link_power
tweedie_power
tweedie_variance_power
upload_custom_distribution
upload_custom_metric
use_all_factor_levels
user_points
validation_frame
weights_column
x
y