Welcome to H2O

Welcome to the H2O documentation site! We’re glad you’re interested in learning more about H2O - if you have any questions, please email them to support@h2o.ai or post them on our Google groups website, h2ostream.

Note: To join our Google group on h2ostream, you need a Google account (such as Gmail or Google+). On the h2ostream page, click the Join group button, then click the New Topic button to post a new message.

We welcome your feedback! Please let us know if you have any questions or comments about this site by emailing us at support@h2o.ai.

Depending on your area of interest, select your learning path below:


New Users

If you’re just getting started with H2O, here are some links to help you learn more:


Experienced Users

If you’ve used previous versions of H2O, the following links will help guide you through the process of upgrading to H2O 3.0.


Corporate Users

If you’re considering using H2O in a corporate environment, you’ll be happy to know that H2O supports many popular scalable computing solutions, such as Hadoop and EC2 (AWS). For more information, refer to the following links.


Sparkling Water Users

Users of our Spark-compatible solution, Sparkling Water, should be aware that Sparkling Water is only supported with the latest version of H2O. For more information about Sparkling Water, refer to the following links.

Getting Started with Sparkling Water

Sparkling Water Blog Posts

Sparkling Water Meetup Slide Decks


Python Users

Pythonistas will be thrilled to know that H2O now provides support for this popular programming language! Python users can also use H2O with IPython notebooks. For more information, refer to the following links.


R Users

Don’t worry, R users - we still provide R support in the latest version of H2O, just as before. However, the R components of H2O have been cleaned up, simplified, and standardized, so the command format is easier and more intuitive. Because of these improvements, scripts created with previous versions of H2O are not compatible with the latest version. To assist R users in upgrading, we provide a document that outlines the differences between versions and a tool that reviews scripts for deprecated or renamed parameters.


API Users

API users will be happy to know that the APIs have been more thoroughly documented in the latest release of H2O and additional capabilities (such as exporting weights and biases for Deep Learning models) have been added.

REST APIs are generated directly from the code, allowing users to implement machine learning in many ways. For example, REST APIs could be used to call a model trained on sensor data and to set up automatic alerts if the sensor data falls below a specified threshold.
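
For example, here is a minimal R sketch of that sensor-alert pattern using the h2o package; the file name, model ID, and threshold are hypothetical placeholders, not part of any shipped example.

    library(h2o)
    h2o.init()

    # Hypothetical names for illustration only.
    sensors <- h2o.importFile("latest_sensor_readings.csv")  # new sensor data
    model   <- h2o.getModel("sensor_model")                  # previously built model

    preds <- as.data.frame(h2o.predict(model, sensors))

    # Alert if any predicted value falls below the chosen threshold.
    if (any(preds$predict < 0.2)) {
      message("ALERT: sensor prediction below threshold")
    }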


Developers

If you’re looking to use H2O to help you develop your own apps, the following links will provide helpful references.


Introduction

This guide will walk you through how to use H2O-Dev’s web UI, H2O Flow. To view a demo video of H2O Flow, click here.

About H2O Flow

H2O Flow is an open-source user interface for H2O. It is a web-based interactive computational environment that allows you to combine code execution, text, mathematics, plots, and rich media in a single document, similar to IPython Notebooks.

With H2O Flow, you can capture, rerun, annotate, present, and share your workflow. H2O Flow allows you to use H2O interactively to import files, build models, and iteratively improve them. Based on your models, you can make predictions and add rich text to create vignettes of your work - all within Flow’s browser-based environment.

Flow’s hybrid user interface seamlessly blends command-line computing with a modern graphical user interface. Rather than displaying output as plain text, Flow provides a point-and-click user interface for every H2O operation, and lets you access any H2O object in the form of well-organized tabular data.

H2O Flow sends commands to H2O as a sequence of executable cells. The cells can be modified, rearranged, or saved to a library. Each cell contains an input field that allows you to enter commands, define functions, call other functions, and access other cells or objects on the page. When you execute the cell, the output is a graphical object, which can be inspected to view additional details.

While H2O Flow supports REST API calls, R scripts, and CoffeeScript, no programming experience is required to run H2O Flow. You can click your way through any H2O operation without ever writing a single line of code. You can even disable the input cells to run H2O Flow using only the GUI. H2O Flow is designed to guide you every step of the way, by providing input prompts, interactive help, and example flows.


Getting Help


First, let’s go over the basics. Type h to view a list of helpful shortcuts.

The following help window displays:

help menu

To close this window, click the X in the upper-right corner or the Close button in the lower-right corner; you can also click behind the window to close it. To reopen this list of shortcuts later, click the Help menu and select Keyboard Shortcuts.

For additional help, open the Help sidebar on the right and click the Assist Me! button.

Assist Me

You can also type assist in a blank cell and press Ctrl+Enter. A list of common tasks displays to help you find the correct command.

Assist Me links

There are multiple resources to help you get started with Flow in the Help sidebar. To access this document, select the Getting Started with H2O Flow link below the Help Topics heading.

You can also explore the pre-configured flows available in H2O Flow for a demonstration of how to create a flow. To view the example flows, click the Browse installed packs… link in the Packs subsection of the Help sidebar. Click the examples folder and select the example flow from the list.

Flow Packs

If you have a flow currently open, a confirmation window appears asking if the current notebook should be replaced. To load the example flow, click the Load Notebook button.

To view the REST API documentation, click the Help tab in the sidebar and then select the type of REST API documentation (Routes or Schemas).

REST API documentation

Before getting started with H2O Flow, make sure you understand the different cell modes.


Understanding Cell Modes

There are two modes for cells: edit and command.

Using Edit Mode

In edit mode, the cell is yellow with a blinking bar indicating where text can be entered, and an orange flag displays to the left of the cell.

Edit Mode

Using Command Mode

In command mode, the flag is yellow. The flag also indicates the cell’s format:

NOTE: If there is an error in the cell, the flag is red.

Cell error

If the cell is executing commands, the flag is teal. The flag returns to yellow when the task is complete.

Cell executing

Changing Cell Formats

To change the cell’s format (for example, from code to Markdown), make sure you are in command (not edit) mode and that the cell you want to change is selected. The easiest way to do this is to click the flag to the left of the cell. Enter the keyboard shortcut for the format you want to use. The flag’s text changes to display the current format.

Cell Mode     Keyboard Shortcut
Code          y
Markdown      m
Raw text      r
Heading 1     1
Heading 2     2
Heading 3     3
Heading 4     4
Heading 5     5
Heading 6     6

Running Flows

When you run the flow, a progress bar displays the current status of the flow. You can cancel the currently running flow by clicking the Stop button in the progress bar.

Flow Progress Bar

When the flow is complete, a message displays in the upper right. Note: If there is an error in the flow, H2O Flow stops the flow at the cell that contains the error.

Flow - Completed Successfully

Flow - Did Not Complete

Using Keyboard Shortcuts

Here are some important keyboard shortcuts to remember:

The following commands must be entered in command mode.

You can view these shortcuts by clicking Help > Keyboard Shortcuts or by clicking the Help tab in the sidebar.

Using Flow Buttons

There are also a series of buttons at the top of the page below the flow name that allow you to save the current flow, add a new cell, move cells up or down, run the current cell, and cut, copy, or paste the current cell. If you hover over the button, a description of the button’s function displays.

Flow buttons

You can also use the menus at the top of the screen to edit the order of the cells, toggle specific format types (such as input or output), create models, or score models. You can also access troubleshooting information or obtain help with Flow.

Flow menus

Note: To disable the code input and use H2O Flow strictly as a GUI, click the Cell menu, then Toggle Cell Input.

Now that you are familiar with the cell modes, let’s import some data.


Importing Data

If you don’t have any of your own data to work with, you can find some example datasets here:

There are multiple ways to import data in H2O Flow:

After selecting the file to import, the file path displays in the “Search Results” section. To import a single file, click the plus sign next to the file. To import all files in the search results, click the Add all link. The files selected for import display in the “Selected Files” section.

Import Files

After you click the Import button, the raw code for the current job displays. A summary displays the results of the file import, including the number of imported files and their Network File System (NFS) locations.

Import Files - Results

Uploading Data

To upload a local file, click the Data menu and select Upload File…. Click the Choose File button, select the file, click the Choose button, then click the Upload button.

File Upload Pop-Up

When the file has uploaded successfully, a message displays in the upper right and the Setup Parse cell displays.

File Upload Successful

OK, now that your data is available in H2O Flow, let’s move on to the next step: parsing. Click the Parse these files button to continue.


Parsing Data

After you have imported your data, parse the data.

Select the parser type (if necessary) from the drop-down Parser list. For most data parsing, H2O automatically recognizes the data type, so the default settings typically do not need to be changed. The following options are available:

If a separator or delimiter is used, select it from the Separator list.

Select a column header option, if applicable:

Select any necessary additional options:

A preview of the data displays in the “Data Preview” section.

Flow - Parse options

Note: To change the column type, select the drop-down list at the top of the column and select the data type. The options are:

After making your selections, click the Parse button.

After you click the Parse button, the code for the current job displays.

Flow - Parse code

Since we’ve submitted a couple of jobs (data import & parse) to H2O now, let’s take a moment to learn more about jobs in H2O.


Viewing Jobs

Any command (such as importFiles) you enter in H2O is submitted as a job, which is associated with a key. The key identifies the job within H2O and is used as a reference.
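
The same keys are also visible from R. A minimal sketch, assuming an active H2O connection:

    library(h2o)
    h2o.init()
    h2o.ls()   # lists the keys for all objects currently held by H2O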

Viewing All Jobs

To view all jobs, click the Admin menu, then click Jobs, or enter getJobs in a cell in CS (CoffeeScript) mode.

View Jobs

The following information displays:

To refresh this information, click the Refresh button. To view the details of the job, click the View button.

Viewing Specific Jobs

To view a specific job, click the link in the “Destination” column.

View Job - Model

The following information displays:

NOTE: For a better understanding of how jobs work, make sure to review the Viewing Frames section as well.

OK, now that you understand how to find jobs in H2O, let’s submit a new one by building a model.


Building Models

To build a model:

The Build Model… button can be accessed from any page containing the .hex key for the parsed data (for example, getJobs > getFrame).

In the Build a Model cell, select an algorithm from the drop-down menu:

The available options vary depending on the selected model. If an option is only available for a specific model type, the model type is listed. If no model type is specified, the option is applicable to all model types.

Advanced Options

Expert Options


Viewing Models

Click the Assist Me! button, then click the getModels link, or enter getModels in the cell in CS mode and press Ctrl+Enter. A list of available models displays.

Flow Models

To view all current models, you can also click the Model menu and click List All Models.

To inspect a model, check its checkbox then click the Inspect button, or click the Inspect button to the right of the model name.

Flow Model

A summary of the model’s parameters displays. To display more details, click the Show All Parameters button.

NOTE: The Clone this model… button will be supported in a future version.

To compare models, check the checkboxes for the models to use in the comparison and click the Compare selected models button. To select all models, check the checkbox at the top of the checkbox column (next to the KEY heading).

To delete a model, click the Delete button.

To generate a POJO so that you can use the model outside of H2O, click the Preview POJO button.

To learn how to make predictions, continue to the next section.


Making Predictions

After creating your model, click the key link for the model, then click the Predict button. Select the model to use in the prediction from the drop-down Model: menu and the data frame to use in the prediction from the drop-down Frame menu, then click the Predict button.

Making Predictions
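
The equivalent R call is h2o.predict. A minimal sketch, assuming a previously built model and a parsed test frame (both names are hypothetical):

    preds <- h2o.predict(model, test)  # score the test frame with the model
    head(preds)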


Viewing Predictions

Click the Assist Me! button, then click the getPredictions link, or enter getPredictions in the cell in CS mode and press Ctrl+Enter. A list of the stored predictions displays. To view a prediction, click the View button to the right of the model name.

Viewing Predictions

You can also view predictions by clicking the drop-down Score menu and selecting List All Predictions.


Viewing Frames

To view a specific frame, click the “Key” link for the specified frame, or enter getFrame "FrameName" in a cell in CS mode (where FrameName is the name of a frame, such as allyears2k.hex).

Viewing specified frame
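
From R, the same frame can be fetched by key. A minimal sketch using the allyears2k.hex key from the example above:

    fr <- h2o.getFrame("allyears2k.hex")  # fetch the frame by its key
    summary(fr)                           # column-level summary, as in Flow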

From the getFrame cell, you can:

When you view a frame, you can “drill-down” to the necessary level of detail (such as a specific column or row) using the View Data and Inspect buttons. The following screenshot displays the results of clicking the Inspect button.

Inspecting Frames

This screenshot displays the results of clicking the Summary link for the first column.

Inspecting Columns

To view all frames, click the Assist Me! button, then click the getFrames link, or enter getFrames in the cell in CS mode and press Ctrl+Enter. You can also view all current frames by clicking the drop-down Data menu and selecting List All Frames.

A list of the current frames in H2O displays that includes the following information for each frame:

For parsed data, the following information displays:

To make a prediction, check the checkboxes for the frames you want to use to make the prediction, then click the Predict on Selected Frames button.


Splitting Frames

You can split datasets within Flow for use in training and testing.

splitFrame cell

  1. To split a frame, click the Assist Me! button, then click splitFrame. Note: You can also click the drop-down Data menu and select Split Frame….
  2. From the drop-down Frame: list, select the frame to split.
  3. In the second Ratio entry field, specify the fractional value to determine the split. The first Ratio field is automatically calculated based on the values entered in the second Ratio field.

    Note: Only fractional values between 0 and 1 are supported (for example, enter .5 to split the frame in half). The total sum of the ratio values must equal one. H2O automatically adjusts the ratio values to equal one; if unsupported values are entered, an error displays. The R sketch after these steps shows the same split performed programmatically.

  4. In the Key entry field, specify a name for the new frame.
  5. (Optional) To add another split, click the Add a split link. To remove a split, click the X to the right of the Key entry field.
  6. Click the Create button.
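
Outside Flow, the same split can be scripted from R. A minimal sketch, assuming a parsed frame named allyears2k.hex:

    fr <- h2o.getFrame("allyears2k.hex")
    # A 0.75 ratio yields two frames; the second ratio is the remainder (0.25).
    splits <- h2o.splitFrame(fr, ratios = 0.75)
    train <- splits[[1]]
    test  <- splits[[2]]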

Creating Frames

To create a frame with a large amount of random data (for example, to use for testing), click the drop-down Admin menu, then select Create Synthetic Frame. Customize the frame as needed, then click the Create button to create the frame.


Plotting Frames

To create a plot from a frame, click the Inspect button, then click the Plot button.

Select the type of plot (point, line, area, or interval) from the drop-down Type menu, then select the x-axis and y-axis from the following options:

Select one of the above options from the drop-down Color menu to display the specified data in color, then click the Plot button to plot the data.


Using Clips

Clips enable you to save cells containing your workflow for later reuse. To save a cell as a clip, click the paperclip icon to the right of the cell (highlighted in the red box in the following screenshot).

Paperclip icon

To use a clip in a workflow, click the “Clips” tab in the sidebar on the right.

Clips tab

All saved clips, including the default system clips (such as assist, importFiles, and predict), are listed. Clips you have created are listed under the “My Clips” heading. To select a clip to insert, click the circular button to the left of the clip name. To delete a clip, click the trashcan icon to the right of the clip name.

NOTE: The default clips listed under “System” cannot be deleted.

Deleted clips are stored in the trash. To permanently delete all clips in the trash, click the Empty Trash button.

NOTE: Saved data, including flows and clips, are persistent as long as the same IP address is used for the cluster. If a new IP is used, previously saved flows and clips are not available.


Viewing Outlines

The “Outline” tab in the sidebar displays a brief summary of the cells currently used in your flow; essentially, a command history.


Saving Flows

You can save your flow for later reuse. To save your flow as a notebook, click the “Save” button (the first button in the row of buttons below the flow name), or click the drop-down “Flow” menu and select “Save.” To enter a custom name for the flow, click the default flow name (“Untitled Flow”) and type the desired flow name. A pencil icon indicates where to enter the desired name.

Renaming Flows

To confirm the name, click the checkmark to the right of the name field.

Confirm Name

To reuse a saved flow, click the “Flows” tab in the sidebar, then click the flow name. To delete a saved flow, click the trashcan icon to the right of the flow name.

Flows

Finding Saved Flows on your Disk

By default, flows are saved to the h2oflows directory underneath your home directory. The directory where flows are saved is printed to stdout:

03-20 14:54:20.945 172.16.2.39:54323     95667  main      INFO: Flow dir: '/Users/<UserName>/h2oflows'

To back up saved flows, copy this directory to your preferred backup location.

To specify a different location for saved flows, use the command-line argument -flow_dir when launching H2O:

java -jar h2o.jar -flow_dir /<New>/<Location>/<For>/<Saved>/<Flows>

where /<New>/<Location>/<For>/<Saved>/<Flows> represents the specified location. If the directory does not exist, it will be created the first time you save a flow.

Saving Flows on a Hadoop cluster

Note: If you are running H2O Flow on a Hadoop cluster, H2O will try to find the HDFS home directory to use as the default directory for flows. If the HDFS home directory is not found, flows cannot be saved unless a directory is specified while launching using -flow_dir:

hadoop jar h2odriver.jar -nodes 1 -mapperXmx 1g -output hdfsOutputDirName -flow_dir hdfs:///<Saved>/<Flows>/<Location>

The location specified in flow_dir may be either an HDFS or regular filesystem directory. If the directory does not exist, it will be created the first time you save a flow.

Duplicating Flows

To create a copy of the current flow, select the Flow menu, then click Make a Copy. The name of the current flow changes to “Copy of <flow name>” (where <flow name> is the name of the current flow). You can save the duplicated flow using this name by clicking Flow > Save.

Downloading Flows

After saving a flow as a notebook, click the Flow menu, then select Download…. A new window opens and the saved flow is downloaded to the default downloads folder on your computer. The file is exported as <filename>.flow, where <filename> is the name specified when the flow was saved.

Caution: You must have an active internet connection to export flows.

Loading Flows

To load a saved flow, click the Flows tab in the sidebar at the right, then click the name of the flow to load. In the pop-up confirmation window that appears, select Load Notebook, or click Cancel to return to the current flow.

Confirm Replace Flow

After clicking Load Notebook, the saved flow is loaded.

To load an exported flow, click the Flow menu and select Open…. In the pop-up window that appears, click the Choose File button and select the exported flow, then click the Open button.

Open Flow

Notes:


Troubleshooting

To troubleshoot issues in Flow, use the Admin menu. The Admin menu allows you to check the status of the cluster, view a timeline of events, and view or download logs for issue analysis.

NOTE: To view the current version, click the Help menu, then click About.

Viewing Cluster Status

Click the Admin menu, then select Cluster Status. A summary of the status of the cluster (also known as a cloud) displays, which includes the following information:

The following information displays for each node:

To view more information, click the Show Advanced button.


Viewing CPU Status (Water Meter)

To view the current CPU usage, click the Admin menu, then click Water Meter (CPU Meter). A new window opens, displaying the current CPU use statistics.


Viewing Logs

To view the logs for troubleshooting, click the Admin menu, then click Inspect Log.

Inspect Log

To view the logs for a specific node, select it from the drop-down Select Node menu.


Downloading Logs

To download the logs for further analysis, click the Admin menu, then click Download Log. A new window opens and the logs download to your default download folder. You can close the new window after downloading the logs. Send the logs to support@h2o.ai for issue resolution.


Viewing Stack Trace Information

To view the stack trace information, click the Admin menu, then click Stack Trace.

Stack Trace

To view the stack trace information for a specific node, select it from the drop-down Select Node menu.


Viewing Network Test Results

To view network test results, click the Admin menu, then click Network Test.

Network Test Results


Accessing the Profiler

The Profiler looks across the cluster to see where the same stack trace occurs, and can be helpful for identifying what the CPUs are currently doing. To view the profiler, click the Admin menu, then click Profiler.

Profiler

To view the profiler information for a specific node, select it from the drop-down Select Node menu.


Viewing the Timeline

To view a timeline of events in Flow, click the Admin menu, then click Timeline. The following information displays for each event:

To obtain the most recent information, click the Refresh button.


Shutting Down H2O

To shut down H2O, click the Admin menu, then click Shut Down. A Shut down complete message displays in the upper right when the cluster has been shut down.

Porting R Scripts from H2O to H2O-Dev

This document outlines how to port R scripts written in H2O for compatibility with the new H2O-Dev API. When upgrading from H2O to H2O-Dev, most functions are the same. However, there are some differences that will need to be resolved when porting any scripts that were originally created using H2O to H2O-Dev.

The original R script for H2O is listed first, followed by the updated script for H2O-Dev.

Some of the parameters have been renamed for consistency. For each algorithm, a table that describes the differences is provided.

For additional assistance within R, enter a question mark before the command (for example, ?h2o.glm).

There is also a “shim” available that will review R scripts created with previous versions of H2O, identify deprecated or renamed parameters, and suggest replacements. For more information, refer to the repo here.

Changes from H2O to H2O-Dev

h2o.exec

The h2o.exec command is no longer supported. Any workflows using h2o.exec must be revised to remove this command. If the H2O-Dev workflow contains any parameters or commands from H2O, errors will result and the workflow will fail.

The purpose of h2o.exec was to wrap expressions so that they could be evaluated in a single Exec2 REST call. For example, h2o.exec(fr[,1] + 2/fr[,3]) and fr[,1] + 2/fr[,3] produced the same results in H2O. However, the first example makes a single REST call and uses a single temp object, while the second makes several REST calls and uses several temp objects.

Due to the improved architecture in H2O-Dev, the need to use h2o.exec has been eliminated, as the expression can be processed by R as an “unwrapped” typical R expression.

Currently, the only known exception is when factor is used in conjunction with h2o.exec. For example, h2o.exec(fr$myIntCol <- factor(fr$myIntCol)) would become fr$myIntCol <- as.factor(fr$myIntCol).
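
A minimal before-and-after sketch of these points:

    # H2O (old): wrap the expression for a single Exec2 call
    res <- h2o.exec(fr[,1] + 2/fr[,3])

    # H2O-Dev (new): write the expression as plain R
    res <- fr[,1] + 2/fr[,3]

    # The factor exception:
    # old: h2o.exec(fr$myIntCol <- factor(fr$myIntCol))
    fr$myIntCol <- as.factor(fr$myIntCol)   # new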

Note also that an array is not inside a string:

An int array is [1, 2, 3], not "[1, 2, 3]".

A String array is ["f00", "b4r"], not "[\"f00\", \"b4r\"]".

Only string values are enclosed in double quotation marks (").

h2o.performance

To access any exclusively binomial output, use h2o.performance, optionally with the corresponding accessor. The accessor can only use the model metrics object created by h2o.performance. Each accessor is named for its corresponding field (for example, h2o.AUC, h2o.gini, h2o.F1). h2o.performance supports all current algorithms except for K-Means.

If you specify a data frame as a second parameter, H2O will use the specified data frame for scoring. If you do not specify a second parameter, the training metrics for the model metrics object are used.
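
A minimal sketch, assuming a binomial model and a test frame (both names are hypothetical); the accessor names follow the convention described above:

    perf <- h2o.performance(model, test)  # metrics computed on the test frame
    h2o.AUC(perf)                         # accessor named for its field
    h2o.F1(perf)

    perf_train <- h2o.performance(model)  # no second parameter: training metrics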

xval and validation slots

The xval slot has been removed, as nfolds is not currently supported.

The validation slot has been merged with the model slot.

Principal Components Regression (PCR)

Principal Components Regression (PCR) has also been deprecated. To obtain PCR values, create a Principal Components Analysis (PCA) model, then create a GLM model from the scored data from the PCA model.

GBM

N-fold cross-validation and grid search will be supported in a future version of H2O-Dev.

Renamed GBM Parameters

The following parameters have been renamed, but retain the same functions:

H2O Parameter Name H2O-Dev Parameter Name
data training_frame
key model_id
n.trees ntrees
interaction.depth max_depth
n.minobsinnode min_rows
shrinkage learn_rate
n.bins nbins
validation validation_frame
balance.classes balance_classes
max.after.balance.size max_after_balance_size

Deprecated GBM Parameters

The following parameters have been removed:

New GBM Parameters

The following parameters have been added:

GBM Algorithm Comparison

H2O H2O-Dev
h2o.gbm <- function( h2o.gbm <- function(
x, x,
y, y,
data, training_frame,
key = "", model_id,
distribution = 'multinomial', distribution = c("bernoulli", "multinomial", "gaussian"),
n.trees = 10, ntrees = 50,
interaction.depth = 5, max_depth = 5,
n.minobsinnode = 10, min_rows = 10,
shrinkage = 0.1, learn_rate = 0.1,
n.bins = 20, nbins = 20,
validation, validation_frame = NULL,
balance.classes = FALSE, balance_classes = FALSE,
max.after.balance.size = 5, max_after_balance_size = 1,
  seed,
  score_each_iteration)
group_split = TRUE,
importance = FALSE,
nfolds = 0,
holdout.fraction = 0,
class.sampling.factors = NULL,
grid.parallelism = 1)
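
Putting the renamed parameters together, a minimal sketch of an H2O-Dev-style GBM call (the frame and column indices are hypothetical):

    gbm_model <- h2o.gbm(x = 2:5, y = 1,
                         training_frame = train,
                         model_id = "gbm_example",
                         ntrees = 50,
                         max_depth = 5,
                         min_rows = 10,
                         learn_rate = 0.1)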

Output

The following table provides the component name in H2O, the corresponding component name in H2O-Dev (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O H2O-Dev Model Type
@model$priorDistribution   all
@model$params @allparameters all
@model$err @model$scoring_history all
@model$classification   all
@model$varimp @model$variable_importances all
@model$confusion @model$training_metrics$cm$table binomial and multinomial
@model$auc @model$training_metrics$AUC binomial
@model$gini @model$training_metrics$Gini binomial
@model$best_cutoff   binomial
@model$F1 @model$training_metrics$thresholds_and_metric_scores$f1 binomial
@model$F2 @model$training_metrics$thresholds_and_metric_scores$f2 binomial
@model$accuracy @model$training_metrics$thresholds_and_metric_scores$accuracy binomial
@model$error   binomial
@model$precision @model$training_metrics$thresholds_and_metric_scores$precision binomial
@model$recall @model$training_metrics$thresholds_and_metric_scores$recall binomial
@model$mcc @model$training_metrics$thresholds_and_metric_scores$absolute_MCC binomial
@model$max_per_class_err currently replaced by @model$training_metrics$thresholds_and_metric_scores$min_per_class_correct binomial

GLM

N-fold cross-validation and grid search will be supported in a future version of H2O-Dev.

Renamed GLM Parameters

The following parameters have been renamed, but retain the same functions:

H2O Parameter Name H2O-Dev Parameter Name
data training_frame
key model_id
nlambda nlambdas
lambda.min.ratio lambda_min_ratio
iter.max max_iterations
epsilon beta_epsilon

Deprecated GLM Parameters

The following parameters have been removed:

New GLM Parameters

The following parameters have been added:

GLM Algorithm Comparison

H2O H2O-Dev
h2o.glm <- function( h2o.startGLMJob <- function(
x, x,
y, y,
data, training_frame,
key = "", model_id,
  validation_frame,
iter.max = 100, max_iterations = 50,
epsilon = 1e-4, beta_epsilon = 0,
strong_rules = TRUE,
return_all_lambda = FALSE,
intercept = TRUE,
non_negative = FALSE,
  solver = c("IRLSM", "L_BFGS"),
standardize = TRUE, standardize = TRUE,
family, family = c("gaussian", "binomial", "poisson", "gamma", "tweedie"),
link, link = c("family_default", "identity", "logit", "log", "inverse", "tweedie"),
tweedie.p = ifelse(family == "tweedie", 1.5, NA_real_), tweedie_variance_power = NaN,
  tweedie_link_power = NaN,
alpha = 0.5, alpha = 0.5,
prior = NULL, prior = 0.0,
lambda = 1e-5, lambda = 1e-05,
lambda_search = FALSE, lambda_search = FALSE,
nlambda = -1, nlambdas = -1,
lambda.min.ratio = -1, lambda_min_ratio = 1.0,
use_all_factor_levels = FALSE, use_all_factor_levels = FALSE,
nfolds = 0, nfolds = 0,
beta_constraints = NULL, beta_constraint = NULL)
higher_accuracy = FALSE,
variable_importances = FALSE,
disable_line_search = FALSE,
offset = NULL,
max_predictors = -1)
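
A minimal sketch of an H2O-Dev-style GLM call using the renamed parameters (the frame and column indices are hypothetical):

    glm_model <- h2o.glm(x = 2:5, y = 1,
                         training_frame = train,
                         model_id = "glm_example",
                         family = "binomial",
                         alpha = 0.5,
                         lambda = 1e-05)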

Output

The following table provides the component name in H2O, the corresponding component name in H2O-Dev (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O H2O-Dev Model Type
@model$params @allparameters all
@model$coefficients @model$coefficients all
@model$normalized_coefficients @model$coefficients_table$norm_coefficients all
@model$rank @model$rank all
@model$iter @model$iter all
@model$lambda   all
@model$deviance @model$residual_deviance all
@model$null.deviance @model$null_deviance all
@model$df.residual @model$residual_degrees_of_freedom all
@model$df.null @model$null_degrees_of_freedom all
@model$aic @model$AIC all
@model$train.err   binomial
@model$prior   binomial
@model$thresholds @model$threshold binomial
@model$best_threshold   binomial
@model$auc @model$AUC binomial
@model$confusion   binomial

K-Means

Renamed K-Means Parameters

The following parameters have been renamed, but retain the same functions:

H2O Parameter Name H2O-Dev Parameter Name
data training_frame
key model_id
centers k
cols x
iter.max max_iterations
normalize standardize

Note: In H2O, the normalize parameter was disabled by default. The standardize parameter is enabled by default in H2O-Dev to provide more accurate results for datasets containing columns with large values.

New K-Means Parameters

The following parameters have been added:

K-Means Algorithm Comparison

H2O H2O-Dev
h2o.kmeans <- function( h2o.kmeans <- function(
data, training_frame,
cols = '', x,
centers, k,
key = "", model_id,
iter.max = 10, max_iterations = 1000,
normalize = FALSE, standardize = TRUE,
init = "none", init = c("Furthest","Random", "PlusPlus"),
seed = 0, seed)
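
A minimal sketch of an H2O-Dev-style K-Means call using the renamed parameters (the frame and column indices are hypothetical):

    km_model <- h2o.kmeans(training_frame = train,
                           x = 1:4,
                           k = 3,
                           model_id = "kmeans_example",
                           max_iterations = 1000,
                           standardize = TRUE)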

Output

The following table provides the component name in H2O and the corresponding component name in H2O-Dev (if supported).

H2O H2O-Dev
@model$params @allparameters
@model$centers @model$centers
@model$tot.withinss @model$tot_withinss
@model$size @model$size
@model$iter @model$iterations
  @model$_scoring_history
  @model$_model_summary

Deep Learning

N-fold cross-validation and grid search will be supported in a future version of H2O-Dev.

Note: If the results in the confusion matrix are incorrect, verify that score_training_samples is equal to 0. By default, only the first 10,000 rows are included.

Renamed Deep Learning Parameters

The following parameters have been renamed, but retain the same functions:

H2O Parameter Name H2O-Dev Parameter Name
data training_frame
key model_id
validation validation_frame
class.sampling.factors class_sampling_factors
nfolds n_folds
override_with_best_model overwrite_with_best_model

Deprecated DL Parameters

The following parameters have been removed:

New DL Parameters

The following parameters have been added:

The following options for the loss parameter have been added:

DL Algorithm Comparison

H2O H2O-Dev
h2o.deeplearning <- function(x, h2o.deeplearning <- function(x,
y, y,
data, training_frame,
key = "", model_id = "",
override_with_best_model, overwrite_with_best_model = TRUE,
classification = TRUE,
nfolds = 0, n_folds = 0,
validation, validation_frame,
holdout_fraction = 0,
checkpoint = " ", checkpoint,
autoencoder, autoencoder = FALSE,
use_all_factor_levels, use_all_factor_levels = TRUE,
activation, _activation = c("Rectifier", "Tanh", "TanhWithDropout", "RectifierWithDropout", "Maxout", "MaxoutWithDropout"),
hidden, hidden = c(200, 200),
epochs, epochs = 10.0,
train_samples_per_iteration, train_samples_per_iteration = -2,
seed, _seed,
adaptive_rate, adaptive_rate = TRUE,
rho, rho = 0.99,
epsilon, epsilon = 1e-8,
rate, rate = .005,
rate_annealing, rate_annealing = 1e-6,
rate_decay, rate_decay = 1.0,
momentum_start, momentum_start = 0,
momentum_ramp, momentum_ramp = 1e6,
momentum_stable, momentum_stable = 0,
nesterov_accelerated_gradient, nesterov_accelerated_gradient = TRUE,
input_dropout_ratio, input_dropout_ratio = 0.0,
hidden_dropout_ratios, hidden_dropout_ratios,
l1, l1 = 0.0,
l2, l2 = 0.0,
max_w2, max_w2 = Inf,
initial_weight_distribution, initial_weight_distribution = c("UniformAdaptive", "Uniform", "Normal"),
initial_weight_scale, initial_weight_scale = 1.0,
loss, loss = c("Automatic", "CrossEntropy", "MeanSquare", "Absolute", "Huber"),
score_interval, score_interval = 5,
score_training_samples, score_training_samples = 10000L,
score_validation_samples, score_validation_samples = 0L,
score_duty_cycle, score_duty_cycle = 0.1,
classification_stop, classification_stop = 0,
regression_stop, regression_stop = 1e-6,
quiet_mode, quiet_mode = FALSE,
max_confusion_matrix_size, max_confusion_matrix_size,
max_hit_ratio_k, max_hit_ratio_k,
balance_classes, balance_classes = FALSE,
class_sampling_factors, class_sampling_factors,
max_after_balance_size, max_after_balance_size,
score_validation_sampling, score_validation_sampling,
diagnostics, diagnostics = TRUE,
variable_importances, variable_importances = FALSE,
fast_mode, fast_mode = TRUE,
ignore_const_cols, ignore_const_cols = TRUE,
force_load_balance, force_load_balance = TRUE,
replicate_training_data, replicate_training_data = TRUE,
single_node_mode, single_node_mode = FALSE,
shuffle_training_data, shuffle_training_data = FALSE,
sparse, sparse = FALSE,
col_major, col_major = FALSE,
max_categorical_features, max_categorical_features = Integer.MAX_VALUE,
reproducible) reproducible = FALSE,
average_activation, average_activation = 0,
  sparsity_beta = 0,
  export_weights_and_biases = FALSE)
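
A minimal sketch of an H2O-Dev-style Deep Learning call using the renamed parameters, including the new export option mentioned earlier (the frame and column indices are hypothetical):

    dl_model <- h2o.deeplearning(x = 2:5, y = 1,
                                 training_frame = train,
                                 hidden = c(200, 200),
                                 epochs = 10,
                                 export_weights_and_biases = TRUE)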

Output

The following table provides the component name in H2O, the corresponding component name in H2O-Dev (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O H2O-Dev Model Type
@model$priorDistribution   all
@model$params @allparameters all
@model$train_class_error @model$training_metrics$MSE all
@model$valid_class_error @model$validation_metrics$MSE all
@model$varimp @model$_variable_importances all
@model$confusion @model$training_metrics$cm$table binomial and multinomial
@model$train_auc @model$train_AUC binomial
  @model$_validation_metrics all
  @model$_model_summary all
  @model$_scoring_history all

Distributed Random Forest

Changes to DRF in H2O-Dev

Distributed Random Forest (DRF) was represented as h2o.randomForest(type="BigData", ...) in H2O. In H2O, SpeeDRF (type="fast") was not as accurate, especially for complex data with categoricals, and did not address regression problems. DRF (type="BigData") was at least as accurate as SpeeDRF (type="fast") and was the only algorithm that scaled to big data (data too large to fit on a single node). In H2O-Dev, our plan is to improve the performance of DRF so that it is optimal for all cases, including data that fits on a single node, which will make SpeeDRF obsolete. Ultimately, the goal is to provide a single algorithm that provides the “best of both worlds” for all datasets and use cases.

Note: H2O-Dev only supports DRF. SpeeDRF is no longer supported. The functionality of DRF in H2O-Dev is similar to DRF functionality in H2O.

Renamed DRF Parameters

The following parameters have been renamed, but retain the same functions:

H2O Parameter Name H2O-Dev Parameter Name
data training_frame
key model_id
validation validation_frame
sample.rate sample_rate
ntree ntrees
depth max_depth
balance.classes balance_classes
score.each.iteration score_each_iteration
class.sampling.factors class_sampling_factors
nodesize min_rows

Deprecated DRF Parameters

The following parameters have been removed:

New DRF Parameters

The following parameter has been added:

DRF Algorithm Comparison

H2O H2O-Dev
h2o.randomForest <- function( h2o.randomForest <- function(
x, x,
y, y,
data, training_frame,
key="", model_id,
validation, validation_frame,
mtries = -1, mtries = -1,
sample.rate=2/3, sample_rate = 0.6666667,
  build_tree_one_node = FALSE,
ntree=50, ntrees=50,
depth=20, max_depth = 20,
  min_rows = 1,
nbins=20, nbins = 20,
balance.classes = FALSE, balance_classes = FALSE,
score.each.iteration = FALSE, score_each_iteration = FALSE,
seed = -1, seed,
nodesize = 1,
classification=TRUE,
importance=FALSE,
nfolds=0,
holdout.fraction = 0,
max.after.balance.size = 5, max_after_balance_size)
class.sampling.factors = NULL,  
doGrpSplit = TRUE,
verbose = FALSE,
oobee = TRUE,
stat.type = "ENTROPY",
type = "fast")

Output

The following table provides the component name in H2O, the corresponding component name in H2O-Dev (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O H2O-Dev Model Type
@model$priorDistribution   all
@model$params @allparameters all
@model$mse @model$scoring_history all
@model$forest @model$model_summary all
@model$classification   all
@model$varimp @model$variable_importances all
@model$confusion @model$training_metrics$cm$table binomial and multinomial
@model$auc @model$training_metrics$AUC binomial
@model$gini @model$training_metrics$Gini binomial
@model$best_cutoff   binomial
@model$F1 @model$training_metrics$thresholds_and_metric_scores$f1 binomial
@model$F2 @model$training_metrics$thresholds_and_metric_scores$f2 binomial
@model$accuracy @model$training_metrics$thresholds_and_metric_scores$accuracy binomial
@model$Error @model$Error binomial
@model$precision @model$training_metrics$thresholds_and_metric_scores$precision binomial
@model$recall @model$training_metrics$thresholds_and_metric_scores$recall binomial
@model$mcc @model$training_metrics$thresholds_and_metric_scores$absolute_MCC binomial
@model$max_per_class_err currently replaced by @model$training_metrics$thresholds_and_metric_scores$min_per_class_correct binomial

How to Launch H2O-Dev from the Command Line

You can use Terminal (OS X) or the Command Prompt (Windows) to launch H2O-Dev. When you launch from the command line, you can include additional instructions to H2O-Dev, such as how many nodes to launch, how much memory to allocate for each node, and what names to assign to the nodes in the cloud.

There are two different argument types:

The arguments use the following format: java <JVM Options> -jar h2o.jar <H2O Options>.

JVM Options

Note: Do not try to launch H2O with more memory than you have available.

H2O Options

Cloud Formation Behavior

New H2O nodes join to form a cloud during launch. After a job has started on the cloud, new nodes are prevented from joining.

Wait for the INFO: Registered: # schemas in: #mS output before entering the java launch command again to add another node (the number for # will vary).

Flatfile Configuration

If you are configuring many nodes, it is faster and easier to use the -flatfile option, rather than -ip and -port.

To configure H2O-Dev on a multi-node cluster:

  1. Locate a set of hosts.
  2. Download the appropriate version of H2O-Dev for your environment.
  3. Verify that the same h2o.jar file is available on all hosts.
  4. Create a flatfile (a plain text file with the IP and port numbers of the hosts). Use one entry per line. For example:

    
    192.168.1.163:54321
    192.168.1.164:54321
    
  5. Copy the flatfile.txt to each node in the cluster.
  6. Use the -Xmx option to specify the amount of memory for each node. The cluster’s memory capacity is the sum of the memory allocated across all H2O nodes in the cluster.

    For example, if you create a cluster with four 20g nodes (by specifying -Xmx20g four times), H2O will have a total of 80 GB of memory available.

    For best performance, we recommend sizing your cluster to be about four times the size of your data. To avoid swapping, the -Xmx allocation must not exceed the physical memory on any node. Allocating the same amount of memory for all nodes is strongly recommended, as H2O-Dev works best with symmetric nodes.

    Note: The optional -ip and -port options specify the IP address and ports to use. The -ip option is especially helpful for hosts with multiple network interfaces.

    java -Xmx20g -jar h2o.jar -flatfile flatfile.txt -port 54321

    The output will resemble the following:

     04-20 16:14:00.253 192.168.1.70:54321    2754   main      INFO:   1. Open a terminal and run 'ssh -L 55555:localhost:54321 H2O-DevUser@###.###.#.##'
     04-20 16:14:00.253 192.168.1.70:54321    2754   main      INFO:   2. Point your browser to http://localhost:55555
     04-20 16:14:00.437 192.168.1.70:54321    2754   main      INFO: Log dir: '/tmp/h2o-H2O-DevUser/h2ologs'
     04-20 16:14:00.437 192.168.1.70:54321    2754   main      INFO: Cur dir: '/Users/H2O-DevUser/h2o-dev'
     04-20 16:14:00.459 192.168.1.70:54321    2754   main      INFO: HDFS subsystem successfully initialized
     04-20 16:14:00.460 192.168.1.70:54321    2754   main      INFO: S3 subsystem successfully initialized
     04-20 16:14:00.460 192.168.1.70:54321    2754   main      INFO: Flow dir: '/Users/H2O-DevUser/h2oflows'
     04-20 16:14:00.475 192.168.1.70:54321    2754   main      INFO: Cloud of size 1 formed [/192.168.1.70:54321]
    

    As you add more nodes to your cluster, the output is updated: INFO WATER: Cloud of size 2 formed [/...]...

  7. Access the H2O-Dev web UI (Flow) with your browser. Point your browser to the HTTP address specified in the output Listening for HTTP and REST traffic on ....

    To check if the cloud is available, point to the URL http://<ip>:<port>/Cloud.json (an example of the JSON response is provided below). Wait for cloud_size to be the expected value and the consensus field to be true (a small R sketch of this check follows the example response):

    {
    ...
    
    "cloud_size": 2,
    "consensus": true,
    
    ...
    }
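
    A small R sketch of this readiness check, assuming the jsonlite package and the URL pattern above (expected cloud size 2, as in this example):

     library(jsonlite)
     url <- "http://localhost:54321/Cloud.json"   # substitute your <ip>:<port>
     status <- fromJSON(url)
     while (status$cloud_size != 2 || !isTRUE(status$consensus)) {
       Sys.sleep(1)                               # poll until the cloud is up
       status <- fromJSON(url)
     }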
    

H2O-Dev on EC2

Tested on Redhat AMI, Amazon Linux AMI, and Ubuntu AMI

Launch H2O-Dev

Note: Before launching H2O on an EC2 cluster, verify that ports 54321 and 54322 are both accessible by TCP and UDP.

Selecting the Operating System and Virtualization Type

Select your operating system and the virtualization type of the prebuilt AMI on Amazon. If you are using Windows, you will need to use a hardware-assisted virtual machine (HVM). If you are using Linux, you can choose between para-virtualization (PV) and HVM. These selections determine the type of instances you can launch.

EC2 Systems

For more information about virtualization types, refer to Amazon.

Configuring the Instance

  1. Select the IAM role and policy to use to launch the instance. H2O detects the temporary access keys associated with the instance, so you don’t need to copy your AWS credentials to the instances.

    EC2 Configuration

  2. When launching the instance, select an accessible key pair.

    EC2 Key Pair

(Windows Users) Tunneling into the Instance

For Windows users who cannot use ssh from the terminal, download Cygwin or Git Bash, either of which can run ssh:

ssh -i amy_account.pem ec2-user@54.165.25.98

Otherwise, download PuTTY and follow these instructions:

  1. Launch the PuTTY Key Generator.
  2. Load your downloaded AWS pem key file. Note: To see the file, change the file type in the file browser to “All”.
  3. Save the private key as a .ppk file.

    Private Key

  4. Launch the PuTTY client.

  5. In the Session section, enter the host name or IP address. For Ubuntu users, the default host name is ubuntu@<ip-address>. For Linux users, the default host name is ec2-user@<ip-address>.

    Configuring Session

  6. Select SSH, then Auth in the sidebar, and click the Browse button to select the private key file for authentication.

    Configuring SSH

  7. Start a new session and click the Yes button to confirm caching of the server’s rsa2 key fingerprint and continue connecting.

    PuTTY Alert

Downloading Java and H2O

  1. Download Java (JDK 1.7 or later) if it is not already available on the instance.
  2. To download H2O, run the wget command with the link to the zip file available on our website (copy the link associated with the Download button for the selected H2O-Dev build):

     wget http://h2o-release.s3.amazonaws.com/h2o-dev/rel-serre/1/index.html
     unzip h2o-dev-0.2.1.1.zip
     cd h2o-dev-0.2.1.1
     java -Xmx4g -jar h2o.jar
    
  3. From your browser, navigate to <Private_IP_Address>:54321 or <Public_DNS>:54321 to use H2O’s web interface.

Launch H2O-Dev from the Command Line

Important Notes

Java is a prerequisite for H2O; if you do not already have Java installed, make sure to install it before installing H2O. Java is available free on the web and can be installed quickly. Although Java is required to run H2O, no programming is necessary. For users who only want to run H2O without compiling their own code, the Java Runtime Environment (version 1.6 or later) is sufficient; for users planning to compile their own builds, we strongly recommend the Java Development Kit 1.7 or later.

After installation, launch H2O using the -Xmx argument, which specifies the amount of memory given to H2O. If your data set is large, allocate more memory to H2O by using -Xmx4g instead of the default -Xmx1g, which allocates 4 GB instead of the default 1 GB to your instance. For best performance, the amount of memory allocated to H2O should be four times the size of your data, but never more than the total amount of memory on your computer.

Step-by-Step Walk-Through

  1. Download the .zip file containing the latest release of H2O-Dev from the H2O downloads page.

  2. From your terminal, change your working directory to the same directory as the location of the .zip file.

  3. From your terminal, unzip the .zip file. For example, unzip h2o-dev-0.2.1.1.zip.

  4. At the prompt, enter the following commands:

     cd h2o-dev-0.2.1.1  #change working directory to the downloaded file
     java -Xmx4g -jar h2o.jar #run the basic java command to start h2o
    
  5. After a few moments, output similar to the following appears in your terminal window:

      03-23 14:57:52.930 172.16.2.39:54321     1932   main      INFO: ----- H2O started  -----
     03-23 14:57:52.997 172.16.2.39:54321     1932   main      INFO: Build git branch: rel-serre
     03-23 14:57:52.998 172.16.2.39:54321     1932   main      INFO: Build git hash: 9eaa5f0c4ca39144b1fd180aedb535b5ba08b2ce
     03-23 14:57:52.998 172.16.2.39:54321     1932   main      INFO: Build git describe: jenkins-rel-serre-1
     03-23 14:57:52.998 172.16.2.39:54321     1932   main      INFO: Build project version: 0.2.1.1
     03-23 14:57:52.998 172.16.2.39:54321     1932   main      INFO: Built by: 'jenkins'
     03-23 14:57:52.998 172.16.2.39:54321     1932   main      INFO: Built on: '2015-03-18 12:55:28'
     03-23 14:57:52.998 172.16.2.39:54321     1932   main      INFO: Java availableProcessors: 8
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: Java heap totalMemory: 245.5 MB
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: Java heap maxMemory: 3.56 GB
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: Java version: Java 1.7.0_67 (from Oracle Corporation)
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: OS   version: Mac OS X 10.10.2 (x86_64)
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: Machine physical memory: 16.00 GB
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: Possible IP Address: en5 (en5), fe80:0:0:0:daeb:97ff:feb3:6d4b%10
     03-23 14:57:52.999 172.16.2.39:54321     1932   main      INFO: Possible IP Address: en5 (en5), 172.16.2.39
     03-23 14:57:53.000 172.16.2.39:54321     1932   main      INFO: Possible IP Address: lo0 (lo0), fe80:0:0:0:0:0:0:1%1
     03-23 14:57:53.000 172.16.2.39:54321     1932   main      INFO: Possible IP Address: lo0 (lo0), 0:0:0:0:0:0:0:1
     03-23 14:57:53.000 172.16.2.39:54321     1932   main      INFO: Possible IP Address: lo0 (lo0), 127.0.0.1
     03-23 14:57:53.000 172.16.2.39:54321     1932   main      INFO: Internal communication uses port: 54322
     03-23 14:57:53.000 172.16.2.39:54321     1932   main      INFO: Listening for HTTP and REST traffic on  http://172.16.2.39:54321/
     03-23 14:57:53.001 172.16.2.39:54321     1932   main      INFO: H2O cloud name: 'H2O-Dev-User' on /172.16.2.39:54321, discovery address /238.222.48.136:61150
     03-23 14:57:53.001 172.16.2.39:54321     1932   main      INFO: If you have trouble connecting, try SSH tunneling from your local machine (e.g., via port 55555):
     03-23 14:57:53.001 172.16.2.39:54321     1932   main      INFO:   1. Open a terminal and run 'ssh -L 55555:localhost:54321 H2O-Dev-User@172.16.2.39'
     03-23 14:57:53.001 172.16.2.39:54321     1932   main      INFO:   2. Point your browser to http://localhost:55555
     03-23 14:57:53.211 172.16.2.39:54321     1932   main      INFO: Log dir: '/tmp/h2o-H2O-Dev-User/h2ologs'
     03-23 14:57:53.211 172.16.2.39:54321     1932   main      INFO: Cur dir: '/Users/H2O-Dev-User/Downloads/h2o-dev-0.2.1.1'
     03-23 14:57:53.234 172.16.2.39:54321     1932   main      INFO: HDFS subsystem successfully initialized
     03-23 14:57:53.234 172.16.2.39:54321     1932   main      INFO: S3 subsystem successfully initialized
     03-23 14:57:53.235 172.16.2.39:54321     1932   main      INFO: Flow dir: '/Users/H2O-Dev-User/h2oflows'
     03-23 14:57:53.248 172.16.2.39:54321     1932   main      INFO: Cloud of size 1 formed [/172.16.2.39:54321]
     03-23 14:57:53.776 172.16.2.39:54321     1932   main      WARN: Found schema field which violates the naming convention; name has mixed lowercase and uppercase characters: ModelParametersSchema.dropNA20Cols
     03-23 14:57:53.935 172.16.2.39:54321     1932   main      INFO: Registered: 142 schemas in: 605mS
    
  6. Point your web browser to http://localhost:54321/

The user interface appears in your browser, and now H2O-Dev is ready to go.

WARNING: On Windows systems, Internet Explorer is frequently blocked due to security settings. If you cannot reach http://localhost:54321, try using a different web browser, such as Firefox or Chrome.

Running H2O-Dev on Hadoop

Currently supported versions:

Important Points to Remember:

Prerequisite: Open Communication Paths

H2O communicates using two communication paths. Verify these are open and available for use by H2O.

Path 1: mapper to driver

Optionally specify this port using the -driverport option in the hadoop jar command (see “Hadoop Launch Parameters” below). This port is opened on the driver host (the host where you entered the hadoop jar command). By default, this port is chosen randomly by the operating system.

Path 2: mapper to mapper

Optionally specify this port using the -baseport option in the hadoop jar command (see “Hadoop Launch Parameters” below). This port and the next subsequent port are opened on the mapper hosts (the Hadoop worker nodes) where the H2O mapper nodes are placed by the Resource Manager. By default, ports 54321 (TCP) and 54322 (TCP & UDP) are used.

The mapper port is adaptive: if 54321 and 54322 are not available, H2O will try 54323 and 54324 and so on. The mapper port is designed to be adaptive because sometimes if the YARN cluster is low on resources, YARN will place two H2O mappers for the same H2O cluster request on the same physical host. For this reason, we recommend opening a range of more than two ports (20 ports should be sufficient).


Tutorial

The following tutorial will walk the user through the download or build of H2O and the parameters involved in launching H2O from the command line.

  1. Download the latest H2O-dev release for your version of Hadoop:

     wget http://h2o-release.s3.amazonaws.com/h2o-dev/master/1110/h2o-dev-0.3.0.1110-cdh5.2.zip
     wget http://h2o-release.s3.amazonaws.com/h2o-dev/master/1110/h2o-dev-0.3.0.1110-cdh5.3.zip
     wget http://h2o-release.s3.amazonaws.com/h2o-dev/master/1110/h2o-dev-0.3.0.1110-hdp2.1.zip
     wget http://h2o-release.s3.amazonaws.com/h2o-dev/master/1110/h2o-dev-0.3.0.1110-hdp2.2.zip
     wget http://h2o-release.s3.amazonaws.com/h2o-dev/master/1110/h2o-dev-0.3.0.1110-mapr3.1.1.zip
     wget http://h2o-release.s3.amazonaws.com/h2o-dev/master/1110/h2o-dev-0.3.0.1110-mapr4.0.1.zip
    

    Note: Enter only one of the above commands.

  2. Prepare the job input on the Hadoop Node by unzipping the build file and changing to the directory with the Hadoop and H2O’s driver jar files.

     unzip h2o-0.3.0.1110-*.zip
     cd h2o-0.3.0.1110-*
    
  3. To launch H2O nodes and form a cluster on the Hadoop cluster, run:

     hadoop jar h2odriver.jar -nodes 1 -mapperXmx 1g -output hdfsOutputDirName
    
    • The above command launches a 1g node of H2O. We recommend you launch the cluster with at least four times the memory of your data file size.

    • mapperXmx is the mapper size or the amount of memory allocated to each node.

    • nodes is the number of nodes requested to form the cluster.

    • output is the name of the directory created each time an H2O cloud is launched, so the name must be unique for each launch.

  4. To monitor your job, direct your web browser to your standard job tracker Web UI. To access H2O’s Web UI, direct your web browser to one of the launched instances. If you are unsure where your JVM is launched, review the output from your command after the nodes have formed a cluster. Any of the nodes’ IP addresses will work, as there is no master node.

     Determining driver host interface for mapper->driver callback...
     [Possible callback IP address: 172.16.2.181]
     [Possible callback IP address: 127.0.0.1]
     ...
     Waiting for H2O cluster to come up...
     H2O node 172.16.2.184:54321 requested flatfile
     Sending flatfiles to nodes...
      [Sending flatfile to node 172.16.2.184:54321]
     H2O node 172.16.2.184:54321 reports H2O cluster size 1 
     H2O cluster (1 nodes) is up
     Blocking until the H2O cluster shuts down...
    

Hadoop Launch Parameters

How to Pass S3 Credentials to H2O

To use the Amazon Web Services (AWS) storage solution S3, you will need to pass your S3 access credentials to H2O. This allows you to access your data on S3 when importing data frames with path prefixes such as s3n://....

For security reasons, we recommend writing a script to read the access credentials, which are stored in a separate file. This keeps your credentials from propagating to other locations and makes it easier to change the credential information later.
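
For illustration, a minimal sketch of such a script in Python; it assumes a hypothetical two-line credentials file (Access Key ID on the first line, Secret Access Key on the second) and writes the core-site.xml used in the following sections:

    # Minimal sketch: build core-site.xml from a separate credentials file.
    # Assumes a hypothetical file "aws_credentials.txt" whose first line is the
    # Access Key ID and whose second line is the Secret Access Key.
    TEMPLATE = """<?xml version="1.0"?>
    <configuration>
      <property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>{access_key}</value>
      </property>
      <property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>{secret_key}</value>
      </property>
    </configuration>
    """

    with open("aws_credentials.txt") as f:
        lines = [line.strip() for line in f if line.strip()]
    access_key, secret_key = lines[0], lines[1]

    with open("core-site.xml", "w") as f:
        f.write(TEMPLATE.format(access_key=access_key, secret_key=secret_key))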

Standalone Instance

When running H2O in standalone mode (i.e., using the simple Java launch command), you can pass in the S3 credentials in two ways.

You can pass in credentials in standalone mode the same way you access data from HDFS in Hadoop mode: create a core-site.xml file and pass it in with the flag -hdfs_config. For an example core-site.xml file, refer to Core-site.xml.

Edit the properties in the core-site.xml file to include your Access Key ID and Secret Access Key, as shown in the following example:

<property>
  <name>fs.s3n.awsAccessKeyId</name>
  <value>[AWS ACCESS KEY ID]</value>
</property>

<property>
  <name>fs.s3n.awsSecretAccessKey</name>
  <value>[AWS SECRET ACCESS KEY]</value>
</property>

Launch with the configuration file core-site.xml by running one of the following in the command line:

    java -jar h2o.jar -hdfs_config core-site.xml

or

    java -cp h2o.jar water.H2OApp -hdfs_config core-site.xml

Then import the data with the S3 URL path, e.g. s3n://bucket/path/to/file.csv, using importFile.

Accessing S3 Data from Hadoop Instance

H2O launched atop Hadoop servers can still access S3 data in addition to having access to HDFS. To do this, edit Hadoop’s core-site.xml the same way, where AWS_ACCESS_KEY represents your user name and AWS_SECRET_KEY represents your password. Then set the HADOOP_CONF_DIR environment property to the directory containing the core-site.xml file. For an example core-site.xml file, refer to Core-site.xml. Typically, the configuration directory for most Hadoop distributions is /etc/hadoop/conf.

Then you can import the data with the S3 URL path:

To import the data from the Flow API:

    importFiles [ "s3n://bucket/path/to/file.csv" ]

To import the data from the R API:

    h2o.importFile(path = "s3n://bucket/path/to/file.csv")

To import the data from the Python API:

    h2o.import_frame(path = "s3n://bucket/path/to/file.csv")

Sparkling Water Instance

For Sparkling Water, the S3 credentials need to be passed via HADOOP_CONF_DIR, which points to a core-site.xml containing the AWS_ACCESS_KEY and AWS_SECRET_KEY. On Hadoop, the configuration directory is typically set to /etc/hadoop/conf:

    export HADOOP_CONF_DIR=/etc/hadoop/conf

If you are running a local instance, create a configuration directory locally with the core-site.xml and then export the path to the configuration directory:

    mkdir CONF
    cd CONF
    export HADOOP_CONF_DIR=`pwd`

Core-site.xml Example

The following is an example core-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!--
    <property>
    <name>fs.default.name</name>
    <value>s3n://<your s3 bucket></value>
    </property>
    -->

    <property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>insert access key here</value>
    </property>

    <property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>insert secret key here</value>
    </property>
</configuration>

Data Science in H2O-Dev

Commonalities

Missing Value Handling for Training

If missing values are found in the validation frame during model training, or during the scoring process when creating predictions, the missing values are automatically imputed.

If missing values are found during POJO scoring, the prediction is returned as NaN.

K-Means

Introduction

K-Means falls in the general category of clustering algorithms.

Defining a K-Means Model

Interpreting a K-Means Model

By default, the following output displays:

K-Means randomly chooses starting points and converges to a local minimum of centroids. The number of clusters is arbitrary and should be thought of as a tuning parameter. The output is a matrix of the cluster assignments and the coordinates of the cluster centers in terms of the originally chosen attributes. Your cluster centers may differ slightly from run to run, as this problem is Non-deterministic Polynomial-time (NP)-hard.

FAQ

K-Means Algorithm

The number of clusters $K$ is user-defined and is determined a priori.

  1. Choose $K$ initial cluster centers $m_{k}$ according to one of the following:

    • Randomization: Choose $K$ clusters from the set of $N$ observations at random so that each observation has an equal chance of being chosen.

    • Plus Plus

      a. Choose one center $m_{1}$ at random.

      b. Calculate the difference between $m_{1}$ and each of the remaining $N-1$ observations $x_{i}$: $d(x_{i}, m_{1}) = \lVert x_{i}-m_{1}\rVert^2$

      c. Let $P(i)$ be the probability of choosing $x_{i}$ as $m_{2}$. Weight $P(i)$ by $d(x_{i}, m_{1})$ so that those $x_{i}$ furthest from $m_{1}$ have a higher probability of being selected than those $x_{i}$ close to $m_{1}$.

      d. Choose the next center $m_{2}$ by drawing at random according to the weighted probability distribution.

      e. Repeat until $K$ centers have been chosen.

    • Furthest

      a. Choose one center $m_{1}$ at random.

      b. Calculate the difference between $m_{1}$ and each of the remaining $N-1$ observations $x_{i}$: $d(x_{i}, m_{1}) = \lVert x_{i}-m_{1}\rVert^2$

      c. Choose $m_{2}$ to be the $x_{i}$ that maximizes $d(x_{i}, m_{1})$.

      d. Repeat until $K$ centers have been chosen.

  2. Once $K$ initial centers have been chosen, calculate the difference between each observation $x_{i}$ and each of the centers $m_{1},…,m_{K}$, where difference is the squared Euclidean distance taken over $p$ parameters:

    $d(x_{i}, m_{k})=\sum_{j=1}^{p}(x_{ij}-m_{kj})^2=\lVert x_{i}-m_{k}\rVert^2$

  3. Assign $x_{i}$ to the cluster $k$ defined by $m_{k}$ that minimizes $d(x_{i}, m_{k})$.

  4. When all observations $x_{i}$ are assigned to a cluster, calculate the mean of the points in the cluster:

    $\bar{x}(k)=\lbrace\bar{x}_{i1},…,\bar{x}_{ip}\rbrace$

  5. Set the $\bar{x}(k)$ as the new cluster centers $m_{k}$. Repeat steps 2 through 5 until the specified maximum number of iterations is reached or the cluster assignments of the $x_{i}$ are stable.
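
The following is a minimal sketch of this procedure (random initialization followed by the assign/update loop) in NumPy. It is illustrative only, not H2O's distributed implementation, and for simplicity it assumes no cluster becomes empty:

    import numpy as np

    def kmeans(X, K, max_iters=100, seed=0):
        rng = np.random.default_rng(seed)
        # Step 1 (Randomization): pick K distinct observations as initial centers.
        centers = X[rng.choice(len(X), size=K, replace=False)]
        for _ in range(max_iters):
            # Steps 2-3: squared Euclidean distances; assign each row to the nearest center.
            d = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
            assignments = d.argmin(axis=1)
            # Steps 4-5: recompute each center as the mean of its assigned points.
            new_centers = np.array([X[assignments == k].mean(axis=0) for k in range(K)])
            if np.allclose(new_centers, centers):  # cluster assignments are stable
                break
            centers = new_centers
        return centers, assignments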

References

Hastie, Trevor, Robert Tibshirani, and Jerome H Friedman. The Elements of Statistical Learning. Vol. 1. Springer New York, 2001.

Xiong, Hui, Junjie Wu, and Jian Chen. “K-means Clustering Versus Validation Measures: A Data- distribution Perspective.” Systems, Man, and Cybernetics, Part B: Cybernetics, IEEE Transactions on 39.2 (2009): 318-331.


GLM

Introduction

Generalized Linear Models (GLM) estimate regression models for outcomes following exponential distributions. In addition to the Gaussian (i.e. normal) distribution, these include Poisson, binomial, and gamma distributions. Each serves a different purpose, and depending on distribution and link function choice, can be used either for prediction or classification.

The GLM suite includes:

Defining a GLM Model

Interpreting a GLM Model

By default, the following output displays:

FAQ

GLM Algorithm

Following the definitive text by P. McCullagh and J.A. Nelder (1989) on the generalization of linear models to non-linear distributions of the response variable Y, H2O fits GLM models based on maximum likelihood estimation via iteratively reweighted least squares.

Let $y_{1},…,y_{n}$ be $n$ observations of the independent, random response variable $Y_{i}$.

Assume that the observations are distributed according to a function from the exponential family and have a probability density function of the form:

$f(y_{i})=\exp\left[\frac{y_{i}\theta_{i} - b(\theta_{i})}{a_{i}(\phi)} + c(y_{i}; \phi)\right]$ where $\theta$ and $\phi$ are location and scale parameters, and $a_{i}(\phi)$, $b_{i}(\theta_{i})$, and $c_{i}(y_{i}; \phi)$ are known functions.

$a_{i}$ is of the form $a_{i}=\frac{\phi}{p_{i}}$, where $p_{i}$ is a known prior weight.

When $Y$ has a pdf from the exponential family:

$E(Y_{i})=\mu_{i}=b^{\prime}(\theta_{i})$ and $var(Y_{i})=\sigma_{i}^2=b^{\prime\prime}(\theta_{i})a_{i}(\phi)$

Let $g(\mu_{i})=\eta_{i}$ be a monotonic, differentiable transformation of the expected value of $y_{i}$. The function $\eta_{i}$ is the link function and follows a linear model.

$g(\mu_{i})=\eta_{i}=\mathbf{x_{i}^{\prime}}\beta$

When inverted: $\mu=g^{-1}(\mathbf{x_{i}^{\prime}}\beta)$
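
For example, in the binomial (logistic) case the link is the logit, $g(\mu_{i})=\log\frac{\mu_{i}}{1-\mu_{i}}$, so inverting gives $\mu_{i}=g^{-1}(\mathbf{x_{i}^{\prime}}\beta)=\frac{1}{1+e^{-\mathbf{x_{i}^{\prime}}\beta}}$.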

Maximum Likelihood Estimation

For an initial rough estimate of the parameters $\hat{\beta}$, use the estimate to generate fitted values: $\hat{\mu}_{i}=g^{-1}(\hat{\eta}_{i})$

Let $z$ be a working dependent variable such that $z_{i}=\hat{\eta}_{i}+(y_{i}-\hat{\mu}_{i})\frac{d\eta_{i}}{d\mu_{i}}$,

where $\frac{d\eta_{i}}{d\mu_{i}}$ is the derivative of the link function evaluated at the trial estimate.

Calculate the iterative weights: $w_{i}=\frac{p_{i}}{b^{\prime\prime}(\theta_{i})\left(\frac{d\eta_{i}}{d\mu_{i}}\right)^{2}}$

Where $b^{\prime\prime}$ is the second derivative of $b(\theta_{i})$ evaluated at the trial estimate.

Assume $a_{i}(\phi)$ is of the form $\frac{\phi}{p_{i}}$. The weight $w_{i}$ is inversely proportional to the variance of the working dependent variable $z_{i}$ for current parameter estimates and proportionality factor $\phi$.

Regress $z_{i}$ on the predictors $x_{i}$ using the weights $w_{i}$ to obtain new estimates of $\beta$: $\hat{\beta}=(\mathbf{X}^{\prime}\mathbf{W}\mathbf{X})^{-1}\mathbf{X}^{\prime}\mathbf{W}\mathbf{z}$

where $\mathbf{X}$ is the model matrix, $\mathbf{W}$ is a diagonal matrix of $w_{i}$, and $\mathbf{z}$ is a vector of the working response variable $z_{i}$.

This process is repeated until the estimates $\hat{\beta}$ change by less than the specified amount.
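
A minimal sketch of this loop for the binomial (logit-link) case, in NumPy; the function name and structure are illustrative, not H2O's implementation:

    import numpy as np

    def irls_logistic(X, y, max_iters=25, tol=1e-8):
        """Minimal IRLS sketch for logistic regression (binomial GLM, logit link)."""
        n, p = X.shape
        beta = np.zeros(p)                    # rough initial estimate
        for _ in range(max_iters):
            eta = X @ beta                    # linear predictor, eta = X'beta
            mu = 1.0 / (1.0 + np.exp(-eta))   # fitted values, mu = g^{-1}(eta)
            deta_dmu = 1.0 / (mu * (1.0 - mu))  # logit link: d(eta)/d(mu)
            z = eta + (y - mu) * deta_dmu     # working dependent variable
            w = mu * (1.0 - mu)               # iterative weights (prior weights = 1)
            # Weighted least squares: beta = (X'WX)^{-1} X'Wz
            WX = X * w[:, None]
            beta_new = np.linalg.solve(X.T @ WX, X.T @ (w * z))
            if np.max(np.abs(beta_new - beta)) < tol:  # change below tolerance
                return beta_new
            beta = beta_new
        return beta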

Cost of computation

H2O can process large data sets because it relies on parallel processing. Large data sets are divided into smaller data sets and processed simultaneously, and the results are communicated between computers as needed throughout the process.

In GLM, data are split by rows but not by columns, because the predicted Y values depend on information in each of the predictor variable vectors. If $O$ is a complexity function, $N$ is the number of observations (rows), and $p$ is the number of predictors (columns), then

    $Runtime\propto p^3+\frac{N\,p^2}{CPUs}$

Distribution reduces the time it takes an algorithm to process because it decreases $N$.

Relative to $p$, the larger $\frac{N}{CPUs}$ becomes, the more trivial $p$ becomes to the overall computational cost. However, when $p$ is greater than $\frac{N}{CPUs}$, $O$ is dominated by $p$:

    $Complexity = O(p^3 + N\,p^2)$

References

Breslow, N E. “Generalized Linear Models: Checking Assumptions and Strengthening Conclusions.” Statistica Applicata 8 (1996): 23-41.

Frome, E L. “The Analysis of Rates Using Poisson Regression Models.” Biometrics (1983): 665-674.

Goldberger, Arthur S. “Best Linear Unbiased Prediction in the Generalized Linear Regression Model.” Journal of the American Statistical Association 57.298 (1962): 369-375.

Guisan, Antoine, Thomas C Edwards Jr, and Trevor Hastie. “Generalized Linear and Generalized Additive Models in Studies of Species Distributions: Setting the Scene.” Ecological modeling 157.2 (2002): 89-100.

Nelder, John A, and Robert WM Wedderburn. “Generalized Linear Models.” Journal of the Royal Statistical Society. Series A (General) (1972): 370-384.

Niu, Feng, et al. “Hogwild!: A lock-free approach to parallelizing stochastic gradient descent.” Advances in Neural Information Processing Systems 24 (2011): 693-701. (Implemented algorithm is on p. 5.)

Pearce, Jennie, and Simon Ferrier. “Evaluating the Predictive Performance of Habitat Models Developed Using Logistic Regression.” Ecological modeling 133.3 (2000): 225-245.

Press, S James, and Sandra Wilson. “Choosing Between Logistic Regression and Discriminant Analysis.” Journal of the American Statistical Association 73.364 (1978): 699–705.

Snee, Ronald D. “Validation of Regression Models: Methods and Examples.” Technometrics 19.4 (1977): 415-428.


DRF

Introduction

Distributed Random Forest (DRF) is a powerful classification tool. When given a set of data, DRF generates a forest of classification trees, rather than a single classification tree. Each of these trees is a weak learner built on a subset of rows and columns. More trees will reduce the variance. The classification from each H2O tree can be thought of as a vote; the class with the most votes determines the classification.

Defining a DRF Model

Interpreting a DRF Model

By default, the following output displays:

FAQ

DRF Algorithm

See Jan Vitek’s slide deck, “Distributed Random Forest” (0xdata, 5-2-2013).

References


Naïve Bayes

Introduction

Naïve Bayes (NB) is a classification algorithm that relies on strong assumptions of the independence of covariates in applying Bayes’ theorem. NB models are commonly used as an alternative to decision trees for classification problems.

Defining a Naïve Bayes Model

Interpreting a Naïve Bayes Model

The output from Naïve Bayes is a list of tables containing the a-priori and conditional probabilities of each class of the response. The a-priori probability is the estimated probability of a particular class before observing any of the predictors. Each conditional probability table corresponds to a predictor column. The row headers are the classes of the response and the column headers are the classes of the predictor. Thus, in the table below, the probability that a person is male (x), given that they did not survive (y), is 0.91543624.

                Sex
Survived       Male     Female
     No  0.91543624 0.08456376
     Yes 0.51617440 0.48382560

When the predictor is numeric, Naïve Bayes assumes it is sampled from a Gaussian distribution given the class of the response. The first column contains the mean and the second column contains the standard deviation of the distribution.

By default, the following output displays:

FAQ

Naïve Bayes Algorithm

The algorithm is presented for the simplified binomial case without loss of generality.

Under the Naive Bayes assumption of independence, given a training set $\lbrace(X^{(i)},\ y^{(i)});\ i=1,…,m\rbrace$ for a set of discrete-valued features $X$,

the joint likelihood of the data can be expressed as:

$\mathcal{L}\:(\phi(y),\:\phi_{i|y=1},\:\phi_{i|y=0})=\prod_{i=1}^{m} p(X^{(i)},\:y^{(i)})$

The model can be parameterized by:

$\phi_{i|y=0}=p(x_{i}=1|\ y=0);\quad \phi_{i|y=1}=p(x_{i}=1|\ y=1);\quad \phi(y)$

where $\phi_{i|y=0}=p(x_{i}=1|\ y=0)$ can be thought of as the fraction of the observed instances where feature $x_{i}$ is observed and the outcome is $y=0$, $\phi_{i|y=1}=p(x_{i}=1|\ y=1)$ is the fraction of the observed instances where feature $x_{i}$ is observed and the outcome is $y=1$, and so on.

The objective of the algorithm is to maximize the joint likelihood with respect to $\phi_{i|y=0}$, $\phi_{i|y=1}$, and $\phi(y)$,

where the maximum likelihood estimates are:

$\phi_{j|y=1}= \frac{\sum_{i=1}^{m}1(x_{j}^{(i)}=1\ \cap\ y^{(i)} = 1)}{\sum_{i=1}^{m}1(y^{(i)}=1)}$

$\phi_{j|y=0}= \frac{\sum_{i=1}^{m}1(x_{j}^{(i)}=1\ \cap\ y^{(i)} = 0)}{\sum_{i=1}^{m}1(y^{(i)}=0)}$

$\phi(y)= \frac{\sum_{i=1}^{m}1(y^{(i)} = 1)}{m}$

Once all parameters $\phi_{j|y}$ are fitted, the model can be used to predict new examples with features $X^{(i^{*})}$.

This is carried out by calculating:

$p(y=1|x)=\frac{\prod p(x_i|y=1)\, p(y=1)}{\prod p(x_i|y=1)p(y=1) \: +\: \prod p(x_i|y=0)p(y=0)}$

$p(y=0|x)=\frac{\prod p(x_i|y=0)\, p(y=0)}{\prod p(x_i|y=1)p(y=1) \: +\: \prod p(x_i|y=0)p(y=0)}$

and predicting the class with the highest probability.

It is possible that prediction sets contain features not originally seen in the training set. If this occurs, the maximum likelihood estimates for these features predict a probability of 0 for all cases of y.

Laplace smoothing allows a model to predict on features not seen in the training data by adjusting the maximum likelihood estimates to be:

$\phi_{j|y=1}= \frac{\sum_{i=1}^{m}1(x_{j}^{(i)}=1\ \cap\ y^{(i)} = 1) \: + \: 1}{\sum_{i=1}^{m}1(y^{(i)}=1) \: + \: 2}$

$\phi_{j|y=0}= \frac{\sum_{i=1}^{m}1(x_{j}^{(i)}=1\ \cap\ y^{(i)} = 0) \: + \: 1}{\sum_{i=1}^{m}1(y^{(i)}=0) \: + \: 2}$

Note that in the general case where $y$ takes on $k$ values, there are $k+1$ modified parameter estimates, and the value added to the denominator is $k$ (rather than 2, as in the two-level classifier shown here).

Laplace smoothing should be used with care; it is generally intended to allow for predictions in rare events. As prediction data becomes increasingly distinct from training data, train new models when possible to account for a broader set of possible X values.
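
A minimal sketch of the binomial case with Laplace smoothing, in NumPy; the function names are illustrative, not H2O's implementation. It assumes binary features X (0/1) and labels y in {0, 1}:

    import numpy as np

    def fit_naive_bayes(X, y, laplace=1):
        """Fit phi(y) and the conditional feature probabilities with +1/+2 smoothing."""
        X, y = np.asarray(X), np.asarray(y)
        phi_y = y.mean()                                # phi(y) = P(y = 1)
        phi_x_y1 = (X[y == 1].sum(axis=0) + laplace) / ((y == 1).sum() + 2 * laplace)
        phi_x_y0 = (X[y == 0].sum(axis=0) + laplace) / ((y == 0).sum() + 2 * laplace)
        return phi_y, phi_x_y1, phi_x_y0

    def predict_naive_bayes(x, phi_y, phi_x_y1, phi_x_y0):
        """Predict the class with the highest posterior probability for one example x."""
        like1 = np.prod(np.where(x == 1, phi_x_y1, 1 - phi_x_y1)) * phi_y
        like0 = np.prod(np.where(x == 1, phi_x_y0, 1 - phi_x_y0)) * (1 - phi_y)
        return int(like1 > like0)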

References

Hastie, Trevor, Robert Tibshirani, and Jerome H Friedman. The Elements of Statistical Learning. Vol. 1. Springer New York, 2001.

Ng, Andrew. “Generative Learning algorithms.” (2008).


PCA

PCA is currently in progress in H2O-Dev. Once implementation of this algorithm is complete, this section of the document will be updated.


GBM

Introduction

Gradient Boosted Regression and Gradient Boosted Classification are forward learning ensemble methods. The guiding heuristic is that good predictive results can be obtained through increasingly refined approximations. H2O’s GBM sequentially builds regression trees on all the features of the dataset in a fully distributed way - each tree is built in parallel.

Defining a GBM Model

Interpreting a GBM Model

The output for GBM includes the following:

FAQ

GBM Algorithm

H2O’s Gradient Boosting Algorithms follow the algorithm specified by Hastie et al (2001):

Initialize $f_{k0} = 0,\: k=1,2,…,K$

For $m=1$ to $M:$

  (a) Set $p_{k}(x)=\frac{e^{f_{k}(x)}}{\sum_{l=1}^{K}e^{f_{l}(x)}},\:k=1,2,…,K$

  (b) For $k=1$ to $K$:

    i. Compute $r_{ikm}=y_{ik}-p_{k}(x_{i}),\:i=1,2,…,N.$

    ii. Fit a regression tree to the targets $r_{ikm},\:i=1,2,…,N$, giving terminal regions $R_{jkm},\:j=1,2,…,J_{m}.$

    iii. Compute $\gamma_{jkm}=\frac{K-1}{K}\:\frac{\sum_{x_{i}\in R_{jkm}}(r_{ikm})}{\sum_{x_{i}\in R_{jkm}}|r_{ikm}|(1-|r_{ikm}|)},\:j=1,2,…,J_{m}.$

    iv. Update $f_{km}(x)=f_{k,m-1}(x)+\sum_{j=1}^{J_{m}}\gamma_{jkm}I(x\in R_{jkm}).$

Output $\hat{f}_{k}(x)=f_{kM}(x),\:k=1,2,…,K.$
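
A compact sketch of this K-class loop, using scikit-learn regression trees as the base learners; it is illustrative only, not H2O's distributed implementation, and step iii is simplified to the trees' fitted values scaled by (K-1)/K rather than the per-region $\gamma_{jkm}$:

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def gbm_multiclass(X, y, K, M=50, max_depth=3):
        """Minimal K-class gradient boosting sketch (Hastie et al., Alg. 10.4)."""
        N = len(y)
        Y = np.eye(K)[y]                  # one-hot targets y_{ik}
        F = np.zeros((N, K))              # initialize f_{k0} = 0
        forests = []
        for m in range(M):
            # (a) softmax class probabilities p_k(x), stabilized against overflow
            P = np.exp(F - F.max(axis=1, keepdims=True))
            P /= P.sum(axis=1, keepdims=True)
            trees = []
            for k in range(K):            # (b) one regression tree per class
                r = Y[:, k] - P[:, k]     # i. residual targets r_{ikm}
                tree = DecisionTreeRegressor(max_depth=max_depth).fit(X, r)  # ii.
                # iii.-iv. simplified: add the tree's fit scaled by (K-1)/K
                F[:, k] += (K - 1) / K * tree.predict(X)
                trees.append(tree)
            forests.append(trees)
        return forests, F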

References

Dietterich, Thomas G, and Eun Bae Kong. “Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms.” ML-95 255 (1995).

Elith, Jane, John R Leathwick, and Trevor Hastie. “A Working Guide to Boosted Regression Trees.” Journal of Animal Ecology 77.4 (2008): 802-813

Friedman, Jerome H. “Greedy Function Approximation: A Gradient Boosting Machine.” Annals of Statistics (2001): 1189-1232.

Friedman, Jerome, Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. “Discussion of Boosting Papers.” Ann. Statist 32 (2004): 102-107

Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. “Additive Logistic Regression: A Statistical View of Boosting (With Discussion and a Rejoinder by the Authors).” The Annals of Statistics 28.2 (2000): 337-407

Hastie, Trevor, Robert Tibshirani, and Jerome H Friedman. The Elements of Statistical Learning. Vol. 1. Springer New York, 2001. (See page 339.)


Deep Learning

Introduction

H2O’s Deep Learning is based on a multi-layer feed-forward artificial neural network that is trained with stochastic gradient descent using back-propagation. The network can contain a large number of hidden layers consisting of neurons with tanh, rectifier and maxout activation functions. Advanced features such as adaptive learning rate, rate annealing, momentum training, dropout, L1 or L2 regularization, checkpointing and grid search enable high predictive accuracy. Each compute node trains a copy of the global model parameters on its local data with multi-threading (asynchronously), and contributes periodically to the global model via model averaging across the network.

Defining a Deep Learning Model

H2O Deep Learning models have many input parameters, many of which are only accessible via the expert mode. For most cases, use the default values. Please read the following instructions before building extensive Deep Learning models. The application of grid search and successive continuation of winning models via checkpoint restart is highly recommended, as model performance can vary greatly.

Interpreting a Deep Learning Model

To view the results, click the View button. The output for the Deep Learning model includes the following information for both the training and testing sets:

FAQ

This is something to look out for when your data includes high-cardinality categorical columns. Say you have three columns: zip code (70k levels), height, and income. The resulting number of internally one-hot encoded features will be 70,002, and only 3 of them will be activated (non-zero) for any given row. If the first hidden layer has 200 neurons, then the resulting weight matrix will be of size 70,002 x 200, which can take a long time to train and converge. In this case, we recommend either reducing the number of categorical factor levels upfront (e.g., using h2o.interaction() from R), or specifying max_categorical_features to use feature hashing to reduce the dimensionality.
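
As a quick back-of-the-envelope check of that first-layer size (the counts below are just the example's numbers):

    zip_levels = 70_000        # categorical levels in the zip code column
    numeric_cols = 2           # height and income
    hidden_units = 200

    input_features = zip_levels + numeric_cols    # 70,002 one-hot encoded inputs
    print(input_features * hidden_units)          # 14,000,400 weights in the first layer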

Deep Learning Algorithm

For more information about how the Deep Learning algorithm works, refer to the Deep Learning booklet.

References

“Deep Learning.” Wikipedia: The free encyclopedia. Wikimedia Foundation, Inc. 1 May 2015. Web. 4 May 2015.

“Artificial Neural Network.” Wikipedia: The free encyclopedia. Wikimedia Foundation, Inc. 22 April 2015. Web. 4 May 2015.

Zeiler, Matthew D. ‘ADADELTA: An Adaptive Learning Rate Method’. Arxiv.org. N.p., 2012. Web. 4 May 2015.

Sutskever, Ilya et al. “On the importance of initialization and momentum in deep learning.” JMLR:W&CP vol. 28. (2013).

Hinton, G.E. et al. “Improving neural networks by preventing co-adaptation of feature detectors.” University of Toronto. (2012).

Wager, Stefan et al. “Dropout Training as Adaptive Regularization.” Advances in Neural Information Processing Systems. (2013).

Gedeon, TD. “Data mining of inputs: analysing magnitude and functional measures.” University of New South Wales. (1997).

Candel, Arno and Parmar, Viraj. “Deep Learning with H2O.” H2O.ai, Inc. (2015).

Deep Learning Training

Slideshare slide decks

Youtube channel

Candel, Arno. “The Definitive Performance Tuning Guide for H2O Deep Learning.” H2O.ai, Inc. (2015).

Niu, Feng, et al. “Hogwild!: A lock-free approach to parallelizing stochastic gradient descent.” Advances in Neural Information Processing Systems 24 (2011): 693-701. (algorithm implemented is on p.5)

Hawkins, Simon et al. “Outlier Detection Using Replicator Neural Networks.” CSIRO Mathematical and Information Sciences

REST API Reference
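
Any HTTP client can exercise the endpoints below. For example, here is a minimal Python sketch, assuming an H2O instance listening on localhost:54321:

    import json
    import urllib.request

    BASE = "http://localhost:54321"  # assumed H2O node address

    def h2o_get(path):
        """GET a REST endpoint and decode the JSON response."""
        with urllib.request.urlopen(BASE + path) as resp:
            return json.loads(resp.read().decode("utf-8"))

    about = h2o_get("/3/About")      # properties of this running H2O instance
    cloud = h2o_get("/3/Cloud")      # node status for the whole cloud
    print(about, cloud)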

GET /3/About

Return information about this H2O.

InputAboutV3
OutputAboutV3

GET /3/Cloud

Determine the status of the nodes in the H2O cloud.

InputCloudV3
OutputCloudV3

HEAD /3/Cloud

Determine the status of the nodes in the H2O cloud.

InputCloudV3
OutputCloudV3

POST /3/CreateFrame

Create a synthetic H2O Frame.

InputCreateFrameV3
OutputCreateFrameV3

DELETE /3/DKV

Remove all keys from the H2O distributed K/V store.

InputRemoveAllV3
OutputRemoveAllV3

DELETE /3/DKV/(?.*)

Remove an arbitrary key from the H2O distributed K/V store.

InputRemoveV3
OutputRemoveV3

GET /3/DownloadDataset

Download a Frame’s data as a CSV stream.

InputDownloadDataV3
OutputDownloadDataV3

GET /3/Find

Find a value within a Frame.

InputFindV3
OutputFindV3

GET /3/Frames

Return all Frames in the H2O distributed K/V store.

InputFramesV3
OutputFramesV3

DELETE /3/Frames

Delete all Frames from the H2O distributed K/V store.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)

Return the specified Frame.

InputFramesV3
OutputFramesV3

DELETE /3/Frames/(?.*)

Delete the specified Frame from the H2O distributed K/V store.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns

Return all the columns from a Frame.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns/(?.*)

Return the specified column from a Frame.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns/(?.*)/domain

Return the domains for the specified column. “null” if the column is not an Enum.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns/(?.*)/summary

Return the summary metrics for a column, e.g. mins, maxes, mean, sigma, percentiles, etc.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/export/(?.*)/overwrite/(?.*)

Export a Frame to the given path with optional overwrite.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/summary

Return a Frame, including the histograms, after forcing computation of rollups.

InputFramesV3
OutputFramesV3

GET /3/ImportFiles

Import raw data files into a single-column H2O Frame.

InputImportFilesV3
OutputImportFilesV3

GET /3/InitID

Issue a new session ID.

InputInitIDV3
OutputInitIDV3

GET /3/JStack

Return stack traces from all the nodes in the H2O cluster.

InputJStackV3
OutputJStackV3

GET /3/Jobs

Get a list of all the H2O Jobs (long-running actions).

InputJobsV3
OutputSchema

GET /3/Jobs/(?.*)

Get the status of the given H2O Job (long-running action).

InputJobsV3
OutputSchema

POST /3/Jobs/(?.*)/cancel

Cancel a running job.

InputJobsV3
OutputSchema

GET /3/KillMinus3

Send a “kill -3” signal (thread dump) to this node.

InputKillMinus3V3
OutputKillMinus3V3

POST /3/LogAndEcho

Save a message to the H2O logfile.

InputLogAndEchoV3
OutputLogAndEchoV3

GET /3/Logs/nodes/(?.*)/files/(?.*)

Get named log file for a node.

InputLogsV3
OutputLogsV3

POST /3/MakeGLMModel

Make a new GLM model based on an existing one.

InputMakeGLMModelV3
OutputGLMModelV3

GET /3/Metadata/endpoints

Return a list of all the REST API endpoints.

InputDocsV3
OutputDocsV3

GET /3/Metadata/endpoints/(?[0-9]+)

Return the REST API endpoint metadata, including documentation, for the endpoint specified by number.

InputDocsV3
OutputDocsV3

GET /3/Metadata/endpoints/(?.*)

Return the REST API endpoint metadata, including documentation, for the endpoint specified by path.

InputDocsV3
OutputDocsV3

GET /3/Metadata/schemaclasses/(?.*)

Return the REST API schema metadata for specified schema class.

InputDocsV3
OutputDocsV3

GET /3/Metadata/schemas

Return list of all REST API schemas.

InputDocsV3
OutputDocsV3

GET /3/Metadata/schemas/(?.*)

Return the REST API schema metadata for specified schema.

InputDocsV3
OutputDocsV3

POST /3/MissingInserter

Insert missing values.

InputMissingInserterV3
OutputMissingInserterV3

GET /3/ModelBuilders

Return the Model Builder metadata for all available algorithms.

InputModelBuildersV3
OutputModelBuildersV3

GET /3/ModelBuilders/(?.*)

Return the Model Builder metadata for the specified algorithm.

InputModelBuildersV3
OutputModelBuildersV3

POST /3/ModelBuilders/deeplearning

Train a Deep Learning model on the specified Frame.

InputDeepLearningV3
OutputSchema

POST /3/ModelBuilders/deeplearning/parameters

Validate a set of Deep Learning model builder parameters.

InputDeepLearningV3
OutputDeepLearningV3

POST /3/ModelBuilders/drf

Train a DRF model on the specified Frame.

InputDRFV3
OutputSchema

POST /3/ModelBuilders/drf/parameters

Validate a set of DRF model builder parameters.

InputDRFV3
OutputDRFV3

POST /3/ModelBuilders/gbm

Train a GBM model on the specified Frame.

InputGBMV3
OutputSchema

POST /3/ModelBuilders/gbm/parameters

Validate a set of GBM model builder parameters.

InputGBMV3
OutputGBMV3

POST /3/ModelBuilders/glm

Train a GLM model on the specified Frame.

InputGLMV3
OutputSchema

POST /3/ModelBuilders/glm/parameters

Validate a set of GLM model builder parameters.

InputGLMV3
OutputGLMV3

POST /3/ModelBuilders/kmeans

Train a KMeans model on the specified Frame.

InputKMeansV3
OutputSchema

POST /3/ModelBuilders/kmeans/parameters

Validate a set of KMeans model builder parameters.

InputKMeansV3
OutputKMeansV3

POST /3/ModelBuilders/naivebayes

Train a Naive Bayes model on the specified Frame.

InputNaiveBayesV3
OutputSchema

POST /3/ModelBuilders/naivebayes/parameters

Validate a set of Naive Bayes model builder parameters.

InputNaiveBayesV3
OutputNaiveBayesV3

POST /3/ModelBuilders/pca

Train a PCA model on the specified Frame.

InputPCAV3
OutputSchema

POST /3/ModelBuilders/pca/parameters

Validate a set of PCA model builder parameters.

InputPCAV3
OutputPCAV3

GET /3/ModelMetrics

Return all the saved scoring metrics.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/frames/(?.*)

Return the saved scoring metrics for the specified Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/frames/(?.*)/models/(?.*)

Return the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

DELETE /3/ModelMetrics/frames/(?.*)/models/(?.*)

Delete the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/models/(?.*)

Return the saved scoring metrics for the specified Model.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/models/(?.*)/frames/(?.*)

Return the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

DELETE /3/ModelMetrics/models/(?.*)/frames/(?.*)

Delete the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

POST /3/ModelMetrics/models/(?.*)/frames/(?.*)

Return the scoring metrics for the specified Frame with the specified Model. If the Frame has already been scored with the Model then cached results will be returned; otherwise predictions for all rows in the Frame will be generated and the metrics will be returned.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/Models

Return all Models from the H2O distributed K/V store.

InputModelsV3
OutputModelsV3

DELETE /3/Models

Delete all Models from the H2O distributed K/V store.

InputModelsV3
OutputModelsV3

GET /3/Models/(?.*)

Return the specified Model from the H2O distributed K/V store, optionally with the list of compatible Frames.

InputModelsV3
OutputModelsV3

DELETE /3/Models/(?.*)

Delete the specified Model from the H2O distributed K/V store.

InputModelsV3
OutputModelsV3

GET /3/Models/(?.*)/preview

Return a potentially abridged model suitable for viewing in a browser (currently only used for Java model code).

InputModelsV3
OutputModelsV3

GET /3/NetworkTest

Run a network test among the nodes in the H2O cluster.

InputNetworkTestV3
OutputNetworkTestV3

POST /3/NodePersistentStorage/(?.*)

Store a value.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/(?.*)

Return all keys stored for a given category.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

POST /3/NodePersistentStorage/(?.*)/(?.*)

Store a named value.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/(?.*)/(?.*)

Return value for a given name.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

DELETE /3/NodePersistentStorage/(?.*)/(?.*)

Delete a key.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/categories/(?.*)/exists

Return true if the specified category exists, otherwise false.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/categories/(?.*)/names/(?.*)/exists

Return true if the specified name exists in the specified category, otherwise false.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/configured

Return true if NodePersistentStorage is configured, otherwise false.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

POST /3/Parse

Parse a raw byte-oriented Frame into a useful columnar data Frame.

InputParseV3
OutputParseV3

POST /3/ParseSetup

Guess the parameters for parsing raw byte-oriented data into an H2O Frame.

InputParseSetupV3
OutputParseSetupV3

POST /3/Predictions/models/(?.*)/frames/(?.*)

Score (generate predictions) for the specified Frame with the specified Model. Both the Frame of predictions and the metrics will be returned.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/Profiler

Report profiling information from all nodes.

InputProfilerV3
OutputProfilerV3

POST /3/Rapids

Execute a Rapids (R-like) expression.

InputRapidsV3
OutputRapidsV3

GET /3/Rapids/isEval

Check whether a Rapids expression has been evaluated.

InputRapidsV3
OutputRapidsV3

POST /3/Shutdown

Shut down the cluster.

InputShutdownV3
OutputShutdownV3

POST /3/SplitFrame

Split an H2O Frame.

InputSplitFrameV3
OutputSplitFrameV3

GET /3/Timeline

Return a timeline of recent events recorded on the nodes.

InputTimelineV3
OutputTimelineV3

GET /3/Tutorials

H2O tutorials.

InputTutorialsV3
OutputTutorialsV3

GET /3/Typeahead/files

Typeahead handler for filename completion.

InputTypeaheadV3
OutputSchema

POST /3/UnlockKeys

Unlock all keys in the H2O distributed K/V store, to attempt to recover from a crash.

InputUnlockKeysV3
OutputUnlockKeysV3

GET /3/WaterMeterCpuTicks/(?.*)

Return a CPU usage snapshot of all cores of all nodes in the H2O cluster.

InputWaterMeterCpuTicksV3
OutputWaterMeterCpuTicksV3

GET /3/WaterMeterIo

Return IO usage snapshot of all nodes in the H2O cluster.

InputWaterMeterIoV3
OutputWaterMeterIoV3

GET /3/WaterMeterIo/(?.*)

Return IO usage snapshot of all nodes in the H2O cluster.

InputWaterMeterIoV3
OutputWaterMeterIoV3

GET /99/Sample

Example of an experimental endpoint. Call via /EXPERIMENTAL/Sample. Experimental endpoints can change at any moment.

InputCloudV3
OutputCloudV3

REST API Schema Reference

AboutEntryV3

name
string
Property nameOut
value
string
Property valueOut

AboutV3

entries
Iced[]
List of properties about this running H2O instanceOut

CloudV3

skip_ticks
boolean
skip_ticksIn
version
string
versionOut
node_idx
int
Node index number cloud status is collected from (zero-based)Out
cloud_name
string
cloud_nameOut
cloud_size
int
cloud_sizeOut
cloud_uptime_millis
long
cloud_uptime_millisOut
cloud_healthy
boolean
cloud_healthyOut
bad_nodes
int
Nodes reporting unhealthyOut
consensus
boolean
Cloud voting is stableOut
locked
boolean
Cloud is accepting new members or notOut
nodes
Iced[]
nodesOut

ClusteringModelBuilderSchema

parameters
Parameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
job
Job
Job KeyOut
validation_messages
ValidationMessage[]
Parameter validation messagesOut
validation_error_count
int
Count of parameter validation errorsOut

ClusteringModelParametersSchema

k
int
Number of clustersIn/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
drop_na20_cols
boolean
Drop columns with more than 20% missing valuesIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out

ColSpecifierV2

column_name
string
Name of the columnIn/Out
is_member_of_frames
string[]
List of fields which specify columns that must contain this columnIn/Out

ColV2

label
string
labelOut
missing_count
long
missingOut
zero_count
long
zerosOut
positive_infinity_count
long
positive infinitiesOut
negative_infinity_count
long
negative infinitiesOut
mins
double[]
minsOut
maxs
double[]
maxsOut
mean
double
meanOut
sigma
double
sigmaOut
type
string
datatype: {enum, string, int, real, time, uuid}Out
domain
string[]
domain; not-null for enum columns onlyOut
data
double[]
dataOut
string_data
string[]
string dataOut
precision
byte
decimal precision, -1 for all digitsOut
histogram_bins
long[]
Histogram bins; null if not computedOut
histogram_base
double
Start of histogram bin zeroOut
histogram_stride
double
Stride per binOut
percentiles
double[]
Percentile values, matching the default percentilesOut

ColumnSpecsBase

name
string
Column NameOut
type
string
Column TypeOut
format
string
Column Format (printf)Out
description
string
Column DescriptionOut

ConfusionMatrixBase

table
TwoDimTable
Annotated confusion matrixOut

ConfusionMatrixV3

table
TwoDimTable
Annotated confusion matrixOut

CoxPHModelOutputV3

names
string[]
Column names.Out
domains
string[][]
Domains for categorical (enum) columns.Out
model_category
enum
Category of the model (e.g., Binomial).Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
help
Map
Help information for output fieldsOut

CoxPHModelV3

model_id
Key
Model keyIn/Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
parameters
CoxPHParameters
The build parameters for the model (e.g. K for KMeans).Out
output
CoxPHOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out

CoxPHParametersV3

model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
drop_na20_cols
boolean
Drop columns with more than 20% missing valuesIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out

CoxPHV3

parameters
CoxPHParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
job
Job
Job KeyOut
validation_messages
ValidationMessage[]
Parameter validation messagesOut
validation_error_count
int
Count of parameter validation errorsOut

CreateFrameV3

rows
long
Number of rowsIn
cols
int
Number of data columns (in addition to the first response column)In
seed
long
Random number seedIn
randomize
boolean
Whether frame should be randomizedIn
value
long
Constant value (for randomize=false)In
real_range
long
Range for real variables (-range … range)In
categorical_fraction
double
Fraction of categorical columns (for randomize=true)In
factors
int
Factor levels for categorical variablesIn
integer_fraction
double
Fraction of integer columns (for randomize=true)In
integer_range
long
Range for integer variables (-range … range)In
binary_fraction
double
Fraction of binary columns (for randomize=true)In
binary_ones_fraction
double
Fraction of 1’s in binary columnsIn
missing_fraction
double
Fraction of missing valuesIn
response_factors
int
Number of factor levels of the first column (1=real, 2=binomial, N=multinomial)In
has_response
boolean
Whether an additional response column should be generatedIn
key
Key
Job KeyIn
description
string
Job descriptionIn
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
runtimeOut
exception
string
exceptionOut

DRFModelOutputV3

variable_importances
TwoDimTable
Variable ImportancesOut
init_f
double
The Intercept term, the initial model function value to which trees make adjustmentsOut
names
string[]
Column names.Out
domains
string[][]
Domains for categorical (enum) columns.Out
model_category
enum
Category of the model (e.g., Binomial).Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
help
Map
Help information for output fieldsOut

DRFModelV3

model_id
Key
Model keyIn/Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
parameters
DRFParameters
The build parameters for the model (e.g. K for KMeans).Out
output
DRFOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out

DRFParametersV3

mtries
int
Number of columns to randomly select at each level, or -1 for sqrt(#cols)In
sample_rate
float
Sample rate, from 0. to 1.0In
build_tree_one_node
boolean
Run on one node only; no network overhead but fewer cpus used. Suitable for small datasets.In
ntrees
int
Number of trees.In
max_depth
int
Maximum tree depth.In
min_rows
int
Fewest allowed observations in a leaf (in R called ‘nodesize’).In
nbins
int
Build a histogram of this many bins, then split at the best pointIn
seed
long
Seed for pseudo random number generator (if applicable)In
response_column
VecSpecifier
Response columnIn/Out
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
drop_na20_cols
boolean
Drop columns with more than 20% missing valuesIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out

DRFV3

parameters
DRFParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
job
Job
Job KeyOut
validation_messages
ValidationMessage[]
Parameter validation messagesOut
validation_error_count
int
Count of parameter validation errorsOut

DStackTraceV2

node
string
Node nameOut
time
long
Unix epoch timeOut
thread_traces
string[]
One trace per threadOut

DeepLearningModelOutputV3

weights
Key[]
Frame keys for weight matricesIn
biases
Key[]
Frame keys for bias vectorsIn
variable_importances
TwoDimTable
Variable ImportancesOut
names
string[]
Column names.Out
domains
string[][]
Domains for categorical (enum) columns.Out
model_category
enum
Category of the model (e.g., Binomial).Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
help
Map
Help information for output fieldsOut

DeepLearningModelV3

model_id
Key
Model keyIn/Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
parameters
DeepLearningParameters
The build parameters for the model (e.g. K for KMeans).Out
output
DeepLearningModelOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out

DeepLearningParametersV3

checkpoint
Key
Model checkpoint to resume training withIn/Out
override_with_best_model
boolean
If enabled, override the final model with the best model found during trainingIn/Out
autoencoder
boolean
Auto-EncoderIn/Out
use_all_factor_levels
boolean
Use all factor levels of categorical variables. Otherwise, the first factor level is omitted (without loss of accuracy). Useful for variable importances and auto-enabled for autoencoder.In/Out
activation
enum
Activation functionIn/Out
hidden
int[]
Hidden layer sizes (e.g. 100,100).In/Out
epochs
double
How many times the dataset should be iterated (streamed), can be fractionalIn/Out
train_samples_per_iteration
long
Number of training samples (globally) per MapReduce iteration. Special values are 0: one epoch, -1: all available data (e.g., replicated training data), -2: automaticIn/Out
target_ratio_comm_to_comp
double
Target ratio of communication overhead to computation. Only for multi-node operation and train_samples_per_iteration=-2 (auto-tuning)In/Out
seed
long
Seed for random numbers (affects sampling) - Note: only reproducible when running single threadedIn/Out
adaptive_rate
boolean
Adaptive learning rateIn/Out
rho
double
Adaptive learning rate time decay factor (similarity to prior updates)In/Out
epsilon
double
Adaptive learning rate smoothing factor (to avoid divisions by zero and allow progress)In/Out
rate
double
Learning rate (higher => less stable, lower => slower convergence)In/Out
rate_annealing
double
Learning rate annealing: rate / (1 + rate_annealing * samples)In/Out
rate_decay
double
Learning rate decay factor between layers (N-th layer: rate*alpha^(N-1))In/Out
momentum_start
double
Initial momentum at the beginning of training (try 0.5)In/Out
momentum_ramp
double
Number of training samples for which momentum increasesIn/Out
momentum_stable
double
Final momentum after the ramp is over (try 0.99)In/Out
nesterov_accelerated_gradient
boolean
Use Nesterov accelerated gradient (recommended)In/Out
input_dropout_ratio
double
Input layer dropout ratio (can improve generalization, try 0.1 or 0.2)In/Out
hidden_dropout_ratios
double[]
Hidden layer dropout ratios (can improve generalization), specify one value per hidden layer, defaults to 0.5In/Out
l1
double
L1 regularization (can add stability and improve generalization, causes many weights to become 0)In/Out
l2
double
L2 regularization (can add stability and improve generalization, causes many weights to be small)In/Out
max_w2
float
Constraint for squared sum of incoming weights per unit (e.g. for Rectifier)In/Out
initial_weight_distribution
enum
Initial Weight DistributionIn/Out
initial_weight_scale
double
(Uniform: -value…value, Normal: stddev)In/Out
loss
enum
Loss functionIn/Out
score_interval
double
Shortest time interval (in secs) between model scoringIn/Out
score_training_samples
long
Number of training set samples for scoring (0 for all)In/Out
score_validation_samples
long
Number of validation set samples for scoring (0 for all)In/Out
score_duty_cycle
double
Maximum duty cycle fraction for scoring (lower: more training, higher: more scoring).In/Out
classification_stop
double
Stopping criterion for classification error fraction on training data (-1 to disable)In/Out
regression_stop
double
Stopping criterion for regression error (MSE) on training data (-1 to disable)In/Out
quiet_mode
boolean
Enable quiet mode for less output to standard outputIn/Out
score_validation_sampling
enum
Method used to sample validation dataset for scoringIn/Out
diagnostics
boolean
Enable diagnostics for hidden layersIn/Out
variable_importances
boolean
Compute variable importances for input features (Gedeon method) - can be slow for large networksIn/Out
fast_mode
boolean
Enable fast mode (minor approximation in back-propagation)In/Out
ignore_const_cols
boolean
Ignore constant training columns (no information can be gained anyway)In/Out
force_load_balance
boolean
Force extra load balancing to increase training speed for small datasets (to keep all cores busy)In/Out
replicate_training_data
boolean
Replicate the entire training dataset onto every node for faster training on small datasetsIn/Out
single_node_mode
boolean
Run on a single node for fine-tuning of model parametersIn/Out
shuffle_training_data
boolean
Enable shuffling of training data (recommended if training data is replicated and train_samples_per_iteration is close to #nodes x #rows)In/Out
missing_values_handling
enum
Handling of missing values. Either Skip or MeanImputation.In/Out
sparse
boolean
Sparse data handling (Experimental).In/Out
col_major
boolean
Use a column major weight matrix for input layer. Can speed up forward propagation, but might slow down backpropagation (Experimental).In/Out
average_activation
double
Average activation for sparse auto-encoder (Experimental)In/Out
sparsity_beta
double
Sparsity regularization (Experimental)In/Out
max_categorical_features
int
Max. number of categorical features, enforced via hashing (Experimental)In/Out
reproducible
boolean
Force reproducibility on small data (will be slow - only uses 1 thread)In/Out
export_weights_and_biases
boolean
Whether to export Neural Network weights and biases to H2O FramesIn/Out
response_column
VecSpecifier
Response columnIn/Out
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
drop_na20_cols
boolean
Drop columns with more than 20% missing valuesIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out

DeepLearningV3

parameters
DeepLearningParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
job
Job
Job KeyOut
validation_messages
ValidationMessage[]
Parameter validation messagesOut
validation_error_count
int
Count of parameter validation errorsOut

DocsBase

num
int
Number for specifying an endpointIn
http_method
string
HTTP method (GET, POST, DELETE) if fetching by pathIn
path
string
Path for specifying an endpointIn
classname
string
Class name, for fetching docs for a schema (DEPRECATED)In
schemaname
string
Schema name (e.g., DocsV1), for fetching docs for a schemaIn
routes
Route[]
List of endpoint routesOut
schemas
SchemaMetadata[]
List of schemasOut
markdown
string
Table of Contents MarkdownOut

DocsV3

num
int
Number for specifying an endpointIn
http_method
string
HTTP method (GET, POST, DELETE) if fetching by pathIn
path
string
Path for specifying an endpointIn
classname
string
Class name, for fetching docs for a schema (DEPRECATED)In
schemaname
string
Schema name (e.g., DocsV1), for fetching docs for a schemaIn
routes
Route[]
List of endpoint routesOut
schemas
SchemaMetadata[]
List of schemasOut
markdown
string
Table of Contents MarkdownOut

DownloadDataV3

frame_id
Key
Frame to downloadIn
hex_string
boolean
Emit double values in a machine readable lossless format with Double.toHexString().In
csv
string
CSV StreamOut
filename
string
Suggested FilenameOut

EventV2

date
string
Time when the event was recorded. Format is hh:mm:ss:msIn
nanos
long
Time in nanosIn
type
enum
type of recorded eventIn

ExampleModelOutputV3

iterations
int
Iterations executedIn
maxs
double[]
(No description available)In
names
string[]
Column names.Out
domains
string[][]
Domains for categorical (enum) columns.Out
model_category
enum
Category of the model (e.g., Binomial).Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
help
Map
Help information for output fieldsOut

ExampleModelV3

model_id
Key
Model keyIn/Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
parameters
ExampleParameters
The build parameters for the model (e.g. K for KMeans).Out
output
ExampleOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requested. (Out)
checksum (long, Out): Checksum for all the things that go into building the Model.

(In the listings below, each field appears as "name (type, direction): description"; the direction tag marks the field as an input (In), an output (Out), or both (In/Out).)

ExampleParametersV3

max_iterations (int, In): Maximum training iterations.
model_id (Key, In/Out): Destination id for this model; auto-generated if not specified.
training_frame (Key, In/Out): Training frame.
validation_frame (Key, In/Out): Validation frame.
ignored_columns (string[], In/Out): Ignored columns.
drop_na20_cols (boolean, In/Out): Drop columns with more than 20% missing values.
score_each_iteration (boolean, In/Out): Whether to score during each iteration of model training.

ExampleV3

parameters (ExampleParameters, In): Model builder parameters.
__http_status (int, In): HTTP status to return for this build.
algo (string, Out): The algo name for this ModelBuilder.
algo_full_name (string, Out): The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).
can_build (enum[], Out): Model categories this ModelBuilder can build.
job (Job, Out): Job Key.
validation_messages (ValidationMessage[], Out): Parameter validation messages.
validation_error_count (int, Out): Count of parameter validation errors.

FieldMetadataBase

schema_name (string, In): Schema name for this field, if it is_schema, or the name of the enum, if it's an enum.
name (string, Out): Field name in the Schema.
type (string, Out): Type for this field.
is_schema (boolean, Out): Type for this field is itself a Schema.
value (Polymorphic, Out): Value for this field.
help (string, Out): A short help description to appear alongside the field in a UI.
label (string, Out): The label that should be displayed for the field if the name is insufficient.
required (boolean, Out): Is this field required, or is the default value generally sufficient?
level (enum, Out): How important is this field? The web UI uses the level to do a slow reveal of the parameters.
direction (enum, Out): Is this field an input, output or inout?
values (string[], Out): For enum-type fields the allowed values are specified using the values annotation; this is used in UIs to tell the user the allowed values, and for validation.
json (boolean, Out): Should this field be rendered in the JSON representation?
is_member_of_frames (string[], Out): For Vec-type fields, the set of Frame-type fields which must contain the named column; for example, for a SupervisedModel the response_column must be in both the training_frame and (if it's set) the validation_frame.
is_mutually_exclusive_with (string[], Out): For Vec-type fields, the set of other Vec-type fields which must contain mutually exclusive values; for example, for a SupervisedModel the response_column must be mutually exclusive with the weights_column.

FieldMetadataV3

Fields are identical to FieldMetadataBase above.

FindV3

key (Frame, In): Frame to search.
column (string, In): Column, or null for all.
row (long, In): Starting row for search.
match (string, In): Value to search for; leave blank to search for missing values.
prev (long, Out): Previous row with matching value, or -1.
next (long, Out): Next row with matching value, or -1.

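As a sketch of how the FindV3 input fields map onto a request, here is a hedged example. It assumes an H2O instance at localhost:54321, a hypothetical frame named iris.hex, and that the endpoint is exposed at /3/Find; verify the route and parameter encoding against your build.

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# 'key', 'column', 'row', and 'match' mirror the FindV3 input fields above.
resp = requests.get(H2O + "/3/Find", params={
    "key": "iris.hex",           # hypothetical frame id
    "column": "class",
    "row": 0,
    "match": "setosa",
})
found = resp.json()
# prev/next are row indices; -1 means no match in that direction.
print(found.get("prev"), found.get("next"))
```
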
FrameKeyV3

name (string, In/Out): Name (string representation) for this Key.
type (string, In/Out): Name (string representation) for the type of Keyed this Key points to.
URL (string, In/Out): URL for the resource that this Key points to, if one exists.

FrameV3

frame_id (Key, In): Key to inspect.
row_offset (long, In): Row offset to display.
row_count (int, In/Out): Number of rows to display.
checksum (long, Out): Checksum.
rows (long, Out): Number of rows.
byte_size (long, Out): Total data size in bytes.
is_text (boolean, Out): Raw unparsed text.
default_percentiles (double[], Out): Default percentiles, from 0 to 1.
columns (Vec[], Out): Columns.
compatible_models (string[], Out): Compatible models, if requested.
vec_ids (Key[], Out): The set of IDs of vectors in the Frame.
chunk_summary (TwoDimTable, Out): Chunk summary.
distribution_summary (TwoDimTable, Out): Distribution summary.

FramesBase

frame_id (Key, In): Name of Frame of interest.
column (string, In): Name of column of interest.
find_compatible_models (boolean, In): Find and return compatible models?
path (string, In): File output path.
force (boolean, In): Overwrite existing file.
row_offset (long, In/Out): Row offset to display.
row_count (int, In/Out): Number of rows to display.
frames (Frame[], Out): Frames.
compatible_models (Model[], Out): Compatible models.
domain (string[][], Out): Domains.

FramesV3

Fields are identical to FramesBase above.

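To make the paging fields above concrete, here is a hedged sketch that fetches a slice of a frame over REST. It assumes a local H2O at localhost:54321, a hypothetical frame iris.hex, and that frames are served at /3/Frames/{frame_id}; check the route listing for your build.

```python
import requests

H2O = "http://localhost:54321"           # assumed local H2O instance
frame_id = "iris.hex"                    # hypothetical frame id

# row_offset/row_count mirror the In/Out fields above; this asks for rows 100-199.
resp = requests.get(H2O + "/3/Frames/" + frame_id,
                    params={"row_offset": 100, "row_count": 100,
                            "find_compatible_models": "true"})
body = resp.json()
frame = body["frames"][0]                # FramesV3.frames is an array of FrameV3
print(frame["rows"], frame["byte_size"])
print(body.get("compatible_models"))     # populated because find_compatible_models was set
```
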
GBMModelOutputV3

variable_importances (TwoDimTable, Out): Variable Importances.
init_f (double, Out): The Intercept term, the initial model function value to which trees make adjustments.

Plus the common output fields (names, domains, model_category, model_summary, scoring_history, training_metrics, validation_metrics, help) listed under ModelOutputSchema below.

GBMModelV3

Fields are identical to ModelSchema below, with parameters typed as GBMParameters and output typed as GBMOutput.

GBMParametersV3

learn_rate (float, In): Learning rate from 0.0 to 1.0.
distribution (enum, In): Distribution function.
ntrees (int, In): Number of trees.
max_depth (int, In): Maximum tree depth.
min_rows (int, In): Fewest allowed observations in a leaf (in R called 'nodesize').
nbins (int, In): Build a histogram of this many bins, then split at the best point.
seed (long, In): Seed for pseudo random number generator (if applicable).

Plus the supervised-learning fields listed under SupervisedModelParametersSchema below and the common fields listed under ModelParametersSchema below.

GBMV3

Fields are identical to ModelBuilderSchema below, with parameters typed as GBMParameters.

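Builder schemas like GBMV3 are what a model-build request returns. As a hedged sketch, the following posts a GBM build over REST; it assumes a parsed frame named iris.hex with a "class" response column, and that builds are form-encoded POSTs to /3/ModelBuilders/gbm (verify both against your build).

```python
import requests

H2O = "http://localhost:54321"      # assumed local H2O instance

# Parameter names correspond to GBMParametersV3 'In' fields above.
params = {
    "training_frame": "iris.hex",   # hypothetical parsed frame
    "response_column": "class",
    "ntrees": 50,
    "max_depth": 5,
    "learn_rate": 0.1,
}
resp = requests.post(H2O + "/3/ModelBuilders/gbm", data=params)
body = resp.json()
if body.get("validation_error_count", 0) > 0:
    print(body["validation_messages"])  # parameter validation failures
else:
    print(body["job"])                  # a JobV3; poll it to completion
```
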
GLMModelOutputV3

coefficients_table (TwoDimTable, In): Table of coefficients.
coefficients_magnitude (TwoDimTable, In): Coefficient magnitudes.

Plus the common output fields listed under ModelOutputSchema below.

GLMModelV3

Fields are identical to ModelSchema below, with parameters typed as GLMParameters and output typed as GLMOutput.

GLMParametersV3

family (enum, In): Family. Use binomial for classification with logistic regression; the others are for regression problems.
solver (enum, In): Auto will pick the solver better suited for the given dataset; in case of lambda search, solvers may be changed during computation. IRLSM is fast on problems with a small number of predictors and for lambda search with L1 penalty; L_BFGS scales better for datasets with many columns.
alpha (double[], In): Distribution of regularization between L1 and L2.
lambda (double[], In): Regularization strength.
lambda_search (boolean, In): Use lambda search starting at lambda max; the given lambda is then interpreted as lambda min.
nlambdas (int, In): Number of lambdas to be used in a search.
standardize (boolean, In): Standardize numeric columns to have zero mean and unit variance.
max_iterations (int, In): Maximum number of iterations.
beta_epsilon (double, In): Beta epsilon; consider the model converged if the L1 norm of the current beta change is below this threshold.
link (enum, In): (No description available)
prior (double, In): Prior probability for y==1. To be used only for logistic regression, if the data has been sampled and the mean of the response does not reflect reality.
lambda_min_ratio (double, In): Minimum lambda used in lambda search, specified as a ratio of lambda_max.
use_all_factor_levels (boolean, In): By default, the first factor level is skipped from the possible set of predictors. Set this flag if you want to use all of the levels. Needs sufficient regularization to solve!
beta_constraints (Key, In): Beta constraints.
max_active_predictors (int, In): Maximum number of active predictors during computation. Use as a stopping criterion to prevent expensive model building with many predictors.

Plus the supervised-learning fields listed under SupervisedModelParametersSchema below and the common fields listed under ModelParametersSchema below.

GLMV3

Fields are identical to ModelBuilderSchema below, with parameters typed as GLMParameters.

GrepModelOutputV3

matches (string[], In): Matching strings.
offsets (long[], In): Byte offsets of matches.

Plus the common output fields listed under ModelOutputSchema below.

GrepModelV3

Fields are identical to ModelSchema below, with parameters typed as GrepParameters and output typed as GrepOutput.

GrepParametersV3

regex (string, In): Regex.

Plus the common fields listed under ModelParametersSchema below.

GrepV3

Fields are identical to ModelBuilderSchema below, with parameters typed as GrepParameters.

H2OErrorV3

timestamp (long, Out): Milliseconds since the epoch for the time that this H2OError instance was created. Generally this is a short time since the underlying error occurred.
error_url (string, Out): Error url.
msg (string, Out): Message intended for the end user (a data scientist).
dev_msg (string, Out): Potentially more detailed message intended for a developer (e.g. a front end engineer or someone designing a language binding).
http_status (int, Out): HTTP status code for this error.
values (Map, Out): Any values that are relevant to reporting or handling this error. Examples are a key name if the error is on a key, or a field name and object name if it's on a specific field.
exception_type (string, Out): Exception type, if any.
exception_msg (string, Out): Raw exception message, if any.
stacktrace (string[], Out): Stacktrace, if any.

H2OModelBuilderErrorV3

parameters (Parameters, Out): Model builder parameters.
validation_messages (ValidationMessage[], Out): Parameter validation messages.
validation_error_count (int, Out): Count of parameter validation errors.

Plus all of the H2OErrorV3 fields above.

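Failed requests return bodies shaped like these error schemas, so clients can report errors uniformly. A minimal, hedged sketch (it assumes a local H2O at localhost:54321 and a deliberately missing frame key):

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

resp = requests.get(H2O + "/3/Frames/does_not_exist.hex")  # hypothetical bad key
if resp.status_code != 200:
    err = resp.json()                     # shaped like H2OErrorV3
    print(err.get("http_status"), err.get("msg"))   # user-facing message
    print(err.get("dev_msg"))                       # developer-oriented detail
    for line in err.get("stacktrace") or []:
        print("  ", line)
```
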
HeartBeatEvent

sends (int, In): Number of sent heartbeats.
recvs (int, In): Number of received heartbeats.
date (string, In): Time when the event was recorded. Format is hh:mm:ss:ms.
nanos (long, In): Time in nanos.
type (enum, In): Type of recorded event.

IOEvent

io_flavor (string, In): Flavor of the recorded IO (ice/hdfs/...).
node (string, In): Node where this IO event happened.
data (string, In): Data info.
date (string, In): Time when the event was recorded. Format is hh:mm:ss:ms.
nanos (long, In): Time in nanos.
type (enum, In): Type of recorded event.

ImportFilesV3

path (string, In): Path.
files (string[], Out): Files.
destination_frames (string[], Out): Names.
fails (string[], Out): Fails.
dels (string[], Out): Dels.

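Importing is the first step of getting data into H2O over REST. A hedged sketch, assuming a local instance and a hypothetical file path (the /3/ImportFiles route should be verified against your build):

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# 'path' is the only input field; the response lists imported and failed files.
resp = requests.get(H2O + "/3/ImportFiles",
                    params={"path": "/tmp/iris.csv"})   # hypothetical path
body = resp.json()
print(body["files"])               # files actually found
print(body["destination_frames"])  # raw keys to feed into ParseSetup
print(body["fails"])               # anything that could not be imported
```
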
InitIDV3

session_key (string, Out): Session ID.

IoStatsEntry

backend (string, Out): Back end type.
store_count (long, Out): Number of store events.
store_bytes (long, Out): Cumulative stored bytes.
delete_count (long, Out): Number of delete events.
load_count (long, Out): Number of load events.
load_bytes (long, Out): Cumulative loaded bytes.

JStackV3

traces (DStackTrace[], Out): Stacktraces.

JobKeyV3

Fields are identical to FrameKeyV3 above.

JobV3

key (Key, In): Job Key.
description (string, In): Job description.
dest (Key, In/Out): Destination key.
status (string, Out): Job status.
progress (float, Out): Progress, from 0 to 1.
progress_msg (string, Out): Current progress status description.
start_time (long, Out): Start time.
msec (long, Out): Runtime.
exception (string, Out): Exception.

JobsV3

job_id (Key, In): Optional Job identifier.
jobs (Job[], Out): Jobs.

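Long-running operations (parses, model builds, splits) return a Job that you poll until it finishes. A hedged sketch of such a loop; it assumes /3/Jobs/{key} serves JobsV3 as above, that "RUNNING" is the in-progress status value, and that the job key can be placed directly in the URL (keys with special characters may need URL-encoding):

```python
import time
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

def wait_for_job(job_key):
    """Poll /3/Jobs/{key} until the job leaves the RUNNING state."""
    while True:
        body = requests.get(H2O + "/3/Jobs/" + job_key).json()
        job = body["jobs"][0]          # JobsV3.jobs is an array of JobV3
        print(job["status"], round(job["progress"] * 100), "%")
        if job["status"] != "RUNNING":
            return job                 # DONE, FAILED, etc.
        time.sleep(1)
```
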
KMeansModelOutputV3

centers (TwoDimTable, In): Cluster Centers[k][features].
centers_std (TwoDimTable, In): Cluster Centers[k][features] on Standardized Data.

Plus the common output fields listed under ModelOutputSchema below.

KMeansModelV3

Fields are identical to ModelSchema below, with parameters typed as KMeansParameters and output typed as KMeansOutput.

KMeansParametersV3

user_points (Key, In): User-specified points.
max_iterations (int, In): Maximum training iterations.
standardize (boolean, In): Standardize columns.
seed (long, In): RNG Seed.
init (enum, In): Initialization mode.
k (int, In/Out): Number of clusters.

Plus the common fields listed under ModelParametersSchema below.

KMeansV3

Fields are identical to ModelBuilderSchema below, with parameters typed as KMeansParameters.

KeyV3

Fields are identical to FrameKeyV3 above.

KillMinus3V3

(No fields)

LogAndEchoV3

message (string, In): Message to be Logged and Echoed.

LogsV3

nodeidx (int, In): Index of node to query logs for (0-based); -1 means current node.
name (string, In): Which specific log file to read from the log file directory. If left unspecified, the system chooses a default for you.
log (string, Out): Content of log file.

MakeGLMModelV3

model (Key, In): Source model.
dest (Key, In): Destination key.
names (string[], In): Coefficient names.
beta (double[], In): New GLM coefficients.
threshold (float, In): Decision threshold for label-generation.

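MakeGLMModel lets you clone an existing GLM with hand-edited coefficients. A hedged sketch of the request; the /3/MakeGLMModel route, the form encoding of the array fields, and the model keys here are all assumptions to verify against your build:

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# Field names map onto MakeGLMModelV3 above.
resp = requests.post(H2O + "/3/MakeGLMModel", data={
    "model": "glm_model",                   # hypothetical source model key
    "dest": "glm_model_edited",
    "names": '["Intercept","sepal_len"]',   # coefficient names
    "beta": "[0.5,1.25]",                   # matching replacement coefficients
})
print(resp.status_code)
```
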
MissingInserterV3

dataset (Key, In): Dataset.
fraction (double, In): Fraction of data to replace with a missing value.
seed (long, In): Seed.

Plus all of the JobV3 fields above.

ModelBuilderSchema

parameters (Parameters, In): Model builder parameters.
__http_status (int, In): HTTP status to return for this build.
algo (string, Out): The algo name for this ModelBuilder.
algo_full_name (string, Out): The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).
can_build (enum[], Out): Model categories this ModelBuilder can build.
job (Job, Out): Job Key.
validation_messages (ValidationMessage[], Out): Parameter validation messages.
validation_error_count (int, Out): Count of parameter validation errors.

ModelBuildersBase

algo (string, In): Algo of ModelBuilder of interest.
model_builders (Map, Out): ModelBuilders.

ModelBuildersV3

Fields are identical to ModelBuildersBase above.

ModelKeyV3

Fields are identical to FrameKeyV3 above.

ModelMetricsAutoEncoderV3

Fields are identical to ModelMetricsBase below.

ModelMetricsBase

model (Key, In/Out): The model used for this scoring run.
model_checksum (long, In/Out): The checksum for the model used for this scoring run.
frame (Key, In/Out): The frame used for this scoring run.
frame_checksum (long, In/Out): The checksum for the frame used for this scoring run.
description (string, Out): Optional description for this scoring run (to note out-of-bag, sampled data, etc.)
model_category (enum, Out): The category (e.g., Clustering) for the model used for this scoring run.
duration_in_ms (long, Out): The duration in ms for this scoring run.
scoring_time (long, Out): The time in ms since the epoch for the start of this scoring run.
predictions (Frame, Out): Predictions Frame.
MSE (double, Out): The Mean Squared Error of the prediction for this scoring run.

ModelMetricsBinomialGLMV3

residual_deviance (double, Out): Residual deviance.
null_deviance (double, Out): Null deviance.
AIC (double, Out): AIC.
null_degrees_of_freedom (long, Out): Null DOF.
residual_degrees_of_freedom (long, Out): Residual DOF.
r2 (double, Out): The R^2 for this scoring run.
logloss (double, Out): The logarithmic loss for this scoring run.
AUC (double, Out): The AUC for this scoring run.
Gini (double, Out): The Gini score for this scoring run.
thresholds_and_metric_scores (TwoDimTable, Out): The Metrics for various thresholds.
max_criteria_and_metric_scores (TwoDimTable, Out): The Metrics for various criteria.

Plus the common ModelMetricsBase fields above.

ModelMetricsBinomialV3

r2 (double, Out): The R^2 for this scoring run.
logloss (double, Out): The logarithmic loss for this scoring run.
AUC (double, Out): The AUC for this scoring run.
Gini (double, Out): The Gini score for this scoring run.
thresholds_and_metric_scores (TwoDimTable, Out): The Metrics for various thresholds.
max_criteria_and_metric_scores (TwoDimTable, Out): The Metrics for various criteria.

Plus the common ModelMetricsBase fields above.

ModelMetricsClusteringV3

avg_within_ss (double, In): Average within cluster Mean Square Error.
avg_ss (double, In): Average Mean Square Error to grand mean.
avg_between_ss (double, In): Average between cluster Mean Square Error.
centroid_stats (TwoDimTable, In): Centroid Statistics.

Plus the common ModelMetricsBase fields above.

ModelMetricsListSchemaV3

model (Key, In): Key of Model of interest (optional).
frame (Key, In): Key of Frame of interest (optional).
reconstruction_error (boolean, In): Compute reconstruction error (optional, only for Deep Learning AutoEncoder models).
deep_features_hidden_layer (int, In): Extract Deep Features for given hidden layer (optional, only for Deep Learning models).
predictions_frame (Key, In/Out): Key of predictions frame, if predictions are requested (optional).
model_metrics (ModelMetrics[], Out): ModelMetrics.

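The metrics list schema is what you get back when asking H2O to score a model against a frame. A hedged sketch; the /3/ModelMetrics/models/{model}/frames/{frame} route and the model/frame keys are assumptions to confirm against your build's endpoint listing:

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# Ask H2O to score a model on a frame and return a ModelMetrics list.
resp = requests.post(
    H2O + "/3/ModelMetrics/models/gbm_model/frames/iris.hex")  # hypothetical keys
body = resp.json()
for mm in body.get("model_metrics", []):
    print(mm.get("model_category"), mm.get("MSE"))
```
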
ModelMetricsMultinomialV3

r2 (double, Out): The R^2 for this scoring run.
hit_ratio_table (TwoDimTable, Out): The hit ratio table for this scoring run.
cm (ConfusionMatrix, Out): The ConfusionMatrix object for this scoring run.
logloss (double, Out): The logarithmic loss for this scoring run.

Plus the common ModelMetricsBase fields above.

ModelMetricsPCAV3

Fields are identical to ModelMetricsBase above.

ModelMetricsRegressionGLMV3

residual_deviance (double, Out): Residual deviance.
null_deviance (double, Out): Null deviance.
AIC (double, Out): AIC.
null_degrees_of_freedom (long, Out): Null DOF.
residual_degrees_of_freedom (long, Out): Residual DOF.
r2 (double, Out): The R^2 for this scoring run.

Plus the common ModelMetricsBase fields above.

ModelMetricsRegressionV3

r2 (double, Out): The R^2 for this scoring run.

Plus the common ModelMetricsBase fields above.

ModelOutputSchema

names (string[], Out): Column names.
domains (string[][], Out): Domains for categorical (enum) columns.
model_category (enum, Out): Category of the model (e.g., Binomial).
model_summary (TwoDimTable, Out): Model summary.
scoring_history (TwoDimTable, Out): Scoring history.
training_metrics (ModelMetrics, Out): Training data model metrics.
validation_metrics (ModelMetrics, Out): Validation data model metrics.
help (Map, Out): Help information for output fields.

ModelParameterSchemaV3

is_member_of_frames (string[], In): For Vec-type fields, the set of Frame-type fields which must contain the named column; for example, for a SupervisedModel the response_column must be in both the training_frame and (if it's set) the validation_frame.
is_mutually_exclusive_with (string[], In): For Vec-type fields, the set of other Vec-type fields which must contain mutually exclusive values; for example, for a SupervisedModel the response_column must be mutually exclusive with the weights_column.
name (string, Out): Name in the JSON, e.g. "lambda".
label (string, Out): Label in the UI, e.g. "lambda".
help (string, Out): Help for the UI, e.g. "regularization multiplier, typically used for foo bar baz etc."
required (boolean, Out): The field is required.
type (string, Out): Java type, e.g. "double".
default_value (Polymorphic, Out): Default value, e.g. 1.
actual_value (Polymorphic, Out): Actual value as set by the user and/or modified by the ModelBuilder, e.g., 10.
level (string, Out): The importance of the parameter, used by the UI, e.g. "critical", "extended" or "expert".
values (string[], Out): List of valid values for use by the front-end.

ModelParametersSchema

model_id (Key, In/Out): Destination id for this model; auto-generated if not specified.
training_frame (Key, In/Out): Training frame.
validation_frame (Key, In/Out): Validation frame.
ignored_columns (string[], In/Out): Ignored columns.
drop_na20_cols (boolean, In/Out): Drop columns with more than 20% missing values.
score_each_iteration (boolean, In/Out): Whether to score during each iteration of model training.

ModelSchema

model_id (Key, In/Out): Model key.
algo (string, Out): The algo name for this Model.
algo_full_name (string, Out): The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).
parameters (Parameters, Out): The build parameters for the model (e.g. K for KMeans).
output (Output, Out): The build output for the model (e.g. the cluster centers for KMeans).
compatible_frames (string[], Out): Compatible frames, if requested.
checksum (long, Out): Checksum for all the things that go into building the Model.

ModelsBase

model_id (Key, In): Name of Model of interest.
preview (boolean, In): Return potentially abridged model suitable for viewing in a browser.
find_compatible_frames (boolean, In): Find and return compatible frames?
models (Model[], Out): Models.
compatible_frames (Frame[], Out): Compatible frames.

ModelsV3

Fields are identical to ModelsBase above.

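A hedged sketch of listing models via this schema; it assumes a local H2O and that models are served at /3/Models (verify the route against your build):

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# List every model, asking H2O to also report which frames each can score on.
resp = requests.get(H2O + "/3/Models",
                    params={"find_compatible_frames": "true"})
body = resp.json()
for m in body["models"]:
    # model_id is a Key schema, so the printable name is nested under "name".
    print(m["model_id"]["name"], m["algo"])
```
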
NaiveBayesModelOutputV3

levels (string[], In): Categorical levels of the response.
apriori (TwoDimTable, In): A-priori probabilities of the response.
pcond (TwoDimTable[], In): Conditional probabilities of the predictors.

Plus the common output fields listed under ModelOutputSchema above.

NaiveBayesModelV3

Fields are identical to ModelSchema above, with parameters typed as NaiveBayesParameters and output typed as NaiveBayesOutput.

NaiveBayesParametersV3

laplace (double, In): Laplace smoothing parameter.
min_sdev (double, In): Min. standard deviation to use for observations without enough data.
eps_sdev (double, In): Cutoff below which standard deviation is replaced with min_sdev.
min_prob (double, In): Min. probability to use for observations without enough data.
eps_prob (double, In): Cutoff below which probability is replaced with min_prob.

Plus the supervised-learning fields listed under SupervisedModelParametersSchema below and the common fields listed under ModelParametersSchema above.

NaiveBayesV3

Fields are identical to ModelBuilderSchema above, with parameters typed as NaiveBayesParameters.

NetworkEvent

is_send (boolean, In): Boolean flag distinguishing between sends (true) and receives (false).
protocol (string, In): Network protocol (UDP/TCP).
msg_type (string, In): UDP type (exec, ack, ackack, ...).
from (string, In): Sending node.
to (string, In): Receiving node.
data (string, In): Pretty print of the first few bytes of the msg payload. Contains class name for tasks.
date (string, In): Time when the event was recorded. Format is hh:mm:ss:ms.
nanos (long, In): Time in nanos.
type (enum, In): Type of recorded event.

NetworkTestV3

microseconds_collective (double[], Out): Collective broadcast/reduce times in microseconds (for each message size).
bandwidths_collective (double[], Out): Collective bandwidths in Bytes/sec (for each message size, for each node).
microseconds (double[][], Out): Round-trip times in microseconds (for each message size, for each node).
bandwidths (double[][], Out): Bi-directional bandwidths in Bytes/sec (for each message size, for each node).
nodes (string[], Out): Nodes.
table (TwoDimTable, Out): NetworkTestResults.

NodePersistentStorageEntryV3

category (string, Out): Category name.
name (string, Out): Key name.
size (long, Out): Size in bytes of value.
timestamp_millis (long, Out): Epoch time in milliseconds of when the value was written.

NodePersistentStorageV3

category (string, In/Out): Category name.
name (string, In/Out): Key name.
value (string, In/Out): Value.
configured (boolean, Out): Configured.
exists (boolean, Out): Exists.
entries (Iced[], Out): List of entries.

NodeV1

h2o (string, Out): IP.
ip_port (string, Out): IP address and port in the form a.b.c.d:e.
healthy (boolean, Out): (now-last_ping)<HeartbeatThread.TIMEOUT.
last_ping (long, Out): Time (in msec) of last ping.
sys_load (float, Out): System load; average #runnables/#cores.
gflops (double, Out): Linpack GFlops.
mem_bw (double, Out): Memory bandwidth.
total_value_size (long, Out): Data on Node (memory or disk).
mem_value_size (long, Out): Data on Node (memory only).
num_keys (int, Out): Local keys.
free_mem (long, Out): Free heap.
tot_mem (long, Out): Total heap.
max_mem (long, Out): Max heap.
free_disk (long, Out): Free disk.
max_disk (long, Out): Max disk.
rpcs_active (int, Out): Active Remote Procedure Calls.
fjthrds (short[], Out): F/J Thread count, by priority.
fjqueue (short[], Out): F/J Task count, by priority.
tcps_active (int, Out): Open TCP connections.
open_fds (int, Out): Open file descriptors.
num_cpus (int, Out): num_cpus.
cpus_allowed (int, Out): cpus_allowed.
nthreads (int, Out): nthreads.
my_cpu_pct (int, Out): System CPU percentage used by this H2O process in last interval.
sys_cpu_pct (int, Out): System CPU percentage used by everything in last interval.
pid (string, Out): PID.

PCAModelOutputV3

iterations (int, In): Iterations executed.
archetypes (double[][], In): Mapping from training data to lower dimensional k-space.
std_deviation (double[], In): Standard deviation of each principal component.
eigenvectors (TwoDimTable, In): Principal components matrix.
pc_importance (TwoDimTable, In): Importance of each principal component.

Plus the common output fields listed under ModelOutputSchema above.

PCAModelV3

Fields are identical to ModelSchema above, with parameters typed as PCAParameters and output typed as PCAOutput.

PCAParametersV3

transform (enum, In): Transformation of training data.
k (int, In): Rank of matrix approximation.
gamma (double, In): Regularization weight.
max_iterations (int, In): Maximum training iterations.
seed (long, In): RNG seed for k-means++ initialization.
init (enum, In): Initialization mode.
user_points (Key, In): User-specified initial Y.
loading_key (Key, In): Frame key to save resulting X.

Plus the common fields listed under ModelParametersSchema above.

PCAV3

Fields are identical to ModelBuilderSchema above, with parameters typed as PCAParameters.

ParseSetupV3

source_frames (Key[], In/Out): Source frames.
parse_type (enum, In/Out): Parser type.
separator (byte, In/Out): Field separator.
single_quotes (boolean, In/Out): Single quotes.
check_header (int, In/Out): Check header: 0 means guess, +1 means the 1st line is the header (not data), -1 means the 1st line is data (not the header).
column_names (string[], In/Out): Column names.
column_types (string[], In/Out): Value types for columns.
na_strings (string[], In/Out): NA strings for columns.
destination_frame (string, Out): Suggested name.
is_valid (boolean, Out): The initial parse is sane.
invalid_lines (long, Out): Number of broken/invalid lines found.
header_lines (long, Out): Number of header lines found.
number_columns (int, Out): Number of columns.
domains (string[][], Out): Domains for categorical columns.
data (string[][], Out): Sample data.
chunk_size (int, Out): Size of individual parse tasks.

ParseV3

destination_frame (Key, In): Final frame name.
source_frames (Key[], In): Source frames.
parse_type (enum, In): Parser type.
separator (byte, In): Field separator.
single_quotes (boolean, In): Single quotes.
check_header (int, In): Check header: 0 means guess, +1 means the 1st line is the header (not data), -1 means the 1st line is data (not the header).
number_columns (int, In): Number of columns.
column_names (string[], In): Column names.
column_types (string[], In): Value types for columns.
domains (string[][], In): Domains for categorical columns.
na_strings (string[], In): NA strings for columns.
chunk_size (int, In): Size of individual parse tasks.
delete_on_done (boolean, In): Delete input key after parse.
blocking (boolean, In): Block until the parse completes (as opposed to returning early and requiring polling).
remove_frame (boolean, In): Remove frame after blocking parse, and return array of Vecs.
job (Job, Out): Parse job.
rows (long, Out): Rows.
vec_ids (Key[], Out): Vec IDs.

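ParseSetup and Parse are designed to be chained: you let H2O guess a configuration, optionally adjust it, then feed it back into the actual parse. A hedged sketch of the whole import-to-frame flow; the routes, the form encoding of array fields, and the file path are assumptions to verify against your build:

```python
import json
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# 1. Import a file; this produces raw source keys.
imported = requests.get(H2O + "/3/ImportFiles",
                        params={"path": "/tmp/iris.csv"}).json()  # hypothetical path
srcs = imported["destination_frames"]

# 2. Let H2O guess the parse configuration (ParseSetupV3).
setup = requests.post(H2O + "/3/ParseSetup",
                      data={"source_frames": json.dumps(srcs)}).json()

# 3. Feed the guessed setup back into the actual parse (ParseV3).
parse = requests.post(H2O + "/3/Parse", data={
    "destination_frame": setup["destination_frame"],
    "source_frames": json.dumps(srcs),
    "parse_type": setup["parse_type"],
    "separator": setup["separator"],
    "check_header": setup["check_header"],
    "number_columns": setup["number_columns"],
    "single_quotes": str(setup["single_quotes"]).lower(),
    "chunk_size": setup["chunk_size"],
    "delete_on_done": "true",
}).json()
print(parse["job"])   # a JobV3; poll it as shown under JobsV3
```
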
ProfilerNodeEntryV3

stacktrace (string, Out): Stack trace.
count (int, Out): Profile count.

ProfilerNodeV3

node_name (string, Out): Node names.
timestamp (long, Out): Timestamp (millis since epoch).
entries (Iced[], Out): Profile entry list.

ProfilerV3

depth (int, In): Stack trace depth.
nodes (Iced[], Out): (No description available)

QuantileParametersV2

probs (double[], In): Probabilities for quantiles.
combine_method (enum, In): How to combine quantiles for even sample sizes.

Plus the common fields listed under ModelParametersSchema above.

QuantileV3

Fields are identical to ModelBuilderSchema above, with parameters typed as QuantileParameters.

RapidsV3

ast (string, In): An Abstract Syntax Tree.
fun (string, In): An array of function definitions.
ast_key (Key, In): A pointer to a Frame.
error (string, Out): Parsing error, if any.
key (Key, Out): Result key.
num_rows (long, Out): Rows in Frame result.
num_cols (int, Out): Columns in Frame result.
scalar (double, Out): Scalar result.
funstr (string, Out): Function result.
col_names (string[], Out): Column names.
string (string, Out): String result.
result (string, Out): Result.
evaluated (boolean, Out): Was evaluated.
head (string[][], Out): Head of a Frame result.
result_type (int, Out): Result type.
vec_ids (Key[], Out): Vec keys for key result.

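Rapids is the expression language behind frame manipulation in Flow and the R/Python clients. A hedged sketch of evaluating one expression; both the /3/Rapids route and the Lisp-style AST shown are assumptions based on the 'ast' field above, so verify them against your build:

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

resp = requests.post(H2O + "/3/Rapids",
                     data={"ast": "(+ 1 2)"})   # hypothetical expression
body = resp.json()
# A scalar expression fills 'scalar'; a bad expression fills 'error'.
print(body.get("scalar"), body.get("error"))
```
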
RemoveAllV3

(No fields)

RemoveV3

key (Key, In): Object to be removed.

RouteBase

http_method (string, Out): (No description available)
url_pattern (string, Out): (No description available)
summary (string, Out): (No description available)
handler_class (string, Out): (No description available)
handler_method (string, Out): (No description available)
input_schema (string, Out): (No description available)
output_schema (string, Out): (No description available)
doc_method (string, Out): (No description available)
path_params (string[], Out): (No description available)
markdown (string, Out): (No description available)

RouteV3

Fields are identical to RouteBase above.

Schema

(No fields)

SchemaMetadataBase

version (int, In): Version number of the Schema.
name (string, In): Simple name of the Schema. NOTE: the schema_names form a single namespace.
superclass (string, In): Simple name of the superclass of the Schema. NOTE: the schema_names form a single namespace.
type (string, In): Simple name of H2O type that this Schema represents. Must not be changed after creation (treat as final).
fields (FieldMetadata[], Out): All the public fields of the schema.
markdown (string, Out): Documentation for the schema in Markdown format with GitHub extensions.

SchemaMetadataV3

Fields are identical to SchemaMetadataBase above.

SharedTreeModelOutputV3

variable_importances (TwoDimTable, Out): Variable Importances.
init_f (double, Out): The Intercept term, the initial model function value to which trees make adjustments.

Plus the common output fields listed under ModelOutputSchema above.

SharedTreeModelV3

Fields are identical to ModelSchema above.

SharedTreeParametersV3

ntrees (int, In): Number of trees.
max_depth (int, In): Maximum tree depth.
min_rows (int, In): Fewest allowed observations in a leaf (in R called 'nodesize').
nbins (int, In): Build a histogram of this many bins, then split at the best point.
seed (long, In): Seed for pseudo random number generator (if applicable).

Plus the supervised-learning fields listed under SupervisedModelParametersSchema below and the common fields listed under ModelParametersSchema above.

SharedTreeV3

Fields are identical to ModelBuilderSchema above.

ShutdownV3

(No fields)

SplitFrameV3

dataset (Key, In): Dataset.
ratios (double[], In): Split ratios; the resulting number of splits is ratios.length+1.
destination_frames (Key[], In/Out): Destination keys for each output frame split.

Plus all of the JobV3 fields above.

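A hedged sketch of splitting a frame into train and test pieces; the /3/SplitFrame route, the array-field encoding, and the frame keys are assumptions to verify against your build:

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# One ratio yields ratios.length+1 = 2 output frames (a 75/25 split).
resp = requests.post(H2O + "/3/SplitFrame", data={
    "dataset": "iris.hex",                          # hypothetical frame key
    "ratios": "[0.75]",
    "destination_frames": '["train.hex","test.hex"]',
})
print(resp.json().get("key"))   # the Job key for the split; poll it to completion
```
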
SupervisedModelBuilderSchema

Fields are identical to ModelBuilderSchema above.

SupervisedModelParametersSchema

response_column (VecSpecifier, In/Out): Response column.
balance_classes (boolean, In/Out): Balance training data class counts via over/under-sampling (for imbalanced data).
class_sampling_factors (float[], In/Out): Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.
max_after_balance_size (float, In/Out): Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.
max_confusion_matrix_size (int, In/Out): Maximum size (# classes) for confusion matrices to be printed in the Logs.
max_hit_ratio_k (int, In/Out): Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable).

Plus the common fields listed under ModelParametersSchema above.

SynonymV3

key (Key, In): A word2vec model key.
target (string, In): The target string for which to find synonyms.
cnt (int, In): Find the top cnt synonyms of the target word.
synonyms (string[], Out): The synonyms.
cos_sim (float[], Out): The cosine similarities.

TimelineV3

now (long, Out): Current time in millis.
self (string, Out): This node.
events (Iced[], Out): Recorded timeline events.

TreeStatsV3

min_depth (int, In): minDepth.
max_depth (int, In): maxDepth.
mean_depth (float, In): meanDepth.
min_leaves (int, In): minLeaves.
max_leaves (int, In): maxLeaves.
mean_leaves (float, In): meanLeaves.

TutorialsV3

(No fields)

TwoDimTableBase

name (string, Out): Table Name.
description (string, Out): Table Description.
columns (Iced[], Out): Column Specification.
rowcount (int, Out): Number of Rows.
data (Polymorphic[][], Out): Table Data (col-major).

TwoDimTableV3

Fields are identical to TwoDimTableBase above.

TypeaheadV3

src (string, In): training_frame.
limit (int, In): Limit.
matches (string[], Out): Matches.

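Typeahead backs path completion in Flow's file picker. A hedged sketch; the /3/Typeahead/files route is an assumption, so confirm it in your build's endpoint listing:

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# Complete a filesystem path prefix, capped at 10 suggestions.
resp = requests.get(H2O + "/3/Typeahead/files",
                    params={"src": "/tmp/", "limit": 10})
print(resp.json()["matches"])
```
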
UnlockKeysV3

(No fields)

ValidationMessageBase

message_type (string, Out): Type of validation message (ERROR, WARN, INFO, HIDE).
field_name (string, Out): Field to which the message applies.
message (string, Out): Message text.

ValidationMessageV2

Fields are identical to ValidationMessageBase above.

VarImpBase

varimp (float[], Out): Variable importance of individual variables.
names (string[], Out): Names of variables.

VarImpV3

Fields are identical to VarImpBase above.

VecKeyV3

Fields are identical to FrameKeyV3 above.

WaterMeterCpuTicksV3

nodeidx (int, In): Index of node to query ticks for (0-based).
cpu_ticks (long[][], Out): Array of tick counts per core.

WaterMeterIoV3

nodeidx (int, In): Index of node to query ticks for (0-based).
persist_stats (Iced[], Out): Array of IO info.

Word2VecModelOutputV3

Fields are identical to ModelOutputSchema above.

Word2VecModelV3

Fields are identical to ModelSchema above, with parameters typed as Word2VecParameters and output typed as Word2VecOutput.

Word2VecParametersV3

vecSize (int, In): Set size of word vectors.
windowSize (int, In): Set max skip length between words.
sentSampleRate (float, In): Set threshold for occurrence of words. Those that appear with higher frequency in the training data will be randomly down-sampled; useful range is (0, 1e-5).
normModel (enum, In): Use Hierarchical Softmax or Negative Sampling.
negSampleCnt (int, In): Number of negative examples; common values are 3 - 10 (0 = not used).
epochs (int, In): Number of training iterations to run.
minWordFreq (int, In): Discard words that appear fewer than this many times.
initLearningRate (float, In): Set the starting learning rate.
wordModel (enum, In): Use the continuous bag of words model or the Skip-Gram model.

Plus the common fields listed under ModelParametersSchema above.

Word2VecV3

Fields are identical to ModelBuilderSchema above, with parameters typed as Word2VecParameters.