Welcome to H2O 3.0

Welcome to the H2O documentation site! Depending on your area of interest, select a learning path from the links above.

We’re glad you’re interested in learning more about H2O. If you have any questions or need general support, please post them on our Google Groups forum, h2ostream, or email them to h2ostream@googlegroups.com. This is a public forum, so your question will be visible to other users.

Note: To join our Google group on h2ostream, you need a Google account (such as Gmail or Google+). On the h2ostream page, click the Join group button, then click the New Topic button to post a new message. You don’t need to request or leave a message to join - you should be added to the group automatically.

We welcome your feedback! Please let us know if you have any questions or comments about H2O by clicking the chat balloon button in the lower-right corner in Flow (H2O’s web UI).

Chat Button

Type your question in the entry field that appears at the bottom of the sidebar and you will be connected with an H2O expert who will respond to your query in real time.

Chat Sidebar


New Users

If you’re just getting started with H2O, here are some links to help you learn more:


Experienced Users

If you’ve used previous versions of H2O, the following links will help guide you through the process of upgrading to H2O 3.0.


Enterprise Users

If you’re considering using H2O in an enterprise environment, you’ll be happy to know that the H2O platform is supported on all major Hadoop distributions including Cloudera Enterprise, Hortonworks Data Platform and the MapR Apache Hadoop Distribution.

H2O can be deployed in-memory directly on top of existing Hadoop clusters without the need for data transfers, allowing for unmatched speed and ease of use. To ensure the integrity of data stored in Hadoop clusters, the H2O platform supports native integration of the Kerberos protocol.

For additional sales or marketing assistance, please email sales@h2o.ai.


Sparkling Water Users

Sparkling Water is a Gradle project with the following submodules:

The best way to get started is to modify the core module or to create a new module that extends it.

Users of our Spark-compatible solution, Sparkling Water, should be aware that Sparkling Water is only supported with the latest version of H2O. For more information about Sparkling Water, refer to the following links.

Sparkling Water versioning follows the Spark versioning scheme:

Getting Started with Sparkling Water

Sparkling Water Blog Posts

Sparkling Water Meetup Slide Decks


Python Users

Pythonistas will be glad to know that H2O now provides support for this popular programming language. Python users can also use H2O with IPython notebooks. For more information, refer to the following links.


R Users

Don’t worry, R users - we still provide R support in the latest version of H2O, just as before. The R components of H2O have been cleaned up, simplified, and standardized, so the command format is easier and more intuitive. Due to these improvements, be aware that any scripts created with previous versions of H2O will need some revision to be compatible with the latest version.

We have provided the following helpful resources to assist R users in upgrading to the latest version, including a document that outlines the differences between versions and a tool that reviews scripts for deprecated or renamed parameters.

Currently, the only version of R that is known to be incompatible with H2O is R version 3.1.0 (codename “Spring Dance”). If you are using that version, we recommend upgrading the R version before using H2O.


API Users

API users will be happy to know that the APIs have been more thoroughly documented in the latest release of H2O and additional capabilities (such as exporting weights and biases for Deep Learning models) have been added.

REST APIs are generated directly from the code, allowing users to implement machine learning in many ways. For example, the REST APIs could be used to call a model created from sensor data and to set up automatic alerts if the sensor data falls below a specified threshold.
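
For instance, a minimal sketch using curl (assuming an H2O instance listening on localhost:54321; the model and frame IDs are placeholders):

    # List the models registered with the running H2O instance
    curl http://localhost:54321/3/Models

    # Score an existing frame with an existing model via the REST API
    curl -X POST http://localhost:54321/3/Predictions/models/my_gbm_model/frames/sensor_data.hex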


Java Users

For Java developers, the following resources will help you create your own custom app that uses H2O.

SDK Information

The Java API is generated and accessible from the download page.


Developers

If you’re looking to use H2O to help you develop your own apps, the following links will provide helpful references.

For the latest version of IntelliJ IDEA, run ./gradlew idea, then click File > Open within IDEA. Select the .ipr file in the repository and click the Choose button.

For older versions of IntelliJ IDEA, run ./gradlew idea, then Import Project within IDEA and point it to the h2o-3 directory.

Note: The second (import) method takes longer, so we recommend using the first method if possible.

For JUnit tests to pass, you may need multiple H2O nodes. Create a “Run/Debug” configuration with the following parameters:

Type: Application
Main class: H2OApp
Use class path of module: h2o-app

After starting multiple “worker” node processes in addition to the JUnit test process, they will cloud up and run the multi-node JUnit tests.
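
One possible sketch (assuming you have already built the project with ./gradlew build, which produces build/h2o.jar, and that the worker nodes and the test process end up in the same cloud, which is the default when they run as the same user on the same machine):

    # Start two extra H2O "worker" nodes from the h2o-3 directory, each in its own terminal or backgrounded
    java -Xmx2g -jar build/h2o.jar &
    java -Xmx2g -jar build/h2o.jar &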


Downloading H2O

To download H2O, go to our downloads page. Select a build type (bleeding edge or latest alpha), then select an installation method (standalone, R, Python, Hadoop, or Maven) by clicking the tabs at the top of the page. Follow the instructions in the tab to install H2O.

Starting H2O …

There are a variety of ways to start H2O, depending on which client you would like to use.

… From R

To use H2O in R, follow the instructions on the download page.

… From Python

To use H2O in Python, follow the instructions on the download page.

… On Spark

To use H2O on Spark, follow the instructions on the Sparkling Water download page.

… From the Cmd Line

You can use Terminal (OS X) or the Command Prompt (Windows) to launch H2O 3.0. When you launch from the command line, you can include additional instructions to H2O 3.0, such as how many nodes to launch, how much memory to allocate to each node, and what names to assign to the nodes in the cloud.

Note: H2O requires some space in the /tmp directory to launch. If you cannot launch H2O, try freeing up some space in the /tmp directory, then try launching H2O again.

For more detailed instructions on how to build and launch H2O, including how to clone the repository, how to pull from the repository, and how to install required dependencies, refer to the developer documentation.

There are two different argument types:

The arguments use the following format: java <JVM Options> -jar h2o.jar <H2O Options>.
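
For example, the following command allocates 4 GB to the JVM and passes H2O options to name the cloud and set the web UI port (the values shown are illustrative):

    java -Xmx4g -jar h2o.jar -name mycloud -port 54321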

JVM Options

Note: Do not try to launch H2O with more memory than you have available.

H2O Options

Cloud Formation Behavior

New H2O nodes join together to form a cloud during launch. After a job has started on the cloud, no new members are allowed to join.

Wait for the INFO: Registered: # schemas in: #mS output before entering the above command again to add another node (the number for # will vary).
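
As a minimal sketch (assuming the nodes can discover each other, for example when launched on the same host or subnet), the same command can be entered again to add a second node to the cloud:

    # First node
    java -Xmx2g -jar h2o.jar -name mycloud -port 54321
    # After the "Registered: # schemas" line appears, start a second node in another terminal
    java -Xmx2g -jar h2o.jar -name mycloud -port 54323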

Flatfile Configuration for Multi-Node Clusters

Running H2O on a multi-node cluster allows you to use more memory for large-scale tasks (for example, creating models from huge datasets) than would be possible on a single node.

If you are configuring many nodes, using the -flatfile option is fast and easy. The -flatfile option defines a list of potential cloud peers. However, it is not an alternative to -ip and -port, which bind the IP address and port of the node you are using to launch H2O.

To configure H2O on a multi-node cluster:

  1. Locate a set of hosts that will be used to create your cluster. A host can be a server, an EC2 instance, or your laptop.
  2. Download the appropriate version of H2O for your environment.
  3. Verify the same h2o.jar file is available on each host in the multi-node cluster.
  4. Create a flatfile.txt that contains an IP address and port number for each H2O instance. Use one entry per line. For example:

    192.168.1.163:54321
    192.168.1.164:54321
    
  5. Copy the flatfile.txt to each node in the cluster.
  6. Use the -Xmx option to specify the amount of memory for each node. The cluster’s memory capacity is the sum of the memory allocated to all H2O nodes in the cluster.

    For example, if you create a cluster with four 20g nodes (by specifying -Xmx20g four times), H2O will have a total of 80 GB of memory available.

    For best performance, we recommend sizing your cluster to be about four times the size of your data. To avoid swapping, the -Xmx allocation must not exceed the physical memory on any node. Allocating the same amount of memory to all nodes is strongly recommended, as H2O works best with symmetric nodes.

    Note: The optional -ip and -port options specify the IP address and port to use. The -ip option is especially helpful for hosts with multiple network interfaces.

    java -Xmx20g -jar h2o.jar -flatfile flatfile.txt -port 54321

    The output will resemble the following:

     04-20 16:14:00.253 192.168.1.70:54321    2754   main      INFO:   1. Open a terminal and run 'ssh -L 55555:localhost:54321 H2O-3User@###.###.#.##'
     04-20 16:14:00.253 192.168.1.70:54321    2754   main      INFO:   2. Point your browser to http://localhost:55555
     04-20 16:14:00.437 192.168.1.70:54321    2754   main      INFO: Log dir: '/tmp/h2o-H2O-3User/h2ologs'
     04-20 16:14:00.437 192.168.1.70:54321    2754   main      INFO: Cur dir: '/Users/H2O-3User/h2o-3'
     04-20 16:14:00.459 192.168.1.70:54321    2754   main      INFO: HDFS subsystem successfully initialized
     04-20 16:14:00.460 192.168.1.70:54321    2754   main      INFO: S3 subsystem successfully initialized
     04-20 16:14:00.460 192.168.1.70:54321    2754   main      INFO: Flow dir: '/Users/H2O-3User/h2oflows'
     04-20 16:14:00.475 192.168.1.70:54321    2754   main      INFO: Cloud of size 1 formed [/192.168.1.70:54321]
    

    As you add more nodes to your cluster, the output is updated: INFO WATER: Cloud of size 2 formed [/...]...

  7. Access the H2O 3.0 web UI (Flow) with your browser. Point your browser to the HTTP address specified in the output Listening for HTTP and REST traffic on ....

… On EC2 and S3

Note: If you would like to try out H2O on an EC2 cluster, play.h2o.ai is the easiest way to get started. H2O Play provides access to a temporary cluster managed by H2O.

If you would still like to set up your own EC2 cluster, follow the instructions below.

On EC2

Tested on Red Hat AMI, Amazon Linux AMI, and Ubuntu AMI

To use the Amazon Web Services (AWS) S3 storage solution, you will need to pass your S3 access credentials to H2O. This will allow you to access your data on S3 when importing data frames with path prefixes s3n://....

For security reasons, we recommend writing a script to read the access credentials that are stored in a separate file. This will not only keep your credentials from propagating to other locations, but it will also make it easier to change the credential information later.
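
For example, a minimal sketch of such a script (the credentials file name, location, and KEY=VALUE format are assumptions for illustration):

    #!/bin/bash
    # Load AWS credentials from a file kept outside your project (one KEY=VALUE pair per line)
    source /path/to/aws_credentials.properties
    # Export them so that tools launched from this shell can read them
    export AWS_ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY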

Standalone Instance

When running H2O in standalone mode using the simple Java launch command, we can pass in the S3 credentials in two ways.


Multi-Node Instance

Python and the boto Python library are required to launch a multi-node instance of H2O on EC2. Confirm these dependencies are installed before proceeding.

For more information, refer to the H2O EC2 repo.

Build a cluster of EC2 instances by running the following commands on the host that can access the nodes using a public DNS name.

  1. Edit h2o-cluster-launch-instances.py to include your SSH key name and security group name, as well as any other environment-specific variables.

      ./h2o-cluster-launch-instances.py
      ./h2o-cluster-distribute-h2o.sh
    

    —OR—

      ./h2o-cluster-launch-instances.py
      ./h2o-cluster-download-h2o.sh
    

    Note: The second method may be faster than the first, since download pulls from S3.

  2. Distribute the credentials using ./h2o-cluster-distribute-aws-credentials.sh.

    Note: If you are running H2O using an IAM role, it is not necessary to distribute the AWS credentials to all the nodes in the cluster. The latest version of H2O can access the temporary access key.

    Caution: Distributing the AWS credentials copies the Amazon AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY to the instances to enable S3 and S3N access. Use caution when adding your security keys to the cloud.

  3. Start H2O by launching one H2O node per EC2 instance: ./h2o-cluster-start-h2o.sh

    Wait 60 seconds after entering the command before entering it on the next node.

  4. In your internet browser, substitute any of the public DNS node addresses for IP_ADDRESS in the following example: http://IP_ADDRESS:54321

    • To start H2O: ./h2o-cluster-start-h2o.sh
    • To stop H2O: ./h2o-cluster-stop-h2o.sh
    • To shut down the cluster, use your Amazon AWS console to shut down the cluster manually.

Core-site.xml Example

The following is an example core-site.xml file:

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>

    <!--
    <property>
    <name>fs.default.name</name>
    <value>s3n://<your s3 bucket></value>
    </property>
    -->

    <property>
        <name>fs.s3n.awsAccessKeyId</name>
        <value>insert access key here</value>
    </property>

    <property>
        <name>fs.s3n.awsSecretAccessKey</name>
        <value>insert secret key here</value>
    </property>
</configuration>

Launching H2O

Note: Before launching H2O on an EC2 cluster, verify that ports 54321 and 54322 are both accessible by TCP and UDP.

Selecting the Operating System and Virtualization Type

Select your operating system and the virtualization type of the prebuilt AMI on Amazon. If you are using Windows, you will need to use a hardware-assisted virtual machine (HVM). If you are using Linux, you can choose between para-virtualization (PV) and HVM. These selections determine the type of instances you can launch.

EC2 Systems

For more information about virtualization types, refer to Amazon.


Configuring the Instance

  1. Select the IAM role and policy to use to launch the instance. H2O detects the temporary access keys associated with the instance, so you don’t need to copy your AWS credentials to the instances.

    EC2 Configuration

  2. When launching the instance, select an accessible key pair.

    EC2 Key Pair


(Windows Users) Tunneling into the Instance

For Windows users who cannot run ssh from the terminal, either download Cygwin or Git Bash, both of which can run ssh:

ssh -i amy_account.pem ec2-user@54.165.25.98

Otherwise, download PuTTY and follow these instructions:

  1. Launch the PuTTY Key Generator.
  2. Load your downloaded AWS pem key file. Note: To see the file, change the browser file type to “All”.
  3. Save the private key as a .ppk file.

    Private Key

  4. Launch the PuTTY client.

  5. In the Session section, enter the host name or IP address. For Ubuntu users, the default host name is ubuntu@<ip-address>. For Linux users, the default host name is ec2-user@<ip-address>.

    Configuring Session

  6. Select SSH, then Auth in the sidebar, and click the Browse button to select the private key file for authentication.

    Configuring SSH

  7. Start a new session and click the Yes button to confirm caching of the server’s rsa2 key fingerprint and continue connecting.

    PuTTY Alert


Downloading Java and H2O

  1. Download Java (JDK 1.7 or later) if it is not already available on the instance.
  2. To download H2O, run the wget command with the link to the zip file available on our website (copy the link associated with the Download button for the selected H2O build).

     wget http://h2o-release.s3.amazonaws.com/h2o/rel-tibshirani/3/h2o-3.6.0.3.zip
     unzip h2o-3.6.0.3.zip
     cd h2o-3.6.0.3
     java -Xmx4g -jar h2o.jar
    
  3. From your browser, navigate to <Private_IP_Address>:54321 or <Public_DNS>:54321 to use H2O’s web interface.

… On Hadoop

Currently supported versions:

Important Points to Remember:

Prerequisite: Open Communication Paths

H2O communicates using two communication paths. Verify these are open and available for use by H2O.

Path 1: mapper to driver

Optionally specify this port using the -driverport option in the hadoop jar command (see “Hadoop Launch Parameters” below). This port is opened on the driver host (the host where you entered the hadoop jar command). By default, this port is chosen randomly by the operating system.

Path 2: mapper to mapper

Optionally specify this port using the -baseport option in the hadoop jar command (refer to Hadoop Launch Parameters below). This port and the next subsequent port are opened on the mapper hosts (the Hadoop worker nodes) where the H2O mapper nodes are placed by the Resource Manager. By default, ports 54321 (TCP) and 54322 (TCP & UDP) are used.

The mapper port is adaptive: if 54321 and 54322 are not available, H2O will try 54323 and 54324 and so on. The mapper port is designed to be adaptive because sometimes if the YARN cluster is low on resources, YARN will place two H2O mappers for the same H2O cluster request on the same physical host. For this reason, we recommend opening a range of more than two ports (20 ports should be sufficient).
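
For example, both ports can be specified explicitly when launching on Hadoop (the values shown are illustrative):

    hadoop jar h2odriver.jar -nodes 3 -mapperXmx 6g -baseport 54321 -driverport 55001 -output hdfsOutputDirName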


Tutorial

The following tutorial will walk the user through the download or build of H2O and the parameters involved in launching H2O from the command line.

  1. Download the latest H2O release for your version of Hadoop:

     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-cdh5.2.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-cdh5.3.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-hdp2.1.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-hdp2.2.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-hdp2.3.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-mapr3.1.1.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-mapr4.0.1.zip
     wget http://h2o-release.s3.amazonaws.com/h2o/master/3/h2o-3.6.0.3-mapr5.0.zip
    

    Note: Enter only one of the above commands.

  2. Prepare the job input on the Hadoop node by unzipping the build file and changing to the directory that contains H2O’s Hadoop driver jar file (h2odriver.jar).

     unzip h2o-3.6.0.3-*.zip
     cd h2o-3.6.0.3-*
    
  3. To launch H2O nodes and form a cluster on the Hadoop cluster, run:

     hadoop jar h2odriver.jar -nodes 1 -mapperXmx 6g -output hdfsOutputDirName
    

    The above command launches one H2O node with 6 GB of memory. We recommend launching the cluster with at least four times the memory of your data file size.

    • mapperXmx is the mapper size or the amount of memory allocated to each node. Specify at least 6 GB.

    • nodes is the number of nodes requested to form the cluster.

    • output is the name of the directory created each time an H2O cloud is launched, so the name must be unique for each launch.

  4. To monitor your job, direct your web browser to your standard job tracker Web UI. To access H2O’s Web UI, direct your web browser to one of the launched instances. If you are unsure where your JVM is launched, review the output from your command after the nodes have clouded up and formed a cluster. Any of the nodes’ IP addresses will work, as there is no master node.

     Determining driver host interface for mapper->driver callback...
     [Possible callback IP address: 172.16.2.181]
     [Possible callback IP address: 127.0.0.1]
     ...
     Waiting for H2O cluster to come up...
     H2O node 172.16.2.184:54321 requested flatfile
     Sending flatfiles to nodes...
      [Sending flatfile to node 172.16.2.184:54321]
     H2O node 172.16.2.184:54321 reports H2O cluster size 1 
     H2O cluster (1 nodes) is up
     Blocking until the H2O cluster shuts down...
    

Hadoop Launch Parameters

Accessing S3 Data from Hadoop

H2O launched on Hadoop can access S3 data in addition to HDFS. To enable access, follow the instructions below.

Edit Hadoop’s core-site.xml, then set the HADOOP_CONF_DIR environment property to the directory containing the core-site.xml file. For an example core-site.xml file, refer to Core-site.xml. Typically, the configuration directory for most Hadoop distributions is /etc/hadoop/conf.
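
For example, on a typical installation:

    export HADOOP_CONF_DIR=/etc/hadoop/conf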

You can also pass the S3 credentials when launching H2O with the Hadoop jar command. Use the -D flag to pass the credentials:

    hadoop jar h2odriver.jar -Dfs.s3n.awsAccessKeyId="${AWS_ACCESS_KEY}" -Dfs.s3n.awsSecretAccessKey="${AWS_SECRET_KEY}" -n 3 -mapperXmx 10g -output outputDirectory

where AWS_ACCESS_KEY and AWS_SECRET_KEY represent your AWS access key ID and secret access key.

Then import the data using the S3 URL path (for example, s3n://mybucket/path/to/data.csv).

… Using Docker

This walkthrough describes:

Walkthrough

Prerequisites

Notes

Step 1 - Install and Launch Docker

Depending on your OS, select the appropriate installation method:

Step 2 - Create or Download Dockerfile

Note: If the following commands do not work, prepend with sudo.

Create a folder on the Host OS to host your Dockerfile by running:

mkdir -p /data/h2o-rel-tibshirani

Next, either download or create a Dockerfile, which is a build recipe that builds the container.

Download and use our Dockerfile template by running:

cd /data/h2o-rel-tibshirani
wget https://raw.githubusercontent.com/h2oai/h2o-3/master/Dockerfile

The Dockerfile:

Step 3 - Build Docker image from Dockerfile

From the /data/h2o-rel-tibshirani directory, run:

docker build -t "h2oai/rel-tibshirani:v5" .

Note: v5 represents the current version number.

Because it assembles all the necessary parts for the image, this process can take a few minutes.

Step 4 - Run Docker Build

On a Mac, use the argument -p 54321:54321 to expressly map the port 54321. This is not necessary on Linux.

docker run -ti -p 54321:54321 h2oai/rel-tibshirani:v5 /bin/bash

Note: v5 represents the version number.

Step 5 - Launch H2O

Navigate to the /opt directory and launch H2O. Change the value of -Xmx to the amount of memory you want to allocate to the H2O instance. By default, H2O launches on port 54321.

cd /opt
java -Xmx1g -jar h2o.jar

Step 6 - Access H2O from the web browser or R

After H2O launches, the log reports that the cloud has formed:

03:58:25.963 main      INFO WATER: Cloud of size 1 formed [/172.17.0.5:54321 (00:00:00.000)]

If you are running Docker through boot2docker (OS X), determine the IP address to use from your host by running:

$ boot2docker ip
192.168.59.103

You can also view the IP address (192.168.99.100 in the example below) by scrolling to the top of the Docker daemon window:


                        ##         .
                  ## ## ##        ==
               ## ## ## ## ##    ===
           /"""""""""""""""""\___/ ===
      ~~~ {~~ ~~~~ ~~~ ~~~~ ~~~ ~ /  ===- ~~~
           \______ o           __/
             \    \         __/
              \____\_______/


docker is configured to use the default machine with IP 192.168.99.100
For help getting started, check out the docs at https://docs.docker.com

After obtaining the IP address, point your browser to the specified IP address and port. In R, you can access the instance by installing the latest version of the H2O R package and running:

library(h2o)
dockerH2O <- h2o.init(ip = "192.168.59.103", port = 54321)

Flow Web UI …

H2O Flow is an open-source user interface for H2O. It is a web-based interactive environment that allows you to combine code execution, text, mathematics, plots, and rich media in a single document.

With H2O Flow, you can capture, rerun, annotate, present, and share your workflow. H2O Flow allows you to use H2O interactively to import files, build models, and iteratively improve them. Based on your models, you can make predictions and add rich text to create vignettes of your work - all within Flow’s browser-based environment.

Flow’s hybrid user interface seamlessly blends command-line computing with a modern graphical user interface. However, rather than displaying output as plain text, Flow provides a point-and-click user interface for every H2O operation. It allows you to access any H2O object in the form of well-organized tabular data.

H2O Flow sends commands to H2O as a sequence of executable cells. The cells can be modified, rearranged, or saved to a library. Each cell contains an input field that allows you to enter commands, define functions, call other functions, and access other cells or objects on the page. When you execute the cell, the output is a graphical object, which can be inspected to view additional details.

While H2O Flow supports REST API, R scripts, and CoffeeScript, no programming experience is required to run H2O Flow. You can click your way through any H2O operation without ever writing a single line of code. You can even disable the input cells to run H2O Flow using only the GUI. H2O Flow is designed to guide you every step of the way, by providing input prompts, interactive help, and example flows.

Introduction

This guide will walk you through how to use H2O’s web UI, H2O Flow. To view a demo video of H2O Flow, click here.


Getting Help


First, let’s go over the basics. Type h to view a list of helpful shortcuts.

The following help window displays:

help menu

To close this window, click the X in the upper-right corner, or click the Close button in the lower-right corner. You can also click behind the window to close it. You can also access this list of shortcuts by clicking the Help menu and selecting Keyboard Shortcuts.

For additional help, click Help > Assist Me or click the Assist Me! button in the row of buttons below the menus.

Assist Me

You can also type assist in a blank cell and press Ctrl+Enter. A list of common tasks displays to help you find the correct command.

Assist Me links

There are multiple resources to help you get started with Flow in the Help sidebar.

Note: To hide the sidebar, click the >> button above it. Flow - Hide Sidebar

To display the sidebar if it is hidden, click the << button. Flow - Hide Sidebar

To access this documentation, select the Flow Web UI… link below the General heading in the Help sidebar.

You can also explore the pre-configured flows available in H2O Flow for a demonstration of how to create a flow. To view the example flows:

If you have a flow currently open, a confirmation window appears asking if the current notebook should be replaced. To load the example flow, click the Load Notebook button.

To view the REST API documentation, click the Help tab in the sidebar and then select the type of REST API documentation (Routes or Schemas).

REST API documentation

Before getting started with H2O Flow, make sure you understand the different cell modes. Certain actions can only be performed when the cell is in a specific mode.


Understanding Cell Modes

There are two modes for cells: edit and command.

Using Edit Mode

In edit mode, the cell is yellow with a blinking bar indicating where text can be entered, and there is an orange flag to the left of the cell.

Edit Mode

Using Command Mode

In command mode, the flag is yellow. The flag also indicates the cell’s format:

NOTE: If there is an error in the cell, the flag is red.

Cell error

If the cell is executing commands, the flag is teal. The flag returns to yellow when the task is complete.

Cell executing

Changing Cell Formats

To change the cell’s format (for example, from code to Markdown), make sure you are in command (not edit) mode and that the cell you want to change is selected. The easiest way to do this is to click on the flag to the left of the cell. Enter the keyboard shortcut for the format you want to use. The flag’s text changes to display the current format.

Cell Mode    Keyboard Shortcut
Code         y
Markdown     m
Raw text     r
Heading 1    1
Heading 2    2
Heading 3    3
Heading 4    4
Heading 5    5
Heading 6    6

Running Cells

The series of buttons at the top of the page below the menus run cells in a flow.

Flow - Run Buttons

Running Flows

When you run the flow, a progress bar indicates the current status of the flow. You can cancel the currently running flow by clicking the Stop button in the progress bar.

Flow Progress Bar

When the flow is complete, a message displays in the upper right.

Flow - Completed Successfully Flow - Did Not Complete

Note: If there is an error in the flow, H2O Flow stops at the cell that contains the error.

Using Keyboard Shortcuts

Here are some important keyboard shortcuts to remember:

The following commands must be entered in command mode.

You can view these shortcuts by clicking Help > Keyboard Shortcuts or by clicking the Help tab in the sidebar.

Using Variables in Cells

Variables can be used to store information such as download locations. To use a variable in Flow:

  1. Define the variable in a code cell (for example, locA = "https://h2o-public-test-data.s3.amazonaws.com/bigdata/laptop/kdd2009/small-churn/kdd_train.csv"). Flow variable definition
  2. Run the cell. H2O validates the variable. Flow variable validation
  3. Use the variable in another code cell (for example, importFiles [locA]). Flow variable example

To further simplify your workflow, you can save the cells containing the variables and definitions as clips.

Using Flow Buttons

There are also a series of buttons at the top of the page below the flow name that allow you to save the current flow, add a new cell, move cells up or down, run the current cell, and cut, copy, or paste the current cell. If you hover over the button, a description of the button’s function displays.

Flow buttons

You can also use the menus at the top of the screen to edit the order of the cells, toggle specific format types (such as input or output), create models, or score models. You can also access troubleshooting information or obtain help with Flow.

Flow menus

Note: To disable the code input and use H2O Flow strictly as a GUI, click the Cell menu, then Toggle Cell Input.

Now that you are familiar with the cell modes, let’s import some data.


… Importing Data

If you don’t have any data of your own to work with, you can find some example datasets here:

There are multiple ways to import data in H2O flow:

After selecting the file to import, the file path displays in the “Search Results” section. To import a single file, click the plus sign next to the file. To import all files in the search results, click the Add all link. The files selected for import display in the “Selected Files” section. Import Files

Note: If the file is compressed, it will only be read using a single thread. For best performance, we recommend uncompressing the file before importing, as this will allow use of the faster multithreaded distributed parallel reader during import. Please note that .zip files containing multiple files are not currently supported.

After you click the Import button, the raw code for the current job displays. A summary displays the results of the file import, including the number of imported files and their Network File System (nfs) locations.

Import Files - Results

Uploading Data

To upload a local file, click the Data menu and select Upload File…. Click the Choose File button, select the file, click the Choose button, then click the Upload button.

File Upload Pop-Up

When the file has uploaded successfully, a message displays in the upper right and the Setup Parse cell displays.

File Upload Successful

Ok, now that your data is available in H2O Flow, let’s move on to the next step: parsing. Click the Parse these files button to continue.


Parsing Data

After you have imported your data, parse the data.

Flow - Parse options

The read-only Sources field displays the file path for the imported data selected for parsing.

The ID contains the auto-generated name for the parsed data (by default, the file name of the imported file with .hex as the file extension). Use the default name or enter a custom name in this field.

Select the parser type (if necessary) from the drop-down Parser list. For most data parsing, H2O automatically recognizes the data type, so the default settings typically do not need to be changed. The following options are available:

If a separator or delimiter is used, select it from the Separator list.

Select a column header option, if applicable:

Select any necessary additional options:

A preview of the data displays in the “Edit Column Names and Types” section.

To change or add a column name, edit or enter the text in the column’s entry field. In the screenshot below, the entry field for column 16 is highlighted in red.

Flow - Column Name Entry Field

To change the column type, select the drop-down list to the right of the column name entry field and select the data type. The options are:

You can search for a column by entering it in the Search by column name… entry field above the first column name entry field. As you type, H2O displays the columns that match the specified search terms.

Note: Only custom column names are searchable. Default column names cannot be searched.

To navigate the data preview, click the <- Previous page or -> Next page buttons.

Flow - Pagination buttons

After making your selections, click the Parse button.

After you click the Parse button, the code for the current job displays.

Flow - Parse code

Since we’ve submitted a couple of jobs (data import & parse) to H2O now, let’s take a moment to learn more about jobs in H2O.


Viewing Jobs

Any command (such as importFiles) you enter in H2O is submitted as a job, which is associated with a key. The key identifies the job within H2O and is used as a reference.

Viewing All Jobs

To view all jobs, click the Admin menu, then click Jobs, or enter getJobs in a cell in CS mode.

View Jobs

The following information displays:

To refresh this information, click the Refresh button. To view the details of the job, click the View button.

Viewing Specific Jobs

To view a specific job, click the link in the “Destination” column.

View Job - Model

The following information displays:

NOTE: For a better understanding of how jobs work, make sure to review the Viewing Frames section as well.

Ok, now that you understand how to find jobs in H2O, let’s submit a new one by building a model.


… Building Models

To build a model:

The Build Model… button can be accessed from any page containing the .hex key for the parsed data (for example, getJobs > getFrame). The following image depicts the K-Means model type. Available options vary depending on model type.

Model Builder

In the Build a Model cell, select an algorithm from the drop-down menu:

The available options vary depending on the selected model. If an option is only available for a specific model type, the model type is listed. If no model type is specified, the option is applicable to all model types.

Advanced Options

Expert Options


Viewing Models

Click the Assist Me! button, then click the getModels link, or enter getModels in the cell in CS mode and press Ctrl+Enter. A list of available models displays.

Flow Models

To view all current models, you can also click the Model menu and click List All Models.

To inspect a model, check its checkbox then click the Inspect button, or click the Inspect button to the right of the model name.

Flow Model

A summary of the model’s parameters displays. To display more details, click the Show All Parameters button.

To delete a model, click the Delete button.

To generate a Plain Old Java Object (POJO) that can use the model outside of H2O, click the Download POJO button.

Note: A POJO can be run in standalone mode or it can be integrated into a platform, such as Hadoop or Storm. To make the POJO work in your Java application, you will also need the h2o-genmodel.jar file (available in h2o-3/h2o-genmodel/build/libs/h2o-genmodel.jar).
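
For example, a minimal sketch of compiling and running a downloaded POJO from the command line (the file names gbm_model.java and Main.java are hypothetical; Main.java would be a small driver that wraps the POJO using the classes in h2o-genmodel.jar):

    # Compile the downloaded POJO together with a small driver class
    javac -cp h2o-genmodel.jar gbm_model.java Main.java
    # Run the driver with the compiled classes and h2o-genmodel.jar on the classpath
    java -cp .:h2o-genmodel.jar Main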


Exporting and Importing Models

To export a built model:

  1. Click the Model menu at the top of the screen.
  2. Select Export Model…
  3. In the exportModel cell that appears, select the model from the drop-down Model: list.
  4. Enter a location for the exported model in the Path: entry field.

    Note: If you specify a location that doesn’t exist, it will be created. For example, if you only enter test in the Path: entry field, the model will be exported to h2o-3/test.

  5. To overwrite any files with the same name, check the Overwrite: checkbox.
  6. Click the Export button. A confirmation message displays when the model has been successfully exported.

    Export Model

To import a built model:

  1. Click the Model menu at the top of the screen.
  2. Select Import Model…
  3. Enter the location of the model in the Path: entry field.

    Note: The file path must be complete (e.g., Users/h2o-user/h2o-3/exported_models). Do not rename models while importing.

  4. To overwrite any files with the same name, check the Overwrite: checkbox.
  5. Click the Import button. A confirmation message displays when the model has been successfully imported. To view the imported model, click the View Model button.

    Import Model


Using Grid Search

To include a parameter in a grid search in Flow, check the checkbox in the Grid? column to the right of the parameter name (highlighted in red in the image below).

Grid Search Column


Checkpointing Models

Some model types, such as DRF, GBM, and Deep Learning, support checkpointing. A checkpoint resumes model training so that you can iterate your model. The dataset must be the same. The following model parameters must be the same when restarting a model from a checkpoint:

drop_na20_cols, response_column, activation, use_all_factor_levels, adaptive_rate, autoencoder, rho, epsilon, sparse, sparsity_beta, col_major, rate, rate_annealing, rate_decay, momentum_start, momentum_ramp, momentum_stable, nesterov_accelerated_gradient, ignore_const_cols, max_categorical_features, nfolds, distribution, tweedie_power

The following parameters can be modified when restarting a model from a checkpoint:

seed, checkpoint, epochs, score_interval, train_samples_per_iteration, target_ratio_comm_to_comp, score_duty_cycle, score_training_samples, score_validation_samples, score_validation_sampling, classification_stop, regression_stop, quiet_mode, max_confusion_matrix_size, max_hit_ratio_k, diagnostics, variable_importances, initial_weight_distribution, initial_weight_scale, force_load_balance, replicate_training_data, shuffle_training_data, single_node_mode, fast_mode, l1, l2, max_w2, input_dropout_ratio, hidden_dropout_ratios, loss, overwrite_with_best_model, missing_values_handling, average_activation, reproducible, export_weights_and_biases, elastic_averaging, elastic_averaging_moving_rate, elastic_averaging_regularization, mini_batch_size

To restart a model from a checkpoint:

  1. After building your model, copy the model_id. To view the model_id, click the Model menu then click List All Models.
  2. Select the model type from the drop-down Model menu.

    Note: The model type must be the same as the checkpointed model.

  3. Paste the copied model_id in the checkpoint entry field.
  4. Click the Build Model button. The model will resume training.

Interpreting Model Results

Scoring history (GBM, DL): Represents the error rate of the model as it is built. Typically, the error rate will be higher at the beginning (the left side of the graph), then decrease as model building completes and accuracy improves.

Scoring History example

Variable importances (GBM, DL): Represents the statistical significance of each variable in the data in terms of its effect on the model. Variables are listed in order from most to least important. The percentage values represent the percentage of importance across all variables, scaled to 100%. The method of computing each variable’s importance depends on the algorithm. To view the scaled importance value of a variable, use your mouse to hover over the bar representing the variable.

Variable Importances example

Confusion Matrix (DL): A table depicting the performance of the algorithm in terms of false positives, false negatives, true positives, and true negatives. The actual results display in the columns and the predictions display in the rows; correct predictions are highlighted in yellow. In the example below, 0 was predicted correctly 902 times, while 8 was predicted correctly 822 times and 0 was predicted as 4 once.

Confusion Matrix example

ROC Curve (DL, GLM): A graph plotting the true positive rate against the false positive rate. To view a specific threshold, select a value from the drop-down Threshold list. To view any of the following details, select it from the drop-down Criterion list:

The lower-left side of the graph represents less tolerance for false positives while the upper-right represents more tolerance for false positives. Ideally, a highly accurate ROC resembles the following example.

ROC Curve example

To learn how to make predictions, continue to the next section.


… Making Predictions

After creating your model, click the key link for the model, then click the Predict button. Select the model to use in the prediction from the drop-down Model: menu and the data frame to use in the prediction from the drop-down Frame: menu, then click the Predict button.

Making Predictions


Viewing Predictions

Click the Assist Me! button, then click the getPredictions link, or enter getPredictions in the cell in CS mode and press Ctrl+Enter. A list of the stored predictions displays. To view a prediction, click the View button to the right of the model name.

Viewing Predictions

You can also view predictions by clicking the drop-down Score menu and selecting List All Predictions.


Viewing Frames

To view a specific frame, click the “Key” link for the specified frame, or enter getFrameSummary "FrameName" in a cell in CS mode (where FrameName is the name of a frame, such as allyears2k.hex).

Viewing specified frame

From the getFrameSummary cell, you can:

When you view a frame, you can “drill-down” to the necessary level of detail (such as a specific column or row) using the Inspect button or by clicking the links. The following screenshot displays the results of clicking the Inspect button for a frame.

Inspecting Frames

This screenshot displays the results of clicking the columns link.

Inspecting Columns

To view all frames, click the Assist Me! button, then click the getFrames link, or enter getFrames in the cell in CS mode and press Ctrl+Enter. You can also view all current frames by clicking the drop-down Data menu and selecting List All Frames.

A list of the current frames in H2O displays that includes the following information for each frame:

For parsed data, the following information displays:

To make a prediction, check the checkboxes for the frames you want to use to make the prediction, then click the Predict on Selected Frames button.


Splitting Frames

Datasets can be split within Flow for use in model training and testing.

splitFrame cell

  1. To split a frame, click the Assist Me button, then click splitFrame.

    Note: You can also click the drop-down Data menu and select Split Frame….

  2. From the drop-down Frame: list, select the frame to split.
  3. In the second Ratio entry field, specify the fractional value to determine the split. The first Ratio field is automatically calculated based on the values entered in the second Ratio field.

    Note: Only fractional values between 0 and 1 are supported (for example, enter .5 to split the frame in half). The total sum of the ratio values must equal one. H2O automatically adjusts the ratio values to equal one; if unsupported values are entered, an error displays.

  4. In the Key entry field, specify a name for the new frame.
  5. (Optional) To add another split, click the Add a split link. To remove a split, click the X to the right of the Key entry field.
  6. Click the Create button.

Creating Frames

To create a frame with a large amount of random data (for example, to use for testing), click the drop-down Admin menu, then select Create Synthetic Frame. Customize the frame as needed, then click the Create button to create the frame.

Flow - Creating Frames


Plotting Frames

To create a plot from a frame, click the Inspect button, then click the Plot button.

Select the type of plot (point, path, or rect) from the drop-down Type menu, then select the x-axis and y-axis from the following options:

Select one of the above options from the drop-down Color menu to display the specified data in color, then click the Plot button to plot the data.

Flow - Plotting Frames

Note: Because H2O stores enums internally as numeric then maps the integers to an array of strings, any min, max, or mean values for categorical columns are not meaningful and should be ignored. Displays for categorical data will be modified in a future version of H2O.


… Using Flows

You can use and modify flows in a variety of ways:


Using Clips

Clips enable you to save cells containing your workflow for later reuse. To save a cell as a clip, click the paperclip icon to the right of the cell (highlighted in the red box in the following screenshot). Paperclip icon

To use a clip in a workflow, click the “Clips” tab in the sidebar on the right.

Clips tab

All saved clips, including the default system clips (such as assist, importFiles, and predict), are listed. Clips you have created are listed under the “My Clips” heading. To select a clip to insert, click the circular button to the left of the clip name. To delete a clip, click the trashcan icon to the right of the clip name.

NOTE: The default clips listed under “System” cannot be deleted.

Deleted clips are stored in the trash. To permanently delete all clips in the trash, click the Empty Trash button.

NOTE: Saved data, including flows and clips, are persistent as long as the same IP address is used for the cluster. If a new IP is used, previously saved flows and clips are not available.


Viewing Outlines

The Outline tab in the sidebar displays a brief summary of the cells currently used in your flow; essentially, a command history.


Saving Flows

You can save your flow for later reuse. To save your flow as a notebook, click the “Save” button (the first button in the row of buttons below the flow name), or click the drop-down “Flow” menu and select “Save Flow.” To enter a custom name for the flow, click the default flow name (“Untitled Flow”) and type the desired flow name. A pencil icon indicates where to enter the desired name.

Renaming Flows

To confirm the name, click the checkmark to the right of the name field.

Confirm Name

To reuse a saved flow, click the “Flows” tab in the sidebar, then click the flow name. To delete a saved flow, click the trashcan icon to the right of the flow name.

Flows

Finding Saved Flows on your Disk

By default, flows are saved to the h2oflows directory underneath your home directory. The directory where flows are saved is printed to stdout:

03-20 14:54:20.945 172.16.2.39:54323     95667  main      INFO: Flow dir: '/Users/<UserName>/h2oflows'

To back up saved flows, copy this directory to your preferred backup location.

To specify a different location for saved flows, use the command-line argument -flow_dir when launching H2O:

java -jar h2o.jar -flow_dir /<New>/<Location>/<For>/<Saved>/<Flows>

where /<New>/<Location>/<For>/<Saved>/<Flows> represents the specified location. If the directory does not exist, it will be created the first time you save a flow.

Saving Flows on a Hadoop cluster

If you are running H2O Flow on a Hadoop cluster, H2O will try to find the HDFS home directory to use as the default directory for flows. If the HDFS home directory is not found, flows cannot be saved unless a directory is specified while launching using -flow_dir:

hadoop jar h2odriver.jar -nodes 1 -mapperXmx 6g -output hdfsOutputDirName -flow_dir hdfs:///<Saved>/<Flows>/<Location>

The location specified in flow_dir may be either an hdfs or regular filesystem directory. If the directory does not exist, it will be created the first time you save a flow.

Copying Flows

To create a copy of the current flow, select the Flow menu, then click Make a Copy. The name of the current flow changes to Copy of <FlowName> (where <FlowName> is the name of the flow). You can save the duplicated flow using this name by clicking Flow > Save Flow, or rename it before saving.

Downloading Flows

After saving a flow as a notebook, click the Flow menu, then select Download this Flow. A new window opens and the saved flow is downloaded to the default downloads folder on your computer. The file is exported as <filename>.flow, where <filename> is the name specified when the flow was saved.

Caution: You must have an active internet connection to download flows.

Loading Flows

To load a saved flow, click the Flows tab in the sidebar at the right. In the pop-up confirmation window that appears, select Load Notebook, or click Cancel to return to the current flow.

Confirm Replace Flow

After clicking Load Notebook, the saved flow is loaded.

To load an exported flow, click the Flow menu and select Open Flow…. In the pop-up window that appears, click the Choose File button and select the exported flow, then click the Open button.

Open Flow

Notes:


…Troubleshooting Flow

To troubleshoot issues in Flow, use the Admin menu. The Admin menu allows you to check the status of the cluster, view a timeline of events, and view or download logs for issue analysis.

NOTE: To view the current H2O Flow version, click the Help menu, then click About.

Viewing Cluster Status

Click the Admin menu, then select Cluster Status. A summary of the status of the cluster (also known as a cloud) displays, which includes the following information:

The following information displays for each node:

To view more information, click the Show Advanced button.


Viewing CPU Status (Water Meter)

To view the current CPU usage, click the Admin menu, then click Water Meter (CPU Meter). A new window opens, displaying the current CPU use statistics.


Viewing Logs

To view the logs for troubleshooting, click the Admin menu, then click Inspect Log.

Inspect Log

To view the logs for a specific node, select it from the drop-down Select Node menu.


Downloading Logs

To download the logs for further analysis, click the Admin menu, then click Download Log. A new window opens and the logs download to your default download folder. You can close the new window after downloading the logs. Send the logs to h2ostream or file a JIRA ticket for issue resolution.


Viewing Stack Trace Information

To view the stack trace information, click the Admin menu, then click Stack Trace.

Stack Trace

To view the stack trace information for a specific node, select it from the drop-down Select Node menu.


Viewing Network Test Results

To view network test results, click the Admin menu, then click Network Test.

Network Test Results


Accessing the Profiler

The Profiler looks across the cluster to see where the same stack trace occurs, and can be helpful for identifying activity on the current CPU. To view the profiler, click the Admin menu, then click Profiler.

Profiler

To view the profiler information for a specific node, select it from the drop-down Select Node menu.


Viewing the Timeline

To view a timeline of events in Flow, click the Admin menu, then click Timeline. The following information displays for each event:

To obtain the most recent information, click the Refresh button.


Reporting Issues

If you experience an error with Flow, you can submit a JIRA ticket to notify our team.

  1. First, click the Admin menu, then click Download Logs. This will download a file containing information that will help our developers identify the cause of the issue.
  2. Click the Help menu, then click Report an issue. This will open our JIRA page where you can file your ticket.
  3. Click the Create button at the top of the JIRA page.
  4. Attach the log file from the first step, write a description of the error you experienced, then click the Create button at the bottom of the page. Our team will work to resolve the issue and you can track the progress of your ticket in JIRA.

Requesting Help

If you have a Google account, you can submit a request for assistance with H2O on our Google Groups page, H2Ostream.

To access H2Ostream from Flow:

  1. Click the Help menu.
  2. Click Forum/Ask a question.
  3. Click the red New topic button.
  4. Enter your question and click the red Post button. If you are requesting assistance for an error you experienced, be sure to include your logs.

You can also email your question to h2ostream@googlegroups.com.


Shutting Down H2O

To shut down H2O, click the Admin menu, then click Shut Down. A Shut down complete message displays in the upper right when the cluster has been shut down.


Data Science Algorithms

For each algorithm, this document describes how to define a model, how to interpret the model output, and how the algorithm itself works, and provides an FAQ.

Commonalities

Missing Value Handling for Training

If missing values are found in the validation frame during model training or during the scoring process for creating predictions, the missing values are automatically imputed.

If missing values are found during POJO scoring, the returned prediction is NaN.

K-Means

Introduction

K-Means falls in the general category of clustering algorithms.

Defining a K-Means Model

Interpreting a K-Means Model

By default, the following output displays:

K-Means randomly chooses starting points and converges to a local minimum of centroids. The number of clusters is arbitrary, and should be thought of as a tuning parameter. The output is a matrix of the cluster assignments and the coordinates of the cluster centers in terms of the originally chosen attributes. Your cluster centers may differ slightly from run to run as this problem is Non-deterministic Polynomial-time (NP)-hard.

FAQ

K-Means Algorithm

The number of clusters \(K\) is user-defined and is determined a priori.

  1. Choose \(K\) initial cluster centers \(m_{k}\) according to one of the following:

    • Randomization: Choose \(K\) clusters from the set of \(N\) observations at random so that each observation has an equal chance of being chosen.

    • Plus Plus

      a. Choose one center \(m_{1}\) at random.

      b. Calculate the difference between \(m_{1}\) and each of the remaining \(N-1\) observations \(x_{i}\): \(d(x_{i}, m_{1}) = \lVert(x_{i}-m_{1})\rVert^2\)

      c. Let \(P(i)\) be the probability of choosing \(x_{i}\) as \(m_{2}\). Weight \(P(i)\) by \(d(x_{i}, m_{1})\) so that those \(x_{i}\) furthest from \(m_{1}\) have a higher probability of being selected than those \(x_{i}\) close to \(m_{1}\).

      d. Choose the next center \(m_{2}\) by drawing at random according to the weighted probability distribution.

      e. Repeat until \(K\) centers have been chosen.

    • Furthest

      a. Choose one center \(m_{1}\) at random.

      b. Calculate the difference between \(m_{1}\) and each of the remaining \(N-1\) observations \(x_{i}\): \(d(x_{i}, m_{1}) = \lVert(x_{i}-m_{1})\rVert^2\)

      c. Choose \(m_{2}\) to be the \(x_{i}\) that maximizes \(d(x_{i}, m_{1})\).

      d. Repeat until \(K\) centers have been chosen.

  2. Once \(K\) initial centers have been chosen, calculate the difference between each observation \(x_{i}\) and each of the centers \(m_{1},...,m_{K}\), where difference is the squared Euclidean distance taken over \(p\) parameters.

    \(d(x_{i}, m_{k})=\) \(\sum_{j=1}^{p}(x_{ij}-m_{k})^2=\) \(\lVert(x_{i}-m_{k})\rVert^2\)

  3. Assign \(x_{i}\) to the cluster \(k\) defined by \(m_{k}\) that minimizes \(d(x_{i}, m_{k})\).

  4. When all observations \(x_{i}\) are assigned to a cluster, calculate the mean of the points in the cluster.

    \(\bar{x}(k)=\lbrace\bar{x_{i1}},…\bar{x_{ip}}\rbrace\)

  5. Set the \(\bar{x}(k)\) as the new cluster centers \(m_{k}\). Repeat steps 2 through 5 until the specified number of max iterations is reached or cluster assignments of the \(x_{i}\) are stable.

References

Hastie, Trevor, Robert Tibshirani, and Jerome H. Friedman. The Elements of Statistical Learning. Vol. 1. Springer New York, 2001.

Xiong, Hui, Junjie Wu, and Jian Chen. “K-means Clustering Versus Validation Measures: A Data-distribution Perspective.” IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics 39.2 (2009): 318-331.


GLM

Introduction

Generalized Linear Models (GLM) estimate regression models for outcomes following exponential distributions. In addition to the Gaussian (i.e. normal) distribution, these include Poisson, binomial, and gamma distributions. Each serves a different purpose, and depending on distribution and link function choice, can be used either for prediction or classification.

The GLM suite includes:

Defining a GLM Model

Interpreting a GLM Model

By default, the following output displays:

FAQ

GLM Algorithm

Following the definitive text by P. McCullagh and J.A. Nelder (1989) on the generalization of linear models to non-linear distributions of the response variable Y, H2O fits GLM models based on maximum likelihood estimation via iteratively reweighted least squares.

Let \(y_{1},…,y_{n}\) be n observations of the independent, random response variable \(Y_{i}\).

Assume that the observations are distributed according to a function from the exponential family and have a probability density function of the form:

\(f(y_{i})=\exp\left[\frac{y_{i}\theta_{i} - b(\theta_{i})}{a_{i}(\phi)} + c(y_{i}; \phi)\right]\) where \(\theta\) and \(\phi\) are location and scale parameters, and \(a_{i}(\phi), \:b(\theta_{i}),\: c(y_{i}; \phi)\) are known functions.

\(a_{i}\) is of the form \(\:a_{i}=\frac{\phi}{p_{i}}; p_{i}\) is a known prior weight.

When \(Y\) has a pdf from the exponential family:

\(E(Y_{i})=\mu_{i}=b^{\prime}(\theta_{i})\) and \(var(Y_{i})=\sigma_{i}^2=b^{\prime\prime}(\theta_{i})a_{i}(\phi)\)

Let \(g(\mu_{i})=\eta_{i}\) be a monotonic, differentiable transformation of the expected value of \(y_{i}\). The function \(\eta_{i}\) is the link function and follows a linear model.

\(g(\mu_{i})=\eta_{i}=\mathbf{x_{i}^{\prime}}\beta\)

When inverted: \(\mu=g^{-1}(\mathbf{x_{i}^{\prime}}\beta)\)

Maximum Likelihood Estimation

For an initial rough estimate of the parameters \(\hat{\beta}\), use the estimate to generate fitted values: \(\mu_{i}=g^{-1}(\hat{\eta_{i}})\)

Let \(z\) be a working dependent variable such that \(z_{i}=\hat{\eta_{i}}+(y_{i}-\hat{\mu_{i}})\frac{d\eta_{i}}{d\mu_{i}}\),

where \(\frac{d\eta_{i}}{d\mu_{i}}\) is the derivative of the link function evaluated at the trial estimate.

Calculate the iterative weights: \(w_{i}=\frac{p_{i}}{b^{\prime\prime}(\theta_{i})\left(\frac{d\eta_{i}}{d\mu_{i}}\right)^{2}}\)

Where \(b^{\prime\prime}\) is the second derivative of \(b(\theta_{i})\) evaluated at the trial estimate.

Assume \(a_{i}(\phi)\) is of the form \(\frac{\phi}{p_{i}}\). The weight \(w_{i}\) is inversely proportional to the variance of the working dependent variable \(z_{i}\) for current parameter estimates and proportionality factor \(\phi\).

Regress \(z_{i}\) on the predictors \(x_{i}\) using the weights \(w_{i}\) to obtain new estimates of \(\beta\). \(\hat{\beta}=(\mathbf{X}^{\prime}\mathbf{W}\mathbf{X})^{-1}\mathbf{X}^{\prime}\mathbf{W}\mathbf{z}\)

Where \(\mathbf{X}\) is the model matrix, \(\mathbf{W}\) is a diagonal matrix of \(w_{i}\), and \(\mathbf{z}\) is a vector of the working response variable \(z_{i}\).

This process is repeated until the estimates \(\hat{\beta}\) change by less than the specified amount.
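As an illustration, the iteration above can be written out in base R for the binomial family with the logit link. This is a single-machine sketch of the update equations, not H2O's distributed implementation:

    # Illustrative base-R IRLS for a binomial GLM with the logit link.
    irls_logistic <- function(X, y, tol = 1e-8, max_iter = 25) {
      X <- cbind(1, as.matrix(X))            # model matrix with an intercept column
      beta <- rep(0, ncol(X))
      for (i in seq_len(max_iter)) {
        eta <- drop(X %*% beta)              # linear predictor eta = X beta
        mu  <- 1 / (1 + exp(-eta))           # fitted values mu = g^{-1}(eta)
        w   <- mu * (1 - mu)                 # iterative weights w_i
        z   <- eta + (y - mu) / w            # working response: z = eta + (y - mu) * d(eta)/d(mu)
        beta_new <- solve(t(X) %*% (w * X), t(X) %*% (w * z))   # weighted least squares step
        if (max(abs(beta_new - beta)) < tol) break
        beta <- beta_new
      }
      drop(beta)
    }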

Cost of computation

H2O can process large data sets because it relies on parallel processes. Large data sets are divided into smaller data sets and processed simultaneously, and the results are communicated between computers as needed throughout the process.

In GLM, data are split by rows but not by columns, because the predicted Y values depend on information in each of the predictor variable vectors. If O is a complexity function, N is the number of observations (or rows), and p is the number of predictors (or columns), then

    \(Runtime\propto p^3+\frac{(N*p^2)}{CPUs}\)

Distribution reduces the time the algorithm takes because each node works on only a fraction of the N rows.

Relative to p, the larger \(\frac{N}{CPUs}\) becomes, the more trivial p becomes to the overall computational cost. However, when p is greater than \(\frac{N}{CPUs}\), the complexity is dominated by p.

    \(Complexity = O(p^3 + N*p^2)\)

For more information about how GLM works, refer to the Generalized Linear Modeling booklet.

References

Breslow, N E. “Generalized Linear Models: Checking Assumptions and Strengthening Conclusions.” Statistica Applicata 8 (1996): 23-41.

Frome, E L. “The Analysis of Rates Using Poisson Regression Models.” Biometrics (1983): 665-674.

Goldberger, Arthur S. “Best Linear Unbiased Prediction in the Generalized Linear Regression Model.” Journal of the American Statistical Association 57.298 (1962): 369-375.

Guisan, Antoine, Thomas C Edwards Jr, and Trevor Hastie. “Generalized Linear and Generalized Additive Models in Studies of Species Distributions: Setting the Scene.” Ecological modeling 157.2 (2002): 89-100.

Nelder, John A, and Robert WM Wedderburn. “Generalized Linear Models.” Journal of the Royal Statistical Society. Series A (General) (1972): 370-384.

Niu, Feng, et al. “Hogwild!: A lock-free approach to parallelizing stochastic gradient descent.” Advances in Neural Information Processing Systems 24 (2011): 693-701. (The implemented algorithm is on p. 5.)

Pearce, Jennie, and Simon Ferrier. “Evaluating the Predictive Performance of Habitat Models Developed Using Logistic Regression.” Ecological modeling 133.3 (2000): 225-245.

Press, S James, and Sandra Wilson. “Choosing Between Logistic Regression and Discriminant Analysis.” Journal of the American Statistical Association 73.364 (1978): 699-705.

Snee, Ronald D. “Validation of Regression Models: Methods and Examples.” Technometrics 19.4 (1977): 415-428.


DRF

Introduction

Distributed Random Forest (DRF) is a powerful classification tool. When given a set of data, DRF generates a forest of classification trees, rather than a single classification tree. Each of these trees is a weak learner built on a subset of rows and columns. More trees will reduce the variance. The classification from each H2O tree can be thought of as a vote; the most votes determines the classification.
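For example, a DRF model can be built from R roughly as follows (the frame and column names are illustrative):

    # Illustrative DRF fit; `train_hex` and the column names are assumed to exist.
    drf_fit <- h2o.randomForest(x = c("AGE", "RACE", "PSA"), y = "CAPSULE",
                                training_frame = train_hex, ntrees = 50)
    h2o.performance(drf_fit)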

The current version of DRF is fundamentally the same as in previous versions of H2O (same algorithmic steps, same histogramming techniques), with the exception of the following changes:

There was some code cleanup and refactoring to support the following features:

DRF no longer has a special-cased histogram for classification (class DBinomHistogram has been superseded by DRealHistogram), since it was not applicable to cases with observation weights or for cross-validation.

Defining a DRF Model

Interpreting a DRF Model

By default, the following output displays:

FAQ

DRF Algorithm

Building Random Forest at Scale from Sri Ambati

References


Naïve Bayes

Introduction

Naïve Bayes (NB) is a classification algorithm that relies on strong assumptions of the independence of covariates in applying Bayes Theorem. NB models are commonly used as an alternative to decision trees for classification problems.

Defining a Naïve Bayes Model

Interpreting a Naïve Bayes Model

The output from Naïve Bayes is a list of tables containing the a-priori and conditional probabilities of each class of the response. The a-priori probability is the estimated probability of a particular class before observing any of the predictors. Each conditional probability table corresponds to a predictor column. The row headers are the classes of the response and the column headers are the classes of the predictor. Thus, in the table below, the conditional probability that a person is male (x) given that they did not survive (y) is 0.91543624.

                Sex
Survived       Male     Female
     No  0.91543624 0.08456376
     Yes 0.51617440 0.48382560

When the predictor is numeric, Naïve Bayes assumes it is sampled from a Gaussian distribution given the class of the response. The first column contains the mean and the second column contains the standard deviation of the distribution.
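For example, a model like the one summarized above can be built from R roughly as follows (the file path and column names are illustrative):

    # Illustrative Naïve Bayes fit; "titanic.csv", "Survived", "Sex", and "Age" are assumed names.
    titanic_hex <- h2o.importFile("titanic.csv")
    titanic_hex$Survived <- as.factor(titanic_hex$Survived)
    nb <- h2o.naiveBayes(x = c("Sex", "Age"), y = "Survived",
                         training_frame = titanic_hex, laplace = 1)
    nb   # prints the a-priori and conditional probability tables described above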

By default, the following output displays:

FAQ

For Naïve Bayes, we recommend using many smaller nodes because the distributed task doesn’t require intensive computation.

Naïve Bayes Algorithm

The algorithm is presented for the simplified binomial case without loss of generality.

Under the Naïve Bayes assumption of independence, given a training set \(\{(X^{(i)},\ y^{(i)});\ i=1,...,m\}\) for a set of discrete-valued features \(X\), the joint likelihood of the data can be expressed as:

\(\mathcal{L} \: (\phi(y),\: \phi_{i|y=1},\:\phi_{i|y=0})=\Pi_{i=1}^{m} p(X^{(i)},\: y^{(i)})\)

The model can be parameterized by:

\(\phi_{i|y=0}=\ p(x_{i}=1|\ y=0);\: \phi_{i|y=1}=\ p(x_{i}=1|y=1);\: \phi(y)\)

where \(\phi_{i|y=0}= p(x_{i}=1|\ y=0)\) can be thought of as the fraction of the observed instances where feature \(x_{i}\) is observed and the outcome is \(y=0\), \(\phi_{i|y=1}= p(x_{i}=1|\ y=1)\) is the fraction of the observed instances where feature \(x_{i}\) is observed and the outcome is \(y=1\), and so on.

The objective of the algorithm is to maximize the joint likelihood with respect to \(\phi_{i|y=0}, \ \phi_{i|y=1},\) and \(\phi(y)\).

Where the maximum likelihood estimates are:

\(\phi_{j|y=1}= \frac{\Sigma_{i=1}^{m} 1(x_{j}^{(i)}=1 \ \bigcap \ y^{(i)} = 1)}{\Sigma_{i=1}^{m} 1(y^{(i)}=1)}\)

\(\phi_{j|y=0}= \frac{\Sigma_{i=1}^{m} 1(x_{j}^{(i)}=1 \ \bigcap \ y^{(i)} = 0)}{\Sigma_{i=1}^{m} 1(y^{(i)}=0)}\)

\(\phi(y)= \frac{\Sigma_{i=1}^{m} 1(y^{(i)} = 1)}{m}\)

Once all parameters \(\phi_{j|y}\) are fitted, the model can be used to predict new examples with features \(X_{(i^*)}\).

This is carried out by calculating:

\(p(y=1|x)=\frac{\Pi p(x_i|y=1) p(y=1)}{\Pi p(x_i|y=1)p(y=1) \: +\: \Pi p(x_i|y=0)p(y=0)}\)

\(p(y=0|x)=\frac{\Pi p(x_i|y=0) p(y=0)}{\Pi p(x_i|y=1)p(y=1) \: +\: \Pi p(x_i|y=0)p(y=0)}\)

and predicting the class with the highest probability.

It is possible that prediction sets contain features not originally seen in the training set. If this occurs, the maximum likelihood estimates for these features predict a probability of 0 for all cases of y.

Laplace smoothing allows a model to predict on out of training data features by adjusting the maximum likelihood estimates to be:

\(\phi_{j|y=1}= \frac{\Sigma_{i=1}^{m} 1(x_{j}^{(i)}=1 \ \bigcap \ y^{(i)} = 1) \: + \: 1}{\Sigma_{i=1}^{m} 1(y^{(i)}=1) \: + \: 2}\)

\(\phi_{j|y=0}= \frac{\Sigma_{i=1}^{m} 1(x_{j}^{(i)}=1 \ \bigcap \ y^{(i)} = 0) \: + \: 1}{\Sigma_{i=1}^{m} 1(y^{(i)}=0) \: + \: 2}\)

Note that in the general case, where a variable takes on \(k\) values, the numerator is incremented by 1 and the denominator is incremented by \(k\) (rather than by 2, as in the two-level classifier shown here).

Laplace smoothing should be used with care; it is generally intended to allow for predictions in rare events. As prediction data becomes increasingly distinct from training data, train new models when possible to account for a broader set of possible X values.
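As a toy illustration of the estimates above in base R (a single binary feature and a binary outcome):

    # Maximum likelihood vs. Laplace-smoothed estimate of phi_{j|y=1} for one binary feature.
    x <- c(1, 0, 1, 1, 0)    # feature x_j across m = 5 observations
    y <- c(1, 1, 0, 1, 0)    # outcome
    phi_mle     <- sum(x == 1 & y == 1) / sum(y == 1)              # 2/3
    phi_laplace <- (sum(x == 1 & y == 1) + 1) / (sum(y == 1) + 2)  # 3/5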

References

Hastie, Trevor, Robert Tibshirani, and J Jerome H Friedman. The Elements of Statistical Learning. Vol.1. N.p., Springer New York, 2001.

Ng, Andrew. “Generative Learning algorithms.” (2008).


PCA

Introduction

Principal Components Analysis (PCA) is closely related to Principal Components Regression. The algorithm is carried out on a set of possibly collinear features and performs a transformation to produce a new set of uncorrelated features.

PCA is commonly used for dimensionality reduction and for modeling without regularization. It can also be useful as a preprocessing step before distance-based algorithms such as K-Means, since PCA guarantees that all dimensions of a manifold are orthogonal.

Defining a PCA Model

Interpreting a PCA Model

PCA output returns a table displaying the number of components specified by the value for k.

Scree and cumulative variance plots for the components are returned as well. Users can access them by clicking on the black button labeled “Scree and Variance Plots” at the top left of the results page. A scree plot shows the variance of each component, while the cumulative variance plot shows the total variance accounted for by the set of components.

The output for PCA includes the following:

FAQ

For the GramSVD and Power methods, all rows containing missing values are ignored during training. For the GLRM method, missing values are excluded from the sum over the loss function in the objective. For more information, refer to section 4 Generalized Loss Functions, equation (13), in “Generalized Low Rank Models” by Boyd et al.

For PCA, this is dependent on the selected pca_method parameter:

After the PCA model has been built using h2o.prcomp, use h2o.predict on the original data frame and the PCA model to produce the dimensionality-reduced representation. Use cbind to add the response column from the original data frame to the frame produced by h2o.predict. At this point, you can build supervised learning models on the new data frame.
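A rough sketch of this workflow in R (the frame and column names are illustrative, and h2o.cbind is used for the column bind):

    # Illustrative: reduce 10 predictors to 5 principal components, then train on the scores.
    pca <- h2o.prcomp(training_frame = train_hex, x = 1:10, k = 5)
    scores <- h2o.predict(pca, train_hex)                 # dimensionality-reduced representation
    new_train <- h2o.cbind(scores, train_hex$response)    # re-attach the response column
    gbm_on_pcs <- h2o.gbm(x = 1:5, y = "response", training_frame = new_train)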

PCA Algorithm

Let \(X\) be an \(M\times N\) matrix where

The covariance matrix \(C_{x}\) is

\(C_{x}=\frac{1}{n}XX^{T}\)

where \(n\) is the number of observations.

\(C_{x}\) is a square, symmetric \(m\times m\) matrix, the diagonal entries of which are the variances of attributes, and the off-diagonal entries are covariances between attributes.

PCA convergence is based on the method described by Gockenbach: “The rate of convergence of the power method depends on the ratio \(|\lambda_2|/|\lambda_1|\). If this is small…then the power method converges rapidly. If the ratio is close to 1, then convergence is quite slow. The power method will fail if \(|\lambda_2| = |\lambda_1|\).” (567).

The objective of PCA is to maximize variance while minimizing covariance.

To accomplish this, PCA finds an orthonormal matrix \(P\) such that \(Y=PX\), constrained by the requirement that \(C_{y}=\frac{1}{n}YY^{T}\) be a diagonal matrix: the off-diagonal entries of \(C_{y}\) are 0, and each successive dimension of \(Y\) is ranked according to variance.

The rows of \(P\) are the principal components of X.

\(C_{y}=\frac{1}{n}YY^{T}=\frac{1}{n}(PX)(PX)^{T}=PC_{x}P^{T}\)

Because any symmetric matrix is diagonalized by an orthogonal matrix of its eigenvectors, choose \(P\) to be a matrix where each row is an eigenvector of \(\frac{1}{n}XX^{T}=C_{x}\).

Then the principal components of \(X\) are the eigenvectors of \(C_{x}\), and the \(i^{th}\) diagonal value of \(C_{y}\) is the variance of \(X\) along \(p_{i}\).

Eigenvectors of \(C_{x}\) are found by first finding the eigenvalues \(\lambda\) of \(C_{x}\).

For each eigenvalue \(\lambda\), \((C_{x}-\lambda I)x = 0\), where \(x\) is the eigenvector associated with \(\lambda\).

Solve for \(x\) by Gaussian elimination.

Recovering SVD from GLRM

GLRM gives \(x\) and \(y\), where \(x \in \rm \Bbb I \!\Bbb R ^{n * k}\) and \( y \in \rm \Bbb I \!\Bbb R ^{k*m} \)

   - \(n\)= number of rows (A)

   - \(m\)= number of columns (A)

   - \(k\)= user-specified rank

   - \(A\)= training matrix

It is assumed that the \(x\) and \(y\) columns are independent.

First, perform QR decomposition of \(x\) and \(y^T\):

   \(x = QR\)

    \(y^T = ZS\), where \(Q^TQ = I = Z^TZ\)

      Call JAMA QR Decomposition directly on \(y^T\) to get \( Z \in \rm \Bbb I \! \Bbb R^{m * k}\), \( S \in \rm \Bbb I \! \Bbb R^{k * k} \)

      \( R \) from QR decomposition of \( x \) is the upper triangular factor of Cholesky of \(X^TX\) Gram

      \( X^TX = LL^T, X = QR \)

      \( X^TX= (R^TQ^T) QR = R^TR \), since \(Q^TQ=I \) => \(R=L^T\) (transpose lower triangular)

Note: In code, \(\frac{X^TX}{n} = LL^T \)

   \( X^TX = (L \sqrt{n})(L \sqrt{n})^T =R^TR \)

   \( R = L^T \sqrt{n} \in \rm \Bbb I \! \Bbb R^{k * k} \) reduced QR decomposition.

For more information, refer to the Rectangular matrix section of “QR Decomposition” on Wikipedia.

\( XY = QR(ZS)^T = Q(RS^T)Z^T \)

Note: \( (RS^T) \in \rm \Bbb I \!\Bbb R^{k * k} \)

Find SVD (locally) of \( RS^T \)

\( RS^T = U \Sigma V^T, U^TU = I = V^TV \) orthogonal

\( XY = Q(RS^T)Z^T = (QU) \Sigma (ZV)^T \), the SVD of \(XY\)

   \( (QU)^T(QU) = U^T Q^TQ U = U^TU = I\)

   \( (ZV)^T(ZV) = V^TZ^TZV = V^TV =I \)

Right singular vectors: \( ZV \in \rm \Bbb I \!\Bbb R^{m * k} \)

Singular values: \( \Sigma \in \rm \Bbb I \!\Bbb R^{k * k} \) diagonal

Left singular vectors: \( (QU) \in \rm \Bbb I \!\Bbb R^{n * k}\)
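The derivation above can be checked with a small base-R sketch, assuming an arbitrary rank-k factorization X (n x k) and Y (k x m); this is illustrative only, not H2O's implementation:

    # Recover the SVD of XY from a rank-k factorization via two thin QRs and a small k x k SVD.
    set.seed(1)
    n <- 100; m <- 20; k <- 5
    X <- matrix(rnorm(n * k), n, k)
    Y <- matrix(rnorm(k * m), k, m)

    qx <- qr(X);    Q <- qr.Q(qx); R <- qr.R(qx)   # X = QR
    qy <- qr(t(Y)); Z <- qr.Q(qy); S <- qr.R(qy)   # t(Y) = ZS
    sv <- svd(R %*% t(S))                          # small k x k SVD: RS^T = U D V^T

    U_full <- Q %*% sv$u      # left singular vectors of XY  (n x k)
    V_full <- Z %*% sv$v      # right singular vectors of XY (m x k)
    D      <- sv$d            # singular values

    # Check: the recovered SVD reconstructs XY (difference is numerically ~0)
    max(abs(X %*% Y - U_full %*% diag(D) %*% t(V_full)))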

References

Gockenbach, Mark S. “Finite-Dimensional Linear Algebra (Discrete Mathematics and Its Applications).” (2010): 566-567.


GBM

Introduction

Gradient Boosted Regression and Gradient Boosted Classification are forward learning ensemble methods. The guiding heuristic is that good predictive results can be obtained through increasingly refined approximations. H2O’s GBM sequentially builds regression trees on all the features of the dataset in a fully distributed way - each tree is built in parallel.
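For example, a GBM can be fit from R roughly as follows (the frame and column names are illustrative):

    # Illustrative GBM fit; `train_hex` and the column names are assumed to exist.
    gbm_fit <- h2o.gbm(x = c("AGE", "RACE", "PSA"), y = "CAPSULE",
                       training_frame = train_hex,
                       ntrees = 50, max_depth = 5, learn_rate = 0.1)
    h2o.performance(gbm_fit)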

The current version of GBM is fundamentally the same as in previous versions of H2O (same algorithmic steps, same histogramming techniques), with the exception of the following changes:

There was some code cleanup and refactoring to support the following features:

Defining a GBM Model

Interpreting a GBM Model

The output for GBM includes the following:

FAQ

This is a known behavior of GBM that is similar to its behavior in R. If, for example, it takes 50 trees to learn all there is to learn from a frame without the random features, when you add a random predictor and train 1000 trees, the first 50 trees will be approximately the same. The final 950 trees are used to make sense of the random number, which will take a long time since there’s no structure. The variable importance will reflect the fact that all the splits from those final 950 trees are devoted to the random feature.

GBM Algorithm

H2O’s Gradient Boosting Algorithms follow the algorithm specified by Hastie et al (2001):

Initialize \(f_{k0} = 0,\: k=1,2,…,K\)

For \(m=1\) to \(M:\)

  (a) Set \(p_{k}(x)=\frac{e^{f_{k}(x)}}{\sum_{l=1}^{K}e^{f_{l}(x)}},\:k=1,2,…,K\)

  (b) For \(k=1\) to \(K\):

    i. Compute \(r_{ikm}=y_{ik}-p_{k}(x_{i}),\:i=1,2,…,N\)

    ii. Fit a regression tree to the targets \(r_{ikm},\:i=1,2,…,N\), giving terminal regions \(R_{jkm},\:j=1,2,…,J_{m}\)

    iii. Compute \(\gamma_{jkm}=\frac{K-1}{K}\:\frac{\sum_{x_{i}\in R_{jkm}}(r_{ikm})}{\sum_{x_{i}\in R_{jkm}}|r_{ikm}|(1-|r_{ikm}|)},\:j=1,2,…,J_{m}\)

    iv. Update \(f_{km}(x)=f_{k,m-1}(x)+\sum_{j=1}^{J_{m}}\gamma_{jkm}I(x\in R_{jkm})\)

Output \(\:\hat{f_{k}}(x)=f_{kM}(x),\:k=1,2,…,K.\)

Be aware that the column type affects how the histogram is created and the column type depends on whether rows are excluded or assigned a weight of 0. For example:

val   weight
1     1
0.5   0
5     1
3.5   0

The above vec has a real-valued type if passed as a whole, but if the zero-weighted rows are sliced away first, only the integer values remain and the column is treated as an integer column. The resulting histogram is either kept at full nbins resolution or potentially shrunk to the discrete integer range, which affects the split points.

For more information about the GBM algorithm, refer to the Gradient Boosted Machines booklet.

References

Dietterich, Thomas G, and Eun Bae Kong. “Machine Learning Bias, Statistical Bias, and Statistical Variance of Decision Tree Algorithms.” ML-95 255 (1995).

Elith, Jane, John R Leathwick, and Trevor Hastie. “A Working Guide to Boosted Regression Trees.” Journal of Animal Ecology 77.4 (2008): 802-813

Friedman, Jerome H. “Greedy Function Approximation: A Gradient Boosting Machine.” Annals of Statistics (2001): 1189-1232.

Friedman, Jerome, Trevor Hastie, Saharon Rosset, Robert Tibshirani, and Ji Zhu. “Discussion of Boosting Papers.” Ann. Statist 32 (2004): 102-107

Friedman, Jerome, Trevor Hastie, and Robert Tibshirani. “Additive Logistic Regression: A Statistical View of Boosting (With Discussion and a Rejoinder by the Authors).” The Annals of Statistics 28.2 (2000): 337-407

Hastie, Trevor, Robert Tibshirani, and J Jerome H Friedman. The Elements of Statistical Learning. Vol.1. N.p., page 339: Springer New York, 2001.


Deep Learning

Introduction

H2O’s Deep Learning is based on a multi-layer feed-forward artificial neural network that is trained with stochastic gradient descent using back-propagation. The network can contain a large number of hidden layers consisting of neurons with tanh, rectifier and maxout activation functions. Advanced features such as adaptive learning rate, rate annealing, momentum training, dropout, L1 or L2 regularization, checkpointing and grid search enable high predictive accuracy. Each compute node trains a copy of the global model parameters on its local data with multi-threading (asynchronously), and contributes periodically to the global model via model averaging across the network.

Defining a Deep Learning Model

H2O Deep Learning models have many input parameters, many of which are only accessible via the expert mode. For most cases, use the default values. Please read the following instructions before building extensive Deep Learning models. The application of grid search and successive continuation of winning models via checkpoint restart is highly recommended, as model performance can vary greatly.

Interpreting a Deep Learning Model

To view the results, click the View button. The output for the Deep Learning model includes the following information for both the training and testing sets:

FAQ

In general, to get the best possible model, we recommend building a model with train_samples_per_iteration = -2 (which is the default value for auto-tuning) and saving it.

To improve the initial model, start from the previous model and add iterations by building another model, setting the checkpoint to the previous model, and changing train_samples_per_iteration, target_ratio_comm_to_comp, or other parameters.

If you don’t know your model ID because it was generated by R, look it up using h2o.ls(). By default, Deep Learning model names start with deeplearning_. To view the model, use m <- h2o.getModel("my_model_id") or summary(m).

There are a few ways to manage checkpoint restarts:

Option 1: (Multi-node only) Leave train_samples_per_iteration = -2 and increase target_ratio_comm_to_comp from 0.05 to 0.25 or 0.5, which provides more communication. This should result in a better model when using multiple nodes. Note: This does not affect single-node performance.

Option 2: (Single or multi-node) Set train_samples_per_iteration to \(N\), where \(N\) is the number of training samples used for training by the entire cluster for one iteration. Each of the nodes then trains on \(N\) randomly-chosen rows for every iteration. The number defined as \(N\) depends on the dataset size and the model complexity.

Option 3: (Single or multi-node) Change regularization parameters such as l1, l2, max_w2, input_dropout_ratio, or hidden_dropout_ratios. We recommend building the first model using RectifierWithDropout, input_dropout_ratio = 0 (if there is suspected noise in the input), and hidden_dropout_ratios=c(0,0,0) (for the ability to enable dropout regularization later).
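For example, a checkpoint restart from R might look roughly like this (the frame, predictor list, and model IDs are illustrative):

    # Build an initial model, then continue training it via the checkpoint parameter.
    dl1 <- h2o.deeplearning(x = predictors, y = "response", training_frame = train_hex,
                            hidden = c(200, 200), epochs = 10,
                            train_samples_per_iteration = -2, model_id = "dl_model_1")
    dl2 <- h2o.deeplearning(x = predictors, y = "response", training_frame = train_hex,
                            hidden = c(200, 200), epochs = 20,
                            checkpoint = "dl_model_1", model_id = "dl_model_2")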

The max_after_balance_size parameter defines the maximum size of the over-sampled dataset. For example, if max_after_balance_size = 3, the over-sampled dataset will not be greater than three times the size of the original dataset.

For example, if you have five classes with priors of 90%, 2.5%, 2.5%, 2.5%, and 2.5% (out of a total of one million rows) and you oversample to obtain a class balance using balance_classes = T, the result is that all four minority classes are oversampled by forty times and the total dataset will be 4.5 times as large as the original dataset (900,000 rows of each class). If max_after_balance_size = 3, all five balanced classes are reduced by 3/5, resulting in 600,000 rows each (three million total).

To specify the per-class over- or under-sampling factors, use class_sampling_factors. In the previous example, the default behavior with balance_classes is equivalent to c(1,40,40,40,40), while when max_after_balance_size = 3, the results would be c(3/5,40*3/5,40*3/5,40*3/5,40*3/5).

In all cases, the probabilities are adjusted to the pre-sampled space, so the minority classes will have lower average final probabilities than the majority class, even if they were sampled to reach class balance.
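Continuing the example above, the balancing options might be set from R as follows (the frame and column names are illustrative):

    # Illustrative: oversample the four minority classes and cap the balanced data at 3x the original size.
    dl_bal <- h2o.deeplearning(x = predictors, y = "response", training_frame = train_hex,
                               balance_classes = TRUE,
                               class_sampling_factors = c(1, 40, 40, 40, 40),
                               max_after_balance_size = 3)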


Deep Learning Algorithm

To compute deviance for a Deep Learning regression model, the following formula is used:

Loss = Quadratic -> MSE == Deviance
Loss = Absolute/Laplace or Huber -> MSE != Deviance

For more information about how the Deep Learning algorithm works, refer to the Deep Learning booklet.

References

“Deep Learning.” Wikipedia: The free encyclopedia. Wikimedia Foundation, Inc. 1 May 2015. Web. 4 May 2015.

“Artificial Neural Network.” Wikipedia: The free encyclopedia. Wikimedia Foundation, Inc. 22 April 2015. Web. 4 May 2015.

Zeiler, Matthew D. ‘ADADELTA: An Adaptive Learning Rate Method’. Arxiv.org. N.p., 2012. Web. 4 May 2015.

Sutskever, Ilya et al. “On the importance of initialization and momentum in deep learning.” JMLR:W&CP vol. 28. (2013).

Hinton, G.E. et. al. “Improving neural networks by preventing co-adaptation of feature detectors.” University of Toronto. (2012).

Wager, Stefan et. al. “Dropout Training as Adaptive Regularization.” Advances in Neural Information Processing Systems. (2013).

Gedeon, TD. “Data mining of inputs: analysing magnitude and functional measures.” University of New South Wales. (1997).

Candel, Arno and Parmar, Viraj. “Deep Learning with H2O.” H2O.ai, Inc. (2015).

Deep Learning Training

Slideshare slide decks

Youtube channel

Candel, Arno. “The Definitive Performance Tuning Guide for H2O Deep Learning.” H2O.ai, Inc. (2015).

Niu, Feng, et al. “Hogwild!: A lock-free approach to parallelizing stochastic gradient descent.” Advances in Neural Information Processing Systems 24 (2011): 693-701. (algorithm implemented is on p.5)

Hawkins, Simon et al. “Outlier Detection Using Replicator Neural Networks.” CSIRO Mathematical and Information Sciences

YARN Best Practices

YARN (Yet Another Resource Negotiator) is a resource management framework. H2O can be launched as an application on YARN. If you want to run H2O on Hadoop, essentially, you are running H2O on YARN. If you are not currently using YARN to manage your cluster resources, we strongly recommend it.

Using H2O with YARN

When you launch H2O on Hadoop using the hadoop jar command, YARN allocates the necessary resources to launch the requested number of nodes. H2O launches as a MapReduce (V2) task, where each mapper is an H2O node of the specified size.

hadoop jar h2odriver.jar -nodes 1 -mapperXmx 6g -output hdfsOutputDirName

Occasionally, YARN may reject a job request. This usually occurs because either there is not enough memory to launch the job or because of an incorrect configuration.

If YARN rejects the job request, try launching the job with less memory to see if that is the cause of the failure. Specify smaller values for -mapperXmx (we recommend a minimum of 2g) and -nodes (start with 1) to confirm that H2O can launch successfully.

To resolve configuration issues, adjust the maximum memory that YARN will allow when launching each mapper. If the cluster manager settings are configured for the default maximum memory size but the memory required for the request exceeds that amount, YARN will not launch and H2O will time out. If you are using the default configuration, change the configuration settings in your cluster manager to specify memory allocation when launching mapper tasks. To calculate the amount of memory required for a successful launch, use the following formula:

YARN container size (mapreduce.map.memory.mb) = -mapperXmx value + (-mapperXmx * -extramempercent [default is 10%])

The mapreduce.map.memory.mb value must be less than the YARN memory configuration values for the launch to succeed.
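For example, launching with -mapperXmx 6g and the default -extramempercent of 10% requires a container of roughly 6,144 MB + 614 MB ≈ 6,758 MB, so the YARN memory settings (yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb) must be at least that large for the launch to succeed.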

Configuring YARN

For Cloudera, configure the settings in Cloudera Manager. Depending on how the cluster is configured, you may need to change the settings for more than one role group.

  1. Click Configuration and enter the following search term in quotes: yarn.nodemanager.resource.memory-mb.

  2. Enter the amount of memory (in GB) to allocate in the Value field. If more than one group is listed, change the values for all listed groups.

    Cloudera Configuration

  3. Click the Save Changes button in the upper-right corner.

  4. Enter the following search term in quotes: yarn.scheduler.maximum-allocation-mb
  5. Change the value, click the Save Changes button in the upper-right corner, and redeploy.

    Cloudera Configuration

For Hortonworks, configure the settings in Ambari.

  1. Select YARN, then click the Configs tab.
  2. Select the group.
  3. In the Node Manager section, enter the amount of memory (in MB) to allocate in the yarn.nodemanager.resource.memory-mb entry field.

    Ambari Configuration

  4. In the Scheduler section, enter the amount of memory (in MB) to allocate in the yarn.scheduler.maximum-allocation-mb entry field.

    Ambari Configuration

  5. Click the Save button at the bottom of the page and redeploy the cluster.

For MapR:

  1. Edit the yarn-site.xml file for the node running the ResourceManager.
  2. Change the values for the yarn.nodemanager.resource.memory-mb and yarn.scheduler.maximum-allocation-mb properties.
  3. Restart the ResourceManager and redeploy the cluster.

To verify the values were changed, check the values for the following properties:

 - <name>yarn.nodemanager.resource.memory-mb</name>
 - <name>yarn.scheduler.maximum-allocation-mb</name>

Limiting CPU Usage

To limit the number of CPUs used by H2O, use the -nthreads option and specify the maximum number of CPUs for a single container to use. The following example limits the number of CPUs to four:

hadoop jar h2odriver.jar -nthreads 4 -nodes 1 -mapperXmx 6g -output hdfsOutputDirName

Note: The default is 4 times the number of CPUs. You must specify at least four CPUs; otherwise, the following error message displays: ERROR: nthreads invalid (must be >= 4)

Specifying Queues

If you do not specify a queue when launching H2O, H2O jobs are submitted to the default queue. Jobs submitted to the default queue have a lower priority than jobs submitted to a specific queue.

To specify a queue with Hadoop, enter -Dmapreduce.job.queuename=<my-h2o-queue>

(where <my-h2o-queue> is the name of the queue) when launching Hadoop.

For example,

hadoop jar h2odriver.jar -Dmapreduce.job.queuename=<my-h2o-queue> -nodes <num-nodes> -mapperXmx 6g -output hdfsOutputDirName

Specifying Output Directories

To prevent overwriting multiple users’ files, each job must have a unique output directory name. Change the -output hdfsOutputDir argument (where hdfsOutputDir is the name of the directory).

Alternatively, you can delete the directory (manually or by using a script) instead of creating a unique directory each time you launch H2O.

Customizing YARN

Most of the configurable YARN variables are stored in yarn-site.xml. To prevent settings from being overridden, you can mark a config as “final.” If you change any values in yarn-site.xml, you must restart YARN to confirm the changes.

Accessing Logs

To learn how to access logs in YARN, refer to Downloading Logs.

Downloading Logs

Accessing Logs

Depending on whether you are using Hadoop with H2O and whether the job is currently running, there are different ways of obtaining the logs for H2O.

Copy and email the logs to support@h2o.ai or submit them to h2ostream@googlegroups.com with a brief description of your Hadoop environment, including the Hadoop distribution and version.

Without Running Jobs

    jessica@mr-0x8:~/h2o-3.1.0.3008-cdh5.2$ hadoop jar h2odriver.jar -nodes 1 -mapperXmx 6g -output hdfsOutputDirName
Determining driver host interface for mapper->driver callback...
    [Possible callback IP address: 172.16.2.178]
    [Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 172.16.2.178:52030
(You can override these with -driverif and -driverport.)
Memory Settings:
    mapreduce.map.java.opts:     -Xms1g -Xmx1g -Dlog4j.defaultInitOverride=true
    Extra memory percent:        10
    mapreduce.map.memory.mb:     1126
15/05/06 17:11:50 INFO client.RMProxy: Connecting to ResourceManager at mr-0x10.0xdata.loc/172.16.2.180:8032
15/05/06 17:11:52 INFO mapreduce.JobSubmitter: number of splits:1
15/05/06 17:11:52 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1430127035640_0075
15/05/06 17:11:52 INFO impl.YarnClientImpl: Submitted application application_1430127035640_0075
15/05/06 17:11:52 INFO mapreduce.Job: The url to track the job: http://mr-0x10.0xdata.loc:8088/proxy/application_1430127035640_0075/
Job name 'H2O_29570' submitted
JobTracker job ID is 'job_1430127035640_0075'
For YARN users, logs command is 'yarn logs -applicationId application_1430127035640_0075'
Waiting for H2O cluster to come up...

In the above example, the command is specified in the next to last line (For YARN users, logs command is...). The command is unique for each instance. In Terminal, enter yarn logs -applicationId application_<UniqueID> to view the logs (where <UniqueID> is the number specified in the next to last line of the output that displayed when you created the cluster).


Use YARN to obtain the stdout and stderr logs that are used for troubleshooting. To learn how to access YARN based on management software, version, and job status, see Accessing YARN.

  1. Click the Applications link to view all jobs, then click the History link for the job.

    YARN - History

  2. Click the logs link.

    YARN - History

  3. Copy the information that displays and send it in an email to support@h2o.ai.

    YARN - History


With Running Jobs

If you are using Hadoop and the job is still running:


05-06 17:12:15.610 172.16.2.179:54321    26336  main      INFO: ----- H2O started  -----
05-06 17:12:15.731 172.16.2.179:54321    26336  main      INFO: Build git branch: master
05-06 17:12:15.731 172.16.2.179:54321    26336  main      INFO: Build git hash: 41d039196088df081ad77610d3e2d6550868f11b
05-06 17:12:15.731 172.16.2.179:54321    26336  main      INFO: Build git describe: jenkins-master-1187
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Build project version: 0.3.0.1187
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Built by: 'jenkins'
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Built on: '2015-05-05 23:31:12'
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java availableProcessors: 8
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java heap totalMemory: 982.0 MB
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java heap maxMemory: 982.0 MB
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java version: Java 1.7.0_80 (from Oracle Corporation)
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: OS   version: Linux 3.13.0-51-generic (amd64)
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Machine physical memory: 31.30 GB
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: X-h2o-cluster-id: 1430957535344
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Possible IP Address: virbr0 (virbr0), 192.168.122.1
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Possible IP Address: br0 (br0), 172.16.2.179
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Possible IP Address: lo (lo), 127.0.0.1
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Multiple local IPs detected:
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO:   /192.168.122.1  /172.16.2.179
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Attempting to determine correct address...
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Using /172.16.2.179
05-06 17:12:15.734 172.16.2.179:54321    26336  main      INFO: Internal communication uses port: 54322
05-06 17:12:15.734 172.16.2.179:54321    26336  main      INFO: Listening for HTTP and REST traffic on  http://172.16.2.179:54321/
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO: H2O cloud name: 'H2O_29570' on /172.16.2.179:54321, discovery address /237.61.246.13:60733
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO: If you have trouble connecting, try SSH tunneling from your local machine (e.g., via port 55555):
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO:   1. Open a terminal and run 'ssh -L 55555:localhost:54321 yarn@172.16.2.179'
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO:   2. Point your browser to http://localhost:55555
05-06 17:12:15.979 172.16.2.179:54321    26336  main      INFO: Log dir: '/home2/yarn/nm/usercache/jessica/appcache/application_1430127035640_0075/h2ologs'



Accessing YARN

Methods for accessing YARN vary depending on the default management software and version, as well as job status.


Cloudera 5 & 5.2

  1. In Cloudera Manager, click the YARN link in the cluster section.

    Cloudera Manager

  2. In the Quick Links section, select ResourceManager Web UI if the job is running or select HistoryServer Web UI if the job is not running.

    Cloudera Manager


Ambari

  1. From the Ambari Dashboard, select YARN.

    Ambari

  2. From the Quick Links drop-down menu, select ResourceManager UI.

    Ambari


For Non-Hadoop Users

Without Current Jobs

If you are not using Hadoop and the job is not running:


With Current Jobs

If you are not using Hadoop and the job is still running:

05-06 17:12:15.610 172.16.2.179:54321    26336  main      INFO: ----- H2O started  -----
05-06 17:12:15.731 172.16.2.179:54321    26336  main      INFO: Build git branch: master
05-06 17:12:15.731 172.16.2.179:54321    26336  main      INFO: Build git hash: 41d039196088df081ad77610d3e2d6550868f11b
05-06 17:12:15.731 172.16.2.179:54321    26336  main      INFO: Build git describe: jenkins-master-1187
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Build project version: 0.3.0.1187
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Built by: 'jenkins'
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Built on: '2015-05-05 23:31:12'
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java availableProcessors: 8
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java heap totalMemory: 982.0 MB
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java heap maxMemory: 982.0 MB
05-06 17:12:15.732 172.16.2.179:54321    26336  main      INFO: Java version: Java 1.7.0_80 (from Oracle Corporation)
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: OS   version: Linux 3.13.0-51-generic (amd64)
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Machine physical memory: 31.30 GB
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: X-h2o-cluster-id: 1430957535344
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Possible IP Address: virbr0 (virbr0), 192.168.122.1
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Possible IP Address: br0 (br0), 172.16.2.179
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Possible IP Address: lo (lo), 127.0.0.1
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Multiple local IPs detected:
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO:   /192.168.122.1  /172.16.2.179
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Attempting to determine correct address...
05-06 17:12:15.733 172.16.2.179:54321    26336  main      INFO: Using /172.16.2.179
05-06 17:12:15.734 172.16.2.179:54321    26336  main      INFO: Internal communication uses port: 54322
05-06 17:12:15.734 172.16.2.179:54321    26336  main      INFO: Listening for HTTP and REST traffic on  http://172.16.2.179:54321/
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO: H2O cloud name: 'H2O_29570' on /172.16.2.179:54321, discovery address /237.61.246.13:60733
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO: If you have trouble connecting, try SSH tunneling from your local machine (e.g., via port 55555):
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO:   1. Open a terminal and run 'ssh -L 55555:localhost:54321 yarn@172.16.2.179'
05-06 17:12:15.744 172.16.2.179:54321    26336  main      INFO:   2. Point your browser to http://localhost:55555
05-06 17:12:15.979 172.16.2.179:54321    26336  main      INFO: Log dir: '/home2/yarn/nm/usercache/jessica/appcache/application_1430127035640_0075/h2ologs'


        ------------------------------------------------------------

        Time:     2015-01-06 15:46:11.083

        GET       http://172.16.2.20:54321/3/Cloud.json
        postBody: 

        curlError:         FALSE
        curlErrorMessage:  
        httpStatusCode:    200
        httpStatusMessage: OK
        millis:            3

        {"__meta":{"schema_version":    1,"schema_name":"CloudV1","schema_type":"Iced"},"version":"0.1.17.1009","cloud_name":...[truncated]}
        -------------------------------------------------------------


Migrating to H2O 3.0

We’re excited about the upcoming release of the latest and greatest version of H2O, and we hope you are too! H2O 3.0 has lots of improvements, including:

and much more! Overall, H2O has been retooled for better accuracy and performance and to provide additional functionality. If you’re a current user of H2O, we strongly encourage you to upgrade to the latest version to take advantage of the latest features and capabilities.

Please be aware that H2O 3.0 will supersede all previous versions of H2O as the primary version as of May 15th, 2015. Support for previous versions will be offered for a limited time, but there will no longer be any significant updates to the previous version of H2O.

The following information and links will inform you about what’s new and different and help you prepare to upgrade to H2O 3.0.

Overall, H2O 3.0 is more stable, elegant, and simplified, with additional capabilities not available in previous versions of H2O.


Algorithm Changes

Most of the algorithms available in previous versions of H2O have been improved in terms of speed and accuracy. Currently available model types include:

Supervised

Unsupervised

There are a few algorithms that are still being refined to provide these same benefits and will be available in a future version of H2O.

Currently, the following algorithms and associated capabilities are still in development:

Check back for updates, as these algorithms will be re-introduced in an improved form in a future version of H2O.

Note: The SpeeDRF model has been removed, as it was originally intended as an optimization for small data only. This optimization will be added to the Distributed Random Forest model automatically for small data in a future version of H2O.


Parsing Changes

In H2O Classic, the parser reads all the data and tries to guess the column type. In H2O 3.0, the parser reads a subset and makes a type guess for each column. In Flow, you can view the preliminary parse results in the Edit Column Names and Types area. To change the column type, select an option from the drop-down menu to the right of the column. H2O 3.0 can also automatically identify mixed-type columns; in H2O Classic, if a column contained a mix of integers or real numbers and strings, the output was blank.


Web UI Changes

Our web UI has been completely overhauled with a much more intuitive interface that is similar to IPython Notebook. Each point-and-click action is translated immediately into an individual workflow script that can be saved for later interactive and offline use. As a result, you can now revise and rerun your workflows easily, and can even add comments and rich media.

For more information, refer to our Getting Started with Flow guide, which comprehensively documents how to use Flow. You can also view this brief video, which provides an overview of Flow in action.


API Users

H2O’s new Python API allows Pythonistas to use H2O in their favorite environment. Using the Python command line or an integrated development environment like IPython Notebook, H2O users can control clusters and manage massive datasets quickly.

H2O’s REST API is the basis for the web UI (Flow), as well as the R and Python APIs, and is versioned for stability. It is also easier to understand and use, with full metadata available dynamically from the server, allowing for easier integration by developers.


Java Users

Generated Java REST classes ease REST API use by external programs running in a Java Virtual Machine (JVM).

As in previous versions of H2O, users can export trained models as Java objects for easy integration into JVM applications. H2O is currently the only ML tool that provides this capability, making it the data science tool of choice for enterprise developers.


R Users

If you use H2O primarily in R, be aware that, as a result of the improvements to the R package for H2O, scripts created using previous versions (Nunes 2.8.6.2 or prior) will require minor revisions to work with H2O 3.0.

To assist our R users in upgrading to H2O 3.0, a “shim” tool has been developed. The shim reviews your script, identifies deprecated or revised parameters and arguments, and suggests replacements.

Note: As of Slater v.3.2.0.10, this shim will no longer be available.

There is also an R Porting Guide that provides a side-by-side comparison of the algorithms in the previous version of H2O with H2O 3.0. It outlines the new, revised, and deprecated parameters for each algorithm, as well as the changes to the output.


Porting R Scripts

This document outlines how to port R scripts written in previous versions of H2O (Nunes 2.8.6.2 or prior, also known as “H2O Classic”) for compatibility with the new H2O 3.0 API. When upgrading from H2O to H2O 3.0, most functions are the same. However, there are some differences that will need to be resolved when porting any scripts that were originally created using H2O to H2O 3.0.

The original R script for H2O is listed first, followed by the updated script for H2O 3.0.

Some of the parameters have been renamed for consistency. For each algorithm, a table that describes the differences is provided.

For additional assistance within R, enter a question mark before the command (for example, ?h2o.glm).

There is also a “shim” available that will review R scripts created with previous versions of H2O, identify deprecated or renamed parameters, and suggest replacements. For more information, refer to the repo here.

Changes from H2O 2.8 to H2O 3.0

h2o.exec

The h2o.exec command is no longer supported. Any workflows using h2o.exec must be revised to remove this command. If the H2O 3.0 workflow contains any parameters or commands from H2O Classic, errors will result and the workflow will fail.

The purpose of h2o.exec was to wrap expressions so that they could be evaluated in a single Exec2 call. For example, h2o.exec(fr[,1] + 2/fr[,3]) and fr[,1] + 2/fr[,3] produced the same results in H2O. However, the first example makes a single REST call and uses a single temp object, while the second makes several REST calls and uses several temp objects.

Due to the improved architecture in H2O 3.0, the need to use h2o.exec has been eliminated, as the expression can be processed by R as an “unwrapped” typical R expression.

Currently, the only known exception is when factor is used in conjunction with h2o.exec. For example, h2o.exec(fr$myIntCol <- factor(fr$myIntCol)) would become fr$myIntCol <- as.factor(fr$myIntCol)
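Putting the two examples above side by side:

    # H2O Classic
    h2o.exec(fr[, 1] + 2 / fr[, 3])
    h2o.exec(fr$myIntCol <- factor(fr$myIntCol))

    # H2O 3.0
    fr[, 1] + 2 / fr[, 3]
    fr$myIntCol <- as.factor(fr$myIntCol)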

Note also that an array is not inside a string:

An int array is [1, 2, 3], not "[1, 2, 3]".

A String array is ["f00", "b4r"], not "[\"f00\", \"b4r\"]".

Only string values are enclosed in double quotation marks (").

h2o.performance

To access any exclusively binomial output, use h2o.performance, optionally with the corresponding accessor. The accessor can only use the model metrics object created by h2o.performance. Each accessor is named for its corresponding field (for example, h2o.AUC, h2o.gini, h2o.F1). h2o.performance supports all current algorithms except for K-Means.

If you specify a data frame as a second parameter, H2O will use the specified data frame for scoring. If you do not specify a second parameter, the training metrics for the model metrics object are used.
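For example (using the accessor names listed above; the model and validation frame are illustrative):

    # Compute metrics on a validation frame, then read individual binomial metrics from the result.
    perf <- h2o.performance(my_binomial_model, valid_hex)
    h2o.AUC(perf)
    h2o.F1(perf)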

xval and validation slots

The xval slot has been removed, as nfolds is not currently supported.

The validation slot has been merged with the model slot.

Principal Components Regression (PCR)

Principal Components Regression (PCR) has also been deprecated. To obtain PCR values, create a Principal Components Analysis (PCA) model, then create a GLM model from the scored data from the PCA model.

Saving and Loading Models

Saving and loading a model from R is supported in version 3.0.0.18 and later. H2O 3.0 uses the same binary serialization method as previous versions of H2O, but saves the model and its dependencies into a directory, with each object as a separate file. The save_CV option available in previous versions of H2O has been deprecated, as h2o.saveAll and h2o.loadAll are not currently supported. The following commands are now supported:


GBM

N-fold cross-validation and grid search are currently supported in H2O 3.0.

Renamed GBM Parameters

The following parameters have been renamed, but retain the same functions:

H2O Classic Parameter Name H2O 3.0 Parameter Name
data training_frame
key model_id
n.trees ntrees
interaction.depth max_depth
n.minobsinnode min_rows
shrinkage learn_rate
n.bins nbins
validation validation_frame
balance.classes balance_classes
max.after.balance.size max_after_balance_size

Deprecated GBM Parameters

The following parameters have been removed:

New GBM Parameters

The following parameters have been added:

GBM Algorithm Comparison

H2O Classic H2O 3.0
h2o.gbm <- function( h2o.gbm <- function(
x, x,
y, y,
data, training_frame,
key = "", model_id,
  checkpoint
distribution = 'multinomial', distribution = c("AUTO", "gaussian", "bernoulli", "multinomial", "poisson", "gamma", "tweedie"),
  tweedie_power = 1.5,
n.trees = 10, ntrees = 50,
interaction.depth = 5, max_depth = 5,
n.minobsinnode = 10, min_rows = 10,
shrinkage = 0.1, learn_rate = 0.1,
n.bins = 20, nbins = 20,
  nbins_top_level,
  nbins_cats = 1024,
validation, validation_frame = NULL,
balance.classes = FALSE, balance_classes = FALSE,
max.after.balance.size = 5, max_after_balance_size = 1,
  seed,
  build_tree_one_node = FALSE,
  nfolds = 0,
  fold_column = NULL,
  fold_assignment = c("AUTO", "Random", "Modulo"),
  keep_cross_validation_predictions = FALSE,
  score_each_iteration = FALSE,
  stopping_rounds = 0,
  stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "AUC", "r2", "misclassification"),
  stopping_tolerance = 0.001,
  offset_column = NULL,
  weights_column = NULL,
group_split = TRUE,
importance = FALSE,
holdout.fraction = 0,
class.sampling.factors = NULL,
grid.parallelism = 1)

Output

The following table provides the component name in H2O Classic, the corresponding component name in H2O 3.0 (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O Classic H2O 3.0 Model Type
@model$priorDistribution   all
@model$params @allparameters all
@model$err @model$scoring_history all
@model$classification   all
@model$varimp @model$variable_importances all
@model$confusion @model$training_metrics@metrics$cm$table binomial and multinomial
@model$auc @model$training_metrics@metrics$AUC binomial
@model$gini @model$training_metrics@metrics$Gini binomial
@model$best_cutoff   binomial
@model$F1 @model$training_metrics@metrics$thresholds_and_metric_scores$f1 binomial
@model$F2 @model$training_metrics@metrics$thresholds_and_metric_scores$f2 binomial
@model$accuracy @model$training_metrics@metrics$thresholds_and_metric_scores$accuracy binomial
@model$error   binomial
@model$precision @model$training_metrics@metrics$thresholds_and_metric_scores$precision binomial
@model$recall @model$training_metrics@metrics$thresholds_and_metric_scores$recall binomial
@model$mcc @model$training_metrics@metrics$thresholds_and_metric_scores$absolute_MCC binomial
@model$max_per_class_err currently replaced by @model$training_metrics@metrics$thresholds_and_metric_scores$min_per_class_correct binomial

GLM

Renamed GLM Parameters

The following parameters have been renamed, but retain the same functions:

H2O Classic Parameter Name H2O 3.0 Parameter Name
data training_frame
key model_id
nlambda nlambdas
lambda.min.ratio lambda_min_ratio
iter.max max_iterations
epsilon beta_epsilon

Deprecated GLM Parameters

The following parameters have been removed:

New GLM Parameters

The following parameters have been added:

GLM Algorithm Comparison

H2O Classic H2O 3.0
h2o.glm <- function( h2o.glm(
x, x,
y, y,
data, training_frame,
key = "", model_id,
  validation_frame = NULL
iter.max = 100, max_iterations = 50,
epsilon = 1e-4 beta_epsilon = 0
strong_rules = TRUE,
return_all_lambda = FALSE,
intercept = TRUE, intercept = TRUE
non_negative = FALSE,
  solver = c("IRLSM", "L_BFGS"),
standardize = TRUE, standardize = TRUE,
family, family = c("gaussian", "binomial", "multinomial", "poisson", "gamma", "tweedie"),
link, link = c("family_default", "identity", "logit", "log", "inverse", "tweedie"),
tweedie.p = ifelse(family == "tweedie",1.5, NA_real_) tweedie_variance_power = NaN,
  tweedie_link_power = NaN,
alpha = 0.5, alpha = 0.5,
prior = NULL prior = 0.0,
lambda = 1e-5, lambda = 1e-05,
lambda_search = FALSE, lambda_search = FALSE,
nlambda = -1, nlambdas = -1,
lambda.min.ratio = -1, lambda_min_ratio = 1.0,
use_all_factor_levels = FALSE use_all_factor_levels = FALSE,
nfolds = 0, nfolds = 0,
  fold_column = NULL,
  fold_assignment = c("AUTO", "Random", "Modulo"),
  keep_cross_validation_predictions = FALSE,
beta_constraints = NULL, beta_constraints = NULL)
higher_accuracy = FALSE,
variable_importances = FALSE,
disable_line_search = FALSE,
offset = NULL, offset_column = NULL,
  weights_column = NULL,
  intercept = TRUE,
max_predictors = -1) max_active_predictors = -1)

Output

The following table provides the component name in H2O Classic, the corresponding component name in H2O 3.0 (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O Classic H2O 3.0 Model Type
@model$params @allparameters all
@model$coefficients @model$coefficients all
@model$normalized_coefficients @model$coefficients_table$norm_coefficients all
@model$rank @model$rank all
@model$iter @model$iter all
@model$lambda   all
@model$deviance @model$residual_deviance all
@model$null.deviance @model$null_deviance all
@model$df.residual @model$residual_degrees_of_freedom all
@model$df.null @model$null_degrees_of_freedom all
@model$aic @model$AIC all
@model$train.err   binomial
@model$prior   binomial
@model$thresholds @model$threshold binomial
@model$best_threshold   binomial
@model$auc @model$AUC binomial
@model$confusion   binomial

K-Means

Renamed K-Means Parameters

The following parameters have been renamed, but retain the same functions:

H2O Classic Parameter Name H2O 3.0 Parameter Name
data training_frame
key model_id
centers k
cols x
iter.max max_iterations
normalize standardize

Note: In H2O Classic, the normalize parameter was disabled by default. The standardize parameter is enabled by default in H2O 3.0 to provide more accurate results for datasets containing columns with large values.

New K-Means Parameters

The following parameters have been added:

K-Means Algorithm Comparison

H2O Classic H2O 3.0
h2o.kmeans <- function( h2o.kmeans(
data, training_frame,
cols = '', x,
centers, k,
key = "", model_id,
iter.max = 10, max_iterations = 1000,
normalize = FALSE, standardize = TRUE,
init = "none", init = c("Furthest","Random", "PlusPlus"),
seed = 0, seed,
  nfolds = 0,
  fold_column = NULL,
  fold_assignment = c("AUTO", "Random", "Modulo"),
  keep_cross_validation_predictions = FALSE)

Output

The following table provides the component name in H2O and the corresponding component name in H2O 3.0 (if supported).

H2O Classic H2O 3.0
@model$params @allparameters
@model$centers @model$centers
@model$tot.withinss @model$tot_withinss
@model$size @model$size
@model$iter @model$iterations
  @model$_scoring_history
  @model$_model_summary

Deep Learning

Note: If the results in the confusion matrix are incorrect, verify that score_training_samples is equal to 0. By default, only the first 10,000 rows are included.

Renamed Deep Learning Parameters

The following parameters have been renamed, but retain the same functions:

H2O Classic Parameter Name H2O 3.0 Parameter Name
data training_frame
key model_id
validation validation_frame
class.sampling.factors class_sampling_factors
override_with_best_model overwrite_with_best_model
dlmodel@model$valid_class_error @model$validation_metrics@metrics$MSE

Deprecated DL Parameters

The following parameters have been removed:

New DL Parameters

The following parameters have been added:

The following options for the loss parameter have been added:

DL Algorithm Comparison

H2O Classic H2O 3.0
h2o.deeplearning <- function(x, h2o.deeplearning (x,
y, y,
data, training_frame,
key = "", model_id = "",
override_with_best_model, overwrite_with_best_model = true,
classification = TRUE,
nfolds = 0, nfolds = 0
validation, validation_frame,
holdout_fraction = 0,
checkpoint = " " checkpoint,
autoencoder, autoencoder = false,
use_all_factor_levels, use_all_factor_levels = true
activation, activation = c("Rectifier", "Tanh", "TanhWithDropout", "RectifierWithDropout", "Maxout", "MaxoutWithDropout"),
hidden, hidden= c(200, 200),
epochs, epochs = 10.0,
train_samples_per_iteration, train_samples_per_iteration = -2,
seed, seed,
adaptive_rate, adaptive_rate = true,
rho, rho = 0.99,
epsilon, epsilon = 1e-8,
rate, rate = .005,
rate_annealing, rate_annealing = 1e-6,
rate_decay, rate_decay = 1.0,
momentum_start, momentum_start = 0,
momentum_ramp, momentum_ramp = 1e6,
momentum_stable, momentum_stable = 0,
nesterov_accelerated_gradient, nesterov_accelerated_gradient = true,
input_dropout_ratio, input_dropout_ratio = 0.0,
hidden_dropout_ratios, hidden_dropout_ratios,
l1, l1 = 0.0,
l2, l2 = 0.0,
max_w2, max_w2 = Inf,
initial_weight_distribution, initial_weight_distribution = c("UniformAdaptive","Uniform", "Normal"),
initial_weight_scale, initial_weight_scale = 1.0,
loss, loss = c("Automatic", "CrossEntropy", "Quadratic", "Absolute", "Huber"),
  distribution = c("AUTO", "gaussian", "bernoulli", "multinomial", "poisson", "gamma", "tweedie", "laplace", "huber"),
  tweedie_power = 1.5,
score_interval, score_interval = 5,
score_training_samples, score_training_samples = 10000l,
score_validation_samples, score_validation_samples = 0l,
score_duty_cycle, score_duty_cycle = 0.1,
classification_stop, classification_stop = 0,
regression_stop, regression_stop = 1e-6,
  stopping_rounds = 5,
  stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "AUC", "r2", "misclassification"),
  stopping_tolerance = 0,
quiet_mode, quiet_mode = false,
max_confusion_matrix_size, max_confusion_matrix_size,
max_hit_ratio_k, max_hit_ratio_k,
balance_classes, balance_classes = false,
class_sampling_factors, class_sampling_factors,
max_after_balance_size, max_after_balance_size,
score_validation_sampling, score_validation_sampling,
diagnostics, diagnostics = true,
variable_importances, variable_importances = false,
fast_mode, fast_mode = true,
ignore_const_cols, ignore_const_cols = true,
force_load_balance, force_load_balance = true,
replicate_training_data, replicate_training_data = true,
single_node_mode, single_node_mode = false,
shuffle_training_data, shuffle_training_data = false,
sparse, sparse = false,
col_major, col_major = false,
max_categorical_features, max_categorical_features,
reproducible) reproducible = FALSE,
average_activation average_activation = 0,
  sparsity_beta = 0,
  export_weights_and_biases = FALSE,
  offset_column = NULL,
  weights_column = NULL,
  nfolds = 0,
  fold_column = NULL,
  fold_assignment = c("AUTO", "Random", "Modulo"),
  keep_cross_validation_predictions = FALSE)

Output

The following table provides the component name in H2O, the corresponding component name in H2O 3.0 (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O Classic H2O 3.0 Model Type
@model$priorDistribution   all
@model$params @allparameters all
@model$train_class_error @model$training_metrics@metrics@$MSE all
@model$valid_class_error @model$validation_metrics@$MSE all
@model$varimp @model$_variable_importances all
@model$confusion @model$training_metrics@metrics$cm$table binomial and multinomial
@model$train_auc @model$train_AUC binomial
  @model$_validation_metrics all
  @model$_model_summary all
  @model$_scoring_history all

Distributed Random Forest

Changes to DRF in H2O 3.0

Distributed Random Forest (DRF) was represented as h2o.randomForest(type="BigData", ...) in H2O Classic. In H2O Classic, SpeeDRF (type="fast") was not as accurate, especially for complex data with categoricals, and did not address regression problems. DRF (type="BigData") was at least as accurate as SpeeDRF (type="fast") and was the only algorithm that scaled to big data (data too large to fit on a single node). In H2O 3.0, our plan is to improve the performance of DRF so that it is also optimal when the data fits on a single node, which will make SpeeDRF obsolete. Ultimately, the goal is to provide a single algorithm that provides the “best of both worlds” for all datasets and use cases. Please note that H2O does not currently support the ability to specify the number of trees when using h2o.predict for a DRF model.

Note: H2O 3.0 only supports DRF. SpeeDRF is no longer supported. The functionality of DRF in H2O 3.0 is similar to DRF functionality in H2O Classic.

Renamed DRF Parameters

The following parameters have been renamed, but retain the same functions:

H2O Classic Parameter Name H2O 3.0 Parameter Name
data training_frame
key model_id
validation validation_frame
sample.rate sample_rate
ntree ntrees
depth max_depth
balance.classes balance_classes
score.each.iteration score_each_iteration
class.sampling.factors class_sampling_factors
nodesize min_rows

Deprecated DRF Parameters

The following parameters have been removed:

New DRF Parameters

The following parameter has been added:

DRF Algorithm Comparison

H2O Classic H2O 3.0
h2o.randomForest <- function(x, h2o.randomForest <- function(
x, x,
y, y,
data, training_frame,
key="", model_id,
validation, validation_frame,
mtries = -1, mtries = -1,
sample.rate=2/3, sample_rate = 0.632,
  build_tree_one_node = FALSE,
ntree=50 ntrees=50,
depth=20, max_depth = 20,
  min_rows = 1,
nbins=20, nbins = 20,
  nbins_top_level,
  nbins_cats =1024,
  binomial_double_trees = FALSE,
balance.classes = FALSE, balance_classes = FALSE,
seed = -1, seed
nodesize = 1,
classification=TRUE,
importance=FALSE,
  weights_column = NULL,
nfolds=0, nfolds = 0,
  fold_column = NULL,
  fold_assignment = c("AUTO", "Random", "Modulo"),
  keep_cross_validation_predictions = FALSE,
  score_each_iteration = FALSE,
  stopping_rounds = 0,
  stopping_metric = c("AUTO", "deviance", "logloss", "MSE", "AUC", "r2", "misclassification"),
  stopping_tolerance = 0.001)
holdout.fraction = 0,
max.after.balance.size = 5, max_after_balance_size,
class.sampling.factors = NULL,  
doGrpSplit = TRUE,
verbose = FALSE,
oobee = TRUE,
stat.type = "ENTROPY",
type = "fast")

Output

The following table provides the component name in H2O, the corresponding component name in H2O 3.0 (if supported), and the model type (binomial, multinomial, or all). Many components are now included in h2o.performance; for more information, refer to h2o.performance.

H2O Classic H2O 3.0 Model Type
@model$priorDistribution   all
@model$params @allparameters all
@model$mse @model$scoring_history all
@model$forest @model$model_summary all
@model$classification   all
@model$varimp @model$variable_importances all
@model$confusion @model$training_metrics@metrics$cm$table binomial and multinomial
@model$auc @model$training_metrics@metrics$AUC binomial
@model$gini @model$training_metrics@metrics$Gini binomial
@model$best_cutoff   binomial
@model$F1 @model$training_metrics@metrics$thresholds_and_metric_scores$f1 binomial
@model$F2 @model$training_metrics@metrics$thresholds_and_metric_scores$f2 binomial
@model$accuracy @model$training_metrics@metrics$thresholds_and_metric_scores$accuracy binomial
@model$Error @model$Error binomial
@model$precision @model$training_metrics@metrics$thresholds_and_metric_scores$precision binomial
@model$recall @model$training_metrics@metrics$thresholds_and_metric_scores$recall binomial
@model$mcc @model$training_metrics@metrics$thresholds_and_metric_scores$absolute_MCC binomial
@model$max_per_class_err currently replaced by @model$training_metrics@metrics$thresholds_and_metric_scores$min_per_class_correct binomial

Github Users

All users who pull directly from the H2O classic repo on Github should be aware that this repo will be renamed. To retain access to the original H2O (2.8.6.2 and prior) repository:

The simple way

This is the easiest way to change your local repo and is recommended for most users.

  1. Enter git remote -v to view a list of your repositories.
  2. Copy the address of your H2O classic repo (refer to the text in brackets below - your address will vary depending on your connection method):

    H2O_User-MBP:h2o H2O_User$ git remote -v
    origin    https://{H2O_User@github.com}/h2oai/h2o.git (fetch)
    origin    https://{H2O_User@github.com}/h2oai/h2o.git (push)
    
  3. Enter git remote set-url origin {H2O_User@github.com}:h2oai/h2o-2.git, where {H2O_User@github.com} represents the address copied in the previous step.

The more complicated way

This method involves editing the Git configuration file directly and should only be attempted by users who are confident in their knowledge of Git.

  1. Enter vim .git/config.
  2. Look for the [remote "origin"] section:

    [remote "origin"]
         url = https://H2O_User@github.com/h2oai/h2o.git
         fetch = +refs/heads/*:refs/remotes/origin/*
    
  3. In the url = line, change h2o.git to h2o-2.git.
  4. Save the changes.

The latest version of H2O is stored in the h2o-3 repository. All previous links to this repo will still work, but if you would like to manually update your Github configuration, follow the instructions above, replacing h2o-2 with h2o-3.

POJO Quick Start

This document describes how to build a model and deploy its POJO for predictive scoring. Java developers should refer to the Javadoc for more information, including packages.

Note: POJOs are not supported for source files larger than 1G. For more information, refer to the FAQ below.

What is a POJO?

H2O allows you to convert the models you have built to a Plain Old Java Object (POJO), which can then be easily deployed within your Java app and scheduled to run on a specified dataset.

POJOs allow users to build a model using H2O and then deploy the model to score in real-time, using the POJO model or a REST API call to a scoring server.

  1. Start H2O in terminal window #1:

    $ java -jar h2o.jar

  2. Build a model using your web browser:

    1. Go to http://localhost:54321
    2. Click view example Flows near the right edge of the screen. Here is a screenshot of what to look for:
    3. Click GBM_Airlines_Classification.flow
    4. If a confirmation prompt appears asking you to “Load Notebook”, click it
    5. From the “Flow” menu choose the “Run all cells” option
    6. Scroll down and find the “Model” cell in the notebook. Click on the Download POJO button as shown in the following screenshot:

    Note: The instructions below assume that the POJO model was downloaded to the “Downloads” folder.

  3. Download model pieces in a new terminal window - H2O must still be running in terminal window #1:

     $ mkdir experiment
     $ cd experiment
     $ mv ~/Downloads/gbm_pojo_test.java .
     $ curl http://localhost:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
    
  4. Create your main program in terminal window #2 by creating a new file called main.java (vim main.java) with the following contents:

     import java.io.*;
     import hex.genmodel.easy.RowData;
     import hex.genmodel.easy.EasyPredictModelWrapper;
     import hex.genmodel.easy.prediction.*;
    
     public class main {
       private static String modelClassName = "gbm_pojo_test";
    
       public static void main(String[] args) throws Exception {
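         // Load the downloaded POJO class by name via reflection and wrap it in the easy-predict API.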
         hex.genmodel.GenModel rawModel;
         rawModel = (hex.genmodel.GenModel) Class.forName(modelClassName).newInstance();
         EasyPredictModelWrapper model = new EasyPredictModelWrapper(rawModel);
    
         RowData row = new RowData();
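         // Populate one row of input; the keys must match the column names used to train the model.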
         row.put("Year", "1987");
         row.put("Month", "10");
         row.put("DayofMonth", "14");
         row.put("DayOfWeek", "3");
         row.put("CRSDepTime", "730");
         row.put("UniqueCarrier", "PS");
         row.put("Origin", "SAN");
         row.put("Dest", "SFO");
    
         BinomialModelPrediction p = model.predictBinomial(row);
         System.out.println("Label (aka prediction) is flight departure delayed: " + p.label);
         System.out.print("Class probabilities: ");
         for (int i = 0; i < p.classProbabilities.length; i++) {
           if (i > 0) {
             System.out.print(",");
           }
           System.out.print(p.classProbabilities[i]);
         }
         System.out.println("");
       }
     }
    
  5. Compile and run in terminal window 2:

     $ javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m gbm_pojo_test.java main.java
     $ java -cp .:h2o-genmodel.jar main
    

    The following output displays:

     Label (aka prediction) is flight departure delayed: YES
     Class probabilities: 0.4790490513429604,0.5209509486570396
    

Extracting Models from H2O

Generated models can be extracted from H2O in the following ways:

Use Cases

The following use cases are demonstrated with code examples:

FAQ

Why did I receive the following exception when compiling my POJO?

Michals-MBP:b michal$ javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m drf_b9b9d3be_cf5a_464a_b518_90701549c12a.java
An exception has occurred in the compiler (1.7.0_60). Please file a bug at the Java Developer Connection (http://java.sun.com/webapps/bugreport)  after checking the Bug Parade for duplicates. Include your program and the following diagnostic in your report.  Thank you.
java.lang.IllegalArgumentException
    at java.nio.ByteBuffer.allocate(ByteBuffer.java:330)
    at com.sun.tools.javac.util.BaseFileManager$ByteBufferCache.get(BaseFileManager.java:308)
    at com.sun.tools.javac.util.BaseFileManager.makeByteBuffer(BaseFileManager.java:280)
    at com.sun.tools.javac.file.RegularFileObject.getCharContent(RegularFileObject.java:112)
    at com.sun.tools.javac.file.RegularFileObject.getCharContent(RegularFileObject.java:52)
    at com.sun.tools.javac.main.JavaCompiler.readSource(JavaCompiler.java:571)
    at com.sun.tools.javac.main.JavaCompiler.parse(JavaCompiler.java:632)
    at com.sun.tools.javac.main.JavaCompiler.parseFiles(JavaCompiler.java:909)
    at com.sun.tools.javac.main.JavaCompiler.compile(JavaCompiler.java:824)
    at com.sun.tools.javac.main.Main.compile(Main.java:439)
    at com.sun.tools.javac.main.Main.compile(Main.java:353)
    at com.sun.tools.javac.main.Main.compile(Main.java:342)
    at com.sun.tools.javac.main.Main.compile(Main.java:333)
    at com.sun.tools.javac.Main.compile(Main.java:76)
    at com.sun.tools.javac.Main.main(Main.java:61)

This error is generated when the source file is larger than 1G.

Grid Search API

The current implementation of the grid search REST API exposes the following endpoints:

Endpoints accept model-specific parameters (e.g., GBMParametersV3) and an additional parameter called hyper_parameters, which contains a JSON listing of the hyper parameters (e.g., {"ntrees":[1,5],"learn_rate":[0.1,0.01]}).

Each parameter exposed by the schema can indicate whether it is supported by grid search via the gridable=true attribute in the schema's @API annotation. However, the Java API does not restrict which parameters can be used in grid search.

With grid search, each model is built sequentially, allowing users to view each model as it is built.

Example

Invoke a new GBM model grid search by passing the following request to H2O:

Method: POST  , URI: /99/Grid/gbm, route: /99/Grid/gbm, parms:{hyper_parameters={"ntrees":[1,5],"learn_rate":[0.1,0.01]}, training_frame=filefd41fe7ac0b_csv_1.hex_2, grid_id=gbm_grid_search, response_column=Species, ignored_columns=[""]}

Grid Search in R

Grid search in R provides the following capabilities:

Example

ntrees_opts = c(1, 5)
learn_rate_opts = c(0.1, 0.01)
hyper_parameters = list(ntrees = ntrees_opts, learn_rate = learn_rate_opts)
grid <- h2o.grid("gbm", grid_id="gbm_grid_test", x=1:4, y=5, training_frame=iris.hex, hyper_params = hyper_parameters)
grid_models <- lapply(grid@model_ids, function(mid) {
  h2o.getModel(mid)
})

For more information, refer to the R grid search code.

Grid Search in Python

Example

from h2o.estimators.random_forest import H2ORandomForestEstimator
from h2o.grid.grid_search import H2OGridSearch

hyper_parameters = {'ntrees': [10, 50], 'max_depth': [20, 10]}
grid_search = H2OGridSearch(H2ORandomForestEstimator, hyper_params=hyper_parameters)
grid_search.train(x=["x1", "x2"], y="y", training_frame=train)
grid_search.show()

For more information, refer to the Python grid search code.

Grid Search Java API

There are two core entities: Grid and GridSearch. GridSearch is a job that builds a Grid object and is defined by the user's model factory and the hyperspace walk strategy. The model factory must be defined for each supported model type (DRF, GBM, DL, and K-means). The hyperspace walk strategy specifies how the user-defined space of hyper parameters is traversed. The space definition is not limited. For each point in hyperspace, model parameters of the specified type are produced.

Currently, the implementation supports a simple Cartesian grid search, and additional space traversal strategies are in development. The grid search triggers a new model builder job for each hyperspace point returned by the walk strategy. If a model builder job fails, it is ignored; however, it can still be tracked in the job list. Model builder jobs run serially, in sequential order. More advanced job scheduling schemes are under development.

The grid object contains the results of the grid search: a list of model keys produced by the grid search. The grid object publishes a simple API to get the models.

Launch the grid search by specifying:

The Java API can grid search any parameters defined in the model parameters class (e.g., GBMParameters). Parameters that are appropriate for gridding are marked with the gridable attribute in the @API annotation, but this is not enforced by the framework.

Additional parameters are available in the model builder to support creation of model parameters and configuration. This eliminates the requirement of the previous implementation, where each gridable value was represented as a double, and it allows users to specify different building strategies for model parameters. For example, the REST layer uses a builder that validates parameters against the model parameters schema, whereas the Java API uses a simple reflective builder. Additional reflection support is provided by PojoUtils (the setField and getFieldValue methods).

Example

HashMap<String, Object[]> hyperParms = new HashMap<>();
hyperParms.put("_ntrees", new Integer[]{1, 2});
hyperParms.put("_distribution", new Distribution.Family[]{Distribution.Family.multinomial});
hyperParms.put("_max_depth", new Integer[]{1, 2, 5});
hyperParms.put("_learn_rate", new Float[]{0.01f, 0.1f, 0.3f});

// Setup common model parameters
GBMModel.GBMParameters params = new GBMModel.GBMParameters();
params._train = fr._key;
params._response_column = "cylinders";
// Trigger new grid search job, block for results and get the resulting grid object
GridSearch gs = GridSearch.startGridSearch(params, hyperParms, GBM_MODEL_FACTORY);
Grid grid = (Grid) gs.get();

Exposing grid search end-point for new algorithm

In the following example, the PCA algorithm has been implemented and we would like to expose the algorithm via REST API. The following aspects are assumed:

To add support for PCA grid search:

  1. Add the PCA model build factory into the hex.grid.ModelFactories class:

       class ModelFactories {
         /* ... */
         public static ModelFactory<PCAModel.PCAParameters>
           PCA_MODEL_FACTORY =
           new ModelFactory<PCAModel.PCAParameters>() {
             @Override
             public String getModelName() {
               return "PCA";
             }
    
             @Override
             public ModelBuilder buildModel(PCAModel.PCAParameters params) {
               return new PCA(params);
             }
           };
       }
    
  2. Add the PCA REST end-point schema:

       public class PCAGridSearchV99 extends GridSearchSchema<PCAGridSearchHandler.PCAGrid,
         PCAGridSearchV99,
         PCAModel.PCAParameters,
         PCAV3.PCAParametersV3> {
    
       }
    
  3. Add the PCA REST end-point handler:

       public class PCAGridSearchHandler
         extends GridSearchHandler<PCAGridSearchHandler.PCAGrid,
         PCAGridSearchV99,
         PCAModel.PCAParameters,
         PCAV3.PCAParametersV3> {
    
         public PCAGridSearchV99 train(int version, PCAGridSearchV99 gridSearchSchema) {
           return super.do_train(version, gridSearchSchema);
         }
    
         @Override
         protected ModelFactory<PCAModel.PCAParameters> getModelFactory() {
           return ModelFactories.PCA_MODEL_FACTORY;
         }
    
         @Deprecated
         public static class PCAGrid extends Grid<PCAModel.PCAParameters> {
    
           public PCAGrid() {
             super(null, null, null, null);
           }
         }
       }
    
  4. Register the REST end-point in the register factory hex.api.Register:

       public class Register extends AbstractRegister {
           @Override
           public void register() {
               // ...
               H2O.registerPOST("/99/Grid/pca", PCAGridSearchHandler.class, "train", "Run grid search for PCA model.");
               // ...
            }
       }
    

Implementing a new grid search walk strategy

In progress...

Grid Testing

This feature is tested with the intention of stabilizing the semantics of the grid API. The current test infrastructure includes:

R Tests

JUnit Test

Caveats/In Progress

Documentation

Security

H2O Enterprise Edition contains security features intended for deployment inside a secure data center.

Please see the H2O Enterprise Edition web page for more information about the enterprise version of H2O.

Security model

Below is a discussion of what the security assumptions are, and what the H2O software does and does not do.

Terms

Term Definition
H2O cluster A collection of H2O nodes that work together. In the H2O Flow Web UI, the cluster status menu item shows the list of nodes in an H2O cluster.
H2O node One JVM instance running the H2O main class. One H2O node corresponds to one OS-level process. In the YARN case, one H2O node corresponds to one mapper instance and one YARN container.
H2O embedded web port Each H2O node contains an embedded web port (by default port 54321). This web port hosts H2O Flow as well as the H2O REST API. The user interacts directly with this web port.
H2O internal communication port Each H2O node also has an internal port (web port+1, so by default port 54322) for internal node-to-node communication. This is a proprietary binary protocol. An attacker using a tool like tcpdump or wireshark may be able to reverse engineer data captured on this communication path.

Assumptions (threat model)

  1. H2O lives in a secure data center.

  2. Denial of service is not a concern.

    • H2O is not designed to withstand a DOS attack.
  3. HTTP traffic between the user client and H2O cluster needs to be encrypted.

    • This is true for both interactive sessions (e.g., the H2O Flow Web UI) and programmatic sessions (e.g., an R program).
  4. Man-in-the-middle attacks are of low concern.

    • Certificate checking on the client side for R/python is not yet implemented.
  5. Internal binary H2O node-to-H2O node traffic does not need to be secured.

    • The customer is responsible for the H2O cluster’s perimeter security if this is a concern.
    • An example would be putting the nodes for an H2O cluster in a VLAN and opening up one port so user clients can reach the H2O cluster on the embedded web port.
  6. You trust the person that starts H2O to start it correctly.

    • Enabling H2O security requires specifying the correct security options.
  7. User client sessions do not need to expire. A session lives at most as long as the cluster lifetime. H2O clusters are started and stopped “frequently enough”.

    • All data is stored in-memory, so restarting the H2O cluster wipes all data from memory, and there is nothing to clean from disk.
  8. Once a user is authenticated for access to H2O, they have full access.

    • H2O supports authentication but not authorization or access control (ACLs).
  9. H2O clusters are meant to be accessed by only one user.

    • Each user starts their own H2O cluster.
    • H2O only allows access to the embedded web port to the person that started the cluster.

Data chain-of-custody in a Hadoop data center environment

Notes:

Through this sequence, it is shown that a user is only able to access the same data from H2O that they could already access from normal Hadoop jobs.

  1. Data lives in HDFS
  2. The files in HDFS have permissions
  3. An HDFS user has permissions (capabilities) to access certain files
  4. Kerberos (kinit) can be used to authenticate a user in a Hadoop environment
  5. A user’s Hadoop MapReduce job inherits the permissions (capabilities) of the user, as well as kinit metadata
  6. H2O is a Hadoop MapReduce job
  7. H2O can only access the files in HDFS that the user has permission to access
  8. (Enterprise only) Only the user that started the cluster is authenticated for access to the H2O cluster
  9. (Enterprise only) The authenticated user can access the same data in H2O that he could access via HDFS

What is being secured today

  1. Standard file permissions security is provided by the Operating System and by HDFS.

  2. The embedded web port in each node of H2O can be secured in two ways:

Method Description
HTTPS Encrypted socket communication between the user client and the embedded H2O web port.
Authentication An HTTP Basic Auth username and password from the user client.

Note: Embedded web port HTTPS and authentication may be used separately or together.

What is specifically not being secured today

File security in H2O

H2O is a normal user program. Nothing specifically needs to be done by the user to get file security for H2O. Operating System and HDFS permissions “just work”. File security is provided by both H2O Open Source and Enterprise Editions.

Standalone H2O

Since H2O is a regular Java program, the files H2O can access are restricted by the user’s Operating System permissions (capabilities).

H2O on Hadoop

Since H2O is a regular Hadoop MapReduce program, the files H2O can access are restricted by the standard HDFS permissions of the user that starts H2O.

Since H2O is a regular Hadoop MapReduce program, Kerberos (kinit) works seamlessly. (No code was added to H2O to support Kerberos.)

Sparkling Water on YARN

Similar to H2O on Hadoop, this configuration is H2O on Spark on YARN. The YARN job inherits the HDFS permissions of the user.

Embedded web port (by default port 54321) security

For the client side, connection options have been added. These are present in both the Open Source and Enterprise versions of H2O (to make it easy to upgrade to the Enterprise version with purely a server-side upgrade).

For the server side, startup options have been added to the H2O Enterprise Edition to facilitate security. These are detailed below.

HTTPS

HTTPS client side

Flow Web UI client

When HTTPS is enabled on the server side, the user must provide the https URI scheme to the browser. No http access will exist.

R client

The following code snippet demonstrates connecting to an H2O cluster with HTTPS:

h2o.init(ip = "a.b.c.d", port = 54321, https = TRUE, insecure = TRUE)

The underlying HTTPS implementation is provided by RCurl and by extension libcurl and OpenSSL.

Caution:

Certificate checking has not been implemented yet. The insecure flag tells the client to ignore certificate checking. This means your client is exposed to a man-in-the-middle attack. We assume for the time being that, in a secure corporate network, such attacks are of low concern. Currently, the insecure flag must be set to TRUE; requiring it explicitly means that when a future version of H2O implements certificate checking, you will know with confidence that it is in effect.

Python client

Not yet implemented. Please contact H2O for an update.

HTTPS server side

A Java Keystore must be provided on the server side to enable HTTPS. Keystores can be manipulated on the command line with the keytool command.

H2O Enterprise Edition ships with a (compromised) keystore file (h2o.jks) for convenience that you can use to get started. The JKS password for this keystore is “h2oh2o”.

The underlying HTTPS implementation is provided by Jetty 8 and the Java runtime. (Note: Jetty 8 was chosen to retain Java 6 compatibility.)

Standalone H2O EE

The following new options are available in H2O Enterprise Edition:

-jks <filename>
     Java keystore file

-jks_pass <password>
     (Default is 'h2oh2o')

Example:

java -jar h2o.jar -jks h2o.jks
H2O EE on Hadoop

The following new options are available in H2O Enterprise Edition:

-jks <filename>
     Java keystore file

-jks_pass <password>
     (Default is 'h2oh2o')

Example:

hadoop jar h2odriver.jar -n 3 -mapperXmx 10g -jks h2o.jks -output hdfsOutputDirectory
Sparkling Water EE

The following new Spark conf properties exist in H2O Enterprise Edition for Java Keystore configuration:

Spark conf property Description
spark.ext.h2o.jks Path to Java Keystore
spark.ext.h2o.jks.pass JKS password

Example:

$SPARK_HOME/bin/spark-submit --class water.SparklingWaterDriver --conf spark.ext.h2o.jks=/path/to/h2o.jks sparkling-water-assembly-0.2.17-SNAPSHOT-all.jar
Creating your own self-signed Java Keystore

Here is an example of how to create your own self-signed Java Keystore (mykeystore.jks) with a custom keystore password (mypass) and how to run standalone H2O using your Keystore:

# Be paranoid and delete any previously existing keystore.
rm -f mykeystore.jks

# Generate a new keystore.
keytool -genkey -keyalg RSA -keystore mykeystore.jks -storepass mypass -keysize 2048
What is your first and last name?
  [Unknown]:  
What is the name of your organizational unit?
  [Unknown]:  
What is the name of your organization?
  [Unknown]:  
What is the name of your City or Locality?
  [Unknown]:  
What is the name of your State or Province?
  [Unknown]:  
What is the two-letter country code for this unit?
  [Unknown]:  
Is CN=Unknown, OU=Unknown, O=Unknown, L=Unknown, ST=Unknown, C=Unknown correct?
  [no]:  yes

Enter key password for <mykey>
    (RETURN if same as keystore password):  

# Run H2O using the newly generated self-signed keystore.
java -jar h2o.jar -jks mykeystore.jks -jks_pass mypass

LDAP authentication

H2O client and server side configuration for LDAP is discussed below. Authentication is implemented using Basic Auth.

LDAP H2O-client side

Flow Web UI client

When authentication is enabled, the user will be presented with a username and password dialog box when attempting to reach Flow.

R client

The following code snippet demonstrates connecting to an H2O cluster with authentication:

h2o.init(ip = "a.b.c.d", port = 54321, username = "myusername", password = "mypassword")
Python client

Not yet implemented. Please contact H2O for an update.

LDAP H2O-server side

An ldap.conf configuration file must be provided by the user. As an example, this file works for H2O’s internal LDAP server. You will certainly need help from your IT security folks to adjust this configuration file for your environment.

Example ldap.conf:

ldaploginmodule {
    org.eclipse.jetty.plus.jaas.spi.LdapLoginModule required
    debug="true"
    useLdaps="false"
    contextFactory="com.sun.jndi.ldap.LdapCtxFactory"
    hostname="ldap.0xdata.loc"
    port="389"
    bindDn="cn=admin,dc=0xdata,dc=loc"
    bindPassword="0xdata"
    authenticationMethod="simple"
    forceBindingLogin="true"
    userBaseDn="ou=users,dc=0xdata,dc=loc"
    userRdnAttribute="uid"
    userIdAttribute="uid"
    userPasswordAttribute="userPassword"
    userObjectClass="inetOrgPerson"
    roleBaseDn="ou=groups,dc=0xdata,dc=loc"
    roleNameAttribute="cn"
    roleMemberAttribute="uniqueMember"
    roleObjectClass="groupOfUniqueNames";
};

See the Jetty 8 LdapLoginModule documentation for more information.

Standalone H2O EE

The following new options are available in H2O Enterprise Edition:

-ldap_login
      Use Jetty LdapLoginService

-login_conf <filename>
      LoginService configuration file

-user_name <username>
      Override name of user for which access is allowed

Example:

java -jar h2o.jar -ldap_login -login_conf ldap.conf

java -jar h2o.jar -ldap_login -login_conf ldap.conf -user_name myLDAPusername
H2O EE on Hadoop

The following new options are available in H2O Enterprise Edition:

-ldap_login
      Use Jetty LdapLoginService

-login_conf <filename>
      LoginService configuration file

-user_name <username>
      Override name of user for which access is allowed

Example:

hadoop jar h2odriver.jar -n 3 -mapperXmx 10g -ldap_login -login_conf ldap.conf -output hdfsOutputDirectory

hadoop jar h2odriver.jar -n 3 -mapperXmx 10g -ldap_login -login_conf ldap.conf -user_name myLDAPusername -output hdfsOutputDirectory
Sparkling Water EE

The following new Spark conf properties exist in H2O Enterprise Edition for LDAP login service configuration:

Spark conf property Description
spark.ext.h2o.ldap.login Use Jetty LdapLoginService
spark.ext.h2o.login.conf LoginService configuration file
spark.ext.h2o.user.name Override name of user for which access is allowed

Example:

$SPARK_HOME/bin/spark-submit --class water.SparklingWaterDriver --conf spark.ext.h2o.ldap.login=true --conf spark.ext.h2o.login.conf=/path/to/ldap.conf sparkling-water-assembly-0.2.17-SNAPSHOT-all.jar

$SPARK_HOME/bin/spark-submit --class water.SparklingWaterDriver --conf spark.ext.h2o.ldap.login=true --conf spark.ext.h2o.user.name=myLDAPusername --conf spark.ext.h2o.login.conf=/path/to/ldap.conf sparkling-water-assembly-0.2.17-SNAPSHOT-all.jar

Hash file authentication

H2O client and server side configuration for a hardcoded hash file is discussed below. Authentication is implemented using Basic Auth.

Hash file H2O-client side

Flow Web UI client

When authentication is enabled, the user will be presented with a username and password dialog box when attempting to reach Flow.

R client

The following code snippet demonstrates connecting to an H2O cluster with authentication:

h2o.init(ip = "a.b.c.d", port = 54321, username = "myusername", password = "mypassword")
Python client

Not yet implemented. Please contact H2O for an update.

Hash file H2O-server side

A realm.properties configuration file must be provided by the user.

Example realm.properties:

# See https://wiki.eclipse.org/Jetty/Howto/Secure_Passwords
#     java -cp h2o.jar org.eclipse.jetty.util.security.Password
username1: password1
username2: MD5:6cb75f652a9b52798eb6cf2201057c73

Generate secure passwords using the Jetty secure password generation tool:

java -cp h2o.jar org.eclipse.jetty.util.security.Password username password

See the Jetty 8 HashLoginService documentation and Jetty 8 Secure Password HOWTO for more information.

Standalone H2O EE

The following new options are available in H2O Enterprise Edition:

-hash_login
      Use Jetty HashLoginService

-login_conf <filename>
      LoginService configuration file

Example:

java -jar h2o.jar -hash_login -login_conf realm.properties
H2O EE on Hadoop

The following new options are available in H2O Enterprise Edition:

-hash_login
      Use Jetty HashLoginService

-login_conf <filename>
      LoginService configuration file

Example:

hadoop jar h2odriver.jar -n 3 -mapperXmx 10g -hash_login -login_conf realm.properties -output hdfsOutputDirectory
Sparkling Water EE

The following new Spark conf properties exist in H2O Enterprise Edition for hash login service configuration:

Spark conf property Description
spark.ext.h2o.hash.login Use Jetty HashLoginService
spark.ext.h2o.login.conf LoginService configuration file

Example:

$SPARK_HOME/bin/spark-submit --class water.SparklingWaterDriver --conf spark.ext.h2o.hash.login=true --conf spark.ext.h2o.login.conf=/path/to/realm.properties sparkling-water-assembly-0.2.17-SNAPSHOT-all.jar

FAQ

General Troubleshooting Tips


The following error message displayed when I tried to launch H2O - what should I do?

Exception in thread "main" java.lang.UnsupportedClassVersionError: water/H2OApp
: Unsupported major.minor version 51.0
        at java.lang.ClassLoader.defineClass1(Native Method)
        at java.lang.ClassLoader.defineClassCond(Unknown Source)
        at java.lang.ClassLoader.defineClass(Unknown Source)
        at java.security.SecureClassLoader.defineClass(Unknown Source)
        at java.net.URLClassLoader.defineClass(Unknown Source)
        at java.net.URLClassLoader.access$000(Unknown Source)
        at java.net.URLClassLoader$1.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
        at sun.misc.Launcher$AppClassLoader.loadClass(Unknown Source)
        at java.lang.ClassLoader.loadClass(Unknown Source)
Could not find the main class: water.H2OApp. Program will exit.

This error output indicates that your Java version is not supported. Upgrade to Java 7 (JVM) or later and H2O should launch successfully.


Algorithms

What does it mean if the r2 value in my model is negative?

The coefficient of determination (also known as r^2) can be negative if:

If your r2 value is negative after your model is complete, your model is likely incorrect. Make sure your data is suitable for the type of model, then try adding an intercept.


What’s the process for implementing new algorithms in H2O?

This blog post by Cliff walks you through building a new algorithm, using K-Means, Quantiles, and Grep as examples.

To learn more about performance characteristics when implementing new algorithms, refer to Cliff’s KV Store Guide.


How do I find the standard errors of the parameter estimates (p-values)?

P-values are currently not supported. They are on our road map and will be added, depending on the current customer demand/priorities. Generally, adding p-values involves significant engineering effort because p-values for regularized GLM are not straightforward and have been defined only recently (with no standard implementation available that we know of). P-values for a restricted set of GLM problems (no regularization, low number of predictors) are easier to do and may be added sooner, if there is a sufficient demand.

For now, we recommend using a non-zero l1 penalty (alpha > 0) and considering all non-zero coefficients in the model as significant. The recommended use case is running GLM with lambda search enabled and alpha > 0 and picking the best lambda value based on cross-validation or hold-out set validation.
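
A minimal sketch of that recommendation (frame, column indices, and family are placeholders):

glm_model <- h2o.glm(x = 1:10, y = 11, training_frame = train.hex, family = "binomial", alpha = 0.5, lambda_search = TRUE, nfolds = 5)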


How do I specify regression or classification for Distributed Random Forest in the web UI?

If the response column is numeric, H2O generates a regression model. If the response column is enum, the model uses classification. To specify the column type, select it from the drop-down column heading list in the Data Preview section during parsing.


What’s the largest number of classes that H2O supports for multinomial prediction?

For tree-based algorithms, the maximum number of classes (or levels) for a response column is 1000.


How do I obtain a tree diagram of my DRF model?

Output the SVG code for the edges and nodes. A simple tree visitor is available here and the Java code generator is available here.


Is Word2Vec available? I can see the Java and R sources, but calling the API generates an error.

Word2Vec, along with other natural language processing (NLP) algorithms, is currently in development for H2O.


What are the “best practices” for preparing data for a K-Means model?

There aren’t specific “best practices,” as it depends on your data and the column types. However, removing outliers and transforming any categorical columns to have the same weight as the numeric columns will help, especially if you’re standardizing your data.


What is your implementation of Deep Learning based on?

Our Deep Learning algorithm is based on the feedforward neural net. For more information, refer to our Data Science documentation or Wikipedia.


How is deviance computed for a Deep Learning regression model?

For a Deep Learning regression model, deviance is computed as follows:

If Loss = MeanSquare, then MSE == Deviance. For Absolute/Laplace or Huber loss, MSE != Deviance.


For my 0-tree GBM multinomial model, I got a different score depending on whether or not validation was enabled, even though my dataset was the same - why is that?

Different results may be generated because of the way H2O computes the initial MSE.


How does your Deep Learning Autoencoder work? Is it deep or shallow?

H2O’s DL autoencoder is based on the standard deep (multi-layer) neural net architecture, where the entire network is learned together, instead of being stacked layer-by-layer. The only difference is that no response is required in the input and that the output layer has as many neurons as the input layer. If you don’t achieve convergence, then try using the Tanh activation and fewer layers. We have some example test scripts here, and even some that show how stacked auto-encoders can be implemented in R.
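
A minimal autoencoder sketch along those lines (frame, column indices, and layer size are placeholders; no response column is needed for an autoencoder):

anomaly_model <- h2o.deeplearning(x = 1:784, training_frame = train.hex, autoencoder = TRUE, activation = "Tanh", hidden = c(50), epochs = 10)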


Are there any H2O examples using text for classification?

Currently, the following examples are available for Sparkling Water:

a) Use TF-IDF weighting scheme for classifying text messages https://github.com/h2oai/sparkling-water/blob/master/examples/scripts/mlconf_2015_hamSpam.script.scala

b) Use Word2Vec Skip-gram model + GBM for classifying job titles https://github.com/h2oai/sparkling-water/blob/master/examples/scripts/craigslistJobTitles.scala


Most machine learning tools cannot predict with a new categorical level that was not included in the training set. How does H2O make predictions in this scenario?

Here is an example of how the prediction process works in H2O:

  1. Train a model using data that has a categorical predictor column with levels B,C, and D (no other levels); these levels make up the “training set domain”: {B,C,D}
  2. During scoring, the test set has only rows with levels A,C, and E for that column; this is the “test set domain”: {A,C,E}
  3. For scoring, a combined “scoring domain” is created, which is the training domain appended with the extra test set domain entries: {B,C,D,A,E}
  4. Each model can handle these extra levels {A,E} separately during scoring.

The behavior for unseen categorical levels depends on the algorithm and how it handles missing levels (NA values):


Building H2O

Using ./gradlew build doesn’t generate a build successfully - is there anything I can do to troubleshoot?

Use ./gradlew clean before running ./gradlew build.


I tried using ./gradlew build after using git pull to update my local H2O repo, but now I can’t get H2O to build successfully - what should I do?

Try using ./gradlew build -x test - the build may be failing tests if data is not synced correctly.


Clusters

When trying to launch H2O, I received the following error message: ERROR: Too many retries starting cloud. What should I do?

If you are trying to start a multi-node cluster where the nodes use multiple network interfaces, by default H2O will resort to using the default host (127.0.0.1).

To specify an IP address, launch H2O using the following command:

java -jar h2o.jar -ip <IP_Address> -port <PortNumber>

If this does not resolve the issue, try the following additional troubleshooting tips:


What should I do if I tried to start a cluster but the nodes started independent clouds that are not connected?

Because the default cloud name is the user name of the node, if the nodes are on different operating systems (for example, one node is using Windows and the other uses OS X), the different user names on each machine will prevent the nodes from recognizing that they belong to the same cloud. To resolve this issue, use -name to configure the same name for all nodes.


One of the nodes in my cluster is unavailable — what do I do?

H2O does not support high availability (HA). If a node in the cluster is unavailable, bring the cluster down and create a new healthy cluster.


How do I add new nodes to an existing cluster?

New nodes can only be added if H2O has not started any jobs. Once H2O starts a task, it locks the cluster to prevent new nodes from joining. If H2O has started a job, you must create a new cluster to include additional nodes.


How do I check if all the nodes in the cluster are healthy and communicating?

In the Flow web UI, click the Admin menu and select Cluster Status.


How do I create a cluster behind a firewall?

H2O uses two ports:

You can start the cluster behind the firewall, but to reach it, you must make a tunnel to reach the REST_API port. To use the cluster, the REST_API port of at least one node must be reachable.


I launched H2O instances on my nodes - why won’t they form a cloud?

If you launch without specifying the IP address by adding argument -ip:

$ java -Xmx20g -jar h2o.jar -flatfile flatfile.txt -port 54321

and multiple local IP addresses are detected, H2O uses the default localhost (127.0.0.1) as shown below:

  10:26:32.266 main      WARN WATER: Multiple local IPs detected:
  +                                    /198.168.1.161  /198.168.58.102
  +                                  Attempting to determine correct address...
  10:26:32.284 main      WARN WATER: Failed to determine IP, falling back to localhost.
  10:26:32.325 main      INFO WATER: Internal communication uses port: 54322
  +                                  Listening for HTTP and REST traffic
  +                                  on http://127.0.0.1:54321/
  10:26:32.378 main      WARN WATER: Flatfile configuration does not include self:
  /127.0.0.1:54321 but contains [/192.168.1.161:54321, /192.168.1.162:54321]

To avoid using 127.0.0.1 on servers with multiple local IP addresses, run the command with the -ip argument to force H2O to launch at the specified IP:

$ java -Xmx20g -jar h2o.jar -flatfile flatfile.txt -ip 192.168.1.161 -port 54321


How does the timeline tool work?

The timeline is a debugging tool that provides information on the current communication between H2O nodes. It shows a snapshot of the most recent messages passed between the nodes. Each node retains its own history of messages sent to or received from other nodes.

H2O collects these messages from all the nodes and orders them by whether they were sent or received. Each node has an implicit internal order where sent messages must precede received messages on the other node.

The following information displays for each message:


Data

How should I format my SVMLight data before importing?

The data must be formatted as a sorted list of unique integers, the column indices must be >= 1, and the columns must be in ascending order.


What date and time formats does H2O support?

H2O is set to auto-detect two major date/time formats. Because many date time formats are ambiguous (e.g. 01/02/03), general date time detection is not used.

The first format is for dates formatted as yyyy-MM-dd. Year is a four-digit number, the month is a two-digit number ranging from 1 to 12, and the day is a two-digit value ranging from 1 to 31. This format can also be followed by a space and then a time (specified below).

The second date format is for dates formatted as dd-MMM-yy. Here the day must be one or two digits with a value ranging from 1 to 31. The month must be either a three-letter abbreviation or the full month name but is not case sensitive. The year must be either two or four digits. In agreement with POSIX standards, two-digit dates >= 69 are assumed to be in the 20th century (e.g. 1969) and the rest are part of the 21st century. This date format can be followed by either a space or colon character and then a time. The ‘-‘ between the values is optional.

Times are specified as HH:mm:ss. HH is a two-digit hour and must be a value between 0-23 (for 24-hour time) or 1-12 (for a twelve-hour clock). mm is a two-digit minute value and must be a value between 0-59. ss is a two-digit second value and must be a value between 0-59. This format can be followed with either milliseconds, nanoseconds, and/or the cycle (i.e. AM/PM). If milliseconds are included, the format is HH:mm:ss:SSS. If nanoseconds are included, the format is HH:mm:ss:SSSnnnnnn. H2O only stores fractions of a second up to the millisecond, so accuracy may be lost. Nanosecond parsing is only included for convenience. Finally, a valid time can end with a space character and then either “AM” or “PM”. For this format, the hours must range from 1 to 12. Within the time, the ‘:’ character can be replaced with a ‘.’ character.


How does H2O handle name collisions/conflicts in the dataset?

If there is a name conflict (for example, column 48 isn’t named, but C48 already exists), then the column name is concatenated to itself until a unique name is created. So for the previously cited example, H2O will try renaming the column to C48C48, then C48C48C48, and so on until an unused name is generated.


What types of data columns does H2O support?

Currently, H2O supports:


I am trying to parse a Gzip data file containing multiple files, but it does not parse as quickly as the uncompressed files. Why is this?

Parsing Gzip files is not done in parallel, so it is sequential and uses only one core. Other parallel parse compression schemes are on the roadmap.


General

How do I score using an exported JSON model?

Since JSON is just a representation format, it cannot be directly executed, so a JSON export can’t be used for scoring. However, you can score by:


How do I score using an exported POJO?

The generated POJO can be used independently of an H2O cluster. First use curl to retrieve the h2o-genmodel.jar file and the Java code for the model from the server. The following is an example; the IP address and model names will need to be changed.

mkdir tmpdir
cd tmpdir
curl http://127.0.0.1:54321/3/h2o-genmodel.jar > h2o-genmodel.jar
curl http://127.0.0.1:54321/3/Models.java/gbm_model > gbm_model.java

To score a simple .CSV file, download the PredictCSV.java file and compile it with the POJO. Make a subdirectory for the compilation (this is useful if you have multiple models to score on).

wget https://raw.githubusercontent.com/h2oai/h2o-3/master/h2o-r/tests/testdir_javapredict/PredictCSV.java
mkdir gbm_model_dir
javac -cp h2o-genmodel.jar -J-Xmx2g -J-XX:MaxPermSize=128m PredictCSV.java gbm_model.java -d gbm_model_dir

Specify the following:

You must match the table column names to the order specified in the POJO. The output file will be in a .hex format, which is a lossless text representation of floating point numbers. Both R and Java will be able to read the hex strings as numerics.

java -ea -cp h2o-genmodel.jar:gbm_model_dir -Xmx4g -XX:MaxPermSize=256m -XX:ReservedCodeCacheSize=256m PredictCSV --header --model gbm_model --input input.csv --output output.csv

How do I predict using multiple response variables?

Currently, H2O does not support multiple response variables. To predict different response variables, build multiple models.
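
For example, a minimal sketch (frame, column indices, and response names are placeholders) that builds one GBM model per response column:

responses <- c("response1", "response2")
models <- lapply(responses, function(resp) {
  h2o.gbm(x = 1:10, y = resp, training_frame = train.hex)
})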


How do I kill any running instances of H2O?

In Terminal, enter ps -efww | grep h2o, then kill any running PIDs. You can also find the running instance in Terminal and press Ctrl + C on your keyboard. To confirm no H2O sessions are still running, go to http://localhost:54321 and verify that the H2O web UI does not display.


Why is H2O not launching from the command line?

$ java -jar h2o.jar &
% Exception in thread "main" java.lang.ExceptionInInitializerError
at java.lang.Class.initializeClass(libgcj.so.10)
at water.Boot.getMD5(Boot.java:73)
at water.Boot.<init>(Boot.java:114)
at water.Boot.<clinit>(Boot.java:57)
at java.lang.Class.initializeClass(libgcj.so.10)
Caused by: java.lang.IllegalArgumentException
at java.util.regex.Pattern.compile(libgcj.so.10)
at water.util.Utils.<clinit>(Utils.java:1286)
at java.lang.Class.initializeClass(libgcj.so.10)
...4 more

The only prerequisite for running H2O is a compatible version of Java. We recommend Oracle’s Java 1.7.


Why did I receive the following error when I tried to launch H2O?

[root@sandbox h2o-dev-0.3.0.1188-hdp2.2]hadoop jar h2odriver.jar -nodes 2 -mapperXmx 1g -output hdfsOutputDirName
Determining driver host interface for mapper->driver callback...
   [Possible callback IP address: 10.0.2.15]
   [Possible callback IP address: 127.0.0.1]
Using mapper->driver callback IP address and port: 10.0.2.15:41188
(You can override these with -driverif and -driverport.)
Memory Settings:
   mapreduce.map.java.opts:     -Xms1g -Xmx1g -Dlog4j.defaultInitOverride=true
   Extra memory percent:        10
   mapreduce.map.memory.mb:     1126
15/05/08 02:33:40 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
15/05/08 02:33:41 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050
15/05/08 02:33:47 INFO mapreduce.JobSubmitter: number of splits:2
15/05/08 02:33:48 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1431052132967_0001
15/05/08 02:33:51 INFO impl.YarnClientImpl: Submitted application application_1431052132967_0001
15/05/08 02:33:51 INFO mapreduce.Job: The url to track the job: http://sandbox.hortonworks.com:8088/proxy/application_1431052132967_0001/
Job name 'H2O_3889' submitted
JobTracker job ID is 'job_1431052132967_0001'
For YARN users, logs command is 'yarn logs -applicationId application_1431052132967_0001'
Waiting for H2O cluster to come up...
H2O node 10.0.2.15:54321 requested flatfile
ERROR: Timed out waiting for H2O cluster to come up (120 seconds)
ERROR: (Try specifying the -timeout option to increase the waiting time limit)
15/05/08 02:35:59 INFO impl.TimelineClientImpl: Timeline service address: http://sandbox.hortonworks.com:8188/ws/v1/timeline/
15/05/08 02:35:59 INFO client.RMProxy: Connecting to ResourceManager at sandbox.hortonworks.com/10.0.2.15:8050

----- YARN cluster metrics -----
Number of YARN worker nodes: 1

----- Nodes -----
Node: http://sandbox.hortonworks.com:8042 Rack: /default-rack, RUNNING, 1 containers used, 0.2 / 2.2 GB used, 1 / 8 vcores used

----- Queues -----
Queue name:            default
   Queue state:       RUNNING
   Current capacity:  0.11
   Capacity:          1.00
   Maximum capacity:  1.00
   Application count: 1
   ----- Applications in this queue -----
   Application ID:                  application_1431052132967_0001 (H2O_3889)
       Started:                     root (Fri May 08 02:33:50 UTC 2015)
       Application state:           FINISHED
       Tracking URL:                http://sandbox.hortonworks.com:8088/proxy/application_1431052132967_0001/jobhistory/job/job_1431052132967_0001
       Queue name:                  default
       Used/Reserved containers:    1 / 0
       Needed/Used/Reserved memory: 0.2 GB / 0.2 GB / 0.0 GB
       Needed/Used/Reserved vcores: 1 / 1 / 0

Queue 'default' approximate utilization: 0.2 / 2.2 GB used, 1 / 8 vcores used

----------------------------------------------------------------------

ERROR:   Job memory request (2.2 GB) exceeds available YARN cluster memory (2.2 GB)
WARNING: Job memory request (2.2 GB) exceeds queue available memory capacity (2.0 GB)
ERROR:   Only 1 out of the requested 2 worker containers were started due to YARN cluster resource limitations

----------------------------------------------------------------------
Attempting to clean up hadoop job...
15/05/08 02:35:59 INFO impl.YarnClientImpl: Killed application application_1431052132967_0001
Killed.
[root@sandbox h2o-dev-0.3.0.1188-hdp2.2]#

The H2O launch failed because more memory was requested than was available. Make sure you are not trying to specify more memory in the launch parameters than you have available.


How does the architecture of H2O work?

This PDF includes diagrams and slides depicting how H2O works in big data environments.


How does H2O work with Excel?

For more information on how H2O works with Excel, refer to this page.


I received the following error message when launching H2O - how do I resolve the error?

Invalid flow_dir illegal character at index 12...

This error message means that there is a space (or other unsupported character) in your H2O directory. To resolve this error:


How does importFiles() work in H2O?

importFiles() gets the basic information for the file and then returns a key representing that file. This key is used during parsing to read in the file and to save space so that the file isn’t loaded every time; instead, it is loaded into H2O then referenced using the key. For files hosted online, H2O verifies the destination is valid, creates a vec that loads the file when necessary, and returns a key.


Does H2O support GPUs?

Currently, we do not support this capability. If you are interested in contributing your efforts to support this feature to our open-source code database, please contact us at h2ostream@googlegroups.com.


How can I continue working on a model in H2O after restarting?

There are a number of ways you can save your model in H2O:
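
For example, a minimal sketch using the Python API (the path is a placeholder, and model stands for a model you have already trained in this session); h2o.save_model writes the model to disk and h2o.load_model reads it back after a restart:

import h2o
h2o.init()

# save the model to disk (a local path or an hdfs:// path); returns the saved location
saved_path = h2o.save_model(model, path="/tmp/h2o_models", force=True)

# after restarting H2O, load the model back from the saved location
model = h2o.load_model(saved_path)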


How can I find out more about H2O’s real-time, nano-fast scoring engine?

H2O’s scoring engine uses a Plain Old Java Object (POJO). The POJO code runs quickly but is single-threaded. It is intended for embedding into lightweight real-time environments.

All the work is done by the call to the appropriate predict method. There is no involvement from H2O in this case.

To compare multiple models simultaneously, use the POJO to call the models using multiple threads. For more information on using POJOs, refer to the POJO Quick Start Guide and the POJO Java Documentation.

In-H2O scoring is triggered on an existing H2O cluster, typically using a REST API call. H2O evaluates the predictions in a parallel and distributed fashion for this case. The predictions are stored into a new Frame and can be written out using h2o.exportFile(), for example.


I am using an older version of H2O (2.8 or prior) - where can I find documentation for this version?

If you are using H2O 2.8 or prior, we strongly recommend upgrading to the latest version of H2O if possible.

If you do not wish to upgrade to the latest version, documentation for H2O Classic is available here.


Hadoop

Why did I get an error in R when I tried to save my model to my home directory in Hadoop?

To save the model in HDFS, prepend the save directory with hdfs://:

# build model (placeholder arguments; substitute your own predictors, response, and training frame)
model <- h2o.glm(x = predictors, y = response, training_frame = train)

# save the model to HDFS
hdfs_name_node <- "mr-0x6"
hdfs_tmp_dir <- "/tmp/runit"
model_path <- sprintf("hdfs://%s%s", hdfs_name_node, hdfs_tmp_dir)
h2o.saveModel(model, dir = model_path, name = "mymodel")

How do I specify which nodes should run H2O in a Hadoop cluster?

After creating and applying the desired node labels and associating them with specific queues as described in the Hadoop documentation, launch H2O using the following command:

hadoop jar h2odriver.jar -Dmapreduce.job.queuename=<my-h2o-queue> -nodes <num-nodes> -mapperXmx 6g -output hdfsOutputDirName


How do I import data from HDFS in R and in Flow?

To import from HDFS in R:

h2o.importHDFS(path, conn = h2o.getConnection(), pattern = "",
destination_frame = "", parse = TRUE, header = NA, sep = "",
col.names = NULL, na.strings = NULL)

Here is another example:

pathToAirlines <- "hdfs://mr-0xd6.0xdata.loc/datasets/airlines_all.csv"
airlines.hex <- h2o.importFile(path = pathToAirlines, destination_frame = "airlines.hex")

In Flow, the easiest way is to let the auto-suggestion feature in the Search: field complete the path for you. Just start typing the path to the file, starting with the top-level directory, and H2O provides a list of matching files.

Flow - Import Auto-Suggest

Click the file to add it to the Search: field.


Why do I receive the following error when I try to save my notebook in Flow?

Error saving notebook: Error calling POST /3/NodePersistentStorage/notebook/Test%201 with opts

When you are running H2O on Hadoop, H2O tries to determine the home HDFS directory so it can use that as the download location. If the default home HDFS directory is not found, manually set the download location from the command line using the -flow_dir parameter (for example, hadoop jar h2odriver.jar <...> -flow_dir hdfs:///user/yourname/yourflowdir). You can view the default download directory in the logs by clicking Admin > View logs… and looking for the line that begins Flow dir:.


Java

How do I use H2O with Java?

There are two ways to use H2O with Java. The simplest way is to call the REST API from your Java program against a remote cluster; this approach should meet the needs of most users.

You can access the REST API documentation within Flow, or on our documentation site.

Flow, Python, and R all rely on the REST API to run H2O. For example, each action in Flow translates into one or more REST API calls. The script fragments in the cells in Flow are essentially the payloads for the REST API calls. Most R and Python API calls translate into a single REST API call.

To see how the REST API is used with H2O:
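
As one minimal illustration (a sketch in Python, assuming an H2O instance is already running on localhost:54321), the GET /3/Cloud endpoint listed in the REST API Reference below returns the cluster status as JSON:

import json
from urllib.request import urlopen

# GET /3/Cloud reports the status of the nodes in the H2O cloud
with urlopen("http://localhost:54321/3/Cloud") as resp:
    cloud = json.loads(resp.read().decode())

print(cloud["cloud_name"], cloud["cloud_size"], cloud["cloud_healthy"])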

The second way to use H2O with Java is to embed H2O within your Java application, similar to Sparkling Water.


How do I communicate with a remote cluster using the REST API?

To create a set of bare POJOs for the REST API payloads that can be used by JVM REST API clients:

  1. Clone the sources from GitHub.
  2. Start an H2O instance.
  3. Enter % cd py.
  4. Enter % python generate_java_binding.py.

This script connects to the server, gets all the metadata for the REST API schemas, and writes the Java POJOs to {sourcehome}/build/bindings/Java.


I keep getting a message that I need to install Java. I have the latest version of Java installed, but I am still getting this message. What should I do?

This error message displays if the JAVA_HOME environment variable is not set correctly. The JAVA_HOME variable likely points to Apple Java version 6 instead of Oracle Java version 8.

If you are running OS X 10.7 or earlier, enter the following in Terminal: export JAVA_HOME=/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home

If you are running OS X 10.8 or later, modify the launchd.plist by entering the following in Terminal:

cat << EOF | sudo tee /Library/LaunchDaemons/setenv.JAVA_HOME.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
  <key>Label</key>
  <string>setenv.JAVA_HOME</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>JAVA_HOME</string>
    <string>/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>
EOF

Python

How do I specify a value as an enum in Python? Is there a Python equivalent of as.factor() in R?

Use .asfactor() to specify a value as an enum.
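
For example (a minimal sketch, assuming an H2OFrame named df with a column named CAPSULE):

# convert the column to a categorical (enum) column in place
df["CAPSULE"] = df["CAPSULE"].asfactor()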


I received the following error when I tried to install H2O using the Python instructions on the downloads page - what should I do to resolve it?

Downloading/unpacking http://h2o-release.s3.amazonaws.com/h2o/rel-shannon/12/Python/h2o-3.0.0.12-py2.py3-none-any.whl 
  Downloading h2o-3.0.0.12-py2.py3-none-any.whl (43.1Mb): 43.1Mb downloaded 
  Running setup.py egg_info for package from http://h2o-release.s3.amazonaws.com/h2o/rel-shannon/12/Python/h2o-3.0.0.12-py2.py3-none-any.whl 
    Traceback (most recent call last): 
      File "<string>", line 14, in <module> 
    IOError: [Errno 2] No such file or directory: '/tmp/pip-nTu3HK-build/setup.py' 
    Complete output from command python setup.py egg_info: 
    Traceback (most recent call last): 

  File "<string>", line 14, in <module> 

IOError: [Errno 2] No such file or directory: '/tmp/pip-nTu3HK-build/setup.py' 

---------------------------------------- 
Command python setup.py egg_info failed with error code 1 in /tmp/pip-nTu3HK-build

With Python, there is no automatic update of installed packages, so you must upgrade manually. Additionally, the package distribution method recently changed from distutils to wheel. If you are having trouble installing the H2O package, particularly if error messages related to bdist_wheel or eggs display, try the following procedure first.

# Get the latest setuptools.
# See https://pip.pypa.io/en/latest/installing.html
wget https://bootstrap.pypa.io/ez_setup.py -O - | sudo python

# Platform-dependent ways of installing pip are listed at
# https://pip.pypa.io/en/latest/installing.html
# The command above should work on most Linux platforms.

# On Ubuntu (skip this if you already have some version of pip installed):
sudo apt-get install python-pip

# The package manager does not install the latest version, so upgrade pip.
# (easy_install is no longer used, so there is no need to check it.)
pip install pip --upgrade

# pip sometimes stops one release short of the final version, so another
# upgrade may be needed; the forced reinstall below also covers that case.

# After upgrading pip, the path may change from /usr/bin to /usr/local/bin.
# Start a new shell to make sure any path changes take effect.

bash

# Reinstalling double-checks that the install completed cleanly; some packages
# report success despite errors during installation. Check the output for
# errors or stack traces.

pip install pip --upgrade --force-reinstall

# distribute should now be at the most recent version. Just in case:
# (do not use --force-reinstall here; it causes an issue)

pip install distribute --upgrade


# Now check the versions.
pip list | egrep '(distribute|pip|setuptools)'
distribute (0.7.3)
pip (7.0.3)
setuptools (17.0)


# Re-install wheel.
pip install wheel --upgrade --force-reinstall

After completing this procedure, go to Python and use h2o.init() to start H2O in Python.

Note:

If you use gradlew to build the jar yourself, you have to start the jar yourself before you do h2o.init().

If you download the jar and the H2O package, h2o.init() will work like R and you don't have to start the jar yourself.


How should I specify the datatype during import in Python?

Refer to the following example:

#Let's say you want to change the second column "CAPSULE" of prostate.csv
#to categorical. You have 3 options.

#Option 1. Use a dictionary of column names to types. 
fr = h2o.import_file("smalldata/logreg/prostate.csv", col_types = {"CAPSULE":"Enum"})
fr.describe()

#Option 2. Use a list of column types.
c_types = [None]*9
c_types[1] = "Enum"
fr = h2o.import_file("smalldata/logreg/prostate.csv", col_types = c_types)
fr.describe()

#Option 3. Use parse_setup().
fraw = h2o.import_file("smalldata/logreg/prostate.csv", parse = False)
fsetup = h2o.parse_setup(fraw) 
fsetup["column_types"][1] = '"Enum"'
fr = h2o.parse_raw(fsetup) 
fr.describe()

How do I view a list of variable importances in Python?

Use model.varimp(return_list=True) as shown in the following example:

model = h2o.gbm(y = "IsDepDelayed", x = ["Month"], training_frame = df)
vi = model.varimp(return_list=True)
Out[26]:
[(u'Month', 69.27436828613281, 1.0, 1.0)]

R

Which versions of R are compatible with H2O?

Currently, the only version of R that is known to not work well with H2O is R version 3.1.0 (codename “Spring Dance”). If you are using this version, we recommend upgrading the R version before using H2O.


How can I install the H2O R package if I am having permissions problems?

This issue typically occurs for Linux users when the R software was installed by a root user. For more information, refer to the following link.

To specify the installation location for the R packages, create a file that contains the R_LIBS_USER environment variable:

echo R_LIBS_USER=\"~/.Rlibrary\" > ~/.Renviron

Confirm the file was created successfully using cat:

$ cat ~/.Renviron

You should see the following output:

R_LIBS_USER="~/.Rlibrary"

Create a new directory for the environment variable:

$ mkdir ~/.Rlibrary

Start R and enter the following:

.libPaths()

Look for the following output to confirm the changes:

[1] "<Your home directory>/.Rlibrary"                                         
[2] "/Library/Frameworks/R.framework/Versions/3.1/Resources/library"

I received the following error message after launching H2O in RStudio and using h2o.init - what should I do to resolve this error?

Error in h2o.init() : 
Version mismatch! H2O is running version 3.2.0.9 but R package is version 3.2.0.3

This error is due to a version mismatch between the H2O R package and the running H2O instance. Make sure you are using the latest version of both by downloading and installing H2O from the downloads page, and make sure you have removed any previous H2O R package versions by running:

if ("package:h2o" %in% search()) { detach("package:h2o", unload=TRUE) }
if ("h2o" %in% rownames(installed.packages())) { remove.packages("h2o") }

Make sure to install the dependencies for the H2O R package as well:

if (! ("methods" %in% rownames(installed.packages()))) { install.packages("methods") }
if (! ("statmod" %in% rownames(installed.packages()))) { install.packages("statmod") }
if (! ("stats" %in% rownames(installed.packages()))) { install.packages("stats") }
if (! ("graphics" %in% rownames(installed.packages()))) { install.packages("graphics") }
if (! ("RCurl" %in% rownames(installed.packages()))) { install.packages("RCurl") }
if (! ("jsonlite" %in% rownames(installed.packages()))) { install.packages("jsonlite") }
if (! ("tools" %in% rownames(installed.packages()))) { install.packages("tools") }
if (! ("utils" %in% rownames(installed.packages()))) { install.packages("utils") }

Finally, install the latest version of the H2O package for R:

install.packages("h2o", type="source", repos=(c("http://h2o-release.s3.amazonaws.com/h2o/master/3/R")))
library(h2o)
localH2O = h2o.init(nthreads=-1)

I received the following error message after trying to run some code - what should I do?

> fit <- h2o.deeplearning(x=2:4, y=1, training_frame=train_hex)
  |=========================================================================================================| 100%
Error in model$training_metrics$MSE :
  $ operator not defined for this S4 class
In addition: Warning message:
Not all shim outputs are fully supported, please see ?h2o.shim for more information

Remove the h2o.shim(enable=TRUE) line and try running the code again. Note that h2o.shim is only a way to notify users of previous versions of H2O about changes to the H2O R package; it does not revise your code, but provides suggested replacements for deprecated commands and parameters.


How do I extract the model weights from a model I've created using H2O in R? I've enabled extract_model_weights_and_biases, but the output refers to a file I can't open in R.

For an example of how to extract weights and biases from a model, refer to the following repo location on GitHub.


I’m using CentOS and I want to run H2O in R - are there any dependencies I need to install?

Yes, make sure to install libcurl, which allows H2O to communicate with R. We also recommend disabling SELinux and any firewalls, at least initially, until you have confirmed that H2O can initialize.


How do I change variable/header names on an H2O frame in R?

There are two ways to change header names. To specify the headers during parsing, import the headers in R and then specify the header as the column name when the actual data frame is imported:

header <- h2o.importFile(path = pathToHeader)
data   <- h2o.importFile(path = pathToData, col.names = header)
data

You can also use the names() function:

header <- c("user", "specified", "column", "names")
data   <- h2o.importFile(path = pathToData)
names(data) <- header

To replace specific column names, you can also use a sub/gsub in R:

header <- c("user", "specified", "column", "names")
## I want to replace "user" column with "computer"
data   <- h2o.importFile(path = pathToData)
names(data) <- sub(pattern = "user", replacement = "computer", x = header)

My R terminal crashed - how can I re-access my H2O frame?

Launch H2O and use your web browser to access the web UI, Flow, at localhost:54321. Click the Data menu, then click List All Frames. Copy the frame ID, then run h2o.ls() in R to list all the frames, or use the frame ID in the following code (replacing YOUR_FRAME_ID with the frame ID):

library(h2o)
localH2O = h2o.init(ip="sri.h2o.ai", port=54321, startH2O = F, strict_version_check=T)
data_frame <- h2o.getFrame(frame_id = "YOUR_FRAME_ID",conn = localH2O)

How do I remove rows containing NAs in an H2OFrame?

To remove NAs from rows:

  a   b    c    d    e
1 0   NA   NA   NA   NA
2 0   2    2    2    2
3 0   NA   NA   NA   NA
4 0   NA   NA   1    2
5 0   NA   NA   NA   NA
6 0   1    2    3    2

Removing rows 1, 3, 4, 5 to get:

  a   b    c    d    e
2 0   2    2    2    2
6 0   1    2    3    2

Use na.omit(myFrame), where myFrame represents the name of the frame you are editing.


I installed H2O in R using OS X and updated all the dependencies, but the following error message displayed: Error in .h2o.doSafeREST(conn = conn, h2oRestApiVersion = h2oRestApiVersion, Unexpected CURL error: Empty reply from server - what should I do?

This error message displays if the JAVA_HOME environment variable is not set correctly. The JAVA_HOME variable likely points to Apple Java version 6 instead of Oracle Java version 8.

If you are running OS X 10.7 or earlier, enter the following in Terminal: export JAVA_HOME=/Library/Internet\ Plug-Ins/JavaAppletPlugin.plugin/Contents/Home

If you are running OS X 10.8 or later, modify the launchd.plist by entering the following in Terminal:

cat << EOF | sudo tee /Library/LaunchDaemons/setenv.JAVA_HOME.plist
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
  <plist version="1.0">
  <dict>
  <key>Label</key>
  <string>setenv.JAVA_HOME</string>
  <key>ProgramArguments</key>
  <array>
    <string>/bin/launchctl</string>
    <string>setenv</string>
    <string>JAVA_HOME</string>
    <string>/Library/Internet Plug-Ins/JavaAppletPlugin.plugin/Contents/Home</string>
  </array>
  <key>RunAtLoad</key>
  <true/>
  <key>ServiceIPC</key>
  <false/>
</dict>
</plist>
EOF


How does the col.names argument work in group_by?

You need to add the col.names inside the gb.control list. Refer to the following example:

newframe <- h2o.group_by(dd, by="footwear_category", nrow("email_event_click_ct"), sum("email_event_click_ct"), mean("email_event_click_ct"),
    sd("email_event_click_ct"), gb.control = list( col.names=c("count", "total_email_event_click_ct", "avg_email_event_click_ct", "std_email_event_click_ct") ) )
newframe$avg_email_event_click_ct2 = newframe$total_email_event_click_ct / newframe$count

How are the results of h2o.predict displayed?

The order of the rows in the results for h2o.predict is the same as the order in which the data was loaded, even if some rows fail (for example, due to missing values or unseen factor levels). To bind a per-row identifier, use cbind.


How do I view all the variable importances for a model?

By default, H2O returns the top five and lowest five variable importances. To view all the variable importances, use the following:

model <- h2o.getModel(model_id = "my_H2O_modelID", conn = localH2O)

varimp <- as.data.frame(h2o.varimp(model))

How do I add random noise to a column in an H2O frame?

To add random noise to a column in an H2O frame, refer to the following example:

h2o.init()

fr <- as.h2o(iris)

  |======================================================================| 100%

random_column <- h2o.runif(fr)

new_fr <- h2o.cbind(fr,random_column)

new_fr

Sparkling Water

How do I filter an H2OFrame using Sparkling Water?

Filtering columns is easy: just remove the unnecessary columns or create a new H2OFrame from the columns you want to include (Frame(String[] names, Vec[] vec)), then make the H2OFrame wrapper around it (new H2OFrame(frame)).

Filtering rows is a little bit harder. There are two ways:


How do I inspect H2O using Flow while a droplet is running?

If your droplet execution time is very short, add a simple sleep statement to your code:

Thread.sleep(...)


How do I change the memory size of the executors in a droplet?

There are two ways to do this:


I received the following error while running Sparkling Water using multiple nodes, but not when using a single node - what should I do?

onExCompletion for water.parser.ParseDataset$MultiFileParseTask@31cd4150
water.DException$DistributedException: from /10.23.36.177:54321; by class water.parser.ParseDataset$MultiFileParseTask; class water.DException$DistributedException: from /10.23.36.177:54325; by class water.parser.ParseDataset$MultiFileParseTask; class water.DException$DistributedException: from /10.23.36.178:54325; by class water.parser.ParseDataset$MultiFileParseTask$DistributedParse; class java.lang.NullPointerException: null
    at water.persist.PersistManager.load(PersistManager.java:141)
    at water.Value.loadPersist(Value.java:226)
    at water.Value.memOrLoad(Value.java:123)
    at water.Value.get(Value.java:137)
    at water.fvec.Vec.chunkForChunkIdx(Vec.java:794)
    at water.fvec.ByteVec.chunkForChunkIdx(ByteVec.java:18)
    at water.fvec.ByteVec.chunkForChunkIdx(ByteVec.java:14)
    at water.MRTask.compute2(MRTask.java:426)
    at water.MRTask.compute2(MRTask.java:398)

This error output displays if the input file is not present on all nodes. Because of the way that Sparkling Water distributes data, the input file is required on all nodes (including remote), not just the primary node. Make sure there is a copy of the input file on all the nodes, then try again.


Are there any drawbacks to using Sparkling Water compared to standalone H2O?

The version of H2O embedded in Sparkling Water is the same as the standalone version.


How do I use Sparkling Water from the Spark shell?

There are two methods:

The software distribution provides example scripts in the examples/scripts directory:

bin/sparkling-shell -i examples/scripts/chicagoCrimeSmallShell.script.scala

For either method, initialize H2O as shown in the following example:

import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()

After successfully launching H2O, the following output displays:

Sparkling Water Context:
 * number of executors: 3
 * list of used executors:
  (executorId, host, port)
  ------------------------
  (1,Michals-MBP.0xdata.loc,54325)
  (0,Michals-MBP.0xdata.loc,54321)
  (2,Michals-MBP.0xdata.loc,54323)
  ------------------------

  Open H2O Flow in browser: http://172.16.2.223:54327 (CMD + click in Mac OSX)

How do I use H2O with Spark Submit?

Spark Submit is for submitting self-contained applications. For more information, refer to the Spark documentation.

First, initialize H2O:

import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()

The Sparkling Water distribution provides several examples of self-contained applications built with Sparkling Water. To run the examples:

bin/run-example.sh ChicagoCrimeAppSmall

The “magic” behind run-example.sh is a regular Spark Submit:

$SPARK_HOME/bin/spark-submit ChicagoCrimeAppSmall --packages ai.h2o:sparkling-water-core_2.10:1.3.3 --packages ai.h2o:sparkling-water-examples_2.10:1.3.3


How do I use Sparkling Water with Databricks cloud?

Sparkling Water compatibility with Databricks cloud is still in development.


How do I develop applications with Sparkling Water?

For a regular Spark application (a self-contained application as described in the Spark documentation), the app needs to initialize H2OServices via H2OContext:

import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()

For more information, refer to the Sparkling Water development documentation.


How do I connect to Sparkling Water from R or Python?

After starting H2OServices by starting H2OContext, point your client to the IP address and port number specified in H2OContext.
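
For example, a minimal sketch in Python (the IP address and port are placeholders; use the values printed by H2OContext, such as the "Open H2O Flow in browser" line shown earlier):

import h2o

# connect an external client to the H2O cluster started by Sparkling Water
h2o.init(ip="172.16.2.223", port=54327)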


I’m getting a java.lang.ArrayIndexOutOfBoundsException when I try to run Sparkling Water - what do I need to do to resolve this error?

This error message displays if you have not set up the H2OContext before running Sparkling Water. To set up the H2OContext:

import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc)

After setting up H2OContext, try to run Sparkling Water again.



Tableau

Where can I learn more about running H2O with Tableau?

For more information about using H2O with Tableau, refer to this link and our demo in our GitHub repository. Other demos are available here and here.


Tunneling between servers with H2O

To tunnel between servers (for example, due to firewalls):

  1. Use ssh to log in to the machine where H2O will run.
  2. Start an instance of H2O by locating the working directory and calling a java command similar to the following example.

    The port number chosen here is arbitrary; yours may be different.

    $ java -jar h2o.jar -port 55599

    This returns output similar to the following:

     irene@mr-0x3:~/target$ java -jar h2o.jar -port 55599
     04:48:58.053 main      INFO WATER: ----- H2O started -----
     04:48:58.055 main      INFO WATER: Build git branch: master
     04:48:58.055 main      INFO WATER: Build git hash: 64fe68c59ced5875ac6bac26a784ce210ef9f7a0
     04:48:58.055 main      INFO WATER: Build git describe: 64fe68c
     04:48:58.055 main      INFO WATER: Build project version: 1.7.0.99999
     04:48:58.055 main      INFO WATER: Built by: 'Irene'
     04:48:58.055 main      INFO WATER: Built on: 'Wed Sep  4 07:30:45 PDT 2013'
     04:48:58.055 main      INFO WATER: Java availableProcessors: 4
     04:48:58.059 main      INFO WATER: Java heap totalMemory: 0.47 gb
     04:48:58.059 main      INFO WATER: Java heap maxMemory: 6.96 gb
     04:48:58.060 main      INFO WATER: ICE root: '/tmp'
     04:48:58.081 main      INFO WATER: Internal communication uses port: 55600
     +                                  Listening for HTTP and REST traffic on
     +                                  http://192.168.1.173:55599/
     04:48:58.109 main      INFO WATER: H2O cloud name: 'irene'
     04:48:58.109 main      INFO WATER: (v1.7.0.99999) 'irene' on
     /192.168.1.173:55599, discovery address /230.252.255.19:59132
     04:48:58.111 main      INFO WATER: Cloud of size 1 formed [/192.168.1.173:55599]
     04:48:58.247 main      INFO WATER: Log dir: '/tmp/h2ologs'
    
  3. Log into the remote machine where the running instance of H2O will be forwarded, using a command similar to the following (your specified port numbers and IP address will be different):

    ssh -L 55577:localhost:55599 irene@192.168.1.173

  4. Check the cluster status.

You are now using H2O from localhost:55577, but the instance of H2O is running on the remote server (in this case, the server with the IP address 192.168.1.xxx) at port number 55599.

To see this in action, note that the web UI is pointed at localhost:55577, but the cluster status shows the cluster running on 192.168.1.173:55599.
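
A client on your local machine can also attach through the tunnel; for example, a minimal sketch in Python (the port is the local end of the tunnel set up above):

import h2o

# connect through the local end of the ssh tunnel
h2o.init(ip="localhost", port=55577)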


Glossary

Term Definition
H2O.ai Maker of H2O. Visit our website.
Autoencoder An extension of the Deep Learning framework. Can be used to compress input features (similar to PCA). Sparse autoencoders are simple extensions that can increase accuracy. Use autoencoders for:

- generic dimensionality reduction (for pre-processing for any algorithm)

- anomaly detection (for comparing the reconstructed signal with the original to find differences that may be anomalies)

- layer-by-layer pre-training (using stacked auto-encoders)
Backpropagation Uses a known, desired output for each input value to calculate the loss function gradient for training. If enabled, performed after each training sample in Deep Learning.
BAD A column type that contains only missing values.
Balance classes A parameter that oversamples the minority classes to balance the distribution.
Beta constraints A data.frame or H2OParsedData object with the columns ["names", "lower_bounds", "upper_bounds", "beta_given"], where each row corresponds to a predictor in the GLM. "names" contains the predictor names, "lower_bounds" and "upper_bounds" are the lower and upper bounds of beta, and "beta_given" contains the user-supplied starting values for beta.
Binary A variable with only two possible outcomes. Refer to binomial.
Binomial A variable with the value 0 or 1. Binomial variables assigned as 0 indicate that an event hasn’t occurred or that the observation lacks a feature, where 1 indicates occurrence or instance of an attribute.
Bins Bins are linear-sized from the observed min-to-max for the subset that is being split. Large bins are enforced for shallow tree depths. Based on the tree decisions, as the tree gets deeper, the bins are distributed symmetrically over the reduced range of each subset.
Categorical A qualitative, unordered variable (for example, A, B, AB, and O would be values for the category blood type); synonym for enumerator or factor. Stored as an int column with a String mapping in H2O; limited to 10 million unique strings in H2O.
Classification A model whose goal is to predict the category for the response input.
Clip In the H2O web UI Flow, a clip is a single cell in a flow containing an action that is saved for later reuse.
Cloud Synonym for cluster. Refer to the definition for cluster.
Cluster 1. A group of H2O nodes that work together; when a job is submitted to a cluster, all the nodes in the cluster work on a portion of the job. Synonym for cloud.

2. In statistics, a cluster is a group of observations from a data set identified as similar according to a particular clustering algorithm.

Confusion matrix Table that depicts the performance of the algorithm, using the false positive, false negative, true positive, and true negative rates.
Continuous A variable that can take on all or nearly all values along an interval on the real number line (for example, height or weight). The opposite of a discrete value, which can only take on certain numerical values (for example, the number of patients treated).
CSV file CSV is an acronym for comma-separated value. A CSV file stores data in a plain text format.
Deep Learning Uses a composition of multiple non-linear transformations to model high-level abstractions in data.
Dependent variable The response column in H2O; what you are trying to measure, observe, or predict. The opposite of an independent variable.
Data frame A distributed representation of a large dataset.
Destination key Automatically generated key for a model that allows recall of a specific model later in analysis. Users can specify a different destination key than the key generated by H2O.
Deviance Deviance is the difference between an expected value and an observed value. It plays a critical role in defining GLM models. For a more detailed discussion of deviance, please refer to the H2O Data Science documentation on GLM.
Distributed key/value (DKV) Distributed key/value store. Refer also to key/value store.
Discrete A variable that can only take on certain numerical values (for example, the number of patients treated). The opposite of a continuous variable.
Enumerator/enum A data type where the value is one of a defined set of named values known as “elements”, “members”, or “enumerators.” For example, cat, dog, & mouse are enumerators of the enumerated type animal.
Epoch A round or iteration of model training or testing. Refer also to iteration.
Factor A data type where the value is one of a defined set of categories. Refer to Enum and Categorical.
Family The distribution options available for predictive modeling in GLM.
Feature Synonym for attribute, predictor, or independent variable. Usually refers to the data observed on features given in the columns of a data set.
Feed-forward Associates input with output for pattern recognition.
Flatfile A basic text file containing multiple IP addresses (one per line) used by H2O to configure a cluster.
Flow Refers to the series of cell-based actions created in H2O’s web UI or the web UI itself.
Gzipped (gz) file Gzip is a type of file compression commonly used for H2O file dependencies.
HEX format Records made up of hexadecimal numbers representing machine language code or constant data. In H2O, data must be parsed into .hex format before you can perform operations on it.
Independent variable A factor that can be manipulated or controlled (also known as a predictor). The opposite of a dependent variable.
Hit ratio The number of times the prediction was correct out of the total number of predictions.
Instance Occurs each time H2O is started. This process builds a cluster of nodes (even if it is only a one-node cluster on a local machine). The instance begins when the cluster is formed and ends when the program is closed.
Integer A whole number (can be negative but cannot be a fraction). Can be represented in H2O as an int, which is not a type but a property of the data.
Iteration A round or instance of model testing or training. Also known as an epoch.
Job A task performed by H2O. For example, reading a data file, parsing a data file, or building a model. In the browser-based GUI of H2O, each job is listed in the Admin menu under Jobs.
JVM Java virtual machine; used to run H2O.
Key The .hex key generated when data are parsed into H2O. In the web-based GUI, key is an input on each page where users define models and any page where users validate models on a new data set or use a model to generate predictions.
Key/value pair A type of data that associates a particular key index to a certain datum.
Key/value store A tool that allows storage of schema-less data. Data usually consists of a string that represents the key, and the data itself, which is the value. Refer also to distributed key/value.
L1 regularization A regularization method that constrains the absolute value of the weights and has the net effect of dropping some values (setting them to zero) from a model to reduce complexity and avoid overfitting.
L2 regularization A regularization method that constrains the sum of the squared weights. This method introduces bias into parameter estimates but frequently produces substantial gains in modeling as estimate variance is reduced.
Link function A user-defined option in GLM.
Loss function The function minimized in order to achieve a desired estimator; synonymous to objective function and criterion function. For example, linear regression defines the set of best parameter estimates as the set of estimates that produces the minimum of the sum of the squared errors. Errors are the difference between the predicted value and the observed value.
MSE Mean squared error; measures the average of the squares of the error rate (the difference between the predictors and what was predicted).
Multinomial A variable where the value can be one of more than two possible outcomes (for example, blood type).
N-folds User-defined number of cross validation models generated by H2O.
Node In distributed computing systems, nodes include clients, servers, or peers. In statistics, a node is a decision or terminal point in a classification tree.
Numeric A column type containing real numbers, small integers, or booleans.
Offset A parameter that compensates for differences in units of observation (for example, different populations or geographic sizes) to make sure outcome is proportional.
Outline In H2O’s web UI Flow, a brief summary of the actions contained in the cells.
Parse Analysis of a string of symbols or datum that results in the conversion of a set of information from a person-readable format to a machine-readable format.
POJO Plain Old Java Object; a way to export a model built in H2O and implement it in a Java application.
Regression A model where the input is numerical and the output is a prediction of numerical values. Also known as “quantitative”; the opposite of a classification model.
Response column Method of selecting the dependent variable in H2O.
Real A fractional number.
ROC Curve Graph plotting the true positive rate against the false positive rate.
Scoring history Represents the error rate of the model as it is built.
Seed A starting point for randomization. Seed specification is used when machine learning models have a random component; it allows users to recreate the exact “random” conditions used in a model at a later time.
Separator What separates the entries in the dataset; usually a comma, semicolon, etc.
Sparse A dataset where many of the rows contain blank values or “NA” instead of data.
Standard deviation The standard deviation of the data in the column, defined as the square root of the sum of the deviance of observed values from the mean divided by the number of elements in the column minus one. Abbreviated sd.
Standardization Transformation of a variable so that it is mean-centered at 0 and scaled by the standard deviation; helps prevent precision problems.
String Refers to data where each entry is typically unique (for example, a dataset containing people’s names and addresses).
Supervised learning Model type where the input is labeled so that the algorithm can identify it and learn from it.
Time Data type supported by H2O; represented as “milliseconds-since-the-Unix-Epoch”; stored internally as a 64-bit integer in a standard int column. Used directly by the Cox Proportional Hazards model but also used to build other features.
Training frame The dataset used to build the model.
Unsupervised learning Model type where the input is not labeled.
UUID A dense representation of universally unique identifiers (UUIDs) used to label and group events; stored as a 128-bit numeric value.
Validation An analysis of how well the model fits.
Validation frame The dataset used to evaluate the accuracy of the model.
Variable importance Represents the statistical significance of each variable in the data in terms of its effect on the model.
Weights A parameter that specifies certain outcomes as more significant (for example, if you are trying to identify incidence of disease, one “positive” result can be more meaningful than 50 “negative” responses). Higher values indicate more importance.
XLS file A Microsoft Excel 2003-2007 spreadsheet file format.
Y Dependent variable used in GLM; a user-defined input selected from the set of variables present in the user’s data.
YARN Yet Another Resource Negotiator; used to manage H2O on a Hadoop cluster.

Quick Start Videos

H2O Quick Start with Flow


H2O Quick Start with Python


H2O Quick Start on Hadoop


H2O Quick Start with Sparkling Water


H2O Quick Start with R


REST API Reference

GET /3/About

Return information about this H2O cluster.

InputAboutV3
OutputAboutV3

GET /3/Cloud

Determine the status of the nodes in the H2O cloud.

InputCloudV3
OutputCloudV3

HEAD /3/Cloud

Determine the status of the nodes in the H2O cloud.

InputCloudV3
OutputCloudV3

POST /3/Configuration/ModelBuilders/visibility

Set Model Builders visibility level.

InputModelBuildersVisibilityV3
OutputModelBuildersVisibilityV3

GET /3/Configuration/ModelBuilders/visibility

Get Model Builders visibility level.

InputModelBuildersVisibilityV3
OutputModelBuildersVisibilityV3

POST /3/CreateFrame

Create a synthetic H2O Frame.

InputCreateFrameV3
OutputCreateFrameV3

DELETE /3/DKV

Remove all keys from the H2O distributed K/V store.

InputRemoveAllV3
OutputRemoveAllV3

DELETE /3/DKV/(?.*)

Remove an arbitrary key from the H2O distributed K/V store.

InputRemoveV3
OutputRemoveV3

GET /3/DownloadDataset

Download dataset as a CSV.

InputDownloadDataV3
OutputDownloadDataV3

GET /3/DownloadDataset.bin

Download dataset as a CSV.

InputDownloadDataV3
OutputDownloadDataV3

GET /3/Find

Find a value within a Frame.

InputFindV3
OutputFindV3

GET /3/Frames

Return all Frames in the H2O distributed K/V store.

InputFramesV3
OutputFramesV3

DELETE /3/Frames

Delete all Frames from the H2O distributed K/V store.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)

Return the specified Frame.

InputFramesV3
OutputFramesV3

DELETE /3/Frames/(?.*)

Delete the specified Frame from the H2O distributed K/V store.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns

Return all the columns from a Frame.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns/(?.*)

Return the specified column from a Frame.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns/(?.*)/domain

Return the domains for the specified column. “null” if the column is not a categorical.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/columns/(?.*)/summary

Return the summary metrics for a column, e.g. mins, maxes, mean, sigma, percentiles, etc.

InputFramesV3
OutputFramesV3

POST /3/Frames/(?.*)/export

Export a Frame to the given path with optional overwrite.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/export/(?.*)/overwrite/(?.*)

Export a Frame to the given path with optional overwrite.

InputFramesV3
OutputFramesV3

GET /3/Frames/(?.*)/summary

Return a Frame, including the histograms, after forcing computation of rollups.

InputFramesV3
OutputFramesV3

POST /3/GarbageCollect

Explicitly call System.gc().

InputGarbageCollectV3
OutputGarbageCollectV3

GET /3/ImportFiles

Import raw data files into a single-column H2O Frame.

InputImportFilesV3
OutputImportFilesV3

POST /3/ImportFiles

Import raw data files into a single-column H2O Frame.

InputImportFilesV3
OutputImportFilesV3

GET /3/InitID

Issue a new session ID.

InputInitIDV3
OutputInitIDV3

DELETE /3/InitID

End a session.

InputInitIDV3
OutputInitIDV3

POST /3/Interaction

Create interactions between categorical columns.

InputInteractionV3
OutputInteractionV3

GET /3/JStack

Report stack traces for all threads on all nodes.

InputJStackV3
OutputJStackV3

GET /3/Jobs

Get a list of all the H2O Jobs (long-running actions).

InputJobsV3
OutputJobsV3

GET /3/Jobs/(?.*)

Get the status of the given H2O Job (long-running action).

InputJobsV3
OutputJobsV3

POST /3/Jobs/(?.*)/cancel

Cancel a running job.

InputJobsV3
OutputJobsV3

GET /3/KillMinus3

Kill minus 3 on this node

InputKillMinus3V3
OutputKillMinus3V3

POST /3/LogAndEcho

Save a message to the H2O logfile.

InputLogAndEchoV3
OutputLogAndEchoV3

GET /3/Logs/nodes/(?.*)/files/(?.*)

Get named log file for a node.

InputLogsV3
OutputLogsV3

POST /3/MakeGLMModel

Make a new GLM model based on an existing one.

InputMakeGLMModelV3
OutputGLMModelV3

GET /3/Metadata/endpoints

Return a list of all the REST API endpoints.

InputMetadataV3
OutputMetadataV3

GET /3/Metadata/endpoints/(?[0-9]+)

Return the REST API endpoint metadata, including documentation, for the endpoint specified by number.

InputMetadataV3
OutputMetadataV3

GET /3/Metadata/endpoints/(?.*)

Return the REST API endpoint metadata, including documentation, for the endpoint specified by path.

InputMetadataV3
OutputMetadataV3

GET /3/Metadata/schemaclasses/(?.*)

Return the REST API schema metadata for specified schema class.

InputMetadataV3
OutputMetadataV3

GET /3/Metadata/schemas

Return list of all REST API schemas.

InputMetadataV3
OutputMetadataV3

GET /3/Metadata/schemas/(?.*)

Return the REST API schema metadata for specified schema.

InputMetadataV3
OutputMetadataV3

POST /3/MissingInserter

Insert missing values.

InputMissingInserterV3
OutputMissingInserterV3

GET /3/ModelBuilders

Return the Model Builder metadata for all available algorithms.

InputModelBuildersV3
OutputModelBuildersV3

GET /3/ModelBuilders/(?.*)

Return the Model Builder metadata for the specified algorithm.

InputModelBuildersV3
OutputModelBuildersV3

POST /3/ModelBuilders/(?.*)/model_id

Return a new unique model_id for the specified algorithm.

InputModelBuildersV3
OutputModelIdV3

POST /3/ModelBuilders/deeplearning

Train a Deep Learning model on the specified Frame.

InputDeepLearningV3
OutputDeepLearningV3

POST /3/ModelBuilders/deeplearning/parameters

Validate a set of Deep Learning model builder parameters.

InputDeepLearningV3
OutputDeepLearningV3

POST /3/ModelBuilders/drf

Train a DRF model on the specified Frame.

InputDRFV3
OutputDRFV3

POST /3/ModelBuilders/drf/parameters

Validate a set of DRF model builder parameters.

InputDRFV3
OutputDRFV3

POST /3/ModelBuilders/gbm

Train a GBM model on the specified Frame.

InputGBMV3
OutputGBMV3

POST /3/ModelBuilders/gbm/parameters

Validate a set of GBM model builder parameters.

InputGBMV3
OutputGBMV3

POST /3/ModelBuilders/glm

Train a GLM model on the specified Frame.

InputGLMV3
OutputGLMV3

POST /3/ModelBuilders/glm/parameters

Validate a set of GLM model builder parameters.

InputGLMV3
OutputGLMV3

POST /3/ModelBuilders/glrm

Train a GLRM model on the specified Frame.

InputGLRMV3
OutputGLRMV3

POST /3/ModelBuilders/glrm/parameters

Validate a set of GLRM model builder parameters.

InputGLRMV3
OutputGLRMV3

POST /3/ModelBuilders/kmeans

Train a KMeans model on the specified Frame.

InputKMeansV3
OutputKMeansV3

POST /3/ModelBuilders/kmeans/parameters

Validate a set of KMeans model builder parameters.

InputKMeansV3
OutputKMeansV3

POST /3/ModelBuilders/naivebayes

Train a Naive Bayes model on the specified Frame.

InputNaiveBayesV3
OutputNaiveBayesV3

POST /3/ModelBuilders/naivebayes/parameters

Validate a set of Naive Bayes model builder parameters.

InputNaiveBayesV3
OutputNaiveBayesV3

POST /3/ModelBuilders/pca

Train a PCA model on the specified Frame.

InputPCAV3
OutputPCAV3

POST /3/ModelBuilders/pca/parameters

Validate a set of PCA model builder parameters.

InputPCAV3
OutputPCAV3

GET /3/ModelMetrics

Return all the saved scoring metrics.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/frames/(?.*)

Return the saved scoring metrics for the specified Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/frames/(?.*)/models/(?.*)

Return the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

DELETE /3/ModelMetrics/frames/(?.*)/models/(?.*)

Return the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/models/(?.*)

Return the saved scoring metrics for the specified Model.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/ModelMetrics/models/(?.*)/frames/(?.*)

Return the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

DELETE /3/ModelMetrics/models/(?.*)/frames/(?.*)

Return the saved scoring metrics for the specified Model and Frame.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

POST /3/ModelMetrics/models/(?.*)/frames/(?.*)

Return the scoring metrics for the specified Frame with the specified Model. If the Frame has already been scored with the Model then cached results will be returned; otherwise predictions for all rows in the Frame will be generated and the metrics will be returned.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/Models

Return all Models from the H2O distributed K/V store.

InputModelsV3
OutputModelsV3

DELETE /3/Models

Delete all Models from the H2O distributed K/V store.

InputModelsV3
OutputModelsV3

GET /3/Models.java/(?.*)

Return the stream containing model implementation in Java code.

InputModelsV3
OutputStreamingSchema

GET /3/Models.java/(?.*)/preview

Return potentially abridged model suitable for viewing in a browser (currently only used for java model code).

InputModelsV3
OutputStreamingSchema

GET /3/Models/(?.*)

Return the specified Model from the H2O distributed K/V store, optionally with the list of compatible Frames.

InputModelsV3
OutputModelsV3

DELETE /3/Models/(?.*)

Delete the specified Model from the H2O distributed K/V store.

InputModelsV3
OutputModelsV3

GET /3/NetworkTest

Run a network test to measure the performance of the cluster interconnect.

InputNetworkTestV3
OutputNetworkTestV3

POST /3/NodePersistentStorage/(?.*)

Store a value.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/(?.*)

Return all keys stored for a given category.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

POST /3/NodePersistentStorage/(?.*)/(?.*)

Store a named value.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/(?.*)/(?.*)

Return value for a given name.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

DELETE /3/NodePersistentStorage/(?.*)/(?.*)

Delete a key.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/categories/(?.*)/exists

Return true or false.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/categories/(?.*)/names/(?.*)/exists

Return true or false.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

GET /3/NodePersistentStorage/configured

Return true or false.

InputNodePersistentStorageV3
OutputNodePersistentStorageV3

POST /3/Parse

Parse a raw byte-oriented Frame into a useful columnar data Frame.

InputParseV3
OutputParseV3

POST /3/ParseSetup

Guess the parameters for parsing raw byte-oriented data into an H2O Frame.

InputParseSetupV3
OutputParseSetupV3

POST /3/Predictions/models/(?.*)/frames/(?.*)

Score (generate predictions) for the specified Frame with the specified Model. Both the Frame of predictions and the metrics will be returned.

InputModelMetricsListSchemaV3
OutputModelMetricsListSchemaV3

GET /3/Profiler

Report real-time profiling information for all nodes (sorted, aggregated stack traces).

InputProfilerV3
OutputProfilerV3

POST /3/Shutdown

Shut down the cluster

InputShutdownV3
OutputShutdownV3

POST /3/SplitFrame

Split an H2O Frame.

InputSplitFrameV3
OutputSplitFrameV3

GET /3/Timeline

Debugging tool that provides information on current communication between nodes.

InputTimelineV3
OutputTimelineV3

GET /3/Typeahead/files

Typeahead handler for filename completion.

InputTypeaheadV3
OutputTypeaheadV3

POST /3/UnlockKeys

Unlock all keys in the H2O distributed K/V store, to attempt to recover from a crash.

InputUnlockKeysV3
OutputUnlockKeysV3

GET /3/WaterMeterCpuTicks/(?.*)

Return a CPU usage snapshot of all cores of all nodes in the H2O cluster.

InputWaterMeterCpuTicksV3
OutputWaterMeterCpuTicksV3

GET /3/WaterMeterIo

Return IO usage snapshot of all nodes in the H2O cluster.

InputWaterMeterIoV3
OutputWaterMeterIoV3

GET /3/WaterMeterIo/(?.*)

Return IO usage snapshot of all nodes in the H2O cluster.

InputWaterMeterIoV3
OutputWaterMeterIoV3

POST /99/Assembly

Fit an assembly to an input frame

InputAssemblyV99
OutputAssemblyV99

GET /99/Assembly.java/(?.*)/(?.*)

Generate a Java POJO from the Assembly

InputAssemblyV99
OutputAssemblyV99

POST /99/DCTTransformer

Row-by-Row discrete cosine transforms in 1D, 2D and 3D.

InputDCTTransformerV3
OutputDCTTransformerV3

POST /99/Grid/deeplearning

Run grid search for DeepLearning model.

InputDeepLearningGridSearchV99
OutputDeepLearningGridSearchV99

POST /99/Grid/drf

Run grid search for DRF model.

InputDRFGridSearchV99
OutputDRFGridSearchV99

POST /99/Grid/gbm

Run grid search for GBM model.

InputGBMGridSearchV99
OutputGBMGridSearchV99

POST /99/Grid/glm

Run grid search for GLM model.

InputGLMGridSearchV99
OutputGLMGridSearchV99

POST /99/Grid/glrm

Run grid search for GLRM model.

InputGLRMGridSearchV3
OutputGLRMGridSearchV3

POST /99/Grid/kmeans

Run grid search for KMeans model.

InputKMeansGridSearchV99
OutputKMeansGridSearchV99

POST /99/Grid/naivebayes

Run grid search for Naive Bayes model.

InputNaiveBayesGridSearchV99
OutputNaiveBayesGridSearchV99

POST /99/Grid/pca

Run grid search for PCA model.

InputPCAGridSearchV99
OutputPCAGridSearchV99

POST /99/Grid/svd

Run grid search for SVD model.

InputSVDGridSearchV99
OutputSVDGridSearchV99

GET /99/Grids

Return all grids from H2O distributed K/V store.

InputGridsV99
OutputGridsV99

GET /99/Grids/(?.*)

Return the specified grid search result.

InputGridSchemaV99
OutputGridSchemaV99

POST /99/ModelBuilders/svd

Train a SVD model on the specified Frame.

InputSVDV99
OutputSVDV99

POST /99/ModelBuilders/svd/parameters

Validate a set of SVD model builder parameters.

InputSVDV99
OutputSVDV99

POST /99/Models.bin/(?.*)

Import given binary model into H2O.

InputModelImportV3
OutputModelsV3

GET /99/Models.bin/(?.*)

Export given model.

InputModelExportV3
OutputModelExportV3

POST /99/Rapids

Execute a Rapids AST.

InputRapidsSchema
OutputRapidsSchema

GET /99/Sample

Example of an experimental endpoint. Call via /EXPERIMENTAL/Sample. Experimental endpoints can change at any moment.

InputCloudV3
OutputCloudV3

POST /99/Tabulate

Tabulate one column vs another.

InputTabulateV3
OutputTabulateV3

REST API Schema Reference

AboutEntryV3

name
string
Property nameOut
value
string
Property valueOut

AboutV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
entries
Iced[]
List of properties about this running H2O instanceOut

AssemblyKeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

AssemblyV99

steps
string[]
A list of steps describing the assembly line.In
frame
Key
Input Frame for the assembly.In
pojo_name
string
The name of the file and generated class In
assembly_id
string
The key of the Assembly object to retrieve from the DKV.In
result
Key
Output of the assembly line.In
assembly
Key
A Key to the fit Assembly data structureIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

CloudV3

skip_ticks
boolean
skip_ticksIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
version
string
versionOut
node_idx
int
Node index number cloud status is collected from (zero-based)Out
cloud_name
string
cloud_nameOut
cloud_size
int
cloud_sizeOut
cloud_uptime_millis
long
cloud_uptime_millisOut
cloud_healthy
boolean
cloud_healthyOut
bad_nodes
int
Nodes reporting unhealthyOut
consensus
boolean
Cloud voting is stableOut
locked
boolean
Cloud is accepting new members or notOut
is_client
boolean
Cloud is in client mode.Out
nodes
Iced[]
nodesOut

ClusteringModelBuilderSchema

parameters
Parameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

ClusteringModelParametersSchema

k
int
Number of clustersIn/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

ColSpecifierV3

column_name
string
Name of the columnIn/Out
is_member_of_frames
string[]
List of fields which specify columns that must contain this columnIn/Out

ColV3

label
string
labelOut
missing_count
long
missingOut
zero_count
long
zerosOut
positive_infinity_count
long
positive infinitiesOut
negative_infinity_count
long
negative infinitiesOut
mins
double[]
minsOut
maxs
double[]
maxsOut
mean
double
meanOut
sigma
double
sigmaOut
type
string
datatype: {enum, string, int, real, time, uuid}Out
domain
string[]
domain; not-null for categorical columns onlyOut
domain_cardinality
int
cardinality of this column’s domain; not-null for categorical columns onlyOut
data
double[]
dataOut
string_data
string[]
string dataOut
precision
byte
decimal precision, -1 for all digitsOut
histogram_bins
long[]
Histogram bins; null if not computedOut
histogram_base
double
Start of histogram bin zeroOut
histogram_stride
double
Stride per binOut
percentiles
double[]
Percentile values, matching the default percentilesOut

ColumnSpecsBase

name
string
Column NameOut
type
string
Column TypeOut
format
string
Column Format (printf)Out
description
string
Column DescriptionOut

ConfusionMatrixBase

table
TwoDimTable
Annotated confusion matrixOut

ConfusionMatrixV3

table
TwoDimTable
Annotated confusion matrixOut

CoxPHModelOutputV3

names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

CoxPHModelV3

model_id
Key
Model keyIn/Out
parameters
CoxPHParameters
The build parameters for the model (e.g. K for KMeans).Out
output
CoxPHOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

CoxPHParametersV3

model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

CoxPHV3

parameters
CoxPHParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

CreateFrameV3

rows
long
Number of rowsIn
cols
int
Number of data columns (in addition to the first response column)In
seed
long
Random number seedIn
randomize
boolean
Whether frame should be randomizedIn
value
long
Constant value (for randomize=false)In
real_range
long
Range for real variables (-range … range)In
categorical_fraction
double
Fraction of categorical columns (for randomize=true)In
factors
int
Factor levels for categorical variablesIn
integer_fraction
double
Fraction of integer columns (for randomize=true)In
integer_range
long
Range for integer variables (-range … range)In
binary_fraction
double
Fraction of binary columns (for randomize=true)In
binary_ones_fraction
double
Fraction of 1’s in binary columnsIn
missing_fraction
double
Fraction of missing valuesIn
response_factors
int
Number of factor levels of the first column (1=real, 2=binomial, N=multinomial)In
has_response
boolean
Whether an additional response column should be generatedIn
key
Key
Job KeyIn
description
string
Job descriptionIn
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut
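
CreateFrameV3 describes the job that generates a synthetic frame. A hedged sketch, assuming a POST /3/CreateFrame endpoint with form-encoded parameters, mapping a few of the In fields above to a request:

```python
# Hedged sketch: create a small random frame via the REST API.
# The endpoint path and parameter encoding are assumptions based on the schema name.
import requests

params = {
    "rows": 1000,                 # Number of rows (In)
    "cols": 10,                   # Number of data columns (In)
    "randomize": "true",          # Whether frame should be randomized (In)
    "seed": 42,                   # Random number seed (In)
    "categorical_fraction": 0.2,  # Fraction of categorical columns (In)
    "missing_fraction": 0.05,     # Fraction of missing values (In)
    "has_response": "true",       # Generate an additional response column (In)
}
job = requests.post("http://localhost:54321/3/CreateFrame", data=params).json()
print(job["dest"], job["status"])  # destination key and job status (Out)
```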

DCTTransformerV3

dataset
Key
DatasetIn
destination_frame
Key
Destination Frame IDIn
dimensions
int[]
Dimensions of the input array: Height, Width, Depth (Nx1x1 for 1D, NxMx1 for 2D)In
inverse
boolean
Whether to do the inverse transformIn
key
Key
Job KeyIn
description
string
Job descriptionIn
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut

DRFGridSearchV99

parameters
DRFParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

DRFModelOutputV3

variable_importances
TwoDimTable
Variable ImportancesOut
init_f
double
The Intercept term, the initial model function value to which trees make adjustmentsOut
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

DRFModelV3

model_id
Key
Model keyIn/Out
parameters
DRFParameters
The build parameters for the model (e.g. K for KMeans).Out
output
DRFOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

DRFParametersV3

mtries
int
Number of variables randomly sampled as candidates at each split. If set to -1, defaults to sqrt{p} for classification and p/3 for regression (where p is the # of predictors)In
binomial_double_trees
boolean
For binary classification: Build 2x as many trees (one per class) - can lead to higher accuracy.In
ntrees
int
Number of trees.In
max_depth
int
Maximum tree depth.In
min_rows
double
Fewest allowed (weighted) observations in a leaf (in R called ‘nodesize’).In
nbins
int
For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best pointIn
nbins_top_level
int
For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease by factor of two per levelIn
nbins_cats
int
For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher values can lead to more overfitting.In
r2_stopping
double
Stop making trees when the R^2 metric equals or exceeds thisIn
seed
long
Seed for pseudo random number generator (if applicable)In
build_tree_one_node
boolean
Run on one node only; no network overhead but fewer cpus used. Suitable for small datasets.In
sample_rate
float
Row sample rate (from 0.0 to 1.0)In
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out
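
The fields above are what a DRF build request accepts. A hedged sketch, assuming a POST /3/ModelBuilders/drf endpoint with form-encoded parameters; the frame key "my_training_frame" is a hypothetical, already-imported frame:

```python
# Hedged sketch: launch a DRF build using a few DRFParametersV3 fields.
# Parameter names follow the schema above; endpoint path and frame key are assumptions.
import requests

params = {
    "training_frame": "my_training_frame",  # Training frame (In/Out)
    "response_column": "label",             # Response column (In/Out)
    "ntrees": 50,                           # Number of trees (In)
    "max_depth": 20,                        # Maximum tree depth (In)
    "mtries": -1,                           # -1 => sqrt{p} / p/3 default (In)
    "nfolds": 5,                            # N-fold cross-validation (In/Out)
}
builder = requests.post("http://localhost:54321/3/ModelBuilders/drf",
                        data=params).json()
print(builder["job"], builder["error_count"])  # Job key and validation errors (Out)
```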

DRFV3

parameters
DRFParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

DStackTraceV3

node
string
Node nameOut
time
long
Unix epoch timeOut
thread_traces
string[]
One trace per threadOut

DeepLearningGridSearchV99

parameters
DeepLearningParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

DeepLearningModelOutputV3

weights
Key[]
Frame keys for weight matricesIn
biases
Key[]
Frame keys for bias vectorsIn
normmul
double[]
Normalization/Standardization multipliers for numeric predictorsOut
normsub
double[]
Normalization/Standardization offsets for numeric predictorsOut
normrespmul
double[]
Normalization/Standardization multipliers for numeric responseOut
normrespsub
double[]
Normalization/Standardization offsets for numeric responseOut
catoffsets
int[]
Categorical offsets for one-hot encodingOut
variable_importances
TwoDimTable
Variable ImportancesOut
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

DeepLearningModelV3

model_id
Key
Model keyIn/Out
parameters
DeepLearningParameters
The build parameters for the model (e.g. K for KMeans).Out
output
DeepLearningModelOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

DeepLearningParametersV3

distribution
enum
Distribution functionIn
tweedie_power
double
Tweedie PowerIn
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
overwrite_with_best_model
boolean
If enabled, override the final model with the best model found during trainingIn/Out
autoencoder
boolean
Auto-EncoderIn/Out
use_all_factor_levels
boolean
Use all factor levels of categorical variables. Otherwise, the first factor level is omitted (without loss of accuracy). Useful for variable importances and auto-enabled for autoencoder.In/Out
activation
enum
Activation functionIn/Out
hidden
int[]
Hidden layer sizes (e.g. 100,100).In/Out
epochs
double
How many times the dataset should be iterated (streamed), can be fractionalIn/Out
train_samples_per_iteration
long
Number of training samples (globally) per MapReduce iteration. Special values are 0: one epoch, -1: all available data (e.g., replicated training data), -2: automaticIn/Out
target_ratio_comm_to_comp
double
Target ratio of communication overhead to computation. Only for multi-node operation and train_samples_per_iteration=-2 (auto-tuning)In/Out
seed
long
Seed for random numbers (affects sampling) - Note: only reproducible when running single threadedIn/Out
adaptive_rate
boolean
Adaptive learning rateIn/Out
rho
double
Adaptive learning rate time decay factor (similarity to prior updates)In/Out
epsilon
double
Adaptive learning rate smoothing factor (to avoid divisions by zero and allow progress)In/Out
rate
double
Learning rate (higher => less stable, lower => slower convergence)In/Out
rate_annealing
double
Learning rate annealing: rate / (1 + rate_annealing * samples)In/Out
rate_decay
double
Learning rate decay factor between layers (N-th layer: rate*alpha^(N-1))In/Out
momentum_start
double
Initial momentum at the beginning of training (try 0.5)In/Out
momentum_ramp
double
Number of training samples for which momentum increasesIn/Out
momentum_stable
double
Final momentum after the ramp is over (try 0.99)In/Out
nesterov_accelerated_gradient
boolean
Use Nesterov accelerated gradient (recommended)In/Out
input_dropout_ratio
double
Input layer dropout ratio (can improve generalization, try 0.1 or 0.2)In/Out
hidden_dropout_ratios
double[]
Hidden layer dropout ratios (can improve generalization), specify one value per hidden layer, defaults to 0.5In/Out
l1
double
L1 regularization (can add stability and improve generalization, causes many weights to become 0)In/Out
l2
double
L2 regularization (can add stability and improve generalization, causes many weights to be small)In/Out
max_w2
float
Constraint for squared sum of incoming weights per unit (e.g. for Rectifier)In/Out
initial_weight_distribution
enum
Initial Weight DistributionIn/Out
initial_weight_scale
double
Uniform: -value…value, Normal: stddevIn/Out
loss
enum
Loss functionIn/Out
score_interval
double
Shortest time interval (in secs) between model scoringIn/Out
score_training_samples
long
Number of training set samples for scoring (0 for all)In/Out
score_validation_samples
long
Number of validation set samples for scoring (0 for all)In/Out
score_duty_cycle
double
Maximum duty cycle fraction for scoring (lower: more training, higher: more scoring).In/Out
classification_stop
double
Stopping criterion for classification error fraction on training data (-1 to disable)In/Out
regression_stop
double
Stopping criterion for regression error (MSE) on training data (-1 to disable)In/Out
quiet_mode
boolean
Enable quiet mode for less output to standard outputIn/Out
score_validation_sampling
enum
Method used to sample validation dataset for scoringIn/Out
diagnostics
boolean
Enable diagnostics for hidden layersIn/Out
variable_importances
boolean
Compute variable importances for input features (Gedeon method) - can be slow for large networksIn/Out
fast_mode
boolean
Enable fast mode (minor approximation in back-propagation)In/Out
force_load_balance
boolean
Force extra load balancing to increase training speed for small datasets (to keep all cores busy)In/Out
replicate_training_data
boolean
Replicate the entire training dataset onto every node for faster training on small datasetsIn/Out
single_node_mode
boolean
Run on a single node for fine-tuning of model parametersIn/Out
shuffle_training_data
boolean
Enable shuffling of training data (recommended if training data is replicated and train_samples_per_iteration is close to #nodes x #rows, or if using balance_classes)In/Out
missing_values_handling
enum
Handling of missing values. Either Skip or MeanImputation.In/Out
sparse
boolean
Sparse data handling (more efficient for data with lots of 0 values).In/Out
col_major
boolean
Use a column major weight matrix for input layer. Can speed up forward propagation, but might slow down backpropagation (Deprecated).In/Out
average_activation
double
Average activation for sparse auto-encoder (Experimental)In/Out
sparsity_beta
double
Sparsity regularization (Experimental)In/Out
max_categorical_features
int
Max. number of categorical features, enforced via hashing (Experimental)In/Out
reproducible
boolean
Force reproducibility on small data (will be slow - only uses 1 thread)In/Out
export_weights_and_biases
boolean
Whether to export Neural Network weights and biases to H2O FramesIn/Out
elastic_averaging
boolean
Elastic averaging between compute nodes can improve distributed model convergence (Experimental)In/Out
elastic_averaging_moving_rate
double
Elastic averaging moving rate (only if elastic averaging is enabled).In/Out
elastic_averaging_regularization
double
Elastic averaging regularization strength (only if elastic averaging is enabled).In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out
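
A hedged sketch of a Deep Learning build request using a handful of the DeepLearningParametersV3 fields above. The endpoint path, the bracketed string form for array-valued parameters such as hidden, and the frame key "my_training_frame" are all assumptions:

```python
# Hedged sketch: a Deep Learning build request via the REST API.
import requests

params = {
    "training_frame": "my_training_frame",
    "response_column": "label",
    "activation": "Rectifier",   # Activation function (In/Out)
    "hidden": "[100,100]",       # Hidden layer sizes (In/Out); bracketed form assumed
    "epochs": 10,                # Passes over the data (In/Out)
    "adaptive_rate": "true",     # Adaptive learning rate (In/Out)
    "l1": 1e-5,                  # L1 regularization (In/Out)
}
builder = requests.post("http://localhost:54321/3/ModelBuilders/deeplearning",
                        data=params).json()
print(builder["error_count"], builder["messages"])  # validation results (Out)
```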

DeepLearningV3

parameters
DeepLearningParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

DownloadDataV3

frame_id
Key
Frame to downloadIn
hex_string
boolean
Emit double values in a machine readable lossless format with Double.toHexString().In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
csv
string
CSV StreamOut
filename
string
Suggested FilenameOut
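
A hedged sketch of downloading a frame as CSV using the fields above. The /3/DownloadDataset path is an assumption based on the schema name, and "my_frame" is a hypothetical frame key:

```python
# Hedged sketch: download a frame as CSV via the endpoint behind DownloadDataV3.
import requests

resp = requests.get("http://localhost:54321/3/DownloadDataset",
                    params={"frame_id": "my_frame",      # Frame to download (In)
                            "hex_string": "false"})      # lossless hex doubles (In)
with open("my_frame.csv", "wb") as f:
    f.write(resp.content)   # the CSV stream (Out); a suggested filename is also returned
```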

EventV3

date
string
Time when the event was recorded. Format is hh:mm:ss:msIn
nanos
long
Time in nanosIn
type
enum
type of recorded eventIn

ExampleModelOutputV3

iterations
int
Iterations executedIn
maxs
double[]
(No description available)In
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

ExampleModelV3

model_id
Key
Model keyIn/Out
parameters
ExampleParameters
The build parameters for the model (e.g. K for KMeans).Out
output
ExampleOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

ExampleParametersV3

max_iterations
int
Maximum training iterations.In
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

ExampleV3

parameters
ExampleParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

FieldMetadataBase

schema_name
string
Schema name for this field, if it is_schema, or the name of the enum, if it’s an enum.In
name
string
Field name in the SchemaOut
type
string
Type for this fieldOut
is_schema
boolean
Type for this field is itself a Schema.Out
value
Polymorphic
Value for this fieldOut
help
string
A short help description to appear alongside the field in a UIOut
label
string
The label that should be displayed for the field if the name is insufficientOut
required
boolean
Is this field required, or is the default value generally sufficient?Out
level
enum
How important is this field? The web UI uses the level to do a slow reveal of the parametersOut
direction
enum
Is this field an input, output or inout?Out
is_gridable
boolean
Is the field gridable (i.e., it can be used in grid call)Out
values
string[]
For enum-type fields the allowed values are specified using the values annotation; this is used in UIs to tell the user the allowed values, and for validationOut
json
boolean
Should this field be rendered in the JSON representation?Out
is_member_of_frames
string[]
For Vec-type fields this is the set of Frame-type fields which must contain the named column; for example, for a SupervisedModel the response_column must be in both the training_frame and (if it’s set) the validation_frameOut
is_mutually_exclusive_with
string[]
For Vec-type fields this is the set of other Vec-type fields which must contain mutually exclusive values; for example, for a SupervisedModel the response_column must be mutually exclusive with the weights_columnOut

FieldMetadataV3

schema_name
string
Schema name for this field, if it is_schema, or the name of the enum, if it’s an enum.In
name
string
Field name in the SchemaOut
type
string
Type for this fieldOut
is_schema
boolean
Type for this field is itself a Schema.Out
value
Polymorphic
Value for this fieldOut
help
string
A short help description to appear alongside the field in a UIOut
label
string
The label that should be displayed for the field if the name is insufficientOut
required
boolean
Is this field required, or is the default value generally sufficient?Out
level
enum
How important is this field? The web UI uses the level to do a slow reveal of the parametersOut
direction
enum
Is this field an input, output or inout?Out
is_gridable
boolean
Is the field gridable (i.e., it can be used in grid call)Out
values
string[]
For enum-type fields the allowed values are specified using the values annotation; this is used in UIs to tell the user the allowed values, and for validationOut
json
boolean
Should this field be rendered in the JSON representation?Out
is_member_of_frames
string[]
For Vec-type fields this is the set of Frame-type fields which must contain the named column; for example, for a SupervisedModel the response_column must be in both the training_frame and (if it’s set) the validation_frameOut
is_mutually_exclusive_with
string[]
For Vec-type fields this is the set of other Vec-type fields which must contain mutually exclusive values; for example, for a SupervisedModel the response_column must be mutually exclusive with the weights_columnOut

FindV3

key
Frame
Frame to searchIn
column
string
Column, or null for allIn
row
long
Starting row for searchIn
match
string
Value to search for; leave blank for a search for missing valuesIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
prev
long
previous row with matching value, or -1Out
next
long
next row with matching value, or -1Out
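
A hedged sketch of searching a frame for a value with the In fields above. The /3/Find path is an assumption based on the schema name, and "my_frame" is a hypothetical frame key:

```python
# Hedged sketch: find the next row containing a value in a column.
import requests

resp = requests.get("http://localhost:54321/3/Find",
                    params={"key": "my_frame",   # Frame to search (In)
                            "column": "age",     # Column, or null for all (In)
                            "row": 0,            # Starting row for search (In)
                            "match": "42"})      # Value to search for (In)
result = resp.json()
print(result["prev"], result["next"])  # previous/next matching row, or -1 (Out)
```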

FrameBase

frame_id
Key
Frame IDIn/Out
byte_size
long
Total data size in bytesOut
is_text
boolean
Is this Frame raw unparsed data?Out

FrameKeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

FrameSynopsisV3

frame_id
Key
Frame IDIn/Out
rows
long
Number of rows in the FrameOut
columns
long
Number of columns in the FrameOut
byte_size
long
Total data size in bytesOut
is_text
boolean
Is this Frame raw unparsed data?Out

FrameV3

row_offset
long
Row offset to displayIn
row_count
int
Number of rows to displayIn/Out
column_offset
int
Column offset to returnIn/Out
column_count
int
Number of columns to returnIn/Out
total_column_count
int
Total number of columns in the FrameIn/Out
frame_id
Key
Frame IDIn/Out
checksum
long
checksumOut
rows
long
Number of rows in the FrameOut
default_percentiles
double[]
Default percentiles, from 0 to 1Out
columns
Vec[]
Columns in the FrameOut
compatible_models
string[]
Compatible models, if requestedOut
chunk_summary
TwoDimTable
Chunk summaryOut
distribution_summary
TwoDimTable
Distribution summaryOut
byte_size
long
Total data size in bytesOut
is_text
boolean
Is this Frame raw unparsed data?Out

FramesBase

frame_id
Key
Name of Frame of interestIn
column
string
Name of column of interestIn
find_compatible_models
boolean
Find and return compatible models?In
path
string
File output pathIn
force
boolean
Overwrite existing fileIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
row_offset
long
Row offset to returnIn/Out
row_count
int
Number of rows to returnIn/Out
column_offset
int
Column offset to returnIn/Out
column_count
int
Number of columns to returnIn/Out
job
Job
Job for export fileOut
frames
Iced[]
FramesOut
compatible_models
Model[]
Compatible modelsOut
domain
string[][]
DomainsOut

FramesV3

frame_id
Key
Name of Frame of interestIn
column
string
Name of column of interestIn
find_compatible_models
boolean
Find and return compatible models?In
path
string
File output pathIn
force
boolean
Overwrite existing fileIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
row_offset
long
Row offset to returnIn/Out
row_count
int
Number of rows to returnIn/Out
column_offset
int
Column offset to returnIn/Out
column_count
int
Number of columns to returnIn/Out
job
Job
Job for export fileOut
frames
Iced[]
FramesOut
compatible_models
Model[]
Compatible modelsOut
domain
string[][]
DomainsOut
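
FramesV3 governs the GET /3/Frames listing that the _exclude_fields description above uses as its example. A short sketch combining the paging fields with that same query string:

```python
# Sketch: list frames, paging through rows and trimming the payload with
# _exclude_fields exactly as the description above illustrates.
import requests

resp = requests.get("http://localhost:54321/3/Frames",
                    params={"row_offset": 0, "row_count": 10,
                            "_exclude_fields": "frames/frame_id/URL,__meta"})
for frame in resp.json()["frames"]:
    print(frame["frame_id"]["name"], frame["rows"], frame["byte_size"])
```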

GBMGridSearchV99

parameters
GBMParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

GBMModelOutputV3

variable_importances
TwoDimTable
Variable ImportancesOut
init_f
double
The Intercept term, the initial model function value to which trees make adjustmentsOut
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

GBMModelV3

model_id
Key
Model keyIn/Out
parameters
GBMParameters
The build parameters for the model (e.g. K for KMeans).Out
output
GBMOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

GBMParametersV3

learn_rate
float
Learning rate (from 0.0 to 1.0)In
distribution
enum
Distribution functionIn
tweedie_power
double
Tweedie Power (between 1 and 2)In
col_sample_rate
float
Column sample rate (from 0.0 to 1.0)In
ntrees
int
Number of trees.In
max_depth
int
Maximum tree depth.In
min_rows
double
Fewest allowed (weighted) observations in a leaf (in R called ‘nodesize’).In
nbins
int
For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best pointIn
nbins_top_level
int
For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease by factor of two per levelIn
nbins_cats
int
For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher values can lead to more overfitting.In
r2_stopping
double
Stop making trees when the R^2 metric equals or exceeds thisIn
seed
long
Seed for pseudo random number generator (if applicable)In
build_tree_one_node
boolean
Run on one node only; no network overhead but fewer cpus used. Suitable for small datasets.In
sample_rate
float
Row sample rate (from 0.0 to 1.0)In
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out
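
A hedged sketch of a GBM build request built from the GBMParametersV3 fields above. The endpoint path is an assumption, and "my_training_frame" is a hypothetical frame key:

```python
# Hedged sketch: launch a GBM build via the REST API.
import requests

params = {
    "training_frame": "my_training_frame",
    "response_column": "label",
    "ntrees": 100,              # Number of trees (In)
    "max_depth": 5,             # Maximum tree depth (In)
    "learn_rate": 0.1,          # Learning rate, 0.0-1.0 (In)
    "sample_rate": 0.8,         # Row sample rate (In)
    "stopping_rounds": 3,       # Early-stopping window (In/Out)
    "stopping_metric": "AUTO",  # logloss / deviance chosen automatically (In/Out)
}
builder = requests.post("http://localhost:54321/3/ModelBuilders/gbm",
                        data=params).json()
print(builder["job"])  # Job key (Out)
```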

GBMV3

parameters
GBMParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

GLMGridSearchV99

parameters
GLMParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

GLMModelOutputV3

coefficients_table
TwoDimTable
Table of CoefficientsIn
standardized_coefficient_magnitudes
TwoDimTable
Standardized Coefficient MagnitudesIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

GLMModelV3

model_id
Key
Model keyIn/Out
parameters
GLMParameters
The build parameters for the model (e.g. K for KMeans).Out
output
GLMOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

GLMParametersV3

family
enum
Family. Use binomial for classification with logistic regression, others are for regression problems.In
tweedie_variance_power
double
Tweedie variance powerIn
tweedie_link_power
double
Tweedie link powerIn
solver
enum
AUTO will set the solver based on the given data and the other parameters. IRLSM is fast on problems with a small number of predictors and for lambda-search with L1 penalty; L_BFGS scales better for datasets with many columns. Coordinate descent is experimental (beta).In
alpha
double[]
distribution of regularization between L1 and L2.In
lambda
double[]
regularization strengthIn
lambda_search
boolean
use lambda search starting at lambda max, given lambda is then interpreted as lambda minIn
nlambdas
int
number of lambdas to be used in a searchIn
standardize
boolean
Standardize numeric columns to have zero mean and unit varianceIn
non_negative
boolean
Restrict coefficients (not intercept) to be non-negativeIn
max_iterations
int
Maximum number of iterationsIn
beta_epsilon
double
converge if beta changes less (using L-infinity norm) than beta epsilon, ONLY applies to IRLSM solverIn
objective_epsilon
double
converge if objective value changes less than thisIn
gradient_epsilon
double
converge if objective changes less (using L-infinity norm) than this, ONLY applies to L-BFGS solverIn
obj_reg
double
likelihood divider in objective value computation, default is 1/nobsIn
link
enum
(No description available)In
intercept
boolean
include constant term in the modelIn
prior
double
prior probability for y==1. To be used only for logistic regression iff the data has been sampled and the mean of response does not reflect reality.In
lambda_min_ratio
double
min lambda used in lambda search, specified as a ratio of lambda_maxIn
beta_constraints
Key
beta constraintsIn
max_active_predictors
int
Maximum number of active predictors during computation. Use as a stopping criterion to prevent expensive model building with many predictors.In
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out
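
A hedged sketch of a GLM build request using a few of the GLMParametersV3 fields above. Array-valued parameters such as alpha and lambda are shown in bracketed string form, which is an assumption about the wire format; the endpoint path and "my_training_frame" are also assumptions:

```python
# Hedged sketch: launch a logistic-regression GLM build via the REST API.
import requests

params = {
    "training_frame": "my_training_frame",
    "response_column": "label",
    "family": "binomial",    # logistic regression for classification (In)
    "alpha": "[0.5]",        # distribution of regularization between L1 and L2 (In)
    "lambda": "[0.001]",     # regularization strength (In)
    "lambda_search": "false",
    "standardize": "true",   # zero mean / unit variance for numeric columns (In)
}
builder = requests.post("http://localhost:54321/3/ModelBuilders/glm",
                        data=params).json()
print(builder["error_count"])  # count of parameter validation errors (Out)
```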

GLMV3

parameters
GLMParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

GLRMGridSearchV3

parameters
GLRMParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

GLRMModelOutputV3

iterations
int
Iterations executedIn
objective
double
Objective valueIn
avg_change_obj
double
Average change in objective value on final iterationIn
step_size
double
Final step sizeIn
archetypes
TwoDimTable
Mapping from lower dimensional k-space to training featuresIn
singular_vals
double[]
Singular values of XY matrixIn
eigenvectors
TwoDimTable
Eigenvectors of XY matrixIn
representation_name
string
Frame key name for X matrixIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

GLRMModelV3

model_id
Key
Model keyIn/Out
parameters
GLRMParameters
The build parameters for the model (e.g. K for KMeans).Out
output
GLRMOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

GLRMParametersV3

transform
enum
Transformation of training dataIn
k
int
Rank of matrix approximationIn
loss
enum
Numeric loss functionIn
multi_loss
enum
Categorical loss functionIn
loss_by_col
enum[]
Loss function by column (override)In
loss_by_col_idx
int[]
Loss function by column index (override)In
period
int
Length of period (only used with periodic loss function)In
regularization_x
enum
Regularization function for X matrixIn
regularization_y
enum
Regularization function for Y matrixIn
gamma_x
double
Regularization weight on X matrixIn
gamma_y
double
Regularization weight on Y matrixIn
max_iterations
int
Maximum number of iterationsIn
init_step_size
double
Initial step sizeIn
min_step_size
double
Minimum step sizeIn
seed
long
RNG seed for initializationIn
init
enum
Initialization modeIn
svd_method
enum
Method for computing SVD during initialization (Caution: Power and Randomized are currently experimental and unstable)In
user_y
Key
User-specified initial YIn
user_x
Key
User-specified initial XIn
loading_name
string
Frame key to save resulting XIn
expand_user_y
boolean
Expand categorical columns in user-specified initial YIn
impute_original
boolean
Reconstruct original training data by reversing transformIn
recover_svd
boolean
Recover singular values and eigenvectors of XYIn
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out
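
A hedged sketch of a GLRM build request using a few of the GLRMParametersV3 fields above. The endpoint path is an assumption, and "my_training_frame" is a hypothetical frame key:

```python
# Hedged sketch: launch a GLRM build via the REST API.
import requests

params = {
    "training_frame": "my_training_frame",
    "k": 5,                      # Rank of matrix approximation (In)
    "transform": "STANDARDIZE",  # Transformation of training data (In)
    "loss": "Quadratic",         # Numeric loss function (In)
    "max_iterations": 1000,      # Maximum number of iterations (In)
    "seed": 1234,                # RNG seed for initialization (In)
}
builder = requests.post("http://localhost:54321/3/ModelBuilders/glrm",
                        data=params).json()
print(builder["job"])  # Job key (Out)
```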

GLRMV3

parameters
GLRMParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

GarbageCollectV3

(No fields)

GrepModelOutputV3

matches
string[]
Matching stringsIn
offsets
long[]
Byte offsets of matchesIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

GrepModelV3

model_id
Key
Model keyIn/Out
parameters
GrepParameters
The build parameters for the model (e.g. K for KMeans).Out
output
GrepOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

GrepParametersV3

regex
string
regexIn
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

GrepV3

parameters
GrepParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

GridKeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

GridSchemaV99

grid_id
Key
Grid idIn
model_ids
Key[]
Model IDs build by a grid searchIn
hyper_names
string[]
Used hyper parameters.Out
failed_params
Parameters[]
List of failed parametersOut
failure_details
string[]
List of detailed failure messagesOut
failure_stack_traces
string[]
List of detailed failure stack tracesOut
failed_raw_params
string[][]
List of raw parameters causing model building failureOut

GridSearchSchema

parameters
Parameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out
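
A grid search request combines the basic builder parameters with a hyper_parameters map, and the response reports the total model count plus a job key to poll. Below is a minimal sketch using Python's requests library; the endpoint path /99/Grid/kmeans, the host/port, the frame name, and the JSON encoding of the hyper-parameter map are assumptions of this sketch, not values taken from this reference.

```python
import json
import requests

H2O = "http://localhost:54321"           # assumed local H2O instance
ENDPOINT = H2O + "/99/Grid/kmeans"       # assumed grid-search route for the kmeans builder

# Basic builder parameters go in as flat form fields; the hyper-parameter
# map is assumed here to be accepted as a JSON-encoded string.
payload = {
    "grid_id": "kmeans_grid_demo",                      # Destination id (In/Out)
    "training_frame": "my_frame.hex",                   # assumed frame already in the DKV
    "hyper_parameters": json.dumps({"k": [2, 3, 4]}),   # Grid search parameters (In/Out)
}

resp = requests.post(ENDPOINT, data=payload).json()
print(resp.get("total_models"))   # Number of all models generated by grid search (Out)
print(resp.get("job"))            # Job key to poll for completion (Out)
```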

GridsV99

grids
Grid[]
GridsOut

H2OErrorV3

timestamp
long
Milliseconds since the epoch for the time that this H2OError instance was created. Generally this is a short time since the underlying error occurred.Out
error_url
string
Error urlOut
msg
string
Message intended for the end user (a data scientist).Out
dev_msg
string
Potentially more detailed message intended for a developer (e.g. a front end engineer or someone designing a language binding).Out
http_status
int
HTTP status code for this error.Out
values
Map
Any values that are relevant to reporting or handling this error. Examples are a key name if the error is on a key, or a field name and object name if it’s on a specific field.Out
exception_type
string
Exception type, if any.Out
exception_msg
string
Raw exception message, if any.Out
stacktrace
string[]
Stacktrace, if any.Out
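
When a request fails, the response body follows this error schema: msg is aimed at the end user, while dev_msg, exception_type, and stacktrace help with debugging. A hedged sketch of inspecting such a response; the address and the nonexistent frame name are placeholders only.

```python
import requests

# Placeholder request against an assumed local H2O instance.
resp = requests.get("http://localhost:54321/3/Frames/does_not_exist.hex")

if resp.status_code != 200:
    err = resp.json()
    print(err.get("http_status"))      # HTTP status code for this error
    print(err.get("msg"))              # message intended for the end user
    print(err.get("dev_msg"))          # more detailed developer message
    print(err.get("exception_type"))   # exception type, if any
    for frame in err.get("stacktrace") or []:
        print(frame)
```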

H2OModelBuilderErrorV3

parameters
Parameters
Model builder parameters.Out
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut
timestamp
long
Milliseconds since the epoch for the time that this H2OError instance was created. Generally this is a short time since the underlying error occurred.Out
error_url
string
Error urlOut
msg
string
Message intended for the end user (a data scientist).Out
dev_msg
string
Potentially more detailed message intended for a developer (e.g. a front end engineer or someone designing a language binding).Out
http_status
int
HTTP status code for this error.Out
values
Map
Any values that are relevant to reporting or handling this error. Examples are a key name if the error is on a key, or a field name and object name if it’s on a specific field.Out
exception_type
string
Exception type, if any.Out
exception_msg
string
Raw exception message, if any.Out
stacktrace
string[]
Stacktrace, if any.Out

HeartBeatEvent

sends
int
number of sent heartbeatsIn
recvs
int
number of received heartbeatsIn
date
string
Time when the event was recorded. Format is hh:mm:ss:msIn
nanos
long
Time in nanosIn
type
enum
type of recorded eventIn

IOEvent

io_flavor
string
flavor of the recorded io (ice/hdfs/…)In
node
string
node where this io event happenedIn
data
string
data infoIn
date
string
Time when the event was recorded. Format is hh:mm:ss:msIn
nanos
long
Time in nanosIn
type
enum
type of recorded eventIn

ImportFilesV3

path
string
pathIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
files
string[]
filesOut
destination_frames
string[]
namesOut
fails
string[]
failsOut
dels
string[]
delsOut
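
ImportFiles takes a path and reports which files were found (files), the raw frame names created for them (destination_frames), and any failures (fails). A minimal sketch, assuming a local H2O instance at localhost:54321 and the /3/ImportFiles route; the CSV path is illustrative only.

```python
import requests

resp = requests.get(
    "http://localhost:54321/3/ImportFiles",   # assumed route for ImportFilesV3
    params={"path": "/tmp/example.csv"},      # illustrative path
).json()

print(resp["files"])                # files that were found
print(resp["destination_frames"])   # raw frame names created for them
print(resp["fails"])                # files that could not be imported
```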

InitIDV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
session_key
string
Session IDOut

InteractionV3

key
Key
Job KeyIn
description
string
Job descriptionIn
source_frame
Key
Input data frameIn/Out
factor_columns
string[]
Factor columnsIn/Out
pairwise
boolean
Whether to create pairwise quadratic interactions between factors (otherwise create one higher-order interaction). Only applicable if there are 3 or more factors.In/Out
max_factors
int
Max. number of factor levels in pair-wise interaction terms (if enforced, one extra catch-all factor will be made)In/Out
min_occurrence
int
Min. occurrence threshold for factor levels in pair-wise interaction termsIn/Out
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut

IoStatsEntry

backend
string
Back end typeOut
store_count
long
Number of store eventsOut
store_bytes
long
Cumulative stored bytesOut
delete_count
long
Number of delete eventsOut
load_count
long
Number of load eventsOut
load_bytes
long
Cumulative loaded bytesOut

JStackV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
traces
DStackTrace[]
StacktracesOut

JobKeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

JobV3

key
Key
Job KeyIn
description
string
Job descriptionIn
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut

JobsV3

job_id
Key
Optional Job identifierIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
jobs
Job[]
jobsOut
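
Long-running operations return a Job, and this schema returns a jobs array whose entries expose status and progress, so a client can poll until the job leaves the running state. A hedged polling sketch; the /3/Jobs/{job_id} route, the host, and the exact status strings are assumptions.

```python
import time
import requests

H2O = "http://localhost:54321"            # assumed local H2O instance

def wait_for_job(job_id, poll_secs=1.0):
    """Poll the Jobs endpoint until the job is no longer running (sketch)."""
    while True:
        resp = requests.get(H2O + "/3/Jobs/" + job_id).json()
        job = resp["jobs"][0]             # JobsV3 returns a list of jobs
        print(job["status"], job["progress"])
        if job["status"] != "RUNNING":    # e.g. DONE, FAILED, CANCELLED (assumed values)
            return job
        time.sleep(poll_secs)
```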

KMeansGridSearchV99

parameters
KMeansParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

KMeansModelOutputV3

centers
TwoDimTable
Cluster Centers[k][features]In
centers_std
TwoDimTable
Cluster Centers[k][features] on Standardized DataIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

KMeansModelV3

model_id
Key
Model keyIn/Out
parameters
KMeansParameters
The build parameters for the model (e.g. K for KMeans).Out
output
KMeansOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

KMeansParametersV3

user_points
Key
User-specified pointsIn
max_iterations
int
Maximum training iterationsIn
standardize
boolean
Standardize columnsIn
seed
long
RNG SeedIn
init
enum
Initialization modeIn
k
int
Number of clustersIn/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

KMeansV3

parameters
KMeansParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut
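
KMeansV3 wraps KMeansParameters in the standard ModelBuilder envelope: the request carries the builder parameters, and the response reports validation messages, error_count, and the job to poll. A minimal, hedged sketch of submitting a build; the /3/ModelBuilders/kmeans route, the host, and the frame name are assumptions of this sketch.

```python
import requests

H2O = "http://localhost:54321"                        # assumed local H2O instance

params = {
    "training_frame": "my_frame.hex",                 # assumed frame already parsed into H2O
    "k": 3,                                           # number of clusters
    "max_iterations": 10,                             # maximum training iterations
    "model_id": "kmeans_demo",                        # destination id for this model
}

resp = requests.post(H2O + "/3/ModelBuilders/kmeans", data=params).json()

if resp.get("error_count", 0) > 0:
    # Parameter validation messages explain what was rejected.
    for m in resp["messages"]:
        print(m)
else:
    print(resp["job"])   # Job key; poll the Jobs endpoint until the build finishes
```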

KeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

KeyedVoidV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

KillMinus3V3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

LogAndEchoV3

message
string
Message to be Logged and EchoedIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

LogsV3

nodeidx
int
Index of node to query ticks for (0-based). -1 means current node.In
name
string
Which specific log file to read from the log file directory. If left unspecified, the system chooses a default for you.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
log
string
Content of log fileOut

MakeGLMModelV3

model
Key
source modelIn
dest
Key
destination keyIn
names
string[]
coefficient namesIn
beta
double[]
new glm coefficientsIn
threshold
float
decision threshold for label-generationIn
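
MakeGLMModel creates a copy of an existing GLM model with user-supplied coefficients (names/beta) and, for binomial models, an optional decision threshold. A hedged sketch; the /3/MakeGLMModel route, the source model key, the coefficient values, and the encoding of array-valued fields are all assumptions.

```python
import json
import requests

H2O = "http://localhost:54321"                        # assumed local H2O instance

payload = {
    "model": "glm_source_model",                      # source model key (assumed to exist)
    "dest": "glm_custom_coeffs",                      # destination key for the new model
    # Array-valued fields are sent here as JSON-style strings; the exact
    # encoding the server expects is an assumption of this sketch.
    "names": json.dumps(["Intercept", "x1", "x2"]),   # coefficient names (illustrative)
    "beta": json.dumps([0.5, -1.2, 0.3]),             # new GLM coefficients (illustrative)
    "threshold": 0.5,                                 # decision threshold for label generation
}

resp = requests.post(H2O + "/3/MakeGLMModel", data=payload)
print(resp.status_code, resp.json())
```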

MetadataBase

num
int
Number for specifying an endpointIn
http_method
string
HTTP method (GET, POST, DELETE) if fetching by pathIn
path
string
Path for specifying an endpointIn
classname
string
Class name, for fetching docs for a schema (DEPRECATED)In
schemaname
string
Schema name (e.g., DocsV1), for fetching docs for a schemaIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
routes
Route[]
List of endpoint routesOut
schemas
SchemaMetadata[]
List of schemasOut
markdown
string
Table of Contents MarkdownOut

MetadataV3

num
int
Number for specifying an endpointIn
http_method
string
HTTP method (GET, POST, DELETE) if fetching by pathIn
path
string
Path for specifying an endpointIn
classname
string
Class name, for fetching docs for a schema (DEPRECATED)In
schemaname
string
Schema name (e.g., DocsV1), for fetching docs for a schemaIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
routes
Route[]
List of endpoint routesOut
schemas
SchemaMetadata[]
List of schemasOut
markdown
string
Table of Contents MarkdownOut
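
The Metadata schema backs the self-documenting side of the REST API: querying it returns route and schema descriptions plus generated Markdown. A hedged sketch, assuming the endpoint listing is served at /3/Metadata/endpoints on a local instance.

```python
import requests

H2O = "http://localhost:54321"   # assumed local H2O instance

# Assumed route that returns MetadataV3 with all registered endpoints.
meta = requests.get(H2O + "/3/Metadata/endpoints").json()

for route in meta.get("routes", [])[:5]:
    # Each entry follows RouteV3 (http_method, url_pattern, summary, ...).
    print(route["http_method"], route["url_pattern"], "-", route["summary"])
```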

MissingInserterV3

dataset
Key
datasetIn
fraction
double
Fraction of data to replace with a missing valueIn
seed
long
SeedIn
key
Key
Job KeyIn
description
string
Job descriptionIn
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut

ModelBuilderJobV3

key
Key
Job KeyIn
description
string
Job descriptionIn
dest
Key
destination keyIn/Out
parameters
Parameters
Model builder parameters.Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut

ModelBuilderSchema

parameters
Parameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

ModelBuildersBase

algo
string
Algo of ModelBuilder of interestIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
model_builders
Map
ModelBuildersOut

ModelBuildersV3

algo
string
Algo of ModelBuilder of interestIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
model_builders
Map
ModelBuildersOut

ModelBuildersVisibilityV3

value
string
Stable, Beta, ExperimentalIn/Out

ModelExportV3

model_id
Key
Name of Model of interestIn
dir
string
Destination directory (hdfs, s3, local)In
force
boolean
Overwrite the destination directory if it exists; if set to false, an exception is thrown instead.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
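
ModelExport writes a serialized model to a target directory (HDFS, S3, or local), with force controlling whether an existing file is overwritten. A heavily hedged sketch; the /99/Models.bin/{model_id} route is an assumption and may differ in your build, and the model key and directory are placeholders.

```python
import requests

H2O = "http://localhost:54321"          # assumed local H2O instance
MODEL_ID = "kmeans_demo"                # assumed existing model key

# Assumed export route; dir and force map onto the ModelExportV3 fields above.
resp = requests.get(
    H2O + "/99/Models.bin/" + MODEL_ID,
    params={"dir": "/tmp/h2o_models", "force": "true"},
)
print(resp.status_code)
```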

ModelIdV3

model_id
string
Model IDOut

ModelImportV3

model_id
Key
Save imported model under given key into DKV.In
dir
string
Source directory (hdfs, s3, local) containing serialized modelIn
force
boolean
Overwrite the existing model if it exists; if set to false, an exception is thrown insteadIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

ModelKeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

ModelMetricsAutoEncoderV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsBase

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsBinomialGLMV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
residual_deviance
double
residual devianceOut
null_deviance
double
null devianceOut
AIC
double
AICOut
null_degrees_of_freedom
long
null DOFOut
residual_degrees_of_freedom
long
residual DOFOut
r2
double
The R^2 for this scoring run.Out
logloss
double
The logarithmic loss for this scoring run.Out
AUC
double
The AUC for this scoring run.Out
Gini
double
The Gini score for this scoring run.Out
domain
string[]
The class labels of the response.Out
thresholds_and_metric_scores
TwoDimTable
The Metrics for various thresholds.Out
max_criteria_and_metric_scores
TwoDimTable
The Metrics for various criteria.Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsBinomialV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
r2
double
The R^2 for this scoring run.Out
logloss
double
The logarithmic loss for this scoring run.Out
AUC
double
The AUC for this scoring run.Out
Gini
double
The Gini score for this scoring run.Out
domain
string[]
The class labels of the response.Out
thresholds_and_metric_scores
TwoDimTable
The Metrics for various thresholds.Out
max_criteria_and_metric_scores
TwoDimTable
The Metrics for various criteria.Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsClusteringV3

tot_withinss
double
Within Cluster Sum of Square ErrorIn
totss
double
Total Sum of Square Error to Grand MeanIn
betweenss
double
Between Cluster Sum of Square ErrorIn
centroid_stats
TwoDimTable
Centroid StatisticsIn
model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsGLRMV99

numerr
double
Sum of Squared Error (Numeric Cols)In
caterr
double
Misclassification Error (Categorical Cols)In
numcnt
long
Number of Non-Missing Numeric ValuesIn
catcnt
long
Number of Non-Missing Categorical ValuesIn
model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsListSchemaV3

model
Key
Key of Model of interest (optional)In
frame
Key
Key of Frame of interest (optional)In
reconstruction_error
boolean
Compute reconstruction error (optional, only for Deep Learning AutoEncoder models)In
reconstruction_error_per_feature
boolean
Compute reconstruction error per feature (optional, only for Deep Learning AutoEncoder models)In
deep_features_hidden_layer
int
Extract Deep Features for given hidden layer (optional, only for Deep Learning models)In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
predictions_frame
Key
Key of predictions frame, if predictions are requested (optional)In/Out
model_metrics
ModelMetrics[]
ModelMetricsOut
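
This schema both filters existing metrics (by model and/or frame) and, when posted against a model/frame pair, triggers a new scoring run whose results land in model_metrics. A hedged sketch, assuming the route /3/ModelMetrics/models/{model}/frames/{frame} and existing model and frame keys.

```python
import requests

H2O = "http://localhost:54321"                        # assumed local H2O instance
MODEL, FRAME = "kmeans_demo", "my_frame.hex"          # assumed existing keys

# Assumed scoring route; the response follows ModelMetricsListSchemaV3.
url = H2O + "/3/ModelMetrics/models/" + MODEL + "/frames/" + FRAME
resp = requests.post(url).json()

for mm in resp.get("model_metrics", []):
    print(mm.get("model_category"), mm.get("MSE"))    # e.g. Clustering, mean squared error
```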

ModelMetricsMultinomialGLMV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
residual_deviance
double
residual devianceOut
null_deviance
double
null devianceOut
AIC
double
AICOut
null_degrees_of_freedom
long
null DOFOut
residual_degrees_of_freedom
long
residual DOFOut
r2
double
The R^2 for this scoring run.Out
hit_ratio_table
TwoDimTable
The hit ratio table for this scoring run.Out
cm
ConfusionMatrix
The ConfusionMatrix object for this scoring run.Out
logloss
double
The logarithmic loss for this scoring run.Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsMultinomialV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
r2
double
The R^2 for this scoring run.Out
hit_ratio_table
TwoDimTable
The hit ratio table for this scoring run.Out
cm
ConfusionMatrix
The ConfusionMatrix object for this scoring run.Out
logloss
double
The logarithmic loss for this scoring run.Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsPCAV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsRegressionGLMV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
residual_deviance
double
residual devianceOut
null_deviance
double
null devianceOut
AIC
double
AICOut
null_degrees_of_freedom
long
null DOFOut
residual_degrees_of_freedom
long
residual DOFOut
r2
double
The R^2 for this scoring run.Out
mean_residual_deviance
double
The mean residual deviance for this scoring run.Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsRegressionV3

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
r2
double
The R^2 for this scoring run.Out
mean_residual_deviance
double
The mean residual deviance for this scoring run.Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelMetricsSVDV99

model
Key
The model used for this scoring run.In/Out
model_checksum
long
The checksum for the model used for this scoring run.In/Out
frame
Key
The frame used for this scoring run.In/Out
frame_checksum
long
The checksum for the frame used for this scoring run.In/Out
description
string
Optional description for this scoring run (to note out-of-bag, sampled data, etc.)Out
model_category
enum
The category (e.g., Clustering) for the model used for this scoring run.Out
scoring_time
long
The time in ms since the epoch for the start of this scoring run.Out
predictions
Frame
Predictions Frame.Out
MSE
double
The Mean Squared Error of the prediction for this scoring run.Out

ModelOutputSchema

names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

ModelParameterSchemaV3

is_member_of_frames
string[]
For Vec-type fields this is the set of Frame-type fields which must contain the named column; for example, for a SupervisedModel the response_column must be in both the training_frame and (if it’s set) the validation_frameIn
is_mutually_exclusive_with
string[]
For Vec-type fields this is the set of other Vec-type fields which must contain mutually exclusive values; for example, for a SupervisedModel the response_column must be mutually exclusive with the weights_columnIn
name
string
name in the JSON, e.g. “lambda”Out
label
string
label in the UI, e.g. “lambda”Out
help
string
help for the UI, e.g. “regularization multiplier, typically used for foo bar baz etc.”Out
required
boolean
the field is requiredOut
type
string
Java type, e.g. “double”Out
default_value
Polymorphic
default value, e.g. 1Out
actual_value
Polymorphic
actual value as set by the user and / or modified by the ModelBuilder, e.g., 10Out
level
string
the importance of the parameter, used by the UI, e.g. “critical”, “extended” or “expert”Out
values
string[]
list of valid values for use by the front-endOut
gridable
boolean
Parameter can be used in grid callOut

ModelParametersSchema

model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

ModelSchema

model_id
Key
Model keyIn/Out
parameters
Parameters
The build parameters for the model (e.g. K for KMeans).Out
output
Output
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

ModelSchemaBase

model_id
Key
Model keyIn/Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

ModelSynopsisV3

model_id
Key
Model keyIn/Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

ModelsBase

model_id
Key
Name of Model of interestIn
preview
boolean
Return potentially abridged model suitable for viewing in a browserIn
find_compatible_frames
boolean
Find and return compatible frames?In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
models
Iced[]
ModelsOut
compatible_frames
Frame[]
Compatible framesOut

ModelsV3

model_id
Key
Name of Model of interestIn
preview
boolean
Return potentially abridged model suitable for viewing in a browserIn
find_compatible_frames
boolean
Find and return compatible frames?In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
models
Iced[]
ModelsOut
compatible_frames
Frame[]
Compatible framesOut

NaiveBayesGridSearchV99

parameters
NaiveBayesParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

NaiveBayesModelOutputV3

levels
string[]
Categorical levels of the responseIn
apriori
TwoDimTable
A-priori probabilities of the responseIn
pcond
TwoDimTable[]
Conditional probabilities of the predictorsIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

NaiveBayesModelV3

model_id
Key
Model keyIn/Out
parameters
NaiveBayesParameters
The build parameters for the model (e.g. K for KMeans).Out
output
NaiveBayesOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

NaiveBayesParametersV3

laplace
double
Laplace smoothing parameterIn
min_sdev
double
Min. standard deviation to use for observations with not enough dataIn
eps_sdev
double
Cutoff below which standard deviation is replaced with min_sdevIn
min_prob
double
Min. probability to use for observations with not enough dataIn
eps_prob
double
Cutoff below which probability is replaced with min_probIn
compute_metrics
boolean
Compute metrics on training dataIn
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

NaiveBayesV3

parameters
NaiveBayesParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

NetworkBenchV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
results
TwoDimTable[]
NetworkBenchResultsOut

NetworkEvent

is_send
boolean
Boolean flag distinguishing between sends (true) and receives (false)In
protocol
string
network protocol (UDP/TCP)In
msg_type
string
UDP type (exec, ack, ackack, …)In
from
string
Sending nodeIn
to
string
Receiving nodeIn
data
string
Pretty print of the first few bytes of the msg payload. Contains class name for tasks.In
date
string
Time when the event was recorded. Format is hh:mm:ss:msIn
nanos
long
Time in nanosIn
type
enum
type of recorded eventIn

NetworkTestV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
microseconds_collective
double[]
Collective broadcast/reduce times in microseconds (for each message size)Out
bandwidths_collective
double[]
Collective bandwidths in Bytes/sec (for each message size, for each node)Out
microseconds
double[][]
Round-trip times in microseconds (for each message size, for each node)Out
bandwidths
double[][]
Bi-directional bandwidths in Bytes/sec (for each message size, for each node)Out
nodes
string[]
NodesOut
table
TwoDimTable
NetworkTestResultsOut

NodePersistentStorageEntryV3

category
string
Category nameOut
name
string
Key nameOut
size
long
Size in bytes of valueOut
timestamp_millis
long
Epoch time in milliseconds of when the value was writtenOut

NodePersistentStorageV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
category
string
Category nameIn/Out
name
string
Key nameIn/Out
value
string
ValueIn/Out
configured
boolean
ConfiguredOut
exists
boolean
ExistsOut
entries
Iced[]
List of entriesOut

NodeV3

h2o
string
IPOut
ip_port
string
IP address and port in the form a.b.c.d:eOut
healthy
boolean
(now - last_ping) < HeartbeatThread.TIMEOUTOut
last_ping
long
Time (in msec) of last pingOut
sys_load
float
System load; average #runnables/#coresOut
gflops
double
Linpack GFlopsOut
mem_bw
double
Memory BandwidthOut
total_value_size
long
Data on Node (memory or disk)Out
mem_value_size
long
Data on Node (memory only)Out
num_keys
int
Number of local keysOut
free_mem
long
Free heapOut
tot_mem
long
Total heapOut
max_mem
long
Max heapOut
free_disk
long
Free diskOut
max_disk
long
Max diskOut
rpcs_active
int
Active Remote Procedure CallsOut
fjthrds
short[]
F/J Thread count, by priorityOut
fjqueue
short[]
F/J Task count, by priorityOut
tcps_active
int
Open TCP connectionsOut
open_fds
int
Open File DescriptorsOut
num_cpus
int
num_cpusOut
cpus_allowed
int
cpus_allowedOut
nthreads
int
nthreadsOut
my_cpu_pct
int
System CPU percentage used by this H2O process in last intervalOut
sys_cpu_pct
int
System CPU percentage used by everything in last intervalOut
pid
string
PIDOut
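
NodeV3 describes one member of the cluster: health, heap, disk, CPU load, and open connections. These entries are typically returned as part of the cloud status; the sketch below assumes they are available in the nodes array of a /3/Cloud response on a local instance.

```python
import requests

H2O = "http://localhost:54321"                  # assumed local H2O instance

cloud = requests.get(H2O + "/3/Cloud").json()   # assumed cloud-status route

for node in cloud.get("nodes", []):
    free_gib = node["free_mem"] / (1024.0 ** 3)          # free heap, bytes -> GiB
    print(node["ip_port"],                               # IP address and port
          "healthy:", node["healthy"],                    # heartbeat within timeout
          "free heap (GiB): %.2f" % free_gib)
```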

PCAGridSearchV99

parameters
PCAParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Number of all models generated by grid search.Out
job
Job
Job Key.Out

PCAModelOutputV3

importance
TwoDimTable
Standard deviation and importance of each principal componentIn
eigenvectors
TwoDimTable
Principal components matrixIn
objective
double
Final value of GLRM squared loss functionIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

PCAModelV3

model_id
Key
Model keyIn/Out
parameters
PCAParameters
The build parameters for the model (e.g. K for KMeans).Out
output
PCAOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

PCAParametersV3

transform
enum
Transformation of training dataIn
pca_method
enum
Method for computing PCA (Caution: Power and GLRM are currently experimental and unstable)In
k
int
Rank of matrix approximationIn/Out
max_iterations
int
Maximum training iterationsIn/Out
seed
long
RNG seed for initializationIn/Out
use_all_factor_levels
boolean
Whether first factor level is included in each categorical expansionIn/Out
compute_metrics
boolean
Whether to compute metrics on the training dataIn/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

PCAV3

parameters
PCAParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

ParseSetupV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
source_frames
Key[]
Source framesIn/Out
parse_type
enum
Parser typeIn/Out
separator
byte
Field separatorIn/Out
single_quotes
boolean
Single quotesIn/Out
check_header
int
Check header: 0 means guess, +1 means 1st line is header not data, -1 means 1st line is data not headerIn/Out
column_names
string[]
Column namesIn/Out
column_types
string[]
Value types for columnsIn/Out
na_strings
string[][]
NA strings for columnsIn/Out
column_name_filter
string
Regex for names of columns to returnIn/Out
column_offset
int
Column offset to returnIn/Out
column_count
int
Number of columns to returnIn/Out
total_filtered_column_count
int
Total number of columns we would return with no column paginationIn/Out
destination_frame
string
Suggested nameOut
header_lines
long
Number of header lines foundOut
number_columns
int
Number of columnsOut
data
string[][]
Sample dataOut
chunk_size
int
Size of individual parse tasksOut

ParseV3

destination_frame
Key
Final frame nameIn
source_frames
Key[]
Source framesIn
parse_type
enum
Parser typeIn
separator
byte
Field separatorIn
single_quotes
boolean
Single QuotesIn
check_header
int
Check header: 0 means guess, +1 means 1st line is header not data, -1 means 1st line is data not headerIn
number_columns
int
Number of columnsIn
column_names
string[]
Column namesIn
column_types
string[]
Value types for columnsIn
domains
string[][]
Domains for categorical columnsIn
na_strings
string[][]
NA strings for columnsIn
chunk_size
int
Size of individual parse tasksIn
delete_on_done
boolean
Delete input key after parseIn
blocking
boolean
Block until the parse completes (as opposed to returning early and requiring polling)In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
job
Job
Parse jobOut
rows
long
RowsOut
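
Parsing is a two-step handshake: ParseSetup guesses the separator, header, and column types from the imported files, and Parse then launches the actual parse job using (possibly edited) values from that guess. A hedged end-to-end sketch; the /3/ParseSetup and /3/Parse routes, the source frame key, the JSON-style encoding of array fields, and the omission of error handling are all assumptions of this sketch.

```python
import json
import requests

H2O = "http://localhost:54321"                   # assumed local H2O instance
SRC = "nfs://tmp/example.csv"                    # illustrative source frame key from ImportFiles

# Step 1: ask H2O to guess the parse configuration (ParseSetupV3).
setup = requests.post(H2O + "/3/ParseSetup",
                      data={"source_frames": json.dumps([SRC])}).json()

# Step 2: launch the parse job (ParseV3), echoing back the guessed settings.
parse = requests.post(H2O + "/3/Parse", data={
    "destination_frame": setup["destination_frame"],     # suggested name
    "source_frames": json.dumps([SRC]),
    "parse_type": setup["parse_type"],
    "separator": setup["separator"],
    "check_header": setup["check_header"],
    "number_columns": setup["number_columns"],
    "column_names": json.dumps(setup["column_names"]),
    "column_types": json.dumps(setup["column_types"]),
    "chunk_size": setup["chunk_size"],
    "delete_on_done": "true",
}).json()

print(parse["job"])   # poll this job until the frame is ready
```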

ProfilerNodeEntryV3

stacktrace
string
Stack traceOut
count
int
Profile CountOut

ProfilerNodeV3

node_name
string
Node namesOut
timestamp
long
Timestamp (millis since epoch)Out
entries
Iced[]
Profile entry listOut

ProfilerV3

depth
int
Stack trace depthIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
nodes
Iced[]
(No description available)Out

QuantileParametersV3

probs
double[]
Probabilities for quantilesIn
combine_method
enum
How to combine quantiles for even sample sizesIn
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

QuantileV3

parameters
QuantileParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

RapidsFrameV3

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn
key
Key
Frame resultOut
num_rows
long
Rows in Frame resultOut
num_cols
int
Columns in Frame resultOut

RapidsFunctionV3

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn
funstr
string
Function resultOut

RapidsNumberV3

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn
scalar
double
Number resultOut

RapidsNumbersV3

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn
scalar
double[]
Number array resultOut

RapidsSchema

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn

RapidsStringV3

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn
scalar
string
String resultOut

RapidsStringsV3

ast
string
An Abstract Syntax Tree.In
id
string
Key name to assign Frame resultsIn
scalar
string[]
String array resultOut

RapidsV99

ast
string
An Abstract Syntax Tree.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
error
string
Parsing error, if anyOut
scalar
double
Scalar resultOut
funstr
string
Function resultOut
string
string
String resultOut
key
Key
Result keyOut
num_rows
long
Rows in Frame resultOut
num_cols
int
Columns in Frame resultOut
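
Rapids executes an expression submitted as an abstract syntax tree string; depending on the expression, the result comes back as a scalar, a string, a function, or a frame key with its row and column counts. A hedged sketch; the /99/Rapids route and the expression syntax shown are assumptions for illustration only.

```python
import requests

H2O = "http://localhost:54321"                          # assumed local H2O instance

# Illustrative AST: a simple arithmetic expression that should yield a scalar.
resp = requests.post(H2O + "/99/Rapids", data={"ast": "(+ 1 2)"}).json()

if resp.get("error"):
    print("parsing error:", resp["error"])
elif resp.get("key"):
    # Frame result: key plus dimensions.
    print("frame result:", resp["key"], resp["num_rows"], "x", resp["num_cols"])
else:
    print("scalar result:", resp.get("scalar"))
```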

RemoveAllV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

RemoveV3

key
Key
Object to be removed.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
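
RemoveV3 deletes a single keyed object from the distributed store, while RemoveAllV3 (above) clears every key. A hedged sketch, assuming both operations are exposed as DELETE requests under /3/DKV on a local instance; the frame key is a placeholder.

```python
import requests

H2O = "http://localhost:54321"                  # assumed local H2O instance

# Remove one object by key (assumed route /3/DKV/{key}).
requests.delete(H2O + "/3/DKV/my_frame.hex")

# Remove everything in the store (assumed route /3/DKV). Use with care.
# requests.delete(H2O + "/3/DKV")
```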

RequestSchema

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

RouteBase

http_method
string
(No description available)Out
url_pattern
string
(No description available)Out
summary
string
(No description available)Out
handler_class
string
(No description available)Out
handler_method
string
(No description available)Out
input_schema
string
(No description available)Out
output_schema
string
(No description available)Out
doc_method
string
(No description available)Out
path_params
string[]
(No description available)Out
markdown
string
(No description available)Out

RouteV3

http_method
string
(No description available)Out
url_pattern
string
(No description available)Out
summary
string
(No description available)Out
handler_class
string
(No description available)Out
handler_method
string
(No description available)Out
input_schema
string
(No description available)Out
output_schema
string
(No description available)Out
doc_method
string
(No description available)Out
path_params
string[]
(No description available)Out
markdown
string
(No description available)Out
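
RouteBase and RouteV3 describe the entries returned by the endpoint-metadata route. A sketch of listing them, assuming they are served under `/3/Metadata/endpoints` and wrapped in a `routes` array:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# Each entry in "routes" follows the RouteV3 layout above.
routes = requests.get(f"{H2O}/3/Metadata/endpoints").json()["routes"]
for r in routes[:5]:
    print(r["http_method"], r["url_pattern"],
          "->", r["handler_class"] + "." + r["handler_method"])
```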

SVDGridSearchV99

parameters
SVDParameters
Basic model builder parameters.In
hyper_parameters
Map
Grid search parameters.In/Out
grid_id
Key
Destination id for this grid; auto-generated if not specifiedIn/Out
total_models
int
Total number of models generated by the grid search.Out
job
Job
Job Key.Out

SVDModelOutputV99

v_key
Key
Frame key of right singular vectorsIn
d
double[]
Singular valuesIn
u_key
Key
Frame key of left singular vectorsIn
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

SVDModelV99

model_id
Key
Model keyIn/Out
parameters
SVDParameters
The build parameters for the model (e.g. K for KMeans).Out
output
SVDOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

SVDParametersV99

transform
enum
Transformation of training dataIn
svd_method
enum
Method for computing SVD (Caution: Power and Randomized are currently experimental and unstable)In
nv
int
Number of right singular vectorsIn
max_iterations
int
Maximum iterationsIn
seed
long
RNG seed for k-means++ initializationIn
keep_u
boolean
Save left singular vectors?In
u_name
string
Frame key to save left singular vectorsIn
use_all_factor_levels
boolean
Whether the first factor level is included in each categorical expansionIn/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

SVDV99

parameters
SVDParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut
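
Putting the SVD schemas together, a hedged sketch of starting an SVD build over REST; the endpoint path, the form encoding, and the frame key are assumptions for illustration:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# Start an SVD build (SVDV99); "my_frame.hex" is a hypothetical frame key.
params = {
    "training_frame": "my_frame.hex",
    "nv": 4,                        # number of right singular vectors
    "transform": "STANDARDIZE",
    "max_iterations": 1000,
}
resp = requests.post(f"{H2O}/99/ModelBuilders/svd", data=params).json()

# Parameter problems come back as ValidationMessage entries.
if resp["error_count"] > 0:
    for m in resp["messages"]:
        print(m["message_type"], m["field_name"], m["message"])
else:
    print("job key:", resp["job"]["key"]["name"])
```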

Schema

(No fields)

SchemaMetadataBase

version
int
Version number of the Schema.In
name
string
Simple name of the Schema. NOTE: the schema_names form a single namespace.In
superclass
string
Simple name of the superclass of the Schema. NOTE: the schema_names form a single namespace.In
type
string
Simple name of H2O type that this Schema represents. Must not be changed after creation (treat as final).In
fields
FieldMetadata[]
All the public fields of the schemaOut
markdown
string
Documentation for the schema in Markdown format with GitHub extensionsOut

SchemaMetadataV3

version
int
Version number of the Schema.In
name
string
Simple name of the Schema. NOTE: the schema_names form a single namespace.In
superclass
string
Simple name of the superclass of the Schema. NOTE: the schema_names form a single namespace.In
type
string
Simple name of H2O type that this Schema represents. Must not be changed after creation (treat as final).In
fields
FieldMetadata[]
All the public fields of the schemaOut
markdown
string
Documentation for the schema in Markdown format with GitHub extensionsOut
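
A sketch of retrieving this metadata for one schema, assuming it is served under `/3/Metadata/schemas/<name>` and wrapped in a `schemas` array, and that each FieldMetadata entry carries `name` and `type`; the schema name is just an example:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# "FrameV3" is just an example schema name.
meta = requests.get(f"{H2O}/3/Metadata/schemas/FrameV3").json()
schema = meta["schemas"][0]
print(schema["name"], "extends", schema["superclass"])
for field in schema["fields"][:5]:
    print(" ", field["name"], ":", field["type"])
```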

SharedTreeModelOutputV3

variable_importances
TwoDimTable
Variable ImportancesOut
init_f
double
The Intercept term, the initial model function value to which trees make adjustmentsOut
names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

SharedTreeModelV3

model_id
Key
Model keyIn/Out
parameters
Parameters
The build parameters for the model (e.g. K for KMeans).Out
output
Output
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

SharedTreeParametersV3

ntrees
int
Number of trees.In
max_depth
int
Maximum tree depth.In
min_rows
double
Fewest allowed (weighted) observations in a leaf (in R called ‘nodesize’).In
nbins
int
For numerical columns (real/int), build a histogram of (at least) this many bins, then split at the best pointIn
nbins_top_level
int
For numerical columns (real/int), build a histogram of (at most) this many bins at the root level, then decrease by factor of two per levelIn
nbins_cats
int
For categorical columns (factors), build a histogram of this many bins, then split at the best point. Higher values can lead to more overfitting.In
r2_stopping
double
Stop making trees when the R^2 metric equals or exceeds thisIn
seed
long
Seed for pseudo random number generator (if applicable)In
build_tree_one_node
boolean
Run on one node only; no network overhead, but fewer CPUs are used. Suitable for small datasets.In
sample_rate
float
Row sample rate (from 0.0 to 1.0)In
balance_classes
boolean
Balance training data class counts via over/under-sampling (for imbalanced data).In/Out
class_sampling_factors
float[]
Desired over/under-sampling ratios per class (in lexicographic order). If not specified, sampling factors will be automatically computed to obtain class balance during training. Requires balance_classes.In/Out
max_after_balance_size
float
Maximum relative size of the training data after balancing class counts (can be less than 1.0). Requires balance_classes.In/Out
max_confusion_matrix_size
int
Maximum size (# classes) for confusion matrices to be printed in the LogsIn/Out
max_hit_ratio_k
int
Max. number (top K) of predictions to use for hit ratio computation (for multi-class only, 0 to disable)In/Out
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out
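
The stopping_rounds/stopping_tolerance rule above compares moving averages of the scoring history. A minimal sketch of that rule (not H2O's exact implementation):

```python
# Sketch only -- not H2O's exact code. Stop when the simple moving average
# of the last k scoring events fails to beat the moving average of the k
# events before them by at least stopping_tolerance (relative).
def should_stop(history, stopping_rounds, stopping_tolerance,
                larger_is_better=False):
    k = stopping_rounds
    if k == 0 or len(history) < 2 * k:          # 0 disables early stopping
        return False
    recent = sum(history[-k:]) / k
    reference = sum(history[-2 * k:-k]) / k
    if larger_is_better:                        # e.g., AUC
        return recent < reference * (1 + stopping_tolerance)
    return recent > reference * (1 - stopping_tolerance)   # e.g., logloss

# A logloss history that has flattened out: prints True.
print(should_stop([0.3291, 0.3291, 0.3291, 0.3290, 0.3290, 0.3290],
                  stopping_rounds=3, stopping_tolerance=1e-3))
```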

SharedTreeV3

parameters
Parameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut

ShutdownV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

SplitFrameV3

dataset
Key
DatasetIn
ratios
double[]
Split ratios - the resulting number of splits is ratios.length+1In
key
Key
Job KeyIn
description
string
Job descriptionIn
destination_frames
Key[]
Destination keys for each output frame split.In/Out
dest
Key
destination keyIn/Out
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut
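
A sketch of submitting a split job (SplitFrameV3); all frame names are hypothetical, and the JSON-string encoding of the array parameters is an assumption:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# ratios has length 1, so two output frames are produced.
job = requests.post(f"{H2O}/3/SplitFrame", data={
    "dataset": "my_frame.hex",
    "ratios": "[0.75]",
    "destination_frames": '["train.hex","valid.hex"]',
}).json()
print(job["key"]["name"], job["status"])
```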

StreamingSchema

(No fields)

SynonymV3

key
Key
A word2vec model key.In
target
string
The target string for which to find synonyms.In
cnt
int
Find the top cnt synonyms of the target word.In
synonyms
string[]
The synonyms.Out
cos_sim
float[]
The cosine similarities.Out
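
A sketch of querying synonyms from a trained word2vec model (SynonymV3); the endpoint path `/3/Word2VecSynonyms` and the model key are assumptions for illustration:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# Model key "w2v_model" and the endpoint path are illustrative only.
resp = requests.get(f"{H2O}/3/Word2VecSynonyms",
                    params={"key": "w2v_model", "target": "water",
                            "cnt": 5}).json()
for word, sim in zip(resp["synonyms"], resp["cos_sim"]):
    print(f"{word}: {sim:.3f}")
```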

TabulateV3

dataset
Key
DatasetIn
nbins_predictor
int
Number of bins for predictor columnIn
nbins_response
int
Number of bins for response columnIn
key
Key
Job KeyIn
description
string
Job descriptionIn
predictor
VecSpecifier
PredictorIn/Out
response
VecSpecifier
ResponseIn/Out
weight
VecSpecifier
Observation weights (optional)In/Out
dest
Key
destination keyIn/Out
count_table
TwoDimTable
Counts tableOut
response_table
TwoDimTable
Response tableOut
status
string
job statusOut
progress
float
progress, from 0 to 1Out
progress_msg
string
current progress status descriptionOut
start_time
long
Start timeOut
msec
long
Runtime in millisecondsOut
exception
string
exceptionOut
messages
ValidationMessage[]
Info, warning and error messages; NOTE: can be appended to while the Job is runningOut
error_count
int
Count of error messagesOut

TimelineV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
now
long
Current time in millis.Out
self
string
This nodeOut
events
Iced[]
recorded timeline eventsOut

TreeStatsV3

min_depth
int
Minimum tree depthIn
max_depth
int
Maximum tree depthIn
mean_depth
float
Mean tree depthIn
min_leaves
int
Minimum number of leavesIn
max_leaves
int
Maximum number of leavesIn
mean_leaves
float
Mean number of leavesIn

TwoDimTableBase

name
string
Table NameOut
description
string
Table DescriptionOut
columns
Iced[]
Column SpecificationOut
rowcount
int
Number of RowsOut
data
Polymorphic[][]
Table Data (col-major)Out

TwoDimTableV3

name
string
Table NameOut
description
string
Table DescriptionOut
columns
Iced[]
Column SpecificationOut
rowcount
int
Number of RowsOut
data
Polymorphic[][]
Table Data (col-major)Out
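
Because the data field is column-major, a small helper is handy for turning a TwoDimTable payload back into rows; this sketch assumes each column specification carries a `name`:

```python
# data[c][r] holds row r of column c, so transpose into row dicts.
def table_to_rows(table):
    names = [col["name"] for col in table["columns"]]
    return [
        {name: table["data"][c][r] for c, name in enumerate(names)}
        for r in range(table["rowcount"])
    ]
```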

TypeaheadV3

src
string
training_frameIn
limit
int
limitIn
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
matches
string[]
matchesOut
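
A sketch of the typeahead call, assuming the file-path variant is served at `/3/Typeahead/files`; the path prefix is hypothetical:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# Complete file paths starting with a (hypothetical) prefix.
resp = requests.get(f"{H2O}/3/Typeahead/files",
                    params={"src": "/data/", "limit": 10}).json()
print(resp["matches"])
```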

UnlockKeysV3

_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In

ValidationMessageBase

message_type
string
Type of validation message (ERROR, WARN, INFO, HIDE)Out
field_name
string
Field to which the message appliesOut
message
string
Message textOut

ValidationMessageV3

message_type
string
Type of validation message (ERROR, WARN, INFO, HIDE)Out
field_name
string
Field to which the message appliesOut
message
string
Message textOut

VarImpBase

varimp
float[]
Variable importance of individual variablesOut
names
string[]
Names of variablesOut

VarImpV3

varimp
float[]
Variable importance of individual variablesOut
names
string[]
Names of variablesOut

VecKeyV3

name
string
Name (string representation) for this Key.In/Out
type
string
Name (string representation) for the type of Keyed this Key points to.In/Out
URL
string
URL for the resource that this Key points to, if one exists.In/Out

WaterMeterCpuTicksV3

nodeidx
int
Index of node to query ticks for (0-based)In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
cpu_ticks
long[][]
array of tick counts per coreOut

WaterMeterIoV3

nodeidx
int
Index of node to query ticks for (0-based)In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
persist_stats
Iced[]
array of IO infoOut

Word2VecModelOutputV3

names
string[]
Column namesOut
domains
string[][]
Domains for categorical columnsOut
cross_validation_models
Key[]
Cross-validation models (model ids)Out
cross_validation_predictions
Key[]
Cross-validation predictions (frame ids)Out
model_category
enum
Category of the model (e.g., Binomial)Out
model_summary
TwoDimTable
Model summaryOut
scoring_history
TwoDimTable
Scoring historyOut
training_metrics
ModelMetrics
Training data model metricsOut
validation_metrics
ModelMetrics
Validation data model metricsOut
cross_validation_metrics
ModelMetrics
Cross-validation model metricsOut
status
string
Job statusOut
start_time
long
Start time in millisecondsOut
end_time
long
End time in millisecondsOut
run_time
long
Runtime in millisecondsOut
help
Map
Help information for output fieldsOut

Word2VecModelV3

model_id
Key
Model keyIn/Out
parameters
Word2VecParameters
The build parameters for the model (e.g. K for KMeans).Out
output
Word2VecOutput
The build output for the model (e.g. the cluster centers for KMeans).Out
compatible_frames
string[]
Compatible frames, if requestedOut
checksum
long
Checksum for all the things that go into building the Model.Out
algo
string
The algo name for this Model.Out
algo_full_name
string
The pretty algo name for this Model (e.g., Generalized Linear Model, rather than GLM).Out
response_column_name
string
The response column name for this Model (if applicable). Is null otherwise.Out
data_frame
Key
The Model’s training frame keyOut
timestamp
long
Timestamp for when this model was completedOut

Word2VecParametersV3

vecSize
int
Set size of word vectorsIn
windowSize
int
Set max skip length between wordsIn
sentSampleRate
float
Set threshold for occurrence of words. Those that appear with higher frequency in the training data will be randomly down-sampled; useful range is (0, 1e-5)In
normModel
enum
Use Hierarchical Softmax or Negative SamplingIn
negSampleCnt
int
Number of negative examples, common values are 3 - 10 (0 = not used)In
epochs
int
Number of training iterations to runIn
minWordFreq
int
Discard words that appear fewer than this many timesIn
initLearningRate
float
Set the starting learning rateIn
wordModel
enum
Use the continuous bag of words model or the Skip-Gram modelIn
model_id
Key
Destination id for this model; auto-generated if not specifiedIn/Out
training_frame
Key
Training frameIn/Out
validation_frame
Key
Validation frameIn/Out
nfolds
int
Number of folds for N-fold cross-validationIn/Out
keep_cross_validation_predictions
boolean
Keep cross-validation model predictionsIn/Out
response_column
VecSpecifier
Response columnIn/Out
weights_column
VecSpecifier
Column with observation weightsIn/Out
offset_column
VecSpecifier
Offset columnIn/Out
fold_column
VecSpecifier
Column with cross-validation fold index assignment per observationIn/Out
fold_assignment
enum
Cross-validation fold assignment scheme, if fold_column is not specifiedIn/Out
ignored_columns
string[]
Ignored columnsIn/Out
ignore_const_cols
boolean
Ignore constant columnsIn/Out
score_each_iteration
boolean
Whether to score during each iteration of model trainingIn/Out
checkpoint
Key
Model checkpoint to resume training withIn/Out
stopping_rounds
int
Early stopping based on convergence of stopping_metric. Stop if simple moving average of length k of the stopping_metric does not improve for k:=stopping_rounds scoring events (0 to disable)In/Out
stopping_metric
enum
Metric to use for early stopping (AUTO: logloss for classification, deviance for regression)In/Out
stopping_tolerance
double
Relative tolerance for metric-based stopping criterion (stop if relative improvement is not at least this much)In/Out

Word2VecV3

parameters
Word2VecParameters
Model builder parameters.In
__http_status
int
HTTP status to return for this build.In
_exclude_fields
string
Comma-separated list of JSON field paths to exclude from the result, used like: “/3/Frames?_exclude_fields=frames/frame_id/URL,__meta”In
algo
string
The algo name for this ModelBuilder.Out
algo_full_name
string
The pretty algo name for this ModelBuilder (e.g., Generalized Linear Model, rather than GLM).Out
can_build
enum[]
Model categories this ModelBuilder can build.Out
visibility
enum
Should the builder always be visible, be marked as beta, or only visible if the user starts up with the experimental flag?Out
job
Job
Job KeyOut
messages
ValidationMessage[]
Parameter validation messagesOut
error_count
int
Count of parameter validation errorsOut
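
Finally, a hedged sketch of kicking off a word2vec build (Word2VecV3) with the parameters above; the frame key and the "SkipGram" enum spelling are assumptions for illustration:

```python
import requests

H2O = "http://localhost:54321"  # assumed local H2O instance

# Parameter names follow Word2VecParametersV3.
params = {
    "training_frame": "text_frame.hex",
    "vecSize": 100,
    "windowSize": 5,
    "epochs": 5,
    "minWordFreq": 5,
    "wordModel": "SkipGram",
}
resp = requests.post(f"{H2O}/3/ModelBuilders/word2vec", data=params).json()
print("job key:", resp["job"]["key"]["name"])
```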