Get started with Sparkling Water in a few easy steps
1. Download Spark (if not already installed) from the Spark Downloads Page
Choose Spark release: 1.4.0
Choose a package type: Pre-built for Hadoop 2.4 and later
2. Download Sparkling Water and point it to the existing installation of Spark by setting the SPARK_HOME environment variable:
export SPARK_HOME='/path/to/spark/installation'
3. From your terminal, run:
cd ~/Downloads
unzip sparkling-water-1.4.3.zip
cd sparkling-water-1.4.3
bin/sparkling-shell
4. Create an H2O cloud inside the Spark cluster:
import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()
// Or if you know the number of Spark workers:
// val h2oContext = new H2OContext(sc).start( <number of Spark workers> )
import h2oContext._
5. Follow this demo, which imports airlines and weather data and runs predictions on delays.
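Before moving on, it helps to confirm that Spark and H2O can exchange data. The snippet below is a minimal sketch, not part of the demo: the Measurement case class and the 100-row RDD are made-up placeholders, and it assumes the asH2OFrame conversion exposed by H2OContext in the 1.4 line.
// Build a small Spark RDD and publish it to H2O as an H2OFrame.
case class Measurement(id: Int, value: Double)
val rdd = sc.parallelize(1 to 100).map(i => Measurement(i, i * 0.5))
// With `import h2oContext._` in scope (step 4), the conversion is one call:
val hf: H2OFrame = h2oContext.asH2OFrame(rdd)
// The frame now lives in the H2O cloud; it should report 100 rows.
println(hf.numRows())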
Launch Sparkling Water on Hadoop using YARN
1. Download Spark (if not already installed) from the Spark Downloads Page.
Choose Spark release: 1.4.0
Choose a package type: Pre-built for Hadoop 2.4 and later
2. Download Sparkling Water and point it to the existing installation of Spark by setting the SPARK_HOME environment variable:
wget http://h2o-release.s3.amazonaws.com/sparkling-water/rel-1.4/3/sparkling-water-1.4.3.zip
export SPARK_HOME='/path/to/spark/installation'
3. Set the HADOOP_CONF_DIR and Spark MASTER environment variables:
export HADOOP_CONF_DIR=/etc/hadoop/conf
export MASTER="yarn-client"
4. Launch the Sparkling Shell on YARN (the sparkling-shell script passes its options through to Spark):
unzip sparkling-water-1.4.3.zip
cd sparkling-water-1.4.3/
bin/sparkling-shell --num-executors 3 --executor-memory 2g --master yarn-client
5. Create an H2O cloud inside the Spark cluster:
import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()
import h2oContext._
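With the cloud running on YARN, H2O can parse data directly from HDFS into an H2OFrame. A minimal sketch, assuming the URI-based H2OFrame constructor shipped in this release; the HDFS path is a placeholder for a file on your own cluster:
import java.net.URI
// Parse a CSV from HDFS into the H2O cloud (placeholder path).
val airlines = new H2OFrame(new URI("hdfs:///path/to/airlines.csv"))
println(s"rows: ${airlines.numRows()}, cols: ${airlines.numCols()}")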
Launch H2O on a Standalone Spark Cluster
1. Download Spark (if not already installed) from the Spark Downloads Page.
Choose Spark release: 1.4.0
Choose a package type: Pre-built for Hadoop 2.4 and later
2. Download Sparkling Water and point it to the existing installation of Spark by setting the SPARK_HOME environment variable:
export SPARK_HOME='/path/to/spark/installation'
3. From your terminal, run:
cd ~/Downloads
unzip sparkling-water-1.4.3.zip
cd sparkling-water-1.4.3
bin/launch-spark-cloud.sh
export MASTER="spark://localhost:7077"
bin/sparkling-shell
4. Create an H2O cloud inside the Spark cluster:
import org.apache.spark.h2o._
val h2oContext = new H2OContext(sc).start()
// Or if you know the number of Spark workers:
// val h2oContext = new H2OContext(sc).start( <number of Spark workers> )
import h2oContext._
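Once a frame is in H2O, you can train a model on it from the same shell. A minimal sketch using the deep learning API bundled with H2O in this release; the train frame and the "Result" response column are hypothetical placeholders, not part of the steps above:
import hex.deeplearning.DeepLearning
import hex.deeplearning.DeepLearningModel.DeepLearningParameters
// Assume `train` is an H2OFrame you have already parsed, with a
// column named "Result" to predict (both placeholders).
val dlParams = new DeepLearningParameters()
dlParams._train = train._key
dlParams._response_column = "Result"
val dl = new DeepLearning(dlParams)
val dlModel = dl.trainModel.get   // blocks until training completes
// Score the training frame with the fitted model:
val predictions = dlModel.score(train)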
Gradle-style specification for Maven artifacts
See the h2o-droplets GitHub repository for a working example.
repositories {
  mavenCentral()
}

dependencies {
  compile "ai.h2o:sparkling-water-core_2.10:1.4.3"
}
See Maven Central for artifact details.
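If you build with sbt rather than Gradle, the same artifact can be pulled in with an equivalent dependency line (an sbt build is an assumption here; the coordinates are unchanged):
libraryDependencies += "ai.h2o" % "sparkling-water-core_2.10" % "1.4.3"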
R client
Please follow the installation instructions on the H2O-R page.
Python client
Please follow the installation instructions on the H2O-Python page.