Location of iced directory for the driver instance.
H2O log level for client running in Spark driver
Exact client port to access web UI. The value -1 means automatic search for a free port starting at spark.ext.h2o.port.base.
Configuration property - name of H2O cloud
Configuration property - timeout for H2O cloud formation (cloud up).
Configuration property - expected number of workers of H2O cloud. Value -1 means automatic detection of cluster size.
Starting size of cluster in case the size is not explicitly passed
Disable Google Analytics (GA) tracking
Configuration property - multiplication factor for dummy RDD generation. Size of the dummy RDD is PROP_CLUSTER_SIZE * PROP_DUMMY_RDD_MUL_FACTOR.
Enable hash login.
Path to Java KeyStore file.
Password for Java KeyStore file.
Enable LDAP login.
Login configuration file.
Subnet selector for H2O if the IP guess fails - useful if 'spark.ext.h2o.flatfile' is false and we are trying to guess the right IP on machines with multiple network interfaces
Location of iced directory for Spark nodes
Limit for number of threads used by H2O, default -1 means unlimited
Configuration property - base port used for individual H2O nodes configuration.
Configuration property - number of retries to create an RDD spread over all executors
Override user name for cluster.
Configuration property - use flatfile for H2O cloud formation.
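Taken together, these properties are ordinary Spark configuration options set on SparkConf before the H2O cloud is started. A minimal sketch follows; only spark.ext.h2o.port.base and spark.ext.h2o.flatfile appear verbatim in the descriptions above, so the remaining key names are illustrative and should be checked against the H2OConf constants of the Sparkling Water version in use:

  import org.apache.spark.SparkConf

  val conf = new SparkConf()
    .setAppName("sparkling-water-example")
    .set("spark.ext.h2o.port.base", "54321")           // base port for individual H2O nodes
    .set("spark.ext.h2o.flatfile", "true")             // use flatfile for H2O cloud formation
    .set("spark.ext.h2o.cloud.name", "my-h2o-cloud")   // assumed key: name of the H2O cloud
    .set("spark.ext.h2o.cluster.size", "3")            // assumed key: expected number of workers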
Implicit conversion from Frame to DataFrame
Implicit conversion from RDD[Primitive type] (where the primitive type can be String, Double, Float or Int) to the appropriate H2OFrame
Implicit conversion from typed RDD to H2O's DataFrame
Implicit conversion from Spark DataFrame to H2O's DataFrame
Convert given H2O frame into a Product RDD type
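The conversions above are typically brought into scope by importing from the H2OContext instance. A minimal sketch, assuming an already created SparkContext sc and H2OContext h2oContext; the exact frame type exposed (H2OFrame vs. H2O's DataFrame) differs between Sparkling Water versions:

  import org.apache.spark.h2o._   // assumed location of the H2OFrame type named above
  import h2oContext._             // bring this H2OContext's implicit conversions into scope

  // RDD of a primitive type -> H2O frame via the implicit conversion
  val doubles = sc.parallelize(Seq(1.0, 2.0, 3.0))
  val primitiveFrame: H2OFrame = doubles

  // typed RDD (Product / case class) -> H2O frame via the typed-RDD conversion
  case class Weather(temp: Double, humidity: Double)
  val weather = sc.parallelize(Seq(Weather(20.0, 0.5), Weather(25.0, 0.4)))
  val weatherFrame: H2OFrame = weather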
Get arguments for H2O client.
Produce arguments for H2O node based on this config.
array of H2O launcher command line arguments
Open H2O Flow running in this client.
Initialize Sparkling H2O and start H2O cloud.
Initialize Sparkling H2O and start H2O cloud with specified number of workers.
Stops H2O context.
Calls System.exit(), which kills the executor JVM.
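These calls bracket the lifetime of the H2O cloud. A minimal sketch; the constructor and the start/openFlow/stop method names follow the Sparkling Water 1.x API these descriptions appear to document, so treat them as assumptions:

  import org.apache.spark.{SparkConf, SparkContext}
  import org.apache.spark.h2o.H2OContext

  val sc = new SparkContext(new SparkConf().setAppName("sparkling-water-example"))
  val h2oContext = new H2OContext(sc).start()      // start H2O cloud, cluster size auto-detected
  // val h2oContext = new H2OContext(sc).start(3)  // or start with a specified number of workers

  h2oContext.openFlow()   // open H2O Flow running in this client (assumed method name)

  // ... work with H2O frames ...

  h2oContext.stop()       // stops the H2O context; note that it calls System.exit(),
                          // which kills the executor JVM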
Transform given Scala symbol to String
Returns the key of the given frame
Implicit conversion from RDD[Primitive type] (where the primitive type can be String, Boolean, Double, Float, Int, Long, Short or Byte) to the appropriate H2O DataFrame
Implicit conversion from typed RDD to H2O's DataFrame
Implicit conversion from Spark DataFrame to H2O's DataFrame
Convert given H2O frame into DataFrame type
(Since version 1.3) Use asDataFrame
Convert given H2O frame into an RDD type
(Since version 0.2.3) Use asRDD instead
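Going back from H2O to Spark uses the non-deprecated names referenced in the notes above, asDataFrame and asRDD. A minimal sketch, assuming an H2OContext h2oContext, an existing frame h2oFrame, and that asDataFrame resolves a SQLContext implicitly (the exact signature may differ between versions):

  import org.apache.spark.sql.SQLContext

  case class Person(name: String, age: Int)

  implicit val sqlContext = new SQLContext(sc)
  val sparkDf   = h2oContext.asDataFrame(h2oFrame)     // H2O frame -> Spark DataFrame
  val personRdd = h2oContext.asRDD[Person](h2oFrame)   // H2O frame -> RDD[Person] (Product type)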
Simple H2O context motivated by SQLContext.
It provides implicit conversion from RDD -> H2OLikeRDD and back.
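Put together, the round trip this context is designed for looks roughly like this (a sketch reusing the assumed names and the Weather case class from the examples above):

  import h2oContext._

  val trainRdd = sc.parallelize(Seq(Weather(20.0, 0.5), Weather(25.0, 0.4)))
  val trainFrame: H2OFrame = trainRdd                       // RDD -> H2O frame (implicit conversion)
  // ... run an H2O algorithm against trainFrame here ...
  val backInSpark = h2oContext.asRDD[Weather](trainFrame)   // H2O frame -> typed RDD, back in Spark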