The H2O Explainability Interface is a convenient wrapper to a number of explainability methods and visualizations in H2O. The function can be applied to a single model or a group of models and returns a list of explanations, which are individual units of explanation such as a partial dependence plot or a variable importance plot. Most of the explanations are visual (ggplot plots); these plots can also be created by individual utility functions.
```r
h2o.explain(
  object,
  newdata,
  columns = NULL,
  top_n_features = 5,
  include_explanations = "ALL",
  exclude_explanations = NULL,
  plot_overrides = NULL,
  background_frame = NULL
)
```
Argument | Description
---|---
object | A list of H2O models, an H2O AutoML instance, or an H2OFrame with a 'model_id' column (e.g., an H2OAutoML leaderboard).
newdata | An H2OFrame.
columns | A vector of column names or column indices to create plots with. If specified, the parameter top_n_features is ignored.
top_n_features | An integer specifying the number of columns to use, ranked by variable importance (where applicable).
include_explanations | If specified, return only the specified model explanations. (Mutually exclusive with exclude_explanations.)
exclude_explanations | Exclude the specified model explanations.
plot_overrides | Overrides for individual model explanations, given as a named list of per-plot settings (see the sketch after this table).
background_frame | Optional frame used as the source of baselines for marginal SHAP. Setting it enables SHAP calculation for more models, but can be more time- and memory-consuming.
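A minimal sketch of the selection arguments in use, assuming the `aml` and `test` objects from the example further below; the explanation names ("varimp", "shap_summary", "pdp") and the plot_overrides entry are illustrative assumptions, so check the accepted values for your h2o version:

```r
# Assumes `aml` (an H2OAutoML object) and `test` (an H2OFrame) already exist,
# as in the example below.

# Keep only a few explanations and the two most important features.
# The explanation names here are assumptions; verify them for your h2o version.
exa_small <- h2o.explain(
  aml, test,
  top_n_features = 2,
  include_explanations = c("varimp", "shap_summary", "pdp")
)

# Or generate everything except the partial dependence plots, overriding a
# per-plot setting (the override key and value shown are illustrative).
exa_custom <- h2o.explain(
  aml, test,
  exclude_explanations = "pdp",
  plot_overrides = list(shap_summary_plot = list(top_n_features = 3))
)
```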
The function returns a list of outputs with class "H2OExplanation".
```r
# NOT RUN {
library(h2o)
h2o.init()

# Import the wine dataset into H2O:
f <- "https://h2o-public-test-data.s3.amazonaws.com/smalldata/wine/winequality-redwhite-no-BOM.csv"
df <- h2o.importFile(f)

# Set the response
response <- "quality"

# Split the dataset into a train and test set:
splits <- h2o.splitFrame(df, ratios = 0.8, seed = 1)
train <- splits[[1]]
test <- splits[[2]]

# Build and train the model:
aml <- h2o.automl(y = response,
                  training_frame = train,
                  max_models = 10,
                  seed = 1)

# Create the explanation for the whole H2OAutoML object
exa <- h2o.explain(aml, test)
print(exa)

# Create the explanation for the leader model
exm <- h2o.explain(aml@leader, test)
print(exm)
# }
```
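The individual plots mentioned in the description can also be produced one at a time. A short sketch, assuming the `aml` and `test` objects from the example above and using the per-plot utility functions h2o.varimp_heatmap, h2o.pd_plot, and h2o.shap_summary_plot from the h2o package (verify names and signatures against your installed version):

```r
# Assumes `aml` and `test` from the example above.

# Variable importance heatmap across the AutoML models
h2o.varimp_heatmap(aml)

# Partial dependence plot of the leader model for one feature
# ("alcohol" is a column of the wine-quality dataset used above)
h2o.pd_plot(aml@leader, test, column = "alcohol")

# SHAP summary plot for the leader model
# (supported for tree-based models; may not apply to a stacked ensemble leader)
h2o.shap_summary_plot(aml@leader, test)
```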