SHAP explanation shows the contribution of each feature for a given instance. The sum of the feature contributions and the bias term equals the raw prediction of the model, i.e., the prediction before the inverse link function is applied. H2O implements TreeSHAP which, when features are correlated, can increase the contribution of a feature that had no influence on the prediction.

h2o.shap_explain_row_plot(
  model,
  newdata,
  row_index,
  columns = NULL,
  top_n_features = 10,
  plot_type = c("barplot", "breakdown"),
  contribution_type = c("both", "positive", "negative")
)
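
The additivity property described above can be checked directly with h2o.predict_contributions(), which returns the per-feature contributions together with a BiasTerm column. A minimal sketch, assuming a trained tree-based model `model` and an H2OFrame `newdata`:

# Per-feature SHAP contributions plus the bias term (one row per instance)
contribs <- h2o.predict_contributions(model, newdata)

# Summing the contributions and the bias term for row 1 recovers the raw,
# link-scale prediction (the log-odds for binomial models, the prediction
# itself for gaussian models)
sum(as.data.frame(contribs[1, ]))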

Arguments

model

An H2O tree-based model. This includes Random Forest, GBM, and XGBoost only. Must be a binary classification or regression model.

newdata

An H2OFrame used to determine feature contributions.

row_index

Row index of the instance to explain.

columns

List of column names or column indices to show. If specified, the top_n_features parameter is ignored.

top_n_features

Integer specifying the maximum number of columns to show, ranked by their contributions. When plot_type = "barplot", top_n_features features are chosen for each contribution_type.

plot_type

Either "barplot" or "breakdown". Defaults to "barplot".

contribution_type

Used when plot_type = "barplot"; plot "negative", "positive", or "both" contributions. Defaults to "both" (see the sketch below).
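
For illustration, these arguments can be combined as follows. A sketch, reusing the `gbm` model and `test` frame built in the Examples section; the column names are only placeholders for columns present in your data:

# Breakdown plot for row 1
h2o.shap_explain_row_plot(gbm, test, row_index = 1, plot_type = "breakdown")

# Barplot restricted to two chosen columns, showing only positive contributions
h2o.shap_explain_row_plot(gbm, test, row_index = 1,
                          columns = c("alcohol", "density"),
                          contribution_type = "positive")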

Value

A ggplot2 object.
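
Since the return value is an ordinary ggplot2 object, it can be customized with standard ggplot2 layers. A sketch, again assuming `gbm` and `test` from the Examples:

p <- h2o.shap_explain_row_plot(gbm, test, row_index = 1)
p + ggplot2::ggtitle("SHAP contributions for test row 1")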

Examples

# NOT RUN {
library(h2o)
h2o.init()

# Import the wine dataset into H2O:
f <- "https://h2o-public-test-data.s3.amazonaws.com/smalldata/wine/winequality-redwhite-no-BOM.csv"
df <- h2o.importFile(f)

# Set the response
response <- "quality"

# Split the dataset into a train and test set:
splits <- h2o.splitFrame(df, ratios = 0.8, seed = 1)
train <- splits[[1]]
test <- splits[[2]]

# Build and train the model:
gbm <- h2o.gbm(y = response,
               training_frame = train)

# Create the SHAP row explanation plot
shap_explain_row_plot <- h2o.shap_explain_row_plot(gbm, test, row_index = 1)
print(shap_explain_row_plot)
# }