te2rules.explainer.ModelExplainer

class te2rules.explainer.ModelExplainer(model: sklearn.ensemble | XGBClassifier, feature_names: List[str], verbose: bool = False)

The te2rules.explainer.ModelExplainer module explains tree ensemble (TE) models such as XGBoost and Random Forest, trained on a binary classification task, using a rule list. The algorithm used by TE2Rules is based on Apriori rule mining. For more details on the algorithm, please see our paper TE2Rules: Explaining Tree Ensembles using Rules.

__init__(model: sklearn.ensemble | XGBClassifier, feature_names: List[str], verbose: bool = False)

Initialize the explainer with the trained tree ensemble model and the feature names used by the model.

Parameters:
  • model (sklearn.ensemble.GradientBoostingClassifier or sklearn.ensemble.RandomForestClassifier or xgboost.XGBClassifier) – The trained Tree Ensemble model to be explained. The model is expected to be a binary classifier.

  • feature_names (List[str]) – List of feature names used by the model. Only alphanumeric characters and underscores are allowed in feature names.

  • verbose (bool, optional) – Optional flag to print more details about the run of the explanation algorithm. Default = False

Returns:

self – A ModelExplainer object initialized with the model to be explained.

Return type:

te2rules.explainer.ModelExplainer

Raises:
  • ValueError: – when model is not a supported Tree Ensemble Model. Currently, only scikit-learn’s GradientBoostingClassifier, RandomForestClassifier and xgboost’s XGBClassifier are supported.

  • ValueError: – when the feature_names list contains a name with any character other than alphanumeric characters or underscores.

Methods

__init__(model, feature_names[, verbose])

Initialize the explainer with the trained tree ensemble model and feature names used by the model.

explain(X, y[, num_stages, min_precision, ...])

A method to extract rule list from the tree ensemble model.

explain_instance_with_rules(X[, ...])

A method to explain the model output for a list of inputs using rules.

get_fidelity([X, y])

A method to evaluate the rule list extracted by the explain method.

predict(X)

A method to apply the rules found by the explain() method to given input data.