Class InterpretableModelHelper

Namespace
AiDotNet.Interpretability
Assembly
AiDotNet.dll

Provides static helper methods for model interpretability: global and local feature importance, SHAP values, LIME and anchor explanations, counterfactuals, feature interactions, partial dependence, model-specific interpretability information, and fairness validation.

public static class InterpretableModelHelper
Inheritance
object → InterpretableModelHelper

Methods

GenerateTextExplanationAsync<T>(IInterpretableModel<T>, Tensor<T>, Tensor<T>)

Generates a text explanation for a prediction.

public static Task<string> GenerateTextExplanationAsync<T>(IInterpretableModel<T> model, Tensor<T> input, Tensor<T> prediction)

Parameters

model IInterpretableModel<T>

The model to analyze.

input Tensor<T>

The input data.

prediction Tensor<T>

The prediction made by the model.

Returns

Task<string>

A text explanation of the prediction.

Type Parameters

T

The numeric type for calculations.
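
Examples

The following sketch is illustrative only: it assumes the caller already has a model, an input tensor, and the model's prediction for that input. Constructing Tensor<T> and IInterpretableModel<T> instances is not covered on this page, and those types may require additional using directives.

using System;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class TextExplanationExample
{
    // model, input, and prediction are assumed to be produced elsewhere.
    public static async Task PrintExplanationAsync(
        IInterpretableModel<double> model, Tensor<double> input, Tensor<double> prediction)
    {
        string explanation = await InterpretableModelHelper
            .GenerateTextExplanationAsync(model, input, prediction);
        Console.WriteLine(explanation);
    }
}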

GetAnchorExplanationAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>, T)

Gets an anchor explanation for a given input.

public static Task<AnchorExplanation<T>> GetAnchorExplanationAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input, T threshold)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

input Tensor<T>

The input to explain.

threshold T

The threshold for anchor construction.

Returns

Task<AnchorExplanation<T>>

An anchor explanation.

Type Parameters

T

The numeric type for calculations.
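
Examples

A hedged sketch: the threshold value 0.95 below is an illustrative choice, not a documented default, and the set of enabled methods is assumed to come from your own configuration.

using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class AnchorExample
{
    public static async Task<AnchorExplanation<double>> ExplainAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods,
        Tensor<double> input)
    {
        // 0.95 is an illustrative threshold value, not a library default.
        return await InterpretableModelHelper
            .GetAnchorExplanationAsync(model, enabledMethods, input, 0.95);
    }
}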

GetCounterfactualAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>, Tensor<T>, int)

Gets a counterfactual explanation for a given input and desired output.

public static Task<CounterfactualExplanation<T>> GetCounterfactualAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input, Tensor<T> desiredOutput, int maxChanges = 5)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

input Tensor<T>

The input to analyze.

desiredOutput Tensor<T>

The desired output.

maxChanges int

The maximum number of feature changes allowed in the counterfactual.

Returns

Task<CounterfactualExplanation<T>>

A counterfactual explanation.

Type Parameters

T

The numeric type for calculations.
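
Examples

A sketch showing the optional maxChanges argument (its default of 5 appears in the signature above); the input and desired-output tensors are assumed to be constructed by the caller.

using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class CounterfactualExample
{
    public static async Task<CounterfactualExplanation<double>> ExplainAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods,
        Tensor<double> input,
        Tensor<double> desiredOutput)
    {
        // Limit the counterfactual to at most 3 feature changes instead of the default 5.
        return await InterpretableModelHelper.GetCounterfactualAsync(
            model, enabledMethods, input, desiredOutput, maxChanges: 3);
    }
}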

GetFeatureInteractionAsync<T>(HashSet<InterpretationMethod>, int, int)

Gets the feature interaction effect between two features.

public static Task<T> GetFeatureInteractionAsync<T>(HashSet<InterpretationMethod> enabledMethods, int feature1Index, int feature2Index)

Parameters

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

feature1Index int

The index of the first feature.

feature2Index int

The index of the second feature.

Returns

Task<T>

The interaction effect value.

Type Parameters

T

The numeric type for calculations.
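
Examples

A sketch measuring the interaction between two feature indices. The indices are placeholders, the enabled-method set is assumed to be configured elsewhere, and the type argument must be given explicitly because it cannot be inferred from the parameters.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class InteractionExample
{
    public static async Task PrintInteractionAsync(HashSet<InterpretationMethod> enabledMethods)
    {
        // Indices 0 and 3 stand in for two features of interest.
        double interaction = await InterpretableModelHelper
            .GetFeatureInteractionAsync<double>(enabledMethods, 0, 3);
        Console.WriteLine($"Interaction(0, 3) = {interaction}");
    }
}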

GetGlobalFeatureImportanceAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>)

Gets the global feature importance across all predictions.

public static Task<Dictionary<int, T>> GetGlobalFeatureImportanceAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

Returns

Task<Dictionary<int, T>>

A dictionary mapping feature indices to importance scores.

Type Parameters

T

The numeric type for calculations.
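
Examples

A sketch that ranks features by their global importance score; it assumes a model and a configured method set, and uses double as the numeric type.

using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class GlobalImportanceExample
{
    public static async Task PrintRankedFeaturesAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods)
    {
        Dictionary<int, double> importance = await InterpretableModelHelper
            .GetGlobalFeatureImportanceAsync(model, enabledMethods);

        // Print features from most to least important.
        foreach (var kvp in importance.OrderByDescending(p => p.Value))
        {
            Console.WriteLine($"Feature {kvp.Key}: {kvp.Value}");
        }
    }
}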

GetLimeExplanationAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>, int)

Gets a LIME explanation for a specific input.

public static Task<LimeExplanation<T>> GetLimeExplanationAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input, int numFeatures = 10)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

input Tensor<T>

The input to explain.

numFeatures int

The number of features to include in the explanation.

Returns

Task<LimeExplanation<T>>

A LIME explanation.

Type Parameters

T

The numeric type for calculations.
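
Examples

A sketch requesting a LIME explanation limited to the 5 highest-weight features (the default is 10, per the signature above); the model, method set, and input are assumed to exist already.

using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class LimeExample
{
    public static async Task<LimeExplanation<double>> ExplainAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods,
        Tensor<double> input)
    {
        // Keep only the 5 most influential features in the explanation.
        return await InterpretableModelHelper.GetLimeExplanationAsync(
            model, enabledMethods, input, numFeatures: 5);
    }
}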

GetLocalFeatureImportanceAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>)

Gets the local feature importance for a specific input.

public static Task<Dictionary<int, T>> GetLocalFeatureImportanceAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

input Tensor<T>

The input to analyze.

Returns

Task<Dictionary<int, T>>

A dictionary mapping feature indices to importance scores.

Type Parameters

T

The numeric type for calculations.
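
Examples

A sketch that prints per-feature importance scores for a single input; the model, method set, and input tensor are assumed to be created elsewhere.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class LocalImportanceExample
{
    public static async Task PrintLocalImportanceAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods,
        Tensor<double> input)
    {
        Dictionary<int, double> importance = await InterpretableModelHelper
            .GetLocalFeatureImportanceAsync(model, enabledMethods, input);

        foreach (var kvp in importance)
        {
            Console.WriteLine($"Feature {kvp.Key}: {kvp.Value}");
        }
    }
}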

GetModelSpecificInterpretabilityAsync<T>(IInterpretableModel<T>)

Gets model-specific interpretability information.

public static Task<Dictionary<string, object>> GetModelSpecificInterpretabilityAsync<T>(IInterpretableModel<T> model)

Parameters

model IInterpretableModel<T>

The model to analyze.

Returns

Task<Dictionary<string, object>>

A dictionary of model-specific interpretability information.

Type Parameters

T

The numeric type for calculations.
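
Examples

A sketch that dumps whatever model-specific interpretability entries the model exposes; the keys and value types depend on the concrete model and are not specified on this page.

using System;
using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class ModelSpecificExample
{
    public static async Task PrintModelInfoAsync(IInterpretableModel<double> model)
    {
        Dictionary<string, object> info = await InterpretableModelHelper
            .GetModelSpecificInterpretabilityAsync(model);

        foreach (var kvp in info)
        {
            Console.WriteLine($"{kvp.Key}: {kvp.Value}");
        }
    }
}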

GetPartialDependenceAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Vector<int>, int)

Gets partial dependence data for the specified features.

public static Task<PartialDependenceData<T>> GetPartialDependenceAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Vector<int> featureIndices, int gridResolution = 20)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

featureIndices Vector<int>

The feature indices to analyze.

gridResolution int

The grid resolution to use (the number of grid points evaluated along each feature's range).

Returns

Task<PartialDependenceData<T>>

Partial dependence data.

Type Parameters

T

The numeric type for calculations.
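
Examples

A sketch requesting partial dependence on a finer grid than the default of 20 points. Constructing the Vector<int> of feature indices is not covered on this page, so it is taken as a parameter here.

using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class PartialDependenceExample
{
    public static async Task<PartialDependenceData<double>> ComputeAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods,
        Vector<int> featureIndices)
    {
        // 50 grid points instead of the default 20 (illustrative choice).
        return await InterpretableModelHelper.GetPartialDependenceAsync(
            model, enabledMethods, featureIndices, gridResolution: 50);
    }
}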

GetShapValuesAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>)

Gets SHAP values for the given inputs.

public static Task<Matrix<T>> GetShapValuesAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> inputs)

Parameters

model IInterpretableModel<T>

The model to analyze.

enabledMethods HashSet<InterpretationMethod>

The set of enabled interpretation methods.

inputs Tensor<T>

The inputs to analyze.

Returns

Task<Matrix<T>>

A matrix containing SHAP values.

Type Parameters

T

The numeric type for calculations.
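
Examples

A sketch computing SHAP values for a batch of inputs; the layout and members of the returned Matrix<T> are not documented on this page, so the result is simply returned to the caller.

using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class ShapExample
{
    public static async Task<Matrix<double>> ComputeShapAsync(
        IInterpretableModel<double> model,
        HashSet<InterpretationMethod> enabledMethods,
        Tensor<double> inputs)
    {
        // The layout of the returned matrix is not specified on this page.
        return await InterpretableModelHelper.GetShapValuesAsync(model, enabledMethods, inputs);
    }
}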

ValidateFairnessAsync<T>(List<FairnessMetric>)

Validates the specified fairness metrics.

public static Task<FairnessMetrics<T>> ValidateFairnessAsync<T>(List<FairnessMetric> fairnessMetrics)

Parameters

fairnessMetrics List<FairnessMetric>

The fairness metrics to validate.

Returns

Task<FairnessMetrics<T>>

Fairness metrics results.

Type Parameters

T

The numeric type for calculations.
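
Examples

A sketch validating a caller-supplied list of fairness metrics; the available FairnessMetric values are not listed on this page, so the list is taken as a parameter.

using System.Collections.Generic;
using System.Threading.Tasks;
using AiDotNet.Interpretability;

public static class FairnessExample
{
    public static async Task<FairnessMetrics<double>> ValidateAsync(
        List<FairnessMetric> fairnessMetrics)
    {
        // The type argument cannot be inferred from the parameter, so it is given explicitly.
        return await InterpretableModelHelper.ValidateFairnessAsync<double>(fairnessMetrics);
    }
}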