Class InterpretableModelHelper
Namespace: AiDotNet.Interpretability
Assembly: AiDotNet.dll
Provides helper methods for interpretable model functionality.
public static class InterpretableModelHelper
Inheritance
- object
- InterpretableModelHelper
Methods
GenerateTextExplanationAsync<T>(IInterpretableModel<T>, Tensor<T>, Tensor<T>)
Generates a text explanation for a prediction.
public static Task<string> GenerateTextExplanationAsync<T>(IInterpretableModel<T> model, Tensor<T> input, Tensor<T> prediction)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- input (Tensor<T>): The input data.
- prediction (Tensor<T>): The prediction made by the model.
Returns
- Task<string>: A text explanation of the prediction.
Type Parameters
- T: The numeric type for calculations.
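Example
A minimal usage sketch, assuming model is an existing IInterpretableModel<double> implementation and input and prediction are populated Tensor<double> instances (placeholders, not provided by this class); the call must run inside an async method:
// model, input, and prediction are assumed to exist already.
string explanation = await InterpretableModelHelper.GenerateTextExplanationAsync(model, input, prediction);
Console.WriteLine(explanation);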
GetAnchorExplanationAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>, T)
Gets an anchor explanation for a given input.
public static Task<AnchorExplanation<T>> GetAnchorExplanationAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input, T threshold)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- input (Tensor<T>): The input to explain.
- threshold (T): The threshold for anchor construction.
Returns
- Task<AnchorExplanation<T>>: An anchor explanation.
Type Parameters
- T: The numeric type for calculations.
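Example
A hedged sketch, assuming model and input already exist; the set should be populated with whichever InterpretationMethod values your configuration enables (the specific members are not shown here):
// Add the InterpretationMethod values you have enabled before calling.
var enabledMethods = new HashSet<InterpretationMethod>();
AnchorExplanation<double> anchor = await InterpretableModelHelper.GetAnchorExplanationAsync(model, enabledMethods, input, threshold: 0.95);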
GetCounterfactualAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>, Tensor<T>, int)
Gets a counterfactual explanation for a given input and desired output.
public static Task<CounterfactualExplanation<T>> GetCounterfactualAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input, Tensor<T> desiredOutput, int maxChanges = 5)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- input (Tensor<T>): The input to analyze.
- desiredOutput (Tensor<T>): The desired output.
- maxChanges (int): The maximum number of changes allowed. Defaults to 5.
Returns
- Task<CounterfactualExplanation<T>>: A counterfactual explanation.
Type Parameters
- T: The numeric type for calculations.
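Example
A sketch assuming model, enabledMethods, input, and a target desiredOutput tensor already exist; maxChanges caps how many features the counterfactual may alter:
CounterfactualExplanation<double> counterfactual = await InterpretableModelHelper.GetCounterfactualAsync(model, enabledMethods, input, desiredOutput, maxChanges: 3);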
GetFeatureInteractionAsync<T>(HashSet<InterpretationMethod>, int, int)
Gets feature interaction effects between two features.
public static Task<T> GetFeatureInteractionAsync<T>(HashSet<InterpretationMethod> enabledMethods, int feature1Index, int feature2Index)
Parameters
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- feature1Index (int): The index of the first feature.
- feature2Index (int): The index of the second feature.
Returns
- Task<T>: The interaction effect value.
Type Parameters
- T: The numeric type for calculations.
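Example
Because T does not appear among the parameters, the type argument must be supplied explicitly. A sketch assuming enabledMethods already exists and that features 0 and 3 are of interest (illustrative indices only):
double interaction = await InterpretableModelHelper.GetFeatureInteractionAsync<double>(enabledMethods, feature1Index: 0, feature2Index: 3);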
GetGlobalFeatureImportanceAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>)
Gets the global feature importance across all predictions.
public static Task<Dictionary<int, T>> GetGlobalFeatureImportanceAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
Returns
- Task<Dictionary<int, T>>: A dictionary mapping feature indices to importance scores.
Type Parameters
- T: The numeric type for calculations.
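Example
A sketch assuming model and enabledMethods already exist; the returned dictionary maps feature indices to importance scores:
Dictionary<int, double> importance = await InterpretableModelHelper.GetGlobalFeatureImportanceAsync(model, enabledMethods);
foreach (var pair in importance)
    Console.WriteLine($"Feature {pair.Key}: {pair.Value}");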
GetLimeExplanationAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>, int)
Gets a LIME explanation for a specific input.
public static Task<LimeExplanation<T>> GetLimeExplanationAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input, int numFeatures = 10)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- input (Tensor<T>): The input to explain.
- numFeatures (int): The number of features to include in the explanation. Defaults to 10.
Returns
- Task<LimeExplanation<T>>: A LIME explanation.
Type Parameters
- T: The numeric type for calculations.
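Example
A sketch assuming model, enabledMethods, and input already exist; numFeatures limits how many features appear in the explanation:
LimeExplanation<double> lime = await InterpretableModelHelper.GetLimeExplanationAsync(model, enabledMethods, input, numFeatures: 5);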
GetLocalFeatureImportanceAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>)
Gets the local feature importance for a specific input.
public static Task<Dictionary<int, T>> GetLocalFeatureImportanceAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> input)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- input (Tensor<T>): The input to analyze.
Returns
- Task<Dictionary<int, T>>: A dictionary mapping feature indices to importance scores.
Type Parameters
- T: The numeric type for calculations.
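Example
A sketch assuming model, enabledMethods, and input already exist; unlike the global variant, the scores describe only this single input:
Dictionary<int, double> localImportance = await InterpretableModelHelper.GetLocalFeatureImportanceAsync(model, enabledMethods, input);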
GetModelSpecificInterpretabilityAsync<T>(IInterpretableModel<T>)
Gets model-specific interpretability information.
public static Task<Dictionary<string, object>> GetModelSpecificInterpretabilityAsync<T>(IInterpretableModel<T> model)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
Returns
- Task<Dictionary<string, object>>: A dictionary of model-specific interpretability information.
Type Parameters
- T: The numeric type for calculations.
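Example
A sketch assuming model already exists; the keys and values in the result depend on the concrete model type:
Dictionary<string, object> details = await InterpretableModelHelper.GetModelSpecificInterpretabilityAsync(model);
foreach (var pair in details)
    Console.WriteLine($"{pair.Key}: {pair.Value}");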
GetPartialDependenceAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Vector<int>, int)
Gets partial dependence data for specified features.
public static Task<PartialDependenceData<T>> GetPartialDependenceAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Vector<int> featureIndices, int gridResolution = 20)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- featureIndices (Vector<int>): The feature indices to analyze.
- gridResolution (int): The grid resolution to use. Defaults to 20.
Returns
- Task<PartialDependenceData<T>>: Partial dependence data.
Type Parameters
- T: The numeric type for calculations.
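Example
A sketch assuming model and enabledMethods already exist and featureIndices is a Vector<int> holding the indices of interest (its construction depends on the Vector<T> API and is not shown here):
PartialDependenceData<double> pd = await InterpretableModelHelper.GetPartialDependenceAsync(model, enabledMethods, featureIndices, gridResolution: 50);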
GetShapValuesAsync<T>(IInterpretableModel<T>, HashSet<InterpretationMethod>, Tensor<T>)
Gets SHAP values for the given inputs.
public static Task<Matrix<T>> GetShapValuesAsync<T>(IInterpretableModel<T> model, HashSet<InterpretationMethod> enabledMethods, Tensor<T> inputs)
Parameters
- model (IInterpretableModel<T>): The model to analyze.
- enabledMethods (HashSet<InterpretationMethod>): The set of enabled interpretation methods.
- inputs (Tensor<T>): The inputs to analyze.
Returns
- Task<Matrix<T>>: A matrix containing SHAP values.
Type Parameters
- T: The numeric type for calculations.
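Example
A sketch assuming model and enabledMethods already exist and inputs is a Tensor<double> batching the samples to explain:
Matrix<double> shapValues = await InterpretableModelHelper.GetShapValuesAsync(model, enabledMethods, inputs);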
ValidateFairnessAsync<T>(List<FairnessMetric>)
Validates the specified fairness metrics.
public static Task<FairnessMetrics<T>> ValidateFairnessAsync<T>(List<FairnessMetric> fairnessMetrics)
Parameters
- fairnessMetrics (List<FairnessMetric>): The fairness metrics to validate.
Returns
- Task<FairnessMetrics<T>>: Fairness metrics results.
Type Parameters
- T: The numeric type for calculations.
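Example
Because T does not appear among the parameters, the type argument must be supplied explicitly. A sketch in which the list of FairnessMetric values to check is a placeholder to be filled in:
// Populate with the FairnessMetric values you want to validate.
var metricsToCheck = new List<FairnessMetric>();
FairnessMetrics<double> results = await InterpretableModelHelper.ValidateFairnessAsync<double>(metricsToCheck);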