Namespace AiDotNet.Interpretability
Classes
- AnchorExplanation<T>
Represents an anchor explanation, which provides rule-based (if-then) interpretations of individual predictions.
- BasicFairnessEvaluator<T>
Basic fairness evaluator that computes only fundamental fairness metrics. Includes demographic parity (statistical parity difference) and disparate impact. Does not require actual labels.
- BiasDetectionResult<T>
Represents the results of a bias detection analysis.
- BiasDetectorBase<T>
Base class for all bias detectors that identify unfair treatment in model predictions.
- ComprehensiveFairnessEvaluator<T>
Comprehensive fairness evaluator that computes all major fairness metrics. Includes demographic parity, equal opportunity, equalized odds, predictive parity, disparate impact, and statistical parity difference.
- CounterfactualExplanation<T>
Represents a counterfactual explanation showing the minimal changes needed for a different outcome (a minimal search sketch follows the class list).
- DemographicParityBiasDetector<T>
Detects bias using Demographic Parity (Statistical Parity Difference). Measures the difference in positive prediction rates between groups. A difference greater than 0.1 (10%) indicates potential bias (see the positive-rate sketch after this list).
- DisparateImpactBiasDetector<T>
Detects bias using the Disparate Impact metric (80% rule). Disparate Impact Ratio = (Min Positive Rate) / (Max Positive Rate). A ratio below 0.8 indicates potential bias (the positive-rate sketch after this list also computes this ratio).
- EqualOpportunityBiasDetector<T>
Detects bias using the Equal Opportunity metric (True Positive Rate difference). Requires actual labels to compute the TPR for each group. A TPR difference greater than 0.1 (10%) indicates potential bias (sketched after this list).
- FairnessEvaluatorBase<T>
Base class for all fairness evaluators that measure equitable treatment in models.
- FairnessMetrics<T>
Represents fairness metrics for model evaluation.
- GroupFairnessEvaluator<T>
Group-level fairness evaluator that targets equalized performance across groups. Computes equal opportunity and equalized odds when actual labels are available, aiming to ensure similar error rates across demographic groups.
- InterpretabilityMetricsHelper<T>
Provides static utility methods for computing interpretability and fairness metrics.
- InterpretableModelHelper
Provides helper methods for working with interpretable models.
- LimeExplanation<T>
Represents a LIME (Local Interpretable Model-agnostic Explanations) explanation for a prediction.
- PartialDependenceData<T>
Represents partial dependence data showing how individual feature values affect predictions on average (a computation sketch follows this list).
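The sketch below illustrates the idea behind CounterfactualExplanation<T>: find a small change to the input that flips the model's decision. It is a minimal, greedy one-feature-at-a-time search assuming a boolean-valued classifier; the Func<double[], bool> model, the FindCounterfactual helper, and the candidate-value grid are hypothetical stand-ins, not AiDotNet API.

```csharp
// Minimal counterfactual search sketch (illustrative only, not AiDotNet API).
using System;

static class CounterfactualSketch
{
    // Returns a copy of 'input' with a single feature changed (to one of the
    // values in 'candidates') such that the model's decision flips,
    // or null if no single-feature change flips the outcome.
    public static double[] FindCounterfactual(
        Func<double[], bool> model, double[] input, double[][] candidates)
    {
        bool original = model(input);
        for (int i = 0; i < input.Length; i++)
        {
            foreach (double value in candidates[i])
            {
                var modified = (double[])input.Clone();
                modified[i] = value;                 // change exactly one feature
                if (model(modified) != original)     // decision flipped?
                    return modified;                 // 1-feature counterfactual
            }
        }
        return null; // no single-feature counterfactual found
    }
}
```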
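The following standalone sketch shows the arithmetic described above for DemographicParityBiasDetector<T> and DisparateImpactBiasDetector<T>: per-group positive prediction rates, their maximum difference (statistical parity difference), and the min/max ratio (disparate impact). All names are illustrative helpers, not the library's actual API.

```csharp
// Group positive-rate metrics sketch (illustrative only, not AiDotNet API).
using System.Linq;

static class GroupRateSketch
{
    // rates[g] = fraction of positive predictions among samples in group g.
    public static double[] PositiveRates(bool[] predictions, int[] groups, int groupCount)
    {
        var rates = new double[groupCount];
        for (int g = 0; g < groupCount; g++)
        {
            var inGroup = predictions.Where((_, i) => groups[i] == g).ToArray();
            rates[g] = inGroup.Length == 0
                ? 0.0
                : inGroup.Count(p => p) / (double)inGroup.Length;
        }
        return rates;
    }

    // Statistical Parity Difference: max rate minus min rate.
    public static double StatisticalParityDifference(double[] rates)
        => rates.Max() - rates.Min();

    // Disparate Impact Ratio: min rate divided by max rate.
    public static double DisparateImpactRatio(double[] rates)
        => rates.Max() == 0.0 ? 1.0 : rates.Min() / rates.Max();
}
```

Under the thresholds noted above, a statistical parity difference above 0.1 or a disparate impact ratio below 0.8 would flag potential bias.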
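Similarly, a minimal sketch of the Equal Opportunity computation behind EqualOpportunityBiasDetector<T>: per-group true positive rates from predictions and actual labels, then the largest gap. Names are again illustrative, not AiDotNet's API.

```csharp
// Equal Opportunity (TPR difference) sketch (illustrative only, not AiDotNet API).
using System.Linq;

static class EqualOpportunitySketch
{
    // TPR for one group: TP / (TP + FN), over samples whose actual label is positive.
    public static double TruePositiveRate(bool[] predicted, bool[] actual, int[] groups, int group)
    {
        int truePositives = 0, actualPositives = 0;
        for (int i = 0; i < predicted.Length; i++)
        {
            if (groups[i] != group || !actual[i]) continue;
            actualPositives++;
            if (predicted[i]) truePositives++;
        }
        return actualPositives == 0 ? 0.0 : truePositives / (double)actualPositives;
    }

    // Equal Opportunity difference: largest TPR gap across groups;
    // a gap above 0.1 would flag potential bias under the threshold noted above.
    public static double TprDifference(bool[] predicted, bool[] actual, int[] groups, int groupCount)
    {
        var tprs = Enumerable.Range(0, groupCount)
                             .Select(g => TruePositiveRate(predicted, actual, groups, g))
                             .ToArray();
        return tprs.Max() - tprs.Min();
    }
}
```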
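Finally, a sketch of how partial-dependence values like those held by PartialDependenceData<T> are conventionally computed: fix one feature at each grid value and average the model's predictions over the dataset. The Func<double[], double> model is a hypothetical stand-in for any regression or scoring function.

```csharp
// Partial dependence computation sketch (illustrative only, not AiDotNet API).
using System;

static class PartialDependenceSketch
{
    // Returns pd[k] = average prediction when feature 'featureIndex' is forced to grid[k].
    public static double[] Compute(Func<double[], double> model,
                                   double[][] dataset, int featureIndex, double[] grid)
    {
        var pd = new double[grid.Length];
        for (int k = 0; k < grid.Length; k++)
        {
            double sum = 0.0;
            foreach (var row in dataset)
            {
                var modified = (double[])row.Clone();
                modified[featureIndex] = grid[k];  // override just this feature
                sum += model(modified);
            }
            pd[k] = sum / dataset.Length;          // marginal effect at grid[k]
        }
        return pd;
    }
}
```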
Enums
- FairnessMetric
Enumeration of fairness metrics for model evaluation.
- InterpretationMethod
Enumeration of interpretation methods supported by interpretable models.