Class FairnessEvaluatorBase<T>
Namespace: AiDotNet.Interpretability
Assembly: AiDotNet.dll
Base class for all fairness evaluators that measure equitable treatment in models.
public abstract class FairnessEvaluatorBase<T> : IFairnessEvaluator<T>
Type Parameters
T: The numeric type used for calculations (e.g., double, float).
Inheritance: object → FairnessEvaluatorBase<T>
Implements: IFairnessEvaluator<T>
Remarks
For Beginners: This is a foundation class that all fairness evaluators build upon.
Think of a fairness evaluator like a comprehensive audit:
- It examines your model's behavior across multiple dimensions of fairness
- It measures various fairness metrics (demographic parity, equal opportunity, etc.)
- It provides a complete picture of how equitably your model treats different groups
Different fairness evaluators might focus on different combinations of metrics, but they all share common functionality. This base class provides that shared foundation.
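As a rough sketch of the pattern, a concrete evaluator derives from this class, picks its score direction in the constructor, and fills in GetFairnessMetrics. The class below is hypothetical (its name and metric are invented for illustration), and the namespaces of Matrix<T>, Vector<T>, IFullModel, and FairnessMetrics<T> are not shown on this page, so their using directives are omitted.

using System;
using AiDotNet.Interpretability;

// Hypothetical evaluator: measures the gap between group selection rates,
// where a smaller gap means a fairer model (so lower scores are better).
public class SelectionRateGapEvaluator<T> : FairnessEvaluatorBase<T>
{
    public SelectionRateGapEvaluator()
        : base(isHigherFairnessBetter: false) // a smaller gap is more equitable
    {
    }

    protected override FairnessMetrics<T> GetFairnessMetrics(
        IFullModel<T, Matrix<T>, Vector<T>> model,
        Matrix<T> inputs,
        int sensitiveFeatureIndex,
        Vector<T>? actualLabels)
    {
        // Evaluator-specific metric computation goes here; see the
        // GetFairnessMetrics documentation below for a fuller sketch.
        throw new NotImplementedException();
    }
}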
Constructors
FairnessEvaluatorBase(bool)
Initializes a new instance of the FairnessEvaluatorBase class.
protected FairnessEvaluatorBase(bool isHigherFairnessBetter)
Parameters
isHigherFairnessBetter (bool): Indicates whether higher fairness scores represent better (more equitable) models.
Remarks
For Beginners: This sets up the basic properties of the fairness evaluator.
Parameters:
- isHigherFairnessBetter: Tells the system whether bigger numbers mean fairer models (depends on which fairness metric is used as the primary measure)
Fields
_isHigherFairnessBetter
Indicates whether higher fairness scores represent better (more equitable) models.
protected readonly bool _isHigherFairnessBetter
Field Value
- bool
Remarks
For Beginners: This tells us whether bigger numbers mean fairer models.
For fairness evaluation:
- Some metrics work where higher is better (e.g., disparate impact ratio closer to 1)
- Other metrics work where lower is better (e.g., demographic parity difference closer to 0)
This field indicates the general direction for the evaluator's primary metric.
_numOps
Provides mathematical operations for the specific numeric type being used.
protected readonly INumericOperations<T> _numOps
Field Value
- INumericOperations<T>
Remarks
For Beginners: This is a toolkit that helps perform math operations regardless of whether we're using integers, decimals, doubles, etc.
It allows the evaluator to work with different numeric types without having to rewrite the math operations for each type.
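As a minimal sketch of what this enables, the helper below computes the absolute gap between two group rates generically. It is written as a member of a derived evaluator, and the Subtract and Abs member names on INumericOperations<T> are assumptions for illustration; check the interface for the actual names.

// Hypothetical helper inside a derived evaluator. The INumericOperations<T>
// member names (Subtract, Abs) are assumed for illustration.
protected T AbsoluteGap(T groupARate, T groupBRate)
{
    // |groupARate - groupBRate|, without knowing the concrete numeric type
    T difference = _numOps.Subtract(groupARate, groupBRate);
    return _numOps.Abs(difference);
}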
Properties
IsHigherFairnessBetter
Gets a value indicating whether higher fairness scores represent better (more equitable) models.
public bool IsHigherFairnessBetter { get; }
Property Value
- bool
Remarks
For Beginners: This property tells you whether bigger numbers mean fairer models.
The interpretation depends on which fairness metric is used:
- For some metrics (like disparate impact ratio), higher values mean more fairness
- For other metrics (like demographic parity difference), lower values mean more fairness
This property indicates the direction for the evaluator's primary or aggregate metric.
Methods
EvaluateFairness(IFullModel<T, Matrix<T>, Vector<T>>, Matrix<T>, int, Vector<T>?)
Evaluates fairness of a model by analyzing its predictions across different groups.
public FairnessMetrics<T> EvaluateFairness(IFullModel<T, Matrix<T>, Vector<T>> model, Matrix<T> inputs, int sensitiveFeatureIndex, Vector<T>? actualLabels = null)
Parameters
model (IFullModel<T, Matrix<T>, Vector<T>>): The model to evaluate for fairness.
inputs (Matrix<T>): The input data containing features and sensitive attributes.
sensitiveFeatureIndex (int): Index of the column containing the sensitive feature.
actualLabels (Vector<T>?): Optional actual labels for computing accuracy-based fairness metrics.
Returns
- FairnessMetrics<T>
A FairnessMetrics object containing comprehensive fairness measurements.
Remarks
For Beginners: This method measures how fairly your model treats different groups.
It works by:
- Validating that all required data is provided and properly formatted
- Calling the specific fairness evaluation logic implemented by derived classes
- Returning comprehensive metrics about the model's fairness
The method handles the common validation logic, while the specific fairness calculations are defined in each evaluator that extends this base class.
Exceptions
- ArgumentNullException
Thrown when model or inputs is null.
- ArgumentException
Thrown when sensitiveFeatureIndex is invalid, or when actualLabels is provided but its length doesn't match the number of rows in inputs.
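A usage sketch for EvaluateFairness, assuming a trained model, an input Matrix<double>, and a label Vector<double> already exist in scope; the variable names, the column index, and the SelectionRateGapEvaluator type from the earlier sketch are placeholders.

// 'model', 'inputs', and 'labels' are assumed to already exist;
// column 3 is assumed to hold the sensitive attribute.
var evaluator = new SelectionRateGapEvaluator<double>();

FairnessMetrics<double> metrics = evaluator.EvaluateFairness(
    model,
    inputs,
    sensitiveFeatureIndex: 3,
    actualLabels: labels);

// IsHigherFairnessBetter tells you how to read the evaluator's primary score.
Console.WriteLine(evaluator.IsHigherFairnessBetter
    ? "Higher scores mean a fairer model."
    : "Lower scores mean a fairer model.");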
GetFairnessMetrics(IFullModel<T, Matrix<T>, Vector<T>>, Matrix<T>, int, Vector<T>?)
Abstract method that must be implemented by derived classes to perform specific fairness evaluation logic.
protected abstract FairnessMetrics<T> GetFairnessMetrics(IFullModel<T, Matrix<T>, Vector<T>> model, Matrix<T> inputs, int sensitiveFeatureIndex, Vector<T>? actualLabels)
Parameters
model (IFullModel<T, Matrix<T>, Vector<T>>): The model to evaluate for fairness.
inputs (Matrix<T>): The input data containing features and sensitive attributes.
sensitiveFeatureIndex (int): Index of the column containing the sensitive feature.
actualLabels (Vector<T>?): Optional actual labels for computing accuracy-based fairness metrics.
Returns
- FairnessMetrics<T>
A FairnessMetrics object containing comprehensive fairness measurements.
Remarks
For Beginners: This is a placeholder method that each specific evaluator must fill in.
Think of it like a template that says "here's where you put your specific fairness evaluation logic." Each evaluator that extends this base class will provide its own implementation of this method, defining exactly how it calculates various fairness metrics.
For example:
- A comprehensive evaluator might calculate all major fairness metrics
- A specialized evaluator might focus on specific metrics like equal opportunity
- A custom evaluator might implement domain-specific fairness measures
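A hedged sketch of an override is shown below. Because the members of IFullModel, Matrix<T>, and FairnessMetrics<T> are not shown on this page, the steps are described in comments rather than concrete API calls; a real evaluator would replace them with actual computation.

protected override FairnessMetrics<T> GetFairnessMetrics(
    IFullModel<T, Matrix<T>, Vector<T>> model,
    Matrix<T> inputs,
    int sensitiveFeatureIndex,
    Vector<T>? actualLabels)
{
    // 1. Read the sensitive column from 'inputs' and split the rows into groups.
    // 2. Ask 'model' for predictions on each group.
    // 3. Use _numOps to compute per-group rates and the gaps between them
    //    (and, if 'actualLabels' is provided, accuracy-based metrics as well).
    // 4. Package the results into a FairnessMetrics<T> instance and return it.
    throw new NotImplementedException("Sketch only; a shipped evaluator performs the real computation.");
}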
IsBetterFairnessScore(T, T)
Determines whether a new fairness score represents better (more equitable) performance than the current best score.
public bool IsBetterFairnessScore(T currentFairness, T bestFairness)
Parameters
currentFairness (T): The current fairness score to evaluate.
bestFairness (T): The best (most equitable) fairness score found so far.
Returns
- bool
True if the current fairness score is better (more equitable) than the best fairness score; otherwise, false.
Remarks
For Beginners: This method compares two fairness scores and tells you which model is more equitable.
It takes into account whether higher scores are better or lower scores are better:
- If higher scores are better, it returns true when the new score is higher
- If lower scores are better, it returns true when the new score is lower
This is particularly useful when:
- Selecting the most fair model from multiple options
- Deciding whether model changes improved fairness
- Tracking fairness improvements during model development
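A small selection sketch, using the hypothetical SelectionRateGapEvaluator from earlier; the scores are placeholder values rather than real metric outputs.

// Pick the fairer of two candidate scores without hard-coding
// whether higher or lower is better.
var evaluator = new SelectionRateGapEvaluator<double>();

double bestScore = 0.18;      // fairness score of the current best model
double candidateScore = 0.05; // fairness score of a new candidate model

if (evaluator.IsBetterFairnessScore(candidateScore, bestScore))
{
    // This evaluator treats lower as better, so 0.05 beats 0.18.
    bestScore = candidateScore;
}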