Class BiasDetectorBase<T>
Namespace: AiDotNet.Interpretability
Assembly: AiDotNet.dll
Base class for all bias detectors that identify unfair treatment in model predictions.
public abstract class BiasDetectorBase<T> : IBiasDetector<T>
Type Parameters
T: The numeric type used for calculations (e.g., double, float).
- Inheritance
- object → BiasDetectorBase<T>
- Implements
- IBiasDetector<T>
Remarks
For Beginners: This is a foundation class that all bias detectors build upon.
Think of a bias detector like an inspector checking for fairness:
- It examines how your model makes predictions for different groups of people
- It identifies when certain groups are being treated unfairly
- It provides metrics that measure the severity of the bias
Different bias detectors might look for different types of unfairness, but they all share common functionality. This base class provides that shared foundation.
Constructors
BiasDetectorBase(bool)
Initializes a new instance of the BiasDetectorBase class.
protected BiasDetectorBase(bool isLowerBiasBetter)
Parameters
isLowerBiasBetter (bool): Indicates whether lower bias scores represent better (fairer) models.
Remarks
For Beginners: This sets up the basic properties of the bias detector.
Parameters:
- isLowerBiasBetter: Tells the system whether smaller numbers mean fairer models (typically true for bias metrics, where 0 represents perfect fairness)
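To see how a derived detector supplies this flag, here is a minimal sketch. The trimmed-down base class and the MiniParityDetector subclass are illustrative stand-ins, not AiDotNet's actual types; they only mirror the constructor pattern documented above.

```csharp
using System;

// Illustrative trimmed-down version of the base class, showing how a
// derived detector passes isLowerBiasBetter up to the base constructor.
public abstract class MiniBiasDetectorBase
{
    protected readonly bool _isLowerBiasBetter;

    protected MiniBiasDetectorBase(bool isLowerBiasBetter)
    {
        _isLowerBiasBetter = isLowerBiasBetter;
    }

    public bool IsLowerBiasBetter => _isLowerBiasBetter;
}

public sealed class MiniParityDetector : MiniBiasDetectorBase
{
    // 0 means perfect parity, so lower scores are fairer here.
    public MiniParityDetector() : base(isLowerBiasBetter: true) { }
}

public static class ConstructorDemo
{
    public static void Main()
    {
        Console.WriteLine(new MiniParityDetector().IsLowerBiasBetter); // True
    }
}
```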
Fields
_isLowerBiasBetter
Indicates whether lower bias scores represent better (fairer) models.
protected readonly bool _isLowerBiasBetter
Field Value
- bool
Remarks
For Beginners: This tells us whether smaller numbers mean fairer models.
For bias detection:
- Lower bias scores typically indicate fairer models (closer to equal treatment)
- A bias score of 0 would indicate perfect fairness
This helps the system know how to compare different models for fairness.
_numOps
Provides mathematical operations for the specific numeric type being used.
protected readonly INumericOperations<T> _numOps
Field Value
- INumericOperations<T>
Remarks
For Beginners: This is a toolkit that helps perform math operations regardless of whether we're using integers, decimals, doubles, etc.
It allows the detector to work with different numeric types without having to rewrite the math operations for each type.
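The idea can be sketched with a small interface of the same shape. The member names below (Zero, Add, LessThan) are hypothetical and chosen for illustration; AiDotNet's actual INumericOperations<T> members may differ.

```csharp
using System;

// Sketch of the "math toolkit" pattern: an interface abstracts the
// arithmetic so the same detector code works for double, float, etc.
public interface INumOpsSketch<T>
{
    T Zero { get; }
    T Add(T a, T b);
    bool LessThan(T a, T b);
}

public sealed class DoubleOpsSketch : INumOpsSketch<double>
{
    public double Zero => 0.0;
    public double Add(double a, double b) => a + b;
    public bool LessThan(double a, double b) => a < b;
}

public static class GenericMathDemo
{
    // Generic code written once, usable with any numeric type that
    // has a matching toolkit implementation.
    public static T Sum<T>(INumOpsSketch<T> ops, T[] values)
    {
        T total = ops.Zero;
        foreach (var v in values) total = ops.Add(total, v);
        return total;
    }

    public static void Main()
    {
        Console.WriteLine(Sum(new DoubleOpsSketch(), new[] { 1.0, 2.0, 3.0 })); // 6
    }
}
```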
Properties
IsLowerBiasBetter
Gets a value indicating whether lower bias scores represent better (fairer) models.
public bool IsLowerBiasBetter { get; }
Property Value
- bool
Remarks
For Beginners: This property tells you whether smaller numbers mean fairer models.
For most bias metrics:
- IsLowerBiasBetter is true (0 bias means perfect fairness)
- Lower values indicate the model treats different groups more equally
This helps you interpret the scores correctly when comparing different models.
Methods
DetectBias(Vector<T>, Vector<T>, Vector<T>?)
Detects bias in model predictions by analyzing predictions across different groups.
public BiasDetectionResult<T> DetectBias(Vector<T> predictions, Vector<T> sensitiveFeature, Vector<T>? actualLabels = null)
Parameters
predictions (Vector<T>): The model's predictions.
sensitiveFeature (Vector<T>): The sensitive feature (e.g., race, gender) used to identify groups.
actualLabels (Vector<T>): Optional actual labels for computing additional bias metrics.
Returns
- BiasDetectionResult<T>
A result object containing bias detection metrics and analysis.
Remarks
For Beginners: This method checks if your model treats different groups fairly.
It works by:
- Validating that all required data is provided and properly formatted
- Calling the specific bias detection logic implemented by derived classes
- Returning detailed results about any bias found
The method handles the common validation logic, while the specific bias detection algorithm is defined in each detector that extends this base class.
Exceptions
- ArgumentNullException
Thrown when predictions or sensitiveFeature is null.
- ArgumentException
Thrown when predictions and sensitiveFeature have different lengths.
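The shared validation step can be sketched as follows. This uses plain arrays rather than Vector<T>, and the exact checks are an assumption based on the documented exceptions, not AiDotNet's actual implementation.

```csharp
using System;

// Sketch of the validation DetectBias performs before delegating to
// GetBiasDetectionResult: null checks, then a length check.
public static class ValidationSketch
{
    public static void ValidateInputs(double[] predictions, double[] sensitiveFeature)
    {
        if (predictions is null) throw new ArgumentNullException(nameof(predictions));
        if (sensitiveFeature is null) throw new ArgumentNullException(nameof(sensitiveFeature));
        if (predictions.Length != sensitiveFeature.Length)
            throw new ArgumentException("predictions and sensitiveFeature must have the same length.");
    }

    public static void Main()
    {
        ValidateInputs(new[] { 1.0, 0.0 }, new[] { 0.0, 1.0 }); // passes silently

        try
        {
            ValidateInputs(new[] { 1.0 }, new[] { 0.0, 1.0 }); // mismatched lengths
        }
        catch (ArgumentException e)
        {
            Console.WriteLine("caught: " + e.GetType().Name);
        }
    }
}
```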
GetBiasDetectionResult(Vector<T>, Vector<T>, Vector<T>?)
Abstract method that must be implemented by derived classes to perform specific bias detection logic.
protected abstract BiasDetectionResult<T> GetBiasDetectionResult(Vector<T> predictions, Vector<T> sensitiveFeature, Vector<T>? actualLabels)
Parameters
predictions (Vector<T>): The model's predictions.
sensitiveFeature (Vector<T>): The sensitive feature used to identify groups.
actualLabels (Vector<T>): Optional actual labels for computing additional bias metrics.
Returns
- BiasDetectionResult<T>
A result object containing bias detection metrics and analysis.
Remarks
For Beginners: This is a placeholder method that each specific detector must fill in.
Think of it like a template that says "here's where you put your specific bias detection logic." Each detector that extends this base class will provide its own implementation of this method, defining exactly how it detects and measures bias.
For example:
- A disparate impact detector would check if positive outcomes are equally distributed
- An equal opportunity detector would check if qualified individuals have equal chances
- A demographic parity detector would check for balanced outcomes across groups
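As an illustration of the kind of logic a derived class might put in this method, here is a self-contained demographic parity calculation on plain arrays. It is a sketch only: AiDotNet's actual detectors operate on Vector<T> and return a BiasDetectionResult<T>, and this helper's names are hypothetical.

```csharp
using System;
using System.Linq;

// Sketch of demographic parity: compare positive-outcome rates
// between two groups. 0 means perfect parity, so lower is fairer.
public static class DemographicParitySketch
{
    public static double ParityDifference(int[] predictions, int[] group)
    {
        double RateFor(int g) =>
            Enumerable.Range(0, predictions.Length)
                      .Where(i => group[i] == g)
                      .Average(i => (double)predictions[i]);

        return Math.Abs(RateFor(0) - RateFor(1));
    }

    public static void Main()
    {
        // Group 0 receives positive predictions 3/4 of the time;
        // group 1 only 1/4 of the time.
        int[] preds = { 1, 1, 1, 0, 1, 0, 0, 0 };
        int[] group = { 0, 0, 0, 0, 1, 1, 1, 1 };
        Console.WriteLine(ParityDifference(preds, group)); // 0.5
    }
}
```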
IsBetterBiasScore(T, T)
Determines whether a new bias score represents better (fairer) performance than the current best score.
public bool IsBetterBiasScore(T currentBias, T bestBias)
Parameters
currentBias (T): The current bias score to evaluate.
bestBias (T): The best (fairest) bias score found so far.
Returns
- bool
True if the current bias score is better (fairer) than the best bias score; otherwise, false.
Remarks
For Beginners: This method compares two bias scores and tells you which model is fairer.
It takes into account whether higher scores are better or lower scores are better:
- If lower scores are better (typical for bias), it returns true when the new score is lower
- If higher scores are better (less common), it returns true when the new score is higher
This is particularly useful when:
- Selecting the fairest model from multiple options
- Deciding whether model changes improved fairness
- Tracking fairness improvements during model development
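The comparison described above can be sketched like this, shown with double for clarity (the real method is generic over T and reads the flag from the detector instance rather than taking it as a parameter):

```csharp
using System;

// Sketch of the IsBetterBiasScore comparison: direction of "better"
// depends on whether lower scores mean fairer models.
public static class ScoreComparisonSketch
{
    public static bool IsBetterBiasScore(double currentBias, double bestBias, bool isLowerBiasBetter)
    {
        return isLowerBiasBetter
            ? currentBias < bestBias   // lower is fairer (typical for bias metrics)
            : currentBias > bestBias;  // higher is fairer (less common)
    }

    public static void Main()
    {
        // With lower-is-better, a bias of 0.05 beats the previous best of 0.20.
        Console.WriteLine(IsBetterBiasScore(0.05, 0.20, isLowerBiasBetter: true));  // True
        Console.WriteLine(IsBetterBiasScore(0.05, 0.20, isLowerBiasBetter: false)); // False
    }
}
```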