Class InterpretabilityMetricsHelper<T>
Namespace: AiDotNet.Interpretability
Assembly: AiDotNet.dll
Provides static utility methods for computing interpretability and fairness metrics.
public static class InterpretabilityMetricsHelper<T>
Type Parameters
T: The numeric type for calculations.
Inheritance
object → InterpretabilityMetricsHelper<T>
Remarks
For Beginners: This is a collection of reusable helper methods for fairness and bias analysis.
These methods handle common tasks like:
- Identifying unique groups in data (e.g., different age groups, genders)
- Computing metrics like positive rates, true positive rates, etc.
- Extracting subsets of data for specific groups
By centralizing these methods here, we avoid code duplication and ensure consistent calculations across all bias detection and fairness evaluation tools.
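Example
The sketch below shows how these helpers might be combined into a simple fairness check. Only the method signatures documented on this page are confirmed; constructing Vector<double> from a double[] array is an assumption about the library's vector type.
// Compare the positive prediction rate for each group in the sensitive feature.
// Assumption: Vector<double> can be constructed from a double[] array.
var predictions = new Vector<double>(new double[] { 1, 0, 1, 1, 0, 1 });
var sensitiveFeature = new Vector<double>(new double[] { 1, 0, 1, 0, 1, 0 });
foreach (var group in InterpretabilityMetricsHelper<double>.GetUniqueGroups(sensitiveFeature))
{
    var indices = InterpretabilityMetricsHelper<double>.GetGroupIndices(sensitiveFeature, group);
    var groupPredictions = InterpretabilityMetricsHelper<double>.GetSubset(predictions, indices);
    var positiveRate = InterpretabilityMetricsHelper<double>.ComputePositiveRate(groupPredictions);
    Console.WriteLine($"Group {group}: positive rate = {positiveRate}");
}
// Group 1 uses indices [0, 2, 4] -> predictions [1, 1, 0] -> positive rate 2/3
// Group 0 uses indices [1, 3, 5] -> predictions [0, 1, 1] -> positive rate 2/3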
Methods
ComputeFalsePositiveRate(Vector<T>, Vector<T>)
Computes the False Positive Rate (FPR).
public static T ComputeFalsePositiveRate(Vector<T> predictions, Vector<T> actualLabels)
Parameters
predictions (Vector<T>): The prediction vector (binary: 0 or 1).
actualLabels (Vector<T>): The actual label vector (binary: 0 or 1).
Returns
T: The proportion of actual negatives that were incorrectly predicted as positive.
Remarks
For Beginners: This method calculates how often the model incorrectly predicts positive.
FPR = (False Positives) / (False Positives + True Negatives)
For example, if there are 10 actual negative cases and the model incorrectly called 2 of them positive, the FPR would be 2/10 = 0.2 (20%).
In fairness analysis, we want the FPR to be similar across all groups. If the model makes more false positive errors for one group than another, that's a form of bias.
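Example
An illustrative call, assuming Vector<double> can be built from a double[] array (that constructor is not documented on this page):
// Assumption: Vector<double> accepts a double[] in its constructor.
var predictions  = new Vector<double>(new double[] { 1, 0, 0, 0, 1 });
var actualLabels = new Vector<double>(new double[] { 0, 1, 0, 0, 1 });
var fpr = InterpretabilityMetricsHelper<double>.ComputeFalsePositiveRate(predictions, actualLabels);
// 3 actual negatives (indices 0, 2, 3), 1 of them predicted positive -> fpr = 1/3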
ComputePositiveRate(Vector<T>)
Computes the positive prediction rate (proportion of positive predictions).
public static T ComputePositiveRate(Vector<T> predictions)
Parameters
predictions (Vector<T>): The prediction vector (binary: 0 or 1).
Returns
T: The proportion of positive predictions (predictions equal to 1).
Remarks
For Beginners: This method calculates what fraction of predictions are positive (1).
For example, if predictions are [1, 0, 1, 1, 0], the positive rate would be 3/5 = 0.6 (60% of predictions are positive).
This is a key metric for fairness - if one group has a much higher positive rate than another, it might indicate bias in the model.
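Example
A minimal sketch using the documented signature; the Vector<double> construction is an assumed API:
// Assumption: Vector<double> accepts a double[] in its constructor.
var predictions = new Vector<double>(new double[] { 1, 0, 1, 1, 0 });
var positiveRate = InterpretabilityMetricsHelper<double>.ComputePositiveRate(predictions);
// 3 of 5 predictions are positive -> positiveRate = 0.6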
ComputePrecision(Vector<T>, Vector<T>)
Computes the Precision (Positive Predictive Value).
public static T ComputePrecision(Vector<T> predictions, Vector<T> actualLabels)
Parameters
predictions (Vector<T>): The prediction vector (binary: 0 or 1).
actualLabels (Vector<T>): The actual label vector (binary: 0 or 1).
Returns
T: The proportion of positive predictions that were actually correct.
Remarks
For Beginners: This method calculates how accurate the positive predictions are.
Precision = (True Positives) / (True Positives + False Positives)
For example, if the model made 10 positive predictions and 8 of them were correct, the precision would be 8/10 = 0.8 (80%).
In fairness analysis, we want precision to be similar across all groups. If positive predictions are more reliable for one group than another, that's a form of bias.
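Example
An illustrative sketch; as above, the array-based Vector<double> constructor is an assumption:
// Assumption: Vector<double> accepts a double[] in its constructor.
var predictions  = new Vector<double>(new double[] { 1, 1, 1, 1, 1, 0 });
var actualLabels = new Vector<double>(new double[] { 1, 1, 1, 1, 0, 0 });
var precision = InterpretabilityMetricsHelper<double>.ComputePrecision(predictions, actualLabels);
// 5 positive predictions, 4 of them correct -> precision = 0.8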
ComputeTruePositiveRate(Vector<T>, Vector<T>)
Computes the True Positive Rate (TPR) or Recall.
public static T ComputeTruePositiveRate(Vector<T> predictions, Vector<T> actualLabels)
Parameters
predictions (Vector<T>): The prediction vector (binary: 0 or 1).
actualLabels (Vector<T>): The actual label vector (binary: 0 or 1).
Returns
T: The proportion of actual positives that were correctly predicted as positive.
Remarks
For Beginners: This method calculates how good the model is at identifying positive cases.
TPR = (True Positives) / (True Positives + False Negatives)
For example, if there are 10 actual positive cases and the model correctly identified 8 of them, the TPR would be 8/10 = 0.8 (80%).
In fairness analysis, we want the TPR to be similar across all groups. If the model is better at identifying positive cases for one group than another, that's a form of bias.
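Example
A short sketch under the same assumption about how Vector<double> is constructed:
// Assumption: Vector<double> accepts a double[] in its constructor.
var predictions  = new Vector<double>(new double[] { 1, 1, 1, 0, 0 });
var actualLabels = new Vector<double>(new double[] { 1, 1, 1, 1, 0 });
var tpr = InterpretabilityMetricsHelper<double>.ComputeTruePositiveRate(predictions, actualLabels);
// 4 actual positives, 3 of them predicted positive -> tpr = 0.75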
GetGroupIndices(Vector<T>, T)
Gets the indices of all samples belonging to a specific group.
public static List<int> GetGroupIndices(Vector<T> sensitiveFeature, T groupValue)
Parameters
sensitiveFeature (Vector<T>): The sensitive feature vector.
groupValue (T): The group value to search for.
Returns
List<int>: A list of the indices of all samples whose sensitive feature equals the specified group value.
Remarks
For Beginners: This method finds the positions of all members of a specific group.
For example, if your sensitive feature is [1, 0, 1, 0, 1] and groupValue is 1, this method would return [0, 2, 4] (the positions where the value is 1).
This allows us to isolate data for a specific group so we can analyze how the model treats that group separately.
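Example
An illustrative call; the Vector<double> array constructor is assumed, and 1.0 is passed as the group value because T is double here:
// Assumption: Vector<double> accepts a double[] in its constructor.
var sensitiveFeature = new Vector<double>(new double[] { 1, 0, 1, 0, 1 });
var indices = InterpretabilityMetricsHelper<double>.GetGroupIndices(sensitiveFeature, 1.0);
// indices contains [0, 2, 4], the positions where the feature equals 1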
GetSubset(Vector<T>, List<int>)
Extracts a subset of a vector based on specified indices.
public static Vector<T> GetSubset(Vector<T> vector, List<int> indices)
Parameters
vector (Vector<T>): The source vector to extract elements from.
indices (List<int>): The indices of the elements to extract.
Returns
Vector<T>: A new vector containing only the elements at the specified indices.
Remarks
For Beginners: This method creates a smaller vector containing only specific elements.
For example, if you have a vector [10, 20, 30, 40, 50] and indices [0, 2, 4], this method would return a new vector [10, 30, 50].
This is useful for extracting predictions or labels for a specific group after you've identified which indices belong to that group.
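Example
A sketch of the documented call, assuming Vector<double> can be created from a double[]:
// Assumption: Vector<double> accepts a double[] in its constructor.
var values  = new Vector<double>(new double[] { 10, 20, 30, 40, 50 });
var indices = new List<int> { 0, 2, 4 };
var subset = InterpretabilityMetricsHelper<double>.GetSubset(values, indices);
// subset contains [10, 30, 50]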
GetUniqueGroups(Vector<T>)
Identifies all unique groups in the sensitive feature.
public static List<T> GetUniqueGroups(Vector<T> sensitiveFeature)
Parameters
sensitiveFeature (Vector<T>): The sensitive feature vector (e.g., race, gender, age group).
Returns
List<T>: A list of unique group values found in the sensitive feature.
Remarks
For Beginners: This method finds all the different categories in your sensitive feature.
For example, if your sensitive feature is gender with values [1, 0, 1, 0, 1], this method would return a list containing [0, 1] (the two unique groups).
This is the first step in fairness analysis - we need to know which groups exist before we can compare how they're treated.
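Example
An illustrative call under the same Vector<double> construction assumption; the order of the returned groups is not specified on this page:
// Assumption: Vector<double> accepts a double[] in its constructor.
var sensitiveFeature = new Vector<double>(new double[] { 1, 0, 1, 0, 1 });
var groups = InterpretabilityMetricsHelper<double>.GetUniqueGroups(sensitiveFeature);
// groups contains the two distinct values 0 and 1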