Interface IRAGMetric<T>
Namespace: AiDotNet.Interfaces
Assembly: AiDotNet.dll
Defines the contract for RAG evaluation metrics.
public interface IRAGMetric<T>
Type Parameters
T: The numeric data type used for relevance scoring.
Remarks
A RAG metric evaluates the quality of a retrieval-augmented generation system by comparing generated answers against ground truth or by analyzing specific aspects of the generation process. Metrics help developers understand system performance and guide improvements.
For Beginners: Metrics are like test scores for your RAG system.
Think of it like grading an exam:
- The metric looks at the AI's answer
- Compares it to what the answer should be (or checks quality)
- Gives a score (0-1, where 1 is perfect)
Different metrics measure different things:
- Faithfulness: Does the answer stick to the source documents?
- Similarity: How close is the answer to the ground truth?
- Coverage: Does the answer address all parts of the question?
Use metrics to:
- Compare different RAG configurations
- Track improvements over time
- Identify weak points in your system
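To make the contract concrete, here is a minimal sketch of a custom reference-based metric that scores token overlap between the generated answer and the ground truth. It assumes GroundedAnswer<T> exposes the generated text through an Answer property, which is not documented on this page; check the actual type before using it.

```csharp
using System;
using System.Collections.Generic;
using AiDotNet.Interfaces;

// Hypothetical reference-based metric: scores the fraction of
// ground-truth tokens that also appear in the generated answer.
// Assumes GroundedAnswer<T> exposes the generated text via an
// Answer property (an assumption, not confirmed by this page).
public class TokenOverlapMetric : IRAGMetric<double>
{
    public string Name => "TokenOverlap";

    public string Description =>
        "Fraction of ground-truth tokens present in the generated answer.";

    public double Evaluate(GroundedAnswer<double> answer, string? groundTruth = null)
    {
        // Reference-based metric: without a ground truth there is nothing to compare.
        if (string.IsNullOrWhiteSpace(groundTruth))
            return 0.0;

        var answerTokens = new HashSet<string>(
            answer.Answer.ToLowerInvariant()
                .Split(' ', StringSplitOptions.RemoveEmptyEntries));

        var truthTokens = groundTruth.ToLowerInvariant()
            .Split(' ', StringSplitOptions.RemoveEmptyEntries);

        int hits = 0;
        foreach (var token in truthTokens)
            if (answerTokens.Contains(token)) hits++;

        // Score in [0, 1]: 1.0 means every ground-truth token was covered.
        return truthTokens.Length == 0 ? 0.0 : (double)hits / truthTokens.Length;
    }
}
```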
Properties
Description
Gets the description of what this metric measures.
string Description { get; }
Property Value
string
Name
Gets the name of this metric.
string Name { get; }
Property Value
string
Methods
Evaluate(GroundedAnswer<T>, string?)
Evaluates a grounded answer and returns a score.
T Evaluate(GroundedAnswer<T> answer, string? groundTruth = null)
Parameters
answer (GroundedAnswer<T>): The grounded answer to evaluate.
groundTruth (string?): The expected/correct answer, or null for reference-free metrics.
Returns
T: A score between 0 and 1, where 1 is perfect.
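For illustration, a caller might evaluate a single answer as sketched below, using the hypothetical TokenOverlapMetric from the remarks; groundedAnswer stands in for a GroundedAnswer<double> produced elsewhere by your retrieval pipeline.

```csharp
// Sketch: evaluating one grounded answer. groundedAnswer is assumed
// to come from your RAG pipeline; its construction is not shown here.
IRAGMetric<double> metric = new TokenOverlapMetric();

Console.WriteLine(metric.Name);        // e.g., "TokenOverlap"
Console.WriteLine(metric.Description); // human-readable explanation

// Reference-based call: pass the expected answer as groundTruth.
double score = metric.Evaluate(groundedAnswer, "Paris is the capital of France.");
Console.WriteLine($"Score: {score:F2}"); // between 0 and 1, higher is better

// Reference-free metrics accept null; the TokenOverlapMetric above
// simply returns 0.0 in that case because it needs a reference.
double noReference = metric.Evaluate(groundedAnswer);
```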