Class PromptMetrics
- Namespace
- AiDotNet.PromptEngineering.Analysis
- Assembly
- AiDotNet.dll
Contains metrics and analysis results for a prompt.
public class PromptMetrics
- Inheritance
- object ← PromptMetrics
Remarks
This class encapsulates all the measurements and analysis data produced when analyzing a prompt. It includes token counts, cost estimates, complexity scores, and detected patterns that help developers understand and optimize their prompts.
For Beginners: This is a report card for your prompt.
When you analyze a prompt, you get back this object with all the measurements:
- Token count: How many "words" the AI sees (affects cost)
- Estimated cost: How much this prompt will cost in API fees
- Complexity score: How complicated the prompt is (0-1)
- Variable count: How many {placeholders} are in the prompt
- Detected patterns: What type of prompt this is (question, instruction, etc.)
Example usage:
var metrics = analyzer.Analyze("Translate {text} from English to Spanish");
Console.WriteLine($"Tokens: {metrics.TokenCount}"); // e.g., 8
Console.WriteLine($"Cost: ${metrics.EstimatedCost}"); // e.g., $0.0001
Console.WriteLine($"Variables: {metrics.VariableCount}"); // e.g., 1
Console.WriteLine($"Patterns: {string.Join(", ", metrics.DetectedPatterns)}");
// e.g., "translation, instruction"
Properties
AnalyzedAt
Gets or sets the timestamp when this analysis was performed.
public DateTime AnalyzedAt { get; set; }
Property Value
DateTime
Remarks
The UTC timestamp of when the analysis was performed. Useful for caching and tracking when metrics might be stale.
For Beginners: When this analysis was done. Useful if you cache metrics and need to know if they're outdated.
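For example, a minimal staleness check might look like the sketch below (cachedMetrics, promptText, and the one-hour threshold are illustrative assumptions, not part of the library):
// Re-analyze if the cached metrics are more than an hour old.
if (DateTime.UtcNow - cachedMetrics.AnalyzedAt > TimeSpan.FromHours(1))
{
    cachedMetrics = analyzer.Analyze(promptText);
}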
CharacterCount
Gets or sets the character count of the prompt.
public int CharacterCount { get; set; }
Property Value
int
Remarks
The raw character count of the prompt string. Useful as a quick metric and for character-limited contexts.
For Beginners: How many letters/characters in the prompt. Similar to what you see when you check length in a text editor.
ComplexityScore
Gets or sets the complexity score of the prompt (0.0 to 1.0).
public double ComplexityScore { get; set; }
Property Value
double
Remarks
A normalized score indicating how complex the prompt is, considering factors like sentence structure, vocabulary diversity, nesting depth, and instruction count.
For Beginners: How complicated your prompt is (0 = simple, 1 = complex).
Low complexity (0.0-0.3):
- "What is 2+2?" (simple question)
- "Say hello" (simple instruction)
Medium complexity (0.3-0.7):
- "Summarize this article focusing on key points"
- "Translate and then explain the translation"
High complexity (0.7-1.0):
- "Analyze this code, identify bugs, suggest fixes, and explain your reasoning"
- Multi-step instructions with conditions
Complex prompts may need more capable models or clearer structure.
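As a sketch of how you might act on the score (the thresholds and model names are illustrative assumptions, not library behavior):
// Route simple prompts to a cheaper model and complex ones to a stronger model.
string model = metrics.ComplexityScore switch
{
    <= 0.3 => "gpt-3.5-turbo", // simple
    <= 0.7 => "gpt-4o-mini",   // medium
    _ => "gpt-4"               // complex
};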
DetectedPatterns
Gets or sets the detected prompt patterns or types.
public IReadOnlyList<string> DetectedPatterns { get; set; }
Property Value
IReadOnlyList&lt;string&gt;
Remarks
A list of patterns or prompt types detected in the prompt, such as "instruction", "question", "translation", "summarization", "chain-of-thought", etc.
For Beginners: What kind of prompt this is.
Examples of detected patterns:
- "instruction": Tells the AI to do something
- "question": Asks the AI something
- "translation": Asks for language translation
- "summarization": Asks for a summary
- "chain-of-thought": Asks AI to think step-by-step
- "few-shot": Contains examples
- "system-prompt": Sets AI behavior/role
Knowing the pattern helps:
- Choose the right model
- Optimize the prompt structure
- Apply appropriate preprocessing
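A small sketch of branching on a detected pattern (the pattern name is one of the examples listed above; Contains on IReadOnlyList&lt;string&gt; requires using System.Linq):
// If the analyzer flagged a few-shot prompt, report how many examples it found.
if (metrics.DetectedPatterns.Contains("few-shot"))
{
    Console.WriteLine($"Few-shot prompt with {metrics.ExampleCount} example(s)");
}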
EstimatedCost
Gets or sets the estimated API cost for this prompt.
public decimal EstimatedCost { get; set; }
Property Value
decimal
Remarks
The estimated cost in USD for processing this prompt with the target model. Based on current API pricing and the token count.
For Beginners: How much money this prompt will cost.
Example rates (as of 2024):
- GPT-4: ~$0.03 per 1K tokens input
- GPT-3.5: ~$0.001 per 1K tokens input
- Claude: ~$0.008 per 1K tokens input
A 500-token prompt on GPT-4 costs roughly $0.015. This helps you budget and optimize costs.
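As a sketch of the underlying arithmetic (the $0.03 per 1K rate is the illustrative 2024 GPT-4 figure above, not a live price):
// cost ≈ tokens / 1000 * price per 1K tokens
decimal ratePer1K = 0.03m;                             // illustrative GPT-4 input rate
decimal estimate = metrics.TokenCount / 1000m * ratePer1K;
Console.WriteLine($"Estimated cost: ${estimate:F4}");  // 500 tokens -> $0.0150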
ExampleCount
Gets or sets the count of examples included in the prompt (for few-shot prompts).
public int ExampleCount { get; set; }
Property Value
int
Remarks
The number of few-shot examples detected in the prompt. Higher example counts generally improve quality but increase token usage.
For Beginners: How many examples are included in your prompt.
Few-shot prompts include examples to teach the AI what you want:
"Translate English to Spanish:
- Hello -> Hola
- Goodbye -> Adios
Now translate: Good morning"
ExampleCount = 2 (the Hello and Goodbye examples)
More examples = better quality but more tokens (cost).
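For instance, a rough guard against example-heavy prompts (both thresholds below are illustrative assumptions):
// Warn when examples likely dominate the token budget.
if (metrics.ExampleCount > 5 && metrics.TokenCount > 2000)
{
    Console.WriteLine("Consider trimming examples to reduce cost.");
}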
ModelName
Gets or sets the name of the model used for token counting.
public string ModelName { get; set; }
Property Value
string
Remarks
Different models use different tokenizers, so token counts can vary. This property records which model's tokenizer was used for this analysis.
For Beginners: Which AI model's counting method was used. GPT-4 and Claude count tokens differently, so this tells you which was used.
TokenCount
Gets or sets the total token count of the prompt.
public int TokenCount { get; set; }
Property Value
int
Remarks
The number of tokens in the prompt as counted by the relevant tokenizer. Token count directly affects API costs and context window limits.
For Beginners: Tokens are like "word pieces" that AI models understand.
Examples:
- "Hello" = 1 token
- "Hello, world!" = 3 tokens
- "antidisestablishmentarianism" = 4+ tokens (long words split up)
Why it matters:
- API pricing is per-token
- Models have maximum token limits (e.g., 4K, 8K, 128K)
- More tokens = more cost and slower processing
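A sketch of a pre-flight check against a context window (the 8K limit is an illustrative assumption; use your target model's actual limit):
const int contextWindow = 8192; // illustrative model limit
if (metrics.TokenCount > contextWindow)
{
    Console.WriteLine($"Prompt uses {metrics.TokenCount} tokens, over the {contextWindow}-token window.");
}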
VariableCount
Gets or sets the number of template variables in the prompt.
public int VariableCount { get; set; }
Property Value
int
Remarks
The count of variable placeholders (e.g., {variable}) found in the prompt. Useful for validating that all expected variables are present.
For Beginners: How many {blanks} need to be filled in.
Example: "Translate {text} from {source_language} to {target_language}" VariableCount = 3
This helps you:
- Verify your template has the right number of variables
- Catch typos in variable names
- Document template requirements
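For example, a minimal template check (expectedVariables is a hypothetical list you would define for your own template):
// Verify the template exposes as many placeholders as the caller expects.
string[] expectedVariables = { "text", "source_language", "target_language" };
if (metrics.VariableCount != expectedVariables.Length)
{
    Console.WriteLine($"Expected {expectedVariables.Length} variables but found {metrics.VariableCount}.");
}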
WordCount
Gets or sets the word count of the prompt.
public int WordCount { get; set; }
Property Value
int
Remarks
The approximate word count of the prompt. Note that token count is more relevant for API costs, but word count is useful for human understanding.
For Beginners: How many words are in the prompt. This is a rough estimate; tokens are what actually matter to the AI. Typically, 1 word ≈ 1.3 tokens on average.
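As a quick sanity check of that rule of thumb (the 1.3 multiplier is the approximation above, not a tokenizer):
// Estimate tokens from the word count and compare with the real count.
double approxTokens = metrics.WordCount * 1.3;
Console.WriteLine($"Words: {metrics.WordCount}, ~{approxTokens:F0} tokens (actual {metrics.TokenCount})");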