Understanding how Microscope.ai scores your AI presence is essential for interpreting results and tracking progress. This guide explains each metric, how scores are calculated, and what they mean.

Overall Score
What It Is
The primary metric representing your overall AI presence performance across all prompts and models.
How It's Calculated
The overall score aggregates multiple factors:
- Mention frequency (40% weight)
- Response quality and accuracy (30% weight)
- Positioning and ranking (20% weight)
- Sentiment and tone (10% weight)
Exact weighting may vary based on your industry and configuration.
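For illustration, here is a minimal sketch of that weighted aggregation in Python. The component names and the 0-100 scale are assumptions for the example; Microscope.ai's exact formula and weights may differ by industry and configuration.

```python
# Minimal sketch of the weighted aggregation described above.
# Component names and the 0-100 scale are illustrative assumptions.

WEIGHTS = {
    "mention_frequency": 0.40,
    "quality_accuracy": 0.30,
    "positioning": 0.20,
    "sentiment": 0.10,
}

def overall_score(components: dict[str, float]) -> float:
    """Combine component scores (each 0-100) into a weighted overall score."""
    return sum(components[name] * weight for name, weight in WEIGHTS.items())

# Example: strong mentions, good accuracy, mid positioning, positive sentiment
print(overall_score({
    "mention_frequency": 80,
    "quality_accuracy": 90,
    "positioning": 70,
    "sentiment": 85,
}))  # -> 81.5
```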
Score Ranges
- 90-100% - Excellent: Strong, consistent AI presence
- 75-89% - Good: Solid performance with minor gaps
- 60-74% - Fair: Moderate presence, needs improvement
- 40-59% - Poor: Significant gaps in AI representation
- 0-39% - Critical: Minimal or problematic AI presence
What Impacts Overall Score
- Number of prompts where you're mentioned
- Accuracy of information provided
- Position in AI responses (earlier is better)
- Favorability of mentions
- Consistency across AI models
Core Metrics
Mention Rate
Definition
Percentage of prompts where your brand or products are mentioned in AI responses.
Calculation
Mention Rate = (Prompts with mentions / Total prompts) × 100
Example
- Total prompts: 30
- Prompts with mentions: 24
- Mention Rate: 80%
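A quick sketch of that calculation, assuming each prompt result is reduced to a simple mentioned/not-mentioned flag (the data shape is illustrative, not Microscope.ai's export format):

```python
# Mention rate: share of prompts whose AI responses mention the brand.

def mention_rate(mentioned_flags: list[bool]) -> float:
    """Percentage of prompts with at least one brand mention."""
    if not mentioned_flags:
        return 0.0
    return sum(mentioned_flags) / len(mentioned_flags) * 100

# 24 of 30 prompts produced a mention
flags = [True] * 24 + [False] * 6
print(mention_rate(flags))  # -> 80.0
```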
Interpretation
- Above 70% - Strong visibility
- 50-70% - Moderate visibility
- Below 50% - Limited visibility, improvement needed
What It Tells You
- How aware AI models are of your brand
- Which categories you're visible in
- Gaps in AI knowledge about you
Average Position
Definition
When mentioned, where you typically appear in AI responses.
Calculation
Average of your position in all responses that mention you. Position 1 = first mentioned, Position 2 = second mentioned, etc.
Example
- Mentioned first in 10 responses: Position 1
- Mentioned second in 8 responses: Position 2
- Mentioned third in 6 responses: Position 3
- Average Position: (10×1 + 8×2 + 6×3) / 24 = 1.83
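The same example as a short sketch (positions are 1-based, and the input shape is assumed for illustration):

```python
# Average position across all responses that mention the brand.

def average_position(positions: list[int]) -> float:
    """Mean 1-based position of the brand in mentioning responses."""
    return sum(positions) / len(positions)

positions = [1] * 10 + [2] * 8 + [3] * 6   # 24 mentioning responses
print(round(average_position(positions), 2))  # -> 1.83
```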
Interpretation
- 1.0-1.5 - Excellent: Usually top-of-mind
- 1.5-2.5 - Good: Among top mentions
- 2.5-3.5 - Fair: Competing for attention
- Above 3.5 - Needs work: Often an afterthought
What It Tells You
- Your priority in AI recommendations
- Competitive positioning
- Brand strength perception
Accuracy Rate
Definition
Percentage of mentions where AI provides correct information about you.
Calculation
Accuracy Rate = (Accurate mentions / Total mentions) × 100
What Constitutes Accuracy
- Correct product names and descriptions
- Accurate features and specifications
- Current pricing and availability
- Correct brand attributes
- Valid URLs and contact information
Example
- Total mentions: 24
- Accurate mentions: 21
- Inaccurate mentions: 3
- Accuracy Rate: 87.5%
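A sketch of the calculation, assuming each mention has already been reviewed and counted as accurate or inaccurate:

```python
# Accuracy rate: share of mentions containing correct information.

def accuracy_rate(accurate: int, total: int) -> float:
    """Percentage of mentions judged accurate."""
    return accurate / total * 100 if total else 0.0

print(accuracy_rate(accurate=21, total=24))  # -> 87.5
```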
Interpretation
- 95-100% - Excellent: Highly accurate
- 85-94% - Good: Mostly accurate
- 70-84% - Fair: Notable inaccuracies
- Below 70% - Critical: Major accuracy issues
What It Tells You
- Quality of AI knowledge about you
- Content accuracy and freshness
- Need for fact correction
Sentiment Score
Definition
How favorably or unfavorably AI models represent you.
Calculation
Analyzes tone, language, and context of mentions to determine positive, neutral, or negative sentiment.
Scale
- 80-100% - Very positive mentions
- 60-79% - Mostly positive
- 40-59% - Neutral or mixed
- 20-39% - Leaning negative
- 0-19% - Predominantly negative
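If you want to label scores programmatically, here is a simple sketch that maps a 0-100 sentiment score onto the bands above. How Microscope.ai derives the underlying score from tone and context is not shown here.

```python
# Map a 0-100 sentiment score onto the bands listed in this guide.

def sentiment_band(score: float) -> str:
    if score >= 80:
        return "Very positive"
    if score >= 60:
        return "Mostly positive"
    if score >= 40:
        return "Neutral or mixed"
    if score >= 20:
        return "Leaning negative"
    return "Predominantly negative"

print(sentiment_band(75))  # -> Mostly positive
```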
What It Tells You
- How favorably AI positions you
- Reputation in AI responses
- Potential trust issues
Category-Level Metrics
Each prompt category has its own set of scores:
Brand Category Score
Measures:
- Brand awareness and recognition
- Brand positioning vs. competitors
- Brand attribute accuracy
- Brand reputation
Product Category Score
Measures:
- Product recommendation frequency
- Product description accuracy
- Product-use case matching
- Product competitiveness
Technical Category Score
Measures:
- Technical information accuracy
- Specification completeness
- Compatibility information
- Implementation guidance quality
Trust Category Score
Measures:
- Credibility indicators
- Customer satisfaction representation
- Awards and recognition mentions
- Review and rating accuracy
Model-Specific Metrics
Results are also broken down by individual AI model:
Per-Model Scores
- GPT mention rate, position, accuracy
- Gemini mention rate, position, accuracy
- Claude mention rate, position, accuracy
- Perplexity mention rate, position, accuracy
Model Comparison Metrics
- Range - Difference between highest and lowest model scores
- Consistency - How similar scores are across models
- Best performer - Which model represents you best
- Worst performer - Which model needs most improvement
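A sketch of how these comparison metrics can be derived from per-model scores. The numbers are hypothetical, and "consistency" here is one reasonable reading (100 minus the score range), not necessarily Microscope.ai's exact definition.

```python
# Compare hypothetical overall scores across AI models.

model_scores = {"GPT": 82, "Gemini": 74, "Claude": 79, "Perplexity": 68}

best = max(model_scores, key=model_scores.get)       # best performer
worst = min(model_scores, key=model_scores.get)      # worst performer
score_range = model_scores[best] - model_scores[worst]
consistency = 100 - score_range                      # higher = more similar

print(best, worst, score_range, consistency)  # -> GPT Perplexity 14 86
```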
Advanced Metrics
Competitive Comparison
When competitive prompts are included:
- Relative mention rate - How often you're mentioned vs. competitors
- Head-to-head win rate - Direct comparisons in your favor
- Competitive positioning - Average position vs. competitors
- Share of voice - Your proportion of total mentions
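As a rough illustration, share of voice can be read as your proportion of all brand mentions across competitive prompts. The counts below are hypothetical:

```python
# Share of voice: your mentions as a share of all brand mentions.

mentions = {"You": 24, "Competitor A": 31, "Competitor B": 17}

share_of_voice = mentions["You"] / sum(mentions.values()) * 100
print(round(share_of_voice, 1))  # -> 33.3
```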
Trend Metrics
When historical data exists:
- Week-over-week change - Recent trend direction
- Month-over-month change - Medium-term trends
- Since baseline - Total improvement from first analysis
- Velocity - Rate of improvement or decline
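A sketch of these trend calculations over a hypothetical weekly series of overall scores. "Velocity" is read here as average change per period, which may differ from Microscope.ai's internal definition.

```python
# Trend metrics over a weekly series of overall scores (oldest -> newest).

scores = [58, 61, 63, 62, 66, 69]

week_over_week = scores[-1] - scores[-2]        # 3
since_baseline = scores[-1] - scores[0]         # 11
velocity = since_baseline / (len(scores) - 1)   # 2.2 points per week

print(week_over_week, since_baseline, round(velocity, 1))
```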
Benchmarks and Targets
Industry Benchmarks
Microscope.ai may provide industry-specific benchmarks:
- Average scores for your industry
- Top performer benchmarks
- Percentile rankings
- Category-specific norms
Setting Your Own Targets
Recommended targets to set:
- Mention rate goal - e.g., 75% or higher
- Accuracy goal - e.g., 95% or higher
- Position goal - e.g., under 2.0 average
- Category goals - Specific targets per category
- Model goals - Specific targets per AI model
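One way to keep targets actionable is to encode them and check each new execution against them. Below is a minimal sketch, with target values taken from the examples above and hypothetical latest results:

```python
# Check the latest results against your own targets.

targets = {"mention_rate": 75, "accuracy": 95, "avg_position": 2.0}
latest  = {"mention_rate": 80, "accuracy": 87.5, "avg_position": 1.83}

for metric, goal in targets.items():
    value = latest[metric]
    # Lower is better for average position; higher is better otherwise
    met = value <= goal if metric == "avg_position" else value >= goal
    print(f"{metric}: {value} (target {goal}) -> {'met' if met else 'not met'}")
```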
Score Interpretation Examples
Example 1: Strong Performance
- Overall Score: 85%
- Mention Rate: 80%
- Average Position: 1.6
- Accuracy: 95%
- Sentiment: 88%
Interpretation: Excellent AI presence. Strong visibility, accurate information, favorable positioning. Focus on maintaining performance and making incremental optimizations.
Example 2: Mixed Performance
- Overall Score: 68%
- Mention Rate: 70%
- Average Position: 2.8
- Accuracy: 78%
- Sentiment: 75%
Interpretation: Moderate presence with room for improvement. Good mention rate but weaker positioning and accuracy. Priority: Fix inaccuracies, improve positioning.
Example 3: Weak Performance
- Overall Score: 42%
- Mention Rate: 45%
- Average Position: 3.5
- Accuracy: 65%
- Sentiment: 60%
Interpretation: Poor AI presence needing significant work. Limited visibility, low position, many inaccuracies. Priority: Comprehensive content strategy and optimization.
Using Metrics to Drive Action
Low Mention Rate
Actions to take:
- Increase online presence and content
- Improve SEO and discoverability
- Create comprehensive product information
- Build authority in your space
Poor Average Position
Actions to take:
- Strengthen competitive positioning
- Highlight unique value propositions
- Create comparison content
- Increase brand authority signals
Low Accuracy Rate
Actions to take:
- Update outdated information
- Correct factual errors
- Ensure consistency across web properties
- Create authoritative reference content
Negative Sentiment
Actions to take:
- Address reputation issues
- Respond to negative reviews
- Highlight positive customer experiences
- Improve trust signals
Metrics Best Practices
- Track consistently - Same metrics over time
- Focus on trends - Direction matters more than absolute values
- Set realistic goals - Incremental improvements add up
- Prioritize by impact - Fix what matters most first
- Celebrate wins - Acknowledge improvements
- Be patient - Scores improve gradually, not overnight
- Combine metrics - Look at the full picture, not single metrics
Common Metric Misinterpretations
Myth: Higher is Always Better
Reality: Context matters. A 100% mention rate might mean you're over-optimized or in a tiny niche; 80-90% is often optimal.
Myth: Overall Score is All That Matters
Reality: Category and model-specific scores reveal actionable insights. Don't ignore the details.
Myth: One Bad Score Means Failure
Reality: Few businesses excel in all areas. Identify priorities and improve systematically.
Myth: Scores Should Improve Every Week
Reality: Fluctuations are normal. Look for overall trends over months, not weekly changes.
Next Steps
With metrics mastered, you're ready to:
- Explore detailed prompt-by-prompt results
- Compare results across executions
- Set performance targets and KPIs
- Build optimization strategies based on metrics