The results dashboard is your command center for understanding AI performance. This guide explains how to read and interpret each component to quickly grasp your AI presence.

Accessing Analysis Results
- Navigate to Project Analyses
- Find your executed analysis
- Click on the analysis name
- The results dashboard opens
Alternatively, open the dashboard directly from the execution-completion notification.

Dashboard Overview
The results dashboard typically displays several key sections:
- Overall Performance Summary - High-level scores and metrics
- Category Breakdown - Performance by prompt category
- Model Comparison - How different AI models perform
- Detailed Results Table - Prompt-by-prompt results
- Trends - Historical comparison (if available)
- Insights and Recommendations - AI-generated guidance
Overall Performance Summary
Overall Score
The primary metric representing your AI presence:
- Typically displayed as a percentage or on a 0-100 scale
- Aggregates performance across all prompts and models
- Higher score = better AI representation
- Color-coded: Green (good), Yellow (moderate), Red (needs work)
What the Overall Score Measures
- Mention frequency - How often you're mentioned
- Response quality - Accuracy and completeness of mentions
- Positioning - How favorably you're presented
- Competitive standing - Performance vs. competitors
The overall score provides a quick health check. Dive into details for actionable insights.
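As a rough illustration of how these components might combine, the sketch below computes a weighted aggregate. The weights, function name, and component scores are assumptions for illustration, not Microscope.ai's actual formula.

```python
# Hypothetical illustration: an overall score as a weighted aggregate
# of component metrics. Weights and names are assumptions, not the
# product's actual formula.

def overall_score(mention_freq, quality, positioning, competitive,
                  weights=(0.35, 0.25, 0.20, 0.20)):
    """Combine four 0-100 component scores into a single 0-100 score."""
    components = (mention_freq, quality, positioning, competitive)
    return sum(w * c for w, c in zip(weights, components))

print(f"Overall score: {overall_score(75, 80, 70, 65):.0f}/100")
# -> Overall score: 73/100
```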
Key Metrics at a Glance
Additional summary metrics typically include the following (a computation sketch follows this list):
Mention Rate
- Percentage of prompts where you were mentioned
- Example: "Mentioned in 75% of responses"
- Benchmark: 70%+ is strong
Average Position
- When mentioned, where you appear in responses
- Example: "Average position: 2.3"
- Lower is better (1st position is best)
Positive Sentiment
- Percentage of mentions that are favorable
- Example: "85% positive mentions"
- Benchmark: 80%+ is excellent
Accuracy Rate
- How often AI-generated information about you is correct
- Example: "90% accuracy"
- Target: 95%+ accuracy
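The sketch below shows how these four metrics could be derived from per-prompt results. The record structure and field names are hypothetical; the actual data model may differ.

```python
# Hypothetical per-prompt records; field names are illustrative only.
responses = [
    {"mentioned": True,  "position": 1,    "sentiment": "positive", "accurate": True},
    {"mentioned": True,  "position": 3,    "sentiment": "positive", "accurate": True},
    {"mentioned": True,  "position": 2,    "sentiment": "negative", "accurate": False},
    {"mentioned": False, "position": None, "sentiment": None,       "accurate": None},
]

mentions = [r for r in responses if r["mentioned"]]

mention_rate = len(mentions) / len(responses) * 100
avg_position = sum(r["position"] for r in mentions) / len(mentions)
positive_pct = sum(r["sentiment"] == "positive" for r in mentions) / len(mentions) * 100
accuracy = sum(r["accurate"] for r in mentions) / len(mentions) * 100

print(f"Mention rate: {mention_rate:.0f}%")        # -> 75%
print(f"Average position: {avg_position:.1f}")     # -> 2.0
print(f"Positive sentiment: {positive_pct:.0f}%")  # -> 67%
print(f"Accuracy: {accuracy:.0f}%")                # -> 67%
```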
Category Breakdown
Performance segmented by prompt category:
Category Scores
- Brand - Brand awareness and positioning score
- Product - Product recommendation performance
- Technical - Technical accuracy score
- Trust - Credibility and reputation score
Interpreting Category Performance
Strong Category
- Score of 75% or higher
- Consistent mentions
- Accurate information
- Good positioning
Moderate Category
- Score 50-74%
- Inconsistent mentions
- Some inaccuracies
- Room for optimization
Weak Category
- Score below 50%
- Rarely mentioned
- Significant inaccuracies
- Priority for improvement
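A minimal sketch of this tiering, using the thresholds from this guide (the product's actual thresholds may differ):

```python
# Map a 0-100 category score to the tiers described above.
# Thresholds follow this guide; actual product thresholds may differ.

def category_tier(score):
    if score >= 75:
        return "strong"
    if score >= 50:
        return "moderate"
    return "weak"

for name, score in {"Brand": 82, "Product": 61, "Trust": 44}.items():
    print(f"{name}: {score}% -> {category_tier(score)}")
```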
Category Comparison
Understanding relative category performance:
- Identify strongest categories to leverage
- Spot weakest categories for improvement
- Understand where to focus content efforts
- Recognize patterns (e.g., strong product, weak brand)
Model Comparison
Performance across different AI models:
Per-Model Scores
- GPT score
- Gemini score
- Claude score
- Perplexity score
Model Performance Patterns
Consistent Performance
- Similar scores across all models
- Indicates strong overall presence
- Less model-specific optimization needed
Variable Performance
- Significant differences between models
- Some models represent you better
- Opportunities for model-specific optimization
- May reflect training data differences
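One simple heuristic for telling these two patterns apart is the spread between your best and worst per-model scores, as in this sketch (the scores and the 10-point cutoff are assumptions):

```python
# Classify model performance by the spread between the best and
# worst per-model score. The 10-point cutoff is an assumed threshold.

model_scores = {"GPT": 78, "Gemini": 74, "Claude": 77, "Perplexity": 52}

spread = max(model_scores.values()) - min(model_scores.values())
pattern = "consistent" if spread <= 10 else "variable"

print(f"Spread: {spread} points -> {pattern} performance")
# Here Perplexity lags the others, pointing to model-specific work.
```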
Model-Specific Insights
- Which models mention you most frequently
- Which models provide the most accurate information
- Where you have the strongest competitive positioning
- Which models need optimization focus
Trend Indicators
If historical data exists:
Trend Arrows
- Up arrow - Performance improved vs. the previous execution
- Down arrow - Performance declined
- Flat - No significant change
- Usually shown next to scores
Percentage Change
- Example: "+5% vs. last week"
- Quantifies improvement or decline
- Green for positive, red for negative
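Under the hood this is plain percent change against the previous execution. A sketch, with an assumed "flat" band below which a change counts as insignificant:

```python
# Percent change vs. the previous execution, plus the arrow shown
# on the dashboard. The 1% flat band is an assumption.

def trend(current, previous, flat_band=1.0):
    change = (current - previous) / previous * 100
    if abs(change) < flat_band:
        return change, "flat"
    return change, "up" if change > 0 else "down"

change, arrow = trend(current=73, previous=69.5)
print(f"{change:+.0f}% vs. last week ({arrow})")  # -> +5% vs. last week (up)
```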
Trend Graphs
- Line chart showing score over time
- Spot patterns and trends
- Identify the impact of optimization efforts
- Reveal seasonal or event-driven changes
Insights and Recommendations
AI-Generated Insights
Microscope.ai analyzes results and provides:
- Key findings - Most important takeaways
- Strengths - What you're doing well
- Weaknesses - Areas needing attention
- Opportunities - Where to focus efforts
- Threats - Competitive or accuracy issues
Actionable Recommendations
Specific guidance on next steps:
- Content to create or update
- Inaccuracies to correct
- Competitive gaps to address
- Optimization priorities
Using Insights Effectively
- Read insights before diving into details
- Prioritize recommendations by impact
- Share insights with content team
- Track which recommendations you implement
- Measure impact of actions taken
Quick Actions
Common dashboard actions:
- View Details - Drill into prompt-by-prompt results
- Export Results - Download data for reporting
- Compare - Compare to previous executions
- Share - Share dashboard with team
- Run Again - Execute the analysis again
- Schedule - Set up recurring execution
Understanding Dashboard Colors
Score Color Coding
Green
- Score: 75-100%
- Meaning: Strong performance
- Action: Maintain and leverage
Yellow/Orange
- Score: 50-74%
- Meaning: Moderate performance
- Action: Optimize and improve
Red
- Score: 0-49%
- Meaning: Weak performance
- Action: Urgent attention needed
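These default bands reduce to a simple lookup, sketched below. It is parameterized so the custom thresholds mentioned under Dashboard Customization could override the defaults:

```python
# Map a 0-100 score to its color band. Defaults follow this guide;
# plans with custom thresholds would pass different values.

def score_color(score, thresholds=(75, 50)):
    green_min, yellow_min = thresholds
    if score >= green_min:
        return "green"
    if score >= yellow_min:
        return "yellow"
    return "red"

print(score_color(82), score_color(63), score_color(41))
# -> green yellow red
```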
Status Indicators
- Green check - All good, no issues
- Yellow warning - Minor issues or alerts
- Red error - Significant problems
- Blue info - Informational messages
Dashboard Customization
Some plans allow customizing dashboard views:
- Choose which metrics to display
- Reorder dashboard sections
- Set custom thresholds for color coding
- Create custom views for different audiences
- Save dashboard configurations
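For illustration, a saved configuration might look like the sketch below. The field names and structure are hypothetical, not Microscope.ai's actual schema:

```python
# Hypothetical saved dashboard configuration; fields are illustrative.
dashboard_config = {
    "name": "Executive view",
    "metrics": ["overall_score", "mention_rate", "positive_sentiment"],
    "section_order": ["summary", "trends", "insights"],
    # Stricter than the default 75/50 color bands
    "color_thresholds": {"green_min": 80, "yellow_min": 60},
}
```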
Reading the Dashboard Strategically
Top-Down Approach
- Start with overall score
- Review category breakdown
- Check model comparison
- Read insights
- Then drill into details
Bottom-Up Approach
- Start with detailed results table
- Identify specific issues
- See how they aggregate to category scores
- Understand impact on overall score
- Validate with insights
Comparative Approach
- Look at trends first
- Compare to previous executions
- Identify what changed
- Understand drivers of change
- Assess optimization impact
Common Dashboard Scenarios
First-Time Analysis
- No trends available yet
- Establishes baseline
- Focus on absolute scores
- Identify biggest gaps
- Set improvement priorities
Post-Optimization Check
- Compare to pre-optimization baseline
- Look for improvements in targeted areas
- Assess if changes worked
- Identify next optimization targets
Regular Monitoring
- Focus on trends
- Quick scan for anomalies
- Monitor key metrics
- Track toward goals
- Flag issues early
Competitive Analysis
- Focus on positioning metrics
- Compare mention rates
- Analyze competitive prompts
- Identify differentiation opportunities
Dashboard Best Practices
- Review regularly - Don't just run analyses, review them
- Share widely - Ensure stakeholders see results
- Track trends - Focus on direction, not just absolute scores
- Act on insights - Dashboard is useless without action
- Document findings - Keep notes on what you learn
- Celebrate wins - Acknowledge improvements
- Stay objective - Don't ignore bad news
What the Dashboard Doesn't Show
Important limitations to understand:
- Doesn't show the exact AI responses (see the detailed view)
- Doesn't explain why scores are what they are (that requires your own analysis)
- Doesn't provide context about your industry or competitors
- Doesn't automatically identify root causes
- Doesn't prescribe exact content to create
Next Steps
Now that you understand the dashboard, you're ready to:
- Learn about specific metrics and scoring
- Explore the detailed results table
- Compare results across executions
- Use insights to drive optimization