Comparing analysis results over time is essential for understanding trends, measuring optimization impact, and validating your AI strategy. This guide explains how to effectively compare executions and interpret changes.

Why Compare Results

Key Benefits
- Track progress toward goals
- Validate optimization efforts
- Identify trends and patterns
- Measure ROI of AI monitoring
- Spot anomalies or issues early
- Build data-driven strategies
What Comparisons Reveal
- Whether your AI presence is improving or declining
- Which optimizations are working
- Seasonal or cyclical patterns
- Impact of content changes
- Competitive dynamics shifts
- AI model behavior changes
Types of Comparisons
Sequential Comparison
Compare the most recent execution to the previous one:
- Shows immediate impact of recent changes
- Identifies short-term trends
- Quick validation of optimizations
- Best for weekly or frequent monitoring
Baseline Comparison
Compare current results to your initial baseline:
- Shows total progress since starting
- Measures cumulative improvement
- Validates overall strategy
- Best for quarterly reviews
Period Comparison
Compare specific time periods:
- Month-over-month
- Quarter-over-quarter
- Year-over-year
- Before/after campaign or optimization
Multi-Point Trend Analysis
Compare multiple executions to see trends:
- View progression over 5-10 executions
- Identify patterns and trends
- Smooth out anomalies
- Best for long-term strategy
Accessing Comparison Views
From Analysis Detail Page
- Open your analysis
- View Execution History
- Select the executions to compare
- Click the Compare button
From Dashboard
- Navigate to Project Analyses
- Select an analysis
- Choose the Trends or Compare view
- Select a time range or specific executions
Comparison Interface
Typical comparison views include:
- Side-by-side score comparison
- Trend line graphs
- Change indicators (arrows, percentages)
- Detailed metric comparisons
- Prompt-level differences
Reading Comparison Data
Overall Score Trends
Upward Trend
- Scores increasing over time
- Green indicators
- Positive percentage changes
- Example: 65% → 72% → 78%
Interpretation: Optimizations are working. Continue current strategy.
Downward Trend
- Scores decreasing over time
- Red indicators
- Negative percentage changes
- Example: 78% → 72% → 65%
Interpretation: Performance declining. Investigate causes and adjust strategy.
Stable/Flat Trend
- Scores roughly constant
- Minor fluctuations
- Example: 72% → 71% → 73%
Interpretation: Maintaining position. May need new optimizations for growth.
Volatile Trend
- Large swings up and down
- Inconsistent performance
- Example: 65% → 80% → 62% → 77%
Interpretation: Unstable. Investigate causes of volatility.
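If you record or export the overall score from each execution, these trend shapes can also be checked programmatically. A minimal sketch, assuming a plain list of scores per execution (the function name and thresholds are illustrative, not part of any built-in feature):

```python
def classify_trend(scores, flat_band=2.0, volatility_band=8.0):
    """Classify a series of execution scores (percent values, oldest first)."""
    if len(scores) < 3:
        return "insufficient data"

    deltas = [b - a for a, b in zip(scores, scores[1:])]
    net_change = scores[-1] - scores[0]
    max_swing = max(abs(d) for d in deltas)

    # Large swings in both directions dominate any net movement.
    if max_swing >= volatility_band and any(d > 0 for d in deltas) and any(d < 0 for d in deltas):
        return "volatile"
    if net_change > flat_band:
        return "upward"        # e.g. 65 -> 72 -> 78
    if net_change < -flat_band:
        return "downward"      # e.g. 78 -> 72 -> 65
    return "stable"            # e.g. 72 -> 71 -> 73

print(classify_trend([65, 72, 78]))       # upward
print(classify_trend([65, 80, 62, 77]))   # volatile
```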
Category-Level Comparisons
- Compare each category (Brand, Product, Technical, Trust)
- Identify which categories are improving
- Spot category-specific issues
- Prioritize optimization focus
Example Pattern
- Brand: 60% → 68% → 75% (Strong improvement)
- Product: 75% → 74% → 76% (Stable, strong)
- Technical: 55% → 52% → 50% (Declining)
- Trust: 70% → 71% → 72% (Slight improvement)
Action: Prioritize Technical category optimization.
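If category scores are kept per execution in a spreadsheet or export, a short sketch like the one below (dictionary keys, scores, and the prioritization rule are illustrative) can flag which category to work on next; the same approach works for per-model scores:

```python
# Category scores across three executions, oldest first (illustrative data).
history = {
    "Brand":     [60, 68, 75],
    "Product":   [75, 74, 76],
    "Technical": [55, 52, 50],
    "Trust":     [70, 71, 72],
}

# Net change from first to latest execution, per category.
changes = {cat: scores[-1] - scores[0] for cat, scores in history.items()}

for cat, change in sorted(changes.items(), key=lambda kv: kv[1]):
    print(f"{cat:10s} {change:+d} points")

worst = min(changes, key=changes.get)
print(f"Prioritize: {worst}")   # -> Technical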
Model-Specific Comparisons
- Track performance per AI model
- Identify model-specific trends
- See which models are improving/declining
- Validate model-specific optimizations
Example Pattern
- GPT: 70% → 75% → 80% (Improving)
- Gemini: 65% → 66% → 67% (Slow improvement)
- Claude: 72% → 70% → 68% (Declining)
- Perplexity: 60% → 62% → 64% (Improving)
Action: Investigate why Claude performance is declining.
Understanding Change Indicators
Absolute Change
Direct point difference:
- Example: 65% → 72% = +7 points
- Simple to understand
- Good for small changes
Percentage Change
Relative change:
- Example: 65% → 72% = +10.8% improvement
- Better for comparing different metrics
- Standard in reporting
Rate of Change
How fast change is occurring:
- Example: +7 points over 4 weeks = +1.75 points/week
- Helps predict future performance
- Identifies acceleration or deceleration
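All three indicators are simple arithmetic on the same pair of scores. A minimal sketch, using the numbers from the examples above (the function and parameter names are illustrative):

```python
def change_indicators(old, new, weeks_between):
    """Return the absolute, percentage, and rate-of-change views of one score move."""
    absolute = new - old                   # 65 -> 72 gives +7 points
    percent = (new - old) / old * 100      # +10.8% relative improvement
    rate = absolute / weeks_between        # +1.75 points/week over 4 weeks
    return absolute, percent, rate

abs_pts, pct, per_week = change_indicators(65, 72, weeks_between=4)
print(f"{abs_pts:+.0f} points, {pct:+.1f}%, {per_week:+.2f} points/week")
```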
Focus on trends over time, not single-execution changes. One bad execution doesn't make a trend.
Identifying Meaningful Changes
Statistical Significance
Not all changes are meaningful:
Meaningful Changes
- Large magnitude - 5+ point changes
- Consistent direction - Same direction over multiple executions
- Multiple metrics - Change visible across several metrics
- Explainable - Correlates with actions taken
Noise/Fluctuation
- Small magnitude - 1-2 point changes
- Inconsistent - Up one execution, down the next
- Single metric - Only one metric shows change
- Random - No clear cause
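As a rough filter, the criteria above can be encoded as a heuristic. A sketch, assuming per-execution scores are available and using the 5-point magnitude and 1-2 point noise thresholds from this section as defaults:

```python
def is_meaningful(scores, min_magnitude=5.0, min_consistent_steps=2):
    """Rough heuristic: a large, consistently directed change counts as meaningful."""
    if len(scores) < min_consistent_steps + 1:
        return False
    deltas = [b - a for a, b in zip(scores, scores[1:])]
    recent = deltas[-min_consistent_steps:]
    same_direction = all(d > 0 for d in recent) or all(d < 0 for d in recent)
    total_move = abs(scores[-1] - scores[-1 - min_consistent_steps])
    return same_direction and total_move >= min_magnitude

print(is_meaningful([65, 72, 78]))   # True: +13 points, consistently upward
print(is_meaningful([71, 72, 71]))   # False: 1-point wobble, mixed direction
```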
Establishing Baselines
- The first 2-3 executions establish your baseline
- Expect some fluctuation
- Don't overreact to initial variations
- Look for sustained trends once the baseline is set
Comparing Specific Elements
Prompt-Level Comparison
Compare individual prompt performance:
- Which prompts improved most
- Which prompts declined
- New prompts performing well/poorly
- Consistent performers vs. volatile prompts
Use Cases
- Validate prompt-specific optimizations
- Identify underperforming prompts to remove
- Find successful prompts to replicate
- Track custom prompt effectiveness
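When prompt-level scores can be exported for two executions, a simple diff surfaces the biggest movers. A sketch with made-up prompt names and scores:

```python
# Prompt-level scores from two executions (illustrative data).
before = {"best project tools": 70, "pricing comparison": 55, "integration guide": 62}
after  = {"best project tools": 78, "pricing comparison": 51, "integration guide": 63}

# Delta per prompt, for prompts present in both executions.
moves = {p: after[p] - before[p] for p in before if p in after}

for prompt, delta in sorted(moves.items(), key=lambda kv: kv[1], reverse=True):
    label = "improved" if delta > 0 else "declined" if delta < 0 else "unchanged"
    print(f"{prompt:22s} {delta:+d}  ({label})")
```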
Mention Rate Trends
- Are you mentioned more or less frequently?
- Which prompts now mention you that didn't before?
- Where did you lose mentions?
- Are competitors earning new mentions?
Position Trends
- Are you moving up or down in rankings?
- Position improvements in key prompts
- Position declines to address
- Competitive positioning changes
Accuracy Trends
- Are inaccuracies being corrected?
- Are new inaccuracies appearing?
- Improvement after content updates
- Persistent accuracy issues
Correlating Results with Actions
Tracking Optimization Impact
To measure optimization effectiveness:
- Note date of content changes or optimizations
- Run analysis before and after optimization
- Compare results to measure impact
- Attribute improvements to specific actions
Example Correlation
Timeline:
- Week 1: Baseline execution - Product category score: 60%
- Week 2: Updated all product descriptions
- Week 3: Post-optimization execution - Product category: 68%
- Week 4: Follow-up execution - Product category: 72%
Conclusion: Product description updates drove a 12-point improvement.
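One lightweight way to keep attribution honest is to log each change alongside the execution data and compute the before/after delta from that log. A sketch using the timeline above (dates and scores are illustrative):

```python
from datetime import date

# Executions and a change log, both keyed by date (illustrative data).
executions = {
    date(2024, 3, 4):  {"Product": 60},   # Week 1: baseline
    date(2024, 3, 18): {"Product": 68},   # Week 3: post-optimization
    date(2024, 3, 25): {"Product": 72},   # Week 4: follow-up
}
change_log = {date(2024, 3, 11): "Updated all product descriptions"}

change_date, note = next(iter(change_log.items()))
before = [run["Product"] for d, run in executions.items() if d < change_date]
after = [run["Product"] for d, run in executions.items() if d >= change_date]

print(f"{note}: {before[-1]}% -> {after[-1]}% ({after[-1] - before[-1]:+d} points)")
```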
Attribution Challenges
- Multiple optimizations happening simultaneously
- External factors (competitor changes, AI model updates)
- Time lag between optimization and AI model learning
- Seasonal effects
Best Practice: Change one thing at a time when possible, or clearly document all changes.
Visualization and Reporting
Trend Line Charts
- Overall score over time
- Category scores over time
- Model-specific trends
- Key metrics (mention rate, position, accuracy)
Before/After Comparisons
- Side-by-side dashboards
- Highlight improvements in green
- Highlight declines in red
- Clear visual impact
Progress Reports
- Executive summaries
- Key wins and improvements
- Areas needing attention
- ROI of optimization efforts
Common Comparison Scenarios
Validating Optimization
Question: Did our content updates work?
- Compare execution before and after update
- Focus on affected categories or prompts
- Look for score improvements
- Check accuracy rate changes
- Verify mention rate increases
Quarterly Business Review
Question: How has our AI presence evolved this quarter?
- Compare first and last executions of quarter
- Show trend lines
- Highlight biggest improvements
- Identify ongoing challenges
- Set goals for next quarter
Campaign Performance
Question: Did our campaign improve AI awareness?
- Compare pre-campaign baseline
- Track during campaign
- Assess post-campaign
- Measure sustained impact
- Calculate campaign ROI
Competitive Benchmarking
Question: Are we gaining ground on competitors?
- Compare competitive positioning over time
- Track relative mention rates
- Monitor head-to-head prompt performance
- Identify competitive trends
Best Practices for Comparisons
- Compare like to like - Same analysis, same prompts
- Consistent timing - Same intervals for fair comparison
- Look at multiple metrics - Don't rely on overall score alone
- Consider context - External events, seasonality
- Document changes - Note what you changed between executions
- Be patient - AI presence improves gradually
- Celebrate wins - Acknowledge improvements
- Learn from declines - Investigate causes
Common Pitfalls
- Comparing different analyses (different prompts)
- Reacting to single-execution changes
- Ignoring external factors
- Not documenting what changed
- Focusing only on overall score
- Expecting immediate results
- Comparing inconsistent time intervals
Advanced Comparison Techniques
Cohort Analysis
- Group prompts by type or category
- Track cohort performance over time
- Identify successful prompt patterns
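A sketch of the idea, grouping prompts by type and averaging each cohort's score per execution (cohort labels and scores are made up):

```python
from collections import defaultdict
from statistics import mean

# Per-prompt scores across three executions, tagged with a cohort (illustrative).
prompts = [
    ("comparison", [62, 66, 71]),
    ("comparison", [58, 63, 69]),
    ("how-to",     [74, 73, 75]),
    ("how-to",     [70, 72, 71]),
]

cohorts = defaultdict(list)
for cohort, scores in prompts:
    cohorts[cohort].append(scores)

# Average each cohort's prompts per execution to see how the group trends.
for cohort, series in cohorts.items():
    per_execution = [round(mean(run), 1) for run in zip(*series)]
    print(f"{cohort:12s} {per_execution}")   # e.g. comparison  [60.0, 64.5, 70.0]
```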
Moving Averages
- Smooth out volatility
- See clearer trends
- Filter noise from signal
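A minimal sketch of a simple moving average over recent executions (the window size is an illustrative choice):

```python
def moving_average(scores, window=3):
    """Smooth a score series by averaging each point with its recent neighbours."""
    smoothed = []
    for i in range(len(scores)):
        recent = scores[max(0, i - window + 1): i + 1]
        smoothed.append(round(sum(recent) / len(recent), 1))
    return smoothed

raw = [65, 80, 62, 77]            # the volatile example from earlier in this guide
print(moving_average(raw))        # [65.0, 72.5, 69.0, 73.0] -- the swings flatten out
```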
Benchmark Indexing
- Set baseline = 100
- Express all future scores as an index of that baseline
- Easier to visualize relative progress
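The calculation is just dividing each score by the baseline and multiplying by 100. A sketch with illustrative scores:

```python
scores = [65, 72, 78, 81]    # oldest first; 65 is the baseline execution
baseline = scores[0]

# Index each score against the baseline (baseline = 100).
index = [round(s / baseline * 100, 1) for s in scores]
print(index)   # [100.0, 110.8, 120.0, 124.6] -- progress relative to the starting point
```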
Next Steps
With comparison skills mastered, you're ready to:
- Build long-term monitoring strategies
- Set data-driven optimization priorities
- Report progress to stakeholders
- Use insights to guide content strategy