Quality assurance evaluation is a critical component of contact center success. Yet, as a recent Brightmetrics webinar highlighted, many organizations struggle to extract meaningful insights from their native Genesys QA reporting tools.


The Challenge with Native Genesys QA Reporting

While Genesys offers a QA evaluation module, its reporting capabilities tend to be “lackluster,” according to Peter Hornberger, VP of Sales and Customer Success at Brightmetrics. One of the most common requests his team receives is for more granular filtering and analysis capabilities within evaluations.


Question-Level Analytics

The webinar demonstrated how drilling down to question-level analysis can reveal specific areas needing attention. For example, in one case study, analyzing individual questions revealed that “internal notes documentation” (Question 2.5) scored significantly lower at 91.69% compared to other metrics consistently in the high 90s.
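The kind of question-level rollup described above can be illustrated with a short Python sketch. The record layout below is purely hypothetical, not the Genesys or Brightmetrics schema; it simply shows how per-question percentages like the 91.69% figure are derived from points scored versus points possible across many evaluations:

```python
from collections import defaultdict

# Hypothetical evaluation records: each answer carries a question label,
# points scored, and points possible (field names are illustrative only).
evaluations = [
    {"agent": "A. Rivera", "answers": [
        {"question": "2.5 Internal notes documentation", "scored": 8, "possible": 10},
        {"question": "1.1 Greeting", "scored": 10, "possible": 10},
    ]},
    {"agent": "B. Chen", "answers": [
        {"question": "2.5 Internal notes documentation", "scored": 9, "possible": 10},
        {"question": "1.1 Greeting", "scored": 9.5, "possible": 10},
    ]},
]

def question_level_scores(evals):
    """Aggregate points scored vs. possible per question, as a percentage."""
    totals = defaultdict(lambda: {"scored": 0.0, "possible": 0.0})
    for ev in evals:
        for ans in ev["answers"]:
            totals[ans["question"]]["scored"] += ans["scored"]
            totals[ans["question"]]["possible"] += ans["possible"]
    return {q: round(100 * t["scored"] / t["possible"], 2) for q, t in totals.items()}

print(question_level_scores(evaluations))
# → {'2.5 Internal notes documentation': 85.0, '1.1 Greeting': 97.5}
```

With this kind of rollup, a question group whose percentage sits well below the rest (as "internal notes documentation" did in the case study) stands out immediately.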


Deep-Dive Analysis Capabilities

When examining question-level performance, managers can:

  • View scores by individual questions and question groups
  • Track points possible versus points scored (demonstrated with an example of 9 out of 18 possible points)
  • Identify whether issues are isolated to specific agents or widespread
  • Access detailed evaluation records, including call recordings
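The third capability above, distinguishing an isolated problem from a widespread one, amounts to grouping the same scores by agent instead of by question. A minimal sketch, again with hypothetical field names (the 9-of-18 row mirrors the webinar's points example):

```python
# Hypothetical per-answer records, flattened to one row per agent/question
# (field names are illustrative, not the Genesys or Brightmetrics schema).
rows = [
    {"agent": "A. Rivera", "question": "2.5", "scored": 9,  "possible": 18},
    {"agent": "B. Chen",   "question": "2.5", "scored": 17, "possible": 18},
    {"agent": "A. Rivera", "question": "1.1", "scored": 18, "possible": 18},
]

def agent_breakdown(rows, question):
    """Per-agent percentage on one question: is a low score isolated or widespread?"""
    out = {}
    for r in rows:
        if r["question"] == question:
            out[r["agent"]] = round(100 * r["scored"] / r["possible"], 1)
    return out

print(agent_breakdown(rows, "2.5"))
# → {'A. Rivera': 50.0, 'B. Chen': 94.4}
```

Here one agent scores 50% on the question while another scores 94.4%, which points to an individual coaching opportunity rather than a process-wide gap.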


Enhanced Filtering Options

The platform offers several critical filtering capabilities:

  • Filter by evaluator
  • Filter by work team or management group
  • Filter by conversation date versus evaluation date (a distinction not available in native Genesys reporting)
  • View evaluations by specific forms or question sets
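The conversation-date versus evaluation-date distinction can be sketched as filtering the same records on two different timestamp fields. The field names here are assumptions for illustration; the point is that the same date window can return different result sets depending on which field you filter:

```python
from datetime import date

# Hypothetical records: a call handled in March may not be scored until April.
evaluations = [
    {"id": 1, "conversation_date": date(2024, 3, 28), "evaluation_date": date(2024, 4, 2)},
    {"id": 2, "conversation_date": date(2024, 4, 1),  "evaluation_date": date(2024, 4, 3)},
]

def filter_by(evals, field, start, end):
    """Return evaluations whose chosen date field falls within [start, end]."""
    return [e for e in evals if start <= e[field] <= end]

april = (date(2024, 4, 1), date(2024, 4, 30))
# Same window, different answers: one call happened in March but was scored in April.
print([e["id"] for e in filter_by(evaluations, "conversation_date", *april)])  # → [2]
print([e["id"] for e in filter_by(evaluations, "evaluation_date", *april)])    # → [1, 2]
```

This is why the distinction matters for reporting: an "April" QA report looks different depending on whether you mean calls that occurred in April or evaluations completed in April.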


Visualization and Performance Tracking

The webinar showcased five types of visualizations:

  1. Overall evaluation score trending by week (90-day view)
  2. Evaluation scores by question group
  3. Agent leaderboards showing comparative performance
  4. Individual question tracking over time
  5. Question group breakdowns


Practical Applications

The webinar explored several real-world uses:

  • Identifying coaching opportunities by spotting consistently low-scoring questions
  • Comparing evaluator scoring patterns to ensure consistency
  • Tracking improvement trends over time
  • Creating performance visibility through leaderboards and dashboards


Evaluation Access and Integration

Key features highlighted include:

  • Ability to drill down from metrics to specific evaluations
  • Access to complete evaluation forms and scores
  • Direct links to call recordings (for authenticated users)
  • Cradle-to-grave reporting of customer interactions, including IVR paths


Key Takeaways

As emphasized in the webinar’s closing thoughts:

  1. Question-level evaluation reporting provides essential insights for targeted coaching
  2. The ability to filter by both conversation and evaluation dates offers important context
  3. Team and evaluator filtering helps reduce noise in larger contact centers
  4. Visual dashboards and leaderboards can drive performance improvements
  5. Easy access to supporting documentation streamlines the coaching process


The webinar demonstrated that while organizations invest significant time and manpower into QA evaluations, maximizing the value of this investment requires tools that can provide deeper, more actionable insights than standard platform reporting.


Note: This blog is based on a Brightmetrics webinar presentation by Peter Hornberger, VP of Sales and Customer Success at Brightmetrics, focusing on their Genesys QA evaluation reporting capabilities.
