CXHub - QA (Quality Assurance AI Review) Setup Guide

Written By Lance Quimby (Administrator)

Updated at January 13th, 2026

Who is this Guide For?

  • Crexendo Admins and Super Users responsible for configuring QA (Quality Assurance)

Goal

  • Set up and configure QA to automatically evaluate recorded voice calls using Parameters, Categories, Templates, and Snapshots
  • Enable dashboards and reporting for call quality visibility at scale

Prerequisites

  • Admin login access to QA
  • Customer call recordings are being captured and available for evaluation (voice channel)
  • Campaigns, Skills, Agents, and Dispositions you want to capture have been set up, and call data exists

Overview

QA is an AI-driven quality monitoring tool that analyzes customer calls and scores them against predefined criteria. It is designed for QA Managers, Supervisors, and Operations Leaders who want to reduce manual auditing, improve consistency, and identify coaching opportunities through structured scoring and reporting.

Key Value Propositions

  • Automated QA: Calls are evaluated against configurable criteria and scoring.
  • Flexible Configuration: Parameters, Categories, Templates, and Snapshots are customizable per customer/team.
  • Actionable Insights: Dashboards and reports highlight strengths, gaps, and trends.
  • Audit Transparency: Standardized scoring helps keep evaluation consistent and fair.

Key Concepts and Definitions

  • Parameter: An individual QA check (example: Greeting, Empathy, ID Verification, Mandatory Disclosure).
  • Category: A group of related Parameters (example: Communication, Compliance, Process Adherence).
  • Template: A complete QA evaluation form made of Categories and/or Parameters (example: “Support L1 – L3”).
  • Snapshot: A saved audit scope that selects which calls will be evaluated and which Template is applied.
  • Fatal: A Parameter option where failure can force the final “Score With Fatal” to become 0.
  • NA (Not Applicable): A condition where a Parameter does not apply and is excluded from scoring.

QA Dashboard

The QA Dashboard is the entry point for monitoring overall call quality performance. It provides a consolidated view of scoring trends across campaigns, skills, and agents.

Key Widgets

  • QA Details Tiles: audits completed, score bands, fatal/flagged calls
  • Agent Performance: top performers and agents needing improvement
  • Parameter Performance: strongest areas and opportunity areas
  • Score Distribution: by Skill / Campaign / Parameter / Disposition
  • Trends Over Time: score trends and audit volumes
  • QA Agent Details Table: agent, calls audited, duration, key parameter hits
  • Category Performance: (optional) category-level scoring trends

Filters

Filter dashboard results by Date range, Skill, Snapshot, Campaign, Evaluator, Language, and Disposition to narrow results and identify trends within a specific team or process.

Example Use Case: An Operations Leader filters by Campaign to confirm whether low compliance is isolated to one process or is trending across multiple teams.


QA Configuration

QA Configuration defines what gets audited and how scores are calculated. Configuration is made up of four building blocks, and they’re best created in this order: Parameters → Categories → Templates → Snapshots.

Recommended build order matters. Parameters are your “raw ingredients,” Categories organize them, Templates define the form, and Snapshots apply the form to real calls.


Step 1: Create Parameters

Parameters are the smallest unit of QA scoring. Each Parameter represents an individual check you want the system to evaluate (example: Greeting, Empathy, Verified Customer ID, Disclosure Read).

Procedure

  1. Navigate to QA Tool → QA Configuration → Parameter.
  2. Click Add Parameter.
  3. Enter Parameter Name (concise and action-oriented).
  4. Enter Parameter Description(s) with the specific criteria to evaluate. (NOTE: The criteria must be something that can be determined by listening to the call. Information like “Did they update the CRM” is not something the AI can determine.)
  5. Use Add Description to include sub-points (example: “Asked for DOB”, “Showed Empathy on the call”).
  6. Set Fatal to Yes if failure should auto-fail the audit (Score With Fatal becomes 0).
  7. (Optional) Configure NA Conditions to exclude the Parameter when it does not apply.
  8. Click Save.

Scoring Behavior

  • Fatal = Yes: failing the parameter drives “Score With Fatal” to 0.
  • NA: parameter is excluded from score calculation.
  • Weighting: controlled at the Template level based on which parameters are included.

Example: Add a Parameter “Verified Customer ID” and mark it Fatal to ensure calls without ID verification are flagged.
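The scoring behavior above is computed by the product internally; as a minimal sketch of the logic (field names, weights, and the all-NA case are illustrative assumptions, not the product's implementation):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ParameterResult:
    name: str
    max_score: int          # weight assigned when the Parameter was added
    passed: Optional[bool]  # None = NA (excluded from scoring)
    fatal: bool = False

def audit_score(results):
    """Return (Score Without Fatal, Score With Fatal) as percentages."""
    applicable = [r for r in results if r.passed is not None]  # NA excluded
    possible = sum(r.max_score for r in applicable)
    earned = sum(r.max_score for r in applicable if r.passed)
    if possible == 0:
        return 100.0, 100.0  # everything NA: nothing to score (assumption)
    base = round(100 * earned / possible, 1)
    # A failed Fatal Parameter forces Score With Fatal to 0
    fatal_failed = any(r.fatal and r.passed is False for r in applicable)
    return base, 0.0 if fatal_failed else base

results = [
    ParameterResult("Greeting", 10, True),
    ParameterResult("Verified Customer ID", 20, False, fatal=True),
    ParameterResult("Mandatory Disclosure", 10, None),  # NA on this call type
]
print(audit_score(results))  # → (33.3, 0.0)
```

Note how the NA Parameter drops out of the denominator entirely, while the failed Fatal Parameter lowers Score Without Fatal normally but zeroes Score With Fatal.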


Step 2: Create Categories

Categories group related Parameters into logical sections (example: Communication, Compliance, Process Adherence). This keeps audits structured and improves reporting by showing performance at a meaningful level.

Procedure

  1. Navigate to QA Tool → QA Configuration → Category.
  2. Enter Category Name (lowercase, numbers, underscores only).
  3. From the Parameters panel (right side), search and add Parameters into the Category.
  4. Provide a score for each Parameter.
  5. Click Save.

Category Examples

  • communication_skills → Greeting, Clarity, Tone
  • compliance → Customer Verification, Mandatory Disclosures
  • process_adherence → Following SOP, Logging Case Notes



Step 3: Create Templates

Templates are predefined QA evaluation frameworks. A Template combines Categories and/or Parameters into a complete QA form that defines how calls will be evaluated for a specific team or use case.

Procedure

  1. Navigate to QA Tool → QA Configuration → Template.
  2. Enter Template Name (example: “Support L1 – L3”).
  3. Toggle Group By Categories to On to keep the form structured.
  4. From the Categories panel (right side), add Categories and/or individual Parameters into the Template.
  5. Review the Total Parameters and Total Categories counters.
  6. Click Save.

Example Use Case:

  • Sales Template → pitch quality, objection handling, closing steps, required compliance
  • Support Template → empathy, resolution accuracy, process adherence, required compliance

Templates allow you to standardize audits while tailoring the scoring criteria per team.


Step 4: Create Snapshots (Apply QA to Calls)

Snapshots define which calls will be evaluated and which Template will be applied. Think of a Snapshot as your “ready-to-run” QA setup that ties together call selection + evaluation form.

Recommended naming convention:

  • [Customer/Dept] - [Campaign/Skill]
  • Example: ACME - Billing

Snapshot Types

  • One-Time: Runs against a specified Date/Time Range and allows a maximum number of calls to be set
  • Recurring: Set up a Daily, Weekly, or Monthly recurring Snapshot; allows a maximum number of calls to be set
  • Ongoing: Runs continuously to capture calls that meet the specified criteria

Snapshot Fields

  • Name: Clear, searchable name
  • Language: Select the language used in the calls
  • Flow ID: This field is only used for custom configurations outside the normal use cases (Leave Blank)
  • Date Range: Specify Start and End Date (Only for One-Time Snapshots)
  • Limit: Maximum calls included (Only for One-Time and Recurring Snapshots)
  • Components (Filters): Campaigns, Skills, Agents, Dispositions
  • Logical Operators:
    • AND = all conditions must be true
    • OR = any condition can be true

Procedure

  1. Go to Snapshots.
  2. Select Create Snapshot.
  3. Select the Type of Snapshot you wish to create.
  4. Set Name, Language, Date Range, and Limit based on the type of Snapshot you are creating.
  5. Add one or more filters using Campaign, Skill, Agent, and/or Disposition.
  6. Apply AND/OR logic to match the business requirement.
  7. Click Save.

Example: Ongoing Snapshot with Calls from a Campaign that were Answered and Talk Time is over 60 Seconds
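In pseudocode terms, the AND logic in this example captures a call only when every condition holds (the field names and campaign value below are hypothetical; the tool evaluates this matching internally):

```python
def matches_ongoing_snapshot(call: dict) -> bool:
    # AND logic: every condition must be true for the call to be captured.
    # With OR logic, any single condition being true would be enough.
    return (
        call["campaign"] == "ACME - Billing"
        and call["disposition"] == "Answered"
        and call["talk_time_sec"] > 60
    )

call = {"campaign": "ACME - Billing", "disposition": "Answered", "talk_time_sec": 75}
print(matches_ongoing_snapshot(call))  # → True
```

A call from the same campaign with a 30-second talk time would not match, because the talk-time condition fails and AND requires all conditions to be true.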


Common Snapshot Mistake: Selecting filters that don’t match how the call system tags interactions (campaign/skill/disposition mismatches). Check the CDR Report with the same filters; if the report shows 0 calls, validate your filters first.


QA Reporting

QA Call Data Report

The QA Call Data Report provides detailed, call-level results of automated evaluations. Use this report to audit outcomes, validate configuration, and export results for leadership or customers.

Common Columns

  • Audio
  • Call Date / Start Time
  • Agent Name
  • Snapshot Name
  • Region/Skill (if enabled)
  • Score Without Fatal / Score With Fatal
  • Comments / Flags (where applicable)

Filters & Export

  • Filter by date range, agent, snapshot, campaign/skill, disposition
  • Search by monitor UCID and/or Caller ID (where available)
  • Export to CSV/XLS for offline review
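Once exported, the CSV can be analyzed offline with any scripting tool. A minimal sketch using Python's standard library (the agent names and scores below are illustrative; the column headers follow the Common Columns list above):

```python
import csv
import io
from collections import defaultdict

# Hypothetical excerpt of an exported QA Call Data Report.
export = io.StringIO(
    "Agent Name,Score With Fatal\n"
    "Alice,90\n"
    "Alice,0\n"   # a fatal-failed audit drags the average down
    "Bob,80\n"
)

# Group Score With Fatal values by agent, then average them.
scores_by_agent = defaultdict(list)
for row in csv.DictReader(export):
    scores_by_agent[row["Agent Name"]].append(float(row["Score With Fatal"]))

averages = {agent: sum(s) / len(s) for agent, s in scores_by_agent.items()}
print(averages)  # {'Alice': 45.0, 'Bob': 80.0}
```

This kind of quick aggregation is useful for spot-checking whether dashboard averages match the underlying call-level data before sharing results with leadership or customers.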

Troubleshooting

Dashboard shows no data or very low audit volume

  • Confirm calls exist for the date range and the correct Snapshot is selected in filters.
  • Validate Snapshot configuration (component filters, date range, and limits).
  • Confirm the Snapshot is assigned to the intended Template and Language.

Scores look unexpectedly low (or Score With Fatal is frequently 0)

  • Review Fatal Parameters—ensure they are truly required and correctly defined.
  • Validate Parameter descriptions are specific and align with agent scripts/process.
  • Use a small, controlled One-Time Snapshot to test changes before rolling to Ongoing.

Parameters are missing from the Template

  • Confirm the Parameter exists and is saved successfully.
  • Confirm it has been added to a Category (if you are organizing by categories).
  • Re-open the Template and add the Category/Parameter from the right-side panel.

FAQ

What order should I configure QA in?

Configure in this order: Parameters → Categories → Templates → Snapshots. This keeps your setup modular and reusable across multiple teams and customers.

When should I use Fatal vs NA?

  • Fatal: Use when the requirement is mandatory and failure must fail the audit.
  • NA: Use when the requirement does not apply to certain call types and should be excluded from scoring.