Test web chat agents with Assessor

This guide explains how to use the Assessor's web chat testing capability to automatically evaluate text-based chat agents. The AssessorChatFlow conducts end-to-end conversations with your AI Employee over web chat, verifying responses, conversation flows, and business logic.

How web chat testing works

The AssessorChatFlow simulates a customer interacting with your AI Employee through web chat. The flow:

  1. Initiates a chat session with your AI Employee using the assessor_chat connector.

  2. Role-plays as a customer based on the generated test scenario (e.g., a customer named John asking about appointment availability).

  3. Sends messages and evaluates responses turn by turn, reasoning about each response before replying.

  4. Scores the conversation based on whether the agent completed the required steps and reached the expected CTA.

Prerequisites

Before running web chat assessments, confirm:

  • Your AI Employee is configured with an Intent Type Map (ITM) that defines the intents to test.

  • The agent has a web chat channel enabled.

  • The Assessor module is active for your account.

Run a web chat assessment

  1. Navigate to your AI Employee's assessment configuration.

  2. Verify that the agent's Intent Type Map includes the intents you want to test.

  3. Trigger an assessment run with the chat channel selected.

  4. The AssessorChatFlow initiates a web chat session for each test scenario.

  5. Monitor the assessment progress as conversations are executed.

  6. Review the results in the assessment report.

Assessment flow details

The web chat assessment follows this sequence for each test scenario:

  1. StartChat: Opens a new chat session with the AI Employee.

  2. ReceiveConversation: Listens for the agent's initial greeting or response.

  3. Reason turn: The Assessor's LLM analyzes the agent's message and determines the appropriate customer response based on the test scenario.

  4. Reply: Sends the customer response back to the AI Employee.

  5. Repeat: Steps 2–4 continue until the conversation reaches a natural conclusion or the session times out.

  6. Evaluate: The Assessor scores the conversation based on step completion and CTA achievement.
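The per-scenario sequence above can be sketched as a simple loop. This is an illustrative model only: the class and function names (`ChatSession`, `reason_turn`, `run_scenario`) are hypothetical stand-ins, not the actual Assessor API, and the scoring check is stubbed down to a single keyword test.

```python
# Hedged sketch of the StartChat -> Receive -> Reason -> Reply -> Evaluate
# loop. All names here are illustrative, not the real AssessorChatFlow API.

class ChatSession:
    """Minimal stand-in for a web chat session with an AI Employee."""

    def __init__(self, scripted_agent_replies):
        self.agent_replies = iter(scripted_agent_replies)
        self.transcript = []  # (speaker, message) pairs

    def receive(self):
        # ReceiveConversation: wait for the agent's next message.
        msg = next(self.agent_replies, None)
        if msg is not None:
            self.transcript.append(("agent", msg))
        return msg

    def reply(self, message):
        # Reply: send the simulated customer's response.
        self.transcript.append(("customer", message))


def reason_turn(agent_message, scenario):
    """Reason turn: decide the customer's next reply; None ends the chat."""
    if "available" in agent_message.lower():
        return None  # goal reached, conclude naturally
    return scenario["opening_request"]


def run_scenario(scenario, scripted_agent_replies, max_turns=10):
    session = ChatSession(scripted_agent_replies)  # StartChat
    for _ in range(max_turns):  # Repeat until conclusion or turn limit
        agent_message = session.receive()
        if agent_message is None:
            break
        customer_reply = reason_turn(agent_message, scenario)
        if customer_reply is None:
            break
        session.reply(customer_reply)
    # Evaluate: stubbed as "did the agent ever state availability?"
    reached_cta = any(
        "available" in msg.lower()
        for speaker, msg in session.transcript
        if speaker == "agent"
    )
    return {"transcript": session.transcript, "score": 100 if reached_cta else 0}


result = run_scenario(
    {"opening_request": "Hi, do you have any appointments this week?"},
    ["Hello! How can I help you today?",
     "We have slots available on Thursday at 2pm."],
)
print(result["score"])  # 100, since the scripted agent offered availability
```

In the real system, the reasoning step is performed by the Assessor's LLM and the evaluation covers every required step and the expected CTA, not a single keyword.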

Differences from voice testing

| Aspect | Voice testing | Web chat testing |
| --- | --- | --- |
| Channel | Phone call via pooled numbers | Text-based chat session |
| Resource management | Requires phone number allocation from Assessor Server | No phone numbers needed; uses chat connectors directly |
| Conversation format | Spoken dialogue (speech-to-text) | Written messages |
| Concurrency | Limited by available phone numbers in the pool | Can run many chat sessions in parallel |
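Because chat tests hold no pooled resource, scenarios can be launched concurrently rather than queued behind phone-number availability. A minimal sketch of that difference, assuming a hypothetical `run_chat_scenario` coroutine that performs one end-to-end chat assessment:

```python
# Hedged sketch: with no phone-number pool to drain, every chat scenario
# can start at once. run_chat_scenario is a hypothetical stand-in.
import asyncio


async def run_chat_scenario(scenario_id):
    # Stands in for one full web chat conversation with the AI Employee.
    await asyncio.sleep(0.01)
    return {"scenario": scenario_id, "score": 100}


async def run_all(scenario_ids):
    # No pooled resource to acquire: launch every session in parallel.
    return await asyncio.gather(*(run_chat_scenario(s) for s in scenario_ids))


results = asyncio.run(run_all(["book_appointment", "cancel_appointment", "faq"]))
print([r["scenario"] for r in results])
```

A voice run under the same model would instead acquire a number from a bounded pool (e.g. an `asyncio.Semaphore`) before each call, capping concurrency at the pool size.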

πŸ—’οΈ NOTE

Web chat assessments typically run faster than voice assessments because they do not require phone number allocation or speech-to-text processing.

Review results

Assessment results for web chat tests include:

  • Intent scores: 0–100 score for each tested intent.

  • Conversation transcripts: Full text of the chat conversation between the Assessor (as customer) and your AI Employee.

  • Step completion: Which required steps the agent completed or missed.

  • CTA achievement: Whether the agent reached the expected Call-To-Action.

  • QA analysis: For failed intents, a detailed explanation of what went wrong.

Results are saved in the agent's Knowledge Base with the label report alongside voice assessment results.
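Taken together, the fields above suggest a per-intent result record roughly like the following. The exact schema is not documented here, so every key name below is an assumption for illustration only.

```python
# Illustrative shape of one web chat assessment result, built from the
# fields listed above. All key names are assumptions, not a real schema.
import json

result = {
    "intent": "book_appointment",
    "score": 85,                  # 0-100 score for the tested intent
    "cta_achieved": True,         # did the agent reach the expected CTA?
    "steps": {                    # required steps: completed or missed
        "greeting": True,
        "collect_name": True,
        "offer_slot": False,
    },
    "transcript": [               # full chat text, Assessor as customer
        {"speaker": "agent", "text": "Hello! How can I help you today?"},
        {"speaker": "customer", "text": "I'd like to book an appointment."},
    ],
    "qa_analysis": None,          # populated only for failed intents
}

# Pull out the steps the agent missed, as a report might summarize them.
missed = [step for step, done in result["steps"].items() if not done]
print(json.dumps(missed))  # ["offer_slot"]
```

Consult the assessment report in the Knowledge Base (label `report`) for the authoritative structure.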
