Test your Bot

Glossary

The following list provides definitions of commonly used terms in conversation testing. Dynamic Text Marking: The dynamic text annotation feature allows you to annotate a section of the text; during test execution, the platform ignores the annotated portion when performing text assertions. To know more, see…
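
As a purely illustrative sketch of the idea (not the platform's implementation), the comparison below treats a marked span of the expected text as dynamic and ignores it during the assertion; the `{{...}}` marker and the function name are assumptions made for this example only.

```python
import re

# Illustrative only: the {{...}} marker is an assumed placeholder syntax,
# not the platform's actual dynamic text annotation.
def assert_ignoring_dynamic(expected: str, actual: str) -> bool:
    """Return True if `actual` matches `expected`, treating every
    {{...}} span in `expected` as dynamic text that may hold any value."""
    pattern = re.escape(expected)
    # Each escaped {{...}} marker becomes a wildcard in the comparison.
    pattern = re.sub(r"\\\{\\\{.*?\\\}\\\}", ".+?", pattern)
    return re.fullmatch(pattern, actual) is not None

# The order number changes on every run, so it is marked as dynamic
# and excluded from the text assertion.
assert assert_ignoring_dynamic(
    "Your order {{order_id}} has been shipped.",
    "Your order 48217 has been shipped.",
)
```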

Test Case Execution Summary

The Test Case Execution Summary allows you to view the test case results, identify failed test cases, and resolve issues in the virtual assistant's flow. It gives complete details of the overall test results and the defects found. The following sections explain the options available on the Conversation Testing…

Test Case Assertion

A test case assertion is a statement specifying a condition that must be satisfied for a test case to be considered successful. In the context of conversational systems, test case assertions can be used to validate various aspects of the conversation, such as the correctness of the response to a…
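
For illustration only, the sketch below shows the kinds of checks such an assertion might express for a single exchange, validating the matched intent and the response text; the field and function names are hypothetical and do not reflect the platform's actual test case schema.

```python
from dataclasses import dataclass

# Hypothetical names for illustration; not the platform's test case schema.
@dataclass
class BotTurn:
    matched_intent: str
    response_text: str

def run_assertions(turn: BotTurn, expected_intent: str, expected_response: str) -> list:
    """Collect failed assertions for a single user/bot exchange."""
    failures = []
    if turn.matched_intent != expected_intent:
        failures.append(
            f"Intent assertion failed: expected {expected_intent!r}, got {turn.matched_intent!r}"
        )
    if turn.response_text != expected_response:
        failures.append(
            f"Text assertion failed: expected {expected_response!r}, got {turn.response_text!r}"
        )
    return failures

# A test case passes only when none of its assertions fail.
turn = BotTurn(matched_intent="Book Flight", response_text="Where would you like to fly?")
print(run_assertions(turn, "Book Flight", "Where would you like to fly?"))  # []
```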

Test Editor

You can use the Test Editor to view the test cases and their metadata. This section explains the steps to access the Test Editor and use the available options: On the Conversation Testing page, click any Test Suite to go to the Test Editor. In the Test Editor, the…

Create a Test Suite

A Test Suite is a collection of test cases grouped to simulate a specific conversation between the user and the bot; it can be run at any time for test execution. You can view the execution status of a test suite and analyze its results. In Conversation Testing, you can create the…

Conversation Testing Overview

Conversation Testing enables you to simulate end-to-end conversational flows to evaluate dialog task execution or perform regression testing. You can create Test Suites to capture various business scenarios and run them at a later time to validate the assistant’s performance. The Conversation Testing framework tracks the transition coverage and determines…

Test and Debug Overview

Once you have built and trained your assistant, it is recommended that you conduct testing to make sure everything works as expected. Even though it takes additional effort and resources, testing ensures that you find and fix problems before they reach your users. The Kore.ai XO Platform provides an…

Talk to Bot

After you have defined your assistant and configured one or more tasks, you should test your settings before you publish your NLP-enabled assistant. Bot owners and developers can chat with the assistant in real time to test recognition, performance, and flow as if it were a live session. Testing a Virtual…

Batch Testing

Once you have built and trained your bot, the most important question that arises is: how good is your bot’s learning model? Evaluating your bot’s performance is therefore important to determine how well it understands user utterances. The Batch Testing feature helps you discern the ability of your…
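
As a generic illustration of the idea (not the platform's Batch Testing feature itself), the sketch below scores a batch of labeled test utterances against an intent-matching function; match_intent and the sample data are hypothetical stand-ins.

```python
# Hypothetical helper and data for illustration; the platform's Batch Testing
# feature is designed to run this kind of evaluation at scale.
def match_intent(utterance: str) -> str:
    """Stand-in for the assistant's intent recognition."""
    return "Book Flight" if "fly" in utterance.lower() else "None"

# Each test record pairs an expected intent with a sample user utterance.
batch = [
    ("Book Flight", "I want to fly to Paris next week"),
    ("Book Flight", "I'd like to fly to Chennai tomorrow"),
    ("Check Balance", "How much money is left in my account?"),
]

# Count how many utterances the model routed to the expected intent.
correct = sum(1 for expected, utterance in batch if match_intent(utterance) == expected)
print(f"Matched {correct}/{len(batch)} utterances correctly "
      f"({correct / len(batch):.0%} accuracy)")
```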

Utterance Testing

To make sure your assistant responds to user utterances with related tasks, it is important that you test it with a variety of user inputs. Evaluating a VA with a large sample of expected user inputs not only provides insights into its responses but also gives you a great opportunity…