Test your Bot

Test and Debug Overview

Once you have built and trained your assistant, it is recommended that you conduct testing to make sure everything works as expected. Although testing takes additional effort and resources, it ensures that you find and fix problems before they reach your users. The Kore.ai XO Platform provides an…

Conversation Testing

Conversation Testing enables you to simulate end-to-end conversational flows to evaluate a dialog task or perform regression testing. You can create Test Cases that capture various business scenarios and run them later to validate the assistant's performance. Test Cases Overview Test Cases consist of…
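A conversation Test Case of the kind described above can be thought of as an ordered list of user turns paired with expected assistant replies. The sketch below illustrates that idea only; `bot_reply` is a hypothetical stub standing in for the assistant's dialog engine, not a Kore.ai API.

```python
def bot_reply(utterance: str) -> str:
    # Hypothetical stub standing in for the assistant's dialog engine.
    canned = {
        "I want to book a flight": "Sure, where would you like to fly to?",
        "To Paris": "Got it. What date works for you?",
    }
    return canned.get(utterance, "Sorry, I didn't understand that.")

def run_test_case(steps):
    """Replay each user turn and record pass/fail per step."""
    results = []
    for user_input, expected in steps:
        actual = bot_reply(user_input)
        results.append((user_input, actual == expected))
    return results

# One business scenario captured as a sequence of (input, expected reply) turns.
test_case = [
    ("I want to book a flight", "Sure, where would you like to fly to?"),
    ("To Paris", "Got it. What date works for you?"),
]

print(run_test_case(test_case))
```

In the platform itself the dialog engine executes the task for you; a runner like this only shows how recorded turns map to pass/fail results.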

Talk to Bot

After you have defined your assistant and configured one or more tasks, you should test your settings before publishing your NLP-enabled assistant. Bot owners and developers can chat with the assistant in real time to test recognition, performance, and flow as if it were a live session. Testing a Virtual…

Batch Testing

Once you have built and trained your bot, the most important question is: how good is your bot's learning model? Evaluating your bot's performance is important to determine how well it understands user utterances. The Batch Testing feature helps you discern the ability of your…

Utterance Testing

To make sure your assistant responds to user utterances with the related tasks, it is important to test it with a variety of user inputs. Evaluating a VA against a large sample of expected user inputs not only provides insights into its responses but also gives you a great opportunity…
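Testing with a variety of inputs usually means collecting several phrasings of the same request and checking that each resolves to the intended task. A minimal sketch of that loop, using a hypothetical keyword-based classifier in place of the assistant's NLP engine:

```python
def classify(utterance: str) -> str:
    # Hypothetical keyword classifier standing in for the assistant's NLP engine.
    text = utterance.lower()
    if "balance" in text:
        return "CheckBalance"
    if "transfer" in text:
        return "TransferFunds"
    return "None"

# Several phrasings a real user might try for the same task.
variants = {
    "CheckBalance": [
        "What's my balance?",
        "Show me my account balance",
        "How much is left in my balance",
    ],
    "TransferFunds": [
        "Transfer $50 to savings",
        "I'd like to transfer money",
    ],
}

failures = [
    (utterance, expected)
    for expected, utterances in variants.items()
    for utterance in utterances
    if classify(utterance) != expected
]
print(failures)  # an empty list when every variant resolves to its expected intent
```

Any entry left in `failures` points at a phrasing the model does not yet handle, which is exactly the kind of gap utterance testing is meant to surface.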