
Talk to Your Bot

After you have defined your bot and configured one or more tasks, you should test your settings before you deploy your NLP-enabled bot. Bot owners and developers can chat with the bot in real-time to test recognition, performance, and flow as if it were a live session.
To test your tasks in a messaging window, click the Talk to Bot icon located in the lower-right corner of Bot Builder.

A messaging window for the bot is displayed and connected to the NLP interpreter as shown in the following illustration for the Asana Test Bot.

Note: The Talk to Bot icon is not enabled until at least one task is created.

When you first open the window, the message defined in the Bot Setup Confirmation Message field for the bot is displayed, if defined. In the Message section, enter text to begin testing your custom bot, for example, Book a flight. The NLP interpreter begins processing the task, verifying authentication with the user and the web service, and then prompting for the required task field information. When all the required task fields are collected, it executes the task.
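The following is a minimal, hypothetical sketch of the flow described above (collect the required task fields, then execute the task). It is not Kore.ai platform code; the task name and field names are made up purely for illustration:

```python
# Illustrative sketch of a task-execution flow that collects required fields
# before executing. This is NOT the Kore.ai implementation; the task and
# field names ("Book a flight", "origin", "destination", "date") are
# hypothetical examples of the prompts you might test in the Talk to Bot window.

REQUIRED_FIELDS = ["origin", "destination", "date"]

def run_task(user_utterance: str) -> None:
    """Prompt for any missing required fields, then execute the task."""
    print(f"Intent recognized from: {user_utterance!r}")
    collected = {}
    for field in REQUIRED_FIELDS:
        # The NLP interpreter would pre-fill fields found in the utterance;
        # this sketch simply prompts for each one in turn.
        collected[field] = input(f"Please provide {field}: ")
    print(f"Executing task with fields: {collected}")

if __name__ == "__main__":
    run_task("Book a flight")
```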

While testing your bot, try different variations of user prompts and ensure the NLP interpreter is processing the synonyms (or lack of synonyms) properly. If the bot returns unexpected results, consider adding or modifying synonyms for your tasks and task field names as required. For more information, see Natural Language Processing.

Troubleshooting Your Bot

You can open a debug window to view the natural language processing, bot logging, and session context and variables of the chat. To open the debug window, click the Debug icon located at the top right of the Talk to Bot chat window. The debug window consists of the following tabs: Debug Log, NL Analysis, and Session Context & Variables.

  • NL Analysis: Describes the bot task loading status and, for each utterance, presents the task name analysis and recognition scores. For more information, see the Working with Debug Log section in the Training Your Bot topic.
  • Debug Log: Lists the processing or processed dialog task components along with a date and timestamp, as described in the next section.
  • Session Context & Variables: Shows both the context object and the session variables used in dialog task processing. For more information, see Using Session and Context Variables in Tasks and Context Object.

Debug Log

For additional troubleshooting at the bot level, you can open the Debug Log window. It shows the sequential progression of a dialog task and the context and session variables captured at every node. The Debug Log supports the following statuses, illustrated in the sketch after this list:

  • processing: The Bots Platform has begun processing the node.
  • processed: The node and its connections are processed and the following node is found, but the dialog has not yet moved to that node.
  • waitingForUserInput: The user was prompted for input.
  • pause: The current dialog task is paused while another task is started.
  • resume: The paused dialog task continues at the same point in the flow after the other task is completed.
  • waitingForServerResponse: A server request is pending an asynchronous response.
  • error: An error occurred; for example, the loop limit was reached or a server or script node execution failed.
  • end: The dialog reached the end of the dialog flow.
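As a rough illustration of how these statuses might appear in sequence for a single dialog run, the sketch below models them in Python. It is purely conceptual, not Kore.ai platform code, and the trace shown is a hypothetical example:

```python
from enum import Enum

# Conceptual model of the Debug Log statuses listed above. This is a sketch,
# not Kore.ai platform code; it only shows the order in which the statuses
# might appear for a simple dialog run.
class NodeStatus(Enum):
    PROCESSING = "processing"
    PROCESSED = "processed"
    WAITING_FOR_USER_INPUT = "waitingForUserInput"
    PAUSE = "pause"
    RESUME = "resume"
    WAITING_FOR_SERVER_RESPONSE = "waitingForServerResponse"
    ERROR = "error"
    END = "end"

# Hypothetical trace for a dialog with one entity node and one service call:
trace = [
    NodeStatus.PROCESSING,                   # entity node starts
    NodeStatus.WAITING_FOR_USER_INPUT,       # user is prompted for the entity value
    NodeStatus.PROCESSED,                    # entity node done, next node located
    NodeStatus.PROCESSING,                   # service call starts
    NodeStatus.WAITING_FOR_SERVER_RESPONSE,  # asynchronous response pending
    NodeStatus.PROCESSED,                    # response received, node completed
    NodeStatus.END,                          # end of the dialog flow
]

for status in trace:
    print(status.value)
```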

NL Analysis

The NL Analysis tab shows the task name analysis and recognition scores for each user utterance. It presents the detailed tone analysis, intent detection, and entity detection performed by the Kore.ai NLP engine. As part of intent detection, the NL Analysis tab shows the outcomes of the Machine Learning, Fundamental Meaning, and Knowledge Graph engines. For more information about Natural Language Processing, see Natural Language Processing.

Session Context and Variables

You can also view the dialog flow in the Session Context & Variables tab, which displays the dynamically populated Context object and session variables updated at each processed component in the Dialog Builder. The following is an example of the Session & Context Variables panel in the Debug Log. For more information about the parameters, see Using Session and Context Variables in Tasks and Context Object.
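As a purely illustrative sketch (not the actual Kore.ai context schema), a context object with session variables might look roughly like this; all keys and values below are hypothetical placeholders:

```python
import json

# Purely illustrative sketch of the kind of data the Session Context &
# Variables tab surfaces. This is NOT the actual Kore.ai context schema;
# every key and value below is a hypothetical placeholder.
context = {
    "intent": "Book a flight",                             # hypothetical recognized intent
    "entities": {"origin": "LAX", "date": "2024-05-01"},   # hypothetical entity values
    "session": {
        "UserContext": {"firstName": "Jane"},              # hypothetical user-level variable
        "BotUserSession": {"lastTask": "Book a flight"},   # hypothetical session variable
    },
}

print(json.dumps(context, indent=2))
```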

Record Conversations

While building your bot, you will test multiple scenarios. Some successful scenarios need to be re-checked after every change or update you make to the bot. Recording such successful conversations with the bot helps with future regression testing.

An option to record your conversation with the bot is provided in the Talk to Bot window.

The record option captures both the user input and the bot response. You can start and stop the recording at any time during your conversation with the bot. The entire conversation is stored in a JSON file that is available for download once the recording ends. This file can be used with the Chatbot Test Runner, available in the Kore GitHub repository, to test the bot with the same conversation. The testing tool converses with the bot using the user input from the JSON file and compares the bot's responses with the recorded responses. The results are captured in a Test Results spreadsheet that provides a comprehensive report of the test run, including the developer input, the actual and expected results, and the status of the test run (passed or failed). You can use this spreadsheet to report, keep track of, and re-test the failed test cases. For more details on the Test Runner, refer to the Kore GitHub repository.
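The sketch below illustrates the general regression-testing idea: replay the recorded user inputs and compare the bot's responses against the recorded ones. The JSON structure, file name, and get_bot_response() helper are hypothetical placeholders, not the actual recording format or the Test Runner's implementation; refer to the Kore GitHub repository for the real tool:

```python
import json

# Minimal sketch of the regression-testing idea behind the Chatbot Test Runner.
# The file name, JSON structure, and get_bot_response() are hypothetical; see
# the Kore GitHub repository for the actual Test Runner and recording format.

def get_bot_response(user_input: str) -> str:
    """Placeholder for a call that sends user_input to the bot and returns its reply."""
    raise NotImplementedError("Replace with a real call to your bot's channel or API")

def run_regression(recording_path: str) -> None:
    with open(recording_path) as f:
        turns = json.load(f)  # hypothetical format: a list of {"user": ..., "bot": ...} turns
    for turn in turns:
        actual = get_bot_response(turn["user"])
        status = "passed" if actual == turn["bot"] else "failed"
        print(f"{turn['user']!r}: expected {turn['bot']!r}, got {actual!r} -> {status}")

# Example usage (assumes a previously downloaded recording):
# run_regression("recorded_conversation.json")
```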

Next Steps

For more detailed testing and performance enhancement of your bot, you can evaluate and refine how the interpreter interacts with users and your bot on the Natural Language Tab. For more information, see Optimizing Bots for Natural Language Processing.
