Voice Call Properties

You can enable voice interaction with your virtual assistant, i.e., users can talk to the virtual assistant. To do this, you need to enable one of the voice channels, such as IVR, Twilio, or IVR-AudioCodes, and publish the bot on those channels.

There are some Voice Properties you can configure to streamline the user experience across the above-mentioned channels. These configurations can be done at multiple levels:

  • Bot level – at the time of channel enablement;
  • Component level – once you enable the voice properties at the bot level, you can define the behavior for individual components such as:
    • Entity Node
    • Message Node
    • Confirmation Node
    • Standard Responses
    • Welcome Message

This document details the voice call properties and how they vary across channels.

Channel Settings

Each of the following settings lists the channels it applies to.

IVR Data Extraction Key
Specify the syntax to extract the filled data.
Applies to: IVR

End of Conversation Behavior (post ver7.1)
This property defines the bot behavior at the end of the conversation. The options are:

  • Trigger End of Conversation Behavior and configure the Task, Script, or Message to be initiated. See here for details.
  • Terminate the call.

Applies to: IVR, Twilio, IVR-AudioCodes

Call Termination Handler
Select the name of the Dialog task to use as the call termination handler when the call ends in error.
Applies to: IVR, Twilio, IVR-AudioCodes

VXML Properties
Click Add Property. Enter property names and values to use in the VXML definition.
Note: You should define these properties and values in the VXML files for all call flows in the IVR system.
Applies to: IVR
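For context, the property names and values entered here typically correspond to standard VoiceXML 2.0 properties. The following is a minimal sketch of how such properties appear in a VXML document; the specific names and values are assumed examples, and the ones you actually need depend on your IVR system:

    <!-- Standard VoiceXML 2.0 properties; names and values shown are assumed examples -->
    <property name="timeout" value="5s"/>            <!-- maximum wait for user input -->
    <property name="confidencelevel" value="0.5"/>   <!-- minimum ASR confidence to accept input -->
    <property name="bargein" value="true"/>          <!-- allow input while a prompt is playing -->
    <property name="inputmodes" value="dtmf voice"/> <!-- accepted input modes -->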
ASR Confidence Threshold

Threshold Key
The variable where the ASR confidence levels are stored. This field is pre-populated; do not change it unless you are aware of the internal workings of VXML.
Applies to: IVR

Define ASR threshold confidence
Enter a value in the range 0 to 1.0 that defines when the IVR system hands over control to the bot.
Applies to: IVR

Timeout Prompt
Enter the default prompt text to play when the user does not provide input within the timeout period. If you do not specify a Timeout Prompt for a node, this prompt takes its place.
Applies to: IVR, Twilio, IVR-AudioCodes
Grammar

Define the grammar to be used to detect the user's utterance:

  • The input type can be Speech or DTMF.
  • The source of the grammar can be Custom or Link.
    • For Custom, write the VXML grammar in the text box.
    • For Link, enter the URL of the grammar. Ideally, the URL should be accessible to the IVR system so that the resource can be accessed while executing calls at runtime.

See Configuring Grammar below for details on the grammar syntax.
Note: If the Enable Transcription option is enabled for the bot, along with specifying the source of the transcription engine, defining grammar is not mandatory.
Applies to: IVR
No Match Prompt
Enter the default prompt text to play when the user input is not present in the defined grammar. If you do not specify a No Match Prompt for a node, this prompt takes its place.
Applies to: IVR

Barge-In
Select whether to allow user input while a prompt is in progress. If you select No, the user cannot provide input until the IVR completes the prompt.
Applies to: IVR, Twilio, IVR-AudioCodes

Timeout
Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds.
Applies to: IVR, Twilio, IVR-AudioCodes

No. of Retries
Select the maximum number of retries to allow, from 1 retry up to 10 retries.
Applies to: IVR, Twilio, IVR-AudioCodes

Log
Select Yes if you want to send the chat log to the IVR system.
Applies to: IVR

Dialog Node Settings

On the Voice Call Properties panel for a node, you can enter node-specific prompts and grammar, as well as parameters for call-flow behavior such as timeouts and retries.

Voice Call Properties apply only for the following nodes and message types:

  • Entity Node
  • Message Node
  • Confirmation Node
  • Standard Responses
  • Welcome Message
Note: Most settings are the same for all nodes, with a few exceptions.

Voice Call Settings Field Reference
The following sections describe each IVR setting in detail, including its applicability to nodes and channels, default values, and other key information.

Notes on Prompts:

  • You can enter prompts in one of these formats: plain text, script, or the file location of an audio file. To define JavaScript or attach an audio file, click the icon before the prompt text message box and select a mode. By default, the mode is set to Text.
  • You can enter more than one prompt message, of different types. You can reorder them by dragging and dropping.
  • Multiple prompts are useful when a prompt has to be played more than once: because the prompts are played in order, repetition is avoided.
Each of the following settings lists the nodes and channels it applies to.

Initial Prompts
Prompts that are played when the IVR first executes the node. If you do not enter a prompt for a node, the default user prompt for the node plays instead. If you do not enter a prompt for Standard Responses or the Welcome Message, the default Standard Response and Welcome Message are played.
Applies to nodes: Entity, Confirmation, and Message nodes; Standard Responses and Welcome Message
Channels: IVR, Twilio, AudioCodes
Timeout Prompts
Prompts that are played on the IVR channel when the user has not given any input within the specified time. If you do not enter a prompt for a node, the default Error Prompt of the node is played. Standard Responses and the Welcome Message have a default Timeout Prompt that plays if you do not define one.
Applies to nodes: Entity and Confirmation nodes; Standard Responses and Welcome Message
Channels: IVR, Twilio, AudioCodes
No Match Prompts
Prompts that are played on the IVR channel when the user's input does not match any value in the defined grammar. If you do not enter a prompt here, or if you select the No Grammar option for an Entity or Confirmation node, the default Error Prompt of the node is played. Standard Responses and the Welcome Message have a default No Match Prompt that plays if you do not enter one.
Applies to nodes: Entity and Confirmation nodes; Standard Responses and Welcome Message
Channels: IVR
Error Prompts
Prompts that are played on the IVR channel when the user input is an invalid Entity type. If you do not enter a prompt here, the default Error Prompt of the node is played.
Applies to nodes: Entity and Confirmation nodes
Channels: IVR, Twilio, AudioCodes
Grammar

Define the grammar to be used to detect the user's utterance:

  • The input type can be Speech or DTMF.
  • The source of the grammar can be Custom or Link.
    • For Custom, write the VXML grammar in the text box.
    • For Link, enter the URL of the grammar. Ideally, the URL should be accessible to the IVR system so that the resource can be accessed while executing calls at runtime.

See Configuring Grammar below for details on the grammar syntax.
Note: If the Enable Transcription option is enabled for the bot, along with specifying the source of the transcription engine, defining grammar is not mandatory.
Applies to nodes: Confirmation nodes; Standard Responses and Welcome Message
Channels: IVR, Twilio, AudioCodes
Advanced Controls
These properties override the properties set in the Bot IVR Settings page.

Timeout
Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds. The default value is the one defined in the Bot IVR Settings page.
Applies to nodes: N/A
Channels: IVR, Twilio, AudioCodes

No. of Retries
Select the maximum number of retries to allow, from 1 retry up to 10 retries. The default value is the one defined in the Bot IVR Settings page.
Applies to nodes: N/A
Channels: IVR, Twilio, AudioCodes
Behavior on Exceeding Retries (applies only to the Entity node)

Define the behavior when either the timeout or the number of retry attempts exceeds the specified limit. Options include:

  • Invoke Call Termination Handler
  • Initiate Dialog: Select a Dialog task from the list of bot tasks.
  • Jump to a specific node in the current task: Select a node from the list of nodes in the current Dialog task.

Post v7.3, when transcription is enabled, the platform also triggers this behavior once the entity error count is exceeded.

Applies to nodes: N/A
Channels: IVR, Twilio, AudioCodes
Barge-In
Select whether to allow user input while a prompt is in progress. If you select No, user input is not considered until the prompt is completed. The default value is No.
Applies to nodes: N/A
Channels: IVR, Twilio, AudioCodes

VXML Properties
Click Add Property. Enter property names and values to use in the VXML definition. Values defined for a node or a standard response override the global VXML properties defined in the Bot IVR Settings page.
Applies to nodes: N/A
Channels: IVR

Log
Select Yes if you want to send the chat log to the IVR system. The default value is No.
Applies to nodes: N/A
Channels: IVR

Recording
Define the state of the recording to be initiated. The default value is Stop.
Applies to nodes: N/A
Channels: IVR
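To see how these node-level settings relate to standard VoiceXML concepts, here is a minimal, generic VoiceXML sketch (not the markup the platform actually generates); the field name, prompt text, and grammar URL are assumed examples:

    <!-- Generic VoiceXML 2.0 sketch; element names are standard, all values are assumed examples -->
    <field name="accountNumber">
      <property name="timeout" value="5s"/>                           <!-- corresponds to the Timeout setting -->
      <prompt bargein="true">Please say your account number.</prompt> <!-- Initial Prompt; Barge-In -->
      <grammar src="https://example.com/grammars/account.grxml" type="application/srgs+xml"/>
      <noinput>
        <prompt>Sorry, I did not hear anything.</prompt>              <!-- Timeout Prompt -->
        <reprompt/>
      </noinput>
      <nomatch>
        <prompt>Sorry, I did not understand that.</prompt>            <!-- No Match Prompt -->
        <reprompt/>
      </nomatch>
    </field>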

Configuring Grammar

You need to define at least one speech grammar for the IVR system; the system does not use any default grammar. This section walks you through the steps needed to configure a grammar for the bot to work with the IVR system.

Typically, for an IVR-enabled bot, the user's speech utterance is vetted and parsed against the grammar at the IVR system before being passed to the bot.

Kore.ai supports the following Grammar systems:

  • Nuance
  • Voximal
  • UniMRCP

Each one requires its own configuration.
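As background, custom speech grammars for VoiceXML are commonly written in the W3C SRGS XML format. The following is a minimal sketch of such a grammar; the rule name and phrases are assumed examples, not values required by the platform:

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Minimal SRGS (Speech Recognition Grammar Specification) sketch; rule and phrases are assumed examples -->
    <grammar xmlns="http://www.w3.org/2001/06/grammar" xml:lang="en-US"
             version="1.0" mode="voice" root="confirm">
      <rule id="confirm" scope="public">
        <one-of>
          <item>yes</item>
          <item>yeah</item>
          <item>no</item>
          <item>nope</item>
        </one-of>
      </rule>
    </grammar>

For DTMF input, the same structure applies with mode="dtmf" and digit items (for example, <item>1</item>).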

Nuance

To use grammar syntax rules from the Nuance Speech Recognition System, you need to obtain a license. Once you register and obtain a license from Nuance, you are given access to two files – dlm.zip and nle.zip. Ensure that the path to these files is accessible to the bot.

Configurations:

  1. Set Enable Transcription to No.
  2. In the Grammar section:
    • Select the Speech or DTMF option as per your requirement.
    • In the text box for defining the VXML grammar, enter the path to the dlm.zip file. The URL is of the format: http://nuance.kore.ai/downloads/kore_dlm.zip?nlptype=krypton&dlm_weight=0.2&lang=en-US
    • Replace the above path according to your setup.
    • The language code “lang=en-US” depends on your setup.
  3. Click Add Grammar to add another path, this time pointing to the nle.zip file, and follow the same steps.
  4. Save the settings.

Voximal/UniMRCP

To use grammar syntax rules from Voximal or UniMRCP, you need to specify the transcription source.

Configurations:

  1. Set Enable Transcription to Yes.
  2. In the Transcription engine source text box that appears:
    • for Voximal, enter “builtin:grammar/text”
    • for UniMRCP, enter “builtin:grammar/transcribe”
  3. You can leave the Grammar section blank; the above transcription source URI handles the syntax and grammar vetting of the speech.
  4. Save the settings.