
Voice Call Properties

You can enable voice interaction with your virtual assistant so that users can talk to it. To do this, enable one of the voice channels, such as IVR, Twilio, or IVR-AudioCodes, and publish the VA on those channels.

There are some Voice Properties you can configure to streamline the user experience across the above-mentioned channels. These configurations can be done at multiple levels:

  • VA level – at the time of channel enablement;
  • Component level – once you enable the voice properties at the VA level, you can define the behavior for individual components such as:
    • Entity Node
    • Message Node
    • Confirmation Node
    • Standard Responses
    • Welcome Message

This document details the voice call properties and how they vary across channels.

Channel Settings

Field: IVR Data Extraction Key
Applicable to Channels: IVR
Description: Specify the syntax to extract the filled data. For Entity and Confirmation nodes, you can define an extraction rule that overrides this channel-level setting. This is particularly helpful with ASR engines that provide transcription results in a different format based on the input type. For example, the VXML can contain the word format of a credit card number in one key and the number format in another key.

Field: End of Conversation Behavior (post ver 7.1)
Applicable to Channels: IVR, Twilio, IVR-AudioCodes
Description: Defines the VA behavior at the end of the conversation. The options are:

  • Trigger End of Conversation Behavior and configure the Task, Script, or Message to be initiated. See here for details.
  • Terminate the call.

Field: Call Termination Handler
Applicable to Channels: IVR, Twilio, IVR-AudioCodes
Description: Select the name of the Dialog task that you want to use as the call termination handler when the call ends in an error.

Field: Call Control Parameters
Applicable to Channels: IVR, IVR-AudioCodes
Description: Click Add Parameter, then enter the property names and values used to define the call behavior.
Note: Use these properties and values in the VXML files for all call flows in the IVR system, and as Session Parameters in the AudioCodes channel.

ASR Confidence Threshold

Field: Threshold Key
Applicable to Channels: IVR
Description: The variable where the ASR confidence levels are stored. This field is pre-populated; do not change it unless you are aware of the internal workings of VXML.

Field: Define ASR threshold confidence
Applicable to Channels: IVR
Description: A value in the range 0 to 1.0 that defines when the IVR system hands over control to the VA.

Field: Timeout Prompt
Applicable to Channels: IVR, Twilio, IVR-AudioCodes
Description: Enter the default prompt text to play when the user doesn't provide input within the timeout period. If you do not specify a Timeout Prompt for a node, this prompt takes its place.

Field: Grammar
Applicable to Channels: IVR
Description: Define the grammar that should be used to detect the user's utterance.

  • The input type can be Speech or DTMF.
  • The source of the grammar can be Custom or Link.
    • For Custom, write the VXML grammar in the textbox. A sample custom speech grammar is shown after this table.
    • For Link, enter the URL of the grammar. Ideally, the URL should be accessible to the IVR system so that the resource can be accessed while executing the calls at runtime.

See Configuring Grammar below for the detailed Grammar syntax configuration.
Note: If the Enable Transcription option is enabled for the VA along with specifying the source of the transcription engine, defining grammar isn't mandatory.

Field: No Match Prompt
Applicable to Channels: IVR
Description: Enter the default prompt text to play when the user input is not present in the defined grammar. If you do not specify a No Match Prompt for a node, this prompt takes its place.

Field: Barge-In
Applicable to Channels: IVR, Twilio, IVR-AudioCodes
Description: Select whether to allow user input while a prompt is in progress. If you select No, the user cannot provide input until the IVR completes the prompt.

Field: Timeout
Applicable to Channels: IVR, Twilio, IVR-AudioCodes
Description: Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds.

Field: No. of Retries
Applicable to Channels: IVR, Twilio, IVR-AudioCodes
Description: Select the maximum number of retries to allow, from 1 retry up to 10 retries.

Field: Log
Applicable to Channels: IVR
Description: Select Yes if you want to send the chat log to the IVR system.
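
For reference, a Custom grammar entered in the textbox is written as standard VXML/SRGS markup. The snippet below is a minimal illustrative sketch of a speech-mode grammar that accepts a yes/no style utterance; the rule name and phrases are placeholders, so adapt them to your own IVR call flow.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative SRGS speech grammar; adjust the language, root rule,
         and phrases to match your IVR configuration. -->
    <grammar xmlns="http://www.w3.org/2001/06/grammar"
             version="1.0" xml:lang="en-US" mode="voice" root="confirm">
      <rule id="confirm" scope="public">
        <one-of>
          <item>yes</item>
          <item>yeah</item>
          <item>no</item>
          <item>nope</item>
        </one-of>
      </rule>
    </grammar>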

Dialog Node Settings

On the Voice Call Properties panel for a node, you can enter node-specific prompts and grammar, as well as parameters for call-flow behavior such as timeout and retries.

Voice Call Properties apply only to the following nodes and message types:

  • Entity Node
  • Message Node
  • Confirmation Node
  • Standard Responses
  • Welcome Message
Note: Most settings are the same for all nodes, with a few exceptions.

Voice Call Settings Field Reference
The following sections describe each IVR setting in detail, including its applicability to nodes and channels, default values, and other key information.

Notes on Prompts:

  • You can enter prompts in one of three formats: plain text, a script, or the file location of an audio file. To define JavaScript or attach an audio file, click the icon before the prompt text box and select the corresponding mode. By default, the mode is set to Text.
  • You can enter more than one prompt message, of different types, and define their sequence by dragging and dropping them.
  • Multiple prompts are useful when a prompt has to be played more than once: since the prompts are played in order, the user does not hear the same message repeated. See the example after these notes.
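
For example, an Entity node's Timeout Prompts could be configured with two plain-text entries so that a retry sounds different from the first attempt (the wording below is purely illustrative):

    Prompt 1 (Text): Sorry, I didn't catch that. Please say your account number.
    Prompt 2 (Text): I still didn't get that. You can also enter your account number using the keypad.
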
Field: Initial Prompts
Applicable to Nodes: Entity, Confirmation, and Message nodes; Standard Responses and Welcome Message
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Prompts that are played when the IVR first executes the node. If you do not enter a prompt for a node, the default user prompt for the node plays instead. If you do not enter a prompt for Standard Responses and the Welcome Message, the default Standard Response and Welcome Message are played.

Field: Timeout Prompts
Applicable to Nodes: Entity, Confirmation; Standard Responses and Welcome Message
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Prompts that are played on the IVR channel when the user has not given any input within the specified time. If you do not enter a prompt for a node, the default Error Prompt of the node is played. Standard Responses and the Welcome Message have a default Timeout Prompt that plays if you do not define one.

Field: No Match Prompts
Applicable to Nodes: Entity, Confirmation; Standard Responses and Welcome Message
Applicable to Channels: IVR
Description: Prompts that are played on the IVR channel when the user's input does not match any value in the defined grammar. If you do not enter a prompt here, or select the No Grammar option for an Entity or Confirmation node, the default Error Prompt of the node is played. Standard Responses and the Welcome Message have a default No Match Prompt that plays if you do not enter one.

Field: Error Prompts
Applicable to Nodes: Entity, Confirmation
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Prompts that are played on the IVR channel when the user input is an invalid Entity type. If you do not enter a prompt here, the default Error Prompt of the node is played.

Field: Grammar
Applicable to Nodes: Confirmation; Standard Responses and Welcome Message
Applicable to Channels: IVR, Twilio
Description: Define the grammar that should be used to detect the user's utterance.

  • The input type can be Speech or DTMF.
  • The source of the grammar can be Custom or Link.
    • For Custom, write the VXML grammar in the textbox. A sample DTMF grammar is shown after this table.
    • For Link, enter the URL of the grammar. Ideally, the URL should be accessible to the IVR system so that the resource can be accessed while executing the calls at runtime.

See Configuring Grammar below for the detailed Grammar syntax configuration.
Note: If the Enable Transcription option is enabled for the VA along with specifying the source of the transcription engine, defining grammar isn't mandatory.

Advanced Controls
These properties override the properties set in the VA IVR Settings page.

Field: Timeout
Applicable to Nodes: N/A
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Select the maximum wait time to receive user input from the drop-down list, from 1 second up to 60 seconds. The default value is the same as defined in the VA IVR Settings page.

Field: No. of Retries
Applicable to Nodes: N/A
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Select the maximum number of retries to allow, from 1 retry up to 10 retries. The default value is the same as defined in the VA IVR Settings page.

Field: Behavior on Exceeding Retries (applies only to the Entity node)
Applicable to Nodes: N/A
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Define the behavior when either the timeout or the number of retry attempts exceeds the specified limit. Options include:

  • Invoke Call Termination Handler
  • Initiate Dialog: select a Dialog task from the list of VA tasks.
  • Jump to a specific node in the current task: select a node from the list of nodes in the current Dialog task.

Post v7.3, this feature has been enhanced so that, when transcription is enabled, the platform triggers the Behavior on Exceeding Retries once the entity error count is exceeded.

Field: Barge-In
Applicable to Nodes: N/A
Applicable to Channels: IVR, Twilio, AudioCodes
Description: Select whether to allow user input while a prompt is in progress. If you select No, user input is not considered until the prompt completes. The default value is No.

Field: Call Control Parameters
Applicable to Nodes: N/A
Applicable to Channels: IVR, AudioCodes
Description: Click Add Property, then enter the property names and values used in the VXML definition in the IVR system and as Session Parameters in the AudioCodes channel. Values defined for a node or a standard response override the global Call Control Parameters defined in the VA IVR/AudioCodes settings page.

Field: Log
Applicable to Nodes: N/A
Applicable to Channels: IVR
Description: Select Yes if you want to send the chat log to the IVR system. The default value is No.

Field: Recording
Applicable to Nodes: N/A
Applicable to Channels: IVR
Description: Define the state of recording to be initiated. The default value is Stop.
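
As referenced in the Grammar row above, a Custom DTMF grammar uses the same VXML/SRGS markup with the mode set to dtmf. The snippet below is a minimal illustrative sketch that accepts the 1 and 2 keys for a Confirmation-style choice; the rule name and digit mapping are placeholders for your own call flow.

    <?xml version="1.0" encoding="UTF-8"?>
    <!-- Illustrative SRGS DTMF grammar; map the accepted digits to whatever
         choices the node expects. -->
    <grammar xmlns="http://www.w3.org/2001/06/grammar"
             version="1.0" mode="dtmf" root="confirm">
      <rule id="confirm" scope="public">
        <one-of>
          <item>1</item> <!-- for example, Yes -->
          <item>2</item> <!-- for example, No -->
        </one-of>
      </rule>
    </grammar>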

Configuring Grammar

You need to define at least one Speech Grammar for the IVR system; there is no default Grammar that the system falls back on. This section walks you through the steps needed to configure a Grammar system so that the VA can function on the IVR system.

Typically, for an IVR-enabled VA, the user's speech utterance is vetted and parsed against the Grammar syntax at the IVR system before being passed on to the VA.

Kore.ai supports the following Grammar systems:

  • Nuance
  • Voximal
  • UniMRCP

Each one requires its own configuration.

Nuance

To use grammar syntax rules from the Nuance Speech Recognition System, you need to obtain a license from Nuance. Once you register and obtain the license, you are given access to two files – dlm.zip and nle.zip. Ensure that the paths to these files are accessible to the VA.

Configurations:

  1. Set Enable Transcription to No.
  2. In the Grammar section:
    • Select the Speech or DTMF option as per your requirement.
    • In the text box used to define the VXML, enter the VXML path to the dlm.zip file. The URL is of the format: http://nuance.kore.ai/downloads/kore_dlm.zip?nlptype=krypton&dlm_weight=0.2&lang=en-US
    • Replace the above path according to your setup.
    • The language code “lang=en-US” also depends on your setup.
  3. Use Add Grammar to add another path, this one pointing to the nle.zip file, and follow the above-mentioned steps.
  4. Save the settings.
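
Put together, the two grammar entries might look like the hypothetical example below; the first URL is the format quoted above, while the second path, its parameters, and the language code are placeholders that you should replace with the values from your own Nuance setup.

    Grammar 1 (Link): http://nuance.kore.ai/downloads/kore_dlm.zip?nlptype=krypton&dlm_weight=0.2&lang=en-US
    Grammar 2 (Link): http://<your-host>/downloads/<your-nle-package>.zip?nlptype=krypton&lang=en-US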

Voximal/UniMRCP

To use grammar syntax rules from Voximal or UniMRCP, you need to specify the transcription source.

Configurations:

  1. Set Enable Transcription to Yes.
  2. In the Transcription engine source text box that appears:
    • for Voximal, enter “builtin:grammar/text”
    • for UniMRCP, enter “builtin:grammar/transcribe”
  3. You can leave the Grammar section blank; the above transcription source handles the syntax and grammar vetting of the speech.
  4. Save the settings.
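
As a quick reference, the only value that differs between the two engines is the transcription source:

    Voximal:  Enable Transcription = Yes, Transcription engine source = builtin:grammar/text
    UniMRCP:  Enable Transcription = Yes, Transcription engine source = builtin:grammar/transcribe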