Fundamental Meaning

Fundamental Meaning is a computational linguistics approach that is built on ChatScript. The model analyzes the structure of a user’s utterance to identify each word by meaning, position, conjugation, capitalization, plurality, and other factors.

FM Engine Overview

The Fundamental Meaning model is a deterministic model that uses semantic rules and language context to determine the intent match. This engine can be trained using synonyms, built-in and custom concepts, as well as patterns.

The FM model scores user utterances using various semantic rules which include:

  • Grammar
  • Parts of speech
  • Word match, word coverage across the sentence, and word position
  • Sentence structure

The FM Process

The FM model uses the following processes during training (a simplified conceptual sketch follows the list):

  1. Tokenization (word segmentation) is the process of breaking up the given text into units called tokens. Hyphenated words are retained (they might be subjected to spell correction later); digits with a hyphen are split, e.g., "2-3" becomes "2 – 3". Tokenization is not done for known concepts like dates, currency, etc.
  2. Substitution is the process of expanding interjections, abbreviations, texting shorthand, and contractions using system concepts, such as ~emohello for all greeting-related expressions, ~yes for confirmation, ~no for rejection, and much more.
  3. Merging is the process of combining a sequence of words that obviously form a single word, number, or date. For example, "credit card", "twenty five", or "twenty-five" is merged into a single token.
  4. Spell Check is the process of replacing unknown words with known words (if any) and involves case conversion. The Platform refers to WordNet and VA Defined Terms for spell check. For example, "I wantt to pai for my booking" becomes "I want to pay for my booking".
  5. Lemmatization – The XO Platform uses the WordNet database to look up the lemmas of the words in a given text.
  6. Gleaning is the process of identifying sections of utterances and marking them as special. This includes:
    1. Marking polite phrases and treating them as noise, like "can you please…".
    2. Identifying language constructs that indicate multiple intents, like "and then", "and after that", or "but first", which cause the sentence to be split for multiple intent detection.
    3. Identifying and normalizing numbers and other related entities: e.g., "seven one three triple five double zero eighty four" becomes 7135550084, which is probably a phone number.
    4. Identifying system entities like percentages – "sixty six percent"; units of measurement – "five sq km", "12 stone 7 pounds"; currencies – "twenty bucks", "six lakh rupees"; dates and times – "last day of next month", "10 o'clock tonight".
  7. Parts of Speech (POS) tagging is the process of marking up a word in a corpus as corresponding to a particular part-of-speech tag, based on its context and definition. Part-of-speech tags are useful for building parse trees, which are used in extracting relations between words. POS tagging is also essential for building lemmatizers, which are used to reduce a word to its root form. Each word is assigned a part-of-speech tag, and possibly a role (subject/verb/object) from the VA definition data.
  8. Marking is the process of assigning concepts to each word. POS tagging and parsing are abstract, dealing with nouns and verbs, while marking applies meanings to the words. For example, in "book a flight", book can be a noun or a verb; in this context, it is marked as a verb.
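
The exact implementation of these stages is internal to the Platform. Purely as a conceptual illustration, the Python sketch below mimics a few of them (tokenization, substitution, merging, and spell correction) using invented word lists; it is not Kore.ai code.

  # Illustrative sketch only: a toy approximation of a few FM preprocessing stages.
  # The substitution map, merge list, dictionary, and spell fixes are invented examples,
  # not the Platform's actual data.
  SUBSTITUTIONS = {"hiya": "~emohello", "yep": "~yes", "nope": "~no"}    # shorthand -> concepts
  MERGES = {("credit", "card"): "creditcard", ("twenty", "five"): "25"}  # obvious multi-word units
  KNOWN_WORDS = {"i", "want", "to", "pay", "for", "my", "booking"}       # stand-in for WordNet + VA terms
  SPELL_FIXES = {"wantt": "want", "pai": "pay"}                          # toy spell corrections

  def preprocess(utterance):
      # 1. Tokenization: split on whitespace (the real engine also handles hyphens, dates, etc.)
      tokens = utterance.lower().split()
      # 2. Substitution: expand shorthand and interjections into concept markers
      tokens = [SUBSTITUTIONS.get(t, t) for t in tokens]
      # 3. Merging: combine obvious multi-word units into a single token
      merged, i = [], 0
      while i < len(tokens):
          pair = tuple(tokens[i:i + 2])
          if pair in MERGES:
              merged.append(MERGES[pair])
              i += 2
          else:
              merged.append(tokens[i])
              i += 1
      # 4. Spell check: replace unknown words with known ones where a fix is available
      return [SPELL_FIXES.get(t, t) if t not in KNOWN_WORDS else t for t in merged]

  print(preprocess("I wantt to pai for my booking"))
  # ['i', 'want', 'to', 'pay', 'for', 'my', 'booking']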

Key Components

Synonyms, Concepts & Patterns are the cornerstones of the FM engine. These are used in intent detection and also by Ranking & Resolver when trying to choose among the multiple possible intents.

  • Synonyms should be used when the words that identify an intent/entity can be used interchangeably, such as make and create. The Platform comes with a built-in library, which can be augmented by adding domain words that are used to build a dictionary for your assistant.
  • Concepts are a predefined set of choices that are defined once and used in multiple places. The Platform has a large set of built-in concepts that developers can use to define a pattern, like ~world_country or ~asian_country. You can create your own custom concepts applicable to your use case, and you can also create hierarchical concepts.
  • Patterns are used mainly for intent detection in the FM engine. They can be used to define metaphors or other idiomatic expressions for task names. Concepts can be used in defining patterns.

FM Output

The FM Engine collects information on a word in a given user input depending on:

  • Position of the matching word in the sentence
  • Whether the matching word is a noun or a verb
  • Role of the matching word – main subject, main verb, main object
  • Exact word match or synonym
  • Tense of the matching word – present, future, or past

A series of individual scores are calculated from the set of matched words.

  • The goal is to prefer tasks that match the most likely words in the earliest sentence in the input.
  • Preference is given to words when they are close together, towards the start of the sentence, and in the same order as the task label.
  • It is undesirable to have several phrases before the task name or a conjunction in the middle of the task label.

Preference is given to tasks in phrases in the present or future tense.
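
The actual scoring formula is not published. The toy Python sketch below only illustrates the stated preferences – matches that occur earlier, closer together, and in the same order as the task label score higher – with weights invented purely for the example.

  # Toy illustration only: matches that occur earlier in the sentence, close together,
  # and in the same order as the task label score higher. The weights are invented.
  def toy_score(task_label, utterance):
      label_words = task_label.lower().split()
      tokens = utterance.lower().split()
      positions = [tokens.index(w) for w in label_words if w in tokens]
      if not positions:
          return 0.0
      coverage = len(positions) / len(label_words)               # how much of the label matched
      earliness = 1.0 / (1 + min(positions))                     # earlier matches preferred
      tightness = 1.0 / (1 + max(positions) - min(positions))    # matches close together preferred
      in_order = 1.0 if positions == sorted(positions) else 0.5  # same order as the task label
      return coverage * (earliness + tightness) * in_order

  print(toy_score("Make a Booking", "make a booking for tomorrow"))                        # higher score
  print(toy_score("Make a Booking", "tomorrow I may possibly make some kind of booking"))  # lower score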

Training the FM Engine

Training the FM Engine involves the following:

  • Synonyms and Concepts management
  • Patterns and Rules management
  • Negative Patterns
  • Thresholds configuration

Let us discuss these topics in detail below.

Manage Synonyms

 

How Synonyms Work

Users utter the same request in varied ways.

For example, an intent like Make a Booking could be expressed with alternatives such as Create a Booking or Make a Reservation.

As a developer, you must limit the name of a task to only two or three words, yet accommodate the alternative ways in which it could be asked.

Synonyms are defined for both the task name and task field names. To optimize the NLP interpreter's accuracy in recognizing the correct task and task field provided by the user, you can add synonyms for words used in the names of your tasks and task fields. For example:

  • Make – add, create, generate, etc.
  • Booking – reservation, ticket, trip, etc.

Also consider misspellings, such as:

  • Create – crate, creeate, etc.
  • Booking – boking, bookin, etc.

When you add synonyms for a word, those synonyms are also used for other tasks or task fields. In other words, synonyms defined for a word in one task name also apply to the same word in another task name. For example, synonyms defined for the word make in the Make a Booking task are also used for the Make a Payment task.
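
A rough mental model (not the Platform's implementation) is a single bot-wide dictionary keyed by word, so a synonym list written once applies wherever that word appears:

  # Conceptual sketch: Bot Synonyms behave like one bot-wide dictionary keyed by word.
  # Defining synonyms for "make" once covers both "Make a Booking" and "Make a Payment".
  BOT_SYNONYMS = {
      "make": {"add", "create", "generate"},
      "booking": {"reservation", "ticket", "trip"},
  }

  def expand(task_name):
      # For each word in the task name, the set of words that can stand in for it.
      return [{w} | BOT_SYNONYMS.get(w, set()) for w in task_name.lower().split()]

  print(expand("Make a Booking"))  # the "make" synonyms apply here...
  print(expand("Make a Payment"))  # ...and equally here, with no extra setup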

For more information about best practices and tips for naming tasks and task fields, see the Natural Language Processing Guide.

 

Add Synonyms

The Platform provides a set of default synonyms, pre-listed within the dedicated section (Training > Synonyms). These are available only in English, German, Spanish, French, and Traditional and Simplified Chinese.

Adding Synonyms involves using the Natural Language Training section of the XO Platform, where you can add synonyms for VA, task and entity node names. 

The following sections show you how to add these synonyms.

Navigate to Synonyms

  1. Open the VA for which you want to add synonyms and select the Build tab from the top menu.
  2. From the left navigation menu, click Natural Language -> Training.
  3. On the Training page, click one of the following tabs:
    1. Bot Synonyms – Add and edit synonyms for the words in the bot or task names.
    2. Entities – Add and edit synonyms for any words in your dialog task Entity node names.

 

Add Bot Synonyms

On the Training page, under the Bot Synonyms tab, you can view and add synonyms for words in task names. Synonyms defined on the Bot Synonyms tab apply bot-wide to all task names, field names, and so forth.

To add a bot synonym, follow the below steps:

  1. On the Training page, select the Bot Synonyms tab.

  2. Click New Synonym or use the edit icon against an existing synonym, to open the synonym page.
  3. In the Name field, enter the name of the word (keyword) for which you want to create synonyms.
  4. In the Synonyms field, enter one or more synonyms for the keyword. Press Enter/Return after entering each synonym.
  5. Added Synonyms are automatically saved once you press Enter/Return.

 

Add Entity Synonyms

On the Training page, on the Entities tab, each dialog task entity is listed with any defined synonyms for that entity name.

To add synonyms for a dialog task entity, follow the below steps:

  1. On the Training page, select the Entities tab.
  2. Click the Edit icon against the entity for which you want to add synonyms.
  3. Enter one or more synonyms for the entity and then press Enter after entering each synonym. They will be saved automatically.

 

Use Patterns Instead of Synonyms

Adding synonyms can help the NLP interpreter select the intended task. But what if the user input does not contain any of the words we used as synonyms?

For example, for a Make a Booking task in a Travel Assistant, the user might input Get me a ticket, which means the same as Make a Booking but does not match any of the words or synonyms for the task name. In such cases, you can create patterns for a group of words that have the same meaning.

Manage Concepts

Concepts are clusters of related and synonymous terms that can be considered as a group identified by a single term.

For example, yes can be expressed as ok, I agree, all right, etc.

In addition, Concepts can be very useful when you want to categorize terms. For example, you might have fruit as a concept, under which you can classify terms such as  apple, orange, banana, etc.

 

Concept Types

There are two types of Concepts within the XO Platform:

  • System Concepts: The Kore.ai Platform provides a list of default concepts that cover the most common scenarios, like yes, no, help, etc.
  • Custom Concepts: You might want to define concepts catering to your business requirements. For example, you might want to use the term fruit to imply apple, orange, or banana. If you want to add to the list of default words, you are advised to create a new concept and add words there before mapping the pattern, instead of adding more words to default concepts like ~emogoodbye or ~emohello.

 

Define Custom Concepts

You can define a new concept the same way as you would define a Bot Synonym, but you need to start the word with a ~. For more details on the naming convention, refer here.

You can also define custom concepts using emojis.

Concepts are used to define patterns. For example, the pattern buy ~fruit is used to capture buy orange or buy banana.
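
As an illustration of how a concept groups interchangeable terms inside a pattern, the sketch below uses a simplified matcher and invented concept contents; the Platform's pattern engine is richer than this.

  # Illustrative sketch: a concept is a named set of interchangeable terms, and a pattern
  # can reference it. Concept contents and matching logic here are simplified examples.
  CONCEPTS = {"~fruit": {"apple", "orange", "banana"}}

  def matches_pattern(pattern, utterance):
      # Pattern words must appear in order; "~name" matches any term in that concept.
      tokens = utterance.lower().split()
      pos = 0
      for part in pattern.lower().split():
          allowed = CONCEPTS.get(part, {part})
          while pos < len(tokens) and tokens[pos] not in allowed:
              pos += 1
          if pos == len(tokens):
              return False
          pos += 1
      return True

  print(matches_pattern("buy ~fruit", "i would like to buy an orange"))  # True
  print(matches_pattern("buy ~fruit", "i want to sell my car"))          # False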

 

Emoji Support

The NLP engine can recognize emojis present in the user utterance. You can include these in concept definitions to give a better user experience. For example, a thumbs-up emoticon (:thumbs up:) from the user can be considered an affirmative, and a frown emoticon (:frowning2:) can be considered as expressing anger, so that appropriate action can be taken.

You can import emojis into your VA’s system concepts so that it can conduct the conversations when emojis are present in the user’s utterances. For this, follow the below steps:

  1. Under the Build top menu option, from the left menu, click Natural Language -> Training.
  2. On the Intents tab, click the more/ellipses icon and select Import Emojis into Concepts.
  3. This adds all the default emojis to the system concepts for the current language. Emojis are added to system concepts such as ~emohappy, ~emohello, etc.
  4. After importing, you may choose to review and remove any emojis that you do not want to support.
  5. If you re-import the emojis, the current emojis are retained and any additional emojis are added.

Manage Patterns and Rules

To optimize the accuracy and recognition of the NLP interpreter, you can create patterns for task names and task fields.

Using synonyms works well for words used in the task name. Sometimes, however, users may refer to a task using slang, metaphors, or other idiomatic expressions.

For example, a task is named Get Current Weather, but the user inputs what's happening with today's rain situation. In this case, none of the words in the task name are used, yet the input has the same meaning as Get Current Weather.

In these cases, using patterns can help to improve NLP interpreter accuracy.

When the NLP interpreter matches a synonym to one task or field, and a pattern to a different task or field, the pattern match is prioritized and used for positive recognition over the synonym match.

To learn more about best practices and tips for optimizing NLP, refer to Natural Language Processing Guide.

For a quick guide towards the usage of patterns, refer to How to use Patterns.

 

Navigate to the Patterns Tab

  1. Open the VA for which you want to add patterns and select the Build tab from the top menu.
  2. From the left menu, click Natural Language -> Training.
  3. On the Training page, click one of the following tabs:
    1. Intents – Add or edit patterns or rules for dialog task User Intent nodes.
    2. Entities – Add or edit patterns for dialog task Entity nodes.

 

Add Patterns for Intents

You can define a pattern as a sentence that represents the task but does not actually contain any of the words in the name of the task.

Also, you can define where words must be ignored by the NLP interpreter to improve accuracy for recognizing the correct task.

For example, you can build a pattern like find *testable* value to recognize, in the user input, a sequence of words in order: the word find, followed by any number of words, followed by the word testable, again followed by any number of words, and finally followed by the word value. A user input such as Find me any testable integers corresponding to the value matches this pattern.
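
One way to picture this in-order, any-words-in-between matching is a translation into a regular expression. The sketch below is only an approximation of the behavior described here; it does not reproduce the Platform's full pattern syntax.

  import re

  # Rough approximation only: pattern words must appear in the utterance in the given
  # order, with any number of other words allowed in between.
  def pattern_to_regex(pattern):
      words = [re.escape(w) for w in pattern.lower().split()]
      gap = r"(?:\W+\w+)*?\W+"  # lazily skip over any intervening words
      return re.compile(r"\b" + gap.join(words) + r"\b")

  rx = pattern_to_regex("find testable value")
  print(bool(rx.search("find me any testable integers corresponding to the value")))  # True
  print(bool(rx.search("value of a testable find")))                                  # False (wrong order)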

You can also use concepts to build patterns. For example, if the pattern is defined as buy ~fruit, a user input that matches this pattern can be buy orange.

In the Intents section, you can view, add, and delete patterns for dialog tasks.

To add an intent pattern, follow the below steps:

  1. On the Training page, click the Patterns/+Pattern against the intent name you want to enter the patterns for.
  2. In the Intent pane, enter one or more sentence patterns, and press Enter between patterns.
  3. You can reorder, edit or delete patterns using the appropriate icons.

 

Add Rules for Intents

You can use traits or context tags to define intent identification rules for better intent detection. You can combine Traits or Context Tags into conditional rules using the AND and OR operators (a conceptual sketch follows the steps below).

To add an intent rule, follow the below steps:

  1. On the Training page, click the Rules/+Rule against the intent name for which you want to enter the rules.
  2. In the Intent pane, enter one or more traits/context tags as conditional rules using AND/OR operators, and press Enter between traits/tags.
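
Conceptually, such a rule is just a boolean condition over the traits and context tags present for the current input. The sketch below uses hypothetical trait and tag names purely for illustration; the Platform evaluates these rules internally.

  # Hypothetical illustration of an intent rule such as:
  #   trait "refund_request" AND context tag "authenticated"
  # The trait/tag names are invented examples.
  detected_traits = {"refund_request"}   # traits detected for the current utterance
  context_tags = {"authenticated"}       # tags present in the conversation context

  rule_matches = ("refund_request" in detected_traits) and ("authenticated" in context_tags)
  print(rule_matches)  # True -> this rule identifies the associated intent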

Refer here to know more about Traits, Trait Association Rules and Context Tags.

 

Add Patterns for Entities

In the Entities section of the Training page, you can view, add, and delete patterns for dialog task entity node names.

To add an entity pattern, follow the below steps:

  1. On the Training page, click the Entities tab.
  2. Click the Edit icon against the entity to which you want to add patterns.
  3. On the Entity Training page, select the Patterns tab.
  4. Enter one or more sentence patterns, and press Enter between patterns.
  5. You can reorder, edit or delete patterns using the appropriate icons.

 

Negative Patterns

On the Kore.ai XO Platform, a natural language engine attempts to identify user input and match that user input to a task. You can modify additional advanced settings to enable negative patterns for intent detection.

This is useful for filtering out matched intents that are part of the user utterance but should not actually be triggered.

Use Case: For example, a user says I was trying to Book a Flight when I faced an issue. Though the machine identifies the intent as Book a Flight, that is not what the user wants to do. In such a case, defining was trying to as a negative pattern ensures that the matched intent is ignored.

Negative patterns are used to eliminate intents detected by the Fundamental Meaning or Machine Learning models.
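
Conceptually, negative patterns act as a filter applied to the intents already matched by the FM and ML engines, before Ranking & Resolver. The sketch below is a simplified illustration using plain substrings; real negative patterns use the same syntax as intent patterns.

  # Simplified sketch: drop matched intents whose negative pattern also matches the
  # utterance, so they never reach Ranking & Resolver.
  NEGATIVE_PATTERNS = {"Book a Flight": ["was trying to"]}

  def filter_matches(matched_intents, utterance):
      text = utterance.lower()
      return [
          intent for intent in matched_intents
          if not any(p in text for p in NEGATIVE_PATTERNS.get(intent, []))
      ]

  matches = ["Book a Flight", "Report an Issue"]
  print(filter_matches(matches, "I was trying to Book a Flight when I faced an issue"))
  # ['Report an Issue'] -> "Book a Flight" is ignored because of its negative pattern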

To enable Negative Patterns, follow the below steps:

  1. Under the Build top menu option, from the left menu, click Natural Language -> Advanced Settings.
  2. Enable Negative Patterns. By default, this option is disabled.
  3. You can now see the Negative Patterns section. 

  4. For each of the intents in your VA, you can define negative patterns similar to how you define intent patterns. 
    1. Go to Natural Language > Training > Intents and select the Intent to which you want to add the Negative Pattern.
    2. In the Intent training window, select the Negative Patterns tab.
    3. Add negative patterns in the Add a Pattern field, then press Enter/Return. Your new patterns are automatically saved.
  5. Any intents identified by the FM or ML engines that match one of the configured negative patterns are ignored and not sent to Ranking & Resolver.
  6. Once added, these patterns can be reordered, edited, and deleted.

Thresholds & Configurations

To train and improve performance, Thresholds and Configurations can be specified for all three NLP engines – FM, KG, and ML. You can access these settings under Build -> Natural Language -> Thresholds & Configurations.

The settings for the FM engine are discussed in detail in the following sections.

Note: If your assistant is multilingual, you can set the Thresholds differently for different languages. If not set, the Default Settings will be used for all languages. This feature is available from v7.0.

 

To set up Thresholds and Configurations for the FM Engine, please follow the steps below:

  1. Open the assistant for which you want to configure thresholds.
  2. Select the Build tab from the top menu.
  3. From the left menu, click Natural Language -> Thresholds & Configurations.
  4. The Fundamental Meaning section allows you to set the threshold for the FM engine:
    1. Intent Detection using Task Name Words can be disabled if your use case requires it. This is particularly useful if you have many similarly named tasks – place order, cancel order, request order, duplicate order. All of these tasks would be matched as intents if the user utterance contains the word "order". Instead of trying, guessing, and training for all possible utterances, disabling this option does the trick. Disabling it does not affect intent detection using patterns. Also, an implicit pattern using the words in the Task Name is automatically added; this implicit pattern identifies the intent only when the user input starts and ends with the exact task name. By default, this option is enabled.
    2. FM Threshold can be used to limit the number of results sent from the FM Engine to Ranking & Resolver, so as to reduce the possibility of presenting a low-confidence match to the end user. Enable this configuration and choose the threshold percentage as a value between 0% and 20% (the default is 2%). Only the intent matches within the set percentage of the top-scoring intent are considered, and all other intents are eliminated (a worked example follows these steps). This configuration is available only for the English, Spanish, French, and German languages.
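
As a worked illustration of one plausible reading of the threshold rule (the scores below are invented), with the default 2% setting only intents scoring within 2% of the top FM score are passed on:

  # Worked illustration with invented scores. With a 2% threshold, only intents whose
  # score is within 2% of the top score are sent on to Ranking & Resolver.
  fm_scores = {"Book a Flight": 0.91, "Change Flight": 0.90, "Cancel Booking": 0.70}
  threshold = 0.02  # 2% (the default)

  top = max(fm_scores.values())
  qualified = {intent: s for intent, s in fm_scores.items() if s >= top * (1 - threshold)}
  print(qualified)  # {'Book a Flight': 0.91, 'Change Flight': 0.90}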

Suggested Reading

To know more about the guidelines for naming intents, using patterns, and more, refer here.
