Advanced NLP Configurations

Using the Thresholds & Configurations section under Natural Language -> Training, you can fine-tune intent detection for each of the languages enabled for your bot by customizing the thresholds and configurations of the NLP engines.

In addition, the Advanced NLP Configurations section provides advanced settings that address specific use cases and requirements.

Warning: The default settings for these configurations are ideal for most use cases. Do not change them unless you are fully acquainted with the functionality they control; improper changes can degrade bot performance.

 

The following table details the configurations that can be set in this section:

| Configuration | Description | Affected NLP Engine | Valid Inputs | Notes |
| --- | --- | --- | --- | --- |
| Split Compound Words | Splits compound words into multiple stems and processes each stem individually. | ML | Enable, Disable (default) | Supported only for German-language bots |
| None Intent | Creates a dummy placeholder intent that reduces the chance of a false-positive intent match by the ML engine. | ML | Enable (default), Disable | |
| Cosine similarity dampening | Avoids penalizing short questions by dampening cosine similarity. | KG | Enable (default), Disable | |
| Network Type | Neural network used for intent training. | ML | Standard (default), MLP-BOW, MLP-WordEmbeddings, LSTM, CNN, Transformer | |
| Epochs | Number of iterations for training the neural network. | ML | 20 to 300, in increments of 10 (default 20) | Valid only when Network Type is MLP-BOW, MLP-WordEmbeddings, LSTM, or CNN |
| Batch Size | Number of training samples used per batch during training. | ML | 10 to 30, in increments of 5 (default 10) | Valid only when Network Type is MLP-BOW, MLP-WordEmbeddings, LSTM, or CNN |
| Learning rate | Hyperparameter that controls how much the network weights are adjusted with respect to the loss gradient. | ML | 1e-4 to 1e-3, in increments of 1e-2 (default 1e-3) | Valid only when Network Type is MLP-BOW, MLP-WordEmbeddings, LSTM, or CNN |
| Dropout | Regularization parameter to avoid overfitting the model. | ML | 0 to 0.8, in increments of 0.1 (default 0) | Valid only when Network Type is MLP-BOW, MLP-WordEmbeddings, LSTM, or CNN |
| Vectorizer | Feature extraction technique applied to the training data. | ML | count (default), tfidf | Valid only when Network Type is MLP-BOW |
| Maximum sequence length | Maximum length (in words) of a training sample or user input. | ML | 10 to 30, in increments of 5 (default 20) | Valid only when Network Type is MLP-WordEmbeddings, LSTM, or CNN |
| Embeddings Type | Word embedding technique used for featurization. | ML | generated, random (default) | Valid only when Network Type is MLP-WordEmbeddings, LSTM, or CNN |
| Embeddings Dimensions | Dimensionality of the embeddings used in featurization. | ML | 100 to 400, in increments of 50 (default 300) | Valid only when Network Type is MLP-WordEmbeddings, LSTM, or CNN |
| K Fold Cross-Validation | The k parameter for cross-validation. | ML | 2 to 10, in increments of 1 (default 2) | |
| FAQ Name as Intent Name | Uses the Primary Question of the FAQ as the intent name, even when the FAQ is linked to a Dialog. | KG | Enable, Disable (default) | |
| Fuzzy Match | Enables the fuzzy matching algorithm for intent identification. | ML | Enable (default), Disable | |
| Handle Negation | Enables the handling of negated words in intent identification. | ML | Enable (default), Disable | |
| Ignore Multiple Occurrences | Disregards the frequency of words during vectorization. | ML | Enable (default), Disable | Valid only when Network Type is MLP-BOW |
| Entity Placeholders in User Utterances | Replaces entities present in user utterances with corresponding placeholders. | ML | Enable (default), Disable | Valid only when Network Type is MLP-BOW |

Split Compound Words

Compound words are formed when two or more words are joined to create a new word with an entirely new meaning. This is particularly common in the German language, where two (or more) words can be combined, leading to a virtually unlimited number of new compounds. The components can be connected with a transitional element, as with the -er in Bilder | buch (‘picture book’); or parts of the modifier can be deleted, as in Kirch | turm (‘church tower’), where the final -e of the lemma Kirche is deleted. A compound word can also mean something entirely different from its stem words, for example, Gründer (‘founder’) versus the stems grün | der (‘green | the’). From an NLP perspective, it is therefore important to determine when the NLP engines should split a word and process its parts, and when the entire word should be processed as-is.

This setting controls how compound words are processed. Once enabled, compound words present in the user utterance are split into their stem words and then considered for intent detection.
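
The idea can be sketched with a toy, dictionary-based greedy splitter (hypothetical code; the platform's actual splitting logic is internal and more sophisticated):

```python
# Toy greedy splitter over a tiny stem lexicon (illustrative only).
GERMAN_STEMS = {"kirche", "turm", "bilder", "buch"}

def split_compound(word, lexicon=GERMAN_STEMS):
    """Split a compound into two known stems, restoring an elided final -e
    (Kirch -> Kirche) as described above; return the word unsplit otherwise."""
    word = word.lower()
    for i in range(len(word) - 2, 2, -1):
        left, right = word[:i], word[i:]
        left_forms = {left, left + "e"}  # allow the dropped -e of the lemma
        if right in lexicon and left_forms & lexicon:
            return [(left_forms & lexicon).pop(), right]
    return [word]  # no split found: process the whole word

print(split_compound("Kirchturm"))   # ['kirche', 'turm']
print(split_compound("Bilderbuch"))  # ['bilder', 'buch']
```

With the setting enabled, intent detection would see the stems (e.g. ‘Kirche’, ‘Turm’) rather than the unseen compound.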

None Intent

The Machine Learning (ML) engine uses the training utterances to build a model that evaluates user utterances, and it tries to classify every user input into one of the trained intents. However, when the input contains out-of-vocabulary words, the ML engine still attempts a classification, which can wrongly trigger an intent when the user is actually supplying an entity value; for example, a person's name entered at an entity node should not trigger any intent.

Adding an extra None Intent ensures that such random input is classified into it instead of a real intent. Once enabled, the ML model is tuned to identify the None Intent when a user utterance contains words that are not used in the bot's training, i.e., the bot vocabulary.
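
The effect can be approximated with a scikit-learn sketch in which a catch-all class absorbs inputs made of out-of-vocabulary words (the class names and training data here are hypothetical; the platform builds and tunes its none intent internally):

```python
# Sketch: a dummy "None" class alongside real intents (illustrative only).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train = [
    ("book a flight", "BookFlight"),
    ("reserve a plane ticket", "BookFlight"),
    ("check my balance", "CheckBalance"),
    ("how much money do i have", "CheckBalance"),
    ("qwerty zzz xyzzy", "None"),  # placeholder utterances for the dummy intent
]
texts, labels = zip(*train)

clf = make_pipeline(CountVectorizer(), LogisticRegression())
clf.fit(texts, labels)

# "John Smith" contains no in-vocabulary words; ideally it resolves to "None"
# rather than forcing a match with a real intent.
print(clf.predict(["John Smith"]))
```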

Cosine similarity dampening

FAQ identification is based on word matching. The problem with this approach is that a user utterance containing fewer words than the corresponding trained utterance scores poorly, which can cause intent identification to fail.

When the Cosine Similarity Dampening configuration is enabled, the user utterances that have fewer words than the trained utterances (i.e. Primary and Alternate Questions) will result in a higher “match score” than when the configuration is disabled.
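
One way to picture the effect (an assumed formulation for illustration; the platform's exact scoring is internal) is to dampen the length normalization applied to the shorter utterance:

```python
# Set-based cosine score with an assumed length-dampening exponent.
def cosine_score(query_words, faq_words, dampening=1.0):
    q, f = set(query_words), set(faq_words)
    overlap = len(q & f)
    # dampening < 1 weakens the penalty contributed by the query's length
    return overlap / ((len(q) ** (0.5 * dampening)) * (len(f) ** 0.5))

trained = "how do i reset my account password".split()
short_q = "reset password".split()

print(round(cosine_score(short_q, trained), 2))                 # ~0.53 plain
print(round(cosine_score(short_q, trained, dampening=0.5), 2))  # ~0.64 dampened
```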

Externalization of ML Engine

In machine learning, a hyperparameter is a parameter whose value is used to control the learning process. The hyperparameters provide you with additional customization options for your bots. The following are the ML configurations that can be customized.

Network Type

You can choose the neural network to use for intent training from the following options:

  • Standard
  • MLP-BOW – The bag-of-words model is a simplifying representation used in natural language processing and information retrieval. In this model, a text is represented as the bag of its words, disregarding grammar and even word order but keeping multiplicity.
  • MLP-WordEmbeddings – Word embedding is the collective name for a set of language modeling and feature learning techniques in natural language processing where words or phrases from the vocabulary are mapped to vectors of real numbers.
  • LSTM (Long Short-Term Memory) – An artificial recurrent neural network (RNN) architecture used in the field of deep learning. LSTM has feedback connections and hence can capture long-term dependencies in texts of any length, making it well suited for longer texts.
  • CNN (Convolutional Neural Network) – A class of deep neural networks most commonly applied to analyzing visual imagery. It makes use of word order within a specific region size and has achieved remarkable results on various text classification tasks.
  • Transformer – Uses the Universal Sentence Encoder in the vectorization stage of the training pipeline. The output of the sentence encoder is fed to a multilayer perceptron network for training. The sentence encoder has an inbuilt capability of understanding the semantic similarity between sentences, taking into account synonyms and various usage patterns of the same sentence.
    The Universal Sentence Encoder encodes text into high-dimensional vectors that can be used for text classification, semantic similarity, clustering, and other natural language tasks. The model is trained and optimized for greater-than-word-length text, such as sentences, phrases, or short paragraphs. It is trained on a variety of data sources and tasks with the aim of dynamically accommodating a wide variety of natural language understanding tasks. The input is variable-length English text and the output is a 512-dimensional vector.
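
The following tf.keras sketch illustrates how the network choice changes the model definition (the layer sizes are assumptions for illustration; these are not the platform's actual architectures):

```python
# Sketch: one builder, different Network Type values (illustrative sizes).
import tensorflow as tf

VOCAB, MAXLEN, DIMS, N_INTENTS = 5000, 20, 300, 8

def build(network_type):
    if network_type == "MLP-BOW":  # dense layers over a bag-of-words vector
        return tf.keras.Sequential([
            tf.keras.Input(shape=(VOCAB,)),
            tf.keras.layers.Dense(128, activation="relu"),
            tf.keras.layers.Dense(N_INTENTS, activation="softmax")])
    # the remaining types operate on padded word-id sequences
    layers = [tf.keras.Input(shape=(MAXLEN,)),
              tf.keras.layers.Embedding(VOCAB, DIMS)]
    if network_type == "MLP-WordEmbeddings":
        layers += [tf.keras.layers.Flatten(),
                   tf.keras.layers.Dense(128, activation="relu")]
    elif network_type == "LSTM":     # recurrence captures long-range context
        layers += [tf.keras.layers.LSTM(64)]
    elif network_type == "CNN":      # convolution over a fixed region size
        layers += [tf.keras.layers.Conv1D(64, 3, activation="relu"),
                   tf.keras.layers.GlobalMaxPooling1D()]
    layers += [tf.keras.layers.Dense(N_INTENTS, activation="softmax")]
    return tf.keras.Sequential(layers)

build("LSTM").summary()
```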

Epochs

In terms of artificial neural networks, an epoch refers to one cycle through the full training dataset. Achieving good performance on non-training data usually (but not always) requires more than one pass over the training data. The number of epochs is a hyperparameter that controls the number of complete passes through the training dataset.

Batch Size

Batch size is a term used in machine learning and refers to the number of training examples utilized in one iteration. It controls the accuracy of the estimate of the error gradient when training neural networks. The batch size is a hyperparameter that controls the number of training samples to work through before the model’s internal parameters are updated.

Learning rate

In machine learning and statistics, the learning rate is a tuning parameter in an optimization algorithm that determines the step size at each iteration while moving toward a minimum of a loss function. It can be thought of as a parameter for controlling the weight update in the neural network based on the loss.

Dropout

The term dropout refers to dropping out units (both hidden and visible) in a neural network. Simply put, dropout means ignoring a randomly chosen set of units (i.e., neurons) during the training phase. It is a regularization technique that prevents overfitting.
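
A minimal tf.keras sketch of where the four hyperparameters above (epochs, batch size, learning rate, dropout) plug into a training run, using the documented defaults (the model shape and data names are assumptions for illustration):

```python
# Epochs, Batch Size, Learning rate, and Dropout in a generic training run.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.Input(shape=(5000,)),                 # e.g. bag-of-words features
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.1),                  # Dropout: 0-0.8, default 0
    tf.keras.layers.Dense(8, activation="softmax"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),  # default 1e-3
    loss="sparse_categorical_crossentropy",
)
# X_train / y_train are placeholders for the vectorized training utterances:
# model.fit(X_train, y_train,
#           epochs=20,      # complete passes over the data (20-300)
#           batch_size=10)  # samples per gradient update (10-30)
```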

Vectorizer

Vectorization converts text into numeric feature vectors that the model can train on. This setting determines the feature extraction technique applied to the training data and can be set to one of the following:

  • Count Vectorizer converts the given text documents to a vector of term/token counts based on the frequency (count) of each word occurrence in the text. This is helpful when there are multiple texts and each word needs to be converted into a vector for further text analysis. It enables pre-processing of text data prior to generating the vector representation.
  • TFIDF Vectorizer scores how relevant a word is to a document in a collection of documents. The score is computed by multiplying two metrics: how many times a word appears in a document (term frequency, TF) and the inverse document frequency (IDF) of the word across the set of documents.
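
For example, scikit-learn exposes both techniques directly, which makes the difference easy to see:

```python
# Count vs. TFIDF featurization of the same utterances (scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer

utterances = ["book a flight", "book a hotel", "cancel my flight"]

print(CountVectorizer().fit_transform(utterances).toarray())
# raw term counts per utterance
print(TfidfVectorizer().fit_transform(utterances).toarray())
# the same counts re-weighted by inverse document frequency
```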

Maximum Sequence length

When processing a sentence (for training or prediction), the length of the sequence is the number of words in the sentence. The maximum sequence length parameter is the maximum number of words considered for training. If the user input or training phrase is longer than the maximum sequence length, it is trimmed to this length; if it is shorter, it is padded with special tokens.
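
A plain-Python sketch of the trim-or-pad behavior (the pad token shown is hypothetical):

```python
# Trim or pad a tokenized utterance to a fixed maximum sequence length.
def to_fixed_length(tokens, max_len, pad="<PAD>"):
    return tokens[:max_len] + [pad] * (max_len - len(tokens))

print(to_fixed_length("please book me a flight to london".split(), 5))
# ['please', 'book', 'me', 'a', 'flight']                 (trimmed)
print(to_fixed_length("reset password".split(), 5))
# ['reset', 'password', '<PAD>', '<PAD>', '<PAD>']        (padded)
```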

Embeddings Type

A (word) embedding is a vector representation of a word or phrase in an input/training text. Words with similar meanings have similar vector representations in n-dimensional space, and the vector values are learned in a manner that resembles neural network training.

Embeddings Type can be set to one of the following:

  • random (default setting): All words are initially assigned random embeddings, which are then optimized for the given training data during training.
  • generated: Word embeddings are generated just before training starts, using a Word2Vec model built from the training data. These generated embeddings are then further optimized for the given training data during training.
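
A sketch of the two initializations, using gensim's Word2Vec for the generated case (the library choice and corpus are illustrative; the dimension of 300 matches the default Embeddings Dimensions setting described below):

```python
# random vs. generated embedding initialization (illustrative).
import numpy as np
from gensim.models import Word2Vec

corpus = [u.split() for u in ["book a flight",
                              "book a hotel ticket",
                              "cancel my flight booking"]]
DIMS = 300  # see Embeddings Dimensions below

# random: every word starts from a random vector, optimized during training
rng = np.random.default_rng(0)
random_emb = {w: rng.normal(size=DIMS) for sent in corpus for w in sent}

# generated: Word2Vec embeddings built from the training data beforehand,
# then optimized further during training
w2v = Word2Vec(sentences=corpus, vector_size=DIMS, min_count=1)
generated_emb = {w: w2v.wv[w] for w in w2v.wv.index_to_key}
```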

Embeddings Dimensions

The embedding dimension defines the size of the embedding vector. If the word embeddings are random or generated, any number can be used as an embedding dimension.

K Fold Cross-Validation

Cross-validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The procedure has a single parameter, k, that refers to the number of groups that a given data sample is split into. This setting allows you to configure the k parameter.
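
For instance, with scikit-learn (illustrative data; k corresponds to the cv argument):

```python
# k-fold cross-validation: split the data into k groups, train on k-1,
# validate on the held-out group, and rotate.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=60, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=2)  # k=2
print(scores.mean())
```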

FAQ Name as Intent Name

This option controls whether you see the Primary Question or Dialog Task name in the following scenarios:

  • Where intent names are presented to the user
    • Disambiguation flow
    • Follow-ups
  • Utterance testing
  • Batch testing
  • NLP Analysis
  • Analytics (Dashboards, Custom Dashboards, Conversation Flows, and Metrics)
  • Intent detection – ranking flows

Fuzzy Match

Fuzzy matching is an approximate string matching technique that helps the system identify non-exact matches. The ML engine uses fuzzy matching logic to identify definitive matches. The fuzzy match algorithm assigns a ‘Fuzzy Search score’ to each intent based on its similarity to the user utterance. Any intent with a fuzzy match score of 95 or higher (on a scale of 0-100) is identified as a definitive match.

However, fuzzy matching can produce false positives when words have similar spellings but different meanings, for example, possible vs. impossible or available vs. unavailable. In such cases, you can disable this option to prevent the ML engine from using this matching algorithm.
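
The false-positive risk is easy to reproduce with a generic edit-based similarity score on the same 0-100 scale (difflib here is a stand-in; the platform's fuzzy scorer is internal):

```python
# Similar spellings score highly even with opposite meanings.
from difflib import SequenceMatcher

def fuzzy_score(a, b):
    return round(100 * SequenceMatcher(None, a.lower(), b.lower()).ratio())

print(fuzzy_score("possible", "impossible"))    # 89: high despite the negation
print(fuzzy_score("available", "unavailable"))  # 90: likewise
```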

Negation Handling

This setting determines the ML engine's behavior when negated words are present in the user utterance. When the Negation Handling configuration is enabled, an intent's ML score is penalized if the user utterance contains negated forms of words that indicate that intent.
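
A toy sketch of the penalty idea (the word list, window, and penalty factor are all assumptions; the ML engine's actual logic is internal):

```python
# Dampen an intent's score when its keyword is preceded by a negator.
NEGATORS = {"not", "no", "never", "don't"}

def negation_penalty(utterance, intent_keywords, window=3):
    words = utterance.lower().split()
    for i, w in enumerate(words):
        if w in intent_keywords and NEGATORS & set(words[max(0, i - window):i]):
            return 0.5  # assumed penalty factor
    return 1.0

ml_score = 0.92 * negation_penalty("I do not want to book a flight", {"book"})
print(ml_score)  # 0.46: 'book' is negated, so the intent score is penalized
```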

Ignore Multiple Occurrences

Intent identification can get skewed when multiple occurrences of the same word are present in the user utterance. When the Ignore Multiple Occurrences configuration is enabled, repeated words in the user utterance are counted only once for vectorization and the subsequent intent matching.
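
scikit-learn's CountVectorizer shows the equivalent behavior via its binary flag:

```python
# With binary counting, a repeated word contributes only once to the vector.
from sklearn.feature_extraction.text import CountVectorizer

utterance = ["flight flight flight to london"]

print(CountVectorizer().fit_transform(utterance).toarray())
# [[3 1 1]] -> 'flight' counted three times
print(CountVectorizer(binary=True).fit_transform(utterance).toarray())
# [[1 1 1]] -> multiple occurrences ignored
```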

Entity Placeholders in User Utterances

Sometimes you might want the system to replace the entity values present in the user utterance with ‘entity placeholders’ so that intent detection can be improved. Note that entities that are not resolved by the NER model are not replaced, so if you enable this option we strongly urge you to annotate all the training utterances. These entities are replaced in user utterances during end-user interactions, batch testing, utterance testing, and conversation testing.
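
Conceptually, the replacement looks like this (the placeholder format and entity names are hypothetical):

```python
# Replace annotated entity values with placeholders before intent detection.
def with_placeholders(utterance, entities):
    for value, entity_name in entities.items():
        utterance = utterance.replace(value, f"<{entity_name}>")
    return utterance

print(with_placeholders("book a flight to London on Friday",
                        {"London": "city", "Friday": "travel_date"}))
# book a flight to <city> on <travel_date>
```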
