Natural Language

Training Validations

NLP models play a significant role in providing natural conversational experiences for your customers and employees. Improving the accuracy of the NLP models is a continuous journey and requires fine-tuning as you add new use cases to your virtual assistant. The Kore.ai XO Platform proactively validates the NLP training provided…

Advanced NLP Configurations

You can fine-tune intent detection for each language enabled for your Virtual Assistant (VA). To do this, on the left pane, click Natural Language > Training > Thresholds & Configurations. Under the Thresholds & Configurations section, you can customize the Fundamental Meaning model – see…
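To make the idea of per-language thresholds concrete, here is a minimal Python sketch of what such a configuration could look like. The setting names and values below are hypothetical placeholders for illustration only; they are not the XO Platform's actual configuration keys or defaults.

```python
# Illustrative only: hypothetical per-language NLP threshold settings.
# These names are placeholders, not the XO Platform's real configuration keys.
nlp_thresholds = {
    "en": {  # English
        "ml_definitive_score": 0.90,   # ML matches at/above this are treated as definitive
        "ml_probable_score": 0.30,     # minimum score for a probable ML match
        "kg_term_coverage": 0.50,      # fraction of path terms that must appear in the utterance
    },
    "de": {  # German: a second language can carry its own, independently tuned values
        "ml_definitive_score": 0.85,
        "ml_probable_score": 0.25,
        "kg_term_coverage": 0.50,
    },
}

def get_thresholds(language: str) -> dict:
    """Return the threshold set for a language, falling back to English."""
    return nlp_thresholds.get(language, nlp_thresholds["en"])

print(get_thresholds("de")["ml_definitive_score"])  # -> 0.85
```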

Model Validation

Once you have built and trained your virtual assistant, the Kore.ai platform builds an ML model that maps user utterances to intents (click here for more info). Once the model is created, it is recommended that you validate it to estimate its unbiased generalization performance. The XO…
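The sketch below illustrates the underlying idea of estimating generalization performance with k-fold cross-validation. It uses scikit-learn as a stand-in for the platform's internal ML model, and the utterances and intent names are made up for the example.

```python
# A minimal sketch of k-fold cross-validation for an intent classifier.
# scikit-learn is used here purely as a stand-in for the platform's ML engine.
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical training data: sample utterances labelled with their intents.
utterances = [
    "I want to book a flight", "Reserve a plane ticket for me", "Book me a flight to Boston",
    "Cancel my booking", "Please cancel the reservation", "I need to cancel my flight",
    "What is my account balance", "Show me my balance", "How much money is in my account",
]
intents = ["BookFlight"] * 3 + ["CancelBooking"] * 3 + ["CheckBalance"] * 3

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))

# 3-fold cross-validation scores the model only on utterances it was not
# trained on, giving an (approximately) unbiased estimate of generalization.
scores = cross_val_score(model, utterances, intents, cv=3, scoring="accuracy")
print("Fold accuracies:", scores, "mean:", scores.mean())
```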

Traits

In natural conversations, it is very common for a user to provide background or other relevant information while describing a specific scenario. Traits are specific entities, attributes, or details that users express in their conversations. The utterance may not directly convey any specific intent, but the traits present in the utterance are used…
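A very simplified way to picture trait detection is phrase spotting that runs independently of intent matching. The sketch below is a hypothetical illustration; the trait names, phrases, and matching rule are assumptions, not the platform's implementation.

```python
# Hypothetical sketch: tag a trait whenever one of its indicative phrases
# appears in the utterance, regardless of which intent (if any) matches.
TRAIT_PHRASES = {
    "Urgency": ["as soon as possible", "urgently", "right away"],
    "Frustration": ["not happy", "fed up", "third time"],
    "TravelClass:Business": ["business class"],
}

def detect_traits(utterance: str) -> list[str]:
    text = utterance.lower()
    return [trait for trait, phrases in TRAIT_PHRASES.items()
            if any(phrase in text for phrase in phrases)]

print(detect_traits("This is the third time my card was blocked, fix it right away"))
# -> ['Urgency', 'Frustration']
```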

Ranking and Resolver

The Kore.ai NLP engine uses the Machine Learning, Fundamental Meaning, and Knowledge Graph (if any) models to match intents. All three engines deliver their findings to the Kore.ai Ranking and Resolver component as either exact matches or probable matches. Ranking and Resolver determines the final winner of the…
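To make the "exact versus probable match" idea tangible, here is a minimal, hypothetical resolver sketch. The engine names, the score scale, and the tie-breaking rule (definitive matches outrank probable ones, then higher score wins) are illustrative assumptions, not the platform's actual ranking logic.

```python
# Hypothetical sketch of a ranking-and-resolver step over engine findings.
from dataclasses import dataclass

@dataclass
class Finding:
    engine: str       # e.g. "ML", "FM", or "KG"
    intent: str
    score: float      # normalised confidence in [0, 1]
    definitive: bool  # True for an exact match, False for a probable match

def resolve(findings: list[Finding]) -> str | None:
    if not findings:
        return None
    # Definitive (exact) matches always outrank probable matches;
    # among equals, the higher score wins.
    best = max(findings, key=lambda f: (f.definitive, f.score))
    return best.intent

findings = [
    Finding("ML", "BookFlight", 0.82, False),
    Finding("FM", "BookFlight", 0.95, True),
    Finding("KG", "BaggageFAQ", 0.60, False),
]
print(resolve(findings))  # -> BookFlight
```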

Knowledge Graph Training

Training your assistant is not restricted to the Machine Learning and Fundamental Meaning engines; you must train the Knowledge Graph (KG) engine as well. The Ontology-based Knowledge Graph turns static FAQ text into an intelligent, personalized conversational experience. It uses domain terms and relationships, thus reducing training needs. It…
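The following sketch illustrates, in a deliberately simplified form, how an ontology of domain terms can route a question to an FAQ answer: each answer hangs off a path of terms, and a question matches when it covers enough of that path. The data, the coverage rule, and the threshold are assumptions for illustration only.

```python
# Hypothetical sketch of ontology-style FAQ matching by term coverage.
FAQ_GRAPH = {
    ("accounts", "statement", "download"): "You can download statements from the Accounts page.",
    ("cards", "credit", "limit"): "Credit limits can be raised from the Cards section.",
    ("cards", "block"): "Use the mobile app to block a lost or stolen card.",
}

def answer(question: str, min_coverage: float = 0.5) -> str | None:
    words = set(question.lower().split())
    best_path, best_cov = None, 0.0
    for path in FAQ_GRAPH:
        coverage = sum(term in words for term in path) / len(path)
        if coverage >= min_coverage and coverage > best_cov:
            best_path, best_cov = path, coverage
    return FAQ_GRAPH[best_path] if best_path else None

print(answer("How do I block my credit card?"))
# -> "Use the mobile app to block a lost or stolen card."
```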

Fundamental Meaning

Fundamental Meaning is a computational linguistics approach that is built on ChatScript. The model analyzes the structure of a user’s utterance to identify each word by meaning, position, conjugation, capitalization, plurality, and other factors. The Fundamental Meaning model is a deterministic model that uses semantic rules and…
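As a rough intuition for deterministic, rule-style matching, the sketch below scores intents only when all of their required words appear in the utterance, with optional words adding weight. This is a toy illustration under stated assumptions, not the actual Fundamental Meaning implementation.

```python
# Toy sketch of deterministic, word-rule intent scoring (not the FM engine).
INTENT_RULES = {
    "BookFlight": {"required": {"book", "flight"}, "optional": {"ticket", "seat"}},
    "CancelBooking": {"required": {"cancel"}, "optional": {"booking", "reservation"}},
}

def fm_style_score(utterance: str) -> dict[str, float]:
    words = set(utterance.lower().split())
    scores = {}
    for intent, rule in INTENT_RULES.items():
        if rule["required"] <= words:  # every required word must appear
            scores[intent] = 1.0 + 0.1 * len(rule["optional"] & words)
    return scores

print(fm_style_score("I would like to book a flight and a seat"))
# -> {'BookFlight': 1.1}
```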

Improving VA Performance – NLP Optimization

A chatbot’s ability to consistently understand and interact with a user is dictated by the robustness of the Natural Language Processing (NLP) that powers the conversation. Kore.ai’s platform uses a unique Natural Language Processing strategy, combining Fundamental Meaning and Machine Learning engines for maximum conversation accuracy with little upfront training…

NLP Settings and Guidelines

This article provides some essential guidelines to help you optimize your workflow with the XO Platform’s NLP and thus improve your VA’s performance. For this purpose, we also recommend that you read NLP Overview and Optimizing NLP. Intent Naming Guidelines: follow the guidelines below when naming your task…

The Machine Learning Engine

To train the machine learning model, developers need to provide sample utterances for each intent (task) the assistant needs to identify. The Kore.ai XO Platform Machine Learning (ML) engine builds a model that tries to map a user utterance to one of the VA’s intents. Kore.ai’s XO Platform…
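The sketch below shows the general idea in miniature: train a text classifier on a few sample utterances per intent, then map a new utterance to the most probable intent. scikit-learn stands in for the platform's internal ML engine, and the intents and utterances are invented for the example.

```python
# Minimal sketch of "sample utterances per intent -> intent classifier".
from sklearn.pipeline import make_pipeline
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

training_data = {  # hypothetical intents and sample utterances
    "BookFlight": ["book a flight", "I need a plane ticket", "fly me to Boston"],
    "CancelBooking": ["cancel my booking", "drop the reservation", "I want to cancel"],
}
utterances = [u for samples in training_data.values() for u in samples]
labels = [intent for intent, samples in training_data.items() for _ in samples]

model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(utterances, labels)

# Score a new utterance against every known intent; the highest probability
# would be the ML engine's candidate match.
probs = model.predict_proba(["please cancel my flight booking"])[0]
print(dict(zip(model.classes_, probs)))
```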