Effective context management allows bots to interact with users in a way that is easier, quicker, and more helpful, and less robotic and scripted. Contextual data helps users complete tasks faster and enables more natural, human-like back-and-forth conversations.
Take for example the following conversation:
User: What are the annual charges for a credit card?
Bot: First year is free and after that, it's $xxx
User: Sounds great, I would like to apply for one.
In the above conversation, “apply” is in the context of “credit card”. The bot should not ask the user whether they would like to apply for a credit card or a debit card. The context from the previous intent, FAQ – credit card annual charges, should be passed to the intent Apply for a card.
Kore.ai Bots Platform allows you to capture and reuse contextual data for a large variety of scenarios, so you can create more complex use cases and redefine the enterprise customer experience. The following are examples of a few such scenarios:
- Sharing context across intents and FAQs: As seen in the above example, maintaining context across all intents, i.e. dialog tasks and FAQs, makes it easy to customize the user experience
- Context-driven FAQs: Certain intents (tasks or FAQs) can be made available only when certain other intents (tasks or FAQs) are in the context.
Example: FAQ intent ‘What are the meal options available’ should be available only when ‘Book a flight’ task is in the context
- Follow-up Intent: Context of the current intent can be used to identify subsequent intents from the user utterances.
Example: The user utterance ‘what are the charges’ should be responded to using the FAQ intent ‘what are the charges for Platinum credit card’ if the user’s previous intent was ‘what are the benefits of Platinum credit card’
- Sharing Entity Values across Intents: Entity values or conversation flows can be driven using the previous intent’s context information.
Example: ‘City Name’ entity in ‘Check weather’ intent can be pre-populated if the user has executed ‘Check flight status’ intent and has provided value for ‘Destination City’ entity.
This document discusses the concepts behind the implementation of context management in the Kore.ai Bots platform. For a detailed step-by-step example, refer here.
Consider another conversation that spans multiple intents:
User: When is my flight to Singapore?
Bot: Your flight from New York to Singapore is confirmed for Jun 20th.
User: Do I need a Visa?
Bot: Yes, you need a visa to visit Singapore for business or tourism
User: I would like to apply for one
Bot: Sure I can help with Visa to Singapore. Let me know the duration of the stay
To achieve the above conversation the context object can be used as follows:
- Flight Booking Enquiry would emit the destination city entity value
- Visa FAQ would use the entity value emitted by the Booking Enquiry Intent
- Visa Application would consume the terms Visa and Singapore from the Visa FAQ
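The flow of tags described above can be sketched as follows. This is a hedged illustration only: the object shapes and helper names below are assumptions made for this example, not the platform's actual context-object schema.

```javascript
// Simulated context tag flow across the three intents in the conversation.
// Tags emitted when the "Flight Booking Enquiry" intent runs:
const bookingContext = {
  intent: "Flight Booking Enquiry",
  tags: ["FlightBookingEnquiry", "DestinationCity:Singapore"],
};

// The Visa FAQ picks up the destination emitted by the booking enquiry,
// and in turn emits its own terms for downstream intents:
function visaFaqTags(previousTags) {
  const dest = previousTags
    .find((t) => t.startsWith("DestinationCity:"))
    ?.split(":")[1];
  return ["Visa", dest];
}

// The Visa Application intent consumes the terms emitted by the FAQ:
const faqTags = visaFaqTags(bookingContext.tags);
console.log(faqTags); // ["Visa", "Singapore"]
```

With the terms "Visa" and "Singapore" carried forward this way, the bot can start the visa application without re-asking for the destination.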
The following sections describe how to achieve this scenario.
Context management involves the following steps:
- Output Context to define tags that indicate the current intent being executed
- Intent Preconditions to extract the output context tags for scoping the subsequent intent detection
- Intent Detection Rules for detecting the relevant intents
- Knowledge Collection (KC) – Contextual Intent Detection using context tags to identify the terms/nodes from the FAQs
- Conversation Flows to customize the flows
Context tags can be generated and stored in the context object to be used for managing the Bot behavior and user experience. The platform creates a context object for every user intent, like dialog tasks and FAQs (refer here for more on Context Object).
Default Context Tags:
Intent name, Entity names, and FAQ Term/Node names are emitted by default.
Custom Context Tags:
Additionally, the following can be defined to be included in the Context Object:
- Context tags – You can add context tags from the NLP settings for Dialog, Action, Alert, and Info Tasks, and for Entities.
- Entity Value – For each entity node, you can set an option to indicate whether captured entity values should be emitted.
- Use context tags for finding FAQ – You can indicate whether Knowledge Graph paths should be shortlisted using context tags.
Dialog Tasks
The platform supports emitting details of a dialog task when executed by the user:
- Intent name is emitted as a contextual tag for all dialog tasks when the task execution is initiated.
- You can add any additional tags from the NLP Properties tab of the Dialog task.
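The default and custom tags emitted for a dialog task can be sketched as below. The helper function is a hypothetical illustration of the behavior described above, not platform code.

```javascript
// Simulated emission of a dialog task's context tags: the intent name is
// emitted by default when execution starts; any custom tags configured in
// the dialog's NLP Properties tab are emitted alongside it.
function emitDialogTags(intentName, customTags = []) {
  return [intentName, ...customTags];
}

const tags = emitDialogTags("Book a flight", ["Travel"]);
console.log(tags); // ["Book a flight", "Travel"]
```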
Action, Alert and Info Tasks
The platform supports emitting details of the action, alert and info tasks when executed by the user:
- Task name is emitted as a contextual tag for all action, alert and info tasks when the task execution is initiated.
- You can add any additional tags at the time of task creation under More Options, or later from the General Settings.
- You can also emit output context tags from pre-processors or post-processors.
Entities
Entity values captured from end users can be emitted based on:
- The Auto emit the entity values captured switch. Entity value tags will be emitted as shown in the following section.
- You also have the option to add any additional tags.
FAQs
- Node/term name is emitted as a contextual tag for all mandatory and optional terms present in the qualified path when a question is answered.
- You can add any additional tags per term from the Settings page for the term/node.
- You can also emit output context tags from advanced prompts.
Intent Preconditions can be used in defining the intent detection scope for intents and FAQs. These are a set of conditions that need to be fulfilled for the intent/FAQ to be detected and executed.
Dialog Tasks
Intent preconditions for dialog intents can be set to define when a dialog should be detected, i.e. the dialog is made available for detection only when specific tags are available in the context:
- You can add one or more intent pre-conditions for making a dialog intent available
- Dialog intents with preconditions will be detected only if the defined preconditions are met
- The intent with a precondition set will be treated as a sub-intent and will be part of the Linked Task Exception behavior from the Dialog level Hold and Resume settings.
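The precondition behavior described above amounts to a simple containment check: an intent is eligible for detection only when every required tag is already in the context. The function below is a hedged sketch of that rule; the name and data shapes are assumptions for illustration.

```javascript
// Simulated intent-precondition check: the intent qualifies for detection
// only if all of its precondition tags are present in the context object.
function isIntentAvailable(preconditions, contextTags) {
  return preconditions.every((tag) => contextTags.includes(tag));
}

const contextTags = ["Book a flight", "DestinationCity"];
console.log(isIntentAvailable(["Book a flight"], contextTags));   // true
console.log(isIntentAvailable(["Apply for card"], contextTags));  // false
```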
Action, Alert and Info Tasks
Intent preconditions for action, alert and info tasks can be set to define when a task should be detected i.e. making a task available for detection only when specific tags are available in the context
- You can add one or more intent pre-conditions for making a task intent available
- Task intents with preconditions will be detected only if the defined preconditions are met
Knowledge Collection
Intent preconditions for the Knowledge Collection can be associated with terms.
Contextual intent detection helps in detecting relevant intents using the output context set by previously executed intents.
Dialog, Action, Alert and Info Tasks
You can define ‘Rules’ for identifying contextually relevant intents by using output context tags in the same way as traits (refer here for more).
The platform consumes the output context tags and uses them to improve intent detection in the KG engine, based on a flag set by the developer. This flag ensures that the context tags are used to qualify paths in the Knowledge Collection. Context tags are used to extract terms, and these terms are combined with any other terms present in the user utterance. The consolidated list of terms is used to qualify the path.
You can set this configuration as follows:
- From the left navigation, go to Natural Language -> Training and select Thresholds & Configurations
- Click Knowledge Collection
- Locate Qualify Contextual Paths and set it to Yes
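The path-qualification behavior described above can be sketched as follows. Everything here is an illustrative simulation under assumed data shapes, not the KG engine's internals: terms extracted from context tags are combined with the utterance's terms, and the consolidated list is used to qualify Knowledge Collection paths.

```javascript
// Simulated "Qualify Contextual Paths" behavior: a path qualifies when all
// of its terms are covered by the combined utterance + context terms.
function qualifyPaths(paths, utteranceTerms, contextTerms) {
  const allTerms = new Set(
    [...utteranceTerms, ...contextTerms].map((t) => t.toLowerCase())
  );
  return paths.filter((p) =>
    p.terms.every((t) => allTerms.has(t.toLowerCase()))
  );
}

const paths = [
  { answer: "Charges for Platinum credit card", terms: ["charges", "platinum"] },
  { answer: "Charges for wire transfer", terms: ["charges", "wire"] },
];

// Utterance alone ("what are the charges") is ambiguous; the "Platinum"
// context tag disambiguates it to the credit-card FAQ.
const matched = qualifyPaths(paths, ["charges"], ["Platinum"]);
console.log(matched.map((p) => p.answer)); // ["Charges for Platinum credit card"]
```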
The context tags available in the context object can be used to customize the conversation flows.
These can be used:
- to pre-populate entity values
- to define transition conditions
- for custom conversation flows, using scripts to access the tags from the current context and from the previous context
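As an illustration of pre-populating an entity from context tags, the sketch below simulates a context object holding tags from the current and previous intents. The object shape shown is an assumption for this example; consult the platform's Context Object reference for the real structure.

```javascript
// Simulated context object with tags from the current intent and from a
// previously executed intent (shape assumed for illustration only).
const context = {
  currentTags: { tags: ["Check weather"] },
  historicTags: [
    { tags: ["Check flight status", "DestinationCity:Singapore"] },
  ],
};

// Pre-populate the "City Name" entity of "Check weather" from the
// destination captured by the earlier "Check flight status" intent:
const prevTags = context.historicTags[0].tags;
const city = prevTags
  .find((t) => t.startsWith("DestinationCity:"))
  ?.split(":")[1];
console.log(city); // "Singapore"
```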