Kore.ai’s NLP engine breaks conversations down to their essence, identifying and following up on multiple action items, or intents, within a single message. The virtual assistant can then execute the corresponding tasks in a sequential and logical manner.
You can allow the NLP engine to detect and execute multiple intents identified in a single user utterance. This setting is disabled by default; you can enable it at any time. Once enabled, the NLP engine identifies multiple intents in a user utterance based on the presence of keywords such as:
- later
- following that
- second
- then
- before that
- first
- beforehand
- prior to that
The sequence in which these intents are executed is determined by sequencing phrases such as before that, after that, and then in the user utterance. If no order is specified or identified by the NLP engine, the intents are executed in the order in which they appear in the original utterance.
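For illustration only (this is not the Platform’s implementation), the following Python sketch shows how such sequencing phrases could reorder the intents detected in an utterance. The intent names and the RUN_EARLIER mapping are hypothetical:

```python
# Phrases suggesting the attached intent should run before the intents
# mentioned earlier in the utterance (illustrative mapping only).
RUN_EARLIER = {"first", "before that", "beforehand", "prior to that"}

def order_intents(detected):
    """detected: list of (intent_name, sequencing_phrase_or_None) in utterance order."""
    ordered = []
    for intent, phrase in detected:
        if phrase in RUN_EARLIER:
            ordered.insert(0, intent)   # execute before the intents already queued
        else:
            ordered.append(intent)      # default: preserve utterance order
    return ordered

# "Book a cab, but before that check my account balance."
print(order_intents([("Book Cab", None), ("Check Balance", "before that")]))
# -> ['Check Balance', 'Book Cab']
```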
After each intent is executed, the Platform automatically triggers the next intent in the identified order.
If a task fails to execute, the Platform does not initiate the subsequent tasks identified from the utterance.
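The execution behavior can be pictured as a simple loop that stops at the first failure. The sketch below is a simplified illustration; execute_task is a hypothetical callable standing in for the actual dialog task execution:

```python
def run_in_sequence(ordered_intents, execute_task):
    """Execute intents one by one; stop as soon as a task fails.

    execute_task is a hypothetical callable that returns True on success.
    """
    completed = []
    for intent in ordered_intents:
        if not execute_task(intent):
            # A failed task halts the remaining intents from the same utterance.
            print(f"'{intent}' failed; skipping {ordered_intents[len(completed) + 1:]}")
            break
        completed.append(intent)
    return completed

# Example: the second task fails, so the third is never triggered.
results = {"Check Balance": True, "Book Cab": False, "Order Food": True}
run_in_sequence(["Check Balance", "Book Cab", "Order Food"], lambda name: results[name])
# prints: 'Book Cab' failed; skipping ['Order Food']
# returns: ['Check Balance']
```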
Following are the steps to enable multi-intent detection:
1. Open the VA for which you want to enable multi-intent detection.
2. Select the Build tab from the top menu.
3. From the left menu, click Intelligence > Multi Intent Detection.
4. Turn on the toggle to enable the feature.