Kore.ai records and presents all of this information in the Analyze section, where developers can gain in-depth insights into their bot’s performance in identifying and executing tasks. It lets you review user utterances that matched an intent as well as those that did not.
The Analyze > NLP Metrics page contains the following sections:
- Intent Found: Contains all the user utterances that were successfully mapped to a trained intent, including the dialog tasks triggered by KG intents. The utterances are grouped based on similarity.
- You can filter the information based on various criteria such as User Utterances, Intent, user (Kore user id or channel-specific unique id), date-period, channel of use, and language. Records can also be filtered on multiple custom tags.
- Complete meta information is stored for later analysis including the original user utterance, the channel of communication, entities extracted if any, custom tags applied, detailed NLP analysis with scores returned from each engine, and the ranking and resolver scores.
- Ability to view the chat transcript to the point of the user utterance. This also gives the option to view the user profile and the details for that user’s conversation sessions.
- You have the option to train the utterance; once trained, the utterance is marked as such.
- Any important record you want to mark and/or track later can be pinned. These will appear in the Pinned tab.
- Intent Not Found: Contains all the user utterances that the platform was not able to map to a bot intent/FAQ. These are grouped based on similarity so that the developer can prioritize training based on occurrence count.
- You can filter the information based on various criteria such as user (Kore user id or channel-specific unique id), date-period, channel of use, and language. Records can also be filtered on multiple custom tags.
- Complete meta information is stored for later analysis including the original user utterance, the channel of communication, system entities extracted if any, custom tags applied, detailed NLP analysis with scores returned from each engine, and the ranking and resolver scores.
- Ability to view the chat transcript to the point of the user utterance. This also gives the option to view the user profile and the details for that user’s conversation sessions.
- You have the option to train the utterance; once trained, the utterance is marked as such. You can also filter based on trained/untrained utterances.
- Any important record you want to mark and/or track later can be pinned. These will appear in the Pinned tab.
- Failed Tasks: All the user utterances for which an intent was successfully identified but the task could not be completed are listed in this section. You can group them by task and failure type to analyze and resolve issues with the bot.
- The supported platform failure types are:
- Task aborted by user
- Alternate task initiated
- Chat Interface refreshed
- Human-agent transfer
- Authorization attempt failure – Max attempts reached
- Incorrect entity failure – Max attempts reached
- Script failure
- Service failure
- You can filter information based on various criteria such as task name, user (Kore user id or channel-specific unique id), date-period, and language.
- Complete meta information is stored for later analysis including the original user utterance, the channel of communication, system entities extracted if any, custom tags applied, detailed NLP analysis with scores returned from each engine, and the ranking and resolver scores.
- Ability to view the chat transcript to the point of the user utterance. This also gives the option to view the user profile and the details for that user’s conversation sessions.
- Any important record you want to mark and/or track later can be pinned. These will appear in the Pinned tab.
- Performance: Developers can monitor all the scripts and API services across the bot tasks from a single window. The platform stores the following meta-information:
- Node name, type and task name
- Total number of runs
- Success %
- The total number of calls with a 2xx response and the total number of calls with a non-2xx response. The actual response code can be viewed from the details page, which opens when the service row is clicked.
- Average Response times
- Appropriate alerts if a script or a service is failing consecutively
- Pinned: You can view all the records that you have marked as important or pinned from the identified/unidentified intents and failed tasks in this tab.
To open the Metrics page, from the top menu select Analyze > NLP Metrics.
To facilitate an easier review of the Bot’s performance, the end-user utterances are grouped based on similarity.
Filter Criteria
You can filter the information on the Metrics page using the following criteria. You can save the entered filter criteria and set it as the default filter using the Save as Default Filter option.
Criteria | Description |
---|---|
User ID | The UserID of the end-user related to the conversation. You can choose to filter based on either the Kore user id or the channel-specific unique id. You can select the user id from the drop-down that is presented once you have entered the first three characters of the user id. You can choose to either Include or Exclude the selected user id. Note: Channel-specific ids are shown only for the users who have interacted with the bot during the selected period. |
Date period | The page shows the conversations from the last 7 days by default. To filter the conversations to just the ones from the last 24 hours, click 24 Hrs. To switch back to the sessions from the last 7 days, click Last 7 days. You can also set a custom time period by specifying From and To dates and times (time selection added in ver 7.3). |
Languages | If it is a multi-language bot, you can select specific languages to filter the conversation that occurred in those languages. The page shows the conversations that occurred in all enabled languages by default. Not applicable to the Performance tab. |
Channels | Select specific channels to filter the conversation that occurred in those channels. The page shows the conversations that occurred in all enabled channels by default. |
Task/Intent | Select specific tasks or intents to filter the conversation related to those tasks or intents. The page shows the conversations related to all tasks or intents by default. Not applicable in the Intent Not Found tab. |
Utterance Type | Select the Trained option to filter the conversations that only contained trained utterances to the bot. To view the conversations that involved untrained utterances, click Not Trained. The page shows the conversations related to both by default. Applicable only to the Intent Found tab. |
Ambiguous | Select the Show Ambiguous option to filter the conversations where multiple tasks or intents were identified and the user was asked to choose from the presented options. Available only on the Intent Not Found tab. |
Developer Interactions | Select Include Developer Interactions if you want to include developer interactions in the results. By default, the developer interactions aren’t included. Developers include both the bot owner and shared developers. |
Custom Tags | Select the specific custom tags to filter the records based on the meta-information, session data, and filter criteria. You can add these tags at three levels: user, session, and message. You can set the criteria as either Contains or Does Not Contain the specified value. Not available on the Debug Logs tab. You can define tags as key-value pairs from script written anywhere in the application, such as a Script node, message, entity, confirmation prompt, error prompt, Knowledge Graph response, or the BotKit SDK (see the example after this table). |
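For illustration, the following is a minimal sketch of how such key-value tags might be added from a Script node. It assumes the platform exposes a tags helper with addUserLevelTag, addSessionLevelTag, and addMessageLevelTag methods in the script context; the tag keys and values shown are hypothetical, so verify the exact helper names against the Custom Meta Tags documentation.

```javascript
// Minimal sketch (assumed helper names): add custom meta tags from a Script node.
// All tag keys and values below are hypothetical examples.

// Tag the user, e.g., with a loyalty tier derived from your own logic.
tags.addUserLevelTag("loyaltyTier", "gold");

// Tag the current session, e.g., with a campaign code.
tags.addSessionLevelTag("campaign", "spring-promo");

// Tag the current message so it can be filtered later in NLP Metrics.
tags.addMessageLevelTag("paymentFlow", "cardDeclined");
```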
Identified and Unidentified Intents
The primary details, filter criteria, and advanced details for both the Intent Found and Intent Not Found tabs are similar, with minor differences. You can also train the bot for any utterance directly from these tabs.
Primary Details
Field | Description |
---|---|
Utterances | The actual utterance entered by the user. The details in the tab are grouped by utterances by default. To turn off grouping by utterance, click the Utterances header and turn off the Group by Utterances option. |
Intent (applies only to the Intent Found tab) | The intent that was identified for the user utterance. You can take a look at the identified intent and the user utterance to determine if they are the right match. If not, you can train the bot from here. To turn on grouping by intent, click the Intent header and turn on the Group by Task option. |
Traits | All the traits identified for the listed utterances. Note: This information is available for analytics generated after June 1st, 2021. |
UserID | The UserID of the end user related to the conversation. You can choose to display the metrics based on either the Kore user id or the channel-specific unique id. Note: Channel-specific ids are shown only for the users who have interacted with the bot during the selected period. |
Language | The language in which the conversation occurred. |
Date & Time | The date and time of the chat. |
Training the Bot
You can train an intent from both the Intent Found and Intent Not Found tabs. To do so, hover over a row in any of these tabs, and click the Train icon. It opens the Test & Train page from where you can train the bot. For more information, read Testing and Training a Bot.
Failed Tasks
The Failed Tasks tab shows the following details related to the task that was identified but failed to execute for any reason:
Field | Description |
---|---|
Utterances | The actual utterance entered by the user. The details in the tab are grouped by utterances by default. To turn off grouping by utterance, click the Utterances header and turn off the Group by Utterances option. |
Task Name | The task that was identified for the user utterance. To turn on grouping by task name, click the Task Name header and turn on the Group by Task option. |
Failure Point | The node or point in the task execution journey where the failure occurred, resulting in task cancellation or user drop-off. Click an entry to view the complete conversation for that session, with markers to identify the intent detection utterance and the failure/drop-out point. Depending on the task type, clicking the Failure Point shows more details. |
Type of Issue | Shows one of the platform failure types listed above (for example, Task aborted by user or Service failure) as the reason for failure. |
User ID | The UserID of the end-user related to the conversation. You can choose to display the metrics based on either the Kore user id or the channel-specific unique id. Note: Channel-specific ids are shown only for the users who have interacted with the bot during the selected period. |
Language | The language in which the conversation occurred. |
Date & Time | The date and time of the chat. |
Pinned
Any records that you have pinned from the Identified and Unidentified Intents or Failed Tasks tabs are displayed in the Pinned tab. The following details related to the task/intent are displayed:
Field | Description |
---|---|
Utterances | The actual utterance entered by the user. The details in the tab are grouped by utterances by default. To turn off grouping by utterance, click the Utterances header and turn off the Group by Utterances option. |
Intent | The intent/task that was identified/failed. |
Type of Issue | Shows the reason for failure in case of Task Failure records, as mentioned in the section above. |
User ID | The UserID of the end-user related to the conversation. You can choose to display the metrics based on either the Kore user id or the channel-specific unique id. Note: Channel-specific ids are shown only for the users who have interacted with the bot during the selected period. |
Language | The language in which the conversation occurred. |
Date & Time | The date and time of the chat. |
Advanced View
For all the user utterances listed under the Intent Found, Intent Not Found, Failed Tasks, and Pinned tabs, you can open advanced details related to the user session with the following sub-tabs:
- Details: This shows the basic details of the session along with a JSON file that includes the NLP analysis for the conversation.
- NLP Analysis: Provides a visual representation of the NLP analysis including intent scoring and selection. For more information, read Testing and Training a Bot.
- Chat history: Directs you to the exact message or conversation for which the record is logged and shows the entire chat history of the user session.
Chat History
Chat History provides visibility into the user information and includes the following functionality:
- User Profile: Provides a 360-degree view of the user along with their usage metrics.
- User Conversation Sessions: Lists out all the sessions of the user in the given time period with the selected utterance section expanded.
- Go to Selected Utterance: The selected utterance is highlighted in orange.
Following are the user information details provided:
| Functionality | Attribute | Description |
|---|---|---|
| User Profile | Kore User ID | User id assigned by the platform. |
| | Channel Data | Data received from the channel, that is, the information available in the User Context. |
| | User Meta Tags | The total number of meta tags associated with the user and key-value pairs for the most recent ones. |
| | Latest Interaction | The last time the user interacted with the bot. |
| | Total Conversation Sessions | The total number of interactive and non-interactive sessions registered by the user since their first interaction. |
| | Total Conversation Sessions in the Last 30 Days | The total number of interactive and non-interactive sessions registered by the user in the last 30 days. |
| | *The following attributes are not displayed if the user had no interactions in the last 30 days. | |
| | Last 30 Days’ Intent Detection Rate | (Total identified intents / (Total identified intents + unidentified utterances)) * 100 for the utterances over the last 30 days. See the worked example after this table. |
| | Intents Requested | Total identified intents + unidentified utterances. |
| | Intents Identified | Total intents identified. |
| | Last 30 Days’ Goal Completion Rate | (Total success tasks / (Total success tasks + total failed tasks)) * 100 for the tasks over the last 30 days. See the worked example after this table. |
| | Tasks Initiated | Total success tasks + total failed tasks. |
| | Tasks Completed | Tasks successfully completed. |
| | Recent Conversation Flows | The top 10 popular conversation flows executed by the user in the last 30 days. Popular flows are determined by the number of instances that the flow was executed. |
| User Conversation Sessions | Session Attributes | |
| | Session Start | Session start date and time. |
| | Session End | Session end date and time. |
| | Channel | Channel in which the session was initiated. |
| | Agent Transfer Tag | Indicates a session in which the user was transferred to an agent. The tag applies even if the user returned to the bot after the transfer. |
| | Drop Off Tag | Indicates a session in which the user dropped off. |
| | Total Success Tasks | Count of tasks successfully completed in the session. |
| | Total Failed Tasks | Count of tasks failed in the session. |
| | Intents Identified | Count of intents successfully identified in the session. |
| | Intents Not Identified | Count of intents unidentified in the session and the list of unidentified intents. |
| | Conversation Path | The series of tasks initiated by the user in the session. |
| | Session Meta Tags | Count of the session meta tags used, with the details of the most recent custom meta tags displayed. |
| | Conversation Transcript | |
| | Message Meta Tags | The chat transcript is annotated with message tags for messages that have meta tags associated with them. |
| | Agent Transfer | Indicates the point of agent transfer at the last message prior to the transfer. |
| | Drop Off | Indicates the point of drop-off at the last message prior to dropping off. |
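To make the two rate formulas above concrete, here is a small worked example with hypothetical numbers; the variable names are illustrative only.

```javascript
// Hypothetical 30-day figures for one user, used only to illustrate the formulas above.
const intentsIdentified = 45;      // total identified intents
const unidentifiedUtterances = 5;  // utterances that matched no intent
const successTasks = 30;           // tasks completed successfully
const failedTasks = 10;            // tasks that failed

// Intent Detection Rate = identified / (identified + unidentified) * 100
const intentDetectionRate =
  (intentsIdentified / (intentsIdentified + unidentifiedUtterances)) * 100; // 45 / 50 * 100 = 90

// Goal Completion Rate = successful / (successful + failed) * 100
const goalCompletionRate =
  (successTasks / (successTasks + failedTasks)) * 100; // 30 / 40 * 100 = 75

console.log(intentDetectionRate, goalCompletionRate); // 90 75
```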
Performance
The Performance tab shows the following information related to the backend performance of the bot:
Field | Description |
---|---|
Node Name | The name of the service or script or WebHook within the task that got executed in response to the user utterance. To turn on grouping by components to which these scripts or services belong, click the Node Name header and turn on the Group by Component option. |
Type | Shows whether it is a script or service or WebHook. Note: WebHook details are included from ver 7.0. |
Task | The task that was identified for the user utterance. To turn on grouping by task name, click the Task Name header and turn on the Group by Task option. |
Total Runs | The total number of times within the date period that the script or service was run for any user utterances. |
Success Ratio | The percentage of the service or script runs that got executed successfully. |
2XX Responses | The percentage of the service or script runs that returned a 2xx response. |
Non-2XX Responses | The percentage of the service or script runs that returned a non-2xx response. |
Average Response Time | The average response time of the script or service across all runs. |
Advanced Performance Details
Clicking a service, script, or WebHook name opens an advanced details dialog that lists each instance of its runs, with separate tabs for successful and failed runs. Analyzing the average response time of different runs gives you insights into any aberrations in the service or script execution. Click any row to open the JSON response associated with the service or script run.
Debug Log
Any custom debug statements that you add in a Script node using koreDebugger.log("<debug statement>") are displayed in this tab. The debug statement must be in string format.
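For example, a Script node might emit debug statements along these lines; this is an illustrative sketch, and the flow name, entity reference, and logged values are hypothetical.

```javascript
// Illustrative only: log custom debug statements from a Script node.
// koreDebugger.log expects a string, so serialize objects before logging.
var accountType = context.entities ? context.entities.AccountType : "unknown"; // hypothetical entity
koreDebugger.log("GetBalance flow reached account selection, accountType=" + accountType);

// Objects must be converted to strings first.
koreDebugger.log(JSON.stringify({ step: "accountSelection", accountType: accountType }));
```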
The logs include the user conversation from across all channels. You can use them for bot analysis especially in case of failures during user interaction.
The details include:
- The actual statement that you have defined at the time of Bot definition.
- Date and time of logging
- Channel
- User ID (along with channel-specific id)
- Language of interaction
- Task name if available
- Developer flag – to indicate if the interaction was performed by a developer or end-user.
You can also view the details of the chat history associated with the session. To view more details, follow these steps:
- Click on any log record.
- On the corresponding window, you can find the Details and Chat History tabs.
- Under the Details tab, you can find the task name, channel, language, and flow.
- Click the Chat History tab. You can find the chat transcript where the log is recorded.
- If the debug log is generated from a bot message, you are navigated to that specific message in the chat transcript.
- If the debug log is not part of a bot message, you are navigated to the latest message added before the debug statement.
- For universal bots, the debug statements from both the universal bot and the linked bots are included in the logs.
The debug logs also include the error messages related to BotKit, such as when the platform could not reach the BotKit or when the BotKit did not acknowledge the message sent by the platform. The message includes details like the <endpoint>, <error code>, and <response time>.
Storage Limitations
The platform imposes restrictions on the number of log statements retained per bot. The limit is a combination of volume and period:
- Only the latest 700 statements per bot are stored.
- Statements older than 7 days are removed.
Exporting the Data
You can export the data present on the Bot Analyze page to a CSV file by clicking the Export icon on the top right corner of the page.
Once you click the icon, the export process starts and you can track its progress using the Status Tracker dock. After the export completes, the dock shows the export status and, if successful, provides a link to download the file.
The download includes the information present on the selected tab as well as the detailed analysis based on the selected filters.
These records will also include the Meta Tag information.