Task Execution Logs

The Task Execution Logs feature helps you gain in-depth insights into task execution data and assess your virtual assistant’s performance in executing tasks. The Analyze > Task Execution Logs page shows information specific to task execution in the following sections:

  • Failed Task: Indicates the number of unsuccessful tasks.
  • API Calls: Displays execution data for all Service node and Webhook node calls, and the number of failed services during bot interactions.
  • Script Execution: Displays analytics data for all Script node executions and the number of failed scripts during bot interactions.
  • Debug Log: Custom debug logs, including user conversations from across all channels, for analyzing your VA.
  • Pinned: Task Execution Logs records that are pinned to highlight them for easy access and viewing.

Task Execution Logs Fields

The Analyze > Task Execution Logs page displays the following fields specific to task execution:

Failed Task

When user utterances are successfully mapped to an intent but the task cannot be completed for some reason, those utterances are listed under this tab. You can group them by task and failure type to analyze and resolve issues with the VA.

See the following table and the Features section to know more:

Failed Task – Type of Issues

Different types of issues that occur during a Failed Task are listed as follows:

  • Task aborted by user
  • Alternate task initiated
  • Chat Interface refreshed
  • Human agent transfer
  • Authorization attempt failure – Max attempts reached
  • Incorrect entity failure – Max attempts reached
  • Script failure
  • Service failure
  • Inactivity or External Events (from ver 8.0) – the conversation session, and as a result the in-progress task, is closed due to inactivity or external events.

Description of Failed Task Fields

The following table lists the fields on the Failed Task tab with descriptions:

Fields Description
Utterances The utterances that were successfully mapped to an intent but whose task failed due to an issue. The details in the tab are grouped by similar utterances by default. To turn off grouping by utterance, click the Utterances header and turn off the Group by Utterances option.
Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Failure Point Nodes or points in the task execution journey where the failure occurred, resulting in the task cancellation or user drop. Click an entry to view the complete conversation for that session with markers to identify the intent detection utterance and the failure/drop-out point. Depending on the task type, clicking Failure Point shows more details.
Type of Issue Shows the reason for failure in case of Task Failure records.
UserID The UserID of the end user related to the conversation. You can view the metrics based on either the Kore User ID or the Channel User ID.

Channel-specific IDs are shown only for users who have interacted with the VA during the selected period.

Language The language in which the conversation occurred.

If it is a multilingual VA, you can select specific languages to filter the conversations that occurred in those languages. By default, the page shows conversations in all enabled languages.
Date & Time The date and time of the chat. You can sort the data by either Newest to Oldest or Oldest to Newest.

Performance

Developers can monitor all the scripts and API services across the VA’s tasks from a single window. The Performance tab displays information about the VA’s backend performance in two sections: API Calls and Script Execution. The platform stores the following meta-information:

API Calls

The API Calls section provides information on API call execution performance for Service and Webhook nodes based on the following metrics:

  • Node name, Type, and task name.
  • Success %
  • Channel
  • The total number of calls with a 200 (success) response and the total number of calls with a non-200 (failure) response. You can view the actual response code on the details page that opens when you click the service row (see the calculation sketch after this list).
  • Average Response times
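
The dashboard derives these values from the recorded call results. As a rough, hypothetical illustration only (not the platform’s internal implementation), the sketch below shows how Success%, the 2XX/non-2XX counts, and the average response time relate to a set of call records; the sample data and field names are invented:

  // Hypothetical call records for one Service/Webhook node (illustration only).
  const calls = [
    { statusCode: 200, responseTimeMs: 320 },
    { statusCode: 201, responseTimeMs: 450 },
    { statusCode: 503, responseTimeMs: 1200 },
    { statusCode: 404, responseTimeMs: 180 }
  ];

  const is2xx = (code) => code >= 200 && code < 300;
  const twoXxCount = calls.filter((c) => is2xx(c.statusCode)).length;    // 2XX Responses
  const nonTwoXxCount = calls.length - twoXxCount;                       // Non 2XX Responses
  const successRate = (twoXxCount / calls.length) * 100;                 // Success%
  const avgResponseTime =
    calls.reduce((sum, c) => sum + c.responseTimeMs, 0) / calls.length;  // Avg Response Time

  console.log(`2XX: ${twoXxCount}, Non-2XX: ${nonTwoXxCount}`);
  console.log(`Success%: ${successRate}%, Avg Response Time: ${avgResponseTime} ms`);

With this sample data, the node would show a 50% success rate and an average response time of 537.5 ms.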

Description of the API Calls Fields

The following table lists the fields in the API Calls section with their descriptions:

Fields Description
Node Name The name of the service, script, or Webhook node within the task that was executed in response to the user utterance. The Node Name column displays the names of the Preprocessor script node, API call, and Postprocessor script node.

To turn on grouping by components to which the Webhook services belong, click the Node Name header and turn on the Group by NodeName option. 

To turn on grouping by Pre and Postprocessor scripts for a Service Node, click the Node Name header and enable the Group by NodeName (PreProcessor or PostProcessor Script) option.

Type Shows whether the node is a script, service, or Webhook.

Webhook details are included from ver 7.0.

Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Success% The percentage of successful service or script node task executions.
Channel The channel on which the Service/Webhook task execution is performed.
2XX Responses The percentage of the service or script runs that returned a 2xx API response.
Non 2XX Responses The percentage of the service or script runs that returned a non-2xx API response.
Avg Response Time The average response time of the script or service in the total number of runs.

For a Service node, this value displays the total execution time for the Preprocessor script, API, and Postprocessor script calls.

Note: The execution times for each of these calls are maintained separately in the backend.

This can be sorted from High to Low or Low to High under the Performance tab.

Status Code Filter service executions based on the status codes. From the More Filters drop-down menu > Status Code, you can choose one or more status codes:

  • Success status code: 200 
  • Non-success status code: 304, 400, 401, 403, 404, 408, 409, 500, 502, 503, and 504

Script Execution

The Script Execution section provides information on the VA’s script execution performance based on the following metrics:

  • Node name and task name
  • Success %
  • Channel
  • Average Response Times
  • Appropriate alerts if a script or a service is failing consecutively.

Description of the Script Execution Fields

The following table lists the fields in the Script Execution section with their descriptions:

 

Fields Description
Node Name The name of the service, script, or Webhook node within the task that was executed in response to the user utterance. The Node Name displays the names of the Preprocessor, Service, and Postprocessor script nodes.

To turn on grouping by components to which the Webhook services belong, click the Node Name header and turn on the Group by NodeName option. 

To turn on grouping by Pre and Postprocessor scripts for a Service Node, click the Node Name header and enable the Group by NodeName (PreProcessor Script or PostProcessor Script) option.

Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Success% The percentage of the service or script runs that executed successfully.
Channel The channel on which the Script task execution is performed.
Avg Response Time The average response time of the script or service in the total number of runs.

For a Service node, this value displays the total execution time for the Preprocessor, Service node, and Postprocessor script calls.

Note: The execution times for each of these calls are maintained separately in the backend.

This can be sorted from High to Low or Low to High under the Performance tab.

Script Performance

On the Performance Dashboard, the service node’s script execution details are displayed in the Script Execution Rate section. Learn more.

Additionally, the Script Performance section displays information on the Service Node execution. Learn more.

Debug Log

Any custom debug statements that you enter in the Script node using koreDebugger.log("<debug statement>") are displayed on this tab. Debug statements must be in string format; a minimal usage sketch follows this paragraph. See the Description of Debug Log Fields table to know more.
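
For instance, a Script node might emit debug statements like the ones below. Because debug statements must be strings, non-string values are converted with JSON.stringify before logging; the variable names and values are purely illustrative:

  // Inside a Script node: log plain-string debug statements.
  koreDebugger.log("Entered the booking-confirmation script node");

  // Convert non-string values to strings before logging (illustrative data).
  var bookingSummary = { flights: 2, totalFare: 420.5 };
  koreDebugger.log("Booking summary: " + JSON.stringify(bookingSummary));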

The logs include the user conversation from across all channels. You can use them for bot analysis, especially in case of failures during user interaction.

The details include the following:

  • The actual statement that you defined at the time of bot definition.
  • Date and time of logging
  • Channel
  • User ID (along with channel-specific id)
  • Language of interaction
  • Task name, if available

You can also view the chat history associated with the session. To view more details, follow the steps below:

  1. Click a logged record.
  2. In the window that opens, you can find the Details and Chat History tabs.
  3. Under the Details tab, you can find the task name, channel, language, and flow.
  4. Click the Chat History tab. You can find the chat transcript where the log is recorded.
    • If the debug log is generated from a VA message, you are navigated to that specific message in the chat transcript.
    • If the debug log is not part of the VA message, you are navigated to the latest message added before the debug statement.

For universal VAs, the debug statements from the universal and linked assistants are included in the logs. The debug logs also include the error messages related to BotKit, for example, when the platform could not reach the BotKit or when the BotKit did not acknowledge the message sent by the platform. The message includes details like the <endpoint>, <error code>, and <response time>.

Description of Debug Log Fields

The following table lists the fields on the Debug Log tab with descriptions:

Fields Description
Log Description of the debug log. For example, getIndex is not defined.
Task Name The task that is identified for the user utterance. To turn on grouping by task name, click the Task Name header and enable the Group by Task option.
Debug Point The point or node in the conversation where the error is identified. For example, buildDataForCarousel.
Channel The channel where the conversation occurred.
Language The language in which the conversation occurred.

If it is a multilingual VA, you can select specific languages to filter the conversations that occurred in those languages. By default, the page shows conversations in all enabled languages.
UserID The UserID of the end user related to the conversation. You can view the metrics based on either the Kore User ID or the Channel User ID.

Channel-specific IDs are shown only for users who have interacted with the VA during the selected period.

Date & Time The date and time of the chat. You can sort the data by either Newest to Oldest or Oldest to Newest.

Pinned

Any records that you pin from the Failed Tasks, API Calls, and Script Execution sections are displayed here. The Pinned tab shows the same fields as those sections, but only for the pinned Task Execution Logs records. Learn more.

Storage Limitations

The platform imposes restrictions on the number of log statements retained per VA. The limit is a combination of volume and retention period, as illustrated in the sketch after the list below:

  • Only the latest 700 statements per VA are stored.
  • Statements older than 7 days are removed.
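
As a rough sketch of this retention rule only (not the platform’s actual implementation), the logic is equivalent to dropping statements older than 7 days and then trimming the result to the newest 700:

  // Illustration of the stated retention policy: latest 700 statements, at most 7 days old.
  // Assumes each statement looks like { timestamp: <ms since epoch>, ... } (hypothetical shape).
  const SEVEN_DAYS_MS = 7 * 24 * 60 * 60 * 1000;
  const MAX_STATEMENTS = 700;

  function retainDebugStatements(statements, now = Date.now()) {
    return statements
      .filter((s) => now - s.timestamp <= SEVEN_DAYS_MS)  // drop statements older than 7 days
      .sort((a, b) => b.timestamp - a.timestamp)          // newest first
      .slice(0, MAX_STATEMENTS);                          // keep only the latest 700
  }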

Task Execution Logs Analysis

The following sections describe the options available on the Task Execution Logs page and the analysis of the records captured there.

Features

The following list details the features available for Task Execution Logs, including Failed Tasks, API Calls, Script Execution, and Debug Log.

  • You can filter the information based on various criteria such as user utterances, intent, user ID (Kore user ID or channel-specific unique ID), date period, channel, and language. You can also filter records based on multiple custom tags. See Filter Criteria to know more.
  • Complete meta information is stored for later analysis, including the original user utterance, the channel of communication, entities extracted (if any), custom tags applied, and detailed Task Execution Logs.
  • Any important record that you want to mark or track later can be pinned; pinned records appear on the Pinned tab.
  • Sorting is available for Date & Time (Oldest to Newest, Newest to Oldest). You can export the insights data as a CSV file.

Fields Matrix

The following matrix shows the availability of fields on each tab of the Task Execution Log dashboard:

Field Failed Tasks API Calls Script Execution Pinned Debug Log
Utterances X X X
Intent X X X X
Traits X X X X X
UserID X X
Language X X
Date & Time X X
Prompt Type X X X X X
Task Name X
Node Name X X X
Failure Point X X X X
Type of Issue X X X
Type X X X
Total Runs X X X
Success% X X X
2XX Responses X X X
Non 2XX Responses X X X
Avg Response Time X X X
Log X X X X
Debug Point X X X X
Channel X X X X
