This document provides information on the various releases and the corresponding feature updates and enhancements introduced in version 10.1.x of the Platform.
v10.1.24 April 13, 2024
Patch Release
This update includes security updates and bug fixes.
Important Security Update and Potential Impact on NLP for Spanish and Dutch Bots
This release addresses a high-priority security vulnerability. The fix might cause minor disruptions to the NLP functionality of Spanish and Dutch language bots that use the Standard network type. In rare cases, some user utterances might not be interpreted correctly by the bot's intent recognition.
What to do if you use Spanish or Dutch bots: We recommend retraining and republishing your bots. This will ensure your bots continue to function optimally.
We appreciate your cooperation as we prioritize security.
v10.1.23 March 30, 2024
Patch Release
This update includes feature enhancements and bug fixes. Key features and enhancements included in this release are summarized below.
LLM & Generative AI
Introducing Kore.ai XO GPT Module Support for Rephrase Dialog Responses
The XO GPT module now supports the Rephrase Dialog Responses feature. This is in addition to the Rephrase User Query and the Conversation Summary features already supported by the XO GPT module. This enhances the end-user experience with empathetic and contextual responses based on the user’s emotions and conversation context. Learn more.
Web SDK
Enhanced Conversation Session Management for Seamless Conversation Continuity Across Reconnects
The Web SDK now provides comprehensive control over how to connect, reconnect, or refresh the connection with the Platform and how to manage the conversation session behavior. The new `ConnectionMode` property is introduced to streamline this flow. It also addresses previous limitations where the behavior was controlled by the `isReconnect` flag, which could lead to inconsistencies and interruptions in the user experience.
The `ConnectionMode` parameter supports the following four values, each dictating different behaviors for handling conversation sessions during a reconnect or refresh action:
- `Default`: Ensures the continuation of ongoing conversation sessions without triggering the `OnConnect` event, unless there is no active session, in which case a new session is created and the event is triggered.
- `Start_New`: Forces the closure of any ongoing sessions and initiates a new conversation session, always triggering the `OnConnect` event. This applies to both virtual assistant and agent conversations.
- `Start_New_Resume_Agent`: Closes any ongoing virtual assistant conversation session and creates a new one. However, if an agent conversation is ongoing, it is maintained. The appropriate events are emitted for session closure, new session creation, and agent session closure (if applicable).
- `ReConnect`: Keeps the socket connection alive without affecting any ongoing conversations or agent sessions.
For backward compatibility, the Platform continues to support the `isReconnect` flag. However, `ConnectionMode` takes precedence over the existing `isReconnect` flag if both are provided. This ensures that existing integrations remain unaffected while allowing developers to adopt the new behavior without disrupting current functionality.
Additionally, the platform will emit relevant events and log messages in the BotKit (if enabled) to inform developers about session closures, new session creations, and agent session closures. This will allow developers to take appropriate actions, such as closing the agent session in a third-party system if required.
(The latest Web SDK v2.0 is available at https://github.com/Koredotcom/web-kore-sdk/tree/v2/10.1.23/.)
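As a rough illustration, the snippet below shows how the new property might be supplied when initializing the Web SDK. Only the `ConnectionMode` values and the legacy `isReconnect` flag come from the notes above; the surrounding option names and key casing are assumptions, so refer to the SDK repository linked above for the actual configuration shape.

```javascript
// Hypothetical Web SDK initialization sketch. Only the ConnectionMode values
// (Default, Start_New, Start_New_Resume_Agent, ReConnect) and the legacy
// isReconnect flag are taken from the release note; the surrounding option
// names and casing are illustrative assumptions.
var botOptions = {
  koreAPIUrl: "https://bots.kore.ai/api/",                // your Platform host
  botInfo: { name: "MyAssistant", _id: "st-xxxxxxxx" },   // placeholder bot identity
  // Resume the ongoing session if one exists; otherwise start a new one
  // and trigger the OnConnect event.
  connectionMode: "Default"
  // isReconnect: true  // still honored, but connectionMode wins if both are set
};

// On a page refresh you might instead keep the socket alive without touching
// the ongoing assistant or agent conversation:
// botOptions.connectionMode = "ReConnect";
```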
v10.1.22 March 23, 2024
Patch Release
This update includes bug fixes.
v10.1.21 March 10, 2024
Patch Release
This update includes feature enhancements and bug fixes. Key features and enhancements included in this release are summarized below.
LLM & Generative AI
New Usage Analytics Framework for LLM & Generative AI
The XO Platform’s new usage analytics framework offers comprehensive insights into the utilization of Large Language Models (LLMs) and Generative AI features. This robust framework collects, analyzes, and presents comprehensive data on user interactions, request-response dynamics, and payload details. It enables bot designers to track and compare usage across various LLM features and refine prompts and settings to boost performance and user experience.
The Framework provides various easy and flexible ways to access the logs:
LLM and GenAI Logs
The new Analytics feature offers comprehensive insights into LLM requests and responses. It shows granular data such as the features accessing LLMs, response generation times, payload details, token usage, and more.
LLM & Generative AI Payload Details in the Debug Logs
Comprehensive payload details for all runtime features in the Debug Logs empower bot designers to test and refine various properties and settings during design.
LLM & GenAI Usage Logs Public API
Use the new public API to extract the LLM & GenAI Usage Logs and import them into other business applications for reporting and analysis. Learn more.
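As a rough sketch of pulling these logs into another system, the Node.js snippet below issues a filtered request and hands the records to downstream tooling. The endpoint path, header name, request filters, and response shape are placeholders, not the documented API contract; consult the linked API reference for the real definition.

```javascript
// Hypothetical sketch only: the endpoint path, auth header, filters, and
// response fields are placeholders; the real contract is defined in the
// LLM & GenAI Usage Logs API documentation.
const HOST = "https://bots.kore.ai";                   // your Platform host
const BOT_ID = "st-xxxxxxxx";                           // placeholder bot (stream) ID
const JWT = process.env.KORE_JWT;                       // JWT generated for your app

async function fetchUsageLogs() {
  const res = await fetch(`${HOST}/api/public/bot/${BOT_ID}/llm-usage-logs`, { // placeholder path
    method: "POST",
    headers: { auth: JWT, "Content-Type": "application/json" },
    body: JSON.stringify({ dateFrom: "2024-03-01", dateTo: "2024-03-10" })     // placeholder filters
  });
  const data = await res.json();
  // Forward the records to a BI tool, data warehouse, etc.
  console.log(`Fetched ${data.records ? data.records.length : 0} usage-log records`);
}

fetchUsageLogs().catch(console.error);
```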
Import Pre-built Prompts Without Configuring the Associated Models
Creating prompts for Custom LLM integrations is now made easy. New prompts can be created by copying any of the prebuilt system prompts without the need to enable the underlying pre-built integrations. This streamlined process lets bot designers utilize existing prompts to build better, more engaging new prompts.
Bot Export/Import
Incremental Export or Import of NLP Components
The NLP components are further subdivided to provide much more granular control for managing the export and import process. It enables bot designers to deploy changes selectively, saving time and effort.
NLP data supported for incremental import:
- Before this release: NLP Settings, Utterances, Patterns, and Standard Responses
- After this release: NLP Settings, Utterances, Patterns, Standard Responses, Concepts, Synonyms, Traits, and Rules.
Note that the Rules must be exported/imported along with the Dialog Tasks to which they are tagged. Learn more.
Introducing Task Execution-Based Containment Report API
The report provides granular data on task execution results, including success and failure rates, categorized by drop-off, self-service, and agent-handoff scenarios for a specified period. The information helps bot designers gain comprehensive insights into Virtual Assistants’ performance, identify areas for improvement, and optimize users’ overall conversation experience.
Key information provided in the report (a small computation sketch follows this list):
- Task Name: Name of the task executed – Dialog, Alert task, or FAQ.
- Execution Status: Successful task execution is marked as ‘Success’. Any failure in executing the task, including drop-off, is marked as a ‘Failure’.
- Execution Count: Number of times a task is executed (Success or Failure).
- Total Sessions: The overall count of sessions during which the task was executed. The session count will be the aggregation of [Drop-off + Self-Service + Agent Hand-off] sessions.
- Self-Service Sessions: The total number of sessions marked as self-service in which the given task is executed. A session is classified as Self-Service if the conversation ends successfully.
- Drop-off Sessions: Total number of sessions that are marked as Drop-off in which the given task is executed. A session is classified as a Drop-off session if the conversation ends unsuccessfully due to one of the following reasons:
- A user left the conversation midway without completing any task.
- The task ended as a failure.
- The last interaction/message resulted in an ‘intent not identified’ scenario.
- No response from the assistant.
- The user abandons the assistant during a live conversation.
- Agent Transfer Sessions: The total number of sessions marked as Agent Transfer in which the given task is executed. A session is classified as an Agent Transfer session if the conversation leads to an agent transfer in that session.
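To illustrate how these fields combine, here is a small sketch that derives per-task containment percentages from report rows; the JSON property names mirror the fields listed above but are assumptions, not the API's exact response schema.

```javascript
// Sketch: derive containment percentages from Task Execution-Based Containment
// Report rows. Property names mirror the documented fields but are assumptions;
// check the API response schema for the exact names.
const rows = [
  { taskName: "Book Flight",  totalSessions: 200, selfServiceSessions: 140, dropOffSessions: 40, agentTransferSessions: 20 },
  { taskName: "Cancel Order", totalSessions: 80,  selfServiceSessions: 50,  dropOffSessions: 20, agentTransferSessions: 10 }
];

for (const row of rows) {
  // Total Sessions = Drop-off + Self-Service + Agent Hand-off (per the report definition)
  const pct = (n) => ((100 * n) / row.totalSessions).toFixed(1) + "%";
  console.log(
    `${row.taskName}: self-service ${pct(row.selfServiceSessions)}, ` +
    `drop-off ${pct(row.dropOffSessions)}, agent transfer ${pct(row.agentTransferSessions)}`
  );
}
```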
Channels
Configure Display Name for Email Channel
Bot designers can now personalize the Display Name for a configured email address for the Email Channel (Deploy > Channel > Email). Providing the display name ensures that the name linked to the sender's email address is visible to the end user in the sender's email profile. When the display name is not provided, a default display name is used, which combines the sender's name linked to the email address with the text 'Bot (via Kore)'.
v10.1.20 February 24, 2024
Patch Release
This update includes feature enhancements and bug fixes. Key features and enhancements included in this release are summarized below.
LLM & Generative AI
Introducing Custom LLM Integration Support for GenAI Prompt
In addition to pre-built commercial LLMs, the GenAI Prompt now supports Custom LLMs. It empowers bot designers to craft personalized prompts to unlock the full potential of the GenAI Prompt to deliver uniquely tailored conversation experiences for their users. Learn more.
v10.1.19 February 11, 2024
Patch Release
This update includes feature enhancements and bug fixes. Key features and enhancements included in this release are summarized below.
LLM & Generative AI
Introducing Kore.ai XO GPT Models
The new Kore.ai XO GPT Models module provides fine-tuned large language models optimized for enterprise conversational AI applications. These models have been evaluated and fine-tuned to be accurate, safe, and efficient for production deployment. Initial capabilities include Conversation Summarization and User Query Rephrasing. Additional models for features like Intent Resolution, Bot Response Rephrasing, Entity Co-referencing, etc., are planned in future updates.
Key Benefits of Kore.ai XO GPT Models
- Accuracy: The XO GPT module leverages smaller foundation models, typically under 10 billion parameters, that have been fine-tuned specifically for conversational AI applications. By tuning smaller models rather than directly prompting larger generative models, the XO GPT Models achieve better accuracy, relevance, and interpretability for production deployment.
- High Performance: The XO GPT Models are hosted along with the XO Platform and are relatively smaller in size. This results in faster response times, making them suitable for production use cases for digital and voice interactions.
- Accelerate Time-to-Value with Pre-Tuned Models: The Kore.ai XO GPT Models come pre-fine-tuned for conversational AI use cases, eliminating the complex process of prompt engineering required for adopting commercial LLMs. Enterprises can rapidly deploy these models to start realizing value immediately without needing in-house machine learning expertise or long tuning cycles.
- Data Security and Privacy: The Kore.ai XO GPT Models are fully integrated into the XO Platform, enabling the same enterprise-grade data confidentiality, privacy, and governance enforced across the XO stack.
Features Supported by Kore.ai XO GPT
The Kore.ai XO GPT module supports the following features:
- Rephrase User Query: This XO GPT model utilizes the bot domain knowledge and conversation history to expand and rephrase user queries for improved understanding by downstream NLP components. This includes better recognition of contextual intents, entity co-referencing, and more. Learn more.
- Conversation Summary: This model generates concise, natural language summaries of interactions between the virtual assistant, users, and human agents. It distills the key intents, entities, decisions, and outcomes into an easy-to-read synopsis. Companies can leverage conversation summarization to boost agent productivity, ensure process compliance, and create better contextual recommendations – without having to read lengthy transaction histories. It is pre-integrated with Kore.ai’s Contact Center platform. It is also extensible to third-party applications via API integration. Learn more.
An example of the usage: When Conversation Summary is enabled and the conversation is transferred to an agent in SmartAssist, the Conversation Summary is displayed on the Agent Console.
Introducing Custom LLM Integration Support for Conversation Summary
In addition to the Kore.ai XO GPT, the Platform now supports Custom LLMs for generating Conversation Summary. It empowers bot designers to craft personalized prompts, allowing them to leverage their own custom models to generate the summary. Learn more.
Introducing Custom LLM Integration Support for GenAI Node
In addition to pre-built commercial LLMs, the GenAI Node now supports Custom LLMs. It empowers bot designers to craft personalized prompts to unlock the full potential of the GenAI node to deliver uniquely tailored conversation experiences for their users. Learn more.
GPT-4 Support for Zero-shot ML Model
The Platform now supports Azure OpenAI GPT-4 and OpenAI GPT-4 for the Zero-shot ML Model. The model offers enhanced capability and accuracy. Learn more.
Service Node Enhancement
Pre and Post-Processor for Service Node
The Platform now allows defining pre-processors and post-processors within the definition of the Service node, improving execution times and the end-user experience. This also reduces the need to add separate Script nodes, making it easier to manage the dialog definition (a sketch of such scripts follows the list below).
Also, the respective logs and analytics are updated for this change:
- The API Execution time now includes pre-processor and post-processor execution time if defined in the Task Execution Logs > API Calls tab.
- The pre and post-processor scripts executed as part of the service node definition and their execution time are tracked separately in the Task Execution Logs > Script Execution tab so the developer can check and debug the script execution details if needed.
- The Task Execution Logs > Export and the GetAnalytics API response now include the segregation details of the Service node execution time.
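As a rough sketch of what such scripts could look like, assuming the `context` object commonly available to Platform scripts; the property paths and the node name used here are illustrative, not the node's documented fields.

```javascript
// Pre-processor sketch (runs before the service call): shape the request input.
// The context paths below are illustrative assumptions.
context.serviceRequest = {
  customerId: context.session.UserContext ? context.session.UserContext.userId : "anonymous",
  locale: context.session.BotUserSession ? context.session.BotUserSession.lang : "en"
};

// Post-processor sketch (runs after the service call): trim the raw response
// down to only what the dialog needs, so downstream nodes stay simple.
var raw = context.myServiceNode ? context.myServiceNode.response : {}; // "myServiceNode" is a placeholder node name
context.orderSummary = {
  orderId: raw.body && raw.body.order ? raw.body.order.id : null,
  status: raw.body && raw.body.order ? raw.body.order.status : "unknown"
};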
Chat History Enhancement
Add Alternate Text for Bot Messages Written Using JavaScript
The Platform uses a default "JavaScript Message" label in the Chat History for bot messages written using JavaScript. Now, bot designers can add Alternate Text to these messages to explain their purpose more clearly. This Alternate Text is shown along with the JavaScript label in the Chat History across the Platform. Bot designers can include the Alternate Text for a JavaScript message using the function `tags.addAlternateText("value")`.
An example of the usage: For an account creation task, the Platform renders a channel-specific template for the user to select the account type, defined as a JavaScript message for that channel. In this JavaScript message, developers can use the predefined function to generate Alternate Text explaining that the message is used for account type selection. After the user executes the task, the Alternate Text is displayed in the Chat History along with the JavaScript message tag, indicating that this was a JavaScript message used for account selection.
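A brief sketch of this account-type example, assuming a typical channel-specific JavaScript message that emits a template via `print()`; only `tags.addAlternateText()` comes from the release note itself, and the template payload is illustrative.

```javascript
// Channel-specific JavaScript message for account-type selection (illustrative payload).
var message = {
  type: "template",
  payload: {
    template_type: "button",
    text: "Which account type would you like to open?",
    buttons: [
      { type: "postback", title: "Savings",  payload: "savings" },
      { type: "postback", title: "Checking", payload: "checking" }
    ]
  }
};

// Alternate text shown in Chat History next to the "JavaScript Message" label.
tags.addAlternateText("Account type selection prompt");

print(JSON.stringify(message));
```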
Language and Spell Correction Enhancements
Spell Correction Version 2
Version 10.1.19 of the XO Platform includes a new version of the Spell Correction (Version 2) for English that comes with the following advantages:
- Consistent Spell Correction Experience: The spell correction is performed at a central place immediately after identifying the conversation language, unlike the previous version, where spell correction used to happen separately at each NLP engine. This makes the Spell Correction experience consistent across languages.
- Enhanced Accuracy: It checks words against comprehensive dictionaries and uses edit distance to find and rank possible corrections by how common they are in the dictionary (a generic edit-distance sketch follows this list).
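As a generic illustration of the edit-distance idea referenced above, the helper below computes the classic Levenshtein distance between a typed word and a dictionary word; it is a sketch of the concept, not the Platform's internal implementation.

```javascript
// Generic Levenshtein (edit) distance: the number of single-character inserts,
// deletes, or substitutions needed to turn one word into another. Candidates
// with a small distance to a dictionary word are likely spelling corrections.
function editDistance(a, b) {
  const dp = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0))
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      const cost = a[i - 1] === b[j - 1] ? 0 : 1;
      dp[i][j] = Math.min(dp[i - 1][j] + 1, dp[i][j - 1] + 1, dp[i - 1][j - 1] + cost);
    }
  }
  return dp[a.length][b.length];
}

console.log(editDistance("recieve", "receive")); // 2 (a transposition counts as two edits here)
```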
For all the existing bots in which the spell correction is enabled, the Platform will display an upgrade banner on the ‘Thresholds & Configurations’ screen. You can initiate the upgrade from the banner, and the Platform will automatically configure and enable the required settings under ‘Advanced NLP Configurations’. Learn more.
New Language Support – Portuguese (European)
The Platform now supports conversations in the Portuguese (European) Language. Learn more.
v10.1.18 January 21, 2024
Patch Release
This update includes feature enhancements and bug fixes.
Feature | Enhancement |
---|---|
NER Info Details in the Context Object | The NER information is now included in the conversational context. Bot designers can leverage the details populated in the NER info object in node transitions without writing additional conditions. Learn more. |
Add Notes to Batch Testing | The new Notes field allows bot designers to record the purpose or context of each test run. Later, the notes can be used to track or compare why a particular batch test run was implemented. Learn more |
NLP V2 No Longer Supported | NLP Version 2 is no longer supported by the XO Platform. You will no longer be able to downgrade bots from NLP Version 3 to Version 2. Learn more |
v10.1.17 January 07, 2024
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Support for Kerberos SPNEGO Authentication | The XO Platform now enables enterprises to use Kerberos SPNEGO authentication flow to connect securely with their internal applications. Learn more. |
Max Timeout Increased to 30 Seconds in Service Nodes | The maximum timeout range for Service nodes has been increased from 20 to 30 seconds for Enterprise accounts/workspaces. This change applies to new as well as existing Service nodes. Learn more |
v10.1.16 December 16, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
LLM & Generative AI Enhancements | Introducing Custom LLM Framework: The XO Platform now enables enterprises to power up their virtual assistants with any Large Language Model (LLM) of their preference. The bring-your-own (BYO) model framework supports integrations with externally hosted models by third parties as well as models hosted by the enterprises themselves. This generic framework works seamlessly with the Auth Profiles module of the Platform, enabling enterprises to use the authentication mechanism of their choice. Learn more. This release also introduces the Prompts & Requests Library and support for a Post-Processor for Request Prompts. |
GPT-3.5 Turbo and GPT-4 Support for LLM & Generative AI Features | The Platform now supports additional versions of the Azure OpenAI and OpenAI models for various features. These models offer enhanced capability and accuracy. GPT-4 provides a longer context window (8K tokens), enabling it to consider and remember more information. For example, using GPT-4 for the GenAI Node enables you to share the full preceding conversation, leading to improved conversation quality. |
Sunshine Conversations Channel Enhancements | The Sunshine Conversations channel integration is enhanced with additional features for bot designers. |
Handling of Special Characters in LOV Synonyms | In non-Chat Script engine languages (other than English, Spanish, German, and French), special characters in LOV synonyms are now normalized during both design time and runtime, ensuring accurate extraction of entities from user inputs. The change will apply to existing bots when the model is trained for the first time after this release. Similarly, the published copy of the bots will incorporate this change when the model is first published after this release. |
v10.1.15 December 02, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Nice inContact (DFO) Channel Support | The Platform now supports Nice inContact as a channel in the DFO mode for integrating a virtual assistant with the NiceCXone Chat Automation platform to create a rich and synchronous messaging experience for the end users. Learn more. |
Document with Text Template Support on Gupshup’s WhatsApp Channel | The Platform now supports the WhatsApp “Document with Text” outbound message template for the Gupshup channel. The template allows bot designers to send a document with accompanying text for the end-user to view and download. Learn more. |
Display ‘Postback’ Value and ‘Title’ for Buttons in the WebSDK Templates | In addition to the ‘Postback’ value, the Platform now captures the ‘Title’ associated with the buttons used in the WebSDK templates. This enables the Platform to show both the ‘Postback’ value and ‘Title’ in the WebSDK’s chat window, Chat History API, and Conversation History API. Learn more. |
v10.1.14 November 18, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Nice inContact Channel Support | The Platform now supports Nice inContact as a channel for integrating a virtual assistant with the NiceCXone Chat Automation platform for creating a rich and synchronous messaging experience for the end users. Learn more. |
ServiceNow Agent Integration – Additional Capabilities | Pass User Information to the ServiceNow Agent System: When the Platform transfers an end user's conversation to the ServiceNow Agent System, the user appears as a Guest. Bot designers can now use the 'ServiceNowMetaData' object to pass user information to the agent system and use it to update the username in the agent chat. Learn more. Route User Messages to Different Agent Groups by 'streamId': The Platform now sends 'streamId' as a context variable in the request payload sent to ServiceNow. Bot designers can define a Work Item Routing Condition in the Queue using the 'streamId' context variable in the ServiceNow agent system to route user messages to different agent groups by 'streamId'. Learn more. |
Genesys Chat Renamed to Genesys Cloud CX Messaging | Genesys provides multiple offerings for chat automation; however, the Platform only supports integration with Genesys Cloud CX Messaging. To make this clear in the user interface, the Genesys Chat channel is renamed to Genesys Cloud CX Messaging across the Platform. Learn more. |
Increased Allowed Retries Limit for IVR | The Allowed Retries limit for an Entity node has increased from 5 to 10. This change gives flexibility to bot designers in defining the retries within a broader range and also makes the range consistent with the No Match (Voice Call Properties) retry limit. Learn more. |
Custom Dashboard: Widgets Alignment Capabilities | New widgets alignment capabilities on the Custom Dashboard allow bot designers to add up to 4 widgets per row and organize them by moving anywhere within or across the rows on the Dashboard. They can also manually resize the widgets. Learn more. |
Ability to Cancel an Ongoing Batch Test Execution | The Platform now allows bot designers to cancel an ongoing Batch Test execution in both ‘Published’ and ‘In-Development’ modes. This flexibility lets designers intervene immediately if a test case requires any change, rather than waiting for the execution to finish. Learn more. |
v10.1.13 November 4, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Customize Agent Transfer Containment Type | The Agent Transfer node now has a new Containment Type property that allows bot designers to choose whether an instance of Agent Transfer should be considered as Self-service or Agent Hand-off. This configuration helps separate the instances where handing off to an agent is by design; for example, a user asking for a use case for which the virtual assistant is not yet trained. Learn more. |
Auto-upgrade to NLP Version 3 | In continuation of our previous communications, NLP V2 is deprecated, and all virtual assistants should be migrated to NLP V3. As part of this update, all 'inactive bots' are automatically upgraded to NLP V3. Inactive bots are defined as those that did not have any end-user interaction in the past 90 days as of the deployment date. Bot designers will need to manually trigger the training and publishing of these bots to ensure that they function correctly. We also plan to auto-upgrade all 'active bots': starting November 18, 2023, all active bots using NLP V2 will be automatically upgraded to NLP V3 over the following two weeks. The upgrade also includes initiating training for all bot languages in both the configured and published copies of customer bots. During this upgrade process, intent identification may not function as expected, potentially leading to inconsistent user experiences. To minimize disruptions, we strongly recommend upgrading your bots manually before November 18 during your non-critical business hours. Learn more. |
New Language Support – Tagalog | The Platform now supports conversation in the Tagalog language. |
Split Incorrectly Combined Words in Spanish | In Spanish, users often combine words while typing. For example, 'los', 'hombres', and 'como' may be written as a single word – 'loshombrescomo'. The combined words negatively impact NLP performance and accuracy in identifying the right intent and entities, especially for the Standard and Few-shot network types. A new configuration is introduced under Advanced NLU Configuration – Split Combined Words. When the configuration is enabled, the combined word is split into valid words and considered for intent detection. Learn more. |
Customize the Standard Behavior of 'help' | The Platform now provides better control over the behavior of the prebuilt 'help' intent. Note: You can use the custom_help intent even without disabling the system help. |
v10.1.12 October 14, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Create Apps using additional JWT signing algorithms | When creating an app to use public APIs, the Platform now supports two more JWT signing algorithms to choose from – HS512 and RS512, in addition to HS256 and RS256. Please note the exception: the RS256 and RS512 JWT signing algorithms are not supported for Custom BotKit or BotKit (a token-signing sketch follows this table). |
New Import Bot Functions API | The new Import BotFunctions API allows bot designers to import only the bot functions as a separate file. This new API reduces unnecessary steps in the process of updating the Bot Functions file. Learn more. |
Filter API Call Logs by Status Code | Bot designers can now filter API Call Logs by status codes and gain insights into the performance of services in different scenarios, troubleshoot issues, and implement appropriate corrective actions. Learn more. |
Uniform Voice Call Properties across Entity, Message, and Confirmation nodes | All the Voice Call Properties previously available only at the Entity node are now supported at the Message node and the Confirmation node. Having uniform Voice Call Properties across the nodes gives bot designers better control at the node level, allowing them to use the properties as per their customers' specific use cases for supported voice channels. Learn more. Please note the exception: currently, the Customize Retries Behavior function across the nodes is supported only for the IVR channel. |
Support for Genesys AWS Regions | Earlier, the Platform routed all requests by default to the US East region URL of the Genesys Cloud system. As a result, if the customer's Genesys Cloud system was deployed in a region other than US East, the Genesys chat channel did not work. Now, the Platform has introduced a new Genesys Cloud Login URL field on the Genesys Chat channel page and the Genesys Agent Transfer configurations page, allowing bot designers to provide the URL associated with their customer's Genesys AWS region. This ensures that the Platform directs conversation requests to the chosen Genesys AWS region. |
Public API for Conversation Testing | The Platform has introduced new public APIs for conversation testing so that bot designers can automate the creation and execution of a test suite. |
Option to Delete Configured LLM Integrations | The new Reset Configuration option on the LLM & Generative AI page lets bot designers delete a configured LLM integration that’s no longer in use. Learn more. |
Auto-upgrade Inactive bots to NLP Version 3 | Over the next few weeks, the Platform will identify all inactive bots that have not had any runtime sessions in the last three months and auto-upgrade them to NLP V3. |
Customize the Header and Footer of the Email Template | A new Email Template Design (optional) setting is introduced on the Email Channel Configuration page that allows bot designers to customize the header and footer of the email template for the Email Channel. Learn more. |
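The token-signing sketch referenced in the "Create Apps using additional JWT signing algorithms" row above: a minimal Node.js example using the widely available `jsonwebtoken` package to sign an app token with the newly supported HS512 algorithm. The claim name (`appId`) and expiry shown are assumptions for illustration; check the Platform's app-registration documentation for the exact payload it expects.

```javascript
// Minimal sketch: sign a JWT with the newly supported HS512 algorithm using
// the jsonwebtoken package (npm install jsonwebtoken). The claim name below
// is an illustrative assumption, not the Platform's documented token contract.
const jwt = require("jsonwebtoken");

const clientId = process.env.KORE_CLIENT_ID;          // app Client ID
const clientSecret = process.env.KORE_CLIENT_SECRET;  // app Client Secret (HS* signing key)

const token = jwt.sign(
  { appId: clientId },                                 // assumed claim; verify against the app's JWT spec
  clientSecret,
  { algorithm: "HS512", expiresIn: "5m" }
);

console.log(token); // pass this token when calling the public APIs
```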
v10.1.11 September 23, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Migrating to NLP v3 | The XO Platform released NLP version 3 earlier this year. This version offers better performance, accuracy, and the latest LLM features. With this release, a virtual assistant created in NLP v2 can only be trained and published after upgrading to NLP v3. In the coming 3-5 weeks, the Platform will automatically migrate any existing virtual assistants to NLP v3. After the upgrade, the virtual assistant’s configured copy will be in NLP v3, but the published copy will remain in NLP v2. Upon publishing the virtual assistant for the first time post-upgrade, its version will change to NLP v3. Learn more. |
Batch Test Extended to Universal Bots | The Batch Test feature is now extended to include Universal Bots. Analysts can easily derive actionable insights from the intents of all linked bots to understand overall test coverage and performance of intents in a Universal Bot. Learn more. |
Support for POST Method for BasicAuth Profile | In addition to the GET method, bot designers can now use the POST method to authenticate the BasicAuth profile request to retrieve the data. Learn more. |
Agent Integration support for ServiceNow Utah and Vancouver Versions | The Platform now supports the ServiceNow Agent Integration with the latest Utah and Vancouver versions of the ServiceNow system, in addition to the Tokyo version. Learn more. |
v10.1.10 September 10, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Digital Forms Enhancements | The overall Digital Forms experience is improved significantly. |
Anthropic LLM & Generative AI Integration | In addition to the Anthropic Claude-1 model integration, the Platform now provides out-of-the-box integration with the Anthropic Claude Instant model. The models' advanced Conversational AI capabilities can be leveraged for the LLM features; some features support Claude-1 only, while others support both Claude-1 and Claude Instant. |
Generative AI-specific Nodes Renamed | The Generative AI-specific nodes are renamed to better reflect their intended purpose. The nodes' functionality remains the same. |
Batch Test Execution in Publish Mode | Bot designers can now execute a newly created batch test suite in the Published mode. It helps quickly test and evaluate VA’s performance following a production release. Learn more. |
Prebuilt Integrations with Bitly and Here | Bitly integration provides actions to shorten long URLs. Learn more. Here integration provides actions to find the current geolocation by free text. Learn more. |
Notification Event to WebSDK when BotKit is Unresponsive | The Platform now creates a new event when the BotKit is unresponsive and sends the event to the WebSDK channel. Bot designers can use the event to send a customized message to the end users. Learn more. |
“PrivateClaims” Key to Manage Secure Data in WebSDK | The Platform has added a new “PrivateClaims” key as an alias to the existing “SecureCustomData” key. This key can be used to securely pass the additional data to the Platform. Learn more. |
Enhanced ‘Get User Information’ API | The ‘Get User Information’ API now supports a new ‘status’ request parameter that helps bot analysts get the list of active users on the Platform. In addition, the API now also returns the users’ last login date and time. Learn more. |
Ability to Pass Complex Payload in Alert Task | Bot designers can now declare a complex PayloadField object in the Preprocessors of Alert Task. This PayloadField object can be used in the API Request calls of Alerts to fetch results that are to be used in the Bot Response. It’s useful, especially for configuring alerts that don’t require user input. Learn more. |
New timeOnly Rule for Time Entity | A new “timeOnly(multilang)” entity rule is introduced for the Time entity that allows bot designers to exclude the “T” and “offset” from the time value, for example, 18:00:00 vs. T18:00:00+5:50. Learn more. |
Enhanced NER Detection | The Platform has introduced a new NER Threshold setting under Advanced NLU Configurations > Machine Learning. Only entity values surpassing the threshold are returned, effectively filtering out irrelevant results. Learn more. |
Customize the Flow When User Input Results in Intent and Ambiguous Entity Values | A new setting is introduced under Advanced NLP Configuration – Precedence for Intents with Ambiguous Entities to empower bot designers to control how the bot should respond when a user input results in an intent and ambiguous entity values. Learn more. |
Spanish – Spell Correction Enhancements | Spell correction is now enabled in Spanish for the ML and KG modules. |
Performance Improvements and Fixes | |
v10.1.9 August 19, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Anthropic LLM & Generative AI Integration | The XO Platform now provides out-of-the-box integration with the Anthropic Claude-1 model. The model's advanced Conversational AI capabilities can be leveraged for several LLM features. |
Multilingual Support for Answer from Docs | The Answer from Documents feature is now available in Non-English languages supported by the XO Platform. Learn more. |
Sunshine Conversations Channel Enhancement | As part of the Pass Control action, bot designers can now include additional metadata to offer better context to the receiving agent system when a conversation is transferred, a session ends, or for creating Zendesk tickets. Learn more. |
Debug Logs API supports WebSDK Channel | The XO Platform’s Debug Logs API now allows bot designers to access the debug logs for the WebSDK Channel (‘rtm’), in addition to the existing IVR channel (‘ivrVoice’). Learn more. |
Handling End of Task Event | Selecting the ‘Terminate call’ option under IVR Channel – Voice Call Properties – End of Task Behavior no longer turns off the End of Task event at the bot level. The ‘Trigger End of Task Event’ option continues to update the End of Task event. Learn more. |
Spanish – Spell Correction Enhancements | Spell correction is now enabled in Spanish for the ML and KG modules. |
v10.1.8 August 06, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Revamped Intent Discovery Journey | The Intent Discovery tool was introduced as a beta feature in v10 of the Platform. This powerful tool helps bot designers auto-extract popular intents from previous user conversations, reducing the time and effort to build a virtual assistant and contributing to the success of your Conversational AI journey. Based on valuable customer feedback, we have enhanced the Intent Discovery module with several new features. |
Zoom Contact Center Channel | The Platform now supports Zoom Contact Center as a channel to automate voice and messaging services using asynchronous Webhook integration. Learn more. |
Gupshup Support for WhatsApp Payment | The Platform now supports the WhatsApp Payment outbound message templates in the Gupshup WhatsApp Business Channel, empowering developers to build use cases based on WhatsApp Pay. Learn more. |
Manage the Push Notifications for Web/Mobile SDK Channel | Bot designers can now selectively send push notifications for events such as “Websocket Disconnected,” “App Termination,” and “Message Delivery Failure” in the Web/Mobile SDK channel. They can also customize the push notification messages sent to the client app using the Manage Push Notifications option in the Web/Mobile SDK channel. Learn more. |
Locale Definition Property for IVR | Bot designers can now include the language and locale identifier (xml:lang= “<value>”) in the VoiceXML file by enabling the Locale Definition property for the IVR Channel. Automatic Speech Recognition engines use the language attribute to enhance speech recognition. Learn more. |
v10.1.7 July 22, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Confluence Integration | The XO Platform now offers pre-built integration with Confluence. You can enable the integration and explore various actions associated with your Confluence instance to create blog content or pages. Learn more. |
Asana Integration | The XO Platform now offers pre-built integration with Asana. You can enable the integration and explore various actions associated with your Asana instance to find information regarding projects, users, and tasks. Learn more. |
Trace ID for Incoming Messages | The XO Platform now assigns a unique Trace ID to every incoming message throughout the lifecycle of the message, including the internal application logs and conversational history. This Trace ID is available as part of the Chat History module in various places within the application, and it can be used for any debugging purposes. Learn more. |
Parallel Editing for Concepts & Synonyms | The Parallel Editing feature enables multiple users in a team to simultaneously add or delete Concepts and Synonyms during bot training. Learn more. |
Ranker & Resolver Engine Enhancement | The Ranker and Resolver Analysis panel now displays the list of matched training data during Utterance Training in all the supported NLU languages for Standard and Few-shot Model network types. This crucial information lets NLP analysts understand how the NLP Engine works and why a specific intent is qualified as a winning intent. Learn more. |
v10.1.6 July 08, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Repeat Bot Responses Event for the IVR Channels | The XO Platform has introduced a new Repeat Bot Response event within the Conversation Events category. Bot developers can enable this event and customize the trigger conditions. This empowers end-users to ask the bot to repeat its recent responses at any point during the conversation. The Auto-generate Response option uses the LLM-generated response. Currently, this event is supported for IVR, Audiocodes, and Twilio Voice channels. Learn more. |
New IVR Configuration Properties | The Platform now allows configuring several IVR Configuration Properties per bot. These configurations were previously part of the application configuration and are now available to be customized per bot. |
Publish Intent-Specific Training and Traits without Dependency | The Publish Workflow now empowers NLP trainers to publish only the training data of selected intents without including the task definitions. This applies to ML Utterances, Patterns, and Rules. The trainers can also independently publish Trait Groups and other NL Model configurations. This feature is particularly useful when multiple users are collaborating on a bot, as it enables bot developers or NLP trainers to independently publish their work without affecting each other’s ongoing tasks. Learn more. |
Few-shot Model for Standard Bot Traits | The Platform has introduced a new Few-shot Model for training and identifying traits. This model enhances the accuracy of the NLP engine in identifying traits with minimal training required. All new bots created in NLP Version 3 will now use the Few-shot Model as the default traits model. However, older bots created in NLP Version 3 will still use the Standard traits model as the default. Learn more. |
Debug Logs for Service, Script, and Webhook Nodes | The Platform has enhanced the Debug Logs to show complete information on Service, Script, and Webhook node executions without having to switch to the Analytics – Performance module. With the new Show More option, bot developers can view the request and response inline and quickly identify any issues in the execution flow. Learn more. |
Task Execution Logs Separated from NLP Insights | The task execution-related data is separated from the Analyze > NLP Insights menu and moved to the new Analyze > Task Execution Logs menu. The separation makes it easier for business users and bot developers to find and evaluate the virtual assistant's performance in identifying and executing tasks. Learn more. |
Pre and Post-Processor Scripts for GenAI Node | The GenAI Node now includes Pre-processor Script and Post-processor Script options. These options empower bot developers to define scripts to manipulate data or response and incorporate it into rules or exit scenarios as required. Learn more. |
Support for Generative AI Features in Non-English Languages | The Generative AI features are now supported for non-English NLU and Bot languages. Learn more. |
ServiceNow Agent Supports OAuth with Refresh Token | The Platform now offers a new authorization option called OAuth with Refresh Token for the ServiceNow Agent Integration. This option enables analysts to integrate with the ServiceNow Agent seamlessly. Learn more. |
Delete an Agent Transfer Integration | Bot designers now have the option to delete an already configured external Agent Transfer integration. Learn more. |
Improved Load Performance of Analytics | The release includes various updates that improve the load performance of all the analytics modules. The platform now includes dedicated infrastructure to serve all the analytics modules. The users can view the Analytics data only for the latest six months of conversations. Learn more. |
v10.1.5 June 24, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Multiple Models Support for LLM & Generative AI | While configuring the LLM integration, you can now enable multiple LLM models for both OpenAI and Azure OpenAI. This allows you to select a specific model for a specific feature and tweak the model’s settings to suit your use case. Learn more. |
GenAI Prompt | The new GenAI Prompt node lets bot developers leverage the full potential of LLM and Generative AI models to quickly build their own prompts. Developers can also select a specific AI model, tweak its settings, and preview the response for the prompt. The node allows developers to creatively leverage LLMs by defining the prompt using conversation context and the response from the LLMs in defining the subsequent conversation flow. Learn more. |
Azure OpenAI support for Answer from Docs and Zero-shot model | The Azure OpenAI integration is now available for the Answer from Docs feature and the Zero-shot model. You can enable either the OpenAI integration, Azure integration, or both. Learn more. |
Set Ambiguous Intents as Expected Results in Batch Tests | While creating a test suite, you can now tag multiple intents to a test utterance. It helps in scenarios where ambiguity is by-design and should be considered as True Positive. Do note that the ‘actual’ intents can include additional intents that are not part of the ‘expected’ intents. Learn more. |
Propagate Voice Call Properties for Linked Bots from Universal Bot | You can now propagate the Voice Call Properties from the Universal Bot to all its linked bots. It helps you to leverage the UB properties when a linked bot’s execution is in progress. The Voice Call Properties of the linked bot are used when the conversation is happening directly with the bot as a Standard bot. Learn more. The Voice Call Properties are supported by all the voice channels that have an IVR integration. |
None Intent In Universal Bots | The None Intent feature is now available for Universal Bots. You can add ‘None Intent’ training against linked bots of a Universal bot so that these linked bots are not qualified when the user utterance identifies the ‘None Intent’. Learn more. |
Customize the Flow when the Service Node Calls Time out | The XO Platform now provides an option to bot designers to stay in the context of the current dialog even when the service node calls time out. You can customize the dialog flow to provide a contextual response or take an alternate path in the conversation flow for a better error-handling experience. Learn more. |
Use Dynamic Variables for Defining Transition Conditions | The XO Platform now allows you to use dynamic variable resolutions for the right-hand side field values in Transition Conditions of Dialog Tasks. Dynamic variable (content, context, or env) resolutions allow you to populate the values of certain fields at runtime based on the current state of the conversation or the user’s input, allowing you to manage the conversation flow more efficiently. Learn more. |
Custom Entity Configuration Using Variables | While configuring a custom entity for a node, in addition to using an expression, you can now use a Content, Context, or Environment variable that contains a regular expression or concept as the value. Learn more. |
New initiateTraining Parameter for Publish Bot API | The new 'initiateTraining' parameter in the request body lets you indicate whether to trigger or skip the training process when publishing the bot. We highly recommend always initiating the training as part of the publishing process. Learn more. |
Performance Improvements and Fixes | |
v10.1.4 June 10, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Conversation Summary API | The new Conversation Summary API summarizes the conversation between a user and an agent. The agent can be a virtual agent or a human agent. The API accepts the conversation Id or the transcripts as input and provides an auto-generated conversation summary. Currently, the API supports transcripts only in English (en) and is available only in our global deployment (https://bots.kore.ai). Learn more. |
Dynamic Intent Node | The new Dynamic Intent node lets bot designers dynamically trigger any of the flows present in the bot by defining the intent name using variables. The target intent can be either a Dialog Task or a FAQ. You can use this node to proactively disambiguate an intent and choose different intents based on the user or conversation context. In the case of linked bots associated with Universal Bots, this node can also be used to trigger flows present in any other linked bots of the corresponding Universal Bot. This feature allows you to invoke reusable or utility flows present in one linked bot from another, for example, user authentication, OTP verification, etc. It simplifies switching between tasks in different linked bots of a Universal Bot. Learn more. |
Ability to Contextually Secure Sensitive Inputs | You can now contextually secure user inputs received at specific stages of the conversation. This is an extension of the current PII functionality, which redacts specific user information at any point in the conversation by matching it against defined patterns. However, there can be scenarios where the user input has to be masked only at certain places in the conversation and not at all times. For example, a PIN, which is usually a 4- or 6-digit number, should be masked only when asked for; masking any 4- or 6-digit number throughout the conversation is not desirable. The new feature in the XO Platform allows you to contextually secure the user inputs provided for certain Entity nodes. It enables bot designers to secure the input where required without impacting similar-looking inputs elsewhere. Designers can define one or more patterns to identify sensitive data and choose how to mask or de-identify it using options like redaction, replacement, or masking. Learn more. |
Separate Retry Settings for Timeout, No Match, and Error Prompts | The platform allows bot designers to define the prompts to be played for various events like Timeout, No Match, and Error. However, the platform allowed only one set of prompts and configurations for all these events. You can now configure the prompts, number of allowed retries, and behavior on exceeding retries for each of these events. This allows you to create a more personalized voice conversation experience. Currently, this is available only for the IVR Channel. Learn more. |
Ambiguous Intents Identified Event for Universal Bots | Support for Ambiguous Intents Identified Event is now extended for Universal Bots. Learn more. |
New Ignore Words Rule for LOV Entity | The new ignoreWords rule helps bot developers to extend the ignore or stop words list for the LOV entity. Learn more. |
Typeahead Suggestion for Namespaces | While creating or editing a variable, instead of manually scrolling and selecting the namespaces, you can now search for namespaces using the typeahead suggestion. Filter and search for namespaces are also enabled on the Manage Variable Namespaces screen. Learn more. |
Manage Workspaces | The XO Platform now allows an admin user to manage Billing, Payment, and Usage information directly using the Manage button under the workspace. This is available for all the users with the ‘Admin’ role for the Bot Admin Console. Learn more. |
Support for GIF format in Web & Mobile SDK channel | The Web/Mobile SDK now renders .GIF files within the chat window, instead of showing them as attachments. Learn more. |
Node.js Upgrade | The Node.js version is upgraded from 14.x to 16.x/18.x. |
v10.1.3 May 30, 2023
(Released on May 30, 2023, in the US region and on May 27, 2023, in all other regions)
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Google Maps Integration | The XO Platform now offers pre-built integration with Google Maps. You can enable the integration and explore various actions associated with Google Maps to find places by name and locations by coordinates. Learn more. |
DHL Integration | The XO Platform now offers pre-built integration with DHL. You can enable the integration and explore various actions associated with DHL to track your order and find the locations of DHL. Learn more. |
Support for MS Teams Channel using Single Tenant App | You can now enable Microsoft Teams using the single-tenant app, in addition to the multi-tenant app, as a delivery channel to your Kore.ai Virtual Assistant to allow it to interact with end-users using Microsoft Teams. Learn more. |
Sessions History API Enhancement | A new parameter is added to filter the API response data using Session ids. Learn more. |
WebSocket Enhancements | |
v10.1.2 May 13, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Shopify Integration | The XO Platform now offers pre-built integration with Shopify. You can enable the integration and explore various actions associated with your Shopify Shop instance to find information regarding Customers, Products, and Orders. Learn more. |
v10.1.1 April 29, 2023
Patch Release
This update includes bug fixes and feature enhancements.
Feature | Enhancement |
---|---|
Improved Genesys Channel Support | The Platform now supports OAuth-based integration for enabling the Genesys Chat channel. The Basic Auth flow is no longer supported. Learn more. |
Pre-built Integration with Nice in Contact (UserHub) for Agent Transfer | Kore.ai’s pre-built Agent Transfer integrations now allow you to seamlessly hand off the conversations to the Nice in Contact (UserHub) agent system. Learn more. |
Entity Placeholder support for Few-shot | The Platform now supports replacing entity values with entity placeholders for the Few-shot Model. This improves the accuracy of the Few-shot Model. |
v10.1 April 16, 2023
Minor Release
Version 10.1 of the Kore.ai XO Platform focuses on leveraging the power of Large Language Models and Generative AI to enable enterprises to create intelligent conversational experiences. The release offers a copilot for smart assistance and better conversational capabilities, and delivers personalized responses.
For more information on key features and enhancements introduced in the release, see What’s new in v10.1.