This document describes the releases and the corresponding feature updates and enhancements introduced in the Platform version 10.3.x and later.
v10.9 Dec 11, 2024
Minor Release
This update includes enhancements and bug fixes.
LLM & Generative AI
Introducing Custom LLM Prompt Streaming
The XO Platform introduces the Custom LLM Prompt Streaming feature, which allows responses to be sent to the user piece by piece in real-time as they are generated. This feature applies to the GenAI Node and GenAI Prompt using OpenAI models.
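For illustration only, the sketch below shows the general idea of consuming a streamed response chunk by chunk in JavaScript; the endpoint, payload, and transport are generic placeholders, not the platform’s streaming API.

```javascript
// Hypothetical illustration of prompt streaming: chunks are forwarded to the
// user as they arrive instead of waiting for the full completion.
// The endpoint URL and request body below are placeholders, not platform APIs.
async function streamPrompt(prompt) {
  const response = await fetch("https://example.com/llm/stream", { // hypothetical endpoint
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompt, stream: true })
  });

  const reader = response.body.getReader();
  const decoder = new TextDecoder();

  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    const chunk = decoder.decode(value, { stream: true });
    process.stdout.write(chunk); // send each piece to the user as it is generated
  }
}

streamPrompt("Summarize my last order status.");
```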
Zero-Shot Enhancements
The Zero-Shot intent detection model has been significantly enhanced to improve contextual understanding and intent-matching accuracy, addressing challenges related to large intent sets and false positives.
Key Updates
- Expanded Input: The enhanced model now incorporates intent descriptions, conversation history, list of intents, and user utterances for more accurate intent recognition.
- New Prompt Structure: A new prompt, “Zero-Shot-V2,” is available for XO v10 and v11, enabling platform users to input additional contextual components.
- System Prompts: Pre-built system prompts are provided for out-of-the-box integrations (Azure OpenAI and OpenAI), allowing users to leverage the enhanced Zero-Shot Model without crafting custom prompts.
- Custom Prompts: Platform users can create custom prompts linked to system models for Zero-Shot, offering full control over prompt design to tailor them to specific features, contexts, and user needs.
- Custom LLM Support: The Zero-Shot ML model can now be used with the Bring Your Own Model (BYOM), enabling platform users to define prompts and leverage the enhanced capabilities with their custom LLMs.
These enhancements significantly improve the Zero-Shot model’s ability to accurately identify intents, particularly in complex or nuanced conversational scenarios, while maintaining compatibility with existing configurations.
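To make the expanded input concrete, here is a hedged sketch of how the additional components (intent descriptions, conversation history, intent list, and user utterance) might be assembled into a Zero-Shot-V2 style prompt; the variable names and layout are illustrative assumptions, not the platform’s exact prompt format.

```javascript
// Hypothetical assembly of a Zero-Shot-V2 style prompt from the expanded inputs.
// Variable names and the prompt layout are illustrative assumptions.
const intents = [
  { name: "Check Order Status", description: "User wants to know where their order is." },
  { name: "Cancel Order", description: "User wants to cancel an existing order." }
];

const conversationHistory = [
  { role: "user", content: "Hi, I ordered a laptop last week." },
  { role: "bot", content: "Thanks! How can I help you with that order?" }
];

const userUtterance = "It still hasn't arrived. Where is it?";

const prompt = [
  "Identify the user's intent from the list below.",
  "Intents:",
  ...intents.map(i => `- ${i.name}: ${i.description}`),
  "Conversation history:",
  ...conversationHistory.map(m => `${m.role}: ${m.content}`),
  `User: ${userUtterance}`,
  "Answer with the single best-matching intent name."
].join("\n");

console.log(prompt);
```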
v10.8.1 Nov 18, 2024
Patch Release
This update includes bug fixes.
v10.8 Nov 03, 2024
Minor Release
This update includes enhancements and bug fixes.
LLM & Generative AI
Amazon Bedrock Integration
The XO Platform now offers Amazon Bedrock as an out-of-the-box (OOB) integration. This integration lets platform users access Amazon Bedrock’s models directly from the XO Platform. Users can create custom prompts for their specific use cases and use the connected models across all Co-Pilot and Dynamic Conversations features. Note that while Amazon Bedrock is available as an OOB integration, the XO Platform does not provide any system prompts or templates for it; the connected models can be used only through custom prompts.
Key features:
- Amazon Bedrock Connection: Connect to multiple Amazon Bedrock models securely using your AWS credentials and IAM role authentication.
- Integration Setup: Configure the integration by providing an integration name, model details, endpoint, and IAM role credentials.
- Custom Prompts: Create model-specific prompts for all Co-Pilot and Dynamic Conversations features with complete control over prompt design and optimization.
- Security: End-to-end encryption with secure temporary credential management through AWS STS (Security Token Service).
Key benefits:
- Access to Amazon’s advanced language models.
- Seamless integration with existing XO Platform features.
- Flexible prompt customization.
- Enterprise-grade security.
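As an illustration of the custom-prompt approach, the sketch below shows a request body for an Anthropic model on Amazon Bedrock; it follows the general Bedrock messages format, but the exact structure and the {{...}} placeholder syntax should be treated as assumptions and adapted to the chosen model.

```javascript
// Illustrative request body for a custom prompt targeting an Anthropic model on
// Amazon Bedrock. Field names follow the general Bedrock messages format, but
// treat the exact structure and the {{...}} placeholder syntax as assumptions.
const customPromptBody = {
  anthropic_version: "bedrock-2023-05-31",
  max_tokens: 512,
  messages: [
    {
      role: "user",
      content: "You are a support assistant. Conversation so far: {{conversationHistory}}. " +
               "User question: {{userInput}}. Reply concisely."
    }
  ]
};

console.log(JSON.stringify(customPromptBody, null, 2));
```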
Channels
Netcore WhatsApp Business Integration
The Platform now supports WhatsApp business messaging using Netcore as a new channel. This integration enables businesses to connect their WhatsApp Business account through Netcore to send text, media, and interactive messages using Virtual Assistants.
Key features:
- Single API Integration: Manage all WhatsApp conversations and virtual assistants through one API.
- Rich Messaging Capabilities: Send text messages, share media files, and add interactive buttons and quick replies.
- Secure Interactions: Built-in end-to-end encryption for secure message delivery.
Conversation Testing
Support for Voice Experience Testing
The Platform now supports conversation testing for IVR (Interactive Voice Response) channels, enabling platform users to simulate and validate voice-based dialog flows before deployment.
Key features:
- Test Setup: Choose between Webchat and IVR channels, configure connect events, and add custom scripts to simulate real-world scenarios.
- Testing Options: Record and validate IVR flows with text inputs, timeout simulations, and error-handling scenarios. Each test case displays channel-specific indicators.
- Test Suite Management: Create, import, and export test cases with complete regression testing and response validation support.
Support for Preprocessor Script
The new Preprocessor Script for Conversation Testing lets platform users control preconditions during conversation testing. Users can run custom scripts before the recording, validation, and execution phases.
Key features:
- Script Editor: Create, edit, and manage preprocessor scripts with built-in syntax highlighting and error checking.
- Testing Controls: Execute scripts before initiating test recording and validation, with the ability to modify and re-validate on the fly.
- Data Management: Control session context by modifying data, simulating external systems, and tracking changes through secure execution.
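A minimal sketch of what such a preprocessor might look like is shown below, assuming the script can read and modify a session context object before the recording or validation phase; the object shape and key names are illustrative.

```javascript
// Hypothetical preprocessor run before a test recording/validation phase.
// It seeds session context and stubs an external system response so the test
// starts from a known precondition. Names below are illustrative assumptions.
function preprocess(context) {
  // Seed user profile data the dialog expects
  context.session = context.session || {};
  context.session.userTier = "gold";
  context.session.locale = "en-US";

  // Simulate an external system by injecting a canned API result
  context.mockedResponses = {
    orderService: { orderId: "A-1001", status: "SHIPPED" }
  };

  return context;
}

// Example usage with an empty context
console.log(preprocess({}));
```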
v10.7.1 Oct 21, 2024
Patch Release
This update includes enhancements and bug fixes.
Bot Admin Console
Enhanced Audit Report
The Audit Report has been enhanced to provide better user tracking and accountability. A user email column has been added to the audit logs table as a unique identifier alongside the existing user name, and it is also included in the exported audit report. Column names have been updated for consistency across the UI and exported files. These enhancements allow more precise identification of the users who made changes, improving traceability and simplifying troubleshooting.
Backward compatibility: User email will be available only for new audit entries, not for existing data.
v10.7 Sep 28, 2024
Minor Release
This update includes enhancements and bug fixes.
LLM & Generative AI
Dynamic Variables for LLM Integration
The XO Platform now supports dynamic variables in LLM integrations, allowing platform users to use content, context, and environment variables when configuring system and custom LLMs. This helps create more adaptable and secure virtual assistants that can easily adjust to different needs and settings (see the configuration sketch at the end of this section).
Key features:
- Use content and environment variables for Co-pilot features and all variable types for Dynamic Conversation features.
- Configure API keys, endpoints, authentication settings, etc., using variables.
- Test LLM configurations using sample values.
Key benefits:
- Adapt LLM settings across different environments.
- Enhance security by storing sensitive information as environment variables.
- Simplify configuration management and updates.
- Improve testing and development processes.
- Enable flexible deployment for multi-tenant applications.
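As a hedged illustration of dynamic variables in an LLM configuration, the sketch below pulls the API key and endpoint from environment variables and injects a context variable into the prompt; the {{...}} placeholder syntax and field names are assumptions, not the platform’s exact configuration format.

```javascript
// Illustrative LLM integration configuration using dynamic variables.
// The {{env.*}} / {{context.*}} placeholder syntax and field names are assumptions.
const llmConfig = {
  endpoint: "{{env.llmEndpoint}}",            // environment variable: per-environment URL
  headers: {
    Authorization: "Bearer {{env.llmApiKey}}" // secret kept out of the app definition
  },
  body: {
    model: "gpt-4o",
    messages: [
      { role: "system", content: "Answer in the user's language: {{context.userLanguage}}" },
      { role: "user", content: "{{userInput}}" }
    ]
  }
};

console.log(JSON.stringify(llmConfig, null, 2));
```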
Enhanced Guardrails Framework for GenAI
This update significantly improves the Guardrails framework, enhancing safety and reliability in GenAI deployments. With optimized screening, better reporting, and broader LLM support, platform users can create more secure and effective AI solutions while improving overall system performance.
Key updates:
- Optimized screening process for faster responses.
- Detailed fallback reporting with specific breach information.
- Improved logging for better visibility into guardrail activities.
- Extended support for custom LLMs.
Key benefits:
- Increased safety and reliability in GenAI deployments.
- Improved developer experience with clearer logging and debugging.
- Faster response times due to optimized screening processes.
- Consistent guardrail enforcement across all LLM types.
Backward compatibility:
- This update applies to all existing bots, ensuring a seamless transition to the improved framework.
Enhanced PII Protection in LLM Interactions
The XO Platform now protects sensitive data in LLM calls by replacing it with placeholders; for example, phone numbers are replaced with “[Phone Number]”. This enhances privacy and security, reducing the risk of exposing personal information to external LLM services.
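Conceptually, the substitution works like the small sketch below; the platform’s actual detection logic and placeholder set are not shown here, and the regex is only illustrative.

```javascript
// Illustrative only: replace phone-number-like substrings with a placeholder
// before the text is sent to an external LLM. The platform's actual detection
// logic and placeholder set may differ.
function maskPhoneNumbers(text) {
  const phonePattern = /\+?\d[\d\s().-]{7,}\d/g;
  return text.replace(phonePattern, "[Phone Number]");
}

console.log(maskPhoneNumbers("Call me at +1 (555) 123-4567 tomorrow."));
// -> "Call me at [Phone Number] tomorrow."
```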
Terminology Updates for Improved Clarity and Inclusivity
This update includes an important terminology change on the platform:
- Zero-shot Model Naming Consistency: Standardized the name to ‘Zero-shot Model’ across the platform, eliminating confusion caused by inconsistent naming; for example, ‘Zero-shot Model with OpenAI’ was used under ML > Network Type.
- The Guardrail previously named “Blacklist Topics” has been renamed to “Restrict Topics”.
Channels
Sinch Conversational API Integration
XO Platform now supports Sinch as a new channel. Sinch offers a new omnichannel integration that allows developers to enable virtual assistant interactions across multiple channels, eliminating the need to set up and manage each channel individually.
Key features:
- Omnichannel Support: Manage conversations seamlessly through a single API and deploy virtual assistants across multiple channels, including WhatsApp, Facebook Messenger, Instagram, Viber Bot, Viber Business Messages, Telegram Bot, KakaoTalk, LINE, WeChat, RCS, SMS, MMS, and Sinch Chat.
- Rich Messaging Capabilities: Send text messages, media, and rich content across various channels. Maintain consistent user interactions with conversation management.
- Template Compatibility: Ensure compatibility between selected templates and the configured delivery channel in Sinch.
Key benefits:
- Simplify development and deployment with a unified API for multiple messaging platforms.
- Create seamless and engaging user experiences across various channels.
- Enhance security and user authentication in messaging interactions.
Web SDK
Enhanced Webchat Configuration – From SDK to UI
Previously, configuring certain Webchat features required SDK modifications. This update brings key SDK Webchat functionalities into the user interface, allowing for a more unified and user-friendly approach to bot customization.
Key updates:
- Manage Webchat settings directly from the UI without using SDK.
- New UI-configurable parameters include location sharing, Google Maps API integration, Chat history management, Paginated scrolling, Typing indicator customization, Emoji shortcut controls, and Speaker and Send button toggles.
- SDK Override Option to prioritize UI configurations over SDK settings. This option is disabled by default so that existing customizations continue to render, and it can be enabled for a seamless transition to UI-based configuration.
Key benefits:
- Faster and simpler Webchat setup and modification process.
- Greater control over Webchat features without SDK knowledge.
- Flexibility to switch between UI and SDK configurations.
Backward compatibility:
- Existing SDK configurations remain intact until the override option is enabled in the UI.
Process Apps Deprecation
We’re announcing the deprecation of Process Apps, effective December 31, 2024. This change paves the way for a more advanced, AI-driven automation flow offered by GALE.
Key points:
- Deprecation Timeline: Process Apps will no longer be supported after December 31, 2024. All existing Process Apps will cease to function after this date.
- Suggested Alternative: Explore GALE as an alternative solution. GALE is our next-generation platform for AI-powered automation. It offers advanced features, improved efficiency, and a future-ready architecture.
General Settings
Change Default Bot Language via API
The XO platform now allows users to update the default language of their existing bots using a public API, providing greater flexibility and control over their chatbot configurations.
Key updates:
- Public API for default Language Change.
- Change your app’s default language at any time without creating a new bot.
- Enable any published language in the bot as your default bot language.
Key benefits:
- Adapt your chatbot to changing language requirements effortlessly.
- Save development time and resources by modifying existing bots.
- Easily manage multilingual bots within a single app instance.
Note: The changes made via the API are instantly reflected in both In-development and Published copies of your bot.
Marketplace
Deprecation of Azure OpenAI and OpenAI from Marketplace Integrations
The prebuilt Dialog Templates using OpenAI and Azure OpenAI have been discontinued. These templates were originally provided to showcase what was possible when LLMs were relatively new; however, they relied on older models that the model providers no longer support. Customers are already exploring the full power of GenAI-powered conversational experiences using the GenAI Node and GenAI Prompt.
Deprecation of Answer from Documents
The Search AI module introduced in XO v11 provides an advanced RAG framework for answer-generation use cases, whereas the ‘Answer from Documents’ feature offers only basic RAG functionality. The feature will no longer receive updates and will be discontinued in an upcoming release. We strongly recommend that customers move to the Search AI module.
Dialog Builder
PII Redaction in API Responses (Service Node)
The platform now supports the redaction of sensitive/PII information in responses from external services. Users can select specific parts of API responses for PII scanning and apply suitable redaction patterns.
Key updates:
- A new “PII Redaction for API responses” setting in Service Node Component Properties.
- Customizable redaction for specific keys in API responses.
- Multiple redaction methods, including original value, de-identification, random value, static text, or masking.
- Supports various API response structures, including key-value pairs, arrays, and nested objects.
- Enhanced logging with options for original or redacted data display.
Key benefits:
- Improved compliance with data privacy regulations
- Reduced risk of accidental sensitive data exposure
- Flexible configuration to balance usability and privacy.
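To make the behavior concrete, the sketch below mimics masking selected keys of an API response; the actual configuration is done in the Service Node’s Component Properties, and the key names and masking style here are illustrative.

```javascript
// Illustrative only: mimics masking selected keys of an API response,
// similar in spirit to the Service Node's PII redaction setting.
const apiResponse = {
  customer: { name: "Jane Doe", ssn: "123-45-6789", email: "jane@example.com" },
  orders: [{ id: "A-1001", card: "4111111111111111" }]
};

const keysToMask = ["ssn", "card"];

function maskKeys(value) {
  if (Array.isArray(value)) return value.map(maskKeys);
  if (value && typeof value === "object") {
    return Object.fromEntries(
      Object.entries(value).map(([k, v]) =>
        keysToMask.includes(k) ? [k, "XXXX"] : [k, maskKeys(v)]
      )
    );
  }
  return value;
}

console.log(maskKeys(apiResponse));
// ssn and card values appear as "XXXX" in logs and downstream context
```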
v10.6.1 Sep 14, 2024
Patch Release
This update includes enhancements and bug fixes.
Dialog Builder
Error Handling for Service Nodes
The Service Node’s error handling capability has been enhanced to provide greater control over non-timeout error scenarios, allowing platform users to customize dialog execution when API calls fail for reasons other than timeouts.
Key updates:
- A new option to continue dialog execution after a service call failure.
- Ability to transition to a specific node upon encountering an error.
- Detailed error information is available in the service node response object.
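For example, a downstream script could branch on the service node’s response object after a failed call, as in the sketch below; the response shape (statusCode, body) is an assumption and may differ from the actual context object.

```javascript
// Hedged sketch: branching on a service call result the way a script might
// after a non-timeout failure. The context shape (statusCode, body) is an assumption.
const context = {
  GetOrderStatus: { response: { statusCode: 502, body: null } } // example failed call
};

const result = context.GetOrderStatus && context.GetOrderStatus.response;
let message;

if (!result || result.statusCode >= 400) {
  // Error path: continue the dialog with a fallback instead of failing the task
  message = "We couldn't reach the order system right now. Please try again later.";
} else {
  message = `Your order is ${result.body.status}.`;
}

console.log(message);
```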
v10.6 Sep 01, 2024
Patch Release
This update includes enhancements and bug fixes.
LLM and Generative AI
Introducing Guardrails
Large language models (LLMs) are powerful AI systems that can be leveraged to offer human-like conversational experiences. The Kore.ai XO Platform offers a wide range of features to leverage the power of LLMs. LLMs are usually pre-trained with a vast corpus of public data sources, and the content is not fully reviewed and curated for correctness and acceptability for enterprise needs. This results in generating harmful, biased, or inappropriate content at times. The XO Platform’s Guardrail framework mitigates these risks by validating LLM requests and responses to enforce safety and appropriateness standards.
Guardrails enable responsible and ethical AI practices by allowing developers to easily enable/disable rules and configure settings for different features using LLMs. Additionally, platform users can design and implement fallback behaviors for a feature, such as triggering specific events, if a guardrail detects content that violates set standards.
Azure OpenAI GPT-4 Turbo and GPT-4o Support for LLM & Generative AI Features
The Platform now supports two new Azure OpenAI models for various Co-Pilot and Dynamic Conversations Features:
- GPT-4 Turbo: A high-speed, accurate model ideal for real-time applications like chatbots, virtual assistants, and content generation.
- GPT-4o: The most advanced multimodal model, which can accept both text and images as input, offering improved efficiency and cost-effectiveness compared to GPT-4 Turbo.
Bot Versioning for GenAI & LLM
The platform now includes GenAI and LLM settings in bot versioning, including Model Integrations, Custom Prompts, Feature & Model Prompt mapping, and Safeguards. It allows platform users to manage and track GenAI & LLM configuration changes across different bot/app versions, enhancing control and customization of GenAI features.
Analytics
Enhanced Rate Limit API Response
This update helps API users understand which specific rate limit they’ve hit, allowing them to plan their request strategies more effectively and reduce errors in API usage.
Specific error messages:
- Per-minute limit: “You’ve exceeded the per-minute rate limit. Please wait for some time before retrying.”
- Hourly limit: “You’ve exceeded the hourly rate limit. Please wait for some time before retrying.”
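A client consuming the public APIs might distinguish the two messages and back off accordingly; the sketch below is a generic illustration rather than an official client, and the wait durations are arbitrary examples.

```javascript
// Generic illustration: back off based on which rate limit was hit.
// The wait durations are arbitrary examples.
function backoffDelayMs(errorMessage) {
  if (errorMessage.includes("per-minute rate limit")) {
    return 60 * 1000;        // wait out the current minute
  }
  if (errorMessage.includes("hourly rate limit")) {
    return 15 * 60 * 1000;   // wait longer before retrying
  }
  return 5 * 1000;           // default small delay
}

console.log(backoffDelayMs("You’ve exceeded the per-minute rate limit. Please wait for some time before retrying."));
```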
Channels
Customizable VXML Error Threshold in IVR Voice Call Properties
Previously, the VXML Error Threshold was fixed at the environment level and applied to all bots, with no option for customization. Platform users now have the flexibility to set the number of retries at the bot level, tailoring it to the specific needs of their system.
Key updates:
- Customizable Error Threshold: Set retry limits based on the system that is being used.
- Default and Custom Options: Choose the default setting or customize the number of retries.
- Retry Range: Configurable from 1 to 3 retries, with 3 as the default.
Backward Compatibility:
- Existing bots default to the Use Default option, which is also included in the bot export/import processes.
Update or Delete Delivered Bot Messages in MS Teams
Bot messages in Microsoft Teams can now be updated or deleted even after they have been delivered to users. This feature provides greater flexibility and control, allowing platform users to disable or remove template messages after a user has taken action on them.
Key updates:
- New ‘channelActionMetadata’ object:
- Captures MsTeams ActivityID, ConversationID, and KoreMessageID.
- Only stores metadata for the latest bot message.
- New channel utility functions (see the usage sketch after this list):
- channelUtil.getActionMetadata(): Retrieves metadata.
- channelUtil.executeAction(): Updates or deletes messages.
- Automatic updates:
- Chat history updates for modified messages
- Message tags emitted for updated/deleted messages
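A hedged usage sketch of the new utilities is shown below. The two channelUtil functions are named in this release, but the argument shapes are assumptions, and the script is intended to run inside the platform’s script context where channelUtil is available.

```javascript
// Hedged sketch for a platform script: update or delete the last delivered
// MS Teams bot message. The channelUtil functions are named in this release,
// but the argument shapes shown here are assumptions.
var metadata = channelUtil.getActionMetadata();
// metadata is expected to include the MsTeams ActivityID, ConversationID, and KoreMessageID

if (metadata) {
  channelUtil.executeAction({
    action: "update",                 // or "delete"
    activityId: metadata.activityId,  // assumed property names
    conversationId: metadata.conversationId,
    message: "This offer has expired."
  });
}
```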
Dialogs
Manage Components Search Enhancement
The improved search functionality in the Manage Components page allows platform users to find components more easily, regardless of whether they remember a component’s technical name or display name.
Key updates:
- A new “Display Name” field is added for all nodes.
- Search now supports both Name and Display Name for all nodes.
- Dynamic filtering as user types.
- Case-insensitive search.
- Matches from the beginning of field names.
- Real-time results update.
Digital Forms
Field Validations using Post Processor Script
The platform now supports custom field validations in Digital Forms using a post-processor script. It allows platform users to create complex, custom validation rules using JavaScript, improving data collection accuracy and user experience.
Key updates:
- Custom Validation:
- JavaScript-based post-processor script.
- Supports dynamic variables, multi-field validations, and regex.
- Design Time:
- An expandable text box for script input.
- Retry limit (20) with an error message on exceeding.
- Run Time:
- Script processed on form submission.
- Error handling with task failure events and debug logs.
- Validation Types:
- Field-level: Highlights the field with the error message displayed below it
- Form-level: Displays the error message above the submit button
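For illustration, a minimal post-processor sketch performing field-level and form-level checks is shown below; the shape of the form data and the way errors are reported back to the platform are assumptions.

```javascript
// Hedged sketch of a Digital Forms post-processor validation.
// The shape of the form data and the way errors are reported are assumptions.
function validateForm(form) {
  const errors = { fieldErrors: {}, formError: null };

  // Field-level check: end date must not be before start date
  if (new Date(form.endDate) < new Date(form.startDate)) {
    errors.fieldErrors.endDate = "End date cannot be earlier than the start date.";
  }

  // Form-level check: at least one contact method
  if (!form.email && !form.phone) {
    errors.formError = "Provide an email address or a phone number.";
  }

  return errors;
}

console.log(validateForm({ startDate: "2024-09-10", endDate: "2024-09-01", email: "" }));
```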
Backward Compatibility:
- Existing forms treat the post-processor as an empty script.
v10.5.1 Aug 11, 2024
Patch Release
This update includes bug fixes and minor enhancements.
LLM and Generative AI
Multi-language Support for System and Custom LLMs
The platform now supports all bot languages for both system and custom LLMs.
Key updates:
- Language-specific responses: LLMs can now generate responses in any bot language that the LLM also supports.
- Preserved sentiment: Responses maintain original sentiment.
- Expanded language options: Available for all LLM features and prompts.
Key benefits:
- Improved accuracy in non-English interactions.
- Enhanced user experience for global audiences.
Custom LLM Framework Update for GenAI Node
The platform has significantly enhanced the GenAI Node’s custom prompt creation flow. It now supports dynamic variables in the prompt definition, gives full control over the prompt structure, and allows prompt definitions to be built dynamically using JavaScript. These updates provide greater flexibility, better conversation context control, and more sophisticated prompt engineering capabilities, allowing platform users to create more advanced and tailored GenAI applications with improved custom LLM integrations.
Key updates:
- Dynamic variables improvements: Conversation history is redefined as an array of objects. New variables are introduced for Required Entities, Collected Entities, and Conversation History Length.
- Variable support: Context, Environment, and Content variables are now supported in prompts and scripts.
- JavaScript mode: Ability to create prompts using JavaScript, with a preview option for script validation.
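A hedged sketch of a JavaScript-mode prompt definition is shown below; the input object shape (conversation history array, required and collected entities, history length) follows the variables described above, but the exact objects exposed to the script are assumptions.

```javascript
// Hedged sketch of a JavaScript-mode prompt definition for the GenAI Node.
// The input object shape (conversationHistory, requiredEntities, collectedEntities)
// is an assumption based on the variables described above.
function buildPrompt(input) {
  const history = (input.conversationHistory || [])
    .slice(-input.conversationHistoryLength)
    .map(turn => `${turn.role}: ${turn.content}`)
    .join("\n");

  const missing = input.requiredEntities.filter(e => !(e in input.collectedEntities));

  return [
    "You are collecting booking details from the user.",
    `Conversation so far:\n${history}`,
    `Already collected: ${JSON.stringify(input.collectedEntities)}`,
    `Still needed: ${missing.join(", ")}`,
    "Ask for the next missing detail in one short sentence."
  ].join("\n\n");
}

console.log(buildPrompt({
  conversationHistory: [{ role: "user", content: "I want to book a flight to Paris." }],
  conversationHistoryLength: 5,
  requiredEntities: ["destination", "date"],
  collectedEntities: { destination: "Paris" }
}));
```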
Channels
MS Teams Modal Dialog Support
The platform now supports Microsoft Teams Modal Dialogs, enhancing the interactive capabilities of virtual assistants deployed on the MS Teams channel. This support allows platform users to handle the ‘Invoke’ action-type messages used by MS Teams for Modal Dialogs.
Key updates:
- New “Modal Dialogs” toggle: The new toggle is introduced in the channel configuration and is off by default.
- Custom URL configuration: When enabled, platform users can set a Custom URL to which the platform forwards Modal Dialog messages for processing.
- Message handling: The platform maintains the conversation context when handling Modal Dialog messages. It forwards these messages with pre-context to the Custom URL for processing, then relays responses back to the user, maintaining a seamless interaction.
Key benefits:
- Enhanced interactivity: Enables form-based experiences in Teams.
- Seamless integration: Preserves conversation context throughout.
- Improved user experience: Supports more complex interactions.
Backward compatibility:
- This update is available for all existing apps and is disabled by default. If enabled, the app must be republished for the changes to take effect.
Admin Console
IP Address Restriction Enhancement
The Admin Console now supports Regex patterns for IP address restrictions. The regex support makes it easier for administrators to manage access across extensive IP ranges.
Key updates:
- Administrators can enter IP ranges using regex patterns when IP Address restriction is enabled.
- Account access is limited to IP addresses matching the provided regex patterns.
Key benefits:
- Reduced manual entry.
- Greater precision in identifying IP addresses through pattern matching.
- Streamlined administration for large IP ranges.
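For example, a pattern like the one below could admit an entire /24 range; the anchoring and escaping are standard regex practice, and administrators should validate patterns against their own address ranges.

```javascript
// Example regex admitting any address in 10.20.30.0/24.
// Shown in JavaScript purely to demonstrate the matching behavior.
const allowedRange = /^10\.20\.30\.(25[0-5]|2[0-4]\d|1?\d?\d)$/;

console.log(allowedRange.test("10.20.30.42"));   // true
console.log(allowedRange.test("10.20.31.42"));   // false
```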
Entity Node
Transient Entity Feature for Enhanced Data Privacy
The platform has introduced a “Transient Entity” feature for the Entity node. It allows platform users to ensure that sensitive user inputs do not persist after a conversation session ends.
Key updates:
- A new “Transient Entity” toggle in Entity Node > Component Settings, visible when the Sensitive Entity option is enabled.
- It’s a component-level property, ensuring consistent application across all instances and flows using the Entity node.
- Applies to all channels, including IVR.
- Masks data during conversation based on existing Sensitive Entity settings.
- Removes specified data from conversation history once the session ends.
- Displays a placeholder “[data_purged]” in conversation history where data has been removed.
Key benefits:
- Enhanced Data Privacy: Sensitive information does not persist after conversations end.
- Regulatory Compliance: Helps businesses meet GDPR, CCPA, and other data protection regulations.
- Customizable: Works with existing Sensitive Entity settings for tailored data masking.
- Audit-Friendly: Improves audit trails with a clear indication of purged data.
- Industry-Specific Value: Particularly beneficial for the BFSI sector with strict data regulations.
Known Issues
We are working to fix these issues in the next release:
- Task Execution Logs (script execution): Entity values in the Context Object are redacted and not purged.
- In Analytics > Conversation History: When a Transient Entity is printed in the message/confirmation node, the data is redacted but not purged.
- Service Node: When a transient entity is used in a service node request, its value appears in plain text in the response.
- Debug Logs (Analytics): When a transient entity is printed as part of debugger statements, the data appears in redacted form and is not purged.
Digital Forms
Field Validations using Regex
The platform now supports Regex-based field validations in Digital Forms, enhancing data collection capabilities.
Key updates:
- Regex Option: Added alongside the existing predefined conditions in Field Validation.
- Flexible Validation: Fields are validated based on the provided regex patterns.
- Error Handling: Displays a custom error for a regex pattern mismatch.
Key benefits:
- Provides greater control over input formats.
- Enables precise data validation for complex scenarios.
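For instance, a US ZIP code field might use a pattern like the following; the pattern and the custom error message are examples only.

```javascript
// Example: validate a US ZIP code field (5 digits, optional +4 extension).
const zipPattern = /^\d{5}(-\d{4})?$/;

console.log(zipPattern.test("30301"));       // true
console.log(zipPattern.test("30301-1234"));  // true
console.log(zipPattern.test("3030"));        // false -> show the custom error message
```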
Agent Transfer
Enhanced Agent Chat History Link
The platform has improved the functionality of chat history links provided to agents during conversation transfers. The access limit for these links has been increased from 5 to 10 times, allowing supervisors to better audit them. Additionally, the links now display the specific conversation that prompted the transfer, providing more relevant context to agents.
Backward compatibility:
- The existing agent transfer chat history links remain unchanged. These changes apply only to the links generated after this release.
v10.5 July 27, 2024
Patch Release
This update includes feature enhancements and bug fixes. The enhancements are summarized below.
LLM & Generative AI Framework
OpenAI GPT-4 Turbo and GPT-4o Support for LLM & Generative AI Features
The Platform now supports two new OpenAI models for various Co-Pilot and Dynamic Conversations features.
- GPT-4 Turbo is a high-speed, accurate model ideal for real-time applications like chatbots, virtual assistants, and content generation.
- GPT-4o is an advanced multimodal model that can accept both text and images as input, offering improved efficiency and cost-effectiveness compared to GPT-4 Turbo.
Admin Console
Usage Information for Enterprise Accounts
Enterprise users can now view consolidated usage data for all their Bots and Apps in the Bot Admin Console. Key updates:
- The new Billing menu in the Admin Console left navigation.
- Combined XO10 and XO11 usage statistics on a single page.
- Detailed usage trends are accessible via the Manage button.
Deploy Management
Import, Export, and Publish GenAI and LLM Settings
A new ‘GenAI and LLM’ option is now available under the ‘Settings’ section for Import, Export, and Publish operations.
The setting includes:
- Integrations
- Prompts and Requests Library
- Feature Mappings
- Guardrails (for v11 only)
Key benefits:
- Deploy GenAI and LLM features across multiple bots.
- Enhance flexibility in managing GenAI-related settings.
- Ensure seamless operation of runtime features for end-users.
Full vs. Incremental Import:
- Full Import
- Deletes existing models and prompts in the target app.
- Overwrites with models, prompts, and GenAI features from the import file.
- Retains model configurations if a model exists in both source and target
- Replaces all feature mappings, custom instructions, and guardrails.
- Incremental Import
- Keeps existing prompts, only adds new ones.
- Replaces all feature mappings, custom instructions, and guardrails.
- Both import types:
- Preserve existing integrations.
- Import XO-GPT integration as-is.
- Enable imported features with warnings.
- Handle “Azure Open AI by Kore.ai” integration based on token status.
Backward compatibility:
- Existing bots in the Configured state are copied to the Published state.
These changes aim to streamline the deployment and management of GenAI and LLM features across multiple bots and ensure the seamless operation of runtime features for end-users.
Channels
Instagram Channel Support
The XO Platform has added Instagram as a new channel option. Users can enable and deploy their virtual assistants on Instagram. Learn more.
APIs
SDK Push Notifications Management APIs
The platform has introduced a new set of SDK Notifications APIs to enhance control over push notifications for mobile devices using the Web/Mobile SDK channel.
Key updates:
- A new API scope is added to the Bot builder – “SDKPushNotification”.
- Three new APIs have been introduced:
- DeviceSubscription API: Subscribe the device to SDK push messages and receive subscription status and device details.
- Subscribed User Devices API: Lists all the mobile devices subscribed to SDK push notifications and their OS types.
- DeviceUnsubscription API: Unsubscribes SDK push messages for specific or all devices of a user.
Key benefits:
- Improved user control: Manage users’ push notification preferences more effectively.
- Enhanced flexibility: Manage device subscriptions programmatically.
Public API for SSO Configuration Management
The platform has introduced public APIs for managing Single Sign-On (SSO) configurations. These new APIs complement the existing bot creation and publishing APIs, allowing for more comprehensive automation of account setup processes. They significantly enhance the platform’s capabilities for enterprise customers who require frequent audits or automated deployments.
Key updates:
- Public API for SSO configuration management:
- Fetch the SSO Meta API: Returns the existing SSO configuration along with the URLs.
- Enable SSO API: Enables the SSO configuration for an account.
- Disable SSO API: Disables the SSO configuration for an account.
- Update the SSO Configuration API: Updates the SSO configuration for an account.
- Support for SAML protocol.
Key benefits:
- Streamlined automation: Customers can now fully integrate SSO configuration into their CI/CD workflows.
- Reduced manual effort: Simplifies the audit process by allowing automated SSO setup.
- Increased flexibility: Account administrators can programmatically manage SSO settings.
- Improved efficiency: Facilitates faster and more consistent SSO deployment across accounts.
Virtual Assistant
Timeout Settings Moved to Instance Properties for Service Node
The timeout settings for the Service Node have been moved from Component Properties to Instance Properties.
Key benefits:
- Increased flexibility: Customize timeout settings for each dialog individually without affecting other tasks using the same Service Node.
- Improved error handling: The “Jump to Specific Node” option now works more reliably within the current dialog.
Backward compatibility:
- Existing service nodes retain their current timeout behavior while the timeout settings are moved to the Instance Properties.
Rephrased User Query Details in the Context Object
The platform now includes the Rephrased User Query in the context object, making it available for downstream tasks. This enhancement improves intent detection, entity extraction, and search accuracy by enriching the user input with contextual signals. Platform users can now leverage rephrased queries for dialog execution and API calls to Search AI.
Key updates:
- New “UserQuery” context object:
context.UserQuery = { originalUserQuery: "<original user input>", rephrasedUserQuery: "<rephrased user query>" }
- New “Conversation History Length” setting: Specifies the number of previous messages sent as context to the LLM. It is available under Generative AI > Dynamic Conversation > Rephrase User Query > Advanced Settings > Conversation History Length.
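For example, a script could prefer the rephrased query when calling a downstream search, as in the sketch below; the property names follow the UserQuery context object shown above.

```javascript
// Sketch: prefer the rephrased query (when present) for a downstream search call.
// Property names follow the UserQuery context object shown above.
const context = {
  UserQuery: {
    originalUserQuery: "and what about the blue one?",
    rephrasedUserQuery: "What is the price of the blue backpack?"
  }
};

const query = context.UserQuery.rephrasedUserQuery || context.UserQuery.originalUserQuery;
console.log("Search query:", query);
```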
v10.4.1 July 13, 2024
Patch Release
This update includes bug fixes.
v10.4 June 29, 2024
Patch Release
This update includes feature enhancements and bug fixes. The enhancements are summarized below.
Voice Call Properties
Dynamic Values for Timeout Duration in Voice Call Properties
This update enables dynamic timeout settings for voice calls via environment variables. Users can now manage timeouts across multiple components without manual adjustments. This approach enhances consistency, reduces errors, and simplifies voice call property management.
The users now have two options for setting timeout durations:
- Preset: Select a maximum wait time between 1 and 60 seconds to receive input.
- Environment Variable: Select any environment variable from a drop-down list or use a search bar to find a specific variable. Learn more.
NLU
Ability to Import ML Utterances from One Language to Another (without Translation)
The platform now supports copying utterances between languages within the same app. This feature simplifies importing and synchronizing utterance data across multiple languages. (The ability to automatically translate the copied utterances into the target language will be available soon.)
Learn more.
Improvements to Zip Code Entities
The Zip Code entity has been enhanced to recognize zip codes that contain separators such as spaces (“ ”) and hyphens (“-”). For example, “1 2 3 45” is identified as “12345”. Learn more.
Digital Forms
Option to Clear Default Date During Design Time
Date fields on digital forms now have a clear (‘x’) icon, which allows users to easily remove the default date value. Learn more.
Agent Transfer
Attachment Sharing Between Users and Live Agents
Users can now send files to agents during conversations. This improves communication and helps solve issues faster. This feature is currently available only for ServiceNow agent integration.
Capability to Handle Agent Fallback Errors
The platform has introduced a new “Agent Transfer fallback response” to improve user experience during agent transfers. Instead of leaving the conversation idle, the platform can now inform users with an appropriate response that can be configured in the app definition. This feature allows for clearer communication and better handling of technical issues during agent transfers. Learn more.
Channels
Discontinuation of the Google Business Messages Channel
Google has announced the discontinuation of the Google Business Messages channel, effective July 31, 2024. This channel will be phased out in the coming weeks. If you have alternative chat channels, consider inviting your customers to continue conversations there. For more details, refer to the Google announcement.
v10.3.1 June 15, 2024
Patch Release
This update includes bug fixes and minor enhancements. The enhancement is summarized below.
Language and Spell Correction Enhancements
Spell Correction Version 2 for Dutch
A new spell correction (Version 2.0) is now available for the Dutch language. Learn more.
v10.3 June 01, 2024
Patch Release
This update includes feature enhancements and bug fixes. Key features and enhancements included in this release are summarized below.
Digital Forms
Preprocessor Script Support for Digital Forms
The Digital Forms module now provides the ability to dynamically configure the form definition and behavior. The newly introduced Preprocessor configuration allows the form definition to be updated dynamically using JavaScript. The platform executes this preprocessor at runtime and delivers the resulting form definition to the channel. The preprocessor can use environment, content, and context variables.
The following are some of the key use cases:
- Dynamically changing the form field titles, descriptions, etc.
- Dynamically populating the values of fields, for example, the options in a drop-down component
- Changing the language of the messages to support multilingual conversations
The koreUtil library has been extended with the “getFormDefinition” function to retrieve and modify the form definition.
This feature also helps address the current limitation of system messages being available only in English. The “formMsgMeta” section of the form data contains the full list of system messages and errors, which can be modified using the Preprocessor. Learn more.
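A hedged sketch of such a preprocessor is shown below. koreUtil.getFormDefinition is named in this release, but the internal structure of the returned definition (components, formMsgMeta) and the context paths used are assumptions, and the script is intended to run inside the platform’s preprocessor context.

```javascript
// Hedged sketch of a Digital Forms preprocessor. koreUtil.getFormDefinition is
// named in this release; the definition's internal structure (components,
// formMsgMeta) and the context paths used here are assumptions.
var formDefinition = koreUtil.getFormDefinition();

// Dynamically populate a drop-down component (assumed field name "city")
var cityField = formDefinition.components.find(function (c) { return c.name === "city"; });
if (cityField) {
  cityField.values = context.availableCities || ["New York", "London", "Paris"];
}

// Localize a system message for multilingual conversations
if (context.currentLanguage === "fr") {
  formDefinition.formMsgMeta.requiredFieldError = "Ce champ est obligatoire.";
}
// Depending on the platform's contract, the modified definition may need to be returned.
```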
BotKit
OnVariableUpdate Event Support for the Universal Bot
The Universal Virtual Assistant now supports the “OnVariableUpdate” BotKit event. When this event is subscribed to, the platform pushes all the variables defined within the virtual assistant to the BotKit, allowing platform users to manage bot variables seamlessly.
Additionally, for bot variables with the propagation flag enabled, the platform will include the details of the propagated child bots and the variable data sent to the BotKit. Learn more.
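A hedged sketch of how a BotKit extension might consume this event is shown below; the handler name and payload shape are assumptions and may differ from the actual BotKit contract.

```javascript
// Hedged sketch only: the handler name and payload shape are assumptions,
// not the BotKit SDK's documented contract.
module.exports = {
  on_variable_update: function (requestId, payload, callback) {
    // payload is assumed to carry the universal bot's variables and, for
    // variables with propagation enabled, the propagated child-bot details.
    console.log("Variables received:", JSON.stringify(payload.variables));
    console.log("Propagated to child bots:", JSON.stringify(payload.childBots || []));
    callback(null, payload);
  }
};
```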
Language and Spell Correction Enhancements
Spell Correction Version 2 for French
A new spell correction (Version 2.0) is now available for the French language. Learn more.
Previous Releases
For details on the features introduced in the previous versions of 10.x releases, see the previous versions’ release notes.