GenAI Prompt (BETA)

The GenAI Prompt node lets bot developers leverage the full potential of LLM and Generative AI models to quickly build their own prompts. Developers can select a specific AI model, tweak its settings, and preview the response for the prompt. The node lets developers creatively leverage LLMs by defining the prompt using conversation context and then using the LLM's response to shape the subsequent conversation flow.

Node Behavior

Runtime

You can work with this node just like any other node within Dialog Tasks and invoke it within multiple tasks. During runtime, the node behaves as follows:

  1. On reaching the GenAI Prompt node, the platform parses any variables used in the prompt and constructs the request using the Prompt and the Advanced Settings (see the illustrative sketch after this list).
  2. An API call is made to the model with the request.
  3. The response is stored in the context object as part of the dialog context and can be used to define the transitions or any other part of the bot configuration.
  4. The platform exits from the GenAI Prompt node when a successful response is received, or the defined timeout condition is met.
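
For illustration only, assuming the OpenAI Completions-style call implied by the sample response under About Responses, the request built from the Prompt and the Advanced Settings might resemble the sketch below. This is not the platform's internal format, and the variable name is hypothetical:

// Hypothetical sketch of the outgoing request, not the platform's actual code
var request = {
    model: "text-davinci-003",                    // model selected in Advanced Settings
    prompt: "Extract the destination city from: "
            + context.entities.UserMessage,       // prompt after variable parsing (variable name is hypothetical)
    temperature: 0.5,                             // Temperature from Advanced Settings
    max_tokens: 256                               // Max Tokens from Advanced Settings
};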

Enable the Node

This node is not available by default. You can enable it for all Dialog Tasks as follows:

  • Configure the Open AI integration and enable the GenAI Prompt feature under Build > Natural Language > Advanced NLU Settings. You can also select an LLM model and its settings for the feature. By default, these selections apply across the platform for this feature. Learn more.
Note: If you do not configure an LLM model or do not enable the GenAI Prompt feature, the node will not be available within the Dialog Builder.

Setting up a GenAI Prompt in a dialog task involves adding the node at the appropriate location in the dialog flow and configuring various properties of the node, as explained below.

Add the Node

  1. Go to Build > Conversational Skills > Dialog Tasks and select the task to which you want to add the GenAI Prompt.
  2. Use the “+” button next to the node under which you want to add the GenAI Prompt node, choose GenAI Prompt, and then click New GenAI Prompt. (For more information on adding nodes, see the different ways to add a node.) Alternatively, you can drag and drop the GenAI Prompt node to the required location on the canvas.
  3. The GenAI Prompt window is displayed with the Component Properties tab selected by default.

Configure the Node

Component Properties

The settings made within this section affect this node across all instances in all dialog tasks.

General Settings

In this section, you can provide a Name and a Display Name for the node and write your own OpenAI prompt.

  • Prompt: The prompt defines the request to be sent to the LLM for generating a response. Typical use cases include entity or topic extraction, rephrasing, and dynamic content generation. The prompt can have up to 2000 characters and can be defined using text along with Context, Content, and Environment variables (see the example after this list).
  • Preview Response: Preview the OpenAI response for your prompt. When you click Preview Response, the Platform parses any variables used in the prompt and constructs the OpenAI request using the Prompt and the Advanced Settings. If the response is not relevant, you can tweak the Prompt and the Advanced Settings to improve it.
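
For example, a prompt for entity extraction might combine fixed text with a context variable. The variable name below is hypothetical, and the {{...}} reference style follows the context example shown under About Responses:

Extract the destination city from the following message and reply with only the city name: {{context.entities.UserMessage}}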

Advanced Settings

In this section, you can change the model and tweak its settings.

Adjusting the settings allows you to fine-tune the model’s behavior to meet your needs. The default settings work fine for most cases. However, if required, you can tweak the settings and find the right balance for your use case.
Set System Context, Temperature, and Max Tokens

  • Model: The default model for which the settings are displayed. You can choose another supported model if it is configured. If you select a non-default model, it is used for this node only. To change the default model, select the model from the drop-down list and use the Mark Default option shown next to its name.
  • System Context: Add a brief description of the use case context to guide the model.
  • Temperature: The setting controls the randomness of the model’s output. A higher temperature, like 0.8 or above, can result in unexpected, creative, and less relevant responses. On the other hand, a lower temperature, like 0.5 or below, makes the output more focused and relevant.
  • Max Tokens: It indicates the total number of tokens used in the API call to the model. It affects the cost and the time taken to receive a response. A token can be as short as one character or as long as one word, depending on the text.

Advanced Controls

In this section, you can select the maximum wait time to receive a response from the LLM and decide how the bot should respond when the timeout occurs.
Timeout settings for the node

  • Timeout: Select the maximum wait time from the drop-down list. The timeout can be any value between 10 and 60, with a default of 10.
  • Timeout Error Handling: Choose how the bot should respond when the timeout occurs:
    • Close the Task and trigger Task Execution Failure Event
    • Continue with the task and transition to this node; select the node from the drop-down list.

Instance Properties

On the Instance Properties tab, you can configure the instance-specific fields for this GenAI Prompt. These settings are applicable only for this instance and will not affect any other instances of this node.

Custom Tags

In this section, you can add Custom Meta Tags to the conversation flow to profile VA-user conversations and derive business-critical insights from usage and execution metrics. You can add tags for the following:

  • Message: Define custom tags to be added to the current message in the conversation.
  • User: Define custom tags to be added to the user’s profile information.
  • Session: Define custom tags to be added to the current conversation session.

For more information on custom tags, see Custom Meta Tags.

Connections Properties

On the Connections tab, you can set the transition properties to determine the node in the dialog task to execute next. You can write conditional statements based on the values of any Entity or Context Objects in the dialog task, or you can use intents for transitions. See Adding IF-Else Conditions to Node Connections for a detailed setup guide.

Note: These conditions apply only for this instance and will not affect this node when used in any other dialog.

About Responses

All the responses collected are stored in context variables. For example, {{context.GenerativeAINode.NodeName.properties}}. You can define transitions using the context variables.
The responses are captured in a specific format, as shown below.

"context": {
  "GenerativeAINode": {
    "NodeName": {
      "id": "cmpl-7UbzLTumD9ALpfa1mcpf15dK3RnWM",
      "object": "text_completion",
      "created": 1687530223,
      "model": "text-davinci-003",
      "choices": [
        {
          "text": "\n\nI'm sorry, I'm not able to provide that information. However, I would be happy to direct you to a website that may provide the information you are looking for.",
          "index": 0,
          "logprobs": null,
          "finish_reason": "stop"
        }
      ],
      "usage": {
        "prompt_tokens": 58,
        "completion_tokens": 37,
        "total_tokens": 95
      },
      "1687530221473": [
        {
          "nodeId": "NodeName",
          "startTime": "2023-06-23T14:23:42.904Z",
          "endTime": "2023-06-23T14:23:46.029Z"
        }
      ]
    }
  }
}
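
For instance, a Script node or a transition condition could read the generated text from this structure as shown in the minimal sketch below. The node name NodeName matches the sample above, and the extractedAnswer variable is hypothetical:

// Read the first completion returned by the GenAI Prompt node
var genAIResult = context.GenerativeAINode.NodeName;
var generatedText = genAIResult.choices[0].text.trim();

// Store it for use in later nodes only if the model finished normally
if (genAIResult.choices[0].finish_reason === "stop") {
    context.extractedAnswer = generatedText; // hypothetical context variable
}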
