
GenAI Prompt (BETA)

The GenAI Prompt node lets bot developers leverage the full potential of LLMs and Generative AI models to quickly build their own prompts. Developers can select a specific AI model, tweak its settings, and preview the model's response for the prompt. The node lets developers use LLMs creatively: the prompt can be defined using conversation context, and the LLM's response can drive the subsequent conversation flow.

Node Behavior

Runtime

You can work with this node like any other node within Dialog Tasks and can invoke it within multiple tasks. During runtime, the node behaves as follows:

  1. On reaching the GenAI Prompt, the platform parses any variable used in the prompt and constructs the request using the Prompt and the Advanced Settings.
  2. An API call is made to the model with the request.
  3. The response is stored in the context object as part of the dialog context and can be used to define the transitions or any other part of the bot configuration.
  4. The platform exits from the GenAI Prompt node when a successful response is received, or the defined timeout condition is met.
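The runtime steps above can be sketched as follows. This is a hedged illustration, not the platform's actual internals; the function, variable resolution, and node names are all assumptions:

```python
import re

# Hedged sketch of the node's runtime behavior; names are illustrative,
# not the platform's actual implementation.
def run_genai_prompt(prompt_template, context, call_model, fallback_node=None):
    # Step 1: resolve {{dotted.variable}} references against the dialog context.
    def resolve(match):
        value = context
        for key in match.group(1).split("."):
            if not isinstance(value, dict):
                return ""
            value = value.get(key, "")
        return str(value)

    prompt = re.sub(r"\{\{([\w.]+)\}\}", resolve, prompt_template)

    # Steps 2-3: call the model and store its response in the context object,
    # where transitions and other bot configuration can read it.
    try:
        response = call_model(prompt)
        context.setdefault("GenerativeAINode", {})["MyNode"] = response
        return "success"
    except TimeoutError:
        # Step 4: on timeout, close the task or jump to a configured node.
        return fallback_node or "task_execution_failure"
```

The stubbed `call_model` stands in for the API call made with the constructed request.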

Enable the Node

This node is not available by default. You can enable it for all Dialog Tasks as follows:

  • Configure the OpenAI integration and enable the GenAI Prompt feature under Build > Natural Language > Advanced NLU Settings. You can also select an LLM model and its settings for the feature; by default, these selections apply across the platform for the feature. Learn more.
Note: If you do not configure an LLM model or enable the GenAI Prompt feature, the node is not available within the Dialog Builder.

Setting up a GenAI Prompt in a dialog task involves adding the node at the appropriate location in the dialog flow and configuring various properties of the node, as explained below.

Add the Node

  1. Go to Build > Conversational Skills > Dialog Tasks and select the task to which you want to add the GenAI Prompt.
  2. Use the “+” button next to the node under which you want to add the GenAI Prompt. Then, choose GenAI Prompt, and then click New GenAI Prompt. (For more information on adding nodes, see different ways to add a node.) Alternatively, you can drag and drop the GenAI Prompt node to the required location on the canvas.
  3. The GenAI Prompt window is displayed with the Component Properties tab selected by default.

Configure the Node

Component Properties

The settings made within this section affect this node across all instances in all dialog tasks.

General Settings

In this section, you can provide a Name and a Display Name for the node and write your own OpenAI prompt.

  • Prompt: A prompt allows you to define the request to be sent to the LLMs for generating a response. Some of the use cases for prompts include entity or topic extraction, rephrasing, or dynamic content generation. The prompt can have up to 2000 characters, and it can be defined using text, Context, Content, and Environment variables.
  • Preview Response: Click to preview the OpenAI response for your prompt. When you click Preview Response, the platform parses any variables used in the prompt and constructs the OpenAI request using the Prompt and the Advanced Settings. If the response is not relevant, you can tweak the Prompt and the Advanced Settings to improve it.
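As an illustration, a prompt for topic extraction might mix static instruction text with a context variable reference that the platform resolves at runtime. The variable name below is hypothetical:

```python
# Hypothetical topic-extraction prompt; the context variable name is
# illustrative and would be resolved by the platform at runtime.
prompt = (
    "Identify the main topic of the following message in one word.\n"
    "Message: {{context.lastUserUtterance}}"
)
print(prompt)
```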

Advanced Settings

In this section, you can change the model and tweak its settings.

Adjusting the settings allows you to fine-tune the model’s behavior to meet your needs. The default settings work fine for most cases. However, if required, you can tweak the settings and find the right balance for your use case.
Set System Context, Temperature, and Max Tokens

  • Model: The default model for which the settings are displayed. You can choose another supported model if it's configured. If you select a non-default model, it's used for this node only. If you want to change the default model, select the model in the drop-down list and use the Mark Default option shown next to its name.
  • System Context: Add a brief description of the use case context to guide the model.
  • Temperature: The setting controls the randomness of the model’s output. A higher temperature, like 0.8 or above, can result in unexpected, creative, and less relevant responses. On the other hand, a lower temperature, like 0.5 or below, makes the output more focused and relevant.
  • Max Tokens: The maximum number of tokens to use in the API call to the model. It affects the cost and the time taken to receive a response. A token can be as short as one character or as long as one word, depending on the text.
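Taken together, these settings roughly correspond to the fields of an OpenAI-style completion request. The sketch below is an assumption about the wire format (the model name matches the sample response later on this page), with the System Context shown prepended to the prompt:

```python
# Illustrative OpenAI-style completion request built from the settings above;
# the exact request format used by the platform is an assumption.
system_context = "You are a concise banking assistant."   # System Context
user_prompt = "Summarize: {{context.lastUserUtterance}}"  # hypothetical prompt

request = {
    "model": "text-davinci-003",        # model shown in the sample response
    "prompt": system_context + "\n\n" + user_prompt,
    "temperature": 0.5,                 # lower => more focused, relevant output
    "max_tokens": 256,                  # caps tokens used by the completion
}
print(request["temperature"], request["max_tokens"])
```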

Advanced Controls

In this section, you can select the maximum wait time to receive a response from the LLM and decide how the bot should respond when the timeout occurs.
Timeout settings for the node

  • Timeout: Select the maximum wait time from the drop-down list. The timeout can be any value between 10 and 60 seconds; the default is 10.
  • Timeout Error Handling: Choose how the bot should respond when the timeout occurs:
    • Close the Task and trigger Task Execution Failure Event
    • Continue with the task and transition to a selected node; select the node from the drop-down list.
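The two timeout-handling options can be sketched as follows; the event and node names here are illustrative, not the platform's internals:

```python
# Hedged sketch of the Timeout Error Handling choices; event and node
# names are illustrative.
def handle_timeout(strategy, fallback_node=None):
    if strategy == "close_task":
        # Option 1: close the task and trigger the Task Execution Failure Event.
        return {"action": "trigger_event", "event": "Task Execution Failure"}
    # Option 2: continue with the task and transition to the selected node.
    return {"action": "jump_to_node", "node": fallback_node}
```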

Instance Properties

On the Instance Properties tab, you can configure the instance-specific fields for this GenAI Prompt. These settings are applicable only for this instance and will not affect any other instances of this node.

Custom Tags

In this section, you can add Custom Meta Tags to the conversation flow to profile VA-user conversations and derive business-critical insights from usage and execution metrics. You can add tags for the following:

  • Message: Define custom tags to be added to the current message in the conversation.
  • User: Define custom tags to be added to the user’s profile information.
  • Session: Define custom tags to be added to the current conversation session.

For more information on custom tags, see Custom Meta Tags.

Connections Properties

On the Connections tab, you can set the transition properties to determine the node in the dialog task to execute next. You can write conditional statements based on the values of any Entity or Context Objects in the dialog task, or you can use intents for transitions. See Adding IF-Else Conditions to Node Connections for a detailed setup guide.

Note: These conditions apply only for this instance and will not affect this node when used in any other dialog.

About Responses

All the responses collected are stored in context variables. For example, {{context.GenerativeAINode.NodeName.properties}}. You can define transitions using the context variables.
The responses are captured in a specific format, as shown below.

{
  "context": {
    "GenerativeAINode": {
      "NodeName": {
        "id": "cmpl-7UbzLTumD9ALpfa1mcpf15dK3RnWM",
        "object": "text_completion",
        "created": 1687530223,
        "model": "text-davinci-003",
        "choices": [
          {
            "text": "\n\nI'm sorry, I'm not able to provide that information. However, I would be happy to direct you to a website that may provide the information you are looking for.",
            "index": 0,
            "logprobs": null,
            "finish_reason": "stop"
          }
        ],
        "usage": {
          "prompt_tokens": 58,
          "completion_tokens": 37,
          "total_tokens": 95
        },
        "1687530221473": [
          {
            "nodeId": "NodeName",
            "startTime": "2023-06-23T14:23:42.904Z",
            "endTime": "2023-06-23T14:23:46.029Z"
          }
        ]
      }
    }
  }
}
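Given a response in this format, the generated text can be read from the context object, for example to display it or to drive a transition condition. The snippet below mirrors the sample structure; the node name and the condition are illustrative:

```python
# Reading the generated text from a context object shaped like the sample
# response above; "NodeName" and the transition logic are illustrative.
context = {
    "GenerativeAINode": {
        "NodeName": {
            "choices": [
                {"text": "\n\nSure, I can help with that.", "finish_reason": "stop"}
            ],
            "usage": {"total_tokens": 95},
        }
    }
}

node = context["GenerativeAINode"]["NodeName"]
reply = node["choices"][0]["text"].strip()
# A transition condition might check that the completion finished normally.
completed = node["choices"][0]["finish_reason"] == "stop"
print(reply, completed)
```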
