
Training Validations

NLP models play a significant role in providing natural conversational experiences for your customers and employees. Improving the accuracy of the NLP models is a continuous journey and requires fine-tuning as you add new use cases to your virtual assistant. The Kore.ai XO Platform proactively validates the NLP training provided to virtual assistants and provides recommendations to improve the model. This article explains the available validations, how to view them, and how to validate the NLU Model.

Goal-Driven Training Validations

The ML engine enables you to identify issues proactively in the training phase itself with the following set of recommendations: 

  • Untrained Intents – notifies you about intents that are not trained with any utterances so that you can add the required training.
  • Inadequate training utterances – notifies you about intents with insufficient training utterances so that you can add more utterances to them.
  • Utterance does not qualify any intent (false negative) – notifies you about an utterance for which the NLP model cannot predict any intent. For example, an utterance added to Intent A is expected to predict Intent A, but in some cases the model can predict neither the trained Intent A nor any other intent in the model. Proactively identifying such cases helps you rectify the utterance and enhance the model’s predictions.
  • Utterance predicts wrong intent (false positive) – identifies utterances that predict intents other than the trained intent. For example, when you add an utterance similar to the utterances of another intent, the model could predict a different intent rather than the one it is trained for. Knowing this helps you rectify the utterance and improve the model’s predictions.
  • Utterance predicts intent with low confidence – notifies you about utterances that have low confidence scores. With this recommendation, you can identify and fix such utterances to improve the confidence scores during the virtual assistant creation phase.
  • Incorrect Patterns – notifies you about patterns that do not follow the right syntax, along with the error. You can resolve such incorrect patterns to improve intent identification. In addition, the Platform now supports the following non-CS languages for Virtual Assistant conversations:
    • Polish: Support for stop words and entity detection improvements for Number, City, Country, Person, Date, Time, Zip Code, Address, and LoV Enumerated.
    • Japanese: Support for stop words and improvements in entity detection, intent patterns, sub-intents, and bot synonyms.
    • Arabic: Support for stop words and improvements in phone number entity detection.
    • Hinglish (Hindi + English): Support for this dual-language combination is newly introduced.

  • Wrong Entity Annotations – notifies you about wrongly annotated entities. For example, in the utterance ‘I want to travel to Hyderabad on Sunday 2pm’, the Travel Date (Date type) entity is annotated with the value ‘2PM’ (a Time value). The platform checks for such wrong annotations and flags the issue against the utterance, which helps you re-annotate the entity with the right values and improve entity recognition.
  • Short Utterance – notifies you about utterances whose word count is less than or equal to two. This helps you follow best practices for utterance length, so that each utterance depicts an actual end-user query, and further improves the model’s accuracy. A minimal code sketch after this list illustrates a few of these checks.
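
To make the checks above concrete, the following is a minimal, hypothetical sketch of how a few of them could be expressed in code. The data shape, the thresholds, and the predict callable are illustrative assumptions, not the Kore.ai XO Platform’s actual implementation.

```python
# Hypothetical sketch of a few training validations; the thresholds and
# the predict() callable are illustrative assumptions, not the Kore.ai
# XO Platform's implementation.

MIN_UTTERANCES = 5    # assumed threshold for "inadequate training utterances"
MIN_CONFIDENCE = 0.6  # assumed cutoff for "low confidence"

def validate(training_data, predict):
    """training_data: dict mapping intent name -> list of utterances.
    predict: callable(utterance) -> (predicted intent or None, confidence)."""
    issues = []
    for intent, utterances in training_data.items():
        if not utterances:
            issues.append((intent, "untrained intent"))
            continue
        if len(utterances) < MIN_UTTERANCES:
            issues.append((intent, "inadequate training utterances"))
        for u in utterances:
            if len(u.split()) <= 2:
                issues.append((intent, f"short utterance: {u!r}"))
            predicted, confidence = predict(u)
            if predicted is None:
                issues.append((intent, f"false negative: {u!r}"))
            elif predicted != intent:
                issues.append((intent, f"false positive: {u!r} -> {predicted}"))
            elif confidence < MIN_CONFIDENCE:
                issues.append((intent, f"low confidence ({confidence:.2f}): {u!r}"))
    return issues

# Usage with a dummy model that always guesses "Check Balance" at 0.55:
data = {"Book Flight": ["book a flight", "fly me to Paris tomorrow"],
        "Check Balance": []}
print(validate(data, lambda u: ("Check Balance", 0.55)))
```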

How to View NLU Training Validations

  1. On the virtual assistant’s Build menu, click Natural Language > Training.
  2. In the Intents tab, you can see the set of recommendations for the Intents and ML utterances.

    Note: The errors and warnings in this screen are examples. The ML validations vary based on the error or warning recommendation, as explained in the Goal-Driven Training Validations section above.
  3. Hover over the validation options and view the following recommendations:
    • Hover on the Error icon to view the recommendations to resolve the error.
    • Note: An Error is displayed when the intent has a definite problem that impacts the virtual assistant’s accuracy or intent score. Errors are high-severity problems.

    • Hover on the Warning icon and follow the instructions in the warning to enhance the training for ML utterances.

    Note: A warning is displayed when the issue impacts the VA’s accuracy and can be resolved. Warnings are less severe than errors.

  4. After you click an intent with an error or warning, hover over the Bulb icon to view a summary of the error or warning messages, as illustrated below:

How to Use the NLU Validate Model

The NLU Validate Model provides options to view the training module’s recommendations summary, refresh the recommendations, and view the Confusion Matrix and K-fold Cross Validation reports.

The Validate Model helps you follow best practices and quickly attain NLP accuracy for your Virtual Assistants.

Recommendations Summary

The Recommendations Summary is available when validating your model. This feature gives insights into the problem areas, errors, and warnings for the intents that are the building blocks of your virtual assistant’s training, along with recommendations on corrective actions for better NLU accuracy.

To view the Recommendations Summary, follow the steps below:

  1. Navigate to Build > Natural Language and click the Training option on the left menu.
  2. On the Training page, click the Validate Model button.
  3. The recommendations summary slider appears with the list of issues and the recommended actions to fix them.

    Note: All the recommendations are listed under the All tab. Under the Error and Warning tabs, you can view recommendations specific to errors and warnings, respectively.

Understanding the Recommendations

In a recommendation message such as “5 intents have Patterns with invalid syntax”, 5 is the count of intents with the issue (patterns with invalid syntax). The intent count is displayed first for any recommendation, followed by the issue type. When you fix an issue based on the recommendation, the training model updates, and the platform automatically triggers a background task to refresh the recommendations summary.

Note: If the training model is updated with all the modifications, the platform automatically triggers a background task to refresh recommendations without explicitly informing the users.

Filter the Recommendations

Previously, for example, when an issue titled “5 Intents have utterances with low confidence score” was displayed, there was no way to identify the affected intents immediately. The user had to move the cursor over each intent in the list to identify the ones causing training issues.

The Platform now provides a Filter option in the training module to quickly identify intents and training data with issues and fix them. When applied to the recommendations list, the filter groups all the related issues and displays them in the Training panel to give you a focused view of all the open training issues.

To filter the recommendations summary issue, follow the steps below:

  1. Hover over the desired issue on the recommendations summary slider and click the Filter button.
  2. In the Training panel, all the intents with the issue type are listed. For example, if you select ‘43 Intents have no utterances’, the platform filters out those 43 intents from the list of all intents. A note on the applied filter is displayed on the panel, along with an option to reset the applied filter, as shown below:
  3. Click the +Utterance, +Pattern, or +Rule link for an intent issue type to add the relevant values for training.
  4. Enter the values in the relevant text box under the Intent Summary window’s Utterance/Pattern/Rule tab and press Enter. The Intent Summary window displays the context to resolve the filtered issue.
  5. The utterance/pattern/rule is created successfully and mapped to the intent.
  6. Once you resolve an intent type issue, the system updates (decrements) the issue count and the error indicator (red dot) on the Recommendation Summary panel.
  7. Similarly, the system updates the Error and Warning counts if an error or warning is resolved.

Important

  • You can filter only one recommendation at a time.
  • When a filter is selected, it applies to the Intent training, utterance, rule, and pattern-level recommendations for the selected intent type.

Reset the Intent Recommendation Filter

With the CLEAR FILTER option, you can clear or reset a filter selection for an intent type issue. Once the filter is reset, an unfiltered view of all the intent names and intent types (errors and warnings) with all the training data (Utterances, Patterns, Negative Patterns, etc.) is displayed.

To reset the intents recommendation filter, follow the steps below:

  1. Click the CLEAR FILTER link under the Training > Intents tab.
  2. All the intent type entries are displayed without filtering the data.

 

Refreshing Recommendations

In the recommendations summary, you can refresh the recommendations in scenarios such as the following:

  • Modify training data: When the training data is modified, you can check for new recommendations.
  • Implement recommendations: When a recommendation is implemented, you can verify if it is resolved.
  • Add new training data: When new intents or utterances are added, you can check for new recommendations.
When you click the Refresh icon, a timestamp of the last refresh is displayed along with the refreshed recommendations list. The timestamp is updated with every refresh.

If there are untrained utterances in the training model, the platform provides you with the following options:

Train and Regenerate

This option allows you to train the model with the untrained utterances and then triggers a background task to generate recommendations from the latest model.

The following steps explain the Train & Regenerate usage with an example:

  1. Click any intent and add a new utterance.
  2. Once the utterance is added, go to the main page and click the Refresh icon to refresh the recommendations. The following pop-up is displayed when there are untrained utterances.
  3. Click the Train & Regenerate button to train the utterances and regenerate the recommendations.
  4. A message about training initiation is displayed.
  5. Once the training and the regeneration of recommendations are completed, the status is displayed in the status docker.

Regenerate

Click the Regenerate button in the displayed pop-up if you want to trigger a background task that generates recommendations from the current model.

Note: While a refresh is in progress, you cannot trigger the task again. If you click the Refresh icon while the model is being trained, recommendations from the latest Validate Model are generated only after the training is completed.

When you click the Refresh icon, the recommendations summary is refreshed: the count of recommendations increases or decreases, or the recommendations are updated, based on the latest results.

For example, if new utterances are added to an intent, a new recommendation may be added to the summary list. Similarly, if a recommendation is implemented, the count of recommendations decreases. In a recommendation like “13 intents have very short utterances”, if 2 intents are fixed, the recommendation is updated to “11 intents have very short utterances”.

NLU Validation Options

Next to the Validate Model button, there is a drop-down with options to select either the Confusion Matrix or the K-fold Cross Validation report.

You can also click the Go button to access the Confusion Matrix and K-fold Cross Validation reports.

Confusion Matrix

A Confusion Matrix is useful for describing the performance of a classification model (or classifier) on a set of test data for which the true values are known. The graph generated by the confusion matrix presents an at-a-glance view of the performance of your trained utterances against the virtual assistant’s tasks. To learn more, see the Confusion Matrix section in Model Validation.
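
As a general illustration of the idea (a generic scikit-learn sketch, not anything Platform-specific; the intents and predictions below are made up), a confusion matrix cross-tabulates true intents against predicted intents:

```python
# Generic confusion-matrix illustration with scikit-learn; the intents
# and predictions are made-up examples, not Platform output.
from sklearn.metrics import confusion_matrix

true_intents = ["Book Flight", "Book Flight", "Check Balance",
                "Check Balance", "Transfer Funds"]
predicted = ["Book Flight", "Check Balance", "Check Balance",
             "Check Balance", "Transfer Funds"]

labels = ["Book Flight", "Check Balance", "Transfer Funds"]
matrix = confusion_matrix(true_intents, predicted, labels=labels)

# Rows are true intents, columns are predicted intents; off-diagonal
# cells show which intents the model confuses with one another.
for label, row in zip(labels, matrix):
    print(f"{label:15} {row}")
```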

The following screenshot shows the confusion matrix report.

  • If no data is available in the Confusion Matrix, you can click the Generate button to create a report.
  • Whenever the Validate Model is updated, you can click Re-Run Model to generate the latest matrix. Once you rerun the model, the platform prompts you to either Train and Regenerate the recommendations if the model has unsaved changes, or Regenerate the recommendations if the model is up to date.
  • Whenever you Train and Regenerate the recommendations, the matrix is also regenerated. Similarly, whenever you regenerate the matrix, the recommendations are updated too.
K-Fold Cross Validation

K-Fold Cross-Validation is a resampling procedure used to evaluate machine learning models on a limited data sample. The technique involves partitioning the data into subsets, training the model on some of the subsets, and using the remaining subsets to evaluate the model’s performance. To learn more, see the K-Fold Cross Validation section in Model Validation.
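
For a concrete sense of the procedure (again a generic scikit-learn sketch with toy data, not the Platform’s internal pipeline), 5-fold cross-validation trains on four folds and evaluates on the held-out fifth, rotating the held-out fold:

```python
# Generic k-fold illustration with scikit-learn; the classifier and toy
# data are assumptions for demonstration, not the Platform's pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

utterances = ["book a flight to Paris", "reserve a plane ticket",
              "what is my account balance", "show my balance",
              "transfer money to savings", "send funds to a friend"] * 5
intents = ["Book Flight", "Book Flight", "Check Balance",
           "Check Balance", "Transfer Funds", "Transfer Funds"] * 5

model = make_pipeline(TfidfVectorizer(), LogisticRegression())

# cv=5 partitions the data into 5 folds: train on 4, evaluate on the
# remaining 1, rotating so every sample is held out exactly once.
scores = cross_val_score(model, utterances, intents, cv=5)
print("Per-fold accuracy:", scores.round(3), "Mean:", scores.mean().round(3))
```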

The following screenshot shows the K-Fold Cross-Validation report.
