Virtual Assistant Health and Monitoring

The Health and Monitoring dashboard offers a goal-driven approach to improving the accuracy of the virtual assistant’s Natural Language Processing (NLP) model. The training data is analyzed along with the test coverage and test results of the test suites to provide insights into the NLP Model’s performance.

This dashboard lets you achieve the following:

  • Review the test execution summary for every intent type.
  • Identify incorrect intent patterns, short training utterances, incorrect entity annotations, and training recommendations and take corrective action.
  • Drill down to specific test cases to determine test performance and coverage.
  • View the expected and matched results, and the detailed NLP analysis.
  • Tag specific test case results that need follow-up actions and collaborate with your team to improve the performance.

Note: The Health & Monitoring dashboard is available only in release 9.3 and later (post July 24, 2022).

Navigating to Health and Monitoring

To navigate to the Health and Monitoring dashboard, follow these steps:

  1. Click the Build tab on the top menu of the Virtual Assistant dashboard.
  2. Click Health & Monitoring under Testing in the left navigation menu.

Important Health and Monitoring Metrics

The following metrics help in your ML Model Validation (a worked example follows this list). Learn more.

  • Accuracy: Determines if the intent identified by your ML model is correct or not.
  • F1 Score: Balances precision and recall in a single measure. It is calculated as the harmonic mean of Precision and Recall.
  • Precision Score: Defines how precise/accurate your model is and is calculated as the ratio of true positives over total predicted positives (sum of true and false positives).
  • Recall Score: Defines the fraction of the relevant utterances that are successfully identified and is calculated as the ratio of true positives over actual positives (sum of true positives and false negatives).
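
As a concrete reference, here is a minimal sketch in Python of how these four scores fall out of the confusion-matrix counts of a test run; the counts used are illustrative, not taken from any real execution:

```python
# Minimal sketch: deriving the dashboard's validation metrics from the
# confusion-matrix counts (TP/FP/FN/TN) of a test run. The counts passed
# below are illustrative, not taken from a real execution.
def nlp_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    total = tp + fp + fn + tn
    accuracy = (tp + tn) / total if total else 0.0
    # Precision: true positives over all predicted positives.
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    # Recall: true positives over all actual positives.
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    # F1: harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"accuracy": accuracy, "precision": precision, "recall": recall, "f1": f1}

print(nlp_metrics(tp=42, fp=3, fn=5, tn=50))
# ≈ {'accuracy': 0.92, 'precision': 0.93, 'recall': 0.89, 'f1': 0.91}
```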

Health and Monitoring Dashboard Components

The key components of the NLP Health and Monitoring dashboard include the coverage and execution summary panels under the Bot Health section described below:

Bot Health

The Bot Health section displays the key performance metrics and the total test coverage of the selected test suites for Dialog intents, FAQs, Small Talks, and Traits. The Health meter depicts whether your virtual assistant is well trained, based on the key recommendation scores.

Note: You can select one, more, or all of the test suites from the dropdown to view NLP Analytics.

Test Cases Detailed Analysis

To get the detailed NLP analysis data of the test cases, click the View Test Cases link. This displays the summary of all the test cases executed in the selected test suite.

The Test Cases – Detailed Analysis window displays test results separately for Intents, Entities, and Traits, as described below, so that you can identify errors or areas of improvement in each category and fix them.

Intents

The Intents section displays a tabular view of the test cases executed for the Dialog Intents, FAQs, and Small Talks. The primary details displayed from the test results include the following:

  • Test Case: The test case that is executed.
  • Intent Type: Displays Dialog, FAQ, or Small Talk.
  • Test Suite: The test suite to which the test case is mapped.
  • Expected Intent: The intent that is expected to be identified from the given set of utterances.
  • Matched Intent: The intent that is matched during test execution from the given set of utterances.
  • Result Type: The result categorized as True Positive, True Negative, False Positive, or False Negative (see the sketch after this list).
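
The bucketing of a test case into these four result types can be sketched as follows. This is an illustrative reconstruction of the logic described above, not Kore.ai's actual implementation, and the intent names are made up:

```python
from typing import Optional

# Illustrative bucketing of a test case into the four Result Type values,
# based on the expected and matched intents. Not Kore.ai's implementation.
def result_type(expected: Optional[str], matched: Optional[str]) -> str:
    if expected is not None:
        if matched == expected:
            return "True Positive"   # the expected intent was matched
        if matched is None:
            return "False Negative"  # the expected intent was missed
        return "False Positive"      # a different intent was matched
    if matched is not None:
        return "False Positive"      # nothing expected, yet an intent matched
    return "True Negative"           # nothing expected, nothing matched

assert result_type("Transfer Funds", "Transfer Funds") == "True Positive"
assert result_type("Transfer Funds", None) == "False Negative"
assert result_type(None, "Check Balance") == "False Positive"
assert result_type(None, None) == "True Negative"
```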

Entities and Traits

The Entities and Traits sections display a tabular view of the test case results for the selected Dialog Intents, FAQs, and Small Talks based on the entities and traits identified respectively. The following primary details are displayed in the respective panels:

Entities

  • Utterances: The utterances from the user’s input captured in the test cases.
  • Entity Name: The entity name identified during test execution.
  • Expected Value: The entity value expected to be identified from the given set of utterances.
  • Matched Value: The entity value identified and matched from the given set of utterances during test execution.
  • Entity Result: Result categorized as True Positive, True Negative, False Positive or False Negative.

Traits

All the primary fields for Intents are displayed along with the Trait Name identified for each test case.

Tags

After analyzing the reason for failure, you can collaborate with your team members using tags for test case executions. Tags are labels mapped to the test case results of intents, entities, and traits, indicating follow-up actions or suggestions.

The following tags are available for intents, entities, and traits:

  • Add Negative Pattern: Indicates that the user has to add a negative pattern to the intent/entity/trait test execution.
  • NeedNLPHelp: Indicates that the test execution requires explicit NLP help.
  • Needs Negative Pattern: Indicates that the intent/entity/trait test execution needs a negative pattern to execute as expected.
  • Needs Training: Indicates that the virtual assistant needs training for the identified intent/entity/trait after the test execution.
  • New Intent: Indicates a new intent during test execution.

Analyzing Test Results

The test execution results for the selected test suite(s) and intent type can be analyzed in the details window, which provides a drill-down view of the following performance metrics for intents, entities, and traits:

(The categories in parentheses indicate whether the metric applies to Intents, Entities, or Traits.)

  • Expected Intent/Value (Intent, Entity, Trait): Please refer to the Intents section.
  • Matched Intent/Value (Intent, Entity, Trait): Please refer to the Intents section.
  • Parent Intent (Intent, Trait): Learn more.
  • Task State (Intent, Trait): The status of the intent or task against which the intent is identified. Possible values include Configured or Published.
  • Result Type (Intent, Trait): Please refer to the Intents section.
  • Matched Intent Score and Expected Intent Score (Intent, Trait): Displays the individual scores for the matched and expected intents.
  • Entity Name (Entity): Please refer to the Entities section.
  • Result (Entity): Returns True if an entity is identified and False if not.
  • Identified by (Entity): The NLU engine that identified the entity.
  • Identified using (Entity): The reference entity type that was used to identify the entity during test execution.
  • Confidence Score (Entity): A score that indicates whether the test execution resulted in a favorable outcome (high score) or not (low score) when an utterance is trained for the entity.
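
For readers scripting against exported test results, the entity-related fields above could be modeled roughly as below; the record and field names are hypothetical and do not reflect the platform's API or export schema:

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical record mirroring the entity-related fields listed above.
# Names are illustrative only, not Kore.ai's export or API schema.
@dataclass
class EntityTestResult:
    utterance: str                  # user input captured in the test case
    entity_name: str                # entity identified during execution
    expected_value: Optional[str]   # value expected from the utterance
    matched_value: Optional[str]    # value actually identified, if any
    result: bool                    # True if the entity was identified
    identified_by: str              # NLU engine that identified the entity
    identified_using: str           # reference entity type used
    confidence_score: float         # high = favorable outcome, low = not
```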

Navigating to the Details Section

To view the Details section, follow these steps:

  1. In the Test Cases – Detailed Analysis window, click the Intents, Entities, or Traits tab.
  2. Hover over the desired entry, and click the detailed view icon.
  3. A sliding window appears with the test results for the selected test case and intent type, showing the Intent and Entity Details. Trait Details are displayed instead when you select the Trait intent type.

  4. Click the expansion arrow icon under Entity to view the entity order expected by the ML engine and the actual entity order.

NLP Analysis

The NLP Analysis section displays a detailed view of the historic analysis generated at the time of the test case execution, for both failed and successful test cases. For the selected intent type, this section gives an overview of the intents that were qualified (the definitive and probable matches) and disqualified, which is crucial information for users trying to decode the reason for failed test cases. These details are displayed as a graphical representation in this section.

This is different from analyzing the test results under Utterance Testing, where the current analysis information is displayed based on the changes to the trained data. Learn more.

To view the NLP Analysis section, follow these steps:

  1. Follow steps 1 to 3 in Navigating to the Details Section above.
  2. Click the NLP Analysis tab.

Utterance Testing

Based on the test case failures, you can retrain your virtual assistant using the Utterance testing option for all possible user utterances and inputs. Training is how you enhance the performance of the NLP engine to prioritize one task or user intent over another based on the user input. To learn more, please refer to Training the Bot.

To navigate to the Utterance Testing window, follow these steps:

  1. Click the go to utterance testing (magic wand) icon on the Test Cases – Detailed Analysis page.

In the Utterance Testing window, you can do the following:

  • Test & train your virtual assistant based on these recommendations to understand different user utterances and match them with intents and entities.
  • View the NLP analysis flow and Fields/Entities analysis data including the confidence score based on the NER training.
  • Use the Mark as an incorrect match link to match the user input with the right intent when it is mapped to an incorrect task.

Dialog Intent Summary

This section provides the performance metrics, test coverage, and analytics for the Dialog Intents test cases only.

The sub-sections available include:

Test Coverage

This section displays the count and percentage of the intents covered and not covered. You can find the list of intents not covered using the View details option and start adding test cases for them. An intent is considered covered when it has at least one test case in the selected test suite(s); see the sketch below.
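
The coverage figure itself is straightforward to compute. The sketch below assumes two inputs, the set of all defined intents and the set of intents referenced by at least one test case in the selected suite(s); the intent names are illustrative:

```python
# Sketch of the coverage calculation, assuming two inputs: the set of all
# defined intents and the set of intents referenced by at least one test
# case in the selected suite(s). Intent names below are illustrative.
def test_coverage(all_intents: set, tested_intents: set) -> dict:
    covered = all_intents & tested_intents
    not_covered = all_intents - tested_intents
    pct = 100.0 * len(covered) / len(all_intents) if all_intents else 0.0
    return {"covered": len(covered),
            "not_covered": sorted(not_covered),
            "coverage_pct": pct}

print(test_coverage({"Transfer Funds", "Check Balance", "Update Balance"},
                    {"Transfer Funds"}))
# {'covered': 1, 'not_covered': ['Check Balance', 'Update Balance'],
#  'coverage_pct': 33.33...}
```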

Test Results Analysis

This section gives the breakdown of the test case results for the given intent type. The result type could have one of the following values:

  • True Positive (TP): Percentage of utterances that correctly matched the expected intent. For Small Talk, this is when the lists of expected and actual intents are the same. For Traits, this also includes the traits matched over and above the expected matches.
  • False Positive (FP): Percentage of utterances that matched an unexpected intent. For Small Talk, this is when the lists of expected and actual intents are different.
  • False Negative (FN): Percentage of utterances that did not match the expected intent. For Small Talk, this is when the expected Small Talk intent list is blank but the actual Small Talk is mapped to an intent.

Recommendation Notification: Shows any training recommendations available for the dialog intents.

View Recommendations

You can view relevant training recommendations for dialog intents, FAQs, or Small Talks when errors and warnings are triggered during the test execution. To view the recommendations summary, click View Recommendations on the top right of the details page.

To view the details of the utterance validations, errors, warnings, and recommendations and correct them, click the Recommendations column.

Viewing Specific Test Results

To know how to get the drill-down view of a specific test case execution, please refer to the Test Cases – Detailed Analysis section.

FAQ Summary

The FAQ Summary section displays the recommendation scores generated for FAQs from the latest batch test executions.

Viewing Additional FAQ Recommendations

For FAQ Details, clicking View Recommendations displays the report generated during the previous test run. To view additional recommendations, run the Inspect function. Learn more.

Knowledge Graph: Clicking this button takes you to the Knowledge Graph section, where you can perform KG Analysis.

Small Talk Summary

The Small Talk Summary panel displays the recommendation scores generated for Small Talk interactions from the latest batch test executions.

Small Talk button: Click this button to view the group name, the relevant user utterances, and the bot utterances.

Trait and Entity Summary Information

The Trait Summary and Entity Summary sections display the recommendation scores generated for traits and entities respectively from the latest batch test executions.


Test Coverage and Test Results Analysis

Please refer to Test Coverage and Test Results Analysis for information on the sub-sections of these summary panels.

Intent Details Window

The View Details link in the Dialog Intent, FAQ, and Small Talk summary sections provides a drill-down view of the key performance metrics and recommendations for the covered intents. This data helps you proactively identify intent-related issues during the training phase and fix them.

Here’s what you can do:

View the Training Data Summary

You can view the training data summary with the relevant recommendation metrics for Dialog Intents, FAQs, and Small Talks in the details panel.

The summary of all the metrics displayed is given below:

(Metrics marked with an intent type apply only to that type.)

  • Intent: The name of the dialog intent, FAQ, or Small Talk interaction.
  • Utterances: The count of the training utterances for that intent (N/A for Small Talk).
  • Test Cases: The count of the test cases present in the selected test suites for that intent.
  • True Positive (TP): The count of the intent test cases that resulted in TP.
  • False Negative (FN): The count of the intent test cases that resulted in FN.
  • False Positive (FP): The count of the intent test cases that resulted in FP.
  • Covered In: The names of the test suites in which the intent test cases are present.
  • F1, Accuracy, Precision, and Recall scores: These recommendation scores are displayed based on the outcomes.
  • Recommendations (Dialog Intent only): The count of training recommendations for that intent. Clicking it displays the summary of the training recommendations and their probable corrective actions.
  • Group (Small Talk only): The group to which the Small Talk interaction is mapped.
  • Path (FAQ only): The node path in the Knowledge Graph.
  • Alt Question (FAQ only): The number of alternative questions mapped to an FAQ.

View Intents Not Covered

This feature helps identify the intents that are not covered, so that you can include them in the test data for more holistic testing of the virtual assistant. Click the three-dot menu on the right side of the panel to view the list of intents not covered in batch testing.

You can include the intents from this list to retrain your virtual assistant and improve performance.
