The following list provides definitions of commonly used terms in conversation testing.
Dynamic Text Marking
The dynamic text marking feature lets you annotate a section of the text. During test execution, the platform ignores the annotated portion when performing text assertions. For more information, see Dynamic Text Marking.
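For illustration only, the following sketch shows how a marked span could be excluded from a text comparison. The {{...}} marker syntax and the function name are assumptions made for this example, not the platform's actual notation.

```python
import re

# Assumed marker syntax for a dynamic span in the expected response.
DYNAMIC = re.compile(r"\{\{.*?\}\}")

def assert_text(expected: str, actual: str) -> bool:
    """Compare bot responses, ignoring any span marked as dynamic text."""
    # Escape the literal parts and join them with a wildcard where a
    # dynamic span was marked, so that span is ignored in the comparison.
    parts = DYNAMIC.split(expected)
    pattern = ".*".join(re.escape(part) for part in parts)
    return re.fullmatch(pattern, actual, flags=re.DOTALL) is not None

# The order number changes on every run, so it is marked as dynamic text.
print(assert_text("Your order {{12345}} has shipped.",
                  "Your order 98765 has shipped."))    # True
print(assert_text("Your order {{12345}} has shipped.",
                  "Your order 98765 was cancelled."))  # False
```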
Test Assertion
A test assertion is an expression that encapsulates the testable logic defined for a conversation testing test case.
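As an illustration, an assertion can be thought of as a named condition evaluated to passed or failed against the bot's observed behavior. The assertion names and response text below are hypothetical.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Assertion:
    """Illustrative sketch: a named, testable condition evaluated
    against the bot's observed behavior for one user input."""
    name: str
    check: Callable[[], bool]

actual_response = "Sure, may I have your order number?"

assertions = [
    Assertion("asks for order number", lambda: "order number" in actual_response),
    Assertion("no greeting repeated", lambda: not actual_response.startswith("Hello")),
]

for a in assertions:
    print(a.name, "->", "passed" if a.check() else "failed")
```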
Test Coverage
Test coverage shows how much testing a test suite performs by capturing details such as how many transition flows and intents the suite covers. It helps you add test cases for the intents and transitions that are not yet covered.
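As a simplified sketch, intent or transition coverage can be viewed as the share of a bot's intents or transitions that the suite's test cases exercise. The intent names and figures below are made up for illustration.

```python
def coverage(covered: set, total: set) -> float:
    """Percentage of items (intents or transitions) exercised by the suite."""
    return 100.0 * len(covered & total) / len(total) if total else 0.0

bot_intents = {"OrderStatus", "CancelOrder", "TrackShipment", "Help"}
tested_intents = {"OrderStatus", "Help"}

print(f"Intent coverage: {coverage(tested_intents, bot_intents):.0f}%")  # 50%
# Intents not yet covered are candidates for new test cases.
print("Missing:", bot_intents - tested_intents)
```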
Test Case
A test case is a set of actions designed to test the behavior of a conversational system in a specific scenario. Test cases ensure that the conversational system functions correctly and meets the specified requirements and guidelines. When creating a test case, you define a set of user inputs and expected bot responses, then run the test by interacting with the conversational system and verifying that it returns the expected responses.
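For illustration only, a test case can be pictured as an ordered list of user inputs paired with expected bot responses. The field names and the send_to_bot callable below are assumptions, not the platform's test case format.

```python
# Field names are illustrative assumptions, not the platform's schema.
test_case = {
    "name": "Order status - happy path",
    "steps": [
        {"user_input": "Where is my order?",
         "expected_response": "Sure, may I have your order number?"},
        {"user_input": "12345",
         "expected_response": "Your order has shipped."},
    ],
}

def run_test_case(case, send_to_bot):
    """Send each user input to the bot (send_to_bot is a hypothetical
    client callable) and verify that the expected response is returned."""
    results = []
    for step in case["steps"]:
        actual = send_to_bot(step["user_input"])
        results.append(actual == step["expected_response"])
    return all(results)

# Example run against a stubbed bot that always asks for the order number:
print(run_test_case(test_case,
                    lambda text: "Sure, may I have your order number?"))  # False
```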
Test Suite
A conversation test suite is a set of test cases that testers can execute together and use to report the test execution results.
Test Suite Metadata
The test suite metadata captured in conversation testing consists of details such as intent IDs, node IDs, intent and node names, and transition flows that provide more information about a test suite. This metadata can be used to track test coverage and to perform flow and text assertions.
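As a rough illustration, such metadata might be pictured as a record like the one below; the field names and values are assumptions, not the platform's schema.

```python
# Illustrative metadata record; field names are assumptions.
suite_metadata = {
    "intents": [{"intent_id": "idp-101", "intent_name": "OrderStatus"}],
    "nodes": [{"node_id": "node-7", "node_name": "confirmOrderNumber"}],
    "transition_flows": [["intentNode", "entityNode", "messageNode"]],
}

# The same record can drive coverage tracking and flow/text assertions.
covered_intents = {i["intent_name"] for i in suite_metadata["intents"]}
print(covered_intents)  # {'OrderStatus'}
```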
Transition Flow of Nodes
In conversation testing, you verify that the correct sequence of nodes is traversed in the background for a given user input. For more information, see Nodes.
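As an illustration, a flow assertion can be pictured as a comparison between the expected node sequence and the sequence actually traversed during the test run; the node names below are hypothetical.

```python
def assert_flow(expected_nodes, actual_nodes):
    """Flow assertion: passes only when the bot traversed exactly the
    expected sequence of nodes for the user input."""
    return expected_nodes == actual_nodes

expected = ["intentNode", "entityNode", "confirmationNode", "messageNode"]
actual = ["intentNode", "entityNode", "messageNode"]  # as recorded by a test run

print(assert_flow(expected, actual))  # False: confirmationNode was skipped
```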