Unless explicitly noted, there is no special engagement-level calculation: each metric is computed by aggregating data across all of the engagement's workspaces.
Data from archived workspaces is included as long as those workspaces were active within the selected date range, even if they are currently archived.
| Metric | What it measures | How it is calculated |
| --- | --- | --- |
| Issues | The total approved issues reported from manual and automated test Runs. | Distinct count of issue IDs by creation date, where the issue is approved, is not deleted, and was not created by an integration. |
| Runs | Manual and automated Test Runs started within the selected date range. | Distinct count of run IDs by start date, where the run is not deleted and has approved hours. |
| Tests | Test executions finished within the selected date range. | Distinct count of Test executions within Runs, where the test execution is finished. |
| Devices/OS | Unique tested device/OS combinations. | Distinct count of device/OS combinations related to approved tasks within Test Runs. |
| Testers | Unique QA Testers who participated in test Runs started within the selected date range. | Distinct count of QA tester IDs, defined as the user ID associated with an approved task, by task start date. |
| Issues per Hour | The average number of issues per hour of manual testing within the selected date range. | Sum of approved Issues (see description) divided by the sum of approved manual testing hours on Runs started within the selected date range. |
| Locations | Unique countries included in test tasks finished within the selected date range. | Distinct count of country names of testers (see description) allocated to finished tasks for test executions where hours have been approved. |
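To make the aggregation logic concrete, here is a minimal sketch of how a ratio metric like Issues per Hour could be computed: a distinct count of approved issue IDs divided by the sum of approved manual testing hours across Runs. The field names (`approved_issue_ids`, `approved_manual_hours`) are illustrative only and do not reflect the product's actual schema.

```python
def issues_per_hour(runs):
    """Distinct approved issues divided by total approved manual hours.

    `runs` is a list of dicts; field names are hypothetical.
    """
    # Distinct count: the same issue counted once even if it spans Runs.
    total_issues = len({i for run in runs for i in run["approved_issue_ids"]})
    total_hours = sum(run["approved_manual_hours"] for run in runs)
    return total_issues / total_hours if total_hours else 0.0

runs = [
    {"approved_issue_ids": {101, 102, 103}, "approved_manual_hours": 2.0},
    {"approved_issue_ids": {103, 104}, "approved_manual_hours": 3.0},
]
print(issues_per_hour(runs))  # 4 distinct issues / 5 hours = 0.8
```

Note that issue 103 appears in both Runs but is counted once, mirroring the distinct-count rule used by the other metrics in the table above.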