The Test Insights page provides visualizations of performance information for a specific test to help you identify issues with test performance and flakiness across the platforms, operating systems, and browsers that you test against.

Using Extended Debugging to Help Diagnose Flaky Tests

You can also use the Extended Debugging feature, which provides access to the HAR files and JavaScript console logs for your tests, to help identify flaky tests. Check out Debugging Tests with JavaScript Console Logs and HAR Files (Extended Debugging) for more information.
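
If you want to capture this data for a session, Extended Debugging is typically enabled through the session's capabilities. The following is a minimal sketch assuming Selenium 4 in Python and the extendedDebugging flag inside sauce:options; the credentials, endpoint region, and example URL are placeholders you would replace with your own:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.platform_name = "Windows 10"
    options.browser_version = "latest"
    options.set_capability("sauce:options", {
        "username": "YOUR_SAUCE_USERNAME",     # placeholder - use your own credentials
        "accessKey": "YOUR_SAUCE_ACCESS_KEY",  # placeholder
        "extendedDebugging": True,             # capture JavaScript console logs and HAR files
    })

    # The region in the endpoint may differ for your account.
    driver = webdriver.Remote(
        command_executor="https://ondemand.us-west-1.saucelabs.com/wd/hub",
        options=options,
    )
    try:
        driver.get("https://www.saucedemo.com")  # example site; replace with your app under test
    finally:
        driver.quit()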

Accessing Test Insights

If you have an Enterprise account with Sauce Labs, access to Test Insights is available by default. In the left-hand navigation, you will see two sub-menus within Analytics: Trends and Test Insights. Click Test Insights, and the page will automatically load the last test you ran.

Viewing Test Insights for a Test

You can apply the same filter criteria that you use for analyzing test trends to drill down on the performance of a test.

  1. Click Test Insights in the Sauce Labs dashboard navigation.
  2. In the Search field, enter the name of the test you want to view (see the example after these steps).
  3. Select the filter options you want to apply, and the visualization (test metrics and scatter plot) will update with each selection.
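
The name you search for in Test Insights matches the name your test session reports to Sauce Labs. Here is a minimal sketch of setting that name, assuming Selenium 4 in Python and the name field of sauce:options; the credentials, endpoint region, test name, and build value are placeholders:

    from selenium import webdriver

    options = webdriver.ChromeOptions()
    options.set_capability("sauce:options", {
        "username": "YOUR_SAUCE_USERNAME",      # placeholder
        "accessKey": "YOUR_SAUCE_ACCESS_KEY",   # placeholder
        "name": "Sign In",                      # the name to search for in Test Insights
        "build": "release-42",                  # optional: groups runs for filtering (placeholder value)
    })

    driver = webdriver.Remote(
        command_executor="https://ondemand.us-west-1.saucelabs.com/wd/hub",  # region may differ
        options=options,
    )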

Reading the Scatter Plot Visualization

The test performance visualization displays a scatter plot of each test run for that test, using the time period and other filter criteria that you select. At the top of the visualization are six performance statistics for the test (illustrated by the sketch after this list):

  • Total Runs - the total number of test runs for the selected period
  • Total Errors - the total number of test runs that did not complete
  • Total Failures - the total number of test runs with a recorded status of "Failed"
  • Fastest Runtime - the duration of the fastest test run
  • Slowest Runtime - the duration of the slowest test run
  • Average Queue Time - the average amount of time that the test waited in the queue before execution
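
To make these definitions concrete, the sketch below derives the same six statistics from a list of run records. The records, statuses, and field names are purely illustrative, not a Sauce Labs data format:

    # Hypothetical run records; statuses and fields are illustrative only.
    runs = [
        {"status": "passed",  "duration_s": 42.0, "queue_s": 3.1},
        {"status": "failed",  "duration_s": 65.4, "queue_s": 2.7},
        {"status": "errored", "duration_s": 18.9, "queue_s": 4.0},
    ]

    total_runs      = len(runs)
    total_errors    = sum(1 for r in runs if r["status"] == "errored")  # runs that did not complete
    total_failures  = sum(1 for r in runs if r["status"] == "failed")   # runs recorded as "Failed"
    fastest_runtime = min(r["duration_s"] for r in runs)
    slowest_runtime = max(r["duration_s"] for r in runs)
    avg_queue_time  = sum(r["queue_s"] for r in runs) / total_runs

    print(total_runs, total_errors, total_failures,
          fastest_runtime, slowest_runtime, round(avg_queue_time, 2))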

Below the performance statistics, the scatter plot shows each instance of the test being run, with color coding to indicate the run status, against the time it took the test to either execute or fail. The X-axis indicates the time range that has been selected using the time filter. The Y-axis indicates the duration of the test each time it was run. You can see the specific information about the platform, operating system, and other capabilities specified in a test by hovering your mouse cursor over the point representing it on the plot. 

Using Test Insights to Spot Flaky Tests

The charting of errors and failures in the visualization can help you get an early assessment of flaky test behavior. In this example, the test consistently fails on the first and second runs, and succeeds on the third retry. This is a typical example of flaky behavior.
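
One way to make this pattern concrete: if the same test both fails and passes within its retries, it is a candidate for flakiness. The sketch below applies that rule to a hypothetical retry history; the data and grouping are illustrative only, not a Sauce Labs API:

    from collections import defaultdict

    # Hypothetical retry history: (test_name, attempt, status).
    runs = [
        ("Sign In", 1, "failed"),
        ("Sign In", 2, "failed"),
        ("Sign In", 3, "passed"),    # fails twice, passes on the third retry
        ("Checkout", 1, "passed"),
    ]

    outcomes = defaultdict(set)
    for name, _attempt, status in runs:
        outcomes[name].add(status)

    # A test that both passed and failed across retries is likely flaky.
    flaky = [name for name, statuses in outcomes.items() if {"passed", "failed"} <= statuses]
    print(flaky)  # ['Sign In']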

An Example of Using Test Insights

In this example, you can see how the test Sign In was executed over the last 30 days on different platforms. Along the bottom are the executions that ran successfully and passed, and have the fastest execution times. As the execution time increases, you can see that there are significantly more runs that have failed, and clusters of tests that errored before completion. Hovering over one test run shows that it was for OS X El Capitan (10.11) on the iPad 9.3.

Applying a filter for OS X El Capitan (10.11), you can now see how this test is performing on that operating system across the same time period. 

Comparing the two graphs, you can see that the majority of tests with errors are associated with El Capitan, but that these tests are also consistently passing. To further analyze the cause of these errors, and to determine whether there is an issue with the test itself or it is simply "flaky," you could now use the Tests and Builds table on the Trends page to view the specific errors associated with these test runs.
