- Getting Started with Selenium for Automated Website Testing
- Getting Started with Appium for Mobile Native Application Testing
- Selenium Bootcamp by Dave Haeffner
- Appium Bootcamp by Dave Haeffner and Matthew Edwards
The Analytics dashboard in the Sauce Labs web interface includes filters, visualizations, and metrics that give you a comprehensive overview of your test performance, along with features that let you drill down into detailed information to further optimize your tests and builds.
Take a video tour of the analytics interface and see features like scrubbing down to time increments in action!
Analytics API BETA
In addition to accessing analytics information through the web interface, you can use the Beta version of our Analytics API to build your own dashboards and information queries.
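As a minimal sketch of what an Analytics API query might look like from a script, here is a standard-library-only example. The endpoint path and parameter names (`start`, `end`, `interval`) are illustrative assumptions, not confirmed names; check the Analytics API reference before using them.

```python
# Sketch of querying the Analytics API (Beta) with only the standard
# library. The endpoint path and parameter names are assumptions shown
# for illustration -- confirm them against the current API docs.
import base64
import json
import time
import urllib.parse
import urllib.request


def analytics_query_params(days_back=3, interval="1h"):
    """Build the time-range parameters for a trends-style query."""
    now = int(time.time())
    return {
        "start": now - days_back * 24 * 3600,  # epoch seconds
        "end": now,
        "interval": interval,  # bucket size, one bucket per chart bar
    }


def fetch_test_trends(username, access_key, **kwargs):
    # Hypothetical endpoint, shown only to illustrate the request shape.
    url = ("https://saucelabs.com/rest/v1/analytics/trends/tests?"
           + urllib.parse.urlencode(analytics_query_params(**kwargs)))
    token = base64.b64encode(f"{username}:{access_key}".encode()).decode()
    req = urllib.request.Request(
        url, headers={"Authorization": f"Basic {token}"})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

With a response in hand, you could feed the per-bucket counts into your own dashboarding tool instead of the web interface.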
With filters you can drill down on a specific dimension of your tests, or combine multiple filters to pinpoint the information you want. Any time you change an option in a filter, the visualizations and metrics will update.
The time period to display. You can select a Relative option, like Last 15 Mins or Last 3 Days; the longest relative option is 14 days. You can also enter a Custom option that ranges from one specific date and time to another, but you must select both the dates and the times of day for the range. If you click-drag to drill down on a time increment, the Custom option updates to reflect this selection.
Timezone for Visualizations
The timezone for the visualizations is set to your current timezone, so if you want to look at the analytics for a time period for a team or organization in a different timezone, adjust this filter as necessary.
|Filter|Description|
|---|---|
|Owner|The owner of the tests you want to analyze. This can be an individual, a team, or an organization. The options you see depend on whether you are the owner of an account for an entire organization, a team or set of teams, or an individual user.|
|Operating System|Show only the tests for a specific operating system. The number in parentheses next to the name of the operating system is the total number of tests for that operating system within the selected time span.|
|Browser|Show only the tests for a specific browser. The number in parentheses next to the name of the browser is the total number of tests for that browser within the selected time span. Check out Filtering for Mobile Application and Website Tests for more info about how to select options for emulator and simulator tests.|
|Build|Show only the selected builds. Check out the links in Test Configuration and Annotation for more information about adding build numbers to your tests.|
|Tag|Show only builds with the selected tags. Check out the links in Test Configuration and Annotation for more information about adding tags to your tests.|
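As a sketch of how several filter dimensions can be combined into a single query against the same data set, the snippet below encodes them as URL parameters. The parameter names (`os`, `browser`, `tag`) are illustrative assumptions, not confirmed Analytics API names.

```python
# Illustrative only: the parameter names here are assumptions,
# not confirmed names from the Analytics API.
from urllib.parse import urlencode


def build_filter_query(**filters):
    """Combine multiple dashboard-style filters into one query string.

    doseq=True repeats the key for multi-valued filters (e.g. two tags).
    """
    return urlencode(filters, doseq=True)


query = build_filter_query(
    os="Windows 10",
    browser="chrome",
    tag=["smoke", "regression"],  # two tags -> the tag key appears twice
)
```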
Under the Trends tab you'll find four visualizations, three as bar charts and one as a table.
Drilling Down on Visualizations
With the Number of Tests and Pass/Fail Rate visualizations, you can click-drag across the bars in the chart to drill down on a specific time period. This will also update the dates and times in the Time filter. After drilling down on a time period, click the Back link to step back through the previous time periods.
Getting Statistical Information for a Time Increment
If you hover your mouse cursor over the bars in the charts, you can see statistical information for your tests during that time increment. The kind of statistical information depends on the chart.
|Chart|Statistical Information Shown|
|---|---|
|Number of Tests|The total number of tests run during that increment of time|
|Pass/Fail Rate|The total number of tests run during that increment of time, along with the percentage of tests with each status|
|Number of Errors|The total number of errors during that increment of time, along with the full error message|
Number of Tests
The number of tests run for the selected filters and time period, with each bar representing an increment of time within that period. The metric shown is the total number of tests run within the selected time period.
Pass/Fail Rate
Each bar shows the percentage of tests with each of the four test status types for that time period. The metric shown is the overall percentage of tests that have passed within the selected time period.
Number of Errors
A bar chart showing the total number of each type of error for the twenty-four-hour period. The metric shown is the total number of tests with errors within the selected time period.
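To illustrate how the Pass/Fail Rate percentages in each bar relate to raw test counts, here is a small sketch; the status names used are examples, not an official list.

```python
def status_percentages(status_counts):
    """Convert per-status test counts into the percentages one bar shows."""
    total = sum(status_counts.values())
    return {status: round(100 * count / total, 1)
            for status, count in status_counts.items()}


# Illustrative counts for one time increment; status names are examples.
bar = status_percentages({"passed": 45, "failed": 3, "errored": 2})
```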
Build and Test Statistics
Lists each build in the time period matching the filter options. Click the name of a build to see the list of tests it includes, and the individual performance of those tests. You can also click on the name of a test to view its Test Details page.
The Build Efficiency Metric
The Builds table includes a metric, Efficiency, that indicates the level of parallelization achieved by tests in that build. Check out Improving the Efficiency of Your Builds for more information about how to interpret this metric.
Adding Build Information to Your Tests
Tests without Build
Lists the tests in the time period matching the filter options that aren't associated with a build. Click on the name of a test to view the Test Details page.
Show Only Failed or Errored Tests
Select Failed Tests Only to filter for only the tests in each build, or individual tests without a build, that have failed. You can also select Errored Tests Only to view only the tests that generated an error before they ran to completion.
Under the Insights tab you'll find a visualization to help you understand and improve the efficiency of your testing process by getting the most out of your concurrency allocation.
The highest number of concurrent tests run for the selected time period, charted against the maximum concurrency allocated for your account. Bars below the red line at the top of the chart indicate that tests run during that time increment used less than your maximum concurrency; follow the suggestions in Improving the Efficiency of Your Builds to see if you can achieve higher concurrency. Hover over the bar for a time increment to see the percentage of your maximum concurrency that you achieved.
The metric shown is the highest number of concurrent tests run during the selected time period.
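The percentage shown when hovering over a bar can be sketched as a simple ratio of the peak concurrency reached to the maximum allocated for your account:

```python
def concurrency_utilization(peak_concurrent, max_allowed):
    """Percentage of the account's concurrency allocation actually used."""
    return round(100 * peak_concurrent / max_allowed, 1)


# e.g. a peak of 8 parallel tests against an allocation of 10
utilization = concurrency_utilization(8, 10)
```

A value well below 100% during busy periods suggests the build could run more tests in parallel.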