How are performance metrics collected for my tests?
Sauce Labs collects history for performance metrics based on the test name and platform combination. The browser version is currently ignored.
Can everyone in my organization view the front-end performance stats?
As performance is part of the job details, visibility of the performance data is determined by the visibility of the job.
Why aren't performance metrics captured for all of the URLs that were accessed during the test?
Performance metrics are captured for hard and soft page transitions, for example those triggered by a navigate command or a click. Some types of page transitions are less explicit and are not captured. Please file a support ticket if performance metrics are not captured for a specific page load in your test.
Which browsers are supported?
As of now, performance metrics can only be captured on the three most recent versions of Google Chrome.
How are baselines defined?
Baselines are determined by calculating a confidence interval over the prior runs of the same test name for each URL visited during the test. The confidence interval is calculated to estimate the range of values which are expected in future runs, based on past observed performance. Observations outside of the baseline range are statistically unlikely to be observed in the absence of some fundamental change in the application’s performance.
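The idea behind a confidence-interval baseline can be sketched as follows. This is a hypothetical illustration, not Sauce Labs' exact algorithm; the function names, the z-value, and the sample data are all assumptions for the example.

```python
import statistics

def baseline_range(samples, z=1.96):
    """Estimate a baseline range from prior runs of the same test.

    `samples` holds past metric values (e.g. load times in ms) for one
    URL. The interval mean +/- z * stdev approximates the values expected
    in future runs. Hypothetical sketch only.
    """
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    return mean - z * stdev, mean + z * stdev

def is_outside_baseline(value, samples, z=1.96):
    """Flag an observation outside the baseline range as a likely change."""
    low, high = baseline_range(samples, z)
    return not (low <= value <= high)

# Prior load times (ms) for one URL across past runs of the same test name:
history = [820, 790, 805, 815, 798, 810]
print(is_outside_baseline(801, history))   # within the expected range
print(is_outside_baseline(1400, history))  # far outside: likely a real change
```

A value like 1400 ms falls well outside the interval built from the history above, so it would be flagged, while 801 ms would not.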
What is the meaning of "TraceLogs warning detected"?
To capture performance data even when some trace events are missing from the trace logs, we calculate and inject the missing events based on other related events. As a result, the main-thread work data may be inaccurate or missing some categories. This is a very rare case, but the same issue can be seen in Chrome DevTools when it misses trace events during capture.