Capturing the page load performance of an application behind a specific URL is a great start for detecting opportunities to improve performance, but some issues only surface after a user interacts with the application in a certain way. Sauce Labs has integrated performance capture into WebDriver, so you can run automation steps that mimic just such an interaction before asserting performance, capturing any performance issues related to the resulting application state.
NOTE: Using automation to test performance after targeted interaction with your application in no way implies that you should integrate performance testing into your existing functional test suite. Testing functionality and performance in the same test is likely to compromise the results of both objectives and can obscure failure troubleshooting.
What You'll Learn
- How to enable performance in your automation script
- How to measure performance as the automation runs
- How to detect regressions
- How to write the performance results to a log
- How to view your results
Getting Started
What You'll Need
- Google Chrome (no more than three versions behind the latest release) as the test browser
- Your SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables defined
- An automation script that performs the interaction with your app during which you want to measure performance
Set Performance Capabilities
Before you configure your script to capture performance metrics as it executes, you must update your capabilities configuration to enable performance actions. To do this, set the `extendedDebugging` and `capturePerformance` attributes in `sauce:options` to `true`. The following excerpts show sample capability configurations in several of the supported languages.
```python
@pytest.fixture
def driver(request):
    sauceOptions = {
        "screenResolution": "1280x768",
        "platformName": "Windows 10",
        "browserVersion": "61.0",
        "seleniumVersion": "3.11.0",
        "name": "Pytest Chrome W3C Sample",
        "extendedDebugging": True,
        "capturePerformance": True,
    }
    chromeOpts = {
        "platformName": "Windows 10",
        "browserName": "chrome",
        "browserVersion": "61.0",
        "goog:chromeOptions": {"w3c": True},
        "sauce:options": sauceOptions,
    }
    # ...instantiate and yield the Remote WebDriver using these capabilities
```
```javascript
const {config} = require('./wdio.shared.conf');

const defaultBrowserSauceOptions = {
    build: `WebdriverIO-V6 Front-End Performance-${new Date().getTime()}`,
    name: `WebdriverIO-V6 Front-End Performance-${new Date().getTime()}`,
    extendedDebugging: true,
    capturePerformance: true,
};
```
```ruby
browser_name = ENV['BROWSER_NAME'] || 'chrome'
options = {
  browser_name: browser_name,
  platform_name: ENV['PLATFORM_NAME'] || 'Windows 10',
  browser_version: ENV['BROWSER_VERSION'] || 'latest',
  'sauce:options': {
    name: "#{scenario.feature.name} - #{scenario.name}",
    build: build_name,
    username: ENV['SAUCE_USERNAME'],
    access_key: ENV['SAUCE_ACCESS_KEY'],
    extended_debugging: true,
    capture_performance: true
  }
}
```
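For WebdriverIO specifically, these options ultimately need to land in the `sauce:options` block of a capability entry. The following is a minimal sketch of that wiring; the file name, test name, and capability values are illustrative assumptions, not part of the excerpts above:

```javascript
// Illustrative wdio.conf.js excerpt (names and values are placeholders)
exports.config = {
    user: process.env.SAUCE_USERNAME,
    key: process.env.SAUCE_ACCESS_KEY,
    capabilities: [{
        browserName: 'chrome',
        platformName: 'Windows 10',
        browserVersion: 'latest',
        'sauce:options': {
            name: 'Front-End Performance Sample', // illustrative test name
            extendedDebugging: true,              // enables the sauce: custom commands
            capturePerformance: true,             // enables performance metric capture
        },
    }],
    // ...the rest of your WebdriverIO configuration
};
```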
Implement the Performance Command Assertion
The custom `sauce:performance` command measures the performance output against a baseline of previously accepted performance values. If no baseline has been set, the Performance test creates one by measuring performance output 10 times and aggregating the results. The command returns `pass` when the current results are within the baseline allowances or `fail` when they fall outside them. A `fail` result gives you the option to update the baseline with the new metrics, which would then be used in each subsequent run.
Command

`sauce:performance`

Arguments

| Argument | Required? | Description |
| --- | --- | --- |
| `name` | Required | The name of the test as it would appear in the Sauce Labs application. |
| `metrics` | Optional | Specifies one or more specific metrics you want to assert. If not specified, the test asserts against the default metric set. See Configuration Options for the list of supported metric values. |
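As a sketch of how these arguments fit together in a WebdriverIO test (the test name and metric selection here are illustrative placeholders):

```javascript
// Illustrative invocation; the name and metrics values are placeholders
const output = browser.execute('sauce:performance', {
    name: 'Checkout Page Performance', // appears in the Sauce Labs application
    metrics: ['load', 'speedIndex'],   // subset of supported metrics to assert
});
// output.result is 'pass' or 'fail'; on a fail, output.details lists each
// offending metric with its actual value and the baseline limits
```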
Sample Response
The following response is returned when Page Load Time is above the expected baseline:

```json
{
  "reason": "Metrics [load] are significantly out of the baseline",
  "result": "fail",
  "details": {
    "load": {
      "actual": 12178,
      "lowerLimit": 813.6839391929948,
      "upperLimit": 2910.759098781689
    }
  }
}
```
Script Examples
Here are a few examples of how to use our custom command in Selenium tests written in various languages (Java, Python, JavaScript). Note that in Java you must place captured metrics in a HashMap before asserting against them.
Full code is available in the Sauce Labs demo scripts repo: Python Performance Examples.
```python
def test_performance_firstInteractive(self, driver, request):
    self.setUpClass(driver)
    performance = driver.execute_script(
        "sauce:performance",
        {"name": request.node.name, "metrics": ["firstInteractive"]},
    )
    if performance["result"] != "pass":
        # On a failed baseline check, fall back to an absolute threshold
        assert performance["details"]["firstInteractive"]["actual"] < 5000
    else:
        assert performance["result"] == "pass"
```
Full code and additional webdriver examples are available in the Sauce Labs demo scripts repo: JS Performance Examples.
```javascript
describe('Sauce Labs Front-End Performance', () => {
    beforeEach(() => {
        // Adding extra logs to the Sauce Commands Dashboard
        browser.execute('sauce:context=########## Start beforeEach ##########');

        // Now load the url and wait for it to be displayed
        browser.url('');

        // Adding extra logs to the Sauce Commands Dashboard
        browser.execute('sauce:context=########## End beforeEach ##########');
    });

    afterEach(() => {
        // Adding extra logs to the Sauce Commands Dashboard
        browser.execute('sauce:context=########## End of test ##########');
    });

    it('logs (sauce:performance) should check if all metrics were captured', () => {
        // The expected metrics
        const metrics = [
            'load',
            'speedIndex',
            'firstInteractive',
            'firstVisualChange',
            'lastVisualChange',
            'firstMeaningfulPaint',
            'firstCPUIdle',
            'timeToFirstByte',
            'firstPaint',
            'estimatedInputLatency',
            'firstContentfulPaint',
            'totalBlockingTime',
            'score',
            'domContentLoaded',
            'cumulativeLayoutShift',
            'serverResponseTime',
            'largestContentfulPaint',
        ];

        // Get the performance logs
        const performance = browser.execute('sauce:log', {type: 'sauce:performance'});

        // Verify that all metrics have been captured
        metrics.forEach(metric => expect(metric in performance, `${metric} metric is missing`));
    });

    it('(sauce:performance) should validate speedIndex', () => {
        // Get the performance logs
        const performance = browser.execute('sauce:log', {type: 'sauce:performance'});

        // Verify that the speed index is below the threshold
        expect(performance.speedIndex < 1000, `${performance.speedIndex} is equal to or bigger than 1000`);
    });
});
```
Handle Regressions
Once a performance regression is discovered, you have the option to either reverse the change or update the baseline with the new performance values. If new baselines are accepted, the command measures performance against those values until another regression is detected, at which point you again have the option to troubleshoot or update the baselines.
Since the command can be called throughout the test, we recommend creating tests that check for performance regressions across core business flows and screens; for example, pages that sit behind a login or require multiple steps to reach.
`sauce:performance` is only aware of the performance metrics of the `get` URL command that was called before it; it cannot capture metrics for views that were navigated to via WebDriver actions (e.g., button clicks).
In this example, the custom command returns performance metrics for the `/inventory.html` URL.
```javascript
const assert = require('assert');

describe('Performance Demo Test', function () {
    const { title } = this;

    it('Asserts for regressions on the inventory page', () => {
        // Log into the app
        browser.url('https://www.saucedemo.com');
        const username = $('[data-test="username"]');
        username.setValue('standard_user');
        const password = $('[data-test="password"]');
        password.setValue('secret_sauce');
        const loginButton = $('.login-button');
        loginButton.click();

        // Navigate directly to the page whose performance you want to assert
        browser.url('/inventory.html');

        // Assert that the performance of the page load has not regressed
        const performanceOutput = browser.execute('sauce:performance', {
            name: title,
            metrics: ['load'],
        });
        assert.equal(performanceOutput.result, 'pass', performanceOutput.reason);
    });
});
```
Log Performance Metrics
Instead of asserting performance metrics, you can use the Sauce Labs custom `sauce:log` command to retrieve the captured performance values as a log:

```javascript
driver.execute('sauce:log', { type: 'sauce:performance' });
```
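If you want to persist these results, for example for trend analysis outside of Sauce Labs, one option is to write the returned object to disk. A minimal sketch, assuming a Node.js WebdriverIO test and an illustrative output path:

```javascript
const fs = require('fs');

// Retrieve the captured metrics for the most recent page load
const performance = browser.execute('sauce:log', { type: 'sauce:performance' });

// Write them to a local JSON file ('performance-metrics.json' is an illustrative path)
fs.writeFileSync('performance-metrics.json', JSON.stringify(performance, null, 2));
```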
NOTE: See Custom Sauce Labs Extensions for additional network and logging options enabled with extended debugging.
Full code is available in the Sauce Labs demo scripts repo: Performance Python Examples.
```python
def test_speed_index(self, driver):
    self.setUpClass(driver)
    metrics = [
        "load", "speedIndex", "pageWeight", "pageWeightEncoded",
        "timeToFirstByte", "timeToFirstInteractive", "firstContentfulPaint",
        "perceptualSpeedIndex", "domContentLoaded",
    ]
    performance = driver.execute_script("sauce:log", {"type": "sauce:performance"})
    # Verify that each expected metric was captured
    for metric in metrics:
        assert metric in performance
    # Assert that the speed index is below the threshold
    assert performance["speedIndex"] < 1000
```
Full code is available in the Sauce Labs demo scripts repo: JS Training Examples.
```javascript
it('logs (sauce:performance) should check if all metrics were captured', () => {
    // The expected metrics
    const metrics = [
        'load',
        'speedIndex',
        'firstInteractive',
        'firstVisualChange',
        'lastVisualChange',
        'firstMeaningfulPaint',
        'firstCPUIdle',
        'timeToFirstByte',
        'firstPaint',
        'estimatedInputLatency',
        'firstContentfulPaint',
        'totalBlockingTime',
        'score',
        'domContentLoaded',
        'cumulativeLayoutShift',
        'serverResponseTime',
        'largestContentfulPaint',
    ];

    // Get the performance logs
    const performance = browser.execute('sauce:log', {type: 'sauce:performance'});

    // Verify that all metrics have been captured
    metrics.forEach(metric => expect(metric in performance, `${metric} metric is missing`));
});
```
Sample Response
{ "load": 1083, "speedIndex": 905, "firstInteractive": 1073, "firstVisualChange": 576, "lastVisualChange": 1243, "firstMeaningfulPaint": 1239, "firstCPUIdle": 1239, "timeToFirstByte": 69, "firstPaint": 559, "estimatedInputLatency": 16, "firstContentfulPaint": 630, "largestContentfulPait": 480, "cumulativeLayoutShift": 0, "totalBlockingTime": 22, "score": 0.9947067807295903, "domContentLoaded": 1073 }
Assert Metrics Using a Performance Budget
A performance budget is a set of limits imposed on metrics that affect site performance. You can define these limits for various pages in a separate file. For example:
{ "https://saucelabs.com/": { "speedIndex": 2300, "lastVisualChange": 2200, "load": 4200 }, "https://saucelabs.com/platform/analytics-performance/sauce-performance": { "score": 0.78 } }
You'll import this file in your test script and dynamically assert against the values:
```javascript
const assert = require('assert');
const budgets = require('./budget.json');

for (const [url, budget] of Object.entries(budgets)) {
    await browser.url(url);

    // Retrieve the performance metrics for the current page
    const performanceLogs = await browser.execute('sauce:log', {
        type: 'sauce:performance',
    });

    // Assert each metric against its budgeted limit
    for (const [metric, value] of Object.entries(budget)) {
        assert.ok(
            performanceLogs[metric] < value,
            `metric ${metric} is over the performance budget`
        );
    }
}
```
This way you can cover performance quality across many pages with a single, data-driven test.
Access Performance Report
When your test completes, it generates a set of metrics that you can access through the Performance tab of your Test Details page. If you detect a regression in your website's performance, you can download a Full Trace report, or use Chrome DevTools, which records JavaScript method signatures in a hierarchical view for each thread in each process, to get a better idea of how the Chrome browser is interacting with your website. See Page Load Metrics Reference for more details about the data in this report.
The full Performance Report gives you access to all performance metrics over time.