From running tests in parallel to writing small, atomic, autonomous tests, this comprehensive guide covers best practices for running tests.
Avoid External Test Dependencies
If there are "prerequisite" tasks that need to be taken care of before your test runs, you should include a setup section in your script that executes them before the actual testing begins. For example, you may need to log in to the application, or dismiss an introductory dialog that pops up before getting into the application functionality that you want to test. Similarly, if there are "post-requisite" tasks that need to occur, like closing the browser, logging out, or terminating the remote session, you should have a teardown section that takes care of them for you.
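As a sketch of this pattern (using JUnit 4 and a hypothetical login flow; the URL and locators are placeholders, not part of this guide):

import org.junit.After;
import org.junit.Assert;
import org.junit.Before;
import org.junit.Test;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class CheckoutTest {
    private WebDriver driver;

    @Before
    public void setUp() {
        // Prerequisite tasks: start the browser, log in, dismiss the intro dialog.
        driver = new FirefoxDriver();
        driver.get("https://example.com/login");
        driver.findElement(By.id("username")).sendKeys("testuser");
        driver.findElement(By.id("password")).sendKeys("testpass");
        driver.findElement(By.id("submit")).click();
        driver.findElement(By.id("dismiss-intro")).click();
    }

    @Test
    public void checkoutButtonIsDisplayed() {
        // The actual test starts from a known, logged-in state.
        Assert.assertTrue(driver.findElement(By.id("checkout")).isDisplayed());
    }

    @After
    public void tearDown() {
        // Post-requisite tasks: end the session and close the browser.
        driver.quit();
    }
}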
Don't Hard Code Dependencies on External Accounts or Data
Development and testing environments can change significantly in the time between the writing of your test scripts and when they run, especially if you have a standard set of tests that you run as part of your overall testing cycle. For this reason, you should avoid building into your scripts any hard-coded dependencies on specific accounts or data. Instead, use API requests to dynamically provide the external inputs you need for your tests.
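For instance (a minimal sketch, assuming a hypothetical test-only /api/test-users endpoint in your test environment; it uses Java's built-in HttpClient):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class TestDataFactory {
    // Creates a throwaway user through the (hypothetical) test API and returns
    // the response body, so no account names are hard-coded in the script.
    public static String createTestUser() throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("https://test.example.com/api/test-users"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString("{\"role\": \"customer\"}"))
                .build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }
}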
Avoid Dependencies between Tests to Run Tests in Parallel
What are dependencies? Imagine a test suite with two tests:
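For instance, the suite might look like this (a sketch in Java; assume a JUnit test class with a shared WebDriver field, and treat the URL and locators as placeholders):

@Test
public void testLogin() {
    // Logs in through the UI and asserts that the login was successful.
    driver.get("https://example.com/login");
    driver.findElement(By.id("username")).sendKeys("testuser");
    driver.findElement(By.id("password")).sendKeys("testpass");
    driver.findElement(By.id("submit")).click();
    Assert.assertTrue(driver.findElement(By.id("welcome")).isDisplayed());
}

@Test
public void testActionRequiringLogin() {
    // Silently assumes the previous test has already logged in -- a hidden dependency.
    driver.findElement(By.id("some-button")).click();
    Assert.assertTrue(driver.findElement(By.id("some-result")).isDisplayed());
}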
Here, testLogin() triggers the browser to log in and asserts that the login was successful. The second test clicks a button on the logged-in page and asserts that a certain result occurred.
This test suite works fine as long as the tests run in order. But the second test assumes that you are already logged in, which creates a dependency on the first test. If these tests run at the same time, or if the second one runs before the first one, the browser's cookies will not yet allow Selenium to access the logged-in page, and the second test fails. You can get rid of this dependency by making sure that each test can run independently of the others, as shown in the following sketch.
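Continuing the sketch above, each test can generate its own logged-in state through a shared doLogin() helper (the locators remain placeholders):

private void doLogin() {
    // Generates a logged-in state instead of assuming one.
    driver.get("https://example.com/login");
    driver.findElement(By.id("username")).sendKeys("testuser");
    driver.findElement(By.id("password")).sendKeys("testpass");
    driver.findElement(By.id("submit")).click();
}

@Test
public void testLogin() {
    doLogin();
    Assert.assertTrue(driver.findElement(By.id("welcome")).isDisplayed());
}

@Test
public void testActionRequiringLogin() {
    doLogin(); // no dependency on any other test's side effects
    driver.findElement(By.id("some-button")).click();
    Assert.assertTrue(driver.findElement(By.id("some-result")).isDisplayed());
}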
The main point is that it is dangerous to assume any state when developing tests for your app. Instead, you should find ways to quickly generate desired states for individual tests. In the example, this is accomplished with the doLogin() function, which generates a logged-in state instead of assuming it. You might even want to develop an API for the development and test versions of your app that provides URL shortcuts to generate common states; for example, a URL, available only in the test environment, that creates a random user account and logs it in automatically.
Don't Use Brittle Locators in Your Tests
It can be tempting to locate elements with complex XPath expressions like //body/div/div/*[@class="someClass"] or CSS selectors like #content .wrapper .main. While these might work when you are developing your tests, they will almost certainly break when you make unrelated refactoring changes to your HTML output. Instead, use sensible semantics for CSS IDs and form element names, and try to restrict yourself to using these semantic identifiers. For example, in Java you could designate elements with driver.findElement(By.id("someId")); or driver.findElement(By.name("someName"));, or, in PHP, you could use $this->byId() or $this->byName(). This makes it much less likely that you'll inadvertently break your page by shuffling around some lines of code.
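For example (a brief sketch; the "robust" identifiers are placeholders you would define in your own markup):

// Brittle: breaks as soon as the surrounding markup is refactored.
driver.findElement(By.xpath("//body/div/div/*[@class=\"someClass\"]"));
driver.findElement(By.cssSelector("#content .wrapper .main"));

// Robust: tied to semantic identifiers that survive layout changes.
driver.findElement(By.id("login-form"));
driver.findElement(By.name("email"));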
Have a Retry Strategy for Handling Flakes
There will always be flaky tests, and tests that once breezed through with no problem can fail for what seems like no reason. The trick is figuring out whether a test that fails does so because it found a real problem in your app functionality, or because there was an issue with the test itself.
The best way to handle this problem is to log your failing tests into a database and then analyze them. Even tests that fail intermittently with no apparent cause may turn out to have a pattern when you are able to analyze them in detail and as a larger data set. If this is beyond the scope of your testing setup, the next best strategy is to log your failing cases into a log file that records the browser, version, and operating system for those tests, and then retry those tests. If they continue to fail after a second or third retry, chances are that the issue is with the functionality you're testing, rather than the test itself. This isn't a total solution for dealing with flakes, but it should help you get closer to the source of the problem.
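As a rough sketch of such a retry strategy in Java (the attempt count and logging destination here are arbitrary choices, not prescribed by this guide):

// Runs a test up to maxAttempts times, logging each failure along with its
// browser/version/OS context. A failure that survives every retry is more
// likely a real defect than a flake.
static boolean runWithRetries(Runnable test, int maxAttempts,
                              String browser, String version, String os) {
    for (int attempt = 1; attempt <= maxAttempts; attempt++) {
        try {
            test.run();
            return true;
        } catch (AssertionError | RuntimeException e) {
            System.err.printf("Attempt %d failed on %s %s / %s: %s%n",
                    attempt, browser, version, os, e.getMessage());
        }
    }
    return false;
}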
Keep Functional Tests Separate from Performance Tests
- Functional tests should, as the name indicates, test some functionality or feature of your application. The output of these tests should generally be a simple "pass" or "fail" - either your functionality worked as expected, or it didn't. While running functional tests, it can also be advantageous to run front-end performance tests that can help identify any regressions in JavaScript logic executed in the browser. When you use Sauce Labs for functional testing, you can also use custom extensions for WebDriver that allow you to test the performance of your website under specific network conditions, and to collect network and application-related metrics.
- Load tests, in contrast, should gauge and output network and server performance metrics. For example, can your application server handle a particular load, and does it behave as expected when you push it to its limit? These types of tests are better undertaken with a testing infrastructure that has been specifically developed for load testing, so all baseline performance metrics are well established and understood before you start the test.
Use Build IDs, Tags, and Names to Identify Your Tests
You can set these capabilities to be any combination of letters and numbers. To differentiate between builds, it's also a good practice to add a timestamp or CI job/build number at the end of your build tag.
See the following sections for more information:
Please note: the build name and tags capabilities are not supported in automated real device testing at this time. Please check back for future updates regarding this functionality.
Warning
While it's technically possible to use the same build name for multiple test runs, this will cause all of your test results to appear incorrectly as part of a single run. This, in turn, will cause your test results for those builds to be inaccurate.
Code Examples: Build, Tags, and Name
String username = System.getenv("SAUCE_USERNAME");
String accessKey = System.getenv("SAUCE_ACCESS_KEY");

MutableCapabilities sauceOptions = new MutableCapabilities();
sauceOptions.setCapability("name", "Web Driver demo Test");
sauceOptions.setCapability("tags", "tag1");
sauceOptions.setCapability("build", "build-1234");
sauceOptions.setCapability("username", username);
sauceOptions.setCapability("accessKey", accessKey);

FirefoxOptions firefoxOptions = new FirefoxOptions();
firefoxOptions.setCapability("platformName", "Windows 10");
firefoxOptions.setCapability("browserVersion", "79.0");
firefoxOptions.setCapability("sauce:options", sauceOptions);

WebDriver driver = new RemoteWebDriver(
    new URL("https://ondemand.saucelabs.com/wd/hub"), firefoxOptions);
string _sauceUsername = Environment.GetEnvironmentVariable("SAUCE_USERNAME", EnvironmentVariableTarget.User);
string _sauceAccessKey = Environment.GetEnvironmentVariable("SAUCE_ACCESS_KEY", EnvironmentVariableTarget.User);

var sauceOptions = new Dictionary<string, object>
{
    ["username"] = _sauceUsername,
    ["accessKey"] = _sauceAccessKey,
    ["name"] = "Web Driver demo Test",
    ["build"] = "build-1234",
    ["tags"] = "tag1"
};

var firefoxOptions = new FirefoxOptions()
{
    BrowserVersion = "79.0",
    PlatformName = "Windows 10",
    UseSpecCompliantProtocol = true
};
firefoxOptions.AddAdditionalCapability("sauce:options", sauceOptions, true);

IWebDriver driver = new RemoteWebDriver(
    new Uri("https://ondemand.saucelabs.com/wd/hub"),
    firefoxOptions.ToCapabilities(),
    TimeSpan.FromSeconds(600));
const username = process.env.SAUCE_USERNAME;
const accessKey = process.env.SAUCE_ACCESS_KEY;
const tags = ["tag1", "tag2", "tag3"];

const driver = new webdriver.Builder()
    .withCapabilities({
        'browserName': 'firefox',
        'platform': 'Windows 10',
        'version': '79.0',
        'sauce:options': {
            'name': 'Web Driver demo Test',
            'build': 'build-1234',
            'tags': tags,
            'username': username,
            'accessKey': accessKey
        }
    })
    .usingServer("https://" + username + ":" + accessKey + "@ondemand.saucelabs.com:443/wd/hub")
    .build();
sauce_username = os.environ["SAUCE_USERNAME"]
sauce_access_key = os.environ["SAUCE_ACCESS_KEY"]

sauceOptions = {
    "build": "build-1234",
    "name": "Web Driver demo Test",
    "tags": ["tag1", "tag2", "tag3"]
}
browserOptions = {
    'platformName': "Windows 10",
    'browserName': "firefox",
    'browserVersion': '79.0',
    'sauce:options': sauceOptions
}
browser = webdriver.Remote(
    "https://ondemand.saucelabs.com/wd/hub",
    desired_capabilities=browserOptions)
caps = {
  browser_name: 'firefox',
  platform_name: 'windows 10',
  browser_version: '79.0',
  "sauce:options" => {
    name: 'Web Driver demo Test',
    build: 'build-1234',
    tags: 'tag1',
    username: ENV['SAUCE_USERNAME'],
    access_key: ENV['SAUCE_ACCESS_KEY']
  }
}
driver = Selenium::WebDriver.for(
  :remote,
  url: 'https://ondemand.saucelabs.com:443/wd/hub',
  desired_capabilities: caps)
Video: Organize Tests by Build
This video shows you how you can associate your tests with Builds in Sauce Labs, making it easier to understand how your tests are performing within your CI pipeline.
Use Environment Variables for Authentication Credentials
What You'll Need
- The SAUCE_USERNAME and SAUCE_ACCESS_KEY specific to your Sauce Labs account. You can find them by logging into saucelabs.com and going to Account > User Settings.
Setting Up Environment Variables on macOS and Linux Systems
1. In Terminal mode, enter vi ~/.bash_profile, and then press Enter.
2. Press i to enter insert mode.
3. Enter these lines:
   export SAUCE_USERNAME="your Sauce username"
   export SAUCE_ACCESS_KEY="your Sauce access key"
4. Press Escape.
5. Hold Shift and press Z twice (zz) to save your file and quit vi.
6. In the terminal, enter source ~/.bash_profile.
Setting Up Environment Variables on Windows Systems
- Click Start on the task bar.
- For Search programs and fields, enter E
nvironment Variables
. - Click Edit the environment variables.
This will open the System Properties dialog. - Click Environment Variables.
This will open the Environment Variables dialog. - In the User variables section, click New.
This will open the New System Variable dialog. - For Variable name, enter
SAUCE_USERNAME
. - For Variable value, enter your Sauce username.
- Click OK.
- Repeat 4 - 8 to set up the
SAUCE_ACCESS_KEY
.
Referencing Environment Variables in Test Scripts
Once you've set up the environment variables for your credentials, you need to reference them within the test scripts that you want to run on Sauce. You can find examples of test scripts that use environment variables for authentication in the demo directory for each language in the Sauce Labs Training repo on GitHub.
Below are examples of how to set environment variables in a given language/framework:
String sauceUserName = System.getenv("SAUCE_USERNAME");
String sauceAccessKey = System.getenv("SAUCE_ACCESS_KEY");
var sauceUserName = Environment.GetEnvironmentVariable("SAUCE_USERNAME", EnvironmentVariableTarget.User);
var sauceAccessKey = Environment.GetEnvironmentVariable("SAUCE_ACCESS_KEY", EnvironmentVariableTarget.User);
let username = process.env.SAUCE_USERNAME,
    accessKey = process.env.SAUCE_ACCESS_KEY;
exports.config = {
    sauceUser: process.env.SAUCE_USERNAME,
    sauceKey: process.env.SAUCE_ACCESS_KEY,
    // ...
};
username: ENV['SAUCE_USERNAME'],
accessKey: ENV['SAUCE_ACCESS_KEY']
sauce_username = os.environ["SAUCE_USERNAME"]
sauce_access_key = os.environ["SAUCE_ACCESS_KEY"]
Use Explicit Waits
There are many situations in which your test script may run ahead of the website or application you're testing, resulting in timeouts and a failing test. For example, you may have a dynamic content element that displays a loading screen for five seconds after a user clicks on it. If your script isn't written to account for that five-second load time, it may fail because the next interactive element isn't available yet.
The general advice from the Selenium community on how to handle this is to use explicit waits. While you could also use implicit waits, an implicit wait only waits for the appearance of certain elements on the page, while an explicit wait can be set to wait for broader conditions. Selenium guru Dave Haeffner provides an excellent example of why you should use explicit waits on his Elemental Selenium blog. Whether you use explicit or implicit waits, you should not mix the two types in the same test.
This code sample, from the SeleniumHQ documentation on explicit and implicit waits, shows how you would use an explicit wait. In their words, it "waits up to 10 seconds before throwing a TimeoutException, or, if it finds the element, will return it in 0 - 10 seconds. WebDriverWait by default calls the ExpectedCondition every 500 milliseconds until it returns successfully. A successful return for ExpectedCondition type is Boolean return true, or a not null return value for all other ExpectedCondition types."
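The sample in question (reconstructed here from the SeleniumHQ documentation; the URL and element ID are placeholders) looks like this in Java:

WebDriver driver = new FirefoxDriver();
driver.get("http://somedomain/url_that_delays_loading");
// Blocks for up to 10 seconds, polling every 500 ms, until the element appears.
WebElement myDynamicElement = (new WebDriverWait(driver, 10))
    .until(ExpectedConditions.presenceOfElementLocated(By.id("myDynamicElement")));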
Use the Latest Version of Selenium Client Bindings
Use Small, Atomic, Autonomous Tests
Small
Small refers to the idea that your tests should be short and succinct. If you have a test suite of 100 tests running concurrently on 100 VMs, then the time it will take to run the entire suite will be determined by the longest/slowest test case. Keeping your tests small ensures that your suite will run efficiently and provide you with results faster.
Atomic
An atomic test is one that focuses on testing a single feature, and which makes clear exactly what it is that you're testing. If the test fails, then you should also have a very clear idea of what needs to be fixed.
Autonomous
An autonomous test is one that runs completely independently of other tests, and is not dependent on the results of one test to run successfully. In addition, an autonomous test should use its own data to test against, and not create potential conflicts with other tests over the same data.
Use Page Objects to Model Repeated Interactions and Elements
- The SeleniumHQ documentation of page objects, hosted on Google Code
- The documentation for the Intern testing framework provides a good explanation of page objects and an example in JavaScript
- The cheezy/page-object GitHub repository includes the page-object gem for Ruby, as well as a good tutorial on how to create page objects
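As a minimal sketch of the pattern in Java (the LoginPage class, URL, and locators are illustrative, not taken from the resources above):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

// Encapsulates the login page: tests call signIn() instead of repeating locators.
public class LoginPage {
    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    public LoginPage open() {
        driver.get("https://example.com/login");
        return this;
    }

    public void signIn(String username, String password) {
        driver.findElement(By.id("username")).sendKeys(username);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("submit")).click();
    }
}

A test then reads new LoginPage(driver).open().signIn(user, pass); if the login markup changes, only the page object needs updating, not every test that logs in.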
Use Breakpoints to Diagnose Flaky Tests
You can set breakpoints in your tests in two ways: with the sauce: break Selenium command, and by using the Pause button on the Test Details page while the test is running.
sauce: break
sauce: break is a JavaScript statement that you can insert into your Selenium execute_script command to both identify and interrupt flaky tests for further diagnosis. You can find more information in the topics Annotating Tests with Selenium's JavaScript Executor and Live Testing on Virtual Mobile Devices.
The Pause Button
When your test is running, you can use the Pause button on the Test Details page to interrupt the test and assume manual control of the browser. When you're done investigating, click Stop, and the test that you breakpointed will be marked as such on the Test Details page.
Use New Accounts for Each Test
Reusing an account between test runs can lead to:
- Problems with account state before testing starts
- Failures when account setup code has changed
- Failures that only show on other accounts (like your production customers)
- Parallelisation problems between tests using the same account
When should you create a new account?
Roughly speaking, whenever your tests aren't interacting with previous or future tests:
- Running a test on different platforms
- Running a test that doesn't depend on other test state (tests with shared state are also an antipattern)
- Every time you run a test suite
Avoid Leakage of Credentials
Solution - Don't use real credentials
The best way to avoid this is to avoid using "real" credentials in tests, through the creation of temporary accounts.
Workaround - Transmit session tokens only
You can also avoid exposing sensitive credentials by using Selenium's ability to extract cookies from one session and inject them into another:
- Create a session in your environment, either directly in the application engine, or by using a local Selenium session or headless browser.
- Extract the session tokens (local storage objects, credentials, cookies, etc.).
- Use Selenium to push these objects and tokens into the browser under Sauce Labs' control.

This technique avoids sending plain-text passwords; however, the sent tokens and cookies are still logged. If your session tokens are not time-sensitive, this provides only security through obscurity. We recommend using time-sensitive session tokens.
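A rough sketch of steps 2 and 3 in Java (assuming localDriver already holds an authenticated session, e.g. from a headless login, and remoteDriver is the Sauce Labs session):

// Step 2: extract the session cookies (a java.util.Set of org.openqa.selenium.Cookie).
Set<Cookie> sessionCookies = localDriver.manage().getCookies();

// Step 3: inject them into the remote browser. The browser must already be on
// the target domain before cookies can be added to it.
remoteDriver.get("https://example.com"); // placeholder URL
for (Cookie cookie : sessionCookies) {
    remoteDriver.manage().addCookie(cookie);
}
remoteDriver.navigate().refresh(); // the remote session now carries the login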
Workaround - Change passwords after tests
If generating tokens and using unique temporary accounts is not possible, we recommend building actions into your suite that always rotate the account password.

After each test, use a locally automated browser, a direct connection to your application database, or a headless browser to change your test account's password to a new, randomly generated password. Ensure this password is stored in your CI environment, a credential store, or some other secure location.

To prevent credential loss from blocking test suites, you may also want to start each test suite by changing the password, either by using a headless browser or local Selenium session to perform your password recovery process, or by directly interacting with your application's database.
Be Aware of the Load on Your Servers
As you move into fully automated testing and builds, with tests running in parallel and against multiple device/browser/platform/operating system combinations, be aware of the load that this will place on both your CI/CD server and the site under test. A best practice in this situation is to make sure that both the CI/CD server and the site under test are on machines that are not running additional processes or open to additional network traffic, and that can handle the additional number of simultaneous jobs and tests.
Imperative v. Declarative Test Scenarios
- Imperative testing or programming is essentially spelling out with as much detail as necessary how to accomplish something.
- Declarative testing or programming is only specifying (or declaring) what needs to be accomplished.
This is seen acutely in BDD circles because the goal of BDD is to get all of the interested parties (Project, Dev, Test, Business, etc.) to collaborate on the requirements of a feature before anyone begins working on the implementation. Many testers have latched on to BDD tools as glorified test runners rather than as a way to actually facilitate BDD practices. This results in features that include actual code and data structures. Less problematic, but still usually missing the point, is a heavy reliance on imperative scenarios. For example:
- Given I open a browser
- And I navigate to http://example.com/login
- When I type in the username field bob97
- And I type in the password field F1d0
- And I click on Submit button
- Then I should see the message Welcome Back Bob
This scenario is not focused solely on the business requirements, and actually requires knowledge of implementation-specific details in order to work. The fact that the user's username is bob97 has nothing to do with the business requirements of the company. If BDD features are designed to represent the business logic, then they should only be changed if the business requirements change. If bob97 changed his password to 1<3MyD0g, the page location changed, or the success message changed, this test would fail, even though the business needs are exactly the same.
A declarative example of the same functionality looks like this:
- Given I am on the Login Page
- When I sign in with correct credentials
- Then I should see a welcome message
This is all information that the business cares about, is easier to read, and leaves it to the implementation to specify how a successful login is accomplished.
This principle can be applied to any language or test runner. Tests should largely focus on what needs to be accomplished, not the details of how it is done. They should mostly be understandable when read by non-developers. This approach goes very well with the Page Object Pattern: keep the business logic in the test, and put all of the information about the drivers, the element locators, the timing, and so on in the Page Object.
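As a sketch of how the declarative steps above might bind to page objects (using Cucumber-JVM annotations; the page object and factory names are illustrative):

import io.cucumber.java.en.Given;
import io.cucumber.java.en.Then;
import io.cucumber.java.en.When;
import static org.junit.Assert.assertTrue;

public class LoginSteps {
    // Hypothetical page objects that own the locators, timing, and test data.
    private final LoginPage loginPage = new LoginPage(DriverFactory.getDriver());
    private final HomePage homePage = new HomePage(DriverFactory.getDriver());

    @Given("I am on the Login Page")
    public void iAmOnTheLoginPage() {
        loginPage.open();
    }

    @When("I sign in with correct credentials")
    public void iSignInWithCorrectCredentials() {
        // The page object, not the feature file, knows the credentials and locators.
        loginPage.signInWithValidCredentials();
    }

    @Then("I should see a welcome message")
    public void iShouldSeeAWelcomeMessage() {
        assertTrue(homePage.welcomeMessageIsDisplayed());
    }
}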
Use Maven to Manage Project Dependencies
How Does Maven Manage Dependencies?
You add dependencies for your project to your Maven configuration file (also known as the pom.xml file, for Project Object Model). As you build your project using Maven, it resolves these dependencies and downloads them to your local repository folder. This folder is usually located in your user's home folder and is named .m2. Each dependency downloaded from the repository is a project itself, with its own dependencies; Maven recursively resolves all of these for you, merges shared dependencies, and downloads them. At the end of the process you end up with a list of dependencies that are needed to run your project on your local machine. For full details about how this process works, check out the Maven documentation.
How Do I Get the Dependencies for a Project?
From this brief description of how Maven dependencies work, you may notice a problem: How do you know exactly what dependencies are required for a project, and if you don’t have Internet access or are trying to run your project offline, how do you make sure you have all the dependencies you need available locally? Fortunately, Maven includes several commands that you can use to make sure you have all the dependencies and repositories set up so that your project will build and run with no errors.
First, check for version updates to your dependencies, and then update the outdated dependencies in your pom.xml file as necessary.
$ mvn versions:display-dependency-updates
Get a list of your repositories, and make sure they’re pointing to all the correct dependencies.
$ mvn dependency:list-repositories
Get a list of your plugin and project dependencies, and make sure they’re all available in your private or local repository.
$ mvn dependency:go-offline
If you want to automate the process, or just get a cleaner output of your dependencies, you can also use this bash command:
$ mvn -o dependency:go-offline|grep ":*.jar"|awk '{split($0,a,":");print a[2]}'