The Builds table in the Analytics dashboard includes a metric, Efficiency, that indicates the level of parallelization achieved by the tests in each build. This topic will help you interpret that metric to achieve greater efficiency in your build process. 

Benchmarking Efficiency

The Efficiency metric is expressed as a percentage and is based on how long the build took to run compared to the duration of the longest test within it. For example, let's say that Build A contains four tests with these run times:

Test    Run Time
T1      30 secs
T2      60 secs
T3      45 secs
T4      30 secs

In this build, T2 would serve as the benchmark test because it takes the longest to run at 60 seconds. If the entire build takes 60 seconds to run, then it has achieved full efficiency, because all the tests are running in parallel, and the Efficiency metric would be 100%.
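As a rough sketch of that comparison, the snippet below treats efficiency as the longest test's duration divided by the build's duration. The efficiency_percent helper and this formula are illustrative assumptions only; they reproduce the 100% figure here and the roughly 52% figure later in this topic, but they are not necessarily the exact calculation the dashboard uses.

    # Illustrative sketch only: assumes efficiency = longest test / build duration.
    # This is not necessarily the dashboard's exact formula.
    def efficiency_percent(test_durations_secs, build_duration_secs):
        """Percentage of full parallelization, benchmarked against the longest test."""
        benchmark = max(test_durations_secs)       # Build A: T2 at 60 secs
        return 100.0 * benchmark / build_duration_secs

    build_a = [30, 60, 45, 30]                     # T1-T4 from the table above
    print(efficiency_percent(build_a, 60))         # 100.0 -> fully parallel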

Consider another example, Build B:

Test    Run Time
T1      15 secs
T2      20 secs
T3      10 secs
T4      45 secs
T5      30 secs
T6      15 secs
T7      10 secs
T8      20 secs
T9      15 secs

In this example, T4 would serve as the benchmark for the build's efficiency because it takes the longest to run at 45 seconds. However, because the majority of the tests in this build take less than 30 seconds to run, the overall efficiency of the build could be greatly improved by reducing the run times of T4 and possibly T5, as described in the next section.
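Running Build B's timings through the same hypothetical helper sketched earlier confirms that T4 sets the benchmark:

    build_b = [15, 20, 10, 45, 30, 15, 10, 20, 15]   # T1-T9 from the table above
    print(max(build_b))                              # 45 -> T4 is the benchmark test
    print(efficiency_percent(build_b, 45))           # 100.0 if the build runs fully in parallel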

Improving Efficiency

If the Efficiency metric for a build is less than 100%, the entire build took longer to run than the longest test within it. For example, if Build A from the benchmarking example ran for 115 seconds instead of 60, its efficiency would be around 52% (60 / 115). The percentage indicates the degree of parallelization achieved by the tests in your build. There are several ways to improve the efficiency of your builds, outlined in Best Practices for Running Tests. Depending on the efficiency rating of your build, there are some specific steps you can take to get started.

0% (Sequential): The build took as long to run as the total run times of all the tests within it, which means that the tests ran in sequential order. You should consider using a test framework to run your tests in parallel.

1-90% (Semi-parallel): The build took less time to run than the total run times of all the tests within it, which means that some tests ran in parallel and some ran sequentially. You should follow the best practice of keeping your tests small, atomic, and autonomous so that they don't depend on one another completing before they can execute.

91-100% (Parallel): The build took approximately the same amount of time to run as the longest test within it, meaning that most, if not all, of the tests ran in parallel. To improve the overall efficiency of your build, you should look at breaking down long-running tests into smaller, shorter tests. In the benchmarking example for Build A, if T2 could be broken down into two tests that ran for 30 seconds each, the longest-running test in the build would become T3 at 45 seconds, reducing the build's minimum possible run time by 25% (see the sketch below).
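These bands, and the effect of splitting T2 in Build A, can be illustrated with a small helper. The classify_efficiency function below is a hypothetical sketch of the guidance above, not part of any product API, and the 25% figure is reproduced as simple arithmetic.

    # Hypothetical helper mirroring the guidance above; not a product API.
    def classify_efficiency(efficiency_pct):
        """Map an Efficiency percentage to its degree of parallelization."""
        if efficiency_pct <= 0:
            return "Sequential: run your tests in parallel with a test framework."
        elif efficiency_pct <= 90:
            return "Semi-parallel: keep tests small, atomic, and autonomous."
        else:
            return "Parallel: break long-running tests into smaller, shorter tests."

    print(classify_efficiency(52))     # Semi-parallel: keep tests small, atomic, and autonomous.

    # Effect of splitting T2 (60 secs) in Build A into two 30-second tests:
    old_benchmark = 60                              # T2 was the longest test
    new_benchmark = max([30, 30, 30, 45, 30])       # T3 becomes the longest at 45 secs
    print(100 * (old_benchmark - new_benchmark) / old_benchmark)   # 25.0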