B2G/QA/2014-11-14 Performance Acceptance

2014-11-14 Performance Acceptance Results

Overview

These are the results of performance release acceptance testing for FxOS 2.1, as of the Nov 14, 2014 build.

Our acceptance metric is startup time from launch to visually complete, as measured by the Gaia Performance Tests, with the system initialized using make reference-workload-light.

For this release, results are compared against two baselines: 2.0 performance, and our responsiveness guidelines, which target a startup time of no more than 1000 ms.

The Gecko and Gaia revisions of the builds being compared are:

2.0:

  • Gecko: mozilla-b2g32_v2_0/82a6ed695964
  • Gaia: 7b8df9941700c1f6d6d51ff464f0c8ae32008cd2

2.1:

  • Gecko: mozilla-b2g34_v2_1/cbed31eda0d0
  • Gaia: 3210b4c4a9b7272820ab1d40835217e3de440652

Startup -> Visually Complete

Startup -> Visually Complete measures the interval from a cold launch (the application is not already loaded in memory) until the application has initialized all of its initial onscreen content. Data might still be loading in the background, but at this point only minor UI elements related to that background load, such as proportional scroll bar thumbs, may still be changing.

This is equivalent to Above the Fold in web development terms.

More information about this timing can be found on MDN.

Execution

These results were generated from 480 data points per application per release (fewer where error repetitions were discarded; see Result Analysis below), collected over 16 separate runs of make test-perf as follows:

  1. Flash to base build
  2. Flash stable FxOS build from tinderbox
  3. Constrain phone to 319MB via bootloader
  4. Clone gaia
  5. Check out the gaia revision referenced in the build's sources.xml
  6. GAIA_OPTIMIZE=1 NOFTU=1 make reset-gaia
  7. make reference-workload-light
  8. For up to 16 repetitions:
    1. Reboot the phone
    2. Wait for the phone to become visible to adb, then wait an additional 30 seconds for it to settle.
    3. Run make test-perf with 31 replicates (a sketch of this repetition loop follows the list)
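
The following is a minimal sketch of steps 8.1-8.3, assuming adb is on PATH, that the gaia clone from step 4 lives in a directory named gaia, and that the harness's RUNS variable sets the replicate count; it illustrates the loop rather than reproducing the exact tooling used.

  import os
  import subprocess
  import time

  REPETITIONS = 16      # up to 16 repetitions per release
  REPLICATES = 31       # replicates per make test-perf invocation
  SETTLE_SECONDS = 30   # settle time after the phone appears to adb

  env = os.environ.copy()
  env["RUNS"] = str(REPLICATES)  # assumption: RUNS sets the replicate count

  for rep in range(REPETITIONS):
      subprocess.check_call(["adb", "reboot"])
      subprocess.check_call(["adb", "wait-for-device"])
      time.sleep(SETTLE_SECONDS)
      # Assumed location of the gaia checkout from step 4; each run's
      # output would be archived per repetition for the analysis below.
      subprocess.check_call(["make", "test-perf"], cwd="gaia", env=env)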

Result Analysis

First, any repetitions showing app errors are thrown out.

Then, the first data point is eliminated from each repetition, as it has been shown to be a consistent outlier, most likely because it is the first launch after reboot. The remaining results are typically consistent within a repetition, leaving 30 data points per repetition.

These are combined into a large data point set. Each set has been graphed as a 32-bin histogram so that its distribution is apparent, with comparable sets from 2.0 and 2.1 plotted on the same graph.
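
As an illustration of this filtering and pooling (the toy data, function name, and matplotlib call are assumptions of this sketch, not the actual report tooling):

  import numpy as np
  import matplotlib.pyplot as plt

  def combine(repetitions):
      """Drop the first launch of each repetition (a consistent outlier
      after reboot) and pool the remainder into one flat data set."""
      return np.concatenate([np.asarray(rep[1:], dtype=float)
                             for rep in repetitions])

  # Hypothetical per-repetition launch times in ms (real runs have 31 each).
  reps_2_0 = [[1210, 1015, 1022, 1008], [1302, 1019, 1011, 1025]]
  reps_2_1 = [[1350, 1230, 1224, 1241], [1410, 1228, 1235, 1219]]

  times_2_0 = combine(reps_2_0)
  times_2_1 = combine(reps_2_1)

  # Both releases on the same 32-bin histogram, as in the graphs below.
  plt.hist([times_2_0, times_2_1], bins=32, label=["2.0", "2.1"])
  plt.xlabel("Startup -> Visually Complete (ms)")
  plt.ylabel("Launches")
  plt.legend()
  plt.show()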

For each set, the median and the 95th percentile have been calculated. These have real-world significance as follows:

Median
50% of launches are faster than this. This can be considered typical performance, but it's important to note that 50% of launches are slower than this, and they could be much slower. The shape of the distribution is important.
95th Percentile (p95)
95% of launches are faster than this. This is a more quality-oriented statistic commonly used for page load and other task-time measurements. It is not dependent on the shape of the distribution and better represents a performance guarantee.
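
For example, given a pooled set of launch times, both statistics can be read off directly; numpy is shown for illustration only, and the input data here is hypothetical:

  import numpy as np

  # Hypothetical pooled launch times in ms.
  times = np.array([1228, 1240, 1212, 1301, 1255, 1233, 1348, 1221],
                   dtype=float)

  median = np.percentile(times, 50)  # half of launches are faster than this
  p95 = np.percentile(times, 95)     # 95% of launches are faster than this

  print("median: %.0f ms, p95: %.0f ms" % (median, p95))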

Distributions for launch times are asymmetric and positively skewed, rather than normal. This is typical of load-time and other task-time tests, where a hard lower bound on completion time applies. Therefore, statistics that assume a normal distribution, such as the mean, standard deviation, and confidence intervals, are potentially misleading and are not reported here. They are available in the summary data sets, but their validity is questionable.

On each graph, the solid line represents median and the broken line represents p95.

Result Criteria

Results are determined as OVER or UNDER the listed target in the documented release acceptance criteria, or INDETERMINATE if it is unclear whether they meet the criteria.

To be OVER or UNDER, the result must differ from the target by at least 25 ms. Within 25 ms of the target in either direction, the result is INDETERMINATE. This 25 ms margin accounts for noise in the results, and it is a conservative estimate; based on accuracy studies with similar numbers of data points, our noise level is probably significantly lower.
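
A minimal sketch of this determination; the function name and signature are illustrative, not part of the acceptance tooling:

  MARGIN_MS = 25  # noise margin on either side of the target

  def classify(median_ms, target_ms, margin_ms=MARGIN_MS):
      """Return OVER, UNDER, or INDETERMINATE for a median vs. its target."""
      if median_ms <= target_ms - margin_ms:
          return "UNDER"
      if median_ms >= target_ms + margin_ms:
          return "OVER"
      return "INDETERMINATE"

  # Examples drawn from the Results section below.
  print(classify(1228, 1150))  # Calendar -> OVER
  print(classify(1557, 1550))  # Camera   -> INDETERMINATE
  print(classify(876, 1000))   # Contacts -> UNDER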

At release acceptance time, all results should be UNDER or at least INDETERMINATE. Results significantly OVER may not qualify for release acceptance.

Median launch time has been used for this determination, per current convention. p95 launch time might better capture a guaranteed level of quality for the user; in cases where p95 is significantly over the target, further investigation may be warranted.

Results

Calendar

FxOS Performance Comparison Results, 2.1 2014-11-14 Calendar


Previous Comparison

2.0

  • 210 data points
  • Median: 1017 ms
  • p95: 1301 ms

2.1

  • 480 data points
  • Median: 1228 ms
  • p95: 1348 ms

Result: OVER (target 1150 ms)

Comment: Results are fundamentally the same as the last comparison.

Camera

FxOS Performance Comparison Results, 2.1 2014-11-14 Camera


Previous Comparison

2.0

  • 330 data points
  • Median: 1416 ms
  • p95: 1886 ms

2.1

  • 480 data points
  • Median: 1557 ms
  • p95: 1668 ms

Result: INDETERMINATE (target 1550 ms)

Comment: Results are fundamentally the same as the last comparison.

Clock

FxOS Performance Comparison Results, 2.1 2014-11-14 Clock


Previous Comparison

2.0

  • 480 data points
  • Median: 901 ms
  • p95: 1162 ms

2.1

  • 480 data points
  • Median: 1016 ms
  • p95: 1135 ms

Result: INDETERMINATE (target 1000 ms)

Comment: Results are fundamentally the same as the last comparison.

Contacts

FxOS Performance Comparison Results, 2.1 2014-11-14 Contacts


Previous Comparison

2.0

  • 480 data points
  • Median: 747 ms
  • p95: 856 ms

2.1

  • 480 data points
  • Median: 876 ms
  • p95: 995 ms

Result: UNDER (target 1000 ms)

Comment: Results are fundamentally unchanged from the last run.

Cost Control

FxOS Performance Comparison Results, 2.1 2014-11-14 Cost Control


Previous Comparison

2.0

  • 450 data points
  • Median: 1603 ms
  • p95: 1831 ms

2.1

  • 480 data points
  • Median: 2606 ms
  • p95: 2794 ms

Result: OVER (target 1000 ms)

Comment: Results are fundamentally the same as the last comparison.

Dialer

FxOS Performance Comparison Results, 2.1 2014-11-14 Dialer


Previous Comparison

2.0

  • 480 data points
  • Median: 469 ms
  • p95: 591 ms

2.1

  • 480 data points
  • Median: 559 ms
  • p95: 619 ms

Result: UNDER (target 1000 ms)

Comment: Results are fundamentally the same as in the last comparison.

FM Radio

FxOS Performance Comparison Results, 2.1 2014-11-14 FM Radio


Previous Comparison

2.0

  • 480 data points
  • Median: 462 ms
  • p95: 717 ms

2.1

  • 480 data points
  • Median: 524 ms
  • p95: 712 ms

Result: UNDER (target 1000 ms)

Comment: Results are fundamentally the same as in the last comparison.

Gallery

FxOS Performance Comparison Results, 2.1 2014-11-14 Gallery


Previous Comparison

2.0

  • 480 data points
  • Median: 873 ms
  • p95: 1113 ms

2.1

  • 480 data points
  • Median: 943 ms
  • p95: 1062 ms

Result: UNDER (target 1000 ms)

Comment: Results are fundamentally the same as in the last comparison.

Music

FxOS Performance Comparison Results, 2.1 2014-11-14 Music


Previous Comparison

2.1

  • 480 data points
  • Median: 917 ms
  • p95: 1087 ms

Result: UNDER (target 1000 ms)

Comment: Results are fundamentally the same as the last comparison.

Settings

FxOS Performance Comparison Results, 2.1 2014-11-14 Settings


Previous Comparison

2.0

  • 480 data points
  • Median: 3391 ms
  • p95: 3735 ms

2.1

  • 450 data points
  • Median: 2591 ms
  • p95: 2980 ms

Result: INDETERMINATE (target 2600 ms)

Comment: Settings is showing a result approximately 35 ms slower than last time. It's unclear whether this is significant.

SMS

FxOS Performance Comparison Results, 2.1 2014-11-14 SMS


Previous Comparison

2.0

  • 480 data points
  • Median: 1100 ms
  • p95: 1279 ms

2.1

  • 480 data points
  • Median: 1254 ms
  • p95: 1460 ms

Result: OVER (target 1200 ms)

Comment: Results are fundamentally the same as the last comparison.

Video

FxOS Performance Comparison Results, 2.1 2014-11-14 Video


Previous Comparison

2.0

  • 450 data points
  • Median: 923 ms
  • p95: 1138 ms

2.1

  • 480 data points
  • Median: 941 ms
  • p95: 1077 ms

Result: UNDER (target 1000 ms)

Comment: Results are fundamentally the same as in the last comparison.