2014-11-21 Performance Acceptance Results
Overview
These are the results of performance release acceptance testing for FxOS 2.1, as of the Nov 21, 2014 build.
Our acceptance metric is startup time from launch to visually complete, as measured by the Gaia Performance Tests, with the system populated via make reference-workload-light.
For this release, results are compared against two baselines: 2.0 performance and our responsiveness guidelines, which target a startup time of no more than 1000 ms.
The Gecko and Gaia revisions of the builds being compared are:
2.0:
- Gecko: mozilla-b2g32_v2_0/82a6ed695964
- Gaia: 7b8df9941700c1f6d6d51ff464f0c8ae32008cd2
2.1:
- Gecko: TBA
- Gaia: TBA
Startup -> Visually Complete
Startup -> Visually Complete times the interval from a launch where the application is not already loaded in memory (a cold launch) until the application has initialized all of its initial onscreen content. Data may still be loading in the background, but only minor UI elements related to that background load, such as proportional scroll-bar thumbs, may still be changing at this point.
This is equivalent to Above the Fold in web development terms.
More information about this timing can be found on MDN.
Execution
These results were generated from up to 480 data points per application per release, collected over 16 runs of make test-perf as follows (a sketch of the repetition loop appears after this list):
- Flash the base build
- Flash a stable FxOS build from tinderbox
- Constrain the phone to 319 MB of memory via the bootloader
- Clone gaia
- Check out the gaia revision referenced in the build's sources.xml
- GAIA_OPTIMIZE=1 NOFTU=1 make reset-gaia
- make reference-workload-light
- For up to 16 repetitions:
- Reboot the phone
- Wait for the phone to appear on adb, then an additional 30 seconds for it to settle
- Run make test-perf with 31 replicates
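For reference, the repetition loop above can be expressed roughly as the following Python sketch. The adb commands and the RUNS variable passed to make test-perf are assumptions about how the harness is invoked, not the exact commands used for these runs.

import os
import subprocess
import time

REPETITIONS = 16
REPLICATES = "31"

for repetition in range(REPETITIONS):
    # Reboot the phone and wait for it to reappear on adb
    subprocess.run(["adb", "reboot"], check=True)
    subprocess.run(["adb", "wait-for-device"], check=True)
    time.sleep(30)  # give the device an additional 30 seconds to settle

    # Run the Gaia performance tests with 31 replicates per app
    # (RUNS is an assumed name for the replicate-count variable)
    env = dict(os.environ, RUNS=REPLICATES)
    subprocess.run(["make", "test-perf"], env=env, check=True)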
Result Analysis
First, any repetitions showing app errors are thrown out.
Then the first data point is eliminated from each repetition, as it has been shown to be a consistent outlier, likely because it is the first launch after a reboot. The remaining results are typically consistent within a repetition, leaving 30 data points per repetition.
These are combined into a large data point set. Each set has been graphed as a 32-bin histogram so that its distribution is apparent, with comparable sets from 2.0 and 2.1 plotted on the same graph.
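As a rough illustration of this cleanup and plotting step, the sketch below assumes each repetition's results have been exported as a JSON file with an errors field and a per-launch list of visually-complete times; those field names and the file layout are assumptions, not the actual test-perf output format.

import json

import numpy as np
import matplotlib.pyplot as plt

def load_points(rep_files):
    # Combine the surviving data points from all repetitions into one set
    points = []
    for path in rep_files:
        with open(path) as f:
            rep = json.load(f)
        if rep.get("errors"):               # discard repetitions with app errors
            continue
        launches = rep["visuallyComplete"]  # assumed key: launch times in ms
        points.extend(launches[1:])         # drop the first post-reboot outlier
    return np.array(points)

def plot_comparison(app, points_v20, points_v21):
    # 32-bin histograms for both releases on the same axes
    plt.hist(points_v20, bins=32, alpha=0.5, label="2.0")
    plt.hist(points_v21, bins=32, alpha=0.5, label="2.1")
    plt.xlabel("Launch to visually complete (ms)")
    plt.ylabel("Launches")
    plt.title(app)
    plt.legend()
    plt.show()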
For each set, the median and the 95th percentile have been calculated. Their real-world significance is as follows:
- Median
- 50% of launches are faster than this. This can be considered typical performance, but it's important to note that 50% of launches are slower than this, and they could be much slower. The shape of the distribution is important.
- 95th Percentile (p95)
- 95% of launches are faster than this. This is a more quality-oriented statistic commonly used for page load and other task-time measurements. It is not dependent on the shape of the distribution and better represents a performance guarantee.
Launch-time distributions are positively skewed and asymmetric rather than normal. This is typical of load-time and other task-time tests, where completion time has a hard lower bound. Statistics that assume a normal distribution, such as the mean, standard deviation, and confidence intervals, are therefore potentially misleading and are not reported here. They are available in the summary data sets, but their validity is questionable.
On each graph, the solid line represents median and the broken line represents p95.
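A minimal sketch of the per-set summary, reporting only the order statistics used in this report:

import numpy as np

def summarize(points):
    # Only order statistics are reported; mean and standard deviation are
    # omitted because the launch-time distributions are positively skewed.
    return {
        "median": float(np.median(points)),
        "p95": float(np.percentile(points, 95)),
    }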
Result Criteria
Results are determined as OVER or UNDER the listed target in the documented release acceptance criteria, or INDETERMINATE if it is unclear whether they meet the criteria.
To be marked OVER or UNDER, the result must differ from the target by at least 25 ms; within 25 ms of the target in either direction, the result is INDETERMINATE. This 25 ms margin accounts for noise in the results and is a conservative estimate; based on accuracy studies with similar numbers of data points, our actual noise level is probably well under that.
At release acceptance time, all results should be UNDER or at least INDETERMINATE. Results significantly OVER may not qualify for release acceptance.
Median launch time has been used for this determination, per current convention. p95 launch time might better capture a guaranteed level of quality for the user; where p95 is significantly over the target, further investigation may be warranted.
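The determination above amounts to a threshold check with the 25 ms noise margin; a sketch, where target_ms is the per-app target listed in the Results section:

MARGIN_MS = 25

def verdict(median_ms, target_ms):
    # UNDER or OVER only when the median differs from the target by at
    # least the 25 ms noise margin; otherwise INDETERMINATE
    if median_ms <= target_ms - MARGIN_MS:
        return "UNDER"
    if median_ms >= target_ms + MARGIN_MS:
        return "OVER"
    return "INDETERMINATE"

# Example: Clock 2.1 median of 1021 ms against its 1000 ms target
print(verdict(1021, 1000))  # INDETERMINATE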
Results
Calendar
2.0
- 210 data points
- Median: 1017 ms
- p95: 1301 ms
2.1
- 480 data points
- Median: 1240 ms
- p95: 1380 ms
Result: OVER (target 1150 ms)
Comment: Results are fundamentally the same as in the last comparison.
Camera
2.0
- 330 data points
- Median: 1416 ms
- p95: 1886 ms
2.1
- 480 data points
- Median: 1590 ms
- p95: 1721 ms
Result: OVER (target 1550 ms)
Comment: Results are slightly worse than in the last comparison, bringing Camera over the 25 ms margin. However, there are no code changes to explain a specific regression here, and this might be natural variation.
Clock
2.0
- 480 data points
- Median: 901 ms
- p95: 1162 ms
2.1
- 480 data points
- Median: 1021 ms
- p95: 1201 ms
Result: INDETERMINATE (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
Contacts
2.0
- 480 data points
- Median: 747 ms
- p95: 856 ms
2.1
- 480 data points
- Median: 882 ms
- p95: 999 ms
Result: UNDER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
Cost Control
2.0
- 450 data points
- Median: 1603 ms
- p95: 1831 ms
2.1
- 480 data points
- Median: 2642 ms
- p95: 2814 ms
Result: OVER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
Dialer
2.0
- 480 data points
- Median: 469 ms
- p95: 591 ms
2.1
- 480 data points
- Median: 561 ms
- p95: 617 ms
Result: UNDER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
FM Radio
2.0
- 480 data points
- Median: 462 ms
- p95: 717 ms
2.1
- 480 data points
- Median: 521 ms
- p95: 737 ms
Result: UNDER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
Gallery
2.0
- 480 data points
- Median: 873 ms
- p95: 1113 ms
2.1
- 480 data points
- Median: 963 ms
- p95: 1098 ms
Result: UNDER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
Music
2.1
- 480 data points
- Median: 925 ms
- p95: 1071 ms
Result: UNDER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.
Settings
2.0
- 480 data points
- Median: 3391 ms
- p95: 3735 ms
2.1
- 450 data points
- Median: 2589 ms
- p95: 3038 ms
Result: INDETERMINATE (target 2600 ms)
Comment: Results are fundamentally the same as in the last comparison.
SMS
2.0
- 480 data points
- Median: 1100 ms
- p95: 1279 ms
2.1
- 480 data points
- Median: 1268 ms
- p95: 1438 ms
Result: OVER (target 1200 ms)
Comment: Results are fundamentally the same as in the last comparison.
Video
2.0
- 450 data points
- Median: 923 ms
- p95: 1138 ms
2.1
- 480 data points
- Median: 956 ms
- p95: 1084 ms
Result: UNDER (target 1000 ms)
Comment: Results are fundamentally the same as in the last comparison.