- Responsible: chutten
- Accountable: bsmedberg
- Supporting: data team, RyanVM, rvitillo, avih, Softvision
- Informed: cpeterson, elan, release management
Note from billm: bug 1228147 seems invalid because it considers users outside the experiment.
This measures the delay from when the underlying platform notifies us of a vsync edge to when we handle it on the main thread of the stated process. As such, it is a reasonable proxy for how main-thread lag influences perceived jank, since it captures part of the time it takes changed pixels to reach the screen.
- In non-e10s, CHROME is the only measure with values.
- In e10s, CHROME and CONTENT both have values.
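As a minimal sketch of the quantity being recorded (function names and bucket boundaries are illustrative, not the actual Gecko implementation):

```python
def vsync_delay_ms(vsync_edge_ms, handled_ms):
    # Delay from the platform's vsync notification to the moment the
    # main thread of the stated process actually handles it.
    return handled_ms - vsync_edge_ms

def bucket(delay_ms, bounds=(1, 2, 4, 8, 16, 32, 64)):
    # Accumulate into exponential histogram buckets, roughly how
    # telemetry histograms work; these bucket bounds are made up.
    for b in bounds:
        if delay_ms <= b:
            return b
    return float("inf")
```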
These metrics are only useful if the distribution of paint requests prompting the measures remains comparable. Both e10s and APZ change that distribution (the first by splitting the work and changing which work is performed in which process, the second by changing how many scrolling events these metrics record), so these measures cannot be meaningfully compared between cohorts that have different e10s or APZ settings.
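To make the volume part of that concrete: normalizing raw bucket counts to proportions removes differences in how many events each cohort records, but it cannot correct for a change in *which* paints are being measured, so the cross-cohort comparison stays invalid. A hypothetical sketch:

```python
def normalize(hist):
    # Convert raw bucket counts ({bucket lower bound: count}) into
    # proportions so cohorts recording different numbers of events
    # can at least be plotted together. This fixes the volume
    # difference only: if APZ or e10s changes which paints are
    # measured, the shapes still cannot be compared meaningfully.
    total = sum(hist.values())
    return {bound: count / total for bound, count in hist.items()}
```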
We have concerns about the accuracy of the data collected for each of these measures (see bug 1240887), but we have agreed to accept the existing analysis, which shows that BHR and chromehangs improved with e10s, and consider this requirement PASSed.
Followup may be required if BHR data is used to validate future addon-related jank.
- For Beta45ex1 - calculates hangs_per_minute; shows an improvement in parent-only hangs and no statistically significant change in child+parent hangs.
- For Beta45ex1 - shows the top hang stacks for the parent process in the e10s-enabled cohort.
- Original "measure e10s jank" bug - I sidetracked the discussion at the end toward INPUT_EVENT_RESPONSE_MS instead of hangs_per_minute. For more on that measure, see the "Event loop lag" section.
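The hangs_per_minute normalization itself is straightforward; a sketch assuming telemetry's activeTicks, where each tick represents five seconds of user activity:

```python
def hangs_per_minute(hang_count, active_ticks, tick_seconds=5):
    # Normalize BHR hang counts by active usage so heavy users don't
    # dominate the comparison. Each active tick is assumed to stand
    # for tick_seconds of user activity (5 s in Firefox telemetry).
    active_minutes = active_ticks * tick_seconds / 60.0
    if active_minutes == 0:
        return 0.0
    return hang_count / active_minutes
```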
Event loop lag
INPUT_EVENT_RESPONSE_MS is a better measure for e10s/non-e10s comparisons than the originally proposed EVENTLOOP_UI_ACTIVITY_EXP_MS: it is valid across more than one OS and more than one process, whereas EVENTLOOP_UI_ACTIVITY_EXP_MS is valid only on Windows and only in the chrome process. The analysis of that measure was the primary reason for closing bug 1223780 ( https://bugzilla.mozilla.org/show_bug.cgi?id=1223780 ), using analyses on Beta45ex1 (preliminary analysis: https://gist.github.com/chutten/9b9e29df10e0f7306f99 ; analysis on the later data was performed but not published, as it was largely identical) and preliminary data from Beta45ex2 ( https://gist.github.com/chutten/3129baf8d5e0f10ef54a ).
This metric has been manually verified to have the following characteristics: chrome script slows down both parent and content events, while content script slows down only content events.
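A sketch of the kind of per-cohort comparison involved, using made-up histogram data ({bucket lower bound: count}); real analyses interpolate within buckets rather than returning the raw bucket bound:

```python
def histogram_median_bucket(hist):
    # Return the lower bound of the bucket containing the middle
    # sample: a coarse stand-in for a proper interpolated median.
    total = sum(hist.values())
    seen = 0
    for bound in sorted(hist):
        seen += hist[bound]
        if 2 * seen >= total:
            return bound

# Made-up INPUT_EVENT_RESPONSE_MS distributions for two cohorts.
e10s = {1: 10, 4: 50, 16: 30, 64: 10}
non_e10s = {1: 5, 4: 30, 16: 45, 64: 20}
```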
Holistic camera-based responsiveness testing will only detect problems that manifest on that one machine. Jank is a feature of distributions recorded across populations of users, not just something experienced by one user at one time.
I have heard no concerns. I consider this a pass.
jank per minute of active usage
This is a combined metric, bug 1198650. We have made the decision that this no longer blocks e10s, because we are looking at the individual components.
- e10s comparison validated: jimm
- Current e10s diff: much better - ~90% on all platforms
- No results on OS X
- Note: measures browser responsiveness during page load. In e10s measured only at the chrome process, therefore the improvement seems real. It would still be useful to also collect data for the content process. TBD.
- bug 631571 - add the test to talos
- bug 710296 - enable the test in e10s (later comments)
These two metrics are better in e10s than in non-e10s, as are the other CYCLE_COLLECTOR.*PAUSE metrics. This is to be expected, as they no longer contend for process resources. Analysis was performed on Beta45ex1. Analysis on Beta45ex2 will be here once out of review.