QA/Platform/Graphics/Features/Planning/Release Criteria Checklist

The following is based on the Electrolysis Release Criteria plan.

Important!

It is critical that criteria continue to be measured throughout the process, especially after they have been met.

Rollout Criteria

Criteria to roll out to 100% of users on Release

Components

  • Who is going to get your feature?
  • When are they going to get it?
  • How are you going to measure the impact?
  • Where are you tracking the roll-out?
  • Who is responsible for conducting, tracking, and ultimately signing off the roll-out?

Criteria

  • Add-ons - do you ship to users with/without add-ons?
  • Plugins - do you ship to users with/without plugins?
  • Accessibility - do you ship to users with/without a11y enabled?
  • Platform - do you ship to users on specific platforms?
  • Legacy - do you ship to users on old hardware/drivers?
  • Support - what is your support plan once the roll-out happens?

Release Criteria

Criteria to begin a staged roll-out to Release users

Components

People recommended to be assigned to each criterion

  • Responsible - a person who is ultimately responsible for a specific criterion
  • Accountable - a person who is accountable for the success/failure of meeting a specific criterion
  • Supporting - one or more people who provide support to ensure a specific criterion is met
  • Consultant(s) - one or more people who are consulted during the development of a specific criterion
  • Informees - one or more people who need to be informed about the status of a specific criterion
  • Metrics - one or more people responsible for providing measurement reports

Information to track for each criterion

  • Description - brief description of the measurement
  • Metric - equation/formula to be used
  • Analysis - link(s) to measurement analysis reports
  • Analyst - the person responsible for authoring the measurement analysis reports
  • Control Value - latest measurement value for your control group (those not using the feature)
  • Feature Value - latest measurement value for your feature group (those using the feature)
  • Criteria Met - whether the measurement meets the criteria (see the sketch after this list)
  • Authority - the person responsible for signing off the measurement
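
To illustrate how the Control Value, Feature Value, and Criteria Met fields relate, here is a minimal sketch of a criterion check. The function name, the percentage threshold, and the example numbers are hypothetical, not part of any Mozilla tooling; the actual metric and threshold come from the criterion's own definition.

    # Minimal sketch of a criterion check; names and thresholds are hypothetical.
    def criteria_met(control_value, feature_value, max_regression_pct=5.0):
        """Return True if the feature group regresses the metric by no more
        than max_regression_pct relative to the control group.

        Assumes a metric where lower is better (e.g. crash rate, page load time).
        """
        if control_value <= 0:
            raise ValueError("control value must be positive")
        regression_pct = (feature_value - control_value) / control_value * 100.0
        return regression_pct <= max_regression_pct

    # Example: control crash rate 2.0 crashes/1000h, feature 2.08 -> 4% regression.
    print(criteria_met(control_value=2.0, feature_value=2.08))  # True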

Additional information you may want to track for each criterion

  • meta bugs
  • documentation
  • reports and analysis
  • caveats to be aware of

Recommended Criteria

  • Inheritance - is this feature a component of a project/initiative? How does this need to be tracked against project/initiative release criteria?
  • Stability - are users crashing more or less when using your feature?
  • Engagement - are users more or less engaged when using your feature?
  • Performance - do tests detect a regression or progression in performance?
  • Page Load - what is the impact on page load time?
  • Startup/Shutdown - what is the impact on startup/shutdown time?
  • Scrolling - what is the impact on scrolling performance?
  • Plugins - what is the impact when using plugins?
  • Memory Usage - what is the impact on memory usage?
  • UI Smoothness - what is the impact on chrome performance?
  • Graphics - what is the impact on graphics benchmarks?
  • Slow Scripts - what is the impact on slow script warning frequency/severity?
  • Tests - does your feature add tests? does your feature regress existing tests? are there blind spots that need tests?
  • Blocking Bugs - a list of all bugs blocking Release, Beta, and Aurora migration
  • Accessibility - what is the impact on accessibility?
  • Add-ons - what is the impact on add-ons?
  • Correctness - does your feature introduce visual glitches? does it stray far from the spec? Get top bug filers to look at it (e.g. font rendering)
  • Legacy - how does your feature impact users on old hardware/drivers? do we need to blacklist? Are there any users we're going to leave behind?
  • Thresholds - what are your thresholds for Aurora, Beta, and Release?
  • Regression Awareness - how long does it take to find out about regressions (i.e. the delta between the date the bug was filed and the date the change landed)? See the sketch after this list.
  • Experiments - make sure some (if not all) of your criteria are proven using A/B experimentation
  • Dogfooding - make sure you have a manual testing plan including internal dogfooding, community testing, user feedback, and engagement with top bug filers
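
For Regression Awareness, the delta is simply the elapsed time between the regressing change landing and the regression bug being filed. A minimal sketch, with hypothetical dates standing in for values that would in practice come from the pushlog and Bugzilla:

    # Sketch of the regression-awareness delta; the dates are hypothetical
    # placeholders for values pulled from the pushlog and Bugzilla.
    from datetime import datetime

    def regression_awareness_days(change_landed, bug_filed):
        """Days between a regressing change landing and the bug being filed."""
        return (bug_filed - change_landed).days

    landed = datetime(2015, 6, 1)   # date the regressing change landed
    filed = datetime(2015, 6, 15)   # date the regression bug was filed
    print(regression_awareness_days(landed, filed))  # 14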

Experimentation Criteria

Telemetry Experiments, conducted when a feature is in Nightly/Aurora, should identify the following criteria:

  • Product
  • Version
  • OS
  • Channel
  • Build ID
  • Language
  • Sample ratio

and include the following info (a sketch of how these fields might fit together follows the list):

  • experiment ID
  • specific conditions for running the experiment (e.g. specific user profiles you want excluded)
  • experiment start date & end date
  • experiment duration
  • XPI URL & hash
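
To make the shape of this information concrete, below is a rough sketch of how these fields might be expressed when defining an experiment. The field names and values are illustrative assumptions, not the authoritative Telemetry Experiments manifest schema; check the current manifest documentation for the exact format.

    # Rough sketch of an experiment definition; field names and values are
    # illustrative, not the authoritative Telemetry Experiments schema.
    import json

    experiment = {
        "id": "gfx-feature-rollout@experiments.mozilla.org",      # experiment ID
        "xpiURL": "https://example.com/gfx-feature-rollout.xpi",  # XPI URL
        "xpiHash": "sha256:<hash of the XPI>",                    # XPI hash
        "appName": ["Firefox"],              # Product
        "minVersion": "40.0",                # Version range
        "maxVersion": "41.0",
        "os": ["WINNT", "Darwin", "Linux"],  # OS
        "channel": ["nightly", "aurora"],    # Channel
        "minBuildID": "20150601000000",      # Build ID
        "locale": ["en-US"],                 # Language
        "sample": 0.1,                       # Sample ratio (10% of eligible users)
        "startTime": 1433116800,             # start date (epoch seconds, 2015-06-01)
        "endTime": 1435708800,               # end date (2015-07-01)
        "maxActiveSeconds": 604800,          # duration (7 days of active use)
    }

    print(json.dumps(experiment, indent=2))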

See also E10S Experiments Wiki.

Process Criteria

Do we need to use this process or just land?

  • We need to be able to answer: will we lose users if we ship this feature as is?
  • Make this lightweight enough that engineers can just ask for review.