From MozillaWiki
< QA‎ | Platform‎ | Graphics


As part of Mozilla's Quantum initiative, the Graphics team is interested in developing a feature qualification process based on the model used by the Electrolysis team. The general idea is to have a well-documented, clear path to success for any Graphics feature that is part of this initiative. The goal is to ship major features without losing users.


The development of this model is being driven by Anthony Hughes (:ashughes), the QA Engineer for the GFX team.


This project is currently in the Exploratory Phase, which is expected to be complete by July 31, 2016.

Exploratory Phase

The purpose of this phase is to consult with those involved in the Electrolysis project to gain awareness of the pros, cons, and evolution of the release model.

Discussion Notes

  • Clearly define the KPIs for shipping a feature (e.g. performance, stability, correctness) [1][2]
  • Democratize decision making and the information needed to make decisions; this reinforces quality and prevents high-risk changes from backfiring.
  • Clearly document what does and does not block shipping a feature
  • Clearly document any flaws in the data being used to make decisions, people need to be able to trust the data
  • Before shipping a feature, be able to clearly answer: will we lose users if we ship? If the answer is a clear no, the feature can ship. The e10s team used the DAU:MAU ratio as a key metric to answer this question.
  • Conduct many experiments to (dis)prove theories [3]
  • Resist the temptation to increase scope or uplift risky changes just because the release cycle is longer; treat it as if you still have only 6 weeks.
  • Make sure automated testing has a clearly defined role in the release criteria.
  • Make sure status is clearly and frequently communicated to all stakeholders [4]
  • When conflict breaks out between stakeholders, move it into a mediated meeting; this creates a safe space to work things out and prevents arguments from making the project look bad.
  • Raise red flags early, often, and clearly
  • System add-ons can be used both to experiment on Beta and to release to a specific sub-population
  • Clearly define owners/approvers for each feature and quality area; the e10s team used the RASCI Responsibility Matrix
  • Clearly define and track metrics for users utilizing your feature and users still on the old path
  • Clearly define when a feature is good enough to ship to millions of users and how each bug fix gets you closer to that goal; not doing this can lead to developer stress and burnout
  • Each release criterion should be backed by data to prove assumptions
  • Be sure to conduct a broad-spectrum analysis to make sure blind spots are not being overlooked; this includes looking for both negative and positive impacts of the feature (e.g. issues were found in Session Restore and Networking that were outside the scope of e10s) [5]
  • Release criteria should be subject to change. Poor data, lack of understanding, and lack of awareness can all play into initial assumptions about what good release criteria look like. As more information comes to light, be willing to evolve the criteria.
  • Ensure that release criteria remain met throughout the process; don't go blind once they are checked off.
  • Ensure meeting release criteria is a shared responsibility.
  • Tests are critical: make them a release criterion.
  • Automate as much as possible, including data collection and analysis, and make sure time to automate is factored into the schedule
  • Clear structure with well-defined roles at every level (PM, Dev lead, Dev team, QA)
  • Clear development plan with work split into milestones (reviewed weekly)
  • Weekly meeting with everyone
  • Well defined (and well monitored) release criteria
  • Experiments on Aurora/Beta were well communicated via email
  • QA managed to achieve decent test coverage (mainly thanks to the amount of time available)
  • Don't push fixes and experiments very late in the cycle, and try to bake this attitude into everyone at the table
  • Hard to QA due to the size of the feature (covering almost the whole browser) – this caused confusion at times about which Firefox version should be used to run the Full Test set (which ran across several months)
  • High level of uncertainty at times about when to release (including when to enable the feature on each channel) – at times this was clearly communicated via email, but contradictory information then surfaced in meetings
  • Need clearer communication from QA – progress was hard to track for most managers not directly involved in e10s; status reports should have included an overall view: what was tested, what remains to be tested, and future plans (what, when, and on which version)
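The DAU:MAU guard mentioned above can be sketched as a simple check. This is an illustrative sketch only, not Mozilla's actual telemetry tooling: the function names, cohort counts, and the 2% tolerance are all hypothetical.

```python
# Illustrative sketch: comparing DAU:MAU engagement ratios between a
# feature cohort and a control cohort, in the spirit of the e10s
# "will we lose users if we ship?" question. Inputs and the tolerance
# threshold are hypothetical, not real telemetry values.

def engagement_ratio(daily_active_users: int, monthly_active_users: int) -> float:
    """DAU:MAU ratio: the fraction of monthly users active on a given day."""
    if monthly_active_users <= 0:
        raise ValueError("monthly_active_users must be positive")
    return daily_active_users / monthly_active_users

def safe_to_ship(feature_dau: int, feature_mau: int,
                 control_dau: int, control_mau: int,
                 tolerance: float = 0.02) -> bool:
    """Ship only if the feature cohort's engagement is within `tolerance`
    of the control cohort's (i.e. no meaningful engagement loss)."""
    feature = engagement_ratio(feature_dau, feature_mau)
    control = engagement_ratio(control_dau, control_mau)
    return (control - feature) <= tolerance

# Example: feature cohort engages at 0.50, control at 0.51 -> within 2%
print(safe_to_ship(50_000, 100_000, 51_000, 100_000))  # True
```

The point is that the ship/no-ship question becomes a measurable comparison against a control population rather than a gut call.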

Key Takeaways

  • Clearly defined and documented release criteria
  • Define tests as part of release criteria
  • Make sure all stakeholders buy into and respect the release criteria
  • Make sure all work is costed, from conception to development to testing to release to support
  • Make sure messaging is clear, concise, and uniform across all channels
  • Frequent check-in of all stakeholders on progress toward release criteria
  • Experiment frequently, measuring against release criteria and a "control" user base
  • Be prepared to change release criteria as project evolves
  • Periodically conduct a broad-spectrum analysis to make sure blindspots are visible
  • Use system add-ons for experimentation and release roll-out
  • Make sure everything has a documented owner and approver
  • See e10s Experiments Wiki, Release Criteria, and Planning Doc for more information
  • Make sure testing covers seemingly unrelated browser use-cases to catch unforeseen interaction issues
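Several of the takeaways above (documented criteria, documented owners, frequent check-ins, criteria that must stay met) can be combined into one structural idea: represent each release criterion as an explicit, owned, re-runnable check. The sketch below is hypothetical; the criterion names, owners, and predicates are examples, not an actual Mozilla tool.

```python
# Illustrative sketch (assumed structure, not an actual Mozilla tool):
# release criteria as explicit, owned, data-backed checks that are
# re-run every milestone, so "are we ready to ship?" stays a mechanical
# question rather than a one-time checkbox.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Criterion:
    name: str                   # e.g. "crash rate <= control"
    owner: str                  # documented owner/approver (RASCI-style)
    check: Callable[[], bool]   # data-backed predicate, re-run each cycle

def unmet_criteria(criteria: List[Criterion]) -> List[str]:
    """Return the names of criteria that are currently unmet.
    Re-running this at every check-in keeps criteria met over time,
    instead of checked off once and then ignored."""
    return [c.name for c in criteria if not c.check()]

# Hypothetical example data:
criteria = [
    Criterion("crash rate <= control", "stability owner", lambda: True),
    Criterion("page-load within 5% of control", "perf owner", lambda: False),
]
print(unmet_criteria(criteria))  # ['page-load within 5% of control']
```

Publishing the output of such a check at each weekly meeting would also serve the "frequent check-in of all stakeholders" takeaway.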

Prototyping Phase

The purpose of this phase is to prototype a model for qualifying Graphics features, based on the artifacts obtained from consulting with Electrolysis team members in the Exploratory Phase.