QA/Platform/Graphics/Features/Planning

Summary

As part of Mozilla's Quantum initiative, the Graphics team is interested in developing a feature qualification process based on the model used by the Electrolysis team. The general idea is to have a well-documented, clear path to success for any Graphics feature that is part of this initiative. The goal is to ship major features without losing users.

Owner

The development of this model is being driven by Anthony Hughes (:ashughes), the QA Engineer for the GFX team.

Status

This project is currently in the Exploratory Phase, which is expected to be completed by July 31, 2016.

Exploratory Phase

The purpose of this phase is to consult with people involved in the Electrolysis project to understand the pros, cons, and evolution of their release model.

Discussion Notes

  • Clearly define the KPIs for shipping a feature (e.g. performance, stability, correctness, etc.) [1][2]
  • Democratize decision making and the information needed to make decisions; this helps enforce quality and prevents high-risk changes from backfiring.
  • Clearly document what does and does not block shipping a feature
  • Clearly document any flaws in the data being used to make decisions; people need to be able to trust the data
  • Before shipping a feature, be able to clearly answer: are we going to lose users if we ship? If the answer is a clear no, then you can ship. The e10s team used the DAU:MAU ratio as a key metric to answer this question (see the sketch after this list).
  • Conduct many experiments to (dis)prove theories [3]
  • Resist the temptation to increase scope or uplift risky changes given a longer release cycle; treat it as if you still have just 6 weeks.
  • Make sure automated testing has a clearly defined role in the release criteria.
  • Make sure status is clearly and frequently communicated to all stakeholders [4]
  • When conflict breaks out between stakeholders, move it into a mediated meeting; this creates a safe space to work things out and prevents arguments from making the project look bad.
  • Raise red flags early, often, and clearly
  • System add-ons can be used both to experiment on Beta and to release to a specific sub-population
  • Clearly define owners/approvers for each feature and quality area; the e10s team used the RASCI Responsibility Matrix
  • Clearly define and track metrics both for users on your feature and for users still on the old path
  • Clearly define when a feature is good enough to ship to millions of users and how each bug gets you closer to that goal; not doing this can lead to developer stress and burnout
  • Each release criterion should be backed by data that proves its assumptions
  • Be sure to conduct a broad-spectrum analysis to make sure blind spots are not being overlooked; this includes looking for negative and positive impacts of the feature (e.g. issues were found in Session Restore and Networking that were outside the scope of e10s) [5]
  • Release criteria should be subject to change. Poor data, lack of understanding, and lack of awareness can all play into the initial assumptions behind the release criteria. As more information comes to light, be willing to evolve the criteria.
  • Ensure that the release criteria remain met throughout the process; don't go blind once a criterion is checked off.
  • Ensure meeting release criteria is a shared responsibility.
  • Tests are critical - make them a release criterion.
  • Automate as much as possible, including data collection and analysis, and make sure time to automate is factored into the schedule
  • Clear structure with well-defined roles at every level (PM, Dev lead, Dev team, QA)
  • Clear development plan with work split into milestones (reviewed weekly)
  • Weekly meeting with everyone
  • Well defined (and well monitored) release criteria
  • Experiments on Aurora/Beta were well communicated via email
  • QA managed to achieve decent test coverage (mainly thanks to the amount of time available)
  • The dev team often pushed to get fixes and experiments in very late in the cycle – RelMan did a good job rejecting most of these due to high risk (so I’d say this went well, but the e10s team should have made this assessment more responsibly themselves, instead of pushing everything every time)
  • Hard to QA due to the size of the feature (covering almost the whole browser) – this caused some confusion at times as to which Firefox version should be used to run our Full Test set (which ran across several months)
  • High level of uncertainty at times as to when we wanted to release (including when we wanted it enabled on each channel) – at times this was clearly communicated via email, but contradictory information then came up in meetings
  • Need clearer communication from QA – this was hard to track for most managers not directly involved in e10s; it should have included an overall view: what was tested, what remains to be tested, and future plans (what, when, on which version)
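
As a concrete illustration of the DAU:MAU ratio mentioned above, the following minimal sketch computes the ratio from per-client activity records and compares a feature-enabled cohort against a feature-disabled one. This is an assumption-laden sketch, not the e10s team's actual analysis: the record format, the 28-day MAU window, and the cohort labels are hypothetical placeholders for whatever the real Telemetry pipeline provides.

  from datetime import date, timedelta

  def engagement_ratio(dau, mau):
      """DAU:MAU ratio: the share of a window's active users who were active
      on the given day. A sustained drop after enabling a feature is one
      signal that shipping it may lose users."""
      return 0.0 if mau == 0 else dau / mau

  def dau_mau(pings, day, window_days=28):
      """Count DAU and MAU from (client_id, activity_date) records.

      `pings` and the 28-day window are hypothetical stand-ins for real
      Telemetry data; only the arithmetic is the point here."""
      start = day - timedelta(days=window_days - 1)
      dau = {client for client, d in pings if d == day}
      mau = {client for client, d in pings if start <= d <= day}
      return len(dau), len(mau)

  # Compare the old path against the new feature path (hypothetical cohorts).
  cohorts = {
      "feature enabled": [("a", date(2016, 7, 1)), ("b", date(2016, 7, 1)), ("b", date(2016, 6, 20))],
      "feature disabled": [("c", date(2016, 7, 1)), ("d", date(2016, 6, 15))],
  }
  for label, pings in cohorts.items():
      dau, mau = dau_mau(pings, date(2016, 7, 1))
      print(label, round(engagement_ratio(dau, mau), 2))

Comparing the ratio between the two cohorts, rather than looking at either number in isolation, is what answers the "are we going to lose users if we ship?" question above.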