Firefox/Go Faster/QA Guidelines
General guidelines:
- Whatever lands needs to have some USEFUL unit/integration test coverage.
- Unit testing: Ideally, every conditional should have a test.
- Functional tests: At the least, the happy path for that feature should have coverage. A robust regression test suite is what we want to aim for. The flows that HAVE TO work all the time are the ones we start with. What makes tests useful? They are reliable, fast and modular (as in, every test checks for just one thing; that way, if a test fails, you know what the problem is without having to dig too deep. See the unit-test sketch after this list.)
- No code lands on dev without a code review ++
- Does this mean all tests must pass before code is allowed to land on dev, or only a subset? A specific suite?
- For new features which are landing, there may not be existing integration tests. If there are tests, they should pass. At the least, a code review should happen.
- Nobody should ever break any build, even -dev.
- If a patch breaks tests or dev:
- Why did it break? Was there a way that this issue could have been caught during code review or in unit tests?
- Can you fix it within an hour? If not, back it out.
- If the person who committed it is AFK, anyone has the power to revert after an hour and is expected to do so.
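A minimal sketch of what "modular" means in practice, assuming a Python/pytest-style suite and a hypothetical parse_channel() helper (none of this is existing Go Faster code); each test checks exactly one thing, so a red test points straight at the broken conditional:

 # Hypothetical helper plus one test per behavior/conditional.
 import pytest

 def parse_channel(name):
     """Hypothetical: normalize a release channel name."""
     if not name:
         raise ValueError("channel name is required")
     name = name.strip().lower()
     if name not in {"nightly", "aurora", "beta", "release"}:
         raise ValueError("unknown channel: %s" % name)
     return name

 def test_known_channel_is_normalized():
     assert parse_channel("  Beta ") == "beta"

 def test_empty_channel_is_rejected():
     with pytest.raises(ValueError):
         parse_channel("")

 def test_unknown_channel_is_rejected():
     with pytest.raises(ValueError):
         parse_channel("esr99")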
Questions/checklist to go through
Feature-specific:
- Determine what features are going to be part of the add-on (existing features in Firefox? Are they adding new features?)
- Are Product and UX involved in the feature? If yes, does everyone know who the contacts are for UAT?
- What telemetry are we capturing? Who owns that part?
- Are we capturing different telemetry across different release channels?
- What things are we trying to measure? What is the time period we want to measure them for?
- What conclusions do we want to reach at the end of the trial period?
- What amount of data is required to be statistically relevant? (See the sample-size sketch after this list.)
- If a feature/add-on is not successful, how do we go about end-of-life'ing it?
- Is there an easy way for users to give feedback on new features?
- Will this require additional QA? Have we notified the right people?
- Have Firefox release engineering and release management been apprised of the upcoming add-ons?
- Who else do we need to inform (SUMO articles, PR etc)?
- At what points should QA be included in on discussions? Is QA needed for spec testing, feature ideation, etc?
- If there is a QA resource, they should be engaged in the process as early as possible, but I'm not assuming we have that luxury.
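The "statistically relevant" question can be made concrete with a standard two-proportion sample-size calculation. A minimal sketch, assuming we compare a treatment group against a control with the usual alpha = 0.05 and power = 0.8 defaults; the rates below are placeholders, not Go Faster targets:

 # Hedged sketch: sample size per arm for detecting a change between two
 # proportions (normal approximation), using placeholder rates.
 from scipy.stats import norm

 def sample_size_per_group(p1, p2, alpha=0.05, power=0.80):
     """Users needed per arm to detect a shift from p1 to p2 (two-sided test)."""
     z_alpha = norm.ppf(1 - alpha / 2)
     z_beta = norm.ppf(power)
     p_bar = (p1 + p2) / 2
     numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                  + z_beta * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
     return int(round(numerator / (p1 - p2) ** 2))

 # e.g. detecting a lift in some success rate from 10% to 12%:
 print(sample_size_per_group(0.10, 0.12))  # roughly 3,800 users per arm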
Flow-specific:
- Run all the unit tests on every commit + docs! (See the commit-gate sketch after this list.)
- Where will these tests be run?
- How will failures be reported to stakeholders? (Assuming Treeherder so far.)
- Let's assume that localizations can break tests. For one, that's a requirement on how tests are written and run, but it also poses a challenge for reporting.
- Can we have something like git blame which makes it easier to identify what line of code is causing the tests to fail?
- Determine what versions of Firefox are used to run the functional tests
- This should be determined by where we ship. Whatever we ship needs to be tested.
- Who owns writing/maintaining the tests?
- Determine if the functionality varies between the different release channels
- Can we run the AMO validator against every add-on build?
- [St8 comment]: This needs to be changed so that developers can run the same tests (it'd be interesting and advantageous to see if the relevant rules could be moved to an eslint plugin). Anything that can't easily be run by developers as tests shouldn't be run as tests; it just gets frustrating for the devs.
- Will the daily test builds be hosted on addons-dev?
- How do we approve and sign the daily builds? Laura said add-ons will be approved by default in the future but we need an interim plan.
- What is the release cadence going to be like for Hello?
- Per the meeting on August 25, the update cadence is gated by measurements on the prior deployment having signal.
- Will there be daily builds of hello add-on?
- Yes. <-- How will we manage the reviewing of these daily builds?
- smoke test suite <-- prime candidate for automation in the future
- Testing add-on updates
- Will there be a dashboard to track what add-on is enabled in what channel? (Great idea, we should do this.)
- Set up IRC bots which can help identify what change landed in what add-on build, as well as notify users on test failures?
- Data collection is great, but we need an alchemist who will turn this data into valuable lessons on a regular basis. Is there an identified owner for each add-on that is shipping? (We can make this a requirement for each.)
- We shipped a big bug! Now what?
- Can we push out a fix, or disable the feature until one is ready?
- Need to figure out how we want to do this.
- Add a test.
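A minimal sketch of the per-commit gate discussed above, assuming a pytest-based unit suite and an add-on linter invoked from the command line (the addons-linter command, the tests/ path, and the .xpi path are placeholders, not project decisions); in practice this would run in automation reporting to Treeherder rather than as a local script:

 # Hedged sketch: run the unit suite and the add-on linter on every commit,
 # and fail fast if either step fails. Tool names and paths are placeholders.
 import subprocess
 import sys

 STEPS = [
     ("unit tests", ["python", "-m", "pytest", "tests/"]),
     ("add-on lint", ["addons-linter", "build/my-addon.xpi"]),  # assumed CLI name
 ]

 def main():
     for name, cmd in STEPS:
         print("==> running %s: %s" % (name, " ".join(cmd)))
         if subprocess.call(cmd) != 0:
             print("%s failed; do not land this commit." % name)
             return 1
     print("all checks passed")
     return 0

 if __name__ == "__main__":
     sys.exit(main())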
Acceptance (Entry) criteria for QA/exploratory testing:
- QA has been given sufficient heads-up for any feature which requires exploratory testing
- Unit tests are passing
- All functional tests are passing
- AMO Validator tests passed with no errors
- The test build has been uploaded to AMO dev and approved (?) OR is made available in some easily accessible channel in a scheduled manner. This should work perfectly since most of the testing will happen in Romania, where the team works during our nights. We should have a "candidate" build ready for them by our EOD.
- Regression-prone OR high-risk areas are clearly identified by developers
- Should a defined risk assessment occur periodically so developers know when to engage QA?
- Risk assessment should be intrinsic to every decision being made, not an external thing you return to periodically. ++
- Developers should be trusted to determine and share what risky behaviors they are engaging in :)
Exit Criteria (to push to prod):
- Functional testrun has been green for a pre-determined length of time (?)
- Update tests have passed for the add-on across all release channels on which we plan to ship
- There is test coverage (manual or automated) for all happy paths
- We have made sure that telemetry has been hooked up and works
- We have a Risk analysis/mitigation plan in place.
- Check if there is a way to seamlessly switch off the feature and, if there is, make sure that flow has been tested. (See the kill-switch sketch after this list.)
- If we are releasing the feature to a certain demographic, that option is tested on dev/stage.
- No open P1-P2 bugs for the feature (excluding enhancements) <-- an enhancement should never be P1 for a release and if it is, it is not an enhancement at that point.
- Someone has reviewed the tag/changelist and there are no red flags
- There is a regression test suite for Firefox which has been run with the add-on enabled and the add-on disabled
- There are performance tests run on Firefox with the add-on enabled and the add-on disabled
- The feature being shipped doesn't conflict with any Firefox release plans/chemspills
- Security review has been done for the add-on - for v1 features, correct? Yes.
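One way to make the "seamlessly switch off the feature" criterion concrete is a preference-gated kill switch that is itself covered by a test. A minimal sketch, assuming a generic key/value pref store; the pref name and API are illustrative, not the actual Firefox/Go Faster mechanism:

 # Hedged sketch: pref-gated kill switch plus the tests that exercise both
 # the "on" and "off" flows. Pref name and store are illustrative only.
 KILL_SWITCH_PREF = "extensions.myaddon.enabled"  # hypothetical pref name

 def feature_enabled(prefs):
     """The feature runs only when the kill-switch pref is explicitly true."""
     return bool(prefs.get(KILL_SWITCH_PREF, False))

 def run_feature(prefs):
     if not feature_enabled(prefs):
         return "feature disabled"  # the add-on must degrade gracefully here
     return "feature ran"

 def test_kill_switch_disables_feature():
     assert run_feature({KILL_SWITCH_PREF: False}) == "feature disabled"

 def test_feature_runs_when_enabled():
     assert run_feature({KILL_SWITCH_PREF: True}) == "feature ran"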
Future ships
- Feature to be shipped is clearly defined and is signed off by product and UX and if applicable, the QA lead.
- Push karma, anyone? Would like to implement this with help from releng.
- Dashboard to display the lessons learned from the telemetry we gathered
- Summary of lessons learned every time a new add-on is made available or updated.