Post-mortems:2.0.0.2

From MozillaWiki

What worked

How did the process go – what did we plan to do, and what did we actually do? What did you like about the process?

Things we did well/better:

  1. Improved communication with the build team to minimize QA waiting time.
  2. Worked more closely with development to understand and test Vista changes (rstrong and sspitzer did an awesome job!)
  3. Verified bugs sooner rather than later... at least the ones we could, with testcases or through regression testing.
  4. Involved more community members by creating a Vista users list and holding our regular test days.
  5. Spent time testing top extensions and plugins (something we have overlooked in the past).
  6. Involved the web-dev and product teams early to ensure that release day went smoothly with release notes and website changes.

What we could improve

  • Changes such as Bug 366113 should not silently land on branches.
  • For every integration late in the release (after freeze), and for all major groups of changes, make sure there is good communication between developers and QA so that testing can be thorough and focused.

Things we need to improve:

  1. More input and testing from developers: if a testcase is not provided and QA is not able to create one, developers should either create one themselves or think of cases that might break and let QA do exploratory testing around the fix.
  2. Develop relationships with partners and web sites/services/toolkits to share some of the QA tasks
  3. Coordinating release-day QA testing... having the download checker ready to run, QA resources available to test live bits/updates, and better automation across the board to minimize manual testing.
  4. Evaluating risks that might not be apparent from looking at bug patches and descriptions (have developers spend a bit more time *thinking* about what their fix might break).
  5. Communication with the community/partners amid the chaos of regressions and respins, both by giving regular updates and by training/informing them where to go for information.
  6. Build automation/turnaround time... this is in the works and hopefully will make for a smoother release/QA handoff for each RC.
  7. Better delegation of QA tasks, with buy-in from other QA members to own certain areas (addons/plugins, topsites, js, partner, etc.)... so that 1-3 people are not trying to do everything at once.

What do we want to change for 2.0.0.4?

Ideas for change:

  • Throttle back changes to the branches; be more strict with blocker/wanted criteria.
      1. This has not really been an issue, since dveditz's research has shown that most of the "extra" stuff has not been the cause of regressions.
      2. Still, this sometimes takes QA resources away from testing/verifying more bugs.
      3. We might be better off with fewer bug fixes and more time for focused testing on a few critical areas.
  • Do an official RC for all point releases... or at least more often.
      1. The only way to ensure we don't "break the web" is to have the world try out our RC builds.
      2. We get decent community testing with our informal RC builds, but it has not been enough.
      3. We can auto-update those on the "beta" channel to RC builds and do more to get people to try them out before the official release.
      4. This will cost us in terms of build/QA time, but I have a feeling that a well-planned and well-executed RC will save us from 1-2 respins.
      5. Get the candidate builds out there for more community testing, and allow more bake time to flush out possible issues. [marcia]
  • Experiment with QA team assignments and the approach to bug verifications.
      1. The test execution team can handle the majority of bugs with solid testcases.
      2. More experienced/technical QA folks can follow up on bugs that are difficult to reproduce or need testcases (they will be able to work better with dev in understanding the root of the issue).
      3. 1-2 people focused on compatibility issues (topsites, ajax, addons/plugins, etc.).
      4. Create a specific test plan for the release based on the bugs that are going in. I did this for the 2.0.0.3 release; it involved following up with the developers and getting more specific info about what to test. [marcia]

Interested Attendees