QA/Firefox/Iteration Development/Process Review September 2014
Reference
- Etherpad
- Dashboard
- Graph (QA workload)
- Graph (QA Success Ratios)
- Development Performance Report 34.3
What's Working
- it's easier to understand and visualize our workload
- it's easier to understand what is coming up and what is important
- flags make it easier to query for triage and dashboarding, compared to whiteboard tags
- the firefox-backlog flag has been useful for drawing developer attention to newly confirmed bugs
- allowed us to create a dashboard for staying on top of work and measuring workload
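The flag-based querying mentioned above can be sketched against the Bugzilla REST API. This is a minimal illustration, assuming the `/rest/bug` endpoint accepts the standard advanced-search triplet (`f1`/`o1`/`v1`) for flag searches, as on bugzilla.mozilla.org; the field list is illustrative.

```python
from urllib.parse import urlencode

BUGZILLA_REST = "https://bugzilla.mozilla.org/rest/bug"

def flag_query_url(flag, fields=("id", "summary", "status")):
    """Build a Bugzilla REST search URL for bugs carrying a given flag.

    Flags such as 'qe-verify+' are matched with the advanced-search
    triplet f1/o1/v1 on the flagtypes.name field (an assumption about
    the deployed Bugzilla configuration).
    """
    params = {
        "f1": "flagtypes.name",
        "o1": "equals",
        "v1": flag,
        "include_fields": ",".join(fields),
    }
    return BUGZILLA_REST + "?" + urlencode(params)

url = flag_query_url("qe-verify+")
```

The resulting URL can be fetched with any HTTP client and the JSON fed into a dashboard, which is considerably easier to automate than scraping whiteboard tags.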
Issues
- adding bugs to the iteration after they've been fixed, particularly just before the sprint is ending, prevents effective QE planning (7 bugs in the last sprint)
- some bugs are missing priority which makes it hard to prioritize our resources
- it's not easy to identify and defer bugs needing QE but which are not immediately testable, thus wasting time in triage (ex. bug 1052530)
- some bugs (and their impact) are hard to understand on the face of it which greatly increases the time spent on certain bugs
- testers feel pressured to verify all qe+ bugs in the iteration, which goes beyond what we originally agreed to
- greater focus on Firefox product bugs has created some blind spots in other areas (ex. Platform, EME, etc)
- QE is working at full capacity to keep up with the current volume, which raises concerns about scale
Questions
- What qualifies a bug as eligible to be nominated for firefox-backlog?
- How does/can QE scale as the volume of bugs increases, and as other teams buy in to the process?
- What are the expectations towards and the value of verification? (QE thinks it's more exploratory in nature)
- What do developers, managers, designers, and testers expect of each other?
- What do we mean by "in-testsuite+"? (covered by tests doesn't necessarily mean that we automatically qe-verify- a bug)
- What do we mean by "at-risk component"? (bugs fixed, lines of code changed, historical data, severity of the implication of the changes)?
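The signals listed for "at-risk component" could be combined into a simple heuristic. A toy sketch, with illustrative (uncalibrated) weights and a hypothetical function name:

```python
def risk_score(bugs_fixed, lines_changed, past_regressions, max_severity):
    """Toy weighted score over the signals named above: bugs fixed,
    lines of code changed, historical regression data, and severity.
    The weights are illustrative, not calibrated against real data."""
    severity_weight = {"trivial": 0, "minor": 1, "normal": 2,
                       "major": 4, "critical": 8}
    return (bugs_fixed * 1.0            # churn: fixes landed this cycle
            + lines_changed / 100.0     # size of the change
            + past_regressions * 3.0    # history of breaking
            + severity_weight.get(max_severity, 2))

score = risk_score(bugs_fixed=5, lines_changed=200,
                   past_regressions=1, max_severity="major")
# → 14.0
```

Even a rough score like this would let testing sprints target components consistently rather than by gut feel.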
Ideas
- be more selective about the bugs we deem qe+ to focus more on the most important, highest risk fixes
- push harder to get community involved in verifying those bugs we don't have time for
- we'd like to see clearer steps, testcases, screenshots, videos, and user stories on more bugs
- developers should qe-verify+ bugs when they know a bug needs testing
- developers should qe-verify- bugs when they know a bug does not need testing
- developers/sheriffs should in-testsuite+ bugs when a bug has good test coverage (it's better since Anthony reached out but could be improved)
- set up some default QA contacts for components/features to minimize hunting down owners
- set up a comment tag to more easily identify details for testing (would also make it easier to discover bugs missing vital information)
- don't verify all the bugs; instead, organize testing sprints to flush out regressions in at-risk components (ex. many small changes, one or more high risk changes, one or more high impact changes)
Recommendations
- Liz Henry has had really good interactions with DevTools developers (receptive to attention, excellent MDN docs [ex. WebIDE], excellent at explaining new features) -- we should look to this example
- it would be great to have this sort of workflow for Core bugs though scale is an obvious concern
Notes from Work Week Session
- the process has enabled more focus and less diffused effort, though not necessarily faster development
- nominating for backlog should generally be something you think contributes to our goals
- QE workload is allowed to carry over to the next iteration, as workload increases this will become more important to emphasize and measure
- points are a measurement of workload capacity not of product impact (we should start measuring points on qe+ bugs)
- QE needs a system of points to better measure impact and workload (# of bugs isn't a true measure)
- bugs with --- points need to be assigned something so they can be measured and not slip through the cracks (see also gdoc)
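The points-based workload measurement above can be sketched as a small helper that sums estimates and surfaces bugs whose points field is still unset, so they don't slip through the cracks. The data shape (`(bug_id, points)` pairs, with `None` standing in for a `---` field) is hypothetical:

```python
def iteration_workload(bugs):
    """Sum estimated points for an iteration and flag unestimated bugs.

    `bugs` is a list of (bug_id, points) pairs; points is None when
    the Bugzilla field is still '---' (hypothetical data shape).
    """
    total = sum(points for _, points in bugs if points is not None)
    unestimated = [bug_id for bug_id, points in bugs if points is None]
    return total, unestimated

total, missing = iteration_workload([(1, 3), (2, 5), (3, None)])
# total == 8, missing == [3]
```

Tracking the `missing` list per iteration would make the gap measurable rather than invisible, and the `total` gives a carry-over figure to compare across iterations.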