This is a pilot project starting in 2015. The goal is to refine the process enough to understand the time commitment it requires and the latency we can achieve, and to collect enough data for a retrospective on how best to measure progress, what to change, and what the next steps should be.
- This is a rotating duty. Each individual will be in charge of a week's worth of new, unassigned bugs, starting Saturday and ending Friday. You then have one extra week to act on them, so a bug is at most 14 days old by the time somebody looks at it.
- At this pace, everybody will get this duty once per quarter. The schedule is in the shared calendar (see above), and it should be self-managing - if you want to trade your week with somebody else, you should be able to just move the item around.
- The goal is to make sure we don't miss anything important, either completely or until it's "late", and to notice any trends we may have with crashes, intermittent failures, or particular areas of the code. The idea is to categorize bugs as they come in so that we know which ones need a jump on, which ones can wait a bit, which need missing information requested, which need the right people CC'd, etc.
- We will cover these components: Canvas: 2D, Canvas: WebGL, GFX: Color Management, Graphics, Graphics: Layers, Graphics: Text, Image Blocking, ImageLib.
- Some guidelines:
- A good guideline is ~15 minutes per bug, which probably works out to about an hour and a half a day over the two weeks, but let's see what we really need as we get going.
- This isn’t about finding a cause, and it isn’t about the full prioritization.
- This is about noticing things sooner.
- This is about asking the bug author for info that may be missing or would help with the triage.
- This is about asking for a regression range, or even getting one if you can reproduce the problem and you have time.
- This is about CC-ing the people on the team (or elsewhere) you’re guessing could shed more light on the issue.
- This is about the occasional needinfo, which should be reserved for what you deem high priority.
- Some types of bugs are handled outside of this triage process; for example, an intermittent test failure can get gfx-noted after a quick check for something obvious, without spending time resolving the issue if that would take a larger effort.
- Add the relevant keywords:
- "crash" if it's a crash;
- "hang" if it's a hang;
- "perf" if it's a performance related issue;
- "reproducible" if it's reproducible
- "feature" if it's new code, doing something that wasn't done before; note that a "feature" can block a "crash", we want a wide definition;
- "regression" - not quite sure about this, we may want to save it for really bad and immediate regressions only?
- Clean up the bug:
- set the correct platform if it's obvious and we're reasonably certain (e.g., a DirectX issue is going to be Windows);
- if we know how to reproduce it, set the "Has STR" field; if there is a regression range, set that as well.
- Set the priority field (P1-P5, under Importance) at the time you make it gfx-noted.
- If you are not sure, set it to P3.
- If you are going to fix it in this release, set it to P1. Note - this is not the same as thinking it should be fixed in this release. It's a scheduling note, not a priority setting.
- If you are going to fix it in the next release, set it to P2. Are you sure though? Do you know you will have time?
- If you don't think we will ever have the time to spend on this, can ship for years without fixing it, and will take a patch if a contributor produces it, set it to P5.
- Consider the Severity value (blocker, critical, major, etc., under Importance):
- If it's already set to anything higher than normal, please CC Milan.
- If you think it should be set to higher than normal, please do so, then CC Milan.
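The keyword and priority rules above can be sketched as a couple of small helpers. This is a minimal illustration, not tooling we have; the `Bug` structure and its field names are hypothetical stand-ins for the judgments the triager makes:

```python
from dataclasses import dataclass


@dataclass
class Bug:
    # Hypothetical stand-in for the triager's assessment of a bug;
    # these are not Bugzilla API fields.
    is_crash: bool = False
    is_hang: bool = False
    is_perf: bool = False
    reproducible: bool = False
    will_fix_this_release: bool = False
    will_fix_next_release: bool = False
    wont_ever_schedule: bool = False


def triage_keywords(bug: Bug) -> list[str]:
    """Keywords from the guidelines: crash, hang, perf, reproducible."""
    keywords = []
    if bug.is_crash:
        keywords.append("crash")
    if bug.is_hang:
        keywords.append("hang")
    if bug.is_perf:
        keywords.append("perf")
    if bug.reproducible:
        keywords.append("reproducible")
    return keywords


def triage_priority(bug: Bug) -> str:
    """Priority per the guidelines: a scheduling note, not importance.

    P1 = will fix this release, P2 = next release, P5 = will only take
    a contributed patch, P3 = the default when unsure.
    """
    if bug.will_fix_this_release:
        return "P1"
    if bug.will_fix_next_release:
        return "P2"
    if bug.wont_ever_schedule:
        return "P5"
    return "P3"
```

Note the default: anything we are unsure about lands in P3, matching the "if you are not sure" rule above.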
The schedule is tracked in a shared calendar (ID firstname.lastname@example.org); if the calendar and the table below differ, the shared calendar wins.
There is also a dashboard tracking how we're doing.
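As a sketch of what the underlying query might look like: assuming the standard Bugzilla REST API (`GET /rest/bug`), a small helper could build a search URL for new, unassigned bugs in the covered components. The `Core` product and the `nobody@mozilla.org` default assignee are assumptions for illustration, not taken from this page:

```python
from urllib.parse import urlencode

# Components covered by this rotation (from the list above).
COMPONENTS = [
    "Canvas: 2D", "Canvas: WebGL", "GFX: Color Management", "Graphics",
    "Graphics: Layers", "Graphics: Text", "Image Blocking", "ImageLib",
]


def untriaged_query_url(since_iso: str) -> str:
    """Build a Bugzilla REST search URL for open, unassigned bugs
    filed on or after `since_iso` in the triaged components."""
    params = [
        ("product", "Core"),                    # assumption: components live under Core
        ("assigned_to", "nobody@mozilla.org"),  # assumption: the unassigned default
        ("resolution", "---"),                  # open bugs only
        ("creation_time", since_iso),           # filed on or after this date
    ]
    params += [("component", c) for c in COMPONENTS]
    return "https://bugzilla.mozilla.org/rest/bug?" + urlencode(params)
```

Running something like this at the start of a duty week would give the triager their working set for the two-week window.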
|2018 Q1|2018 Q2|2018 Q3|2018 Q4|
|2017 Q1|2017 Q2|2017 Q3|2017 Q4|
|2016 Q1|2016 Q2|2016 Q3|2016 Q4|
|2015 Q1|2015 Q2|2015 Q3|2015 Q4|
This is something the JS team did at one point; it's worth taking into account when we plan the next steps:
JS team tried shared triage responsibility a few years ago. It didn't last very long, but it was not scheduled or enforced. Eventually managers/project managers/tech leads took over for the sub-components they were responsible for. Before JS did coordinated triage, Dave Mandelin measured that there were about 11 new bugs per day, half of which were internally generated by the team and didn't need triage (developers triage their own bugs). That left about 5-6 bugs a day across the component. Of those, the most serious ones (~2 a week, I think?) were already getting fixed within a release cycle. Based on the distribution we ended up with three priority tags:
- p1 = must do
- p2 = want to do <- general bucket
- p3 = may do <- usually idea/investigation/research bugs
And two follow-up tags:
- investigate = someone needs to spend a few minutes investigating
- nonactionable = nothing to do
Thoughts and comments about the first round
- (Milan) Worth revisiting the query for the bugs you've triaged a few days or a week after you've reduced the number to zero - sometimes new ones show up because of a component change, a bug getting reopened, or some such.
- (Kats) The current method gives people exposure to other parts of the code, but without sufficient context to properly triage bugs (no history of what landed recently, or of whether similar bugs were reported in the past week). I would still prefer a component-watching approach.
- (Kats) Intermittents are more challenging to deal with - if one is low-volume initially and later increases in volume, who is responsible for it?