* Tests are critical – make them a release criterion.
* Automate as much as possible, including data collection/analyses, and make sure the time to automate is factored into the schedule (a minimal sketch of this follows the list below)
* Clear structure with well-defined roles at every level (PM, Dev lead, Dev team, QA)
* Clear development plan with work split into milestones (reviewed weekly)
* Weekly meetings with everyone
* Well-defined (and well-monitored) release criteria
* Experiments on Aurora/Beta were well communicated via email
* QA managed to achieve decent test coverage (mainly thanks to the amount of time available)
* The dev team often pushed to get fixes and experiments in very late in the cycle – RelMan did a good job rejecting these most of the time due to the high risk (so I’d say this went well, but the e10s team should have made this assessment more responsibly itself, instead of pushing everything every time)
* Hard to QA due to the size of the feature (covering almost the whole browser) – this caused some confusion at times as to which Firefox version should be used to run our Full Test set (which was run across several months)
* High level of uncertainty at times as to when we wanted to release (including when we wanted it enabled on each channel) – at times this was clearly communicated via email, but we then got contradictory information in meetings
* Need clearer communication from QA – progress was hard to track for most managers not directly involved in e10s; updates should have included an overall view: what was tested, what remains to be tested, and future plans (what, when, on which version)
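
To illustrate the automation point above, here is a minimal sketch of what automating test-result collection and analysis against a release criterion could look like. This is not the actual e10s tooling: the file name, result format, and 98% pass-rate threshold are all hypothetical assumptions for illustration only.

<pre>
# Minimal sketch (hypothetical, not the real e10s tooling): collect test
# results and check per-suite pass rates against a release criterion
# mechanically, so the data collection/analysis step is automated rather
# than done by hand before each go/no-go decision.
import json
from collections import defaultdict

RELEASE_CRITERION_PASS_RATE = 0.98  # hypothetical threshold


def load_results(path):
    """Load a list of {"suite": ..., "status": "pass"/"fail"} records."""
    with open(path) as f:
        return json.load(f)


def pass_rates(results):
    """Compute the pass rate for each test suite."""
    totals, passes = defaultdict(int), defaultdict(int)
    for record in results:
        totals[record["suite"]] += 1
        if record["status"] == "pass":
            passes[record["suite"]] += 1
    return {suite: passes[suite] / totals[suite] for suite in totals}


def check_release_criteria(results):
    """Print each suite's pass rate and return the suites below the criterion."""
    failing = []
    for suite, rate in sorted(pass_rates(results).items()):
        ok = rate >= RELEASE_CRITERION_PASS_RATE
        print(f"{suite}: {rate:.1%} {'OK' if ok else 'BELOW CRITERION'}")
        if not ok:
            failing.append(suite)
    return failing


if __name__ == "__main__":
    # "test_results.json" is a placeholder for whatever the harness exports.
    failing = check_release_criteria(load_results("test_results.json"))
    raise SystemExit(1 if failing else 0)
</pre>

A script like this can run on every nightly/beta build, so the "well monitored release criteria" point becomes a report people read rather than data someone has to assemble.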