QA/2008Story

Here is the QA story for 2008. The idea is to paint a picture of how we hope 2008 will end up; this will be used to plan the resource requirements and goals to get there. It is meant as a hopeful vision of the future, not a commitment to specific deliverables. I took a theme-oriented approach, with the themes summarized in the first paragraph.

Any and all comments are welcome!

QA Story for 2008 - V1.0

In 2008 we achieved high confidence in the quality of our products through reliable test tools, targeted test development, focused user-oriented testing, clear communication of our results, and the successful marshaling of large numbers of community testers.

The testing infrastructure became largely self-sufficient and easily maintained by IT, with little intervention from us. The performance network is more robust and includes more platforms and more browsers. The unit testing farm now contains hundreds of thousands of tests, including fully automated JavaScript tests and several vendor-provided test suites.
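
As a concrete illustration (not any specific Mozilla harness), the glue behind a farm like this comes down to discovering tests, running them unattended, and emitting machine-readable results. In this hypothetical Python sketch, the tests/*.js layout and the standalone "js" shell invocation are assumptions made for illustration:

 # Hypothetical sketch of an unattended unit-test runner: discover test
 # files, run each one, and emit a machine-readable summary for the
 # reporting tools. The "js" shell command and paths are illustrative.
 import glob
 import json
 import subprocess

 def run_suite(pattern="tests/*.js"):
     results = {}
     for test in sorted(glob.glob(pattern)):
         # A nonzero exit status from the shell counts as a failure.
         proc = subprocess.run(["js", test], capture_output=True)
         results[test] = "PASS" if proc.returncode == 0 else "FAIL"
     return results

 if __name__ == "__main__":
     print(json.dumps(run_suite(), indent=2))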

We set up the test infrastructure for Moz2 and focused test tools and resources on Tamarin. We investigated test coverage and assessed high-risk areas for the Firefox application and for layout/graphics, increased test coverage in two major areas of each, and developed thousands of additional test cases.

We have shifted our user-oriented testing efforts from largely manual test execution to a good balance of test execution and test case development, giving us much more consistent and deeper test coverage. We leveraged website test tools like Selenium and Windmill to dramatically increase and automate our web compatibility testing.
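
As a minimal sketch of what that automation can look like, here is a smoke check written against Selenium's Python WebDriver bindings; the site list and the page-title check are placeholders for real compatibility tests:

 # Minimal web-compatibility smoke check using Selenium's Python
 # WebDriver bindings. The sites and the title check are placeholders.
 from selenium import webdriver

 SITES = ["https://www.mozilla.org", "https://www.wikipedia.org"]
 driver = webdriver.Firefox()
 try:
     for url in SITES:
         driver.get(url)
         # A real test would inspect layout or script behavior; checking
         # that the page produced a title is the simplest stand-in.
         print(url, "OK" if driver.title else "EMPTY TITLE")
 finally:
     driver.quit()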

We are now communicating our results in more helpful and effective ways. We have moved away from the standard Tinderbox waterfall to more concise, dashboard-like status displays for all the branches we care about. Unit test reporting mechanisms have been improved to provide historical data and log storage, and they are incorporated into the dashboard displays for drill-down views.
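
To make the dashboard idea concrete, the roll-up behind such a display is essentially an aggregation of many per-run results into one status line per branch; the branch names and sample results in this Python sketch are invented for illustration:

 # Hypothetical roll-up behind a dashboard-style display: collapse per-run
 # results into one line per branch instead of a waterfall of raw columns.
 from collections import defaultdict

 # Each tuple: (branch, test run, passed?). Invented sample data.
 runs = [("trunk", "unit", True), ("trunk", "perf", False),
         ("1.8-branch", "unit", True), ("1.8-branch", "perf", True)]

 by_branch = defaultdict(list)
 for branch, run, passed in runs:
     by_branch[branch].append(passed)

 for branch, outcomes in sorted(by_branch.items()):
     state = "GREEN" if all(outcomes) else "RED"
     print(branch, state, "(%d/%d runs passing)" % (sum(outcomes), len(outcomes)))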

We are able to mine a repository of all user-oriented test results, rather than having results scattered across a variety of forms such as wiki pages. This has enabled us to know what tests were run, what configurations were tested, and what bugs were found, so we were able to target people and testing activities in a much more refined and effective manner.
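
A hypothetical sketch of what such a repository makes possible, using an in-memory SQLite database from Python; the schema, test names, configurations, and bug number are all invented for illustration:

 # Hypothetical queryable test-results store, in contrast to results
 # scattered across wiki pages: one table records which test ran on which
 # configuration, the outcome, and any bug filed. Schema is invented.
 import sqlite3

 db = sqlite3.connect(":memory:")
 db.execute("CREATE TABLE results (test TEXT, config TEXT, outcome TEXT, bug INTEGER)")
 db.executemany("INSERT INTO results VALUES (?, ?, ?, ?)", [
     ("bookmarks-import", "WinXP / Firefox 3 beta", "PASS", None),
     ("bookmarks-import", "OS X 10.5 / Firefox 3 beta", "FAIL", 412345),
 ])

 # "What configurations were tested, and what bugs were found?" is a query.
 for row in db.execute("SELECT config, outcome, bug FROM results"):
     print(row)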

We are out there spreading the word. We've honed our testing story and presented it internally and at conferences. We've also worked with the evangelism team to get our message out to the community and beyond. We have given workshops and presentations on Buildbot automation and testing, application performance testing, and administering large networks of testing infrastructure, and have seen greater community involvement in these areas.
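
For readers new to Buildbot, the heart of that automation is a Python configuration listing the steps each build runs. A minimal sketch using the buildbot.plugins API (the repository URL and build commands are placeholders, and the configuration surface differs across Buildbot versions):

 # Minimal Buildbot build factory: check out the source, build it, and
 # run the unit tests. Repository URL and commands are placeholders.
 from buildbot.plugins import steps, util

 factory = util.BuildFactory()
 factory.addStep(steps.Git(repourl="https://example.org/mozilla.git"))
 factory.addStep(steps.ShellCommand(command=["make", "-f", "client.mk", "build"]))
 factory.addStep(steps.ShellCommand(command=["make", "check"]))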

We have leveraged the community through the Test Extension, by building test communities in multiple countries, and by clearly articulating test activities (test execution, test development, and tools projects). No community member was left wondering "what can I do" or "where do I start", making for an overall more welcoming experience. We were able to target community testers for specific focused testing and knew what coverage the community at large provided.