Litmus:Requirements

= Introduction =


== Purpose ==
The purpose of this document is to capture in one place all the various requirements for the Litmus quality assurance (henceforth, QA) tool. In the past, many of the Netscape/Mozilla webtools have grown organically without much supporting documentation. While this document does not necessarily preclude this from happening with Litmus, it will at least give us an initial point of reference from which we can start design/development.


== Document conventions ==
TBD.


== Intended audience ==
This document is intended for QA staff, developers, build personnel, and sysadmins from the Mozilla Foundation, as well as community members interested in helping to improve the QA process for Mozilla products.


== Additional information ==


== Contact Info ==
[http://developer-test.mozilla.org/en/docs/User:ChrisCooper Chris Cooper]


== References ==
* Le Vie, Jr., Donn. "Writing Software Requirements Specifications" <em>TECHWR-L</em> 7 July 2002. &lt;http://www.techwr-l.com/techwhirl/magazine/writing/softwarerequirementspecs.html&gt;.


= Overall Description =


== Perspective ==
Mozilla testing resources, i.e. staff, are spread pretty thin. Even with some community support, the turnaround time for smoke testing and basic functional testing (BFT) for release candidates can take several days (the smoketests and BFTs are not currently automated). If regressions or new bugs are found during the testing process, the cycle can be even longer.

An existing tool, [[Testrunner]], helps with the administration of this process, but the tool is somewhat limited. Testrunner has the concept of a "test run" as a single instance of testing, but these test runs must be manually cloned for each new testing cycle on a per-platform basis. Testrunner also does not let multiple users combine their efforts to work on a single test run; each user must have a separate test run, or have their results collated by a single "superuser."

The individual tests that make up a test run are not stored anywhere in Testrunner. Instead, test lists must be kept in sync with external test repositories manually. This has made it impossible for any kind of automation to be built into Testrunner.

There is also no way to do any meaningful querying or reporting on historical test results using Testrunner. On top of all this, Testrunner is tied intimately to specific versions of Bugzilla; small changes to Bugzilla can cause Testrunner to stop working.

Bob Clary has a XUL-based test harness, called Spider, which he has used to automate the testing of many Document Object Model (DOM) and JavaScript (JS) engine tests. However, there has never been a central repository for test results, so his results have been posted to [http://test.bclary.com/results/ his personal testing website].

Developers would often like to have testing done to verify a given change or patch. Historically, this has not often been possible due to the constant demands on the QA team.

The new tool, Litmus, is meant to address these problems by:
* serving as a repository for test cases, with all the inherent management abilities that implies;
* serving as a repository for test results, carrying over the best features of Testrunner, e.g. test lists, division of labor, etc.;
* providing a query interface for viewing, reporting on, and comparing test results;
* providing a request interface whereby developers can queue testing requests for patches, fixes, and regressions;
* managing the automation of testing requests &mdash; one-time, and recurring (e.g. from [http://tinderbox.mozilla.org/showbuilds.cgi tinderbox]) &mdash; on a new group of dedicated testing servers, managing request priorities appropriately;
* exposing an API to allow developers to work with the various tools easily outside of a graphical environment (a purely illustrative sketch of such scripted access follows this list);
* making it easier for casual testers to assist with testing Mozilla products.
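
What such scripted access might look like is still entirely open. Purely as an illustration of the kind of non-graphical, priority-aware request submission described in the last two points, a hypothetical client-side sketch follows; every name in it (<code>TestRequest</code>, <code>submit_request</code>, the field names) is invented here and is not part of any existing Litmus interface.

<pre>
# Hypothetical sketch only: none of these names are real Litmus interfaces.
# It illustrates scripted submission of a prioritized testing request.

from dataclasses import dataclass
from typing import List


@dataclass
class TestRequest:
    """A developer-initiated request to run a group of tests."""
    requester: str           # e.g. an email address
    product: str             # e.g. "Firefox"
    branch: str              # e.g. "trunk"
    platforms: List[str]     # platforms the request should cover
    testgroup: str           # e.g. "smoketests" or "BFT"
    priority: int = 3        # 1 = highest, 5 = lowest
    recurring: bool = False  # True for tinderbox-style recurring runs


def submit_request(queue: List[TestRequest], request: TestRequest) -> None:
    """Add a request to the pending queue, keeping it ordered by priority."""
    queue.append(request)
    queue.sort(key=lambda r: r.priority)


if __name__ == "__main__":
    pending: List[TestRequest] = []
    submit_request(pending, TestRequest(
        requester="developer@example.org",
        product="Firefox",
        branch="trunk",
        platforms=["Linux", "Windows", "Mac OS X"],
        testgroup="smoketests",
        priority=2,
    ))
    for req in pending:
        print(req.priority, req.testgroup, ", ".join(req.platforms))
</pre>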


== Functions ==
== User classes and characteristics ==

== Operating environment ==

== User environment ==

== Design/implementation constraints ==

== Assumptions and dependencies ==


= External Interface Requirements =

== User interfaces ==


== Hardware interfaces ==
At the recent Mozilla QA summit meeting (2005/06/21), it was decided to invest in a small cluster of machines that would serve as a test farm, similar in concept to the collection of machines that currently perform builds for [http://tinderbox.mozilla.org Tinderbox].

The test farm will be made up of the following machines:
* Head Node (likely running Linux)
** Linux Boxes (#?)
** Mac XServes (#?)
** Windows Boxes (#?)


== Software interfaces ==
Adding more machines won't, in and of itself, do anything to ease the testing burden. Indeed, in the short term, it will simply add more system administration overhead.

This is where we hope to see the biggest payoff in terms of automation. A main Litmus daemon will live on the head node. This daemon will be responsible for coordinating testing on the test farm machines, and for collating results as they come in.
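
How the daemon will coordinate with the farm machines has not been designed yet. The following minimal sketch is offered only to illustrate the coordinate-and-collate role described above; it assumes an in-memory queue, invented names, and a stand-in for remote execution rather than anything resembling the eventual implementation.

<pre>
# Minimal, hypothetical sketch of the coordinating role described above.
# A real Litmus daemon would talk to farm machines over the network and
# persist results; here both are faked so the sketch is self-contained.

import heapq
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Result:
    machine: str
    job: str
    passed: bool


@dataclass
class Coordinator:
    idle_machines: List[str]
    # (priority, job name); a lower number means the job runs sooner
    queue: List[Tuple[int, str]] = field(default_factory=list)
    results: List[Result] = field(default_factory=list)

    def enqueue(self, priority: int, job: str) -> None:
        heapq.heappush(self.queue, (priority, job))

    def dispatch(self) -> None:
        """Hand the highest-priority jobs to whatever machines are idle."""
        while self.queue and self.idle_machines:
            _, job = heapq.heappop(self.queue)
            machine = self.idle_machines.pop()
            # Stand-in for actually running the job on the remote machine.
            self.collate(Result(machine=machine, job=job, passed=True))
            self.idle_machines.append(machine)

    def collate(self, result: Result) -> None:
        """Collect a finished result for later querying and reporting."""
        self.results.append(result)


if __name__ == "__main__":
    farm = Coordinator(idle_machines=["linux-1", "xserve-1", "win-1"])
    farm.enqueue(1, "smoketests")
    farm.enqueue(3, "DOM regression suite")
    farm.dispatch()
    for r in farm.results:
        print(r.job, "on", r.machine, "- PASS" if r.passed else "- FAIL")
</pre>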


== Communication protocols and interfaces ==

= System Features =

== Feature A ==

=== Description and priority ===

=== Action/result ===

=== Functional requirements ===

== Feature B ==


= Other Nonfunctional Requirements =

== Performance requirements ==

== Safety requirements ==

== Security requirements ==

== Software quality attributes ==

== Project documentation ==

== User documentation ==


= Other Requirements =


= Appendix A: Terminology/Glossary/Definitions list =