Litmus:Requirements

From MozillaWiki
Revision as of 14:27, 11 July 2005 by ChrisCooper (talk | contribs)

Introduction

Purpose

The purpose of this document is to capture in one place all the various requirements for the Litmus quality assurance (henceforth, QA) tool. In the past, many of the Netscape/Mozilla webtools have grown organically without much supporting documentation. While this document does not necessarily preclude this from happening with Litmus, it will at least give us an initial point of reference from which we can start design/development.

Document conventions

TBD.

Intended audience

This document is intended for QA staff, developers, build/release personnel, and sysadmins from the Mozilla Foundation, as well as community members interested in helping to improve the QA process for Mozilla products.

Additional information

Contact Info

Chris Cooper

References

Overall Description

Perspective

Mozilla testing resources are spread pretty thin. Even with some community support, the turnaround time for smoke testing and basic functional testing (BFT) of release candidates can be several days (the smoketests and BFTs are not currently automated). If regressions or new bugs are found during the testing process, the cycle can be even longer.

An existing tool, Testrunner, helps with the administration of this process, but the tool is somewhat limited. Testrunner has the concept of a "test run" as a single instance of testing, but these test runs must be manually cloned for each new testing cycle on a per-platform basis, and tests cannot be re-ordered within test runs. Testrunner also does not let multiple users combine their efforts to work on a single test run; each user must have a separate test run, or have their results collated by a single "superuser."

The individual tests that make up a test run are not stored anywhere in Testrunner. Instead, test lists must be kept in sync with external test repositories manually. This has made it impossible to build any kind of automation into Testrunner.

There is also no way to do any meaningful querying or reporting on historical test results using Testrunner. On top of all this, Testrunner is tied intimately to specific versions of Bugzilla; small changes to Bugzilla can cause Testrunner to stop working.

Bob Clary has a XUL-based test harness, called Spider, which he has used to automate many Document Object Model (DOM) and JavaScript (JS) engine tests. However, there has never been a central repository for test results, so his results have been posted to his personal testing website.

Developers often would like to have testing done to verify a given change or patch. Historically, this has not often been possible due to the constant demands on the QA team.

Addressing these shortcomings in the current tools (or the lack of tools, in general) will do much to streamline the QA process for Mozilla. This should have the desirable side effect of freeing up QA staff to work on more interesting things, e.g. harder edge-case testing, incoming bug verification and triage, community interaction, etc.

Functions

The new QA tool, Litmus, is meant to address these problems by:

  • serving as a repository for test cases, with all the inherent management abilities that implies;
  • serving as a repository for test results, carrying over the best features of Testrunner, e.g. test lists, division of labor, etc.;
  • providing a query interface for viewing, reporting on, and comparing test results;
  • providing a request interface whereby developers can queue testing requests for patches, fixes, and regressions;
  • managing the automation of testing requests — one-time, and recurring (e.g. from tinderbox) — on a new group of dedicated testing servers, managing request priorities appropriately;
  • exposing an API to allow developers to work with the various tools easily outside of a graphical environment;
  • making it easier for casual testers to assist with testing Mozilla products.
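The request interface and its priority handling can be sketched in code. This is only an illustration, not anything from the Litmus design: the priority levels, the `TestRequest` record, and the `RequestQueue` class are all hypothetical names, chosen to show how release testing could pre-empt ordinary requests in the queue.

```python
import heapq
import itertools
from dataclasses import dataclass, field

# Hypothetical priority levels -- illustrative only, not from the Litmus spec.
PRIORITY_RELEASE = 0   # release-candidate testing pre-empts everything else
PRIORITY_CORE = 1      # requests from core developers
PRIORITY_DEFAULT = 2   # one-time patch/fix verification requests

@dataclass(order=True)
class TestRequest:
    priority: int
    seq: int                           # tie-breaker: preserve submission order
    description: str = field(compare=False)

class RequestQueue:
    """Minimal sketch of a priority-ordered testing-request queue."""

    def __init__(self):
        self._heap = []
        self._counter = itertools.count()

    def submit(self, description, priority=PRIORITY_DEFAULT):
        heapq.heappush(
            self._heap,
            TestRequest(priority, next(self._counter), description))

    def next_request(self):
        # Lowest priority number wins; equal priorities run in FIFO order.
        return heapq.heappop(self._heap).description

queue = RequestQueue()
queue.submit("verify patch for bug 12345")
queue.submit("smoketest release candidate", priority=PRIORITY_RELEASE)
print(queue.next_request())  # release testing jumps ahead of the patch request
```

The same structure would let the daemon re-order pending work whenever a higher-priority release request arrives, without cancelling requests already queued.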

User classes and characteristics

Litmus will attract the following types of users:

  • Sysadmins
    These power users will be responsible for the maintenance of the underlying machines, and will likely be doing so from the command line. They will be primarily interested in how easy Litmus is to set up and install, in CPU, disk, network, and database usage by the Litmus daemon and web tool, and in any security implications of running Litmus.
  • Litmus Maintainers
    This is a class of sysadmins who are solely responsible for the upkeep of the Litmus tool itself. They will likely have intimate knowledge of its inner workings and will be responsible for fixing bugs in Litmus itself.
  • Build/Release Engineers
    Given their role, these users will be primarily interested in the status of automated testing for builds/release candidates, with the ability to compare test results between two different release candidates. They will also want the ability to pre-empt tests in progress if release testing is needed immediately. These users will have a history of using various existing web tools, e.g. tinderbox, bonsai, LXR, so they can be expected to adapt to a new web tool quickly.
  • QA Staff
    Existing QA staff will already be familiar with Testrunner, which should ease the transition to a new web tool. This user class will have experience running tests both by hand and using the automated Spider tool. Because of this, most of these users will have developed an intuitive feel for what constitutes a valid testing result. These users will expect to be able to do the same things that they currently can with Testrunner.
  • Core Mozilla Developers
    Core developers will already be familiar with web tools such as Bugzilla and tinderbox. Due to their familiarity with Bugzilla, they will expect to see the same Product and Component categories in Litmus. This group might correspond to the set of developers with superreview and/or review status in Bugzilla. These users might expect to receive higher priority for testing requests that they submit.
  • Mozilla Developers (including add-ons and extensions), Localizers
    These developers will already be familiar with web tools such as Bugzilla and tinderbox. Due to their familiarity with Bugzilla, they will expect to see the same Product and Component categories in Litmus.
  • Testers
    This user class will be familiar with using a web browser, but may not necessarily be familiar with the suite of Mozilla web tools used by developers. With proper instruction, they can be expected to submit testing results automatically if the process is not too complicated. These users might be interested in seeing test results that they themselves have contributed, and comparisons of the test runs that those results belong to.
  • Community-at-large
    Anyone with a web browser could find Litmus on the web. Some of these people will want to see quality reports (partners, journalists, competitors), others may just want to poke around. Like Bugzilla, basic querying will be open to all, but users will need to register with the system in order to do much else.

Operating environment

The main Litmus daemon and web tool will reside on an as-yet-unpurchased machine. This machine will likely be running Linux (RHEL3?) to facilitate remote administration. The daemon and web tool will need to be designed to use the existing Linux Virtual Server (LVS) cluster.

User environment

The primary human interface for the Litmus tools will be web-based: QA staff, developers, and testers will access the web interface to report manual test results, check automated test results, schedule testing requests, and report on past test runs.

There will also be a command-line interface to the daemon/tools. This interface will be used by the automation processes for submitting results remotely, but can also be used by testers to do the same.
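What such a command-line submission interface might look like can be sketched with a simple argument parser. Everything here is assumed for illustration: the `litmus-submit` command name, the flag names, and the result values are hypothetical, since the actual interface is not yet designed.

```python
import argparse

# Hypothetical result-submission CLI; the "litmus-submit" command and all
# flag names are illustrative, not part of any real tool.
def build_parser():
    parser = argparse.ArgumentParser(
        prog="litmus-submit",
        description="Submit a single test result to the Litmus server.")
    parser.add_argument("--test-id", required=True,
                        help="numeric test case id")
    parser.add_argument("--result", required=True,
                        choices=["pass", "fail", "unclear"])
    parser.add_argument("--platform", default="linux",
                        help="platform the test was run on")
    parser.add_argument("--build-id", required=True,
                        help="build identifier, e.g. a nightly timestamp")
    return parser

# Parse a sample invocation as the automation processes might issue it.
args = build_parser().parse_args(
    ["--test-id", "42", "--result", "pass", "--build-id", "2005071109"])
print(args.result)
```

A thin wrapper like this would let the automated test farm and human testers share one submission path, keeping the server-side result handling identical for both.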

Design/implementation constraints

The following constraints exist:

  • despite its limitations, Testrunner is being actively used by the Mozilla QA team on a day-to-day basis. Litmus must replicate the useful functionality of Testrunner, and make it easier to accomplish the same tasks the team is doing today. If it does not, then Litmus will have failed.
  • Mozilla web services currently reside behind an LVS cluster. Litmus must be designed to work with and take advantage of this setup.
  • Litmus must be Bugzilla-aware, i.e. component/product lists must match, bug numbers should be marked up appropriately, etc.
  • documentation for Litmus must be written and maintained, in order to avoid the documentation void that exists for other Mozilla web tools.
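The "Bugzilla-aware" constraint above can be illustrated with a small sketch of bug-number markup. The helper name and the exact markup are assumptions for the example; only the bugzilla.mozilla.org URL scheme is real.

```python
import re

# Illustrative sketch of marking up bug numbers in free text, as the
# Bugzilla-awareness constraint suggests. The function name is hypothetical.
BUG_RE = re.compile(r"\bbug\s+(\d+)", re.IGNORECASE)

def markup_bug_numbers(text):
    """Turn plain 'bug NNNNN' references into Bugzilla links."""
    return BUG_RE.sub(
        r'<a href="https://bugzilla.mozilla.org/show_bug.cgi?id=\1">\g<0></a>',
        text)

print(markup_bug_numbers("Regressed by bug 294235."))
```

The product/component matching half of the constraint would similarly be handled by reading the category lists straight from the Bugzilla database rather than maintaining a parallel copy in Litmus.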

Assumptions and dependencies

The following assumptions and dependencies are known to exist:

  • the Spider tool can be successfully changed to run smoketests and BFTs in an automated manner;
  • machines for the new test farm will be bought and installed in the colo, as has already been decided;
  • Mozilla sysadmins have enough time to set up and manage these new machines. Note: some of the management responsibility for these machines will be shared by the Litmus maintainers.

External Interface Requirements

User interfaces

In progress.

Hardware interfaces

At the recent Mozilla QA summit meeting (2005/06/21), it was decided to invest in a small cluster of machines that would serve as a test farm, similar in concept to the collection of machines that currently perform builds for Tinderbox.

The test farm will be made up of the following machines:

  • Head Node (likely running Linux);
    • Linux Boxes (#?);
    • Mac XServes (#?);
    • Windows Boxes (#?).

Software interfaces

Adding more machines won't, in and of itself, do anything to ease the testing burden. Indeed, in the short term, it will simply add more system administration overhead.

This is where we hope to see the biggest payoff in terms of automation. A main Litmus daemon will live on the head node. This daemon will be responsible for coordinating automated testing on the test farm machines, and collating results as they come in.
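The collation step the daemon performs can be sketched as follows. The record layout (machine, test id, outcome) is an assumption made for the example; the eventual daemon would read results from the farm machines over whatever protocol is chosen.

```python
from collections import defaultdict

# Minimal sketch of result collation on the head node. The
# (machine, test_id, outcome) record layout is assumed for illustration.
def collate(results):
    """Group raw per-machine results into a pass/fail tally per test."""
    tally = defaultdict(lambda: {"pass": 0, "fail": 0})
    for machine, test_id, outcome in results:
        tally[test_id][outcome] += 1
    return dict(tally)

# Results as they might arrive from three different farm machines.
incoming = [
    ("linux-01", "smoketest-004", "pass"),
    ("win32-02", "smoketest-004", "fail"),
    ("osx-01", "smoketest-004", "pass"),
]
print(collate(incoming))  # {'smoketest-004': {'pass': 2, 'fail': 1}}
```

A per-test tally like this is also the natural input for the reporting interface, since it already separates results by test case while preserving which platforms disagreed.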

Communication protocols and interfaces

System Features

Replicate Testrunner functionality

Description

Testrunner is a test run management system that works as an add-on to Bugzilla. More information can be found at the Testrunner web site.

Note: Testrunner's concept of test case management is somewhat limited, which is why I have referred to it instead as 'test run management' above. Litmus will have a more holistic concept of test case management. See below.

Priority

Testrunner is currently being used by Mozilla QA staff to track smoketest and BFT results. The QA team can continue to use Testrunner in this capacity until the replacement is ready. It should be possible to implement some of the test case management and automation pieces before it is necessary to build the Testrunner functionality.

Action/result

Functional requirements

Test Case Management

Description

Priority

Action/result

Functional requirements

Automated Testing

Description

Priority

Action/result

Functional requirements

Reporting (Result Querying)

Description

Priority

Action/result

Functional requirements

Testing Requests

Description

Priority

Action/result

Functional requirements

Other Nonfunctional Requirements

Performance requirements

Safety requirements

Security requirements

Proper access control is essential.

Software quality attributes

Project documentation

User documentation

Other Requirements

Appendix A: Terminology/Glossary/Definitions list