Litmus:Requirements
External Interface Requirements
User interfaces
The primary human interface for the Litmus tools will be web-based: QA staff, developers, and testers will access the web interface to report manual test results, check automated test results, schedule testing requests, and report on past test runs.
We want the Litmus web front-end to be easy to use and the user experience to be positive. This is a tool that we expect the Mozilla QA staff to be using every day. The QA staff has some experience with the limitations of Testrunner, and we will be mining that experience to avoid making the same mistakes again.
In general, we want to design the web tool so that:
- the default display or report provides the most useful set of basic information for the user;
- common tasks are easily accessed from the default display;
- the path to more complicated tasks is easily discovered;
- some degree of customization is possible, so that users are able to streamline their own experience.
There will also be a command-line interface to the daemon/tools. This interface will be used by the automation processes for submitting results remotely via web services, but can also be used by testers to do the same.
We will want the remote APIs for the command-line interface to be fully documented (with examples) so it can be easily used by developers and QA staff.
If we do end up using a Spider browser extension to facilitate widespread automatic testing, we must provide the tester with some configurable options:
- limit tests by group/type;
- disable extension completely.
Hardware interfaces
At the recent Mozilla QA summit meeting (2005/06/21), it was decided to invest in a small cluster of machines that would serve as a test farm, similar in concept to the collection of machines that currently perform builds for Tinderbox.
The test farm will be made up of the following machines:
- Head Node (likely running Linux);
- Linux Boxes (2);
- Mac XServes (2);
- Windows Boxes (2).
Software interfaces
Adding more machines won't do anything to ease the testing burden in and of itself. Indeed, in the short term, it will simply add more system administration overhead.
This is where we hope to see the biggest payoff in terms of automation. A main Litmus daemon and web server will live on the head node. This daemon will be responsible for collating results as they come in. Depending on whether testing request scheduling proves feasible, the Litmus daemon will also be responsible for that scheduling.
The Spider test tool, or any other test tool we want to promote, should be packaged as a browser extension to facilitate widespread adoption.
Communication protocols and interfaces
Since Litmus is designed primarily as a web tool, the main protocol of record will be HTTP.
The command-line interface will need to accept remote procedure calls in order to manage automation. Both XML-RPC and SOAP have been proposed for this.
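As a rough illustration of how a remote submission over XML-RPC might look, here is a minimal sketch using Python's standard library. The endpoint URL, method name, and result fields are assumptions chosen only for illustration; the actual remote API has not been finalized.

```python
# Hypothetical sketch of a remote result submission over XML-RPC.
# The endpoint URL, method name, and field names are illustrative
# assumptions; the real Litmus web-services API is still to be defined.
import xmlrpc.client

def submit_result(server_url, username, password, result):
    """Send a single test result dictionary to the Litmus daemon."""
    proxy = xmlrpc.client.ServerProxy(server_url)
    # A single call carrying credentials and the result payload.
    return proxy.litmus.submitResult(username, password, result)

if __name__ == "__main__":
    example = {
        "testcase_id": 1234,      # which testcase was run
        "status": "pass",         # pass / fail / unclear
        "product": "Firefox",
        "branch": "Trunk",
        "platform": "Linux",
        "build_id": "2005062805",
        "comment": "Automated run from the test farm",
    }
    print(submit_result("http://litmus.example.org/xmlrpc.cgi",
                        "tester@example.org", "secret", example))
```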
System Features
Test Runs (was Replicate Testrunner functionality)
Testrunner history
Testrunner is a test run management system that works as an add-on over Bugzilla. More information can be found at the Testrunner web site.
Note: Testrunner's concept of test case management is somewhat limited, which is why I have referred to it instead as 'test run management' above. Litmus will have a somewhat more holistic concept of test case management. See below.
Description
Test runs are the primary means for us to focus community testing efforts.
To that end, Litmus admins can add special test runs (e.g. for testdays, releases, etc.). There will also be ongoing test runs for the common test groups that exist already.
Test runs will replace the existing test groups as the top level of organization for running tests. This provides the extra (3rd) level of hierarchical organization that people have been asking for.
Since test runs are more focused, test run results can be used more effectively to report on testing progress. Special reports will be available automatically for each test run, and can be further customized using existing search functionality.
There will also need to be a suite of administration tools to manage and maintain test runs, and all the various parts that make them up.
In the future, we would also like to allow users to create their own custom test runs.
Priority
Replacing Testrunner for use in test runs is now the #1 priority for Litmus.
Once the core test run functionality is in place, we would also like to allow regular users to create their own custom test runs. However, this is a secondary goal: we will design the basic test run system with this in mind, but custom runs will not be implemented until the core test run behavior is in place.
Functional requirements
Basic Functionality:
- Run Tests interface
- Test runs will replace test groups in the current interface;
- Special test runs and test runs in progress will be highlighted and presented first;
- Continuous (ongoing) test runs for certain test groups will also be displayed, e.g. Smoketests, BFTs;
- The overall progression of the interface (sys config->test run->subgroup->testcases) will remain the same (see the data-model sketch at the end of this section);
- Reporting
- The existing reporting tools will need to be made aware of test runs:
- Users should be able to navigate the list of currently available test runs;
- Admins should have the same functionality, but with the added ability to see/choose test runs that have been disabled or marked as out-of-date;
- Information about specific test runs will include the following statistics:
- coverage (percentages);
- testcases remaining to be run, with a link to generate a special testlist of the outstanding testcases for running;
- list of failures;
- list of results with comments;
- All of the above will be properly interlinked so that more information about results and testcases can be found where appropriate.
Admin Functionality:
- tools to add/clone/modify/disable/delete the various testing-related entities: products, platforms, operating systems, branches, and locales.
- add/clone/modify/disable/delete testcases;
- add/clone/modify/disable/delete subgroups;
- add/clone/modify/disable/delete test groups;
- add/clone/modify/disable/delete test runs;
- tools to create test runs from test groups, including the ability to change the scheduling of test runs;
- new searching tools to limit results by test runs;
- new reporting tools for test runs. This includes automated reports for testdays;
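To make the organizational hierarchy above concrete, the following is a minimal sketch of how the test run data model (system configuration -> test run -> subgroup -> testcases) might be expressed, including the coverage statistic mentioned in the reporting requirements. The class and field names are assumptions for illustration, not the actual Litmus schema.

```python
# Illustrative sketch of the proposed hierarchy:
# system configuration -> test run -> subgroup -> testcases.
# Class and field names are assumptions, not the real Litmus schema.
from dataclasses import dataclass, field
from typing import List

@dataclass
class Testcase:
    id: int
    summary: str
    enabled: bool = True

@dataclass
class Subgroup:
    name: str
    testcases: List[Testcase] = field(default_factory=list)

@dataclass
class TestRun:
    name: str                 # e.g. "Firefox 1.5 Beta 1 BFTs"
    product: str
    branch: str
    ongoing: bool = False     # continuous runs such as Smoketests/BFTs
    subgroups: List[Subgroup] = field(default_factory=list)

    def coverage(self, completed_ids: set) -> float:
        """Percentage of enabled testcases that already have results."""
        all_ids = [tc.id for sg in self.subgroups
                   for tc in sg.testcases if tc.enabled]
        if not all_ids:
            return 0.0
        done = sum(1 for tc_id in all_ids if tc_id in completed_ids)
        return 100.0 * done / len(all_ids)
```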
Test Case Management
Description
Testrunner currently contains some metadata about test cases, and tracks results based on that metadata. However, Testrunner does not contain a copy (or even a link to) the test case itself. Updating test cases is a two-step procedure with no guarantee that both steps will be executed.
As much as possible, Litmus will act as a repository for test cases. This will allow for metadata to be associated directly with test cases. For external tests that cannot be brought into the repository, there will be sufficient information given to acquire the test case(s) from the remote source, e.g. download URL.
Priority
Since there does not currently exist a central repository for test cases, this feature has the highest priority. If we can get a test case management interface up quickly, we can ensure that all testers are running the exact same set of tests, and implement automation from there.
Functional requirements
Test case management requires the following:
- test case storage:
- for as many tests as possible, this will hopefully mean storing fully automated test cases in whatever syntax is appropriate for use with the Spider tool;
- links to external test cases when they cannot be stored by the system, including access information;
- full instructions for running tests that cannot be automated. Note: this is similar to the existing test case functionality in Testrunner;
- version control for test cases, with trackable history and commentary;
- linking of test results to individual test cases with version information;
- ability to check out/download groups of test cases, based on test run, functionality group, or platform;
- full access control restrictions for the test case repository. Security-related test cases should only be visible/downloadable to those with sufficient privileges;
- web-based administration (there will be some overlap with the Testrunner functionality outlined above):
- add/modify/delete test cases;
- add/modify/delete test runs;
- add/modify/remove privileges for users;
- ability to view recent test case activity (additions/updates/deletions);
- ability to search for and display testcases based on the following criteria (a filtering sketch follows this list):
- grouping (product/branch/etc.)
- ID
- text/regexp
- recent activity
- tag
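As a sketch of how the search criteria above might be applied, the snippet below filters an in-memory list of testcase records. The record fields are assumptions made for illustration; the real implementation would query the database directly.

```python
# Hypothetical in-memory filter demonstrating the search criteria above
# (grouping, ID, text/regexp, recent activity, tag). Field names are
# illustrative assumptions, not the actual Litmus schema.
import re
from datetime import datetime, timedelta

def search_testcases(testcases, product=None, branch=None, testcase_id=None,
                     pattern=None, changed_within_days=None, tag=None):
    """Return testcases matching all of the supplied criteria."""
    cutoff = (datetime.now() - timedelta(days=changed_within_days)
              if changed_within_days else None)
    results = []
    for tc in testcases:
        if product and tc["product"] != product:
            continue
        if branch and tc["branch"] != branch:
            continue
        if testcase_id and tc["id"] != testcase_id:
            continue
        if pattern and not re.search(pattern, tc["summary"] + " " + tc["steps"]):
            continue
        if cutoff and tc["last_changed"] < cutoff:
            continue
        if tag and tag not in tc["tags"]:
            continue
        results.append(tc)
    return results
```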
Web Services (was Automated Testing)
Description
We need to create a framework for submitting/receiving automated test results that can be used by the current QA test farm, but that can also accommodate receiving test results from other sources. We will implement a well-defined web services API to accommodate this.
Priority
This is a high priority, and is next in line for implementation after the test run functionality.
Functional requirements
Litmus web services will require:
- a well-defined reporting format, complete with a means to validate incoming results. An XML format validated against a DTD has been proposed for this (see the parsing sketch after this list);
- a processing script to accept and parse incoming results;
- an authentication component to ensure submissions come from trusted sources only;
- a method to avoid both report spam and genuine duplicate reports;
- logging of all submissions for auditing and debugging purposes.
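The exact reporting format has not been settled, but as a sketch of the processing step, the snippet below parses a hypothetical XML submission and checks that required fields are present before the records would be handed to the database layer. The element names and the sample document are assumptions for illustration only.

```python
# Sketch of a processing script for incoming results. The XML layout
# (element names, structure) is an illustrative assumption; the real
# format and its DTD are still to be defined.
import xml.etree.ElementTree as ET

REQUIRED_FIELDS = ("testcase_id", "status", "product", "branch",
                   "platform", "build_id")

SAMPLE_SUBMISSION = """
<litmusresults source="test-farm-linux-1">
  <result>
    <testcase_id>1234</testcase_id>
    <status>pass</status>
    <product>Firefox</product>
    <branch>Trunk</branch>
    <platform>Linux</platform>
    <build_id>2005062805</build_id>
  </result>
</litmusresults>
"""

def parse_results(xml_text):
    """Validate and extract result records from an XML submission."""
    root = ET.fromstring(xml_text)
    records = []
    for result in root.findall("result"):
        record = {child.tag: child.text for child in result}
        missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
        if missing:
            raise ValueError("missing required fields: %s" % ", ".join(missing))
        records.append(record)
    return records

print(parse_results(SAMPLE_SUBMISSION))
```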
Reporting (Result Querying)
Description
Testing automation will generate an ongoing stream of test results. These results will be useless unless the proper tools are in place to query and compare them. This will address a current void in Testrunner, wherein there is no way to perform a head-to-head comparison between the results from two separate test runs. This makes it harder to spot regressions.
We also have the opportunity as we move forward to begin collecting (and reporting on) performance and defect data. This will allow us to create meaningful trend data.
Priority
Only some of the required reports are known at the time of writing. The various reports share a core set of functionality which can be put in place initially, and new or more complicated reports can be added over time.
The test run comparison reports will likely be the first to be implemented.
Functional requirements
The reporting interface will require the following features:
- proper limiting for the number of results returned on a single page. This should also be configurable with some appropriate upper bound. The user should be able to navigate through result sets that span more than a single page (see the paging sketch after this list);
- ability to limit results based on certain criteria;
- ability to sort/reverse results based on certain criteria;
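A minimal sketch of how result limiting, sorting, and paging might fit together is shown below; the field names and the default/upper-bound page sizes are assumptions chosen only to illustrate the behaviour.

```python
# Illustrative paging/sorting helper. The default page size and upper
# bound are assumptions; the real values would be configurable.
DEFAULT_PAGE_SIZE = 25
MAX_PAGE_SIZE = 500

def page_results(results, sort_key="timestamp", reverse=True,
                 page=1, page_size=DEFAULT_PAGE_SIZE):
    """Sort results and return one page plus simple navigation info."""
    page_size = min(page_size, MAX_PAGE_SIZE)
    ordered = sorted(results, key=lambda r: r[sort_key], reverse=reverse)
    start = (page - 1) * page_size
    chunk = ordered[start:start + page_size]
    total_pages = max(1, -(-len(ordered) // page_size))  # ceiling division
    return {"results": chunk, "page": page, "total_pages": total_pages}
```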
The following specific reports are needed:
- single test case: results from a single test case are marked-up for viewing;
- test run: test case results from a single test run are marked-up and presented in synopsis form;
- test case comparison: head-to-head comparison between two test case results, with differences highlighted;
- test run comparison: synopsis views for two test runs are compared head-to-head, with differences highlighted;
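Because the test run comparison report is likely to be implemented first, here is a minimal sketch of the underlying diff: two mappings of testcase ID to status are compared and only the differences are kept. The data shapes are assumptions for illustration; the real report would draw its data from the results database.

```python
# Sketch of a head-to-head test run comparison. Each run is assumed to
# be a mapping of testcase ID -> status ("pass", "fail", "unclear").
def compare_runs(run_a, run_b):
    """Return testcases whose status differs between two runs."""
    differences = {}
    for tc_id in sorted(set(run_a) | set(run_b)):
        status_a = run_a.get(tc_id, "not run")
        status_b = run_b.get(tc_id, "not run")
        if status_a != status_b:
            differences[tc_id] = (status_a, status_b)
    return differences

# Example: testcase 102 regressed, 105 was only covered in the first run.
print(compare_runs({101: "pass", 102: "pass", 105: "pass"},
                   {101: "pass", 102: "fail"}))
```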
Automation Control (future)
Description
Some automated testing is already occurring using Bob Clary's Spider tool. Our goal with automated testing is two-fold:
- automate the automation: get the automated testing running continuously in an environment where it can be monitored, queried, and updated;
- automate as much regular testing as possible: this includes both smoketests and BFTs. Tests that cannot be run automatically should require as little interaction as possible, and this interaction must be standardized.
Priority
Automation control is not in the short-term critical path for Litmus.
Functional requirements
These requirements are out-of-date, but they represent our initial thinking and discussion on the subject of automation control in relation to Litmus. Spider is only one piece of a larger automation picture which now includes eggplant, jssh, and no doubt more in the future.
There are three facets here. The first is the set of test automation processes/daemons that will run on the individual testing machines in the test farm. The second is the Spider browser extension that will actually be running the tests. The final piece is the central Litmus daemon on the head node, which will collect and collate the results submitted by the testing machines.