Litmus:Requirements

'''[[Litmus|« back to Litmus main page]]'''
__TOC__
= Introduction =


== Purpose ==
The purpose of this document is to capture in one place all the various requirements for the Litmus quality assurance (henceforth, QA) tool. In the past, many of the Netscape/Mozilla webtools have grown organically without much supporting documentation. While this document does not necessarily preclude this from happening with Litmus, it will at least give us an initial point of reference from which we can start design/development.


== Document conventions ==
TBD.
 
== Intended audience ==
This document is intended for QA staff, developers, build/release personnel, and sysadmins from the Mozilla Foundation, as well as community members interested in helping to improve the QA process for Mozilla products.
 
== Additional information ==
* [[Litmus|Litmus main page]]
* [[Litmus:Test Result Format DTD|Test Result Format (DTD)]]
* [[TestcaseManagementIdeas|Test case management ideas]]
 
== Contact Info ==
[http://developer-test.mozilla.org/en/docs/User:ChrisCooper Chris Cooper]
 
== References ==
* Le Vie, Jr., Donn. "Writing Software Requirements Specifications" <em>TECHWR-L</em> 7 July 2002. &lt;http://www.techwr-l.com/techwhirl/magazine/writing/softwarerequirementspecs.html&gt;.
 
= Overall Description =
 
== Perspective ==
Mozilla testing resources are spread pretty thin. Even with some community support, the turnaround time for smoke testing and basic functional testing (BFT) for release candidates can take several days (the smoketests and BFTs are not currently automated). If regressions or new bugs are found during the testing process, the cycle can be even longer.
 
An existing tool, [[Testrunner]], helps with the administration of this process, but the tool is somewhat limited. Testrunner has the concept of a "test run" as a single instance of testing, but these test runs must be manually cloned for each new testing cycle on a per-platform basis, and tests cannot be re-ordered within test runs. Testrunner also does not let multiple users combine their efforts to work on a single test run; each user must have a separate test run, or have their results collated by a single "superuser."
 
The individual tests that make up a test run are not stored anywhere in Testrunner. Instead, test lists must be kept in sync with external test repositories manually. This has made it impossible for any kind of automation to be built into Testrunner.
 
There is also no way to do any meaningful querying or reporting on historical test results using Testrunner. On top of all this, Testrunner is tied intimately to specific versions of Bugzilla; small changes to Bugzilla can cause Testrunner to stop working.
 
Bob Clary has a XUL-based test harness, called [http://bclary.com/2004/07/10/mozilla-spiders Spider], which he has used to automate the testing of many Document Object Model (DOM) and JavaScript (JS) engine tests, but there has never been a central repository for test results, so his results have been posted to [http://test.bclary.com/results/ his personal testing website].
 
Developers often would like to have testing done to verify a given change or patch. Historically, this has not often been possible due to the constant demands on the QA team.
 
Addressing these shortcomings in the current tools (or the lack of tools, in general) will do much to streamline the QA process for Mozilla. This should have  the desirable side effect of freeing up QA staff to work on more interesting things, e.g. harder edge-case testing, incoming bug verification and triage, community interaction, etc.
 
== Functions ==
The new QA tool, Litmus, is meant to address these problems by:
 
* serving as a repository for test cases, with all the inherent management abilities that implies;
* serving as a repository for test results, carrying over the best features of Testrunner, e.g. test lists, division of labor, etc.;
* providing a query interface for viewing, reporting on, and comparing test results;
* providing a request interface whereby developers can queue testing requests for patches, fixes, and regressions;
* managing the automation of testing requests &mdash; one-time, and recurring (e.g. from [http://tinderbox.mozilla.org/showbuilds.cgi tinderbox]) &mdash; on a new group of dedicated testing servers, managing request priorities appropriately;
* exposing an API to allow developers to work with the various tools easily outside of a graphical environment;
* making it easier for casual testers to assist with testing Mozilla products.


== User classes and characteristics ==
Litmus will attract the following types of users:
* '''Sysadmins'''<br/>These power users will be responsible for the maintenance of the underlying machines, and will likely be doing so from the command line. They will be primarily interested in how easy Litmus is to set up and install, in the CPU, disk space, network, and database usage of the Litmus daemon and web tool, and in any security implications that Litmus exposes.
* '''Litmus Maintainers'''<br/>This is a class of sysadmins who are solely responsible for the upkeep of the Litmus tool itself. They will likely have intimate knowledge of its inner workings and will be responsible for fixing bugs in Litmus itself.
* '''Build/Release Engineers'''<br/>Given their role, these users will be primarily interested in the status of automated testing for builds/release candidates, with the ability to compare test results between two different release candidates. They will also want the ability to pre-empt tests in progress if release testing is needed immediately. These users will have a history of using various existing web tools, e.g. tinderbox, [http://bonsai.mozilla.org/ bonsai], [http://lxr.mozilla.org/ LXR], so they can be expected to adapt to a new web tool quickly.
* '''QA Staff'''<br/>Existing QA staff will already be familiar with Testrunner, which should ease the transition to a new web tool. This user class will have experience running tests both by hand and using the automated Spider tool. Because of this, most of these users will have developed an intuitive feel for what constitutes a valid testing result. These users will expect to be able to do the same things that they can do currently with Testrunner.
* '''Core Mozilla Developers'''<br/>Core developers will already be familiar with web tools such as [https://bugzilla.mozilla.org/ Bugzilla] and tinderbox. Due to their familiarity with Bugzilla, they will expect to see the same Product and Component categories in Litmus. This group might correspond to the set of developers with superreview and/or review status in Bugzilla. These users might expect to receive higher priority for testing requests that they submit.
* '''Mozilla Developers (including add-ons and extensions), Localizers'''<br/>These developers will already be familiar with web tools such as [https://bugzilla.mozilla.org/ Bugzilla] and tinderbox. Due to their familiarity with Bugzilla, they will expect to see the same Product and Component categories in Litmus.
* '''Testers'''<br/>This user class will be familiar with using a web browser, but may not necessarily be familiar with the suite of Mozilla web tools used by developers. With proper instruction, they can be expected to submit testing results automatically if the process is not too complicated. These users might be interested in seeing test results that they themselves have contributed, and comparisons of the test runs that those results belong to.
* '''Community-at-large'''<br/>Anyone with a web browser could find Litmus on the web. Some of these people will want to see quality reports (partners, journalists, competitors), others may just want to poke around. Like Bugzilla, basic querying will be open to all, but users will need to register with the system in order to do much else.
== Operating environment ==
The main Litmus daemon and web tool will reside on an as-yet-unpurchased machine. This machine will likely be running Linux (RHEL3?) to facilitate remote administration. The daemon and web tool will need to be designed to use the existing [http://www.linuxvirtualserver.org/ Linux Virtual Server (LVS)] cluster.


== Design/implementation constraints ==
The following constraints exist:
* despite its limitations, Testrunner is being actively used by the Mozilla QA team on a day-to-day basis. Litmus must replicate the useful functionality of Testrunner, and make it easier to accomplish the same tasks the team is doing today. If it does not, then Litmus will have failed;
* Mozilla web services currently reside behind an LVS cluster. Litmus must be designed to work with and take advantage of this setup;
* Litmus must be Bugzilla-aware, i.e. component/product lists must match, bug numbers should be marked up appropriately (see the sketch after this list), etc.;
* documentation for Litmus <em>must</em> be written and maintained, in order to avoid the documentation void that exists for other Mozilla web tools.
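
To make the "Bugzilla-aware" markup constraint concrete, here is a minimal sketch of how bug numbers mentioned in comments could be linked back to Bugzilla. It is purely illustrative: the helper name and the way Litmus would actually render links are assumptions, not existing Litmus code.

<pre>
import re

# Assumed Bugzilla base URL; bugzilla.mozilla.org uses show_bug.cgi?id=NNNNN.
BUGZILLA_URL = "https://bugzilla.mozilla.org/show_bug.cgi?id="

def link_bug_numbers(text):
    """Replace occurrences of 'bug 12345' (or 'Bug 12345') with an HTML link to that bug."""
    def make_link(match):
        bug_id = match.group(1)
        return '<a href="%s%s">bug %s</a>' % (BUGZILLA_URL, bug_id, bug_id)
    return re.sub(r"[Bb]ug\s+(\d+)", make_link, text)

print(link_bug_numbers("Crashes on startup; see bug 12345 for details."))
</pre>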


== Assumptions and dependencies ==
The following assumptions and dependencies are known to exist:
* the Spider tool can be successfully changed to run smoketests and BFTs in an automated manner;
* machines for the new test farm will be bought and installed in the colo, as has already been decided;
* Mozilla sysadmins have enough time to set up and manage these new machines. Note: some of the management responsibility for these machines will be shared by the Litmus maintainers.
 
= External Interface Requirements =

== User interfaces ==
The primary human interface for the Litmus tools will be web-based: QA staff, developers, and testers will access the web interface to report manual test results, check automated test results, schedule testing requests, and report on past test runs.

We want the Litmus web front-end to be easy to use and the user experience to be positive. This is a tool that we expect the Mozilla QA staff to be using every day. The QA staff has some experience with the limitations of Testrunner, and we will be mining that experience to avoid making the same mistakes again.

In general, we want to design the web tool so that:
* the default display or report provides the most useful set of basic information for the user;
* common tasks are easily accessed from the default display;
* the path to more complicated tasks is easily discovered;
* some degree of customization is possible, so that users are able to streamline their own experience.

There will also be a command-line interface to the daemon/tools. This interface will be used by the automation processes for submitting results remotely via web services, but can also be used by testers to do the same.

We will want the remote APIs for the command-line interface to be fully documented (with examples) so they can be easily used by developers and QA staff.

If we do end up using a Spider browser extension to facilitate widespread automatic testing, we must provide the tester with some configurable options:
* limit tests by group/type;
* disable the extension completely.


== Hardware interfaces ==
At the recent Mozilla QA summit meeting (2005/06/21), it was decided to invest in a small cluster of machines that would serve as a test farm, similar in concept to the collection of machines that currently perform builds for Tinderbox.

The test farm will be made up of the following machines:
* Head Node (likely running Linux);
** Linux Boxes (2);
** Mac XServes (2);
** Windows Boxes (2).


== Software interfaces ==
Adding more machines won't do anything to aid the testing burden in and of itself. Indeed, in the short-term, it will simply add more system administration overhead.


This is where we hope to see the biggest payoff in terms of automation. A main Litmus daemon and web server will live on the head node. This daemon will be responsible for collating results as they come in. Depending on whether testing request scheduling proves feasible, the Litmus daemon will also be responsible for that scheduling.
 
The Spider test tool, or any other test tool we want to promote, should be packaged as a browser extension to facilitate widespread adoption.


== Communication protocols and interfaces ==
Since Litmus is designed primarily as a web tool, the main protocol of record will be HTTP.

The command-line interface will need to accept remote procedure calls in order to manage automation. Both XML-RPC and SOAP have been proposed for this.
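
If XML-RPC were chosen, a remote caller might look roughly like the sketch below (Python). The endpoint URL and method names here are invented for illustration only; no Litmus RPC API has been defined yet.

<pre>
import xmlrpc.client

# Hypothetical endpoint; the real service location would be part of the documented API.
server = xmlrpc.client.ServerProxy("https://litmus.mozilla.org/xmlrpc.cgi")

# Hypothetical methods: inspect the automation queue, then schedule a smoketest run.
pending = server.litmus.getPendingRequests("Firefox", "1.5", "Linux")
request_id = server.litmus.scheduleRun("Smoketests", {"branch": "1.8", "locale": "en-US"})
print("queued request", request_id, "behind", len(pending), "pending requests")
</pre>
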
= System Features =


== Test Runs (was Replicate Testrunner functionality) ==
=== Testrunner history ===
Testrunner is a test run management system that works as an add-on over Bugzilla. More information can be found at the [http://www.willowriver.net/products/testrunner.php Testrunner web site].

Note: Testrunner's concept of test case management is somewhat limited, which is why I have referred to it as 'test run management' above. Litmus will have a somewhat more holistic concept of test case management. See below.
=== Description ===
Test runs are the primary means for us to focus community testing efforts.

To that end, Litmus admins can add special test runs (e.g. for testdays, releases, etc.). There will also be ongoing test runs for the common test groups that exist already.

Test runs will replace the existing test groups as the top level of organization for running tests. This provides the extra (3rd) level of hierarchical organization that people have been asking for.

Since test runs are more focused, test run results can be used more meaningfully to report on testing progress. Special reports will be available automatically for each test run, and can be further customized using existing search functionality.

There will also need to be a suite of administration tools to manage and maintain test runs, and all the various parts that make them up.

In the future, we would also like to allow users to create their own custom test runs.
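
To make the intended organization concrete (test runs above subgroups, subgroups above individual testcases, as described above and in the functional requirements below), here is a minimal data-model sketch in Python. The class and field names are illustrative only and do not reflect the actual Litmus schema.

<pre>
from dataclasses import dataclass, field
from typing import List

@dataclass
class Testcase:
    id: int
    summary: str
    steps: str
    expected_results: str

@dataclass
class Subgroup:
    name: str                                      # e.g. "Installation", "Bookmarks"
    testcases: List[Testcase] = field(default_factory=list)

@dataclass
class TestRun:
    name: str                                      # e.g. "Firefox 2.0 RC1 BFTs"
    product: str
    branch: str
    subgroups: List[Subgroup] = field(default_factory=list)  # test runs sit above subgroups
</pre>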


=== Priority ===
Replacing Testrunner for use in test runs is now the #1 priority for Litmus.

Once the core test run functionality is in place, we would also like to allow regular users to create their own custom test runs. However, this is a secondary goal. We will design the basic test run system with this in mind, but it will not be implemented until the core test run behavior is in place.


=== Functional requirements ===
Basic Functionality:
* Run Tests interface
** Test runs will replace test groups in the current interface;
** Special test runs and test runs in progress will be highlighted and presented first;
** Continuous (ongoing) test runs for certain test groups will also be displayed, e.g. Smoketests, BFTs;
** the overall progression of the interface (sys config -> test run -> subgroup -> testcases) will remain the same.
* Reporting
** The existing reporting tools will need to be made aware of test runs:
*** Users should be able to navigate the list of currently available test runs;
*** Admins should have the same functionality, but with the added ability to see/choose test runs that have been disabled or marked as out-of-date;
** Information about specific test runs will include the following statistics:
*** coverage (percentages);
*** testcases remaining to be run, with a link to generate a special testlist of the outstanding testcases for running;
*** list of failures;
*** list of results with comments;
** All of the above will be properly interlinked so that more information about results and testcases can be found where appropriate.

Admin Functionality:
* tools to add/clone/modify/disable/delete the various testing-related entities: products, platforms, operating systems, branches, and locales;
* add/clone/modify/disable/delete testcases;
* add/clone/modify/disable/delete subgroups;
* add/clone/modify/disable/delete test groups;
* add/clone/modify/disable/delete test runs;
* tools to create test runs from test groups, including the ability to change the scheduling of test runs;
* new searching tools to limit results by test runs;
* new reporting tools for test runs, including automated reports for testdays.


== Test Case Management ==
=== Description ===
Testrunner currently contains some metadata about test cases, and tracks results based on that metadata. However, Testrunner does not contain a copy of (or even a link to) the test case itself. Updating test cases is a two-step procedure with no guarantee that both steps will be executed.

As much as possible, Litmus will act as a repository for test cases. This will allow for metadata to be associated directly with test cases. For external tests that cannot be brought into the repository, there will be sufficient information given to acquire the test case(s) from the remote source, e.g. a download URL.

=== Priority ===
Since there does not currently exist a central repository for test cases, this feature has the highest priority. If we can get a test case management interface up quickly, we can ensure that all testers are running the exact same set of tests, and implement automation from there.

=== Functional requirements ===
Test case management requires the following:
* test case storage:
** for as many tests as possible, this will hopefully mean storing fully automated test cases in whatever syntax is appropriate for use with the Spider tool;
** links to external test cases when they cannot be stored by the system, including access information;
** full instructions for running tests that cannot be automated. Note: this is similar to the existing test case functionality in Testrunner;
* version control for test cases, with trackable history and commentary;
* linking of test results to individual test cases, with version information;
* ability to check out/download groups of test cases, based on test run, functional group, or platform;
* full access control restrictions for the test case repository. Security-related test cases should only be visible/downloadable to those with sufficient privileges;
* web-based administration (there will be some overlap with the Testrunner functionality outlined above):
** add/modify/delete test cases;
** add/modify/delete test runs;
** add/modify/remove privileges for users;
** ability to view recent test case activity (additions/updates/deletions);
* ability to search for and display testcases based on:
** grouping (product/branch/etc.)
** ID
** text/regexp
** recent activity
** tag


== Web Services (was Automated Testing) ==
=== Description ===
We need to create a framework for submitting/receiving automated test results that can be used by the current QA test farm, but that can also accommodate receiving test results from other sources. We will implement a well-defined web services API to accommodate this.

=== Priority ===
This is a high priority, and is next in line for implementation after the test run functionality.

=== Functional requirements ===
Litmus web services will require the following (a sketch of a possible submission client follows the list):
* a well-defined reporting format, complete with a means to validate incoming results. [[Litmus:Web_Services|XML]] and a [[Litmus:Test_Result_Format_DTD|DTD]] have been proposed for this;
* a processing script to accept and parse incoming results;
* an authentication component to ensure submissions come from trusted sources only;
* a method to avoid both report spam and genuine duplicate reports;
* logging of all submissions for auditing and debugging purposes.
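
As a rough illustration of what a submission client might do, the sketch below builds a small XML result document and POSTs it over HTTP. The endpoint, element names, and authentication headers are assumptions for illustration; the real format is governed by the [[Litmus:Test_Result_Format_DTD|test result DTD]] and the web services documentation.

<pre>
import urllib.request
from xml.sax.saxutils import escape

# Hypothetical endpoint; not a defined Litmus URL.
LITMUS_URL = "https://litmus.mozilla.org/process_results.cgi"

def build_report(testcase_id, status, build_id, platform, comment=""):
    """Assemble a single test result as a small XML document (illustrative element names)."""
    return (
        '<?xml version="1.0"?>\n'
        "<result>\n"
        "  <testcase_id>%d</testcase_id>\n"
        "  <status>%s</status>\n"
        "  <build_id>%s</build_id>\n"
        "  <platform>%s</platform>\n"
        "  <comment>%s</comment>\n"
        "</result>\n"
    ) % (testcase_id, status, build_id, platform, escape(comment))

def submit(report_xml, username, token):
    """POST the report; the authentication headers are placeholders only."""
    req = urllib.request.Request(
        LITMUS_URL,
        data=report_xml.encode("utf-8"),
        headers={
            "Content-Type": "text/xml",
            "X-Litmus-User": username,   # hypothetical auth header
            "X-Litmus-Token": token,     # hypothetical auth header
        },
    )
    with urllib.request.urlopen(req) as response:
        return response.status

if __name__ == "__main__":
    report = build_report(101, "pass", "2006012805", "Linux", comment="no issues seen")
    print(report)   # submit(report, "tester@example.com", "secret") would send it
</pre>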


== Reporting (Result Querying) ==
=== Description ===
Testing automation will generate an ongoing stream of test results. These results will be useless unless the proper tools are in place to query and compare them. This will address a current void in Testrunner, wherein there is no way to perform a head-to-head comparison between the results from two separate test runs. This makes it harder to spot regressions.

We also have the opportunity as we move forward to begin collecting (and reporting on) performance and defect data. This will allow us to create meaningful trend data.

=== Priority ===
Only some of the required reports are known at the time of writing. The various reports share a core set of functionality which can be put in place initially, and new or more complicated reports can be added over time.

The test run comparison reports will likely be the first to be implemented.

=== Functional requirements ===
The reporting interface will require the following features:
* proper limiting of the number of results returned on a single page. This should also be configurable, with some appropriate upper bound. The user should be able to navigate through result sets that span more than a single page;
* ability to limit results based on certain criteria;
* ability to sort/reverse results based on certain criteria.

The following specific reports are needed:
* single test case: results from a single test case are marked up for viewing;
* test run: test case results from a single test run are marked up and presented in synopsis form;
* test case comparison: head-to-head comparison between two test case results, with differences highlighted;
* test run comparison: synopsis views for two test runs are compared head-to-head, with differences highlighted (a comparison sketch follows this list).
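
The core of a test run comparison is straightforward to sketch. Assuming results can be reduced to a mapping from testcase ID to status (an assumption about the data model, not the actual Litmus schema), the differences between two runs can be computed as follows:

<pre>
def compare_runs(run_a, run_b):
    """Given two {testcase_id: status} mappings, return the testcases whose status differs.

    The result maps testcase_id -> (status_in_a, status_in_b), using None where a
    testcase was not run at all in one of the runs.
    """
    differences = {}
    for testcase_id in set(run_a) | set(run_b):
        status_a = run_a.get(testcase_id)
        status_b = run_b.get(testcase_id)
        if status_a != status_b:
            differences[testcase_id] = (status_a, status_b)
    return differences

# Example: a regression shows up as ('pass', 'fail').
rc1 = {101: "pass", 102: "pass", 103: "fail"}
rc2 = {101: "pass", 102: "fail", 104: "pass"}
print(compare_runs(rc1, rc2))
# e.g. {102: ('pass', 'fail'), 103: ('fail', None), 104: (None, 'pass')}
</pre>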


== Automation Control (future) ==
=== Description ===
Some automated testing is already occurring using Bob Clary's Spider tool. Our goal with automated testing is two-fold:
# automate the automation: get the automated testing running continuously in an environment where it can be monitored, queried, and updated;
# automate as much regular testing as possible: this includes both smoketests and BFTs. Tests that cannot be run automatically should require as little interaction as possible, and this interaction must be standardized.

=== Priority ===
Automation control is not in the short-term critical path for Litmus.

=== Functional requirements ===
These requirements are out-of-date, but they represent our initial thinking and discussion on the subject of automation control in relation to Litmus. Spider is only one piece of a larger automation picture which now includes eggplant, [http://www.daveliebreich.com/blog/?p=53 jssh], and no doubt more in the future.

There are three facets here. The first is the set of test automation processes/daemons that will run on the individual testing machines in the test farm. The second is the Spider browser extension that will actually be running the tests. The final piece is the test result collating process/daemon that will live on the main Litmus server.

= Other Nonfunctional Requirements =
 
== Performance requirements ==
 
== Safety requirements ==
Due to the sensitive nature of some of the security-related test cases, there may be liability issues surrounding access control. See Security requirements below.
 
== Security requirements ==
Proper access control is essential, especially due to the presence of security-related test cases in the test case repository. The Bugzilla authentication model should be extensible for use with Litmus. Security-related testcases and results can be invisible (or stubbed) for users with inadequate permissions.
 
== Software quality attributes ==
Just like the software it is testing, Litmus is itself a software tool, subject to the same flaws and limitations.
 
Bugs can be filed against [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=Webtools&component=Litmus&long_desc_type=substring&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&resolution=DUPLICATE&resolution=---&emailassigned_to1=1&emailtype1=exact&email1=&emailassigned_to2=1&emailreporter2=1&emailqa_contact2=1&emailtype2=exact&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&field0-0-0=noop&type0-0-0=noop&value0-0-0= Litmus in Bugzilla] using the product '''Webtools''' and the component '''Litmus'''.
 
== Project documentation ==
All Litmus project documentation will reside under the Litmus hierarchy on the Mozilla wiki: http://wiki.mozilla.org/Litmus
 
Note: this may be migrated to [http://developer.mozilla.org/ DevMo] in the future.
 
== User documentation ==
All Litmus user documentation will also reside under the Litmus hierarchy on the Mozilla wiki: http://wiki.mozilla.org/Litmus
 
Note: this may be migrated to [http://developer.mozilla.org/ DevMo] in the future.
 
= Other Requirements =
 
== Appendix A: Terminology/Glossary/Definitions list ==
 
[[Category:QA]]
