Litmus:Requirements

Revision as of 20:34, 6 March 2006

External Interface Requirements

User interfaces

The primary human interface for the Litmus tools will be web-based: QA staff, developers, and testers will access the web interface to report manual test results, check automated test results, schedule testing requests, and report on past test runs.

We want the Litmus web front-end to be easy to use and the user experience to be positive. This is a tool that we expect the Mozilla QA staff to be using every day. The QA staff has some experience with the limitations of Testrunner, and we will be mining that experience to avoid making the same mistakes again.

In general, we want to design the web tool so that:

  • the default display or report provides the most useful set of basic information for the user;
  • common tasks are easily accessed from the default display;
  • the path to more complicated tasks is easily discovered;
  • some degree of customization is possible, so that users are able to streamline their own experience.

There will also be a command-line interface to the daemon/tools. This interface will be used by the automation processes for submitting results remotely via web services, but can also be used by testers to do the same.

We will want the remote APIs for the command-line interface to be fully documented (with examples) so it can be easily used by developers and QA staff.
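
As a rough illustration only, a remote submission through such an API might look like the sketch below. The endpoint URL, the submit_result method, and the field names are placeholders of our own, not the documented interface; Python and XML-RPC are assumed here purely for the example.

  # Hypothetical remote submission of a single test result.
  import xmlrpc.client

  # Placeholder web-services endpoint on the Litmus head node.
  server = xmlrpc.client.ServerProxy("http://litmus.example.org/xmlrpc.cgi")

  result_id = server.submit_result({
      "testcase_id": 1234,          # which test case was run
      "build_id":    "2006030612",  # build identifier under test
      "platform":    "Linux",
      "status":      "PASSED",      # PASSED / FAILED / NOT RUN
      "comment":     "no issues observed",
  })
  print("stored result", result_id)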

If we do end up using a Spider browser extension to facilitate widespread automatic testing, we must provide the tester with some configurable options:

  • limit tests by group/type;
  • disable extension completely.

Hardware interfaces

At the recent Mozilla QA summit meeting (2005/06/21), it was decided to invest in a small cluster of machines that would serve as a test farm, similar in concept to the collection of machines that currently perform builds for Tinderbox.

The test farm will be made up of the following machines:

  • Head Node (likely running Linux);
    • Linux Boxes (2);
    • Mac XServes (2);
    • Windows Boxes (2).

Software interfaces

Adding more machines won't do anything to ease the testing burden in and of itself. Indeed, in the short-term, it will simply add more system administration overhead.

This is where we hope to see the biggest payoff in terms of automation. A main Litmus daemon and web server will live on the head node. This daemon will be responsible for collating results as they come in. Depending on whether testing request scheduling proves feasible, the Litmus daemon will also be responsible for that scheduling.

The Spider test tool will need to be packaged as a browser extension to facilitate widespread adoption.

Communication protocols and interfaces

Since Litmus is designed primarily as a web tool, the main protocol of record will be HTTP.

The command-line interface will need to accept remote procedure calls in order to manage automation. Both XML-RPC and SOAP have been proposed for this.
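
Assuming XML-RPC is the protocol chosen, the receiving side could be as simple as the following sketch. The submit_result function and its in-memory result store are stand-ins for the real processing and database code.

  # Minimal sketch of an XML-RPC receiver on the Litmus server.
  from xmlrpc.server import SimpleXMLRPCServer

  RESULTS = []  # stand-in for the real results database

  def submit_result(result):
      """Accept one result dict from a remote tester or automation daemon."""
      RESULTS.append(result)
      return len(RESULTS)  # hypothetical result id

  server = SimpleXMLRPCServer(("0.0.0.0", 8080), allow_none=True)
  server.register_function(submit_result)
  server.serve_forever()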

System Features

Test Runs (was Replicate Testrunner functionality)

Description

Testrunner is a test run management system that works as an add-on over Bugzilla. More information can be found at the Testrunner web site.

Note: Testrunner's concept of test case management is somewhat limited, which is why I have referred to it instead as 'test run management' above. Litmus will have a somewhat more holistic concept of test case management. See below.

Priority

Replacing Testrunner for use in test runs is now the #1 priority for Litmus.

Functional requirements

Basic Functionality:

  • Run Tests interface
    • Test runs will replace Test Groups in the current interface;
    • Special test runs and test runs in progress will be highlighted and presented first;
    • Continuous (ongoing) Test Runs for certain Test Groups will also be displayed, e.g. Smoketests, BFTs.
  • Reporting
    • special Test Run reporting tools will be required. These tools will need to display certain key statistics (a sketch of the coverage calculation follows this list):
      • coverage (percentages);
      • testcases remaining to be run, perhaps presented as a testlist for running;
      • list of failures;
      • list of results with comments;
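
As referenced above, here is a rough sketch of how the key statistics for a test run could be computed. The shape of the result records (testcase id mapped to a status string) is an assumption for illustration, not the Litmus schema.

  # Key Test Run statistics: coverage, remaining test cases, failures.
  def run_statistics(testcases, results):
      """testcases: list of testcase ids in the run.
      results: dict mapping testcase id -> "PASSED" / "FAILED" / "NOT RUN"."""
      run = [t for t in testcases if results.get(t, "NOT RUN") != "NOT RUN"]
      remaining = [t for t in testcases
                   if results.get(t, "NOT RUN") == "NOT RUN"]
      failures = [t for t in run if results[t] == "FAILED"]
      coverage = 100.0 * len(run) / len(testcases) if testcases else 0.0
      return coverage, remaining, failures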



Admin Functionality:

  • tools to add/clone/modify/disable/delete the various testing-related entities: products, platforms, operating systems, branches, and locales.
  • add/clone/modify/disable/delete testcases;
  • add/clone/modify/disable/delete subgroups;
  • add/clone/modify/disable/delete test runs;
  • new searching tools to limit results by test runs;
  • new reporting tools for test runs. This includes automated reports for testdays;

Test Case Management

Description

Testrunner currently contains some metadata about test cases, and tracks results based on that metadata. However, Testrunner does not contain a copy (or even a link to) the test case itself. Updating test cases is a two-step procedure with no guarantee that both steps will be executed.

As much as possible, Litmus will act as a repository for test cases. This will allow for metadata to be associated directly with test cases. For external tests that cannot be brought into the repository, there will be sufficient information given to acquire the test case(s) from the remote source, e.g. download URL.

Priority

Since there does not currently exist a central repository for test cases, this feature has the highest priority. If we can get a test case management interface up quickly, we can ensure that all testers are running the exact same set of tests, and implement automation from there.

Functional requirements

Test case management requires the following (a rough data-model sketch follows the list):

  • test case storage:
    • for as many tests as possible, this will hopefully mean storing fully automated test cases in whatever syntax is appropriate for use with the Spider tool;
    • links to external test cases when they cannot be stored by the system, including access information;
    • full instructions for running tests that cannot be automated. Note: this is similar to the existing test case functionality in Testrunner;
  • version control for test cases, with trackable history and commentary;
  • linking of test results to individual test cases with version information;
  • ability to check out/download groups of test cases, based on test run, functional group, or platform;
  • full access control restrictions for the test case repository. Security-related test cases should only be visible/downloadable to those with sufficient privileges;
  • web-based administration (there will be some overlap with the Testrunner functionality outlined above):
    • add/modify/delete test cases;
    • add/modify/delete test runs;
    • add/modify/remove privileges for users;
    • ability to view recent test case activity (additions/updates/deletions);
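
The data-model sketch promised above is given here. The class and field names are guesses based on this document (versioned test cases, links to external tests, security flags, results tied to a specific version), not the eventual schema.

  # Rough data model for the test case repository.
  from dataclasses import dataclass, field

  @dataclass
  class TestCaseVersion:
      version: int
      steps: str              # full instructions, or a Spider-ready script
      external_url: str = ""  # set when the test lives outside the repository
      comment: str = ""       # commentary attached to this change

  @dataclass
  class TestCase:
      testcase_id: int
      summary: str
      subgroup: str
      security_related: bool = False                # drives access control
      history: list = field(default_factory=list)   # list of TestCaseVersion

      def latest(self):
          return self.history[-1] if self.history else None

  @dataclass
  class TestResult:
      testcase_id: int
      testcase_version: int   # links the result to an exact version
      status: str             # PASSED / FAILED / NOT RUN
      comment: str = ""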

Web Services (was Automated Testing)

Description

Some automated testing is already occurring using Bob Clary's Spider tool. Our goal with automated testing is three-fold:

  1. automate the automation: get the automated testing running continuously in an environment where it can be monitored, queried, and updated;
  2. automate as much regular testing as possible: this includes both smoketests and BFTs. Tests that cannot be run automatically should require as little interaction as possible, and this interaction must be standardized.
  3. create a framework for submitting/receiving automated test results that can be used by the current QA test farm, but that can also accommodate receiving test results from other sources. We will implement a well-defined web services API to accommodate this.

Priority

Once we have a central repository for test cases, we can begin designing automation tools to draw on that repository.

I understand that efforts to convert the existing smoketests and BFTs into a Spider-ready format are already under way.

We intend to get the Spider tool running automated tests for us internally first, then we can look at opening the Spider tool up (probably with some modifications) to the general testing community.

Functional requirements

There are three facets here. The first is the set of test automation processes/daemons that will run on the individual testing machines in the test farm. The second is the Spider browser extension that will actually be running the tests. The final piece is the test result collating process/daemon that will live on the main Litmus server. Rough sketches of the automation loop and of the collation pre-processing step follow their respective requirement lists below.

The automation processes must be:

  • able to run on all our tier 1 platforms: Windows, Mac, and Linux;
  • written to be as platform-agnostic as possible to minimize maintenance;
  • able to respond to remote queries for:
    • current status;
    • start/stop/restart/pause;
    • self-update;
    • automatic installation of new product builds;
    • process a specific test request;
  • maintain current state locally to allow for stop/restart/pause without affecting any testing in progress. This also means maintaining a list of testing requests that have already been run on the local testing machine to avoid duplication;
  • able to fail gracefully, e.g. during network interruptions. (Perhaps we want some default local test run to proceed in this case?)
  • able to send back testing results to the main processing/database server;
  • able to query the main server to get the latest testing requests off the request queue;
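
A sketch of the automation process's main loop, under the assumptions that requests are fetched over the web-services interface and that local state is kept in a small JSON file. The remote calls (get_next_request, report_result) and the state file format are illustrative only.

  # Per-machine automation process: poll for requests, run them, report back.
  import json, time

  STATE_FILE = "litmus-agent-state.json"

  def load_state():
      try:
          with open(STATE_FILE) as f:
              return json.load(f)
      except FileNotFoundError:
          return {"completed_requests": []}

  def save_state(state):
      with open(STATE_FILE, "w") as f:
          json.dump(state, f)

  def run_request(request):
      # Placeholder: hand the request's test list to the Spider extension.
      return {"request_id": request["id"], "status": "PASSED"}

  def main_loop(server):
      state = load_state()
      while True:
          try:
              request = server.get_next_request()   # hypothetical remote call
          except OSError:
              time.sleep(60)                        # fail gracefully on network errors
              continue
          if not request or request["id"] in state["completed_requests"]:
              time.sleep(60)
              continue
          result = run_request(request)
          server.report_result(result)              # hypothetical remote call
          state["completed_requests"].append(request["id"])
          save_state(state)                         # survives stop/restart/pause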

The Spider testing tool must be:

  • packaged as a browser extension, or somehow provide similar functionality;
  • updatable using the extensions update mechanism;
  • able to track testing progress, with the ability to resume a test list in progress after a crash;

The main test result collating process/daemon must be able to:

  • process incoming results (perhaps in parallel?);
  • weed out common errors at a pre-processing stage:
    • incomplete results;
    • invalid formatting of results (easy with a DTD);
  • automatically append information to test results that match certain criteria, e.g. known bugs;
  • send notifications of breakages (test, system, and network failures) as appropriate, and make this configurable.
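
The pre-processing step for the collating daemon might look like the sketch below. The required fields, valid statuses, and known-bug table are assumptions used for illustration.

  # Weed out incomplete or badly formed results; annotate known bugs.
  REQUIRED_FIELDS = ("testcase_id", "build_id", "platform", "status")
  VALID_STATUS = ("PASSED", "FAILED", "NOT RUN")
  KNOWN_BUGS = {1234: "bug 300000"}   # testcase id -> known bug, illustrative

  def preprocess(result):
      """Return (ok, result-or-error message)."""
      missing = [f for f in REQUIRED_FIELDS if f not in result]
      if missing:
          return False, "incomplete result, missing: %s" % ", ".join(missing)
      if result["status"] not in VALID_STATUS:
          return False, "invalid status: %r" % result["status"]
      bug = KNOWN_BUGS.get(result["testcase_id"])
      if bug:
          result["note"] = "matches known failure (%s)" % bug
      return True, result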

Reporting (Result Querying)

Description

Testing automation will generate an ongoing stream of test results. These results will be useless unless the proper tools are in place to query and compare them. This will address a current void in Testrunner, wherein there is no way to perform a head-to-head comparison between the results from two separate test runs. This makes it harder to spot regressions.

We also have the opportunity as we move forward to begin collecting (and reporting on) performance and defect data. This will allow us to create meaningful trend data.

Priority

Only some of the required reports are known at the time of writing. The various reports share a core set of functionality which can be put in place initially, and new or more complicated reports can be added over time.

The test run comparison reports will likely be the first to be implemented.

Functional requirements

The reporting interface will require the following features (a paging sketch follows this list):

  • proper limiting for the number of results returned on a single page. This should also be configurable with some appropriate upper bound. The user should be able to navigate through result sets that span more than a single page;
  • ability to limit results based on certain criteria;
  • ability to sort/reverse results based on certain criteria;
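
The paging sketch referenced above: a configurable page size capped at an upper bound, with optional sorting. Parameter names and the 500-result cap are arbitrary choices for the example.

  # Paged, optionally sorted view of a result set.
  MAX_PAGE_SIZE = 500   # assumed upper bound

  def page_results(results, page=1, page_size=25, sort_key=None, reverse=False):
      page_size = min(page_size, MAX_PAGE_SIZE)
      if sort_key is not None:
          results = sorted(results, key=sort_key, reverse=reverse)
      start = (page - 1) * page_size
      return results[start:start + page_size]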

The following specific reports are needed (a comparison sketch follows this list):

  • single test case: results from a single test case are marked-up for viewing;
  • test run: test case results from a single test run are marked-up and presented in synopsis form;
  • test case comparison: head-to-head comparison between two test case results, with differences highlighted;
  • test run comparison: synopsis views for two test runs are compared head-to-head, with differences highlighted;
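
The comparison sketch referenced above, for the test run comparison report: results for the same test case are lined up and only the differences are returned, which is where regressions show up. The input shape (testcase id mapped to status) is assumed.

  # Head-to-head comparison of two test runs.
  def compare_runs(run_a, run_b):
      differences = {}
      for testcase_id in sorted(set(run_a) | set(run_b)):
          a = run_a.get(testcase_id, "NOT RUN")
          b = run_b.get(testcase_id, "NOT RUN")
          if a != b:
              differences[testcase_id] = (a, b)   # e.g. ("PASSED", "FAILED") = regression
      return differences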

Testing Requests

Description

Build/release engineers need to be able to run (and re-run) specific lists of tests against certain builds/release candidates. Testrunner currently allows users to make testing requests for certain products/components, but this is a simple list of requests, i.e. the tests are not automated in any way.

Priority

Testing requests are not in the short-term critical path for Litmus. Once basic testing automation is running, the request interface can be developed and integrated with the rest of the tool.

Functional requirements

Testing requests need to have the following information associated with them:

  • product and version required, possibly specified via links for downloading;
  • lists of test cases sorted in the order in which they should be run;
  • submitter info (email, etc.);
  • submission time;
  • priority;
  • time after which the results are meaningless, i.e. if the request has not been run by time X, don't bother running it: mark it as "sunset" or some such, and move on;

The testing request system needs the following general functionality (a queueing sketch follows this list):

  • restricted access to a small subset of maintainers, QA staff, build/release engineers, and developers;
  • priority system for submitted requests, i.e. requests from maintainers trump requests from QA staff trump build...;
  • ability for maintainers to re-prioritize requests that are already in the queue, to force requests to run immediately, or to cancel requests;
  • allow users to modify or delete requests that they have already submitted, provided they have not yet been run;
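
The queueing sketch referenced above: higher-priority requests are handed out first, and requests past their sunset time are skipped. The priority scale (lower number = more urgent) and field names are assumptions.

  # Priority queue of testing requests with sunset handling.
  import heapq, time

  class RequestQueue:
      def __init__(self):
          self._heap = []
          self._counter = 0   # tie-breaker preserves submission order

      def submit(self, request):
          # request: dict with "priority" (lower = more urgent), "sunset"
          # (epoch seconds or None), plus product/test list/submitter info.
          heapq.heappush(self._heap, (request["priority"], self._counter, request))
          self._counter += 1

      def next_request(self, now=None):
          now = now or time.time()
          while self._heap:
              _, _, request = heapq.heappop(self._heap)
              if request.get("sunset") and request["sunset"] < now:
                  request["status"] = "sunset"   # too late to be useful; skip it
                  continue
              return request
          return None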

Other Nonfunctional Requirements

Performance requirements

There are several performance-related aspects to be considered.

The first aspect is the performance of the Litmus web front-end. This concern is partially addressed by the existing LVS cluster. If Litmus is designed with LVS in mind, one initial performance bottleneck will be pushed back. We also don't expect a very high degree of concurrent access for the system.

As we accumulate test results, we may reach a point in the future when the size of the results database becomes a limiting factor, and query speed becomes bogged down. To mitigate this, we should come up with a suitable data retention policy and consider archiving historical test data offline when it is no longer useful.

Another aspect to consider is the performance of the automation daemon with regard to turnaround time for test runs. Depending on the speed of the test machines, there will be a little bit of trial and error involved here in order to design test runs that can complete in a given amount of time under normal circumstances. We can tweak these test lists/runs based on the testing loads we end up seeing. It may also be necessary to tweak these lists on a per-platform basis.

Test cases should be run and monitored on the test machines with a suitable time limit. This time limit should be based on historical performance, and will serve to "time out" tests under abnormal circumstances. Of course, we won't actually have historical performance data to begin with, so again there will be an initial period of trial-and-error.
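
For illustration, a time limit derived from historical durations could be as simple as the sketch below; the 3x-median multiplier and the fallback default are arbitrary starting points, not measured values.

  # Derive a per-test time limit from historical run times.
  import statistics

  DEFAULT_TIMEOUT = 300   # seconds, used until history exists

  def time_limit(past_durations, multiplier=3.0):
      if not past_durations:
          return DEFAULT_TIMEOUT
      return multiplier * statistics.median(past_durations)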

Safety requirements

Due to the sensitive nature of some of the security-related test cases, there may be liability issues surrounding access control. See Security requirements below.

Security requirements

Assets here are:

  1. user information
  2. test cases
    1. security related
    2. not related to security
  3. test results
    1. for security related tests
    2. for tests not related to security
  4. notes/comments
    1. for test cases
      1. security related
      2. other
    2. for test results
      1. security related
      2. other

Are some assets missing?

The Bugzilla authentication model should be extensible for use with Litmus.

Confidentiality

Confidentiality should be assured for user information, security-related test cases, test results for security-related tests, and all notes for security-related tests.

Confidentiality of security-related test cases and results will be protected by using access control. Only assigned users will have access.

Security-related testcases and results can be invisible (or stubbed) for users with inadequate permissions.

Integrity

Integrity should be assured for all assets.

All assets should be changed only by authenticated users.

There should be a way to assure that results are submitted by the user they claim to come from and are not tampered with during submission or storage.

Digital signature?
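
One possible answer to the question above, sketched with a per-user shared-secret HMAC rather than full public-key signatures. This is only an illustration of what signing and verifying a submission could look like, not a design decision.

  # Sign and verify a result submission with a per-user shared secret.
  import hashlib, hmac, json

  def sign_result(result, secret):
      # secret: shared per-user key (bytes); result: dict being submitted.
      payload = json.dumps(result, sort_keys=True).encode("utf-8")
      return hmac.new(secret, payload, hashlib.sha256).hexdigest()

  def verify_result(result, signature, secret):
      return hmac.compare_digest(sign_result(result, secret), signature)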

Availability

Test cases not related to security, along with their corresponding results and notes, should be available.

Software quality attributes

Just like the software it is testing, Litmus is itself a software tool, subject to the same flaws and limitations.

Bugs can be filed against Litmus in Bugzilla using the product Webtools and the component Litmus.

It would also be nice to track some basic Litmus usage statistics, e.g. type and frequency of queries, but this is not a high priority.

Project documentation

All Litmus project documentation will reside under the Litmus hierarchy on the Mozilla wiki: http://wiki.mozilla.org/Litmus

Note: this may be migrated to MDC in the future.

User documentation

All Litmus user documentation will also reside under the Litmus hierarchy on the Mozilla wiki: http://wiki.mozilla.org/Litmus

Note: this may be migrated to MDC in the future.


--coop 08:00, 14 Jul 2005 (PDT)