Litmus:Design

Authentication

See the Litmus Authentication System Design Document.

Test Runs (Was Replicate Testrunner Functionality)

The most important concept here is that of a test run. At the fundamental level, a test run is simply a collection of test results that share a common set of criteria. In practice, a test run is usually made up of a series of tests (a test group/list) for a given product and set of platform-specific build IDs, and will generally also be delimited by time.

In Testrunner, existing test runs or lists are cloned to create new test runs. Each test result in the test run is created in advance from the corresponding testcase, and that result is updated when the result is submitted. Aside from the overhead of creating all the results for each run at the outset, this also precludes having multiple results for the same testcase in the same test run, i.e. a new test run must be created for each separate tester.

With Litmus, we have a chance to make test runs both more lightweight and (hopefully) more powerful.

We will implement each test run as a set of search criteria. Results that match the search criteria are included for display in any test run reports. Test runs can be created specifically for certain events, e.g. a testday. This also allows us to create test runs after-the-fact that can automatically be matched up against existing reports in the database.

If not limited by date, test runs can also be ongoing. This will allow us to have long-lived test runs that can take the place of the test groups that are currently used to aggregate testing in Litmus. Used in this context, test runs allow the Mozilla QA staff to focus community testing efforts.

This meshes well with the current search and reporting structure of Litmus, although this structure will also need to be expanded to include knowledge of test runs.
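
As a rough illustration (a sketch only, not final design), membership in a run is computed at query time rather than stored per result. The column names on the result class and the accessors on the run object below are placeholders for whatever the final schema provides:

# Conceptual sketch: a test run stores its criteria, and any result
# matching those criteria belongs to the run. Class::DBI's stock
# search() covers the simple, non-time-limited case.
my $run = Litmus::DB::TestRun->retrieve($test_run_id);
my @results = Litmus::DB::Testresult->search(
    product_id => $run->product_id,
    branch_id  => $run->branch_id,
);
# A time-delimited run would further restrict results to those whose
# submission timestamp falls between the run's start and finish times.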

Design

Database Changes

  • create test_runs table with the following schema:
CREATE TABLE `test_runs` (
  `test_run_id` int(11) NOT NULL auto_increment,
  `name` varchar(64) collate latin1_bin NOT NULL default '',
  `description` varchar(255) collate latin1_bin default NULL,
  `last_updated` timestamp NOT NULL default CURRENT_TIMESTAMP on update CURRENT_TIMESTAMP,
  `creation_date` timestamp NOT NULL default '0000-00-00 00:00:00',
  `start_timestamp` timestamp NOT NULL default '0000-00-00 00:00:00',
  `finish_timestamp` timestamp NOT NULL default '0000-00-00 00:00:00',
  `enabled` tinyint(1) NOT NULL default '1',
  `product_id` tinyint(4) NOT NULL default '0',
  `branch_id` smallint(6) NOT NULL default '0',
  `author_id` int(11) NOT NULL default '0',
  `recommended` tinyint(1) NOT NULL default '0',
  `version` smallint(6) NOT NULL default '1',
  PRIMARY KEY  (`test_run_id`),
  KEY `name` (`name`),
  KEY `description` (`description`),
  KEY `start_timestamp` (`start_timestamp`),
  KEY `finish_timestamp` (`finish_timestamp`),
  KEY `enabled` (`enabled`),
  KEY `product_id` (`product_id`),
  KEY `branch_id` (`branch_id`),
  KEY `creation_date` (`creation_date`),
  KEY `last_updated` (`last_updated`),
  KEY `author_id` (`author_id`),
  KEY `recommended` (`recommended`),
  KEY `version` (`version`)
);
  • create the following tables used for defining test_runs:
CREATE TABLE `test_run_testgroups` (
  `test_run_id` int(11) NOT NULL default '0',
  `testgroup_id` smallint(6) NOT NULL default '0',
  `sort_order` smallint(6) NOT NULL default '1',
  PRIMARY KEY  (`test_run_id`,`testgroup_id`),
  KEY `sort_order` (`sort_order`)
);

CREATE TABLE `test_run_criteria` (
  `test_run_id` int(11) NOT NULL default '0',
  `build_id` int(10) unsigned NOT NULL default '0',
  `platform_id` smallint(6) NOT NULL default '0',
  `opsys_id` smallint(6) NOT NULL default '0',
  PRIMARY KEY  (`test_run_id`,`build_id`,`platform_id`,`opsys_id`)
);
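
To make the criteria concrete, here is a minimal sketch, in the Class::DBI style Litmus already uses, of the query that would gather a run's results. It assumes test_results will carry build_id, platform_id, opsys_id, and a submission_time column once the schema changes described below land; the method name and exact joins are illustrative:

package Litmus::DB::TestRun;

# Hypothetical query: all results submitted during the run's time
# window that match one of the run's build/platform/opsys criteria rows.
__PACKAGE__->set_sql(MatchingResults => qq{
    SELECT tr.*
    FROM test_results tr, test_runs run, test_run_criteria c
    WHERE run.test_run_id = ?
      AND c.test_run_id = run.test_run_id
      AND tr.build_id = c.build_id
      AND tr.platform_id = c.platform_id
      AND tr.opsys_id = c.opsys_id
      AND tr.submission_time BETWEEN run.start_timestamp
                                 AND run.finish_timestamp
});

# The resulting statement handle can then be inflated into result
# objects, e.g. via Litmus::DB::Testresult->sth_to_objects($sth).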

Perl Modules

  • Add the following new Perl module (a sketch follows):
    • Litmus::DB::TestRun.pm
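
A minimal sketch of what the new module might contain, assuming Litmus's usual Class::DBI setup: a Litmus::DBI base class, existing Product/Branch/User classes, and a small Litmus::DB::TestRunTestgroup class wrapping the join table (all of which are assumptions here):

package Litmus::DB::TestRun;
use strict;
use base 'Litmus::DBI';

Litmus::DB::TestRun->table('test_runs');
Litmus::DB::TestRun->columns(All => qw(
    test_run_id name description last_updated creation_date
    start_timestamp finish_timestamp enabled product_id
    branch_id author_id recommended version
));

# Foreign keys map onto existing Litmus classes.
Litmus::DB::TestRun->has_a(product_id => 'Litmus::DB::Product');
Litmus::DB::TestRun->has_a(branch_id  => 'Litmus::DB::Branch');
Litmus::DB::TestRun->has_a(author_id  => 'Litmus::DB::User');

# Many-to-many to testgroups through test_run_testgroups, using
# Class::DBI's mapping form of has_many.
Litmus::DB::TestRun->has_many(
    testgroups => ['Litmus::DB::TestRunTestgroup' => 'testgroup_id']
);

1;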

Test Case Management

Design

I'm going to structure this design as a series of ordered tasks so that I can easily check elements off as I finish them.

Database Changes

Update database schema to allow for more complex testcase relationships (bug 323768). This will require the following schema changes:

  • normalize platforms table
    • create platform_products join table
    • drop product ref from platforms table
    • update test_results with new platform info
  • normalize opsys table
    • update test_results with new opsys info
  • normalize subgroups table
    • create subgroup_testgroups join table
    • drop testgroup ref from subgroups table
  • normalize testcases (tests) table
    • rename table from tests to testcases (not critical, but I've wanted to do this for ages, for clarity's sake)
    • replace status_id with simple boolean enabled
    • drop test_status_lookup table
  • further normalize the testcases table
    • create testcase_subgroups table
    • drop subgroup ref from testcases table
  • add product ref to testcases and subgroups tables
  • standardize build ID to 10 digits

Perl modules

  • Update affected Perl modules to reflect the database schema changes, preserving existing functionality (see the sketch after this list). Affected modules:
    • Litmus::DB::Testcase.pm
    • Litmus::DB::Testgroup.pm
    • Litmus::DB::Subgroup.pm
    • Litmus::DB::Testresult.pm
    • Litmus::DB::Test.pm
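
For example, once the testcase_subgroups and subgroup_testgroups join tables exist, the new many-to-many linkages might be declared with Class::DBI's mapping form of has_many. The per-join-table mapping classes named below are assumptions:

package Litmus::DB::Testcase;

# A testcase can now belong to many subgroups, linked through the new
# testcase_subgroups join table. The third element is the has_a
# accessor on the mapping class that points at the far object.
__PACKAGE__->has_many(
    subgroups => ['Litmus::DB::TestcaseSubgroup' => 'subgroup_id']
);

package Litmus::DB::Subgroup;

# Likewise, a subgroup can now belong to many testgroups.
__PACKAGE__->has_many(
    testgroups => ['Litmus::DB::SubgroupTestgroup' => 'testgroup_id']
);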

One of the problems with the current Litmus Class::DBI-based design is that fully populated entity objects (e.g. a testcase) are often looked up from the database to satisfy trivial needs (e.g. getting a count of the number of testcases in a subgroup). One particularly egregious example is the testgroup/subgroup selection page in the Run Tests interface and its Percentage Coverage functions. If any more than a small number of testgroups are present, the page takes a very long time to load; e.g. for a product with 7 testgroups (Firefox), the page takes 15-20 seconds to load.

Given the new many-to-many relationships, it makes sense to replace some of these slow, generic lookups with more targeted methods for each class.

Since Class::DBI does not have a very good mechanism for handling many-to-many relationships (join tables), it also makes sense to create methods in each class that perform the necessary linkages. Here is a (hopefully) complete list of the linkages that need to occur, and the class in which each should live (a sketch of one such method follows the list):

  • Litmus::DB::Testgroup
    • sql method EnabledByBranch (interim step before Test Runs)
    • sql method EnabledBySubgroup
    • replace existing coverage functions with targeted functions
  • Litmus::DB::Subgroup
    • sql method EnabledByTestgroup
    • sql method EnabledByTestcase
    • sql method NumEnabledTestcases (returns simple count)
    • replace existing coverage functions with targeted functions
  • Litmus::DB::Testcase
    • sql method EnabledBySubgroup
    • sql method CommunityEnabledBySubgroup
    • sql method MatchesByName (for full-text matching by summary)
    • sql method FullTextMatches (for full-text matching by all text fields)
  • Litmus::DB::Platform
    • sql method ByProduct
  • Litmus::DB::Testresult
    • sql method Completed
    • sql method CompletedByUser

I will also be taking this opportunity to normalize as much of the nomenclature in the Perl modules as I can. The focus will be on accurate naming that agrees with the database nomenclature, which will require renaming some key fields. Many Class::DBI field aliases will be removed in favor of the original database field names; I feel the proliferation of variable names is unnecessary.

CGI Scripts

Management tools are the big missing piece right now; QA staff are reliant on database admins to get new testcases added to Litmus. Our first step is to get management interface scripts in place for each of the following entities:

  • testcases
  • subgroups
  • testgroups

A quick note on nomenclature

These scripts should be named to match the titles used in the interface. I am suggesting "manage_xxx.cgi" for each. This will hopefully help stem the proliferation of script names in Litmus that don't match the intended functionality of the script.

What does management entail?

Management implies 4 different functions:

  • add a new entity
  • edit an existing entity
  • clone an existing entity
  • delete an entity

The same interface can be used for all the management functions since the fields will be the same in each case. Editing of fields can be disabled when the user is not in edit mode.

AJAX will be used to make the interface interactive and avoid the previous reliance on static JavaScript arrays.

For subgroups and testgroups, the interface will also include a component to reorder testcases within the subgroup, or subgroups within the testgroup.
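
A skeleton of what such a manage_xxx.cgi script might look like; the mode parameter, handler names, and default behavior are illustrative, not existing Litmus code:

#!/usr/bin/perl -w
use strict;
use CGI;

my $q = CGI->new;
print $q->header('text/html');

# The same form serves all four functions: clone simply pre-fills the
# add form from an existing entity, and fields are read-only outside
# of edit mode.
my %actions = (
    add    => sub { render_form($q, undef) },
    edit   => sub { render_form($q, $q->param('id')) },
    clone  => sub { render_form($q, $q->param('id'), clone => 1) },
    delete => sub { delete_entity($q, $q->param('id')) },
);

my $mode = $q->param('mode') || 'add';
($actions{$mode} || $actions{add})->();

sub render_form   { }    # stub: populate and display the shared form
sub delete_entity { }    # stub: remove the entity after confirmation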

HTML Templates and CSS

JavaScript Libraries

We will be using MochiKit to provide the AJAX functionality for the new management interfaces.

Web Services (was Automated Testing)

See Litmus:Web_Services for Web Services design.

Reporting (Result Querying)

High-Level Design

Access to reporting/querying

The Litmus start page will display a small table of recent test results. There will be links from this display that will take the user to a full reporting interface. There will also be links from the full reporting interface to access other parts of Litmus, including an option to return to the start page.

Tabular display

Much like Bugzilla, we are trying to display as much useful information as possible in a small space, hopefully without overwhelming the user.

The basic results display for querying will be a tabular display of all the relevant results that match the user's query. The tabular display will have the following basic layout:

Date | Product | Platform | Test #/Name | Status | State | Branch

The Test #/Name field will contain the shortest meaningful descriptor for a given test. To borrow some useful functionality from Tinderbox, the test name will be clickable; when clicked, a floating popup will appear containing a longer description of the test, as well as any notes associated with it. A link from this popup will then take the user to the full result display for that single test.

Sorting and limiting search results

In the results display, each column heading will be clickable. Clicking the heading will sort the results by that column; clicking it again will sort them in reverse order.

At the bottom of the results display will be a query form that the user can use to limit their search results by any of the fields in the display. Each field will have a drop-down selection list that will either be prepopulated from the database or be a static list for infrequently changing fields.

Infrequently changing fields, and the values associated with them:

  • Product: Firefox, Thunderbird, Seamonkey
  • Platform: Windows, MacOS, Linux
  • Status: PASS/FAIL/UNTESTED
  • State: DISABLED/?
  • Date: (Results in the) Last Day, Last 2 Days, Last Week, Last 2 Weeks

Dynamic lists:

  • Test#/Name
  • Branch

The query form will contain sort controls, with the ability to toggle the sort between ascending and descending.

A text comparison field will allow the user to limit their query by a text-based match. An associated comparison type list will allow the user to select whether they want an exact or a partial match.

All of the above query form elements will be usable together, i.e. a user can select limiting criteria based on field, sort their results, and perform text-based matching in the same query.
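
A minimal sketch of folding the form elements into a single query. The parameter names, the sortable-column whitelist, the table aliases, and the join column are all assumptions for illustration:

use strict;
use CGI;

my $q = CGI->new;

# Whitelist of sortable columns; anything else falls back to the date.
my %sortable = map { $_ => 1 }
    qw(submission_time product_id platform_id status branch_id);

# tr.testcase_id = tc.testcase_id is an assumed join column name.
my @where = ('tr.testcase_id = tc.testcase_id');
my @bind;

if (my $product_id = $q->param('product_id')) {
    push @where, 'tr.product_id = ?';
    push @bind,  $product_id;
}

my $match_text = $q->param('match_text') || '';
if ($match_text ne '') {
    my $match_type = $q->param('match_type') || 'contains';
    if ($match_type eq 'exact') {
        push @where, 'tc.summary = ?';
        push @bind,  $match_text;
    } else {
        # partial match
        push @where, 'tc.summary LIKE ?';
        push @bind,  '%' . $match_text . '%';
    }
}

my $sort_field = $q->param('sort_field') || '';
my $order = $sortable{$sort_field} ? $sort_field : 'submission_time';
my $dir   = ($q->param('sort_dir') || '') eq 'desc' ? 'DESC' : 'ASC';

my $sql = 'SELECT tr.* FROM test_results tr, testcases tc WHERE '
        . join(' AND ', @where)
        . " ORDER BY tr.$order $dir";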

Displaying a single result

Clicking on the link to display a single test result will take a user to a new page where the complete details of that test result are displayed.

For each result parameter, there will be a generated link that will take the user to a display of test results that share the same parameter value, e.g. all results for the same product, all results for a given branch, etc.

There will also be a special link to display all the test results for the test run to which this result belongs.

The single result display will also allow for the addition of notes/comments to that testing result. Note: this will likely not be possible until proper authentication is in place.

Testcase Tagging

ref: bug 375987

Adding new tags

Only Litmus admins will be able to add tags to any testcase; product admins will be able to tag testcases for their own product only. This will be enforced via a config switch so that we could turn on tagging for all users in the future should we choose to.

Here are the various scenarios under which admin users will be able to enter tags:

  1. Adding a testcase: provide a comma-delimited list of tags to apply to the testcase. Existing tags will be mapped and new tags will be added.
  2. Editing a testcase: existing tags will be displayed, and new tags can be added. This will be done via AJAX, with the page updated in place.
  3. Search testcases: ability to add a tag for all/some returned search results. Will need checkboxes to enable selection/exclusion of individual testcases within the list.

The interface for all these options will only be visible to admin users.
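
A sketch of scenario 1 above, assuming a hypothetical pair of tag tables (tags and testcase_tags) with matching Class::DBI classes; none of these names are part of the current schema:

# Apply a comma-delimited tag list to a testcase: existing tags are
# mapped, new tags are added (Class::DBI's find_or_create covers both).
sub update_tags {
    my ($testcase, $tag_list) = @_;
    foreach my $name (split /\s*,\s*/, $tag_list) {
        next if $name eq '';
        my $tag = Litmus::DB::Tag->find_or_create({ tag_name => $name });
        Litmus::DB::TestcaseTag->find_or_create({
            testcase_id => $testcase->testcase_id,
            tag_id      => $tag->tag_id,
        });
    }
}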

Tag search

An option to search by tag will be added to the View/Search Testcases page. There will be an exact/regexp toggle for the search. A short list of popular tags will be provided, and each tag in the list will be linked to display the list of testcases with that tag. There will be an option to show a page containing all available tags. Each tag will be similarly linked as above.

Tags will also be listed on the individual testcase display pages, and will also be linked as above.

Testing Requests (future)

TBD

Automation Control (future)

TBD