Litmus:Design
Authentication
Test Runs (Was Replicate Testrunner Functionality)
The most important concept here is that of a test run. At the fundamental level, a test run is simply a collection of test results that share a common set of criteria. In common usage, a test run is usually made up of a series of tests (a test group/list) for a given product and set of platform-specific build IDs, and will also generally be delimited by time.
In Testrunner, existing test runs or lists are cloned to create new test runs. Each test result in the test run is created in advance from the corresponding testcase, and that result is updated when the result is submitted. Aside from the overhead of creating all the results for each run at the outset, this also precludes having multiple results for the same testcase in the same test run, i.e. a new test run must be created for each separate tester.
With Litmus, we have a chance to make test runs both more lightweight and (hopefully) more powerful.
We will implement each test run as a set of search criteria. Results that match the search criteria are included for display in any test run reports. Test runs can be created specifically for certain events, e.g. a testday. This also allows us to create test runs after-the-fact that can automatically be matched up against existing reports in the database.
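The criteria-matching idea above can be sketched as follows. This is a hypothetical illustration, not the actual Litmus API: the TestRun class, its field names, and the result dictionaries are all invented for the example. A run with no end date models an ongoing run.

```python
# Hypothetical sketch: a test run defined as stored search criteria rather
# than a pre-created set of results. All names here are illustrative.
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class TestRun:
    product: str
    branch: str
    start: Optional[datetime] = None   # None => no lower bound
    end: Optional[datetime] = None     # None => ongoing run

    def matches(self, result: dict) -> bool:
        """A result belongs to the run iff it satisfies every criterion."""
        if result["product"] != self.product:
            return False
        if result["branch"] != self.branch:
            return False
        ts = result["timestamp"]
        if self.start and ts < self.start:
            return False
        if self.end and ts > self.end:
            return False
        return True

# Existing results in the database can be matched up after the fact:
run = TestRun(product="Firefox", branch="Trunk",
              start=datetime(2006, 3, 1), end=datetime(2006, 3, 2))
results = [
    {"product": "Firefox", "branch": "Trunk",
     "timestamp": datetime(2006, 3, 1, 14, 0)},
    {"product": "Thunderbird", "branch": "Trunk",
     "timestamp": datetime(2006, 3, 1, 15, 0)},
]
in_run = [r for r in results if run.matches(r)]
```

Because a run is just criteria, creating one is cheap, and the same stored result can fall into any number of runs.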
If not limited by date, test runs can also be ongoing. This will allow us to have long-lived test runs that can take the place of the test groups that are currently used to aggregate testing in Litmus.
This meshes well with the current search and reporting structure of Litmus.
Design
I'm going to structure this design as a series of ordered tasks so that I can easily check elements off as I finish them.
Database Changes
- Update database schema to allow for more complex testcase relationships (bug 323768). This will require the following schema changes:
- normalize platforms table
- create platform_products join table
- drop product ref from platforms table
- update test_results with new platform info
 
- normalize opsys table
- update test_results with new opsys info
 
- normalize subgroups table
- create subgroup_testgroups join table
- drop testgroup ref from subgroups table
 
- normalize testcases (tests) table
- rename table from tests to testcases (not critical, but I've wanted to do this for ages, for clarity's sake)
- replace status_id with simple boolean enabled
- drop test_status_lookup table
 
- normalize test_results table
- create test_subgroups table
- drop subgroup ref from tests table
- add product ref to tests table
 
 
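To illustrate the first normalization above (platforms decoupled from products via a platform_products join table), here is a small sketch using SQLite for brevity. The table and column names are assumptions for the example and may differ from the final Litmus schema.

```python
# Illustrative sketch of the platforms/platform_products normalization.
# SQLite is used for convenience; names are assumed, not the real schema.
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE products  (product_id INTEGER PRIMARY KEY, name TEXT);
-- platforms no longer carry a product ref ...
CREATE TABLE platforms (platform_id INTEGER PRIMARY KEY, name TEXT);
-- ... the many-to-many relationship lives in a join table instead
CREATE TABLE platform_products (
    platform_id INTEGER REFERENCES platforms(platform_id),
    product_id  INTEGER REFERENCES products(product_id),
    PRIMARY KEY (platform_id, product_id)
);
""")
db.executemany("INSERT INTO products VALUES (?, ?)",
               [(1, "Firefox"), (2, "Thunderbird")])
db.execute("INSERT INTO platforms VALUES (1, 'Windows')")
# One platform row can now be shared by several products:
db.executemany("INSERT INTO platform_products VALUES (?, ?)",
               [(1, 1), (1, 2)])

rows = db.execute("""
    SELECT pr.name FROM products pr
    JOIN platform_products pp ON pp.product_id = pr.product_id
    WHERE pp.platform_id = 1 ORDER BY pr.name
""").fetchall()
```

The same join-table pattern applies to subgroup_testgroups: the child table loses its single parent ref, and membership moves into the join table.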
Perl modules
- Update affected Perl modules and CGI scripts to reflect the database schema changes, preserving existing functionality. Affected modules:
- Litmus::DB::Test.pm
 
CGI Scripts
HTML Templates and CSS
JavaScript Libraries
Test Case Management
TBD
Web Services (was Automated Testing)
TBD
Reporting (Result Querying)
High-Level Design
Access to reporting/querying
The Litmus start page will display a small table of recent test results. There will be links from this display that will take the user to a full reporting interface. There will also be links from the full reporting interface to access other parts of Litmus, including an option to return to the start page.
Tabular display
Much like Bugzilla, we are trying to display as much useful information as possible in a small space, hopefully without overwhelming the user.
The basic results display for querying will be a tabular display of all the relevant results that match the user's query. The tabular display will have the following basic layout:
| Date | Product | Platform | Test #/Name | Status | State | Branch | 
The Test #/Name field will contain the shortest meaningful descriptor for a given test. To borrow some useful functionality from Tinderbox, the test name will be clickable, and when clicked, a floating popup will appear that will contain a longer description of the test, as well as any notes associated with the test. A link from this popup will take the user to the full result display for that single test.
Sorting and limiting search results
In the results display above, each column heading will be clickable. Clicking the link will cause the results to be sorted by the relevant column. Clicking the link again will cause the results to be sorted in reverse order by the same column.
At the bottom of the results display will be a query form that the user can use to limit their search results by all the fields in the display. Each field will have a drop-down selection list that will either be prepopulated from the database, or a static list for infrequently changing fields.
Infrequently changing fields, and the values associated with them:
- Product: Firefox, Thunderbird, Seamonkey
- Platform: Windows, MacOS, Linux
- Status: PASS/FAIL/UNTESTED
- State: DISABLED/?
- Date: (Results in the) Last Day, Last 2 Days, Last Week, Last 2 Weeks
Dynamic lists:
- Test#/Name
- Branch
The query form will contain sort controls, with the ability to toggle the sort between ascending and descending.
A text comparison field will allow the user to limit their query by a text-based match. An associated comparison type list will allow the user to select whether they want an exact or a partial match.
All of the above query form elements will be usable together, i.e. a user can select limiting criteria based on field, can sort their results, and can also perform text-based matching in the same query.
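How the form's elements might combine into a single parameterized query can be sketched as follows. The function, field names, and the test_results/summary columns are assumptions for illustration only; the real CGI scripts would build the query from the submitted form values.

```python
# Hypothetical sketch: combining field limits, a sort, and a text match
# into one parameterized SQL query. Names are illustrative, not Litmus code.
def build_query(limits=None, sort_field="created", sort_desc=True,
                text=None, exact=False):
    """Return (sql, params) for a combined limit/sort/text-match query."""
    where, params = [], []
    # Drop-down field limits become equality tests:
    for field, value in (limits or {}).items():
        where.append(f"{field} = ?")
        params.append(value)
    # The text comparison field becomes an exact or partial match:
    if text is not None:
        if exact:
            where.append("summary = ?")
            params.append(text)
        else:
            where.append("summary LIKE ?")
            params.append(f"%{text}%")
    sql = "SELECT * FROM test_results"
    if where:
        sql += " WHERE " + " AND ".join(where)
    sql += f" ORDER BY {sort_field} {'DESC' if sort_desc else 'ASC'}"
    return sql, params

sql, params = build_query(limits={"product": "Firefox", "status": "FAIL"},
                          text="crash", exact=False)
```

Using bound parameters throughout keeps user-supplied values out of the SQL string itself.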
Displaying a single result
Clicking on the link to display a single test result will take a user to a new page where the complete details of that test result are displayed.
For each result parameter, there will be a generated link that will take the user to a display of test results that share the same parameter value, e.g. all results for the same product, all results for a given branch, etc.
There will also be a special link to display all the test results for the test run to which this result belongs.
The single result display will also allow for the addition of notes/comments to that testing result. Note: this will likely not be possible until proper authentication is in place.
Testing Requests (future)
TBD
Automation Control (future)
TBD