QA/TDAI/Projects

These are the areas where we could use your help. The projects are listed in no particular order; feel free to jump into #qa on irc.mozilla.org if you're interested in helping out and have questions.

  • Find Clint Talbert (IRC: ctalbert) (email: ctalbert at mozilla dot com)

-- or --

  • Find Joel Maher (IRC: jmaher) (email: jmaher at mozilla dot com)

Tests that Need to Be Written

There are hundreds of bugs in our backlog that urgently need automated test cases. Please feel free to jump right in and start writing test cases for them. We've collected the highest-priority bugs into hot lists for each area of the Mozilla platform:

Mobile

Help us with Fennec by creating tests for specific Fennec features:

Firefox Front End

  • Create performance testing frameworks to help us measure the responsiveness of various user actions using Mozmill:
    • How long does it take to open the 100th tab compared to opening the same site as the only tab?
    • How long does it take for the awesome bar to show a bookmarked result if you have 1000 entries in your places database? 10,000 entries in the database?
    • How long does it take if the result you search for is not bookmarked?
    • etc...

Test Suite Integration

There are several AJAX frameworks that have excellent test tools. We'd like to import those test suites into our mochitest test harness. Information on how to do this work can be found here. Currently we would like to get the following test suites integrated:

Test Results Comparison Reports

Update: Some progress has been made, and the system is up and running here. The description is here.

Currently there is no way to compare test results between runs. We need to store test results more reliably so that we can compare data and see the differences between one run and another; this would help us track tests that fail randomly, spot large numbers of failures across different platforms, and so on.

The solution is to set up a database that stores the test failures from each run, preferably with a web interface for viewing them.

The idea is to have a results server: relevant data from each run's log files will be parsed and sent to the server as JSON. Queries can then be built on the results and viewed on a webpage or as an XML feed.

To start off, we will:

  • Write a log parser for xpcshell and reftest and upload the data to CouchDB (a sketch follows this list).
  • Once small sets of data are in there, we need to look at queries.
  • We also need to be able to run the scripts automatically during builds and report the status (working with the build team).
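
As a minimal sketch of the first step, here is one way the log parser could look. It assumes simplified harness markers (TEST-PASS, TEST-UNEXPECTED-FAIL, TEST-KNOWN-FAIL) on lines of the form "MARKER | test name | message"; the exact marker set and line layout are assumptions for illustration, not the full reftest/xpcshell log grammar.

 import re
 from collections import defaultdict

 # Hypothetical line layout: "TEST-PASS | path/to/test | message".
 LINE = re.compile(r"^(TEST-PASS|TEST-UNEXPECTED-FAIL|TEST-KNOWN-FAIL) \| ([^|]+?) \|(.*)$")
 RESULT_KEY = {"TEST-PASS": "pass",
               "TEST-UNEXPECTED-FAIL": "fail",
               "TEST-KNOWN-FAIL": "todo"}

 def parse_log(path):
     """Collapse a harness log into per-test pass/fail/todo counts plus notes."""
     tests = defaultdict(lambda: {"pass": 0, "fail": 0, "todo": 0, "notes": ""})
     with open(path) as log:
         for line in log:
             match = LINE.match(line)
             if not match:
                 continue  # ignore log chatter that is not a test result
             marker, name, message = match.groups()
             entry = tests[name.strip()]
             entry[RESULT_KEY[marker]] += 1
             if marker == "TEST-UNEXPECTED-FAIL":
                 # The reason for the failure ends up in the notes field.
                 entry["notes"] += message.strip() + "; "
     return dict(tests)

The output of parse_log() maps directly onto the "tests" object in the JSON format shown further down.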

Some ideas I have for views (queries) that we will need:

  • number of pass, fail, and todo tests (e.g. 1980, 6, 122) for a given test run, assuming one document per run (e.g. reftest-buildid-date); a sketch of this view follows the list
  • a method to display results for any given test pass; we would like to display results for any given date (or range) as well as compare the most recent run to any given date
  • find tests which are not run in a given test pass: get a list of all tests found in previous runs, cross-reference it with the current test pass, and return the tests missing from the current pass
  • look at the history of a given set of tests (a single test, tests matching a regex, or tests of a given type) over all runs in a given time period
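
Here is a sketch of the first of those views, written against the JSON format shown below. CouchDB views are JavaScript map functions stored in a _design document; the design document id, view name, server URL, and database name here are all placeholders.

 import json
 import urllib.request

 design_doc = {
     "_id": "_design/results",
     "views": {
         # One row per test run, keyed by product/testtype/build, with the
         # pass/fail/todo totals as the value.
         "totals_by_build": {
             "map": """
 function(doc) {
   for (var id in doc) {
     if (id.charAt(0) === '_') continue;  // skip CouchDB metadata (_id, _rev)
     var run = doc[id], pass = 0, fail = 0, todo = 0;
     for (var name in run.tests) {
       pass += run.tests[name].pass;
       fail += run.tests[name].fail;
       todo += run.tests[name].todo;
     }
     emit([run.product, run.testtype, run.build],
          {pass: pass, fail: fail, todo: todo});
   }
 }
 """
         }
     }
 }

 req = urllib.request.Request(
     "http://localhost:5984/logs-1/_design/results",  # placeholder server/db
     data=json.dumps(design_doc).encode("utf-8"),
     headers={"Content-Type": "application/json"},
     method="PUT",
 )
 urllib.request.urlopen(req)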


Here is the JSON format that we will use:

 {"test id": {
   "build":2009070112345,
   "product":"fennec",
   "os":"maemo",
   "machineid":"nokia-n810-02",
   "testtype":"fennec reftest",
   "timestamp":"20090701153600",
   "tests": {
     "testname1":{"pass":3,"fail":0,"todo":1,"notes":"reason for failure"},
     "testname2":{"pass":12,"fail":0,"todo":2,"notes":"more notes"},
     ...
   }
  }
 }

For each log file we parse, we will add a document in the above JSON format. This will allow us to have multiple documents for a given run (if the suites are run in parallel) and a collection of different builds and dates to compare against. All test results will live in a single database as documents so we can share the same view code.

The notes field will be a collection of the notes gathered when we have a failure for the specific test file.
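
Continuing the parser sketch above, uploading then amounts to one HTTP POST per parsed log; the server URL and database name are again placeholders.

 import json
 import urllib.request

 def upload_results(test_id, run_info, tests,
                    server="http://localhost:5984", db="logs-1"):
     """POST one document in the format above; run_info carries the
     build/product/os/machineid/testtype/timestamp fields."""
     doc = {test_id: dict(run_info, tests=tests)}
     req = urllib.request.Request(
         "%s/%s" % (server, db),
         data=json.dumps(doc).encode("utf-8"),
         headers={"Content-Type": "application/json"},
     )
     with urllib.request.urlopen(req) as resp:
         # CouchDB answers with {"ok": true, "id": ..., "rev": ...}
         return json.load(resp)

Because each log file becomes its own POST, parallel runs naturally land as separate documents that the views can still aggregate.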


The unknowns that we have are:

  • test id (just need to standardize the format)


One issue found while using CouchDB: the Futon view produces results (http://67.205.203.107:5984/_utils/database.html?logs-1/_design%2Flogs-1%2Fsummary), but the HTTP API appears to be missing information (http://67.205.203.107:5984/logs-1/_design/logs-1/_view/summary). This might be related to how we have the database or the _design document set up.

Per-Patch Code Coverage Analysis

This project will aid developers making large, substantial changes to the codebase. If you change a large piece of code or a substantial behavior, you can use code coverage metrics gathered during the unit test runs to see whether your change has any unintended effects and whether you are hitting all the cases in your code. This system would do the following:

  • Have a way to add --coverage to the test harnesses (mochitest, reftest, xpcshell), telling them to gather code coverage data (see the sketch after this list). Dependency: code coverage tools must be installed.
  • Enable developers to submit a patch.
  • Run the unit tests once with the patch and once without it, then diff the two sets of gathered data with a tool that shows what differed between the two code paths.
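
A rough sketch of what the --coverage switch could look like in a harness wrapper, assuming a gcov-instrumented build with lcov available; the option name, object directory, and placement of the lcov calls are assumptions, not the real mochitest/reftest/xpcshell runner code.

 import argparse
 import subprocess

 def run_suite():
     """Placeholder for the real harness entry point."""

 parser = argparse.ArgumentParser()
 parser.add_argument("--coverage", action="store_true",
                     help="gather code coverage data while the suite runs")
 args = parser.parse_args()

 if args.coverage:
     # Zero the gcov counters so the capture below reflects only this run.
     subprocess.check_call(["lcov", "--zerocounters", "--directory", "objdir"])

 run_suite()

 if args.coverage:
     # Collect counters from the instrumented build into a single .info file.
     subprocess.check_call(["lcov", "--capture", "--directory", "objdir",
                            "--output-file", "coverage.info"])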

Because code coverage tools are difficult to install, this might be best hosted on a web server as a kind of try-server-esque mechanism.

We need a really good way to diff the data so that it outputs easily digestible, actionable information.
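
As a starting point, here is a minimal sketch of that diff, assuming each run's coverage has already been reduced to a mapping of source file to the set of executed line numbers (that intermediate format is an assumption; lcov output would first need to be parsed into it).

 def diff_coverage(before, after):
     """Report per-file lines whose coverage changed between two runs."""
     report = {}
     for path in sorted(set(before) | set(after)):
         gained = after.get(path, set()) - before.get(path, set())
         lost = before.get(path, set()) - after.get(path, set())
         if gained or lost:
             report[path] = {"newly covered": sorted(gained),
                             "no longer covered": sorted(lost)}
     return report

 # Toy data: with the patch, line 11 stops being exercised and line 13 starts.
 without_patch = {"netwerk/cache.cpp": {10, 11, 12}}
 with_patch = {"netwerk/cache.cpp": {10, 12, 13}}
 print(diff_coverage(without_patch, with_patch))
 # -> {'netwerk/cache.cpp': {'newly covered': [13], 'no longer covered': [11]}}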