QA/TDAI/Projects
These are the areas where we could use your help. The projects are listed in no particular order; feel free to jump into #qa on irc.mozilla.org if you're interested in helping out and have questions.
- Find Clint Talbert (IRC: ctalbert) (email: ctalbert at mozilla dot com)
-- or --
- Find Joel Maher (IRC: jmaher) (email: jmaher at mozilla dot com)
Tests that Need to Be Written
There are hundreds of bugs in our backlog that urgently need automated test cases. Please feel free to jump right in and start writing test cases for them. We've organized our highest-priority bugs into hot lists for each area of the Mozilla platform:
- Content Testing - tests for the interfaces exposed to web authors
- Layout Testing - tests for the way we render web pages
- Graphics Testing - tests for our graphics drawing subsystem
- JavaScript Testing - tests for our JavaScript engine
- General Tests - tests for areas of the platform outside of these broad divisions
Mobile
Help us with Fennec by creating tests for specific Fennec features:
- Mobile Browser Tests
- QA Companion (add-on) for mobile specific testing and feedback
Firefox Front End
- Create performance testing frameworks to help us measure the user responsiveness of various actions using Mozmill:
- How long does it take to open the 100th tab compared to opening the same site as the only tab?
- How long does it take for the awesome bar to show a bookmarked result if you have 1000 entries in your places database? 10,000 entries in the database?
- How long does it take if the result you search for is not bookmarked?
- etc...
- Help automate manual Litmus test cases using Mozmill.
- Current tracking spreadsheet for automation.
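The timing comparisons described above boil down to measuring the same action under two conditions and comparing the results. Mozmill tests themselves are written in JavaScript, but the measurement idea can be sketched in a few lines of Python; the function names here are illustrative, not part of any Mozmill API:

```python
import statistics
import time

def time_action(action, repeats=5):
    """Run `action` several times and return the median wall-clock
    duration in milliseconds (median resists outliers better than mean)."""
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append((time.perf_counter() - start) * 1000.0)
    return statistics.median(samples)

def compare(baseline, candidate, repeats=5):
    """Time two actions and return (baseline_ms, candidate_ms), e.g.
    opening a site as the only tab vs. opening it as the 100th tab."""
    return time_action(baseline, repeats), time_action(candidate, repeats)
```

Repeating each measurement and taking the median keeps one slow outlier (a GC pause, a disk hiccup) from skewing the comparison.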
Test Suite Integration
There are several ajax frameworks that have excellent test tools. We'd like to import those test suites into our mochitest test harness. Information on how to do this work can be found here. Currently we would like to get the following test suites integrated:
- Scriptaculous bug 424816
- Dojo bug 424818
- Yahoo UI bug 424819
- Selenium bug 424820
- Qooxdoo bug 426191
Test Results Comparison Reports
Currently there is no way to compare test results between runs. We need to store results more reliably so we can compare data from one run to another; this would help us track tests that fail intermittently, spot large numbers of failures across platforms, and so on.
The solution is to set up a database that stores the test failures from each run, preferably with a web interface for viewing them.
The idea is to have a results server: the relevant data from each run's log files will be parsed and sent to the server as JSON. Queries can then be built on the results and viewed on a webpage or as an XML feed.
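A minimal sketch of what one failure record sent to the results server might look like; the field names below are assumptions for illustration, not a settled schema:

```python
import json

# Hypothetical shape for one failure record sent to the results server.
# Every field name here is illustrative; the real schema is still open.
failure = {
    "suite": "reftest",
    "test": "layout/reftests/bugs/example-1.html",
    "status": "UNEXPECTED-FAIL",
    "platform": "linux",
    "buildid": "20100101000000",
}

# Serialize to JSON for transmission to the server.
payload = json.dumps(failure)
```

Keeping each failure as a small self-describing JSON document makes it easy to query by suite, platform, or build later on.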
To start off, we will:
- Write a log parser for xpcshell and reftest and upload the data to CouchDB.
- Once small sets of data are in there, look at building queries.
- Run the scripts automatically during builds and report the status (work with the build team).
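The first step above, the log parser, could start as something like the sketch below. It assumes failure lines follow the pipe-delimited "TEST-UNEXPECTED-FAIL | test | message" pattern; the exact format varies between harnesses, so treat this as a starting point rather than a finished parser:

```python
import json

def parse_failures(log_text):
    """Pull TEST-UNEXPECTED-FAIL lines out of a reftest/xpcshell log and
    return one dict per failure, ready to serialize as JSON.
    Assumes pipe-delimited lines: STATUS | test path | message."""
    failures = []
    for line in log_text.splitlines():
        if "TEST-UNEXPECTED-FAIL" not in line:
            continue
        parts = [p.strip() for p in line.split("|")]
        if len(parts) >= 3:
            failures.append({
                "status": parts[0],
                "test": parts[1],
                "message": parts[2],
            })
    return failures

# Each record could then be sent to CouchDB as a JSON document over
# plain HTTP (CouchDB's interface is HTTP + JSON), e.g. with urllib
# or a helper library.
```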