QA/TDAI/Projects

From MozillaWiki
<small>[[QA/TDAI|&laquo; QA/TDAI]]</small>


These are the areas where we could use your help. The projects are listed in no particular order; if you're interested in helping out and have questions, feel free to jump into #qa on irc.mozilla.org.


* Find Clint Talbert (IRC: ctalbert) (email: ctalbert at mozilla dot com)
-- or --
* Find Joel Maher (IRC: jmaher) (email: jmaher at mozilla dot com)


= Tests that Need to Be Written =
There are hundreds of bugs in our backlog that urgently need automated test cases. Please feel free to jump right in and start writing test cases for them. We've organized our highest-priority bugs into hot lists for each area of the Mozilla platform:
* A great way to get started here would be to work on some easy tests for XUL Error Pages - see {{bug|529119}} for ideas and ask on #qa if you have questions!
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=Core&component=Content&component=Document+Navigation&component=DOM&component=DOM%3A+Abstract+Schemas&component=DOM%3A+Core+%26+HTML&component=DOM%3A+CSS+Object+Model&component=DOM%3A+Events&component=DOM%3A+Mozilla+Extensions&component=DOM%3A+Other&component=DOM%3A+Traversal-Range&component=DOM%3A+Validation&component=Event+Handling&component=HTML%3A+Form+Submission&component=HTML%3A+Parser&component=Java+APIs+for+DOM&component=Java+Embedding+Plugin&component=Java%3A+Live+Connect&component=Java%3A+OJI&component=Networking&component=Networking%3A+Cache&component=Networking%3A+Cookies&component=Networking%3A+File&component=Networking%3A+FTP&component=Networking%3A+HTTP&component=Networking%3A+JAR&component=Plug-ins&component=Security&component=Security%3A+CAPS&component=Serializers&component=Web+Services&component=WebDAV&component=XBL&component=XForms&component=XML&component=XPConnect&component=XSLT&component=XTF&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&resolution=FIXED&resolution=DUPLICATE&resolution=---&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=-3m&chfieldto=Now&chfield=assigned_to&chfield=resolution&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&field0-0-0=flagtypes.name&type0-0-0=equals&value0-0-0=in-testsuite%3F Content Testing - tests for the interfaces exposed to web authors]
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=Core&product=Toolkit&component=Layout&component=Layout%3A+Block+and+Inline&component=Layout%3A+Floats&component=Layout%3A+Form+Controls&component=Layout%3A+HTML+Frames&component=Layout%3A+Images&component=Layout%3A+Misc+Code&component=Layout%3A+R+%26+A+Pos&component=Layout%3A+Tables&component=Layout%3A+Text&component=Layout%3A+View+Rendering&component=MathML&component=Plug-ins&component=Plugin+Finder+Service&component=Printing&component=Selection&component=Style+System+%28CSS%29&component=SVG&component=Video%2FAudio&component=View+Source&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&resolution=FIXED&resolution=DUPLICATE&resolution=---&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=-3m&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&known_name=Gfx%2FWidget+blocking1.9.1%2B&query_based_on=Gfx%2FWidget+blocking1.9.1%2B&field0-0-0=flagtypes.name&type0-0-0=equals&value0-0-0=in-testsuite%3F Layout Testing - tests for the way we render web pages]
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=Core&product=Toolkit&component=Drag+and+Drop&component=GFX%3A+Color+Management&component=GFX%3A+Thebes&component=ImageLib&component=Layout%3A+Canvas&component=Widget&component=Widget%3A+BeOS&component=Widget%3A+Gtk&component=Widget%3A+Mac&component=Widget%3A+OS%2F2&component=Widget%3A+Photon&component=Widget%3A+Win32&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&resolution=FIXED&resolution=DUPLICATE&resolution=---&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=-3m&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&known_name=Gfx%2FWidget+blocking1.9.1%2B&query_based_on=Gfx%2FWidget+blocking1.9.1%2B&field0-0-0=flagtypes.name&type0-0-0=equals&value0-0-0=in-testsuite%3F Graphics Testing - tests for our graphics drawing subsystem]
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=Core&component=JavaScript+Engine&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=allwords&keywords=&resolution=FIXED&resolution=DUPLICATE&resolution=---&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=-3m&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&known_name=wanted1.9.1%2B+JS&query_based_on=wanted1.9.1%2B+JS&field0-0-0=flagtypes.name&type0-0-0=equals&value0-0-0=in-testsuite%3F Javascript Testing - tests for our Javascript engine]
* [https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&short_desc_type=allwordssubstr&short_desc=&product=Core&product=Toolkit&component=Build+Config&component=Cmd-line+Features&component=Disability+Access+APIs&component=Embedding%3A+ActiveX+Wrapper&component=Embedding%3A+APIs&component=Embedding%3A+GRE+Core&component=Embedding%3A+GTK+Widget&component=Embedding%3A+Mac&component=Embedding%3A+MFC+Embed&component=Embedding%3A+Packaging&component=File+Handling&component=Find+Backend&component=General&component=Geolocation&component=GFX%3A+Color+Management&component=History%3A+Global&component=Image+Blocking&component=Image%3A+Painting&component=Installer%3A+XPInstall+Engine&component=Internationalization&component=IPC&component=Java+APIs+to+WebShell&component=Java+to+XPCOM+Bridge&component=Java-Implemented+Plugins&component=jemalloc&component=Keyboard%3A+Navigation&component=Localization&component=Preferences%3A+Backend&component=Print+Preview&component=Printing%3A+Output&component=Printing%3A+Setup&component=Profile%3A+BackEnd&component=Profile%3A+Migration&component=Profile%3A+Roaming&component=QuickLaunch+%28AKA+turbo+mode%29&component=RDF&component=Rewriting+and+Analysis&component=Security%3A+PSM&component=Security%3A+S%2FMIME&component=Security%3A+UI&component=Spelling+checker&component=SQL&component=String&component=Talkback+Client&component=Tracking&component=Widget%3A+Cocoa&component=Widget%3A+Qt&component=X-remote&component=XP+Toolkit%2FWidgets%3A+Menus&component=XP+Toolkit%2FWidgets%3A+XUL&component=XPCOM&long_desc_type=allwordssubstr&long_desc=&bug_file_loc_type=allwordssubstr&bug_file_loc=&status_whiteboard_type=allwordssubstr&status_whiteboard=&keywords_type=nowords&keywords=&resolution=FIXED&resolution=DUPLICATE&resolution=---&emailtype1=substring&email1=&emailtype2=substring&email2=&bugidtype=include&bug_id=&votes=&chfieldfrom=-3m&chfieldto=Now&chfieldvalue=&cmdtype=doit&order=Reuse+same+sort+as+last+time&known_name=untriaged%2B&query_based_on=untriaged%2B&field0-0-0=flagtypes.name&type0-0-0=equals&value0-0-0=in-testsuite%3F&field0-0-1=noop&type0-0-1=noop&value0-0-1=&field0-1-0=noop&type0-1-0=noop&value0-1-0= General Tests - tests for areas of the platform outside of these broad divisions]


= Mobile =
Help us with Fennec by creating tests for specific Fennec features:
* [https://wiki.mozilla.org/Mobile/Fennec_TestDev Mobile Browser Tests]


= Firefox Front End =
* Create performance testing frameworks to help us measure the user responsiveness of various actions using Mozmill:
** How long does it take to open the 100th tab, compared to opening the same site as the only tab?
** How long does it take for the awesome bar to show a bookmarked result if you have 1,000 entries in your places database? 10,000 entries?
** How long does it take if the result you search for is '''not''' bookmarked?
** etc.
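The measurement half of such a framework can be sketched independently of Mozmill. Below is an illustrative Python sketch (the name `time_action` is hypothetical): the `action` callable stands in for the Mozmill-driven UI step, which is not shown here.

```python
import statistics
import time

def time_action(action, repeats=20):
    """Time a zero-argument callable `repeats` times; return stats in ms.

    `action` stands in for a UI step driven by Mozmill (e.g. opening a
    tab or querying the awesome bar); the Mozmill side is not shown.
    """
    samples = []
    for _ in range(repeats):
        start = time.perf_counter()
        action()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "median_ms": statistics.median(samples),
        # Nearest-rank 95th percentile; good enough for a sketch.
        "p95_ms": samples[int(0.95 * (len(samples) - 1))],
    }
```

Comparing the numbers this returns for "open the only tab" against "open the 100th tab" would give the responsiveness deltas described above.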




= Test Suite Integration =
There are several AJAX frameworks that have excellent test tools. We'd like to import those test suites into our mochitest test harness. Information on how to do this work can be found [[QA/TDAI/Projects/Test_Suite_Integration|here]]. Currently we would like to get the following test suites integrated:
* jQuery {{bug|424813}} -- in progress - harthur
* Prototype {{bug|424814}} -- in progress - harthur
* Scriptaculous {{bug|424816}} -- in progress - ctalbert
* Dojo {{bug|424818}} -- in progress - mw22
* Yahoo UI {{bug|424819}}
* Selenium {{bug|424820}}
* Qooxdoo {{bug|426191}}


= Test Results Comparison Reports =
Update: Some progress has been made, and the system is up and running [http://brasstacks.mozilla.com/buildcompare/ here]. The description is [[here]].
 
Currently there is no way to compare test results between runs. We need to store test results more reliably so that we can compare the data and see the differences from one run to another; this would help us track tests that fail intermittently, track large numbers of failures across different platforms, and so on.
 
The solution is to set up a database that stores the test failures from each run, preferably with a web interface for viewing them.
 
The idea is to have a results server: the relevant data from each run's log files will be parsed and sent to the server as JSON. Queries can then be built on the results, to be viewed on a webpage or as an XML feed.
 
To start off, we will:
* Write a log parser for xpcshell and reftest and upload the data to [http://couchdb.apache.org/ CouchDB].
* Once small sets of data are in there, look at the queries we need.
* Run the parsing scripts automatically during builds and report status (working with the build team).
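The parser step could start out as small as the following Python sketch. The log-line shapes matched here are assumptions for illustration, not the harnesses' exact output; the function folds a log into one per-run document in the JSON format described below.

```python
import re

# Assumed log-line shapes (illustrative; the real harness output
# differs in detail):
#   TEST-PASS | test_name | ...
#   TEST-UNEXPECTED-FAIL | test_name | reason
#   TEST-KNOWN-FAIL | test_name | ...      (counted as "todo")
LINE_RE = re.compile(
    r"^(TEST-PASS|TEST-UNEXPECTED-FAIL|TEST-KNOWN-FAIL)\s*\|\s*([^|]+?)\s*\|")

def parse_log(lines, build, product, os_name, machineid, testtype, timestamp):
    """Fold harness log lines into one per-run result document."""
    tests = {}
    for line in lines:
        m = LINE_RE.match(line)
        if not m:
            continue  # skip lines that are not test results
        status, name = m.groups()
        entry = tests.setdefault(
            name, {"pass": 0, "fail": 0, "todo": 0, "notes": ""})
        if status == "TEST-PASS":
            entry["pass"] += 1
        elif status == "TEST-UNEXPECTED-FAIL":
            entry["fail"] += 1
            entry["notes"] = line.strip()
        else:
            entry["todo"] += 1
    return {"build": build, "product": product, "os": os_name,
            "machineid": machineid, "testtype": testtype,
            "timestamp": timestamp, "tests": tests}
```

The resulting dict can be serialized with `json.dumps` and uploaded to CouchDB as one document per log file.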
 
Some ideas I have for views (queries) that we will need:
* the number of pass, fail, and todo tests (e.g. 1980, 6, 122) for a given test run (assuming one document per run, e.g. reftest-buildid-date)
* a method to display results for any given test pass; we would like to display results for any given date (or date range), as well as compare the most current run against any given date
* find tests which were not run in a given test pass: get a list of all tests found in previous runs, cross-reference it with the current test pass, and return the tests missing from the current pass
* look at the history of a given set of tests (a single test, or tests matching a regex or a type) over all runs in a given time period
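In production these views would be CouchDB map/reduce functions, but the "tests not run" query above can be prototyped in plain Python over the run documents. A sketch, assuming the per-run document format given below:

```python
def missing_tests(previous_runs, current_run):
    """Return tests seen in any previous run but absent from the
    current test pass (the cross-reference query described above)."""
    seen = set()
    for run in previous_runs:
        seen.update(run["tests"])  # collect every test name ever seen
    return sorted(seen - set(current_run["tests"]))
```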
 
 
Here is the JSON format that we will use:
  {"test id": {
    "build":2009070112345,
    "product":"fennec",
    "os":"maemo",
    "machineid":"nokia-n810-02",
    "testtype":"fennec reftest",
    "timestamp":"20090701153600",
    "tests": {
      "testname1":{"pass":3,"fail":0,"todo":1,"notes":"reason for failure"},
      "testname2":{"pass":12,"fail":0,"todo":2,"notes":"more notes"},
      ...
    }
  }
  }
 
For each log file we parse, we will add a document in the above JSON format. This allows us to have multiple documents for a given run (if the tests are run in parallel) and a collection of different builds and dates to compare against. All test results will live in a single database as documents, so we can share the same view code.
 
The notes field will collect the notes found whenever there is a failure in the specific test file.
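Given two such documents, the run-to-run comparison at the heart of this project reduces to a per-test diff. A hypothetical sketch (the function name and status labels are illustrative):

```python
def compare_runs(old_run, new_run):
    """Flag tests that started or stopped failing between two run
    documents in the format above."""
    changes = {}
    for name in set(old_run["tests"]) | set(new_run["tests"]):
        old_fail = old_run["tests"].get(name, {}).get("fail", 0)
        new_fail = new_run["tests"].get(name, {}).get("fail", 0)
        if old_fail == 0 and new_fail > 0:
            changes[name] = "regressed"
        elif old_fail > 0 and new_fail == 0:
            changes[name] = "fixed"
    return changes
```

Running this across many pairs of runs is also a cheap way to spot tests that flip between "regressed" and "fixed", i.e. the randomly failing tests mentioned above.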
 
 
The unknowns that we have are:
* test id (just need to standardize the format)
 
 
One issue found while using CouchDB is that the Futon view produces results:
http://67.205.203.107:5984/_utils/database.html?logs-1/_design%2Flogs-1%2Fsummary
 
but the HTTP view API appears to be missing information:
http://67.205.203.107:5984/logs-1/_design/logs-1/_view/summary
 
This might be related to how we have the database or the _design document set up.
 
= Per-Patch Code Coverage Analysis =
This project will aid developers making large, substantial changes to the codebase. If you change a large piece of code or a significant behavior, you can use code coverage metrics gathered during the unit tests to see whether your change has any unintended effects, and whether you are hitting all the cases in your code.
This system would do the following:
* Add a --coverage option to the test harnesses (mochitest, reftest, xpcshell) which tells them to gather code coverage data. ''Dependency: this requires the code coverage tools to be installed.''
* Enable developers to submit a patch.
* Run the unit tests once with the patch and once without it, then diff the two sets of gathered data using a tool that helps show what was different between the two code paths.
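The diffing step can be sketched as a set comparison once both coverage runs are reduced to per-file sets of covered lines (an assumed intermediate shape; real gcov/lcov output would need to be converted into it first):

```python
def diff_coverage(base, patched):
    """Compare per-file covered-line sets from a run without the patch
    (`base`) and a run with it (`patched`)."""
    report = {}
    for path in set(base) | set(patched):
        gained = patched.get(path, set()) - base.get(path, set())
        lost = base.get(path, set()) - patched.get(path, set())
        if gained or lost:
            report[path] = {"newly_covered": sorted(gained),
                            "no_longer_covered": sorted(lost)}
    return report
```

Lines that show up as "no_longer_covered" after a patch are exactly the kind of unintended effect this project is meant to surface.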
 
Because code coverage tools are difficult to install, this might be best hosted on a web server as a kind of try-server-like mechanism.
 
We need a really good way to diff the data so that it outputs easily digestible, actionable information.

Latest revision as of 19:00, 18 February 2010
