Calendar:QA Procedures For Litmus

The Litmus Testing System is a great tool. This page details how we handle the results from our test runs on the Calendar QA project. Note that all of this information is subject to change: because Litmus is being developed at the same time that we are using it, many of these procedures will probably change.

Litmus Feature Wish List

These are items that will make working with Litmus (as described below) much easier.

  • Ability to query on the "Vetted" status of failures and unclear results. This would let us filter already-vetted results out of our queries.
  • Ability to query for all failures that do (or do not) have a bugzilla ID.
  • Ability to query for all failures that are associated with a bugzilla ID AND where that Bug has been resolved. (In other words, ability to query for all test cases that should be retested due to fixed or resolved bugs.)
  • Note: We may start to track harnesses with a whiteboard entry. For example, "[testharness=litmus]" could be attached to the bugs that are associated with Litmus tests. It may be possible to combine queries against Bugzilla and Litmus to get the information requested above (a sketch of the Bugzilla half follows below). (ray)
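
As a rough sketch of the Bugzilla half of such a combined query, assuming the proposed "[testharness=litmus]" whiteboard convention and the current Bugzilla REST API (which postdates this page), the following lists fixed bugs whose Litmus test cases should be retested. The Litmus side would still need to be queried separately, since Litmus exposes no comparable API here.

  # Sketch only: list resolved-and-fixed bugs carrying the proposed
  # "[testharness=litmus]" whiteboard tag, i.e. bugs whose associated
  # Litmus test cases should be queued for retesting.
  import requests

  BUGZILLA_SEARCH = "https://bugzilla.mozilla.org/rest/bug"

  def bugs_ready_for_retest():
      params = {
          "whiteboard": "[testharness=litmus]",   # substring match on the status whiteboard
          "status": "RESOLVED",
          "resolution": "FIXED",
          "include_fields": "id,summary,resolution",
      }
      response = requests.get(BUGZILLA_SEARCH, params=params, timeout=30)
      response.raise_for_status()
      return response.json()["bugs"]

  if __name__ == "__main__":
      for bug in bugs_ready_for_retest():
          print(f"Bug {bug['id']} ({bug['resolution']}): {bug['summary']}")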

What to do with Test Cases that have Failed

If a test case has failed in Litmus, then follow these steps to vet the test case.

  • Ensure that a Bug ID was entered for the failure.
    • If there is no Bug ID specified, search Bugzilla for an existing bug and enter its number into the test case (a search sketch follows after this list).
    • If no existing bug can be found, ensure that the issue is an actual bug (i.e., not a tester error), then file a new bug and add its number to the test case.
  • Once the test case has an actual Bug ID associated with it, please ensure the Valid checkbox is checked, and click the "Vet Result" button.
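
As a rough illustration of the "search Bugzilla for an existing bug" step, here is a sketch using the Bugzilla REST API (which postdates this page); the product name and the summary keywords are placeholders chosen for illustration, not fixed values.

  # Sketch only: look for possibly matching open Calendar bugs before
  # filing a new one. The product name and summary keywords below are
  # placeholders, not an agreed-upon convention.
  import requests

  BUGZILLA_SEARCH = "https://bugzilla.mozilla.org/rest/bug"

  def find_candidate_bugs(keywords):
      params = {
          "product": "Calendar",           # assumed product for Sunbird/Lightning bugs
          "summary": keywords,             # substring match on the bug summary
          "resolution": "---",             # open (unresolved) bugs only
          "include_fields": "id,summary,status",
      }
      response = requests.get(BUGZILLA_SEARCH, params=params, timeout=30)
      response.raise_for_status()
      return response.json()["bugs"]

  if __name__ == "__main__":
      for bug in find_candidate_bugs("recurring event reminder"):
          print(f"Candidate: bug {bug['id']} [{bug['status']}] {bug['summary']}")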

What to do with Test Cases that are Unclear/Broken

If a test case is marked as unclear or broken, follow these steps:

  • Usually, testers will leave a comment if the test case is unclear.
  • Understand the reason why the test case was unclear and correct the test case.
  • Once the test case has been corrected and made more clear, ensure the Valid checkbox is checked on the result, and click the "Vet Result" button.

If a test case previously passed and now fails

  • This is very serious and usually indicates a regression.
  • Ensure a bug ID is specified, and when the bug is fixed, be sure to retest this test case.

If a test case previously failed and now passes

  • This indicates that a long-standing problem has now been fixed. In this case, make sure (if possible) that you retest the issue using the same architecture, operating system and, of course, product (Sunbird or Lightning).
  • Ensure that the associated bug ID was resolved or fixed.
    • If the bug ID was NOT resolved or fixed, then this bug may have been fixed as a side effect of a change made for another bug. In this case, add a comment about this in Bugzilla, inviting the developers to investigate how the bug was suddenly fixed.

If a test case has both previously passed and failed

  • This indicates that either the test case was unclear or the code in this area keeps breaking and being fixed.
  • Try to determine which of these it is: contact previous testers and review their comments.
  • If the test case seems to be unclear, try to re-write its steps and expected behaviors.
  • If the test case does seem to test code that regresses and breaks often, alert the developers to this. Also, be sure to test that area of code any time you download a nightly or begin a test day.

Wish list (todo)

  • The "Vet Result" button and checkbox must be explained better; it is not clear what they mean or how they work. This could be a link to some external documentation, but an internal description would be better.
  • Sometimes a test case is marked as "failed" because the tester misunderstood the scenario, or wanted something more than is actually implemented; in that case the result should have been marked as "passed" and a new issue (enhancement) filed in Bugzilla.

example: http://litmus.mozilla.org/single_result.cgi?id=24280

question: what should be done with such a result?

  • Add a note that all results must be verified using a clean profile and the latest build.

example: http://litmus.mozilla.org/single_result.cgi?id=25284

  • In some cases the added comment shows that the tester did not interpret the scenario properly (probably misunderstood it), so the result is not valid (whether it passed or failed, it simply makes no sense). Should such results be removed or not? It would be nice if they were no longer visible in search results...

example: http://litmus.mozilla.org/single_result.cgi?id=44069 (this was a test, but it could also be removed :)

  • When a test case (scenario) is improved, all existing results for that test may no longer be valid. What should be done then?
  • Regarding the section "What to do with Test Cases that have Failed": when you link a test result to a bug in Bugzilla, you should also check whether other failures can be linked to the same bug (two testers may have marked a result as failed for the same reason, making one a duplicate).
  • I would suggest that points on the next test day be counted only for people who mark a result as failed AND file an issue in Bugzilla or link the result to an existing bug :-) That way "Associated Bug #s:" won't be empty and verification will be easy. We need to encourage this process.