B2G/QA/WebAPI Test Plan/Instructions for Contributors

Instructions for Contributors to the WebAPI Testing project

Welcome to the WebAPI testing project! Thank you so much for considering working with us.

If you would like to contribute and have not already contacted us, please send us an email and we'll be happy to help you get started!

These are the (currently rough) instructions for helping out; they will be improved over time.

Goal

We have two goals. One is acceptance testing of the APIs as implemented in Firefox OS. The other is implementing regression tests that will run from here on out to aid new development.

Acceptance testing is the more important goal in the short term for QA, as it's driven by the date of the final release. This must be balanced against development's primary goal, getting more regression tests in place. The regression tests are important, and should be a portion of every API's test plan. However, a strategy that gives us regression tests but no acceptance tests will make signoff risky.

How

Acceptance testing can be achieved in two ways: mochitests run on-device, and interactive web pages created to exercise the APIs on-device. An example of an interactive page would be the Screen Orientation Exercise.
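
To give a feel for that, here's a stripped-down sketch of what such a page might look like. It assumes the moz-prefixed Screen Orientation API Gecko exposed at the time (screen.mozOrientation, screen.mozLockOrientation, screen.mozUnlockOrientation, and the mozorientationchange event); adjust the names to whatever the build you're testing actually implements.

  <!DOCTYPE html>
  <html>
  <head>
    <meta charset="utf-8">
    <title>Screen Orientation exercise (sketch)</title>
  </head>
  <body>
    <p>Current orientation: <span id="current">?</span></p>
    <button id="lock">Lock to portrait-primary</button>
    <button id="unlock">Unlock</button>
    <script>
      var current = document.getElementById("current");
      function show() {
        // screen.mozOrientation is an assumption; use whatever the build exposes.
        current.textContent = screen.mozOrientation;
      }
      screen.addEventListener("mozorientationchange", show, false);
      document.getElementById("lock").onclick = function () {
        screen.mozLockOrientation("portrait-primary");
      };
      document.getElementById("unlock").onclick = function () {
        screen.mozUnlockOrientation();
      };
      show();
    </script>
  </body>
  </html>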

Regression testing can also be achieved in a couple of ways: mochitests run in any environment that supports the API in question, and Marionette tests that run only against a Firefox OS emulator.
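
As a rough illustration, a plain mochitest is just an HTML page that loads SimpleTest and makes assertions. The sketch below uses a made-up navigator.mozWidget placeholder for the API under test; the SimpleTest calls (ok, isnot, waitForExplicitFinish, finish) are the standard harness functions.

  <!DOCTYPE html>
  <html>
  <head>
    <meta charset="utf-8">
    <title>Mochitest sketch for a WebAPI</title>
    <script src="/tests/SimpleTest/SimpleTest.js"></script>
    <link rel="stylesheet" href="/tests/SimpleTest/test.css">
  </head>
  <body>
  <script>
    // navigator.mozWidget is a placeholder; substitute the real API under test.
    SimpleTest.waitForExplicitFinish();

    ok("mozWidget" in navigator, "API should be exposed on navigator");

    var request = navigator.mozWidget.get();
    request.onsuccess = function () {
      isnot(request.result, null, "get() should return a result");
      SimpleTest.finish();
    };
    request.onerror = function () {
      ok(false, "get() should not fail: " + request.error.name);
      SimpleTest.finish();
    };
  </script>
  </body>
  </html>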

Unfortunately, we do not have a way to run regression tests against a real device automatically. We can manually kick off a mochitest run, but those runs won't be part of continuous integration.

Priorities

For acceptance testing, we'll lean towards making interactive pages over on-device mochitests. The reason is that we don't yet know how useful device-only mochitests will be in the long run: they can't be used for CI regression runs, and support for on-device mochitest is not assured past Q3 2012. Interactive pages also let us perform exploratory testing, and they can be packaged as demos/examples for our developer community.

However, the strategy is a judgment call for you to make per-area.

For regression testing, where an API (or a portion of an API) is not device-specific, mochitest should be used. Mochitests run everywhere, so we get a lot of value from them. If a device-specific feature must be used, then Marionette against the emulator should be favored.
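
For the emulator case, the sketch below shows roughly what a Marionette-driven WebAPI test looks like. The harness globals it uses (MARIONETTE_TIMEOUT, ok, is, finish, runEmulatorCmd) are assumptions based on the existing emulator tests in the tree, and both navigator.mozWidget and the emulator command are made-up placeholders, so check a real test under dom/ before copying the pattern.

  /* Marionette emulator test sketch. The harness globals used here
     (MARIONETTE_TIMEOUT, ok, is, finish, runEmulatorCmd) are assumptions
     based on the emulator WebAPI tests in dom/, and both navigator.mozWidget
     and the emulator command are made-up placeholders. */
  MARIONETTE_TIMEOUT = 30000;

  var widget = window.navigator.mozWidget;
  ok(widget, "API should be exposed on navigator in the emulator");

  // Drive the emulated hardware, then check that the API reflects the change.
  runEmulatorCmd("widget set-state ready", function (result) {
    is(widget.state, "ready", "API should report the state set on the emulator");
    finish();
  });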

A given area can (and probably should) use any combination of test frameworks to implement the final testing strategy. By the time we're done, we may end up writing tests in xpcshell or other frameworks too.

Scope

For acceptance, we should touch as much of the API as possible, using it like an end-user (app programmer) would. For regression, we want more of a unit-test style, verifying each separate API function works as designed.

For this round, we mainly care about basic positive tests. Negative tests, corner cases, etc., are nice-to-have, but don't spend major time on them unless you think they're vital. We'll come back around to do those later.

Procedure

I fully expect you to have a ton of questions while you do this. Please don't hesitate to ask Geo Mealer via email or on IRC (user:geo).

  1. Pick a Ready area from the Test Plan.

  2. Go to the tracker (join the project if necessary).

  3. Hit "Start" on the Icebox entry corresponding to the area. It'll move to Current w/ your initials added.

  4. Update the test plan table to also show it as started.

    A note on the tracker entry: it's a placeholder. The "10" estimate is a guess of 2.5 days of work per area (one day == 4 hours of useful time). As you discover more about the area, you should break this down into a planning estimate and implementation estimate.

    You can change the existing tracker entry to "planning" and add one or more "implementation" entries, similar to the tasks in the Done column. Try to keep this up to date with what you're actually doing.

  5. Go take a look at the API. You can start from the developer list on the WebAPI page. The key developer is likely the one assigned to the tracking bug for the API.

  6. Send an email to the developer to start the planning process. You should ask:

    • Is the API ready?
    • Is the API doc page accurate?
    • If not, what does the API look like now?
    • What testing gaps exist? This is the starting point for your plan.

  7. This is where you should break the Tracker entry down into planning vs. implementation tasks, with estimates of the time needed in hours (4 hrs per day, remember).

  8. Come up with a plan that matches the priorities and scope set above.

  9. Create the test plan document for the API. There's a template at the bottom of the WebAPI Test Plan page.

  10. Close out any Tracker tasks for planning. Touch up the estimates on the implementation tasks and/or create additional implementation tasks to match the plan. We suggest one implementation task per framework plus one for an interactive page if you're building one.

  11. Create a tracking bug in Bugzilla for your implementation. You can break it down as above, or just use one bug for everything.

  12. Implement.

    Regarding implementation: questions about specific usage of the API go to the devs. However, questions about JavaScript, basic development or framework issues, or functional programming (callback usage; see the short callback sketch at the bottom of this page) should go to Geo Mealer. We don't want to lean on the devs for ramp-up.

  13. Review. Ask your developer who (besides them) should be on the review.

  14. Finish out any implementation Tracker tasks and move on. If you feel we definitely didn't finish the area, create a Revisit task and put it in the Tracker icebox.
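
One more note on the callback question from step 12: many of these WebAPIs return a DOMRequest-style object, so test code wires up onsuccess/onerror handlers instead of reading a return value directly. Here's a minimal sketch of that pattern; navigator.mozWidget is a made-up placeholder, and the mochitest calls at the end just show how the handlers typically assert and finish.

  // DOMRequest-style callback pattern common to these WebAPIs.
  // navigator.mozWidget is a placeholder; substitute the API under test.
  function getWidgetValue(onDone, onFail) {
    var request = navigator.mozWidget.get();
    request.onsuccess = function () {
      onDone(request.result);      // result is only valid inside onsuccess
    };
    request.onerror = function () {
      onFail(request.error.name);  // report the DOM error name
    };
  }

  // In a mochitest, for example, the handlers assert and then finish the run.
  getWidgetValue(
    function (value) { ok(value !== undefined, "got a value: " + value); SimpleTest.finish(); },
    function (name) { ok(false, "request failed: " + name); SimpleTest.finish(); }
  );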