Websites/Affiliates/Test Plan


Test Plan

Analysis

Currently in use

(as of 12.01.2011)

dev

  • nose
  • unittest
  • webtrends
  • other metrics collected manually on a weekly basis

qa

  • pytest
  • unittestzero
  • Selenium
  • Manual
    • fuzzing
    • Netsparker

Stretch goals for Q1

dev

  • QUnit
  • Statsd/Graphite

qa

  • Automated fuzzing
  • Quality bots
  • Garmr

Not planned

  • load testing

Changes

Moving forward, there will be a number of significant changes in how QA is done within Affiliates.

1. Manual testing:

  • Manual testing will be done only for exploratory testing and new feature verification.
  • Manual testing of bugs, features and exploratory testing will be done by QA.
  • Manual testing may be done in production, instead of in staging.
    • Blocking bugs will still be verified prior to production push.
  • Manual testing of new features will be done by community members.

2. Automated testing:

A carefully defined suite of automated Selenium tests will be created. Tests will focus on areas that cannot be covered by unit tests. For example, the Python unit test suite cannot test CSRF functionality because it disables CSRF protection.

  • Selenium tests will only be for use cases which cannot be covered by Python or QUnit.
  • This test suite will be run for each code check-in, and will only be run in staging or the developer environment. Each test will qualify as a deployment blocker if it fails.
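For illustration, a deployment-blocking check-in test could look like the following sketch, assuming pytest and the Selenium WebDriver Python bindings; the staging URL and the title check are hypothetical placeholders.

  # A minimal sketch of a deployment-blocking Selenium test, assuming
  # pytest and the Selenium WebDriver Python bindings. The staging URL
  # and the title check are hypothetical placeholders.
  import pytest
  from selenium import webdriver

  STAGING_URL = "https://affiliates.allizom.org"  # hypothetical staging host

  @pytest.fixture
  def driver():
      d = webdriver.Firefox()
      yield d
      d.quit()

  def test_home_page_loads(driver):
      # A failure here would qualify as a deployment blocker.
      driver.get(STAGING_URL)
      assert "Affiliates" in driver.title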

Tests will cover critical areas such as: XXX

  • All other automated tests will be removed.
  • In production, automated tests that verify the environment and services are working will be run on demand.
  • More tests could be run in production, but only ones that do not create or destroy content; it is important not to generate unnecessary data. Features are already fully tested after each check-in.
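One way to keep the production runs safe is to mark tests that neither create nor destroy content and run only those on demand. Below is a minimal conftest.py sketch assuming pytest; the "nondestructive" marker and the "--production" flag are assumptions, loosely modeled on Mozilla WebQA conventions, not an existing plugin's API.

  # conftest.py -- a sketch that restricts on-demand production runs to
  # tests explicitly marked as safe. The "nondestructive" marker and the
  # "--production" flag are assumptions.
  import pytest

  def pytest_configure(config):
      config.addinivalue_line(
          "markers", "nondestructive: does not create or destroy content")

  def pytest_addoption(parser):
      parser.addoption("--production", action="store_true",
                       help="run against production; skip destructive tests")

  def pytest_collection_modifyitems(config, items):
      if not config.getoption("--production"):
          return
      skip = pytest.mark.skip(reason="destructive; not run in production")
      for item in items:
          # Anything not explicitly marked nondestructive is skipped.
          if "nondestructive" not in item.keywords:
              item.add_marker(skip)

A test would then opt in with @pytest.mark.nondestructive, and everything else is skipped when --production is passed.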

3. Releases:

  • Releases will go out as often as there are fixes ready to go. They will be pushed by developers, without IT or QA involvement. The developers will be in charge of monitoring services after each release to watch for changes in behavior.

4. Bug fixes:

  • Do not need verification prior to releases. They may be verified by a developer, or by QA in production.

Risk areas

The lower the number, the higher the risk.

Risk level 1 (highest):

  • [0] Banner links work - CDN
  • Registration (old school or new BrowserID)
  • Logging in
  • Password recovery

Risk level 2:

  • Making Banners
  • Profile editing
  • My banners
  • Email newsletter registration
  • Leaderboard
  • Calendar

Legend

[1] must have good enough coverage that we feel comfortable a regression will be caught quickly and a fix pushed out to production if broken
[2] can fail for up to an hour
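These tiers could be encoded in the automated suite so that a level-1 failure blocks a push while level-2 failures only raise an alert. A sketch using a pytest marker; the marker name "risk_1" and the URL are hypothetical.

  # A sketch of tagging a test with its risk tier, assuming pytest.
  # The marker name "risk_1" and the URL are hypothetical.
  import pytest
  from selenium import webdriver

  @pytest.mark.risk_1
  def test_password_recovery_page_loads():
      driver = webdriver.Firefox()
      try:
          driver.get("https://affiliates.mozilla.org/forgot")  # hypothetical path
          assert "password" in driver.page_source.lower()
      finally:
          driver.quit()

Running "pytest -m risk_1" before a push would then exercise only the highest-risk areas.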

Risk plan and tools

1. Risk: Less critical tests will not be covered with automation.

  • If such a feature breaks it won't cause as much of a problem. It is deemed acceptable to wait up to an hour for a fix.

2. Risk: More bugs with less automation, why not automate more rather than less?

  • Time. Creating more tests in Selenium would duplicate what unit tests have already verified, and it slows the process down. Right now there is a lot of duplicate effort. The team will discuss what is not currently covered by unit tests.

3. Risk: Releases may have hidden bugs or regressions which affect users.

  • Graphite will be used to monitor usage of many common Affiliates functions. Any significant spikes or dips in the data after a release will quickly indicate an issue.
  • StatsD will be used to collect site usage metrics from the application and feed them to Graphite, which may surface issues if they exist.
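For illustration, instrumenting a common user action with a StatsD counter might look like the sketch below, assuming the Python "statsd" client library; the host, port, metric prefix, and surrounding function are hypothetical.

  # A sketch of counting a common user action with StatsD, assuming the
  # Python "statsd" client library. Host, port, metric prefix, and the
  # surrounding function are hypothetical.
  from statsd import StatsClient

  statsd = StatsClient(host="localhost", port=8125, prefix="affiliates")

  def register_user(form_data):
      # ... create the account (omitted) ...
      statsd.incr("users.registered")

Graphite charts the resulting counters over time, so a dip in users.registered right after a release points at a registration regression.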

4. Risk: Quality is going down, how do we change direction?

  • We are already releasing very quickly, which allows for fixes to go out faster. If we get to a point where we are not satisfied with the quality we will weigh that against the benefits of the speed of fixes.

5. Risk: Quality will go down over time as QA is less involved with the release process.

  • There is no endpoint for looking at quality; the goal is to deliver the best product as fast as possible. Monitoring releases will provide quality statistics to refer to.

6. Risk: We don't have a current baseline for quality.

  • We are gathering data all the time with the new services. It is really difficult to measure the current rate of regressions and new bugs shipped in releases, so the entire team will keep an eye on regressions and user feedback in order to maintain current levels of quality.