Stub Attribution Test Plan

Dependencies (tools/systems):

Acceptance Criteria

In order to begin testing, the following are needed:

Full Query
ID       Summary                                                        Priority  Status
1273940  Make stub attribution script available on the server           --        RESOLVED
1278981  Create service to authenticate stub attribution request        --        RESOLVED
1279291  Construct and send attribution code from download buttons      --        RESOLVED
1306457  Implement the whitelist for Stub Attribution `source` field    --        RESOLVED

4 Total; 0 Open (0%); 4 Resolved (100%); 0 Verified (0%);

  1. a staged or dark-launched Mozilla.org instance ready with Download-button logic and passed-in attribution_code (from bug 1279291), which:
  2. ...uses the AJAX service to sign Stub-Attribution URLs with a SHA-256 HMAC hash (bug 1278981); see the signing sketch after this list
  3. Bouncer (download.mozilla.org) logic updated to ignore attribution_code for XP+Vista users
  4. Firefox stub installer binaries - PENDING in Firefox 50 builds shipping November 15th - which include the post-signing data capability for the stub attribution_code and are pointed to...
  5. production-ready stubdownloader.prod.mozaws.net service/instance
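
A minimal sketch of the signing step from item 2, for reference while testing. It assumes a shared secret and a query-string-style attribution code; the function names, the example code value, and the exact canonicalization used by the real AJAX service (bug 1278981) are assumptions, not confirmed details of that service.

  import hashlib
  import hmac

  # Hypothetical shared secret between Bedrock and the stub downloader service.
  SECRET_KEY = b"not-the-real-key"

  def sign_attribution_code(attribution_code: str) -> str:
      """Return a SHA-256 HMAC over the attribution code (illustrative only)."""
      return hmac.new(SECRET_KEY, attribution_code.encode("utf-8"),
                      hashlib.sha256).hexdigest()

  def verify_attribution_code(attribution_code: str, signature: str) -> bool:
      """Constant-time comparison of the expected and presented signatures."""
      expected = sign_attribution_code(attribution_code)
      return hmac.compare_digest(expected, signature)

  code = "source%3Dwww.google.com%26medium%3Dorganic%26campaign%3D(not+set)"
  signature = sign_attribution_code(code)
  assert verify_attribution_code(code, signature)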

Open Issues

TODO

  • determine what additional test-automation coverage we might need to add to the end-to-end Bouncer tests
  • confirm and note who checks the Telemetry pings
  • confirm and note load-testing strategy
  • confirm and note performance-testing strategy

Questions:

  1. How do we (can we even?) establish a reasonable performance metric for downloading the stub installer as it works today?
    1. ...so we can measure against this baseline when we test the dynamic vs. cached-downloads Stub-Attribution Service
  2. All (supported) Windows platforms, or?
  3. Which IE + Windows versions should get the Stub Attribution (SHA-256)?
    1. By the looks of bug 1304538, IE 6-8 should get SHA-1
      1. So they're not eligible for this, right?
  4. How to test (all five?) codes/fields? (see the example after this list)
    1. Source
    2. Medium
    3. Campaign
    4. Content
    5. "Referrer" alone?
      1. Which specific browsers (vendors + versions) properly support it, and if they have it enabled, will we honor the setting?
  5. Which locale(s)?
  6. Entry points on Mozilla.org: which page(s)? Or all download pages?
    1. https://www.mozilla.org/en-US/firefox/all/
    2. https://www.mozilla.org/en-US/firefox/new/?scene=2 (and do the ?scene=[] parameters matter?)
  7. Do we need to cover the upgrade-path scenario? i.e.:
    1. user downloads and installs using the special stub installer w/tracking code
    2. we verify correct pings, etc.
    3. user upgrades later to a newer version of Firefox (pave-over install)
      1. do we still check for this ping?
  8. How about the successfully installed, then uninstalled case: check to see that client no longer sends ping?
  9. Should we test the double-install case, and what should happen? (Install, then install again, with and without uninstalling the 1st binary.)
    1. Similarly, what about the case where the user a) installs the stub with attribution and then b) upgrades to a later version of Firefox?
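
For question 4, a concrete (hypothetical) example of how the five fields might be combined into a single attribution code can make test cases easier to write. The encoding below (urlencode the fields, then percent-encode the result so it travels as one stub_attcode query parameter) is an assumption for illustration; the real Bedrock/stub-installer encoding should be confirmed first.

  from urllib.parse import quote, urlencode

  # Example attribution fields from question 4 above; the values are made up.
  fields = {
      "source": "www.google.com",
      "medium": "organic",
      "campaign": "(not set)",
      "content": "(not set)",
  }

  # Encode the fields, then percent-encode the whole string so it can be
  # carried as a single query parameter.
  attribution_code = quote(urlencode(fields), safe="")
  print(attribution_code)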

Testing Approaches

Manual Tests:

  1. Positive (happy-path) Tests:
    1. Do Not Track disabled
      • Click an ad banner or link which has an attribution (source? medium? campaign? content? referrer?) code
      • Verify that the URL which takes you to a Mozilla.org download page contains a stub_attcode param with a valid value
      • Verify that the same valid stub_attcode param/value gets passed to the Mozilla.org Download Firefox button (see the sketch after this list)
      • Download and install the stub installer
      • Verify (how?) that upon successful installation, the stub installer sends a "success!" ping with the same stub_attcode param/value
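
A rough sketch of how the stub_attcode checks in the happy path above could be scripted (the same idea applies if the steps are driven through Selenium). Only the parameter name stub_attcode comes from the steps above; the example URLs and helper are hypothetical.

  from urllib.parse import parse_qs, urlparse

  def extract_stub_attcode(url: str) -> str:
      """Return the stub_attcode value carried by a page or button URL."""
      values = parse_qs(urlparse(url).query).get("stub_attcode", [])
      assert len(values) == 1 and values[0], "expected one non-empty stub_attcode"
      return values[0]

  # Hypothetical example: the code on the landing-page URL should be the same
  # code passed to the Download Firefox button.
  landing = "https://www.mozilla.org/en-US/firefox/new/?stub_attcode=abc123"
  button = ("https://download.mozilla.org/"
            "?product=firefox-stub&os=win&lang=en-US&stub_attcode=abc123")
  assert extract_stub_attcode(landing) == extract_stub_attcode(button)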

Regression Testing

Goal: Ensure that Bouncer + Mozilla.org continue to deliver the correct builds to the appropriate users

  • All browsers available on Windows XP/Vista (test IE 6, IE 7, Chrome) except for Firefox should still get the SHA-1 stub installer
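
One way this regression check might be automated is to query Bouncer directly and inspect the redirect target, as sketched below. The product alias firefox-sha1 and the idea that the SHA-1 build is identifiable from the Location header are assumptions that need confirming against the real Bouncer configuration.

  import requests

  BOUNCER = "https://download.mozilla.org/"

  def bouncer_redirect(product: str, os_: str = "win", lang: str = "en-US") -> str:
      """Return the Location header Bouncer answers with (no actual download)."""
      resp = requests.get(
          BOUNCER,
          params={"product": product, "os": os_, "lang": lang},
          allow_redirects=False,
          timeout=30,
      )
      assert resp.status_code in (301, 302), resp.status_code
      return resp.headers["Location"]

  # Assumed product alias for the SHA-1 signed stub; confirm before relying on it.
  print(bouncer_redirect("firefox-sha1"))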

Negative Testing

  1. Ensure other installers/binaries don't have/pass on the URL param/stub code
    1. e.g. Mac, Linux, full Firefox for Windows installers
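
A sketch of one such negative check: scrape the /firefox/all/ page and assert that non-Windows download links carry no attribution parameter. The parameter name stub_attcode and the crude link extraction are assumptions for illustration; a real test would drive the rendered DOM instead.

  import re
  import requests

  PAGE = "https://www.mozilla.org/en-US/firefox/all/"
  html = requests.get(PAGE, timeout=30).text

  # Very rough link extraction, good enough for a sketch.
  links = re.findall(r'href="(https://download\.mozilla\.org/[^"]+)"', html)

  for link in links:
      if "os=osx" in link or "os=linux" in link:
          # Non-Windows builds should never carry the stub attribution param.
          assert "stub_attcode" not in link, link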

Performance Testing

Goal: Understand, mitigate, and, where possible, quantify the performance impact that adding a dynamic Stub Attribution service has on downloading stub-installer builds on Windows clients (without regressing the delivery performance of builds for other platforms?)
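
A minimal sketch of how a baseline for Questions item 1 might be gathered: time the stub-installer download via Bouncer with and without an attribution code and compare medians. The product name and the stub_attcode parameter are assumptions, and this is not a substitute for proper load testing.

  import statistics
  import time
  import requests

  URL = "https://download.mozilla.org/"
  PARAMS = {"product": "firefox-stub", "os": "win", "lang": "en-US"}

  def median_download_seconds(extra_params=None, runs=5):
      """Median wall-clock time to fetch the stub installer body."""
      params = dict(PARAMS, **(extra_params or {}))
      samples = []
      for _ in range(runs):
          start = time.monotonic()
          resp = requests.get(URL, params=params, timeout=120)
          resp.raise_for_status()
          samples.append(time.monotonic() - start)
      return statistics.median(samples)

  baseline = median_download_seconds()
  with_code = median_download_seconds({"stub_attcode": "abc123"})  # hypothetical value
  print(f"baseline={baseline:.2f}s  with attribution={with_code:.2f}s")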

Load/Soak Tests

Fuzz/Security Testing

  • Purpose:
    • Surface potential incorrectly-handled errors (tracebacks/HTTP 5xx, etc.) and potential security issues
  • Likely using OWASP ZAP, manually and/or via the CLI, using Docker/Jenkins (see the sketch after this list)
    • Against Bouncer
    • Against Stub Downloader service
    • Against Mozilla.org
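
A sketch of driving ZAP's baseline (passive) scan from a Jenkins job via Docker, using the documented owasp/zap2docker-stable image and its zap-baseline.py entry point; the target list below is an example only, and safe staging hosts should be agreed on before running it.

  import subprocess

  TARGETS = [
      "https://www.mozilla.org/",
      "https://download.mozilla.org/",
      # "https://stubdownloader.prod.mozaws.net/",  # confirm a safe staging equivalent first
  ]

  for target in TARGETS:
      # zap-baseline.py exits non-zero when warnings or failures are reported.
      result = subprocess.run(
          ["docker", "run", "--rm", "-t", "owasp/zap2docker-stable",
           "zap-baseline.py", "-t", target],
          check=False,
      )
      print(f"{target}: exit code {result.returncode}")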

Test Automation Coverage:

  • What will the Bedrock unit tests cover?
    • Where/how frequently will they run? With each pull request/commit/release?
  • What will the Bouncer tests cover?
    • Where/how frequently will they run? With each pull request/commit/release? On a cron?
  • What will the Stub Attribution unit tests cover?
    • Where/how frequently will they run? With each pull request/commit/release? On a cron?

References