Firefox/Stub Attribution/Test Plan

Stub Attribution Test Plan

Dependencies (tools/systems):

  1. Firefox client code (Stub Installer)
  2. Bouncer - GitHub
    1. Staging: http://bouncer-bouncer.stage.mozaws.net/
  3. Mozilla.org (Bedrock) - GitHub
  4. Stub Downloader service - GitHub
  5. Telemetry
    • Caveat: Telemetry data lags 3 weeks, even after the switches have been flipped

Acceptance Criteria

In order to fully test, the following are needed:

ID      | Summary                                                                                               | Priority | Status
1273940 | Make stub attribution script available on the server                                                 | --       | RESOLVED
1278981 | Create service to authenticate stub attribution request                                              | --       | RESOLVED
1279291 | Construct and send attribution code from download buttons                                            | --       | RESOLVED
1306457 | Implement the whitelist for Stub Attribution `source` field                                          | --       | RESOLVED
1318456 | Stub installer with dummy cert for testing                                                           | --       | RESOLVED
1320773 | Update stub installer build to include dummy certificate for Stub Attribution data                   | P1       | RESOLVED
1324692 | Unable to see neither stub-attribution keys/values in Telemetry, nor associated postSigningData file | --       | RESOLVED
7 Total; 0 Open (0%); 7 Resolved (100%); 0 Verified (0%);

  1. a staged or dark-launched Mozilla.org instance with the download-button logic that passes in attribution_code (from bug 1279291), which...
  2. ...uses the AJAX service to sign Stub Attribution URLs with an HMAC-SHA256 hash (bug 1278981); a signing sketch follows this list
  3. Bouncer (download.mozilla.org) logic updated to ignore attribution_code for Windows XP/Vista users
  4. Firefox stub installer binaries - PENDING in Firefox 50 builds shipping November 15th - which include the stub attribution_code post-signing data capability and are pointed to...
  5. a production-ready stubdownloader.prod.mozaws.net service/instance
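
The signing step in item 2 could look roughly like the sketch below. It is a minimal illustration only, assuming a shared secret and signing over the URL-encoded attribution fields; the key and the attribution_code/attribution_sig parameter names are assumptions for illustration, not the service's actual contract.

 import hashlib
 import hmac
 from urllib.parse import urlencode
 
 # Hypothetical shared secret; the real key lives in service configuration
 # and is not documented on this page.
 HMAC_KEY = b"example-shared-secret"
 
 def sign_attribution_code(fields: dict) -> dict:
     """URL-encode the attribution fields and sign them with HMAC-SHA256."""
     attribution_code = urlencode(fields)
     signature = hmac.new(HMAC_KEY, attribution_code.encode("utf-8"),
                          hashlib.sha256).hexdigest()
     return {"attribution_code": attribution_code, "attribution_sig": signature}
 
 # Example: what the download button would append to the Bouncer URL.
 print(sign_attribution_code({"source": "www.google.com", "medium": "organic",
                              "campaign": "(not set)", "content": "(not set)"}))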

Dark Launch Criteria

  1. Bedrock's front-end stub-attribution button code merged to master, pushed to prod
  2. Stub Attribution service (backend)
    1. pushed to prod: https://stubdownloader.services.mozilla.com/
    2. monitoring in place (with alerting) for:
      1. expected cache misses
      2. server error codes, i.e. HTTP 4xx, 5xx (a simple probe sketch follows this list)
  3. RelEng: all changes listed in bug 1320773
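
As a complement to real monitoring/alerting, a throwaway probe along the lines of item 2.2 could look like this. The endpoint is the production URL above, while the product/os/lang values mirror the staging examples elsewhere on this page and are assumptions for production; cache-miss rates are only visible in server-side metrics, so this covers the error-code half only.

 import requests
 
 SERVICE = "https://stubdownloader.services.mozilla.com/"
 PARAMS = {"product": "test-stub", "os": "win", "lang": "en-US"}
 
 def probe() -> None:
     resp = requests.get(SERVICE, params=PARAMS, allow_redirects=False, timeout=30)
     # Alert-worthy condition from 2.2: client or server error codes (HTTP 4xx/5xx).
     if resp.status_code >= 400:
         print(f"ALERT: HTTP {resp.status_code} from {resp.url}")
     else:
         print(f"OK: HTTP {resp.status_code}")
 
 if __name__ == "__main__":
     probe()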

Latest Testing Status

  1. verified that https://stubattribution-default.stage.mozaws.net/?product=test-stub&os=win&lang=en-US returns test-stub.exe from https://download-installer.cdn.mozilla.net/pub/firefox/nightly/experimental/bug1318456/test-stub.exe
  2. verified that https://bouncer-bouncer.stage.mozaws.net/?product=test-stub&os=win&lang=en-US redirects and returns test-stub.exe from https://download-installer.cdn.mozilla.net/pub/firefox/nightly/experimental/bug1318456/test-stub.exe
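
A sketch of how those two checks could be repeated automatically, assuming both staging endpoints ultimately lead to the CDN URL listed above; if the Stub Attribution service streams the binary directly instead of redirecting, the final-URL assertion would need to become a content check.

 import requests
 
 CHECKS = [
     "https://stubattribution-default.stage.mozaws.net/?product=test-stub&os=win&lang=en-US",
     "https://bouncer-bouncer.stage.mozaws.net/?product=test-stub&os=win&lang=en-US",
 ]
 EXPECTED = ("https://download-installer.cdn.mozilla.net/pub/firefox/nightly/"
             "experimental/bug1318456/test-stub.exe")
 
 for url in CHECKS:
     resp = requests.get(url, allow_redirects=True, timeout=60)
     # Expect a successful download whose final hop is the experimental test-stub binary.
     assert resp.status_code == 200, f"{url} -> HTTP {resp.status_code}"
     assert resp.url == EXPECTED, f"{url} ended at {resp.url}"
     print(f"OK: {url}")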

Open Issues

TODO

Notes + Questions:

  1. Notes on caching/timeout values:
    1. (Stub Attribution) HMAC_TIMEOUT is 10 minutes: https://github.com/mozilla-services/stubattribution/commit/8a6cc83547705d35451fb3ad08e3f82035f04abb
    2. (Stub Attribution) unaltered stub-installer binary w/cert is cached for 5 minutes: https://github.com/mozilla-services/stubattribution/pull/38/files
    3. (Bedrock) stub-attribution cookie is now 24 hours: https://github.com/mozilla/bedrock/pull/4456/commits/4e8d5bb6bd6506d9cd2dfabef67ff2c780c8bbd9
  2. How can we test through the cookied-Bedrock flow, at scale, for unique builds?
  3. How can we test (and try to break) Jeremy's 10-minute cache? And what does it cover, binary/condition-wise (i.e. what's the driving logic/algorithm for caching vs. serving fresh)?
  4. How/can we performance-test the UI experience?
  5. What can we drive, using WebDriver?
  6. Can we make two identical requests using the go-bouncer e2e tests, and check for/ensure we get a cached binary? (See the sketch after this list.)
    1. Likewise, two different requests (i.e. with just one unique key attribute), and get fresh, unique binaries in that case?
  7. How (or can we even?) establish a reasonable performance metric around downloading the stub installer, currently?
    1. ...so we can measure against this baseline when we test the dynamic vs. cached-downloads Stub-Attribution Service
  8. How to test (all five?) codes/fields?
    1. Source
    2. Medium
    3. Campaign
    4. Content
    5. "Referrer" alone?
      1. Which specific browsers (vendors + versions) properly support it, and if the user has it enabled, will we honor the setting?
  9. Which locale(s)?
  10. Entry-points on Mozilla.com. Which page(s)? Or all download pages?
    1. https://www.mozilla.org/en-US/firefox/all/
    2. https://www.mozilla.org/en-US/firefox/new/?scene=2 (and do the ?scene=[] parameters matter?)
  11. Do we need to cover the upgrade-path scenario? i.e.:
    1. user downloads and installs using the special stub installer w/tracking code
    2. we verify correct pings, etc.
    3. user upgrades later to a newer version of Firefox (pave-over install)
      1. do we still check for this ping?
  12. How about the successfully-installed, then uninstalled case: check that the client no longer sends the ping?
  13. Should we test the double-install case, and what should happen? (Install, then install again, both with and without uninstalling the first binary.)
    1. Similarly, what about the case of a) installing the stub with attribution, then b) upgrading to a later version of Firefox?
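
For question 6 above, a rough two-request cache check against the staging Stub Attribution endpoint (sketched in Python rather than the go-bouncer e2e harness); treating "cached vs. fresh" as byte-identical vs. differing response bodies is an assumption about what is observable from outside the service.

 import requests
 
 BASE = "https://stubattribution-default.stage.mozaws.net/"
 
 def fetch(params: dict) -> bytes:
     resp = requests.get(BASE, params=params, timeout=120)
     resp.raise_for_status()
     return resp.content
 
 common = {"product": "test-stub", "os": "win", "lang": "en-US"}
 
 # Two identical requests: if the 5-minute cache is doing its job, the same
 # (byte-identical) binary should come back both times.
 first, second = fetch(common), fetch(common)
 print("identical requests, identical binary:", first == second)
 
 # Two requests differing in a single key attribute should be served fresh,
 # distinct binaries rather than one shared cached copy.
 other = fetch({**common, "lang": "de"})
 print("different requests, identical binary:", first == other)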

Testing Approaches

Manual Tests:

  1. Positive (happy-path) Tests:
    1. Ensure that https://stubattribution-default.stage.mozaws.net/?product=test-stub&os=win&lang=en-US works and returns a Telemetry-enabled (with Stub Attribution params) build
    2. Do Not Track disabled:
      • Click an ad banner or link which has an attribution (source? medium? campaign? content? referrer?) code
      • Verify that the URL which takes you to a Mozilla.org download page contains a stub_attcode with a valid param
      • Verify that the same valid stub_attcode param/value gets passed to the Mozilla.org Download Firefox button
      • Download and install the stub installer
      • Verify (how?) that upon successful installation, the stub installer sends a "success!" ping with the same stub_attcode param/value
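
A partial sketch for the two URL-level "verify" steps above, assuming the attribution value is visible as a query parameter on the landing-page URL and on the download button's href; the parameter name stub_attcode is taken from the steps above, and the URLs are hypothetical captures rather than real Bedrock output.

 from typing import Optional
 from urllib.parse import parse_qs, urlparse
 
 def attribution_value(url: str, param: str = "stub_attcode") -> Optional[str]:
     """Return the attribution parameter carried by a URL, if any."""
     values = parse_qs(urlparse(url).query).get(param)
     return values[0] if values else None
 
 # Hypothetical URLs captured while clicking through the flow.
 landing_url = "https://www.mozilla.org/en-US/firefox/new/?stub_attcode=source%3Dexample"
 button_href = "https://download.mozilla.org/?product=firefox-stub&os=win&stub_attcode=source%3Dexample"
 
 # The same valid code should appear on the download page and on the button.
 assert attribution_value(landing_url) is not None
 assert attribution_value(landing_url) == attribution_value(button_href)
 print("attribution code propagated:", attribution_value(button_href))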

Regression Testing

Goal: Ensure that Bouncer + Mozilla.org continue to deliver the correct builds to the appropriate users

  • All browsers available on Windows XP/Vista (test IE 6, IE 7, Chrome), except for Firefox, should still get the SHA-1 stub installer (see the sketch below)
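
A hedged sketch of that check against staging Bouncer. It assumes XP/Vista detection is driven by the User-Agent header and that both "still gets the SHA-1 stub installer" and "attribution_code is ignored" are visible in the redirect target; both assumptions would need confirming against the actual Bouncer changes referenced above.

 import requests
 
 BOUNCER = "https://bouncer-bouncer.stage.mozaws.net/"
 PARAMS = {"product": "test-stub", "os": "win", "lang": "en-US",
           # Per the acceptance criteria, this should be ignored for XP/Vista users.
           "attribution_code": "source=example"}
 # Illustrative XP-era user agent; real coverage would include IE 6/7 and Chrome.
 XP_IE7_UA = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)"
 
 resp = requests.get(BOUNCER, params=PARAMS, headers={"User-Agent": XP_IE7_UA},
                     allow_redirects=False, timeout=60)
 location = resp.headers.get("Location", "")
 # Expect a plain redirect to a SHA-1-signed stub build, with no attribution handling.
 print(resp.status_code, location)
 assert "attribution_code" not in location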

Negative Testing

  1. Ensure other installers/binaries don't have/pass on the URL param/stub code
    1. e.g. Mac, Linux, full Firefox for Windows installers
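
One way to sketch that negative check, assuming the non-stub products are requested through Bouncer with the same attribution parameter and that "doesn't pass it on" is observable as the parameter being absent from the redirect target; the product/os aliases below are illustrative only.

 import requests
 from urllib.parse import parse_qs, urlparse
 
 BOUNCER = "https://bouncer-bouncer.stage.mozaws.net/"
 # Illustrative non-stub builds: Mac, Linux, and the full Windows installer.
 TARGETS = [("firefox-latest", "osx"), ("firefox-latest", "linux64"),
            ("firefox-latest", "win64")]
 
 for product, os_name in TARGETS:
     resp = requests.get(BOUNCER,
                         params={"product": product, "os": os_name, "lang": "en-US",
                                 "attribution_code": "source=example"},
                         allow_redirects=False, timeout=60)
     location = resp.headers.get("Location", "")
     # The attribution parameter must not leak into the redirect target.
     assert "attribution_code" not in parse_qs(urlparse(location).query)
     print(f"OK: {product}/{os_name} -> {location}")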

Performance Testing

Goal: Understand, mitigate, and where possible quantify the performance impact that adding a dynamic Stub Attribution service has on downloading stub-installer builds on Windows clients (and without regressing the delivery performance of builds for other platforms?). A rough baseline sketch follows.
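
A rough baseline-measurement sketch against the staging endpoints used elsewhere on this page; it only times end-to-end wall-clock per download from a single client, which is a crude proxy for the user-facing impact.

 import statistics
 import time
 import requests
 
 URLS = {
     "bouncer (static)":
         "https://bouncer-bouncer.stage.mozaws.net/?product=test-stub&os=win&lang=en-US",
     "stub attribution (dynamic)":
         "https://stubattribution-default.stage.mozaws.net/?product=test-stub&os=win&lang=en-US",
 }
 
 def time_download(url: str, runs: int = 5) -> list:
     """Time several full downloads and return the samples in seconds."""
     samples = []
     for _ in range(runs):
         start = time.monotonic()
         resp = requests.get(url, timeout=120)
         resp.raise_for_status()
         samples.append(time.monotonic() - start)
     return samples
 
 for label, url in URLS.items():
     samples = time_download(url)
     print(f"{label}: median {statistics.median(samples):.2f}s over {len(samples)} runs")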

Load/Soak Tests

Goals:

  1. Generate enough traffic/load against the Mozilla.org -> Bouncer -> Stub Attribution dynamic stub-installer flow that we try to break through server-side caching, so that scenario can be tested more fully (see the sketch below)
  2. Help establish baselines and thresholds for server-side performance/availability
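
A minimal concurrency sketch against the staging Stub Attribution endpoint; a real load/soak run would use a proper load-testing tool, but this shows the shape of "many unique requests so the cache cannot absorb them all". Varying the attribution_code value per request is an assumption about what makes a request unique to the cache.

 from concurrent.futures import ThreadPoolExecutor
 import requests
 
 BASE = "https://stubattribution-default.stage.mozaws.net/"
 
 def hit(i: int) -> int:
     # Vary one parameter per request so each one is a potential cache miss.
     params = {"product": "test-stub", "os": "win", "lang": "en-US",
               "attribution_code": f"campaign=load-test-{i}"}
     return requests.get(BASE, params=params, timeout=120).status_code
 
 with ThreadPoolExecutor(max_workers=20) as pool:
     codes = list(pool.map(hit, range(200)))
 
 # Status-code histogram: spikes of 5xx under load are what we are looking for.
 print({code: codes.count(code) for code in sorted(set(codes))})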

Fuzz/Security Testing

  • Purpose:
    • Surface potential incorrectly-handled errors (tracebacks/HTTP 5xx, etc.) and potential security issues
  • Likely using OWASP ZAP, either manually and/or via the CLI, using Docker/Jenkins
    • Against Bouncer
    • Against Stub Downloader service
    • Against Mozilla.org
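
Complementary to a ZAP scan, a small parameter-fuzzing sketch against the staging Stub Downloader service; the payload list is illustrative, and the only assertion is the one stated above (no tracebacks/HTTP 5xx for malformed input).

 import requests
 
 TARGET = "https://stubattribution-default.stage.mozaws.net/"
 # Illustrative malformed / hostile values for each query parameter.
 PAYLOADS = ["", "A" * 10000, "../../etc/passwd", "<script>alert(1)</script>",
             "%00", "névé ☃"]
 
 for field in ("product", "os", "lang", "attribution_code"):
     for payload in PAYLOADS:
         params = {"product": "test-stub", "os": "win", "lang": "en-US", field: payload}
         resp = requests.get(TARGET, params=params, timeout=60)
         # Malformed input should produce a clean 4xx, never a 5xx/traceback.
         assert resp.status_code < 500, (field, payload, resp.status_code)
 print("no 5xx responses observed")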

Test Automation Coverage:

  • What will the Bedrock unit tests cover?
    • Where/how frequently will they run? With each pull request/commit/release?
  • What will the Bouncer tests cover?
    • Where/how frequently will they run? With each pull request/commit/release? On a cron?
  • What will the Stub Attribution unit tests cover?
    • Where/how frequently will they run? With each pull request/commit/release? On a cron?

References