Firefox/Stub Attribution/Test Plan
Stub Attribution Test Plan
Dependencies (tools/systems):
- Firefox client code (Stub Installer)
- Bouncer - GitHub
- Mozilla.org (Bedrock) - GitHub
- Stub Downloader service - GitHub
- Telemetry
- Caveat: Telemetry data lags 3 weeks, even after the switches have been flipped
Acceptance Criteria
In order to fully test, the following are needed:
Tracking bugs: 7 total; 0 open (0%); 7 resolved (100%); 0 verified (0%)
- a staged or dark-launched Mozilla.org instance ready with Download-button logic and passed-in attribution_code (from bug 1279291), which:
- ...uses the AJAX service to sign Stub-Attribution URLs with a SHA-256 HMAC hash (bug 1278981)
- Bouncer (download.mozilla.org) logic updated to ignore attribution_code for XP+Vista users
- Firefox stub installer binaries (PENDING in Firefox 50 builds, shipping November 15th) that include support for reading post-signing attribution_code data and are pointed to...
- production-ready stubdownloader.prod.mozaws.net service/instance
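The signing step called out above (bug 1278981) is a hex-encoded SHA-256 HMAC over the attribution code. A minimal sketch in Python; the key and attribution-code values here are placeholders, with the real key living in the service config:

```python
import hashlib
import hmac

def sign_attribution_code(attribution_code, hmac_key):
    """Return the hex-encoded SHA-256 HMAC of attribution_code,
    suitable for the attribution_sig query parameter."""
    return hmac.new(hmac_key, attribution_code.encode("utf-8"),
                    hashlib.sha256).hexdigest()

# Placeholder values for illustration only.
code = "source%3Dgoogle%26medium%3Dcpc%26campaign%3Dfoo%26content%3Dbar"
sig = sign_attribution_code(code, b"example-hmac-key")
```

Bouncer then passes both attribution_code and attribution_sig through to the stub downloader, which recomputes the HMAC with the shared key and compares.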
Dark Launch Criteria
- Bedrock's front-end stub-attribution button code merged to master, pushed to prod
- Stub Attribution service (backend)
- pushed to prod: https://stubdownloader.services.mozilla.com/
- monitoring in place (with alerting) for:
- expected cache misses
- client/server error codes (HTTP 4xx, 5xx)
- RelEng: all changes listed in bug 1320773
Latest Testing Status
- verified that https://stubattribution-default.stage.mozaws.net/?product=test-stub&os=win&lang=en-US returns test-stub.exe from https://download-installer.cdn.mozilla.net/pub/firefox/nightly/experimental/bug1318456/test-stub.exe
- verified that https://bouncer-bouncer.stage.mozaws.net/?product=test-stub&os=win&lang=en-US redirects and returns test-stub.exe from https://download-installer.cdn.mozilla.net/pub/firefox/nightly/experimental/bug1318456/test-stub.exe
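The two manual checks above can be scripted. A sketch of a URL builder for the staged endpoints (hostnames taken from the checks above); an automated smoke test could fetch each URL and assert the response resolves to test-stub.exe:

```python
from urllib.parse import urlencode

STAGE_HOSTS = {
    "stub": "https://stubattribution-default.stage.mozaws.net/",
    "bouncer": "https://bouncer-bouncer.stage.mozaws.net/",
}

def stage_url(host, product="test-stub", os_="win", lang="en-US"):
    """Build a staged download URL like the ones verified above."""
    query = urlencode({"product": product, "os": os_, "lang": lang})
    return STAGE_HOSTS[host] + "?" + query
```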
Open Issues
TODO
- find Python help sourcing raw, incoming/already-collected (server-side) attribution.* data, using http://reports.telemetry.mozilla.org/render?markdown=tutorials/telemetry_hello_world.kp (or the like)
- add more sources, like:
- www.getfirefox.com (which 301 -> https://www.mozilla.org/en-US/firefox/new/?utm_medium=referral&utm_source=getfirefox-com)
- add Twitter
- which other social-media sources?
- determine what additional test-automation coverage we might need to add to the end-to-end Bouncer tests
- assert SHA-1 only served to Chrome, IE 6, 7, 8 on XP, Vista
- assert SHA-256 is served to IE 8, Windows 7
- confirm and note who checks the Telemetry pings
- confirm and note load-testing strategy
- confirm and note performance-testing strategy
- duration from Mozilla.org Download-button click to being served the correct stub-installer binary
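For the Telemetry TODO above: once pings are sourced (e.g. via the linked tutorial), a filter can pull out the attribution fields. This sketch assumes the attribution data sits under the ping's environment.settings path; the exact ping shape is an assumption to confirm against the telemetry docs:

```python
def attribution_from_ping(ping):
    """Return the attribution dict from a telemetry ping, or None if absent.
    The environment.settings.attribution path is an assumption here."""
    return ping.get("environment", {}).get("settings", {}).get("attribution")

def pings_with_attribution(pings):
    """Yield (ping, attribution) pairs for pings carrying attribution data."""
    for ping in pings:
        attribution = attribution_from_ping(ping)
        if attribution:
            yield ping, attribution
```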
Notes + Questions:
- From https://github.com/mozilla-services/go-bouncer/issues/34: "If the attribution_code parameter is set, redirect to the stub downloader service with the product, os, lang and attribution_code parameters set."
- Additionally, from https://github.com/mozilla-services/go-bouncer/pull/36#issuecomment-254936680: "This looks fine to me. It should include some sort of signature or hmac parameter that will contain the hmac signature that verifies that the request came from www.mozilla.org, but I'm very pleased that bouncer will handle the redirect."
- And, finally, from https://github.com/mozilla-services/go-bouncer/pull/36#issuecomment-255865245: "New commit passes through an attribution_sig parameter."
- From https://github.com/mozilla-services/stubattribution/pull/20/files: "If set, the `attribution_code` parameter will be verified by validating that the `attribution_sig` parameter matches the hex-encoded sha256 hmac of `attribution_code` using `HMAC_KEY`."
- From oremj: "we want every unique group of four attribution code parameters to point at the same url; so for source=google, campaign=foo, source=foo, medium=foo... attribution_code and attribution_sig should always be the same"
- From me: "and right now, looks like attribution_sig is a timestamped + more hashed value"
- Notes on caching/timeout values:
- (Stub Attribution) HMAC_TIMEOUT is 10 minutes: https://github.com/mozilla-services/stubattribution/commit/8a6cc83547705d35451fb3ad08e3f82035f04abb
- (Stub Attribution) unaltered stub-installer binary w/cert is cached for 5 minutes: https://github.com/mozilla-services/stubattribution/pull/38/files
- (Bedrock) stub-attribution cookie is now 24 hours: https://github.com/mozilla/bedrock/pull/4456/commits/4e8d5bb6bd6506d9cd2dfabef67ff2c780c8bbd9
- How can we test through cookied-Bedrock flow, at scale, for unique builds?
- How can we test (and try to break) Jeremy's 10-minute cache? What does it cover, binary/condition-wise? (i.e. what's the driving logic/algorithm for caching vs. serving fresh?)
- How/can we performance-test the UI experience?
- What can we drive, using WebDriver?
- Can we make two identical requests using the go-bouncer e2e tests, and check for/ensure we get a cached binary?
- Likewise, two different requests (i.e. with just one unique key attribute), and get fresh, unique binaries in that case?
- How (or can we even?) establish a reasonable performance metric around downloading the stub installer, currently?
- ...so we can measure against this baseline when we test the dynamic vs. cached-downloads Stub-Attribution Service
- How to test (all five?) codes/fields?
- Source
- Medium
- Campaign
- Content
- "Referrer" alone?
- Which specific browsers (vendors + versions) properly support it, and will we honor the setting if they have it enabled?
- Which locale(s)?
- Entry-points on Mozilla.com. Which page(s)? Or all download pages?
- https://www.mozilla.org/en-US/firefox/all/
- https://www.mozilla.org/en-US/firefox/new/?scene=2 (and do the ?scene=[] parameters matter?)
- Do we need to cover the upgrade-path scenario? i.e.:
- user downloads and installs using the special stub installer w/tracking code
- we verify correct pings, etc.
- user upgrades later to a newer version of Firefox (pave-over install)
- do we still check for this ping?
- How about the successfully installed, then uninstalled case: check to see that client no longer sends ping?
- Should we test, and what should happen, in the double-install case? (Install, then install again, with and without uninstalling the first binary.)
- Similarly, what about the case where the user a) installs the stub w/attribution, then b) upgrades to a later version of Firefox?
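To probe the 10-minute HMAC_TIMEOUT and the timestamped attribution_sig noted above, a test harness needs a local model of the check so it can generate both fresh and deliberately expired signatures. A sketch, where the exact message layout (code concatenated with timestamp) is an assumption, not the service's confirmed scheme:

```python
import hashlib
import hmac
import time

HMAC_TIMEOUT = 600  # seconds; matches the 10-minute window noted above

def make_signed_params(code, key, now=None):
    """Sign "<code><timestamp>" and return (timestamp, signature)."""
    ts = str(int(now if now is not None else time.time()))
    sig = hmac.new(key, (code + ts).encode("utf-8"), hashlib.sha256).hexdigest()
    return ts, sig

def verify(code, ts, sig, key, now=None):
    """Reject if the signature is wrong or older than HMAC_TIMEOUT."""
    now = now if now is not None else time.time()
    if now - int(ts) > HMAC_TIMEOUT:
        return False
    expected = hmac.new(key, (code + ts).encode("utf-8"), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, sig)
```

A cache-breaking test would replay a captured (code, ts, sig) tuple after the window elapses and assert the service rejects it.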
Testing Approaches
Manual Tests:
- Positive (happy-path) Tests:
- Ensure that https://stubattribution-default.stage.mozaws.net/?product=test-stub&os=win&lang=en-US works and returns a Telemetry-enabled (with Stub Attribution params) build
- Do Not Track disabled
- Click an ad banner or link which has an attribution (source? medium? campaign? content? referrer?) code
- Verify that the URL which takes you to a Mozilla.org download page contains a stub_attcode parameter with a valid value
- Verify that the same valid stub_attcode param/value gets passed to the Mozilla.org Download Firefox button
- Download and install the stub installer
- Verify (how?) that upon successful installation, the stub installer sends a "success!" ping with the same stub_attcode param/value
- Do Not Track disabled
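The middle steps of the happy path above (the stub_attcode surviving from the landing URL to the Download button) can be asserted mechanically. A sketch using placeholder URLs; the stub_attcode parameter name follows the steps above:

```python
from urllib.parse import parse_qs, urlparse

def attribution_param(url, name="stub_attcode"):
    """Extract the attribution parameter from a URL, or None if absent."""
    values = parse_qs(urlparse(url).query).get(name)
    return values[0] if values else None

def same_attribution(landing_url, button_url):
    """True when the Download button carries the same code the landing page got."""
    code = attribution_param(landing_url)
    return code is not None and code == attribution_param(button_url)
```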
Regression Testing
Goal: Ensure that Bouncer + Mozilla.org continue to deliver the correct builds to the appropriate users
- All browsers available on Windows XP/Vista (test IE 6, IE 7, Chrome) except for Firefox should still get the SHA-1 stub installer
Negative Testing
- Ensure other installers/binaries don't have/pass on the URL param/stub code
- e.g. Mac, Linux, full Firefox for Windows installers
Performance Testing
Goal: Understand, mitigate, and quantify, if possible, the performance impact that adding a dynamic Stub Attribution service has on downloading stub-installer builds on Windows clients (and without regressing the delivery performance of builds for other platforms?)
Load/Soak Tests
Goals:
- Generate enough traffic/load to break through server-side caching, so that we can more fully test the Mozilla.org -> Bouncer -> Stub Attribution dynamic stub-installer build scenario
- Help establish baselines and thresholds for server-side performance/availability
- Targets:
- Against Bouncer
- Against Stub Downloader service
- Against Mozilla.org?
- Tool(s):
- Loads Broker - most likely
- Loads v2?
- Locust?
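To break through server-side caching per the goal above, each load-test request should carry a distinct attribution tuple (cf. oremj's note that every unique group of four parameters maps to one URL). A sketch of a generator that a Loads/Locust task body could iterate; the value lists are placeholders:

```python
import itertools

# Placeholder values; each unique 4-tuple should yield a distinct
# signed URL and thus a cache miss on the stub downloader.
SOURCES = ["google", "bing", "getfirefox-com"]
MEDIUMS = ["cpc", "referral"]
CAMPAIGNS = ["foo", "brand"]
CONTENTS = ["a", "b"]

def attribution_codes():
    """Yield one attribution_code string per unique parameter tuple."""
    for s, m, c, n in itertools.product(SOURCES, MEDIUMS, CAMPAIGNS, CONTENTS):
        yield f"source={s}&medium={m}&campaign={c}&content={n}"
```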
Fuzz/Security Testing
- Purpose:
- Surface potential incorrectly-handled errors (tracebacks/HTTP 5xx, etc.) and potential security issues
- Likely using OWASP ZAP, manually and/or via the CLI, using Docker/Jenkins
- Against Bouncer
- Against Stub Downloader service
- Against Mozilla.org
Test Automation Coverage:
- What will the Bedrock unit tests cover?
- Where/how frequently will they run? With each pull request/commit/release?
- What will the Bouncer tests cover?
- Where/how frequently will they run? With each pull request/commit/release? On a cron?
- What will the Stub Attribution unit tests cover?
- Where/how frequently will they run? With each pull request/commit/release? On a cron?
References
- Source, medium, campaign, and content values from in-the-wild:
- Whiteboard flows/diagrams:
- Stub Attribution wiki
- Stub Attribution Project Plan (Google Doc)
- Stub Attribution project's checkin notes (Google Doc)