Stub Attribution Test Plan
Dependencies (tools/systems):
- Firefox client code (Stub Installer)
- Bouncer - GitHub
- Mozilla.org (Bedrock) - GitHub
- Stub Downloader service - GitHub
- Telemetry
- Caveat: Telemetry data lags by 3 weeks, even after the switches have been flipped
Acceptance Criteria
In order to begin testing, the following are needed:
| ID | Summary | Priority | Status |
|---|---|---|---|
| 1273940 | Make stub attribution script available on the server | -- | RESOLVED |
| 1278981 | Create service to authenticate stub attribution request | -- | RESOLVED |
| 1279291 | Construct and send attribution code from download buttons | -- | RESOLVED |
| 1306457 | Implement the whitelist for Stub Attribution `source` field | -- | RESOLVED |
4 Total; 0 Open (0%); 4 Resolved (100%); 0 Verified (0%);
- a staged or dark-launched Mozilla.org instance with the download-button logic and passed-in attribution_code in place (from bug 1279291), which:
  - uses the AJAX service to sign Stub Attribution URLs with a SHA-256 HMAC hash (bug 1278981); a minimal signing sketch follows this list
- Bouncer (download.mozilla.org) logic updated to ignore attribution_code for Windows XP/Vista users
- Firefox stub installer binaries - PENDING: shipping in the Firefox 50 builds on November 15th - which can carry the attribution_code in their post-signing data and are pointed to...
- a production-ready stubdownloader.prod.mozaws.net service/instance
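For reference on the HMAC signing step above, the following is a minimal sketch of signing an attribution code with HMAC-SHA256 using only the Python standard library; the key, field names, and query-parameter names are illustrative assumptions rather than the actual Bedrock/stub-downloader implementation.

```python
# Hypothetical sketch: HMAC-SHA256 signing of an attribution code.
# Key, field names, and parameter names are assumptions for illustration;
# they are not taken from the Bedrock or stub-downloader source.
import hashlib
import hmac
from urllib.parse import urlencode

SECRET_KEY = b"not-the-real-key"  # placeholder; the real service holds its own secret

def build_attribution_code(source, medium, campaign, content):
    """URL-encode the attribution fields into a single code string."""
    return urlencode({
        "source": source,
        "medium": medium,
        "campaign": campaign,
        "content": content,
    })

def sign_attribution_code(code: str) -> str:
    """Return a hex HMAC-SHA256 signature over the attribution code."""
    return hmac.new(SECRET_KEY, code.encode("utf-8"), hashlib.sha256).hexdigest()

if __name__ == "__main__":
    code = build_attribution_code("mozilla.org", "referral", "(not set)", "(not set)")
    sig = sign_attribution_code(code)
    # The page would then append the signed code and signature to the Bouncer
    # download link so the stub installer can pick them up post-signing.
    print(code, sig)
```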
Open Issues
TODO
- determine what additional test-automation coverage we might need to add to the end-to-end Bouncer tests
- confirm and note who checks the Telemetry pings
- confirm and note load-testing strategy
- confirm and note performance-testing strategy
Questions:
- How do we establish (and can we even?) a reasonable baseline performance metric for downloading the stub installer today?
  - ...so we can measure against that baseline when we test dynamic vs. cached downloads through the Stub Attribution service (a rough timing sketch appears after this question list)
- Do we cover all (supported) Windows platforms, or only a subset?
- Which IE + Windows versions should get the Stub Attribution (SHA-256) installer?
  - By the looks of bug 1304538, IE 6-8 should get SHA-1
  - So they're not eligible for this, right?
- How to test (all five?) codes/fields?
- Source
- Medium
- Campaign
- Content
- "Referrer" alone?
- For which specific browsers (vendors + versions) that properly support it, and have it enabled, will we honor the setting?
- Which locale(s)?
- Entry-points on Mozilla.com. Which page(s)? Or all download pages?
- https://www.mozilla.org/en-US/firefox/all/
- https://www.mozilla.org/en-US/firefox/new/?scene=2 (and do the ?scene=[] parameters matter?)
- Do we need to cover the upgrade-path scenario? i.e.:
- user downloads and installs using the special stub installer w/tracking code
- we verify correct pings, etc.
- user upgrades later to a newer version of Firefox (pave-over install)
- do we still check for this ping?
- How about the successfully-installed-then-uninstalled case: do we check that the client no longer sends the ping?
- Should we test the double-install case, and what should happen? (Install, then install again, both with and without uninstalling the first binary.)
- Similarly, what about the case where the user a) installs via the stub with attribution and b) later upgrades to a newer version of Firefox?
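For the baseline-performance question above, the sketch below times repeated stub-installer downloads through Bouncer; the product alias and query parameters are assumptions to confirm before treating this as a real baseline.

```python
# Rough sketch: time N downloads of the stub installer via Bouncer to get a
# baseline before/after the Stub Attribution service is in the download path.
# The product alias and parameters are assumptions for illustration.
import statistics
import time

import requests

BOUNCER = "https://download.mozilla.org/"
PARAMS = {"product": "firefox-stub", "os": "win", "lang": "en-US"}

def time_downloads(n: int = 5):
    samples = []
    for _ in range(n):
        start = time.perf_counter()
        resp = requests.get(BOUNCER, params=PARAMS, allow_redirects=True, timeout=60)
        resp.raise_for_status()
        elapsed = time.perf_counter() - start
        samples.append(elapsed)
        print(f"{len(resp.content)} bytes in {elapsed:.2f}s")
    print(f"median: {statistics.median(samples):.2f}s over {n} runs")

if __name__ == "__main__":
    time_downloads()
```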
Testing Approaches
Manual Tests:
- Positive (happy-path) Tests:
- Do Not Track disabled
- Click an ad banner or link which has an attribution (source? medium? campaign? content? referrer?) code
- Verify that the URL which takes you to a Mozilla.org download page contains a stub_attcode param with a valid value (see the parsing sketch after this list)
- Verify that the same valid stub_attcode param/value gets passed to the Mozilla.org Download Firefox button
- Download and install the stub installer
- Verify (how?) that upon successful installation, the stub installer sends a "success!" ping with the same stub_attcode param/value
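To make the stub_attcode URL checks in the steps above concrete, here is a small parsing sketch; the parameter name comes from the steps above, while the example URLs and values are placeholders.

```python
# Sketch: extract and compare the stub_attcode parameter from the landing URL
# and from the Download Firefox button's href, to confirm it is carried through.
# The parameter name follows the manual steps above; the URLs are placeholders.
from typing import Optional
from urllib.parse import parse_qs, urlparse

def get_attcode(url: str) -> Optional[str]:
    """Return the stub_attcode query parameter from a URL, if present."""
    values = parse_qs(urlparse(url).query).get("stub_attcode")
    return values[0] if values else None

landing_url = "https://www.mozilla.org/en-US/firefox/new/?stub_attcode=source%3Dexample"
button_href = "https://download.mozilla.org/?product=firefox-stub&stub_attcode=source%3Dexample"

assert get_attcode(landing_url) is not None, "landing URL is missing stub_attcode"
assert get_attcode(landing_url) == get_attcode(button_href), "attribution code not carried through"
```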
Regression Testing
Goal: Ensure that Bouncer + Mozilla.org continue to deliver the correct builds to the appropriate users
- All browsers available on Windows XP/Vista except for Firefox (test with IE 6, IE 7, and Chrome) should still get the SHA-1 stub installer (see the sketch below)
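One way to spot-check this, sketched below under stated assumptions, is to request the stub product from Bouncer with an XP-era User-Agent and inspect the redirect target; the product alias and the "sha1" expectation in the comment need confirming against the actual Bouncer configuration.

```python
# Sketch: request the stub installer from Bouncer with a Windows XP User-Agent
# and check where the redirect points. The product alias and the "sha1"
# substring expectation are assumptions to verify against the real Bouncer setup.
import requests

BOUNCER = "https://download.mozilla.org/"
XP_IE8_UA = "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0)"

resp = requests.get(
    BOUNCER,
    params={"product": "firefox-stub", "os": "win", "lang": "en-US"},
    headers={"User-Agent": XP_IE8_UA},
    allow_redirects=False,
    timeout=30,
)
location = resp.headers.get("Location", "")
print(resp.status_code, location)
# Expectation (to confirm manually): the redirect should serve a SHA-1-signed
# stub build for XP/Vista clients, e.g. a URL identifying a SHA-1 build.
```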
Negative Testing
- Ensure other installers/binaries don't have/pass on the URL param/stub code
- e.g. Mac, Linux, full Firefox for Windows installers
Performance Testing
Goal: Understand, quantify, and where possible mitigate the performance impact that adding a dynamic Stub Attribution service has on downloading stub-installer builds on Windows clients (without regressing delivery performance for builds on other platforms?)
Load/Soak Tests
- Against Bouncer
- Against Stub Downloader service
- Tool(s):
  - Loads Broker - most likely
  - Loads v2?
  - Locust? (see the sketch after this list)
- Against Mozilla.org?
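If Locust ends up being the tool, the following is a minimal sketch of a load test shaped around the Bouncer stub-download request; the host, endpoints, request mix, and wait times are placeholders, and a recent Locust API (HttpUser) is assumed.

```python
# Minimal Locust sketch (assumes a recent Locust release with the HttpUser API).
# Host, paths, and query parameters are placeholders, not the agreed test plan.
from locust import HttpUser, between, task

class StubDownloadUser(HttpUser):
    # Target host is supplied at run time, e.g. --host https://download.mozilla.org
    wait_time = between(1, 5)

    @task(3)
    def download_stub(self):
        # Follow the Bouncer redirect for the plain stub installer.
        self.client.get(
            "/",
            params={"product": "firefox-stub", "os": "win", "lang": "en-US"},
            name="bouncer stub redirect",
        )

    @task(1)
    def download_with_attribution(self):
        # Same request with a placeholder (unsigned) attribution code attached.
        self.client.get(
            "/",
            params={
                "product": "firefox-stub",
                "os": "win",
                "lang": "en-US",
                "attribution_code": "source%3Dexample",
            },
            name="bouncer stub redirect with attribution",
        )
```

This would be run with something like `locust -f <file> --host https://download.mozilla.org`, with rates and targets agreed with the load-testing owner noted in the TODO list.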
Fuzz/Security Testing
- Purpose:
- Surface potential incorrectly-handled errors (tracebacks/HTTP 5xx, etc.) and potential security issues
- Likely using OWASP ZAP, manually and/or via the CLI, driven from Docker/Jenkins (one scripted option is sketched after this list)
- Against Bouncer
- Against Stub Downloader service
- Against Mozilla.org
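As one concrete option for the ZAP runs above, the sketch below drives a locally running ZAP instance through its Python client (the zapv2 package); the API key, proxy address, and target URL are placeholders, and the real targets would be the staging Bouncer, Stub Downloader, and Mozilla.org instances agreed with the service owners.

```python
# Sketch: drive a locally running OWASP ZAP instance via its Python API client
# (pip install python-owasp-zap-v2.4). API key, proxy address, and the target
# URL are placeholders for illustration only.
import time

from zapv2 import ZAPv2

TARGET = "https://stubdownloader.example.invalid/"  # placeholder target
zap = ZAPv2(
    apikey="changeme",
    proxies={"http": "http://127.0.0.1:8080", "https": "http://127.0.0.1:8080"},
)

# Spider the target, then wait for the crawl to finish.
scan_id = zap.spider.scan(TARGET)
while int(zap.spider.status(scan_id)) < 100:
    time.sleep(2)

# Run an active scan and report any alerts (tracebacks, 5xx responses, injection findings).
scan_id = zap.ascan.scan(TARGET)
while int(zap.ascan.status(scan_id)) < 100:
    time.sleep(5)

for alert in zap.core.alerts(baseurl=TARGET):
    print(alert.get("risk"), alert.get("alert"), alert.get("url"))
```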
Test Automation Coverage:
- What will the Bedrock unit tests cover?
- Where/how frequently will they run? With each pull request/commit/release?
- What will the Bouncer tests cover?
- Where/how frequently will they run? With each pull request/commit/release? On a cron?
- What will the Stub Attribution unit tests cover?
- Where/how frequently will they run? With each pull request/commit/release? On a cron?
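To help scope the Stub Attribution unit-test question above, here is a minimal pytest-style sketch of a signing round-trip check; sign_attribution_code is the hypothetical helper from the earlier signing sketch, not the real service code.

```python
# Hypothetical pytest-style checks that attribution signing is deterministic and
# detects tampering. sign_attribution_code mirrors the earlier illustrative
# sketch; it is not the actual stub-downloader or Bedrock code.
import hashlib
import hmac

SECRET_KEY = b"not-the-real-key"

def sign_attribution_code(code: str) -> str:
    return hmac.new(SECRET_KEY, code.encode("utf-8"), hashlib.sha256).hexdigest()

def test_signature_is_deterministic():
    code = "source=mozilla.org&medium=referral&campaign=(not set)&content=(not set)"
    assert sign_attribution_code(code) == sign_attribution_code(code)

def test_signature_detects_tampering():
    code = "source=mozilla.org&medium=referral&campaign=(not set)&content=(not set)"
    tampered = code.replace("referral", "email")
    good = sign_attribution_code(code)
    # compare_digest guards against timing attacks when verifying signatures
    assert not hmac.compare_digest(good, sign_attribution_code(tampered))
```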
References
- Whiteboard flows/diagrams:
- Stub Attribution wiki
- Stub Attribution Project Plan (Google Doc)
- Stub Attribution project's checkin notes (Google Doc)