Firefox/Stub Attribution/Test Plan

== Questions ==
# How can we test through the cookied-Bedrock flow, at scale, for unique builds?
# How can we test (and try to break) Jeremy's 10-minute cache? And what does it cover, binary/condition-wise? (i.e. what's the driving logic/algorithm for caching vs. serving fresh?)
# How (or can we even?) performance-test the UI experience?
# What can we drive using WebDriver? (a sketch follows this list)
# Can we make two identical requests using the go-bouncer e2e tests, and check that we get a cached binary? (a sketch follows this list)
## Likewise, can we make two different requests (i.e. with just one unique key attribute) and get fresh, unique binaries in that case?
# How (or can we even?) establish a reasonable performance metric around downloading the stub installer, currently? (a sketch follows this list)
## ...so we can measure against this baseline when we test the dynamic vs. cached-downloads Stub-Attribution Service
# How do we test (all five?) codes/fields?
## Source
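
For the WebDriver question above, a minimal Selenium sketch against the Bedrock download page. The page URL, the utm_* values, and the <code>a.download-link</code> selector are assumptions to verify against the live markup (the flow may also route through the /download/thanks/ page); the only check here is that attribution survives into the download link's href.

<syntaxhighlight lang="python">
from selenium import webdriver
from selenium.webdriver.common.by import By

# Page URL, utm_* values, and the CSS selector are placeholders --
# check them against the live Bedrock markup before trusting a failure.
PAGE = ("https://www.mozilla.org/en-US/firefox/new/"
        "?utm_source=test-source&utm_medium=test-medium")

driver = webdriver.Firefox()
try:
    driver.get(PAGE)
    # Bedrock derives an attribution cookie from the utm_* parameters;
    # a first sanity check is that attribution reaches the download link.
    link = driver.find_element(By.CSS_SELECTOR, "a.download-link")
    href = link.get_attribute("href")
    print("download href:", href)
    assert "attribution_code=" in href, "expected attribution on the download URL"
finally:
    driver.quit()
</syntaxhighlight>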
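For the identical-vs-unique request question, a sketch of the comparison logic, assuming attribution reaches bouncer as an <code>attribution_code</code> query parameter (a signed <code>attribution_sig</code> is likely also required and is omitted here; verify both names against the go-bouncer e2e suite). Identical attribution should hash the same if the cache is serving; changing one attribute should produce a fresh, differently attributed binary.

<syntaxhighlight lang="python">
import hashlib

import requests

# Assumed endpoint and parameter names -- verify against the go-bouncer
# e2e tests; a signed attribution_sig is likely also required and is
# omitted from this sketch.
BOUNCER = "https://download.mozilla.org/"
BASE = {"product": "firefox-stub", "os": "win", "lang": "en-US"}

def stub_hash(attribution_code: str) -> str:
    """Download one stub installer and return a SHA-256 of its bytes."""
    # requests URL-encodes the parameter value for us.
    params = dict(BASE, attribution_code=attribution_code)
    resp = requests.get(BOUNCER, params=params, allow_redirects=True, timeout=120)
    resp.raise_for_status()
    return hashlib.sha256(resp.content).hexdigest()

# Two identical requests: the 10-minute cache should return the same binary.
first = stub_hash("source=test&medium=test")
second = stub_hash("source=test&medium=test")
assert first == second, "identical attribution should yield the cached binary"

# One key attribute changed: expect a freshly attributed, unique binary.
third = stub_hash("source=other&medium=test")
assert third != first, "unique attribution should yield a fresh binary"
</syntaxhighlight>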
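And for the baseline-metric question, one rough starting point is simply timing full downloads of the stub installer through bouncer and summarizing the distribution, so there is a number to compare against once the dynamic vs. cached-downloads service is in place. This is a minimal sketch; the <code>firefox-stub</code> product name, os/lang values, and sample size are placeholders to adjust.

<syntaxhighlight lang="python">
import statistics
import time

import requests

# Assumed bouncer URL for the stub installer; product/os/lang are placeholders.
URL = "https://download.mozilla.org/?product=firefox-stub&os=win&lang=en-US"

def timed_download(url: str) -> float:
    """Wall-clock seconds for one complete stub-installer download."""
    start = time.perf_counter()
    resp = requests.get(url, allow_redirects=True, timeout=120)
    resp.raise_for_status()
    # With stream=False (the default) the body is fully read above,
    # so the elapsed time covers the whole transfer.
    return time.perf_counter() - start

samples = sorted(timed_download(URL) for _ in range(20))
print("median: %.2fs" % statistics.median(samples))
print("p95:    %.2fs" % samples[int(len(samples) * 0.95) - 1])
</syntaxhighlight>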