* Tests should be configurable to have different levels of concurrency

= Grinder =
We concluded that [http://grinder.sourceforge.net/ Grinder] fulfills these requirements, and excels in ways that AB and httperf can't or don't.

= TODO =
* Analyze traffic and load
* Come up with potential test cases
* Record test cases for deployment in Grinder (see the sketch below)
* Do progressive load tests until melting point
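
As a rough illustration of what a recorded test case might look like once deployed in Grinder, here is a minimal Jython script sketch. It assumes Grinder 3's wrap()-style instrumentation and Logger.output(); the test number, page name, and URL are placeholders rather than an agreed-upon test case.

<pre>
# Minimal Grinder 3 script sketch (Jython). The test number, name, and URL
# below are placeholders, not an agreed-upon test case.
from net.grinder.script.Grinder import grinder
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

# Associate a numbered Test with an instrumented HTTPRequest so the console
# aggregates timing statistics under "Main page".
mainPageTest = Test(1, "Main page")
request = mainPageTest.wrap(HTTPRequest())

class TestRunner:
    # Each worker thread creates one TestRunner and calls it once per run.
    def __call__(self):
        result = request.GET("https://addons.mozilla.org/")
        grinder.logger.output("HTTP status: %s" % result.statusCode)
</pre>

The script would be driven by a grinder.properties file that points grinder.script at it and sets the number of worker processes and threads.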

= The Guessing Game =
The Grinder results will not be completely accurate; that is the nature of load testing. But there are also things we can do with peak/off-peak load numbers to understand how the load test results may be skewed by the external effect of other apps and the overall higher stress on shared resources.

We discussed gathering some cumulative NS/app/db stats to get a better handle on what our load test numbers mean, and to gain some perspective on the margin of error.

mrz is going to give us some numbers based on [https://nagios.mozilla.org/graphs/mpt/Systems/ cumulative Cacti results].

= How to Create a Test Case =

= Test Cases =

= Where to Check-in Test Cases =

= Test Variance =
* By concurrent requests
** 100
** 1000
** 2000
* By multiple input vars (GET), as sketched below
** All unique vars, so the cache hit rate is effectively zero
** Mixture of unique vars, possibly 50% duplicated
** Only one URI across all requests
* Otherwise differentiated by request URI
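
The concurrency axis would presumably be driven from grinder.properties (grinder.processes, grinder.threads, grinder.runs), so the script itself only has to vary the input. As a sketch of the input-var axis, the helper below builds query strings for the three cache scenarios; the base URL and the "q" parameter are hypothetical.

<pre>
# Sketch: building GET query strings to target a cache-hit scenario.
# The "q" parameter is a placeholder, not a real AMO query parameter.
import random

UNIQUE, MIXED, SINGLE = "unique", "mixed", "single"

def build_url(base, mode, run_number):
    if mode == UNIQUE:
        # A fresh value on every run (combine with the thread number for
        # full uniqueness across workers): cache hit rate close to zero.
        return "%s?q=term-%d" % (base, run_number)
    elif mode == MIXED:
        # Roughly half the runs reuse one value: ~50% duplicated.
        if random.random() < 0.5:
            return "%s?q=term-%d" % (base, run_number)
        return "%s?q=term-0" % base
    else:
        # SINGLE: one URI across all requests, maximal cache hits.
        return "%s?q=term-0" % base

# e.g. build_url("https://addons.mozilla.org/search", MIXED, grinder.runNumber)
</pre>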

= Pages We Want To Test =
* Main page
* Search page
* Category listing
* Add-on main page
* Services
** Update check
** Blocklist
** PFS
* RSS / Feeds
* Vanilla
** Addon-specific discussion page
** Top page
* Others? Mark? Shaver?
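
Once the page list is settled, each entry could be registered as a separate numbered Grinder Test so per-page timings show up in the console. A rough Jython sketch follows, again assuming Grinder 3's wrap()-style API; every URL here is a placeholder that would need to be replaced with the real AMO or services endpoint.

<pre>
# Sketch: one numbered Grinder Test per page class. All URLs are placeholders;
# the real AMO and services endpoints still need to be filled in.
from net.grinder.script import Test
from net.grinder.plugin.http import HTTPRequest

PAGES = [
    (1, "Main page", "https://addons.mozilla.org/"),
    (2, "Search page", "https://addons.mozilla.org/search?q=example"),
    (3, "Category listing", "https://addons.mozilla.org/browse/example-category"),
    # ... add-on pages, update check, blocklist, PFS, feeds, Vanilla pages
]

# Wrap each HTTPRequest so Grinder records timings under the page name.
requests = [(Test(num, name).wrap(HTTPRequest()), url)
            for (num, name, url) in PAGES]

class TestRunner:
    def __call__(self):
        # Hit every page once per run; weighting by real traffic share
        # (from the Cacti/NS stats above) would be a follow-up refinement.
        for request, url in requests:
            request.GET(url)
</pre>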