* Tests should be configurable to have different levels of concurrency

= Grinder =
We concluded that [http://grinder.sourceforge.net/ Grinder] fulfills these requirements, and excels in ways ab and httperf can't or don't.

= TODO =
* Analyze traffic and load
* Come up with potential test cases
* Record test cases for deployment in Grinder
* Do progressive load tests until melting point (see the configuration sketch below)
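
Concurrency levels and progressive ramp-up are set in grinder.properties rather than in the test script itself. A minimal sketch for Grinder 3 follows; the process/thread counts, duration, and script name are illustrative placeholders, not numbers we have agreed on.

<pre>
# grinder.properties -- sketch only; all values below are placeholders

# The Jython test script each worker thread runs
grinder.script = loadtest.py

# Concurrency = processes x threads (here 2 x 25 = 50 simulated users)
grinder.processes = 2
grinder.threads = 25

# runs = 0 means loop until the duration (in milliseconds) elapses
grinder.runs = 0
grinder.duration = 600000

# Progressive load: start one worker process, add another every 60 seconds
grinder.processIncrement = 1
grinder.processIncrementInterval = 60000
</pre>

An agent picks this file up when started with <tt>java -cp grinder.jar net.grinder.Grinder grinder.properties</tt>.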

= Noted Issues =
The Grinder results will not be completely accurate; that is the nature of load testing. But we can use peak/off-peak load numbers to understand how the results may be skewed by other applications and by higher overall stress on shared resources, and adjust our reading of them accordingly.

We discussed gathering cumulative NS/app/db stats to get a better handle on what our load-test numbers mean, and to gain some perspective on the margin of error.

mrz is going to give us some numbers based on [https://nagios.mozilla.org/graphs/mpt/Systems/ cumulative Cacti results].
| | = Test Cases = |

= Test Cases =

= Where to Check-in Test Cases =