Update:Remora Load Testing
Revision as of 19:19, 5 October 2006
Goals
- Tests should be distributed, run from multiple nodes
- Tests should return req/s capability
- Tests should return metrics on HTTP error codes and/or success/failure rates
- Tests should test with cache and without -- and at varying cache-hit rates between
- Tests should be configurable to have different levels of concurrency
Grinder
We concluded that Grinder fulfills these requirements and excels in areas where AB and httperf fall short.
The Guessing Game
The Grinder results will not be completely accurate; that is the nature of load testing. However, we can use peak/off-peak load numbers to understand how the test results might be skewed, and to account for the external effect of other apps and the generally higher stress on shared resources.
We discussed gathering cumulative NS/app/db stats to get a better sense of what our load-test numbers mean, and to gain some perspective on the margin of error.
mrz is going to give us some numbers based on cumulative Cacti results.
Test Variance
- By concurrent requests
  - 100
  - 1000
  - 2000
- By multiple input vars (GET)
  - All unique vars, causing the cache-hit rate to be zero
  - Mixture of unique vars, possibly 50% duplicated
  - Only one URI across all requests
- Otherwise differentiated by request URI
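The cache-hit mixtures above have to be produced by whatever generates the test requests. A minimal sketch (plain Python; the function name and "var-N" values are illustrative, not part of any plan above) of generating GET parameter values with a target duplication rate:

```python
import random

def make_query_values(n_requests, duplicate_fraction, seed=0):
    """Generate GET parameter values so that roughly `duplicate_fraction`
    of requests reuse an earlier value (potential cache hits) and the
    rest are unique (cache misses)."""
    rng = random.Random(seed)
    values = []
    for i in range(n_requests):
        if values and rng.random() < duplicate_fraction:
            values.append(rng.choice(values))   # repeat -> potential cache hit
        else:
            values.append("var-%d" % i)         # unique -> cache miss
    return values

vals = make_query_values(1000, 0.5)
unique_ratio = len(set(vals)) / float(len(vals))
```

Setting `duplicate_fraction` to 0.0 gives the all-unique (zero cache-hit) case, and 0.5 approximates the 50%-duplicated mixture.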
Pages We Want To Test
- Main page
- Search page
- Category listing
- Add-on main page
- Services
  - Update check
  - Blocklist
  - PFS
- RSS / Feeds
  - Vanilla
  - Addon-specific discussion page
  - Top page
- Others? Mark? Shaver?
Grinder Installation
Grinder is a distributed Java application scriptable in Jython. A central GUI application known as the Console marshals a herd of Agents running on other machines. The Agents spawn processes which, in turn, run tests in threads. The tests, written in Jython, are intended to hit some resource repeatedly. The Agents collect response statistics from the processes and report back to the Console. The Console aggregates and summarizes the data for presentation within its GUI. The Console also can save the aggregated results as a CSV file.
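A test script along the lines described above is a Jython class named TestRunner, instantiated once per thread and called once per run. The following sketch uses the Grinder 3 script interface; it only runs inside a Grinder worker process (the net.grinder classes are supplied by the framework), and the URL is a placeholder:

```python
# Grinder test script (Jython). Not standalone Python: the net.grinder.*
# classes come from the Grinder jars on the worker's classpath.
from net.grinder.script import Test
from net.grinder.script.Grinder import grinder
from net.grinder.plugin.http import HTTPRequest

# Wrapping the request object in a Test makes Grinder record timing
# statistics for every call made through it.
test1 = Test(1, "Main page")
request = test1.wrap(HTTPRequest())

class TestRunner:
    def __call__(self):
        # Each worker thread invokes this once per run.
        result = request.GET("http://host.example/")
        grinder.logger.output("status: %d" % result.statusCode)
```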
Setting up Grinder
Setting up the application is the same for both the Agents and the Console. We will probably create a custom tar file for a distribution. Once untarred, it will only be necessary to edit an environment setup file to adjust paths.
(TO-DO enumerate actual steps)
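Until those steps are written up, the main file to know about is grinder.properties, which each Agent reads at startup to learn how many worker processes and threads to spawn and where the Console lives. A sketch with illustrative values (the script path and console host are placeholders, not a recommended configuration):

```
# grinder.properties -- read by each Agent at startup
grinder.processes = 2                  # worker processes per Agent
grinder.threads = 50                   # threads per worker process
grinder.runs = 0                       # 0 = run until the Console stops the test
grinder.script = tests/main_page.py    # Jython test script (hypothetical path)
grinder.consoleHost = console.example.org
grinder.consolePort = 6372             # Grinder's default console port
```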
Running the tests
Tests are run manually by starting the Agents and then the Console.
Agents are started at a command line by typing:
...
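The exact commands belong in the environment setup file mentioned above; as a sketch, the standard Grinder entry points are the net.grinder.Grinder and net.grinder.Console classes (the jar path below is an assumption about where the tarball lands):

```
# On each Agent machine (reads grinder.properties from the current directory):
java -cp /opt/grinder/lib/grinder.jar net.grinder.Grinder grinder.properties

# On the central machine:
java -cp /opt/grinder/lib/grinder.jar net.grinder.Console
```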