Breakpad/Design/Loadtesting

Revision as of 16:28, 8 June 2007

Load Testing Plan

Below is a list of items we plan to do for load testing. In general, the best way we've found to test load is to replay access logs against a test instance.

The main reason we use log replay is that it imitates actual load better than ab (ApacheBench) does: ab exercises only a single page and mostly tests the cache.
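The replay step could be sketched roughly as follows; the log format (Apache combined), the GET-only filter, and the target host are assumptions of the sketch, not part of the plan:

```python
# Sketch of a log-replay load driver: parse Apache combined-format access-log
# lines and re-issue the successful GET requests with a thread pool.
import re
import concurrent.futures
import urllib.request

# Matches the method, path, and status of an Apache combined log line.
LOG_LINE = re.compile(r'"(?P<method>[A-Z]+) (?P<path>\S+) [^"]+" (?P<status>\d{3})')

def parse_paths(log_lines):
    """Extract the request paths of successful GETs from access-log lines."""
    paths = []
    for line in log_lines:
        m = LOG_LINE.search(line)
        if m and m.group("method") == "GET" and m.group("status").startswith("2"):
            paths.append(m.group("path"))
    return paths

def replay(base_url, paths, concurrency=10):
    """Re-issue the logged requests against base_url with a thread pool."""
    def fetch(path):
        with urllib.request.urlopen(base_url + path) as resp:
            return resp.status
    with concurrent.futures.ThreadPoolExecutor(max_workers=concurrency) as pool:
        return list(pool.map(fetch, paths))
```

Varying the `concurrency` argument between runs is how we'd probe different concurrency levels against the same request mix.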

General Approach

Since we don't yet know what req/s we need to sustain, or at what concurrency, here are some of the preliminary things we should find out in our first run:

  • where the current bottleneck is
  • how we can improve it
  • where other weak points in performance are

It would also be useful to use xdebug or an equivalent to profile the reporter and collector web applications, though I have only done that in PHP, not Python. It's useful for finding expensive operations: taking the traces and throwing them into KCachegrind has proven useful in the past for our PHP applications.
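For the Python pieces, the standard library's cProfile and pstats modules can play a role similar to xdebug; a minimal sketch, with a made-up stand-in for a real request handler:

```python
# Sketch: profile a callable with the stdlib cProfile module and return a
# report of the most expensive calls, similar in spirit to xdebug traces.
import cProfile
import pstats
import io

def handle_request():
    # Hypothetical stand-in for a reporter/collector request handler.
    return sum(i * i for i in range(10000))

def profile_call(func):
    """Run func under cProfile and return a text report of the top calls."""
    profiler = cProfile.Profile()
    profiler.enable()
    func()
    profiler.disable()
    out = io.StringIO()
    pstats.Stats(profiler, stream=out).sort_stats("cumulative").print_stats(5)
    return out.getvalue()
```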

Stuff We Need to Start Testing

  • Need collector logs and example POST data to send to the collector.
  • Need reporter logs to replay.
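The example POST data could be constructed along these lines; a minimal sketch assuming the collector accepts Breakpad-style multipart/form-data submissions (the `upload_file_minidump` field name and the metadata keys are assumptions):

```python
# Sketch: build an example multipart/form-data crash-report POST body like the
# ones a Breakpad client sends; field names here are assumptions.
import uuid

def build_crash_post(fields, dump_bytes, boundary=None):
    """Return (content_type, body) for a multipart POST carrying a minidump."""
    boundary = boundary or uuid.uuid4().hex
    parts = []
    for name, value in fields.items():
        parts.append(
            '--%s\r\nContent-Disposition: form-data; name="%s"\r\n\r\n%s\r\n'
            % (boundary, name, value)
        )
    parts.append(
        '--%s\r\nContent-Disposition: form-data; name="upload_file_minidump"; '
        'filename="crash.dmp"\r\nContent-Type: application/octet-stream\r\n\r\n'
        % boundary
    )
    body = ("".join(parts).encode("ascii")
            + dump_bytes
            + ("\r\n--%s--\r\n" % boundary).encode("ascii"))
    content_type = "multipart/form-data; boundary=%s" % boundary
    return content_type, body
```

A body built this way can be saved to disk and replayed repeatedly by whatever load driver we settle on.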

Collector

  • Need collector logs and example POST data to send to the collector.
  • Need to estimate how many average req/s we want to see, and at what concurrency.
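One way to ground the req/s estimate is to count requests per second in the access logs we already have; a minimal sketch assuming Apache-style timestamps:

```python
# Sketch: estimate average and peak req/s from access-log timestamps
# (Apache common/combined format, e.g. [08/Jun/2007:16:28:00 -0700]).
import re
from collections import Counter

TIMESTAMP = re.compile(r"\[(\d{2}/\w{3}/\d{4}:\d{2}:\d{2}:\d{2})")

def reqs_per_second(log_lines):
    """Return (average, peak) requests per second over the seconds seen."""
    per_second = Counter()
    for line in log_lines:
        m = TIMESTAMP.search(line)
        if m:
            per_second[m.group(1)] += 1
    if not per_second:
        return 0.0, 0
    return sum(per_second.values()) / len(per_second), max(per_second.values())
```

The peak matters more than the average here, since the collector has to absorb crash spikes rather than a steady trickle.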

Processor

The processor will be the tricky part, but it's more scalable than the other pieces since we control the rate at which it consumes reports. It would be useful to build rate and timing instrumentation into the processor so we understand how fast it can process reports.
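The instrumentation could be as simple as the following sketch; the class and the processing-loop names are made up, not the processor's actual API:

```python
# Sketch: lightweight throughput tracking for the processor, assuming reports
# are handled one at a time in a loop and each is timed individually.
import time

class RateTracker:
    """Track how many reports were processed and how fast."""
    def __init__(self):
        self.count = 0
        self.total_seconds = 0.0

    def record(self, seconds):
        self.count += 1
        self.total_seconds += seconds

    def reports_per_second(self):
        if self.total_seconds == 0:
            return 0.0
        return self.count / self.total_seconds

# Usage inside a hypothetical processing loop:
#   tracker = RateTracker()
#   start = time.time()
#   process(report)                      # made-up processing call
#   tracker.record(time.time() - start)
```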

Reporter

  • Need reporter logs to replay. We can create some locally.
  • Need to estimate how many average req/s we want to see, and at what concurrency.
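Creating reporter logs locally could look like this; a minimal sketch emitting Apache combined-format lines, with made-up paths and report IDs:

```python
# Sketch: generate synthetic Apache combined-format access-log lines to replay
# against the reporter; the URL paths and report IDs are invented.
import random

PATHS = ["/report/index/%s", "/report/list", "/query"]

def synth_log_line(rng):
    path = rng.choice(PATHS)
    if "%s" in path:
        # Substitute a random 128-bit hex report ID into the path template.
        path = path % ("%032x" % rng.getrandbits(128))
    return ('127.0.0.1 - - [08/Jun/2007:16:28:00 -0700] '
            '"GET %s HTTP/1.0" 200 1024 "-" "loadtest"' % path)

def synth_log(n, seed=0):
    """Return n synthetic log lines; a fixed seed keeps runs reproducible."""
    rng = random.Random(seed)
    return [synth_log_line(rng) for _ in range(n)]
```

Seeding the generator keeps successive load-test runs comparable, since each run replays the same request mix.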