- Set up system performance tests to run on win32 systems
- Tests the time it takes Firefox to load a series of web pages
- Tests the startup time of Firefox, useful for testing the effect of various extensions on startup time
- Get this up and running ASAP, then expand and fine-tune from there.
- Performance (start time, page render time, memory footprint) is going to be an increasing area of focus. Performance complaints are one of the most common problems raised by users.
- Performance requires constant measurement in a way that is as close as possible to what users perceive and care about.
- Constant performance measurements are critical to understanding whether recent code check-ins have caused problems.
- Many of our tests are very old and do not accurately reflect real-world usage.
- It is very difficult for developers to reproduce test results on their local machines, so they have to check code in and wait for the tinderboxes to cycle before they see results. This is a more difficult goal to realize - developers would need the performance testing framework properly installed on their machines and would then have to create their own baselines.
- These new performance tests are slated for use with Firefox 3. After they have been established to be effective and useful we may consider phasing out the old tests - this is highly dependent upon getting these tests ported to all platforms. We expect both the old Tp and this new Tp to end up running concurrently for the foreseeable future.
- Annie Sullivan has created a Python framework to do this kind of testing
- Located in the trunk at /mozilla/testing/performance/win32
- Framework has been integrated to make use of the new web page set of 100 top pages
- Info on building page sets here: Web Page Set
- Framework has been integrated with an extension of the new build graph server to display results as bar graphs
- Info on the graph server extension here: Graph Server (Warning: Pretty graph pictures)
- The new Tp is sampled every second during its cycle through the 100 local web page copies. All of this data is saved and then sent to the graph server, which allows for finer-grained examination of results - including the ability to view the difference in load time for a specific web page, instead of a single general number for the entire test
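Per-page data makes targeted comparisons possible. A minimal sketch (the data format and names are hypothetical, not the framework's actual structures) of diffing per-page load times between two runs:

```python
def per_page_deltas(baseline, current):
    """Return the change in load time (ms) for each page present in both runs.

    baseline and current map page name -> load time in ms
    (a hypothetical format for illustration).
    """
    return {page: current[page] - baseline[page]
            for page in baseline if page in current}

# A regression isolated to a single page stands out immediately,
# where a whole-test average might hide it.
deltas = per_page_deltas(
    {"cnn.com": 410.0, "google.com": 95.0},
    {"cnn.com": 455.0, "google.com": 96.0},
)
```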
- Tp cycles through the given web page set a fixed number of times and then averages the results. This should reduce the effect of network jitter on the results
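The averaging step above could be sketched like this - a simplified illustration, not the framework's actual code, with a hypothetical per-cycle data format:

```python
def average_per_page(cycles):
    """Average each page's load time across all cycles.

    cycles is a list of dicts mapping page name -> load time (ms),
    one dict per pass through the page set (hypothetical format).
    """
    pages = cycles[0].keys()
    return {page: sum(c[page] for c in cycles) / len(cycles)
            for page in pages}

# Three cycles over the same page; one jittery sample is smoothed out.
avg = average_per_page([
    {"cnn.com": 400.0},
    {"cnn.com": 420.0},
    {"cnn.com": 410.0},
])
```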
- Because of the way the Python framework is written, we can sample any or all information available through the Windows WMI interface - not just memory footprint but a wide range of process information.
- The framework allows for the loading of a new profile for each test run. This gives the option of testing with fresh profiles or older profiles that represent more realistic content to the average user.
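The fresh-vs-seeded profile option could look something like the following - an illustrative sketch only (the paths, prefix, and function name are made up, not the framework's API):

```python
import shutil
import tempfile
from pathlib import Path

def make_test_profile(base_profile=None):
    """Create a throwaway profile directory for a single test run.

    If base_profile is given, its contents are copied in so the run
    starts from a realistic "used" profile; otherwise the directory
    is left empty, giving a fresh profile.
    """
    profile_dir = Path(tempfile.mkdtemp(prefix="perf_profile_"))
    if base_profile is not None:
        shutil.copytree(base_profile, profile_dir, dirs_exist_ok=True)
    return profile_dir
```

The resulting directory would then be handed to the browser (e.g. via Firefox's `-profile` flag) and deleted after the run, so successive runs never share state.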
- Need to determine the appropriate place to check in changes made to the Python framework - whether to supersede the basic framework with the rather substantial changes made to integrate with the new page set and the new graph server, or to split the changes off into another project
- What process counters are we interested in collecting? Which values are useful to developers?
- Further work on the graph server to display the results in a helpful way
- Tp has been the initial focus; we need to move on to Ts and possibly other tests TBD
- Port the tests to other platforms (Mac, Linux)