QA/Automation/Projects/Endurance Tests



Overview

We're exploring ways to use Mozmill and its testing capabilities to determine whether Firefox can sustain the continuous load expected from simulated 'real' web usage. These tests will ultimately build a simple real-world model that applies a high but realistic load pattern of browser actions in order to detect any potential:

  • Browser performance degradation
  • Crashes

Initially, this project will focus on setting up an environment with a new test harness that loops through simple test snippets to expose the above areas of concern in Firefox.

Name: Endurance Tests
Leads: Dave Hunt, Anthony Hughes
Contributors: Aaron Train, Henrik Skupin, Geo Mealer
Tracking Bug(s): bug 629065
Discussions: General, Test Snippets

Goals for Q3/11

  • [DONE] Improve coverage: Improve coverage of endurance tests for real-world use-case scenarios and add additional metrics to support the MemShrink project for Firefox

Meetings

From Q4/2011, the endurance tests project will report updates and discuss topics in the regular QA Automation Services meetings.

Meeting Notes

Definitions

Test Snippet 
A test snippet is one or more actions that can be performed within the application under test (AUT), written in JavaScript according to the Mozmill specifications.
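
For illustration, a minimal snippet might look like the sketch below. The file name, test function name, and page URL are placeholders; the controller calls shown are standard Mozmill API.

  // testExampleSnippet.js - hypothetical minimal Mozmill snippet
  var setupModule = function (aModule) {
    // Grab a controller for the browser window under test
    aModule.controller = mozmill.getBrowserController();
  }

  var testLoadPage = function () {
    // One simple action: load a page and wait for it to finish
    controller.open("http://localhost/test-page.html");
    controller.waitForPageLoad();
  }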

Roadmap

Phase I - Completed

  1. Command line wrapper for Mozmill that:
    • Takes the following parameters:
      • Number of times to loop
      • Delay to wait before each iteration
    • Runs a single 'snippet' according to the above parameters
    • Outputs memory usage to the report
  2. A 'snippet' that uses the new command line wrapper and outputs results that could expose performance degradation (see the sketch after this list)
  3. New reports for endurance tests in Mozmill dashboard
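
As a sketch of how a snippet could cooperate with the wrapper's loop, delay, and memory reporting, the example below assumes a small shared helper module; the EnduranceManager name, the require path, and the addCheckpoint call are illustrative assumptions rather than a confirmed API.

  // Hypothetical endurance-aware snippet
  var endurance = require("../../lib/endurance");  // assumed helper module

  var setupModule = function (aModule) {
    aModule.controller = mozmill.getBrowserController();
    // Assumed manager that applies the iteration count and delay passed
    // on the command line and records memory usage for the report
    aModule.enduranceManager = new endurance.EnduranceManager(aModule.controller);
  }

  var testLoadPageRepeatedly = function () {
    enduranceManager.run(function () {
      enduranceManager.addCheckpoint("Load a page");
      controller.open("http://localhost/test-page.html");
      controller.waitForPageLoad();
    });
  }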

Phase II

  1. Determine achievable additional performance metrics
  2. Determine achievable client hardware details
  3. Restart between each test by default (override from command line)
  4. All tests must use local content
  5. Improve the performance of charts that contain large amounts of data

Phase III

Phase ?

  • Create a view for results listed by memory consumption
  • Gracefully handle the AUT crashing, and ensure all reports are intact
  • Output results to a location on the file system
  • Read parameters in from disk
  • Determine areas of perceived risk within the AUT
  • Write more test snippets based on above
  • Gather metrics on more system resources
  • Present results in an easy to digest way
  • Automate the running of endurance tests
  • Abort tests that exceed predetermined resource thresholds (see the sketch after this list)
  • Determine saturation points (baselines) in the AUT and monitor for major discrepancies
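
For the resource-threshold item above, one rough approach is to sample resident memory from chrome-privileged test code and abort when it passes a limit. The nsIMemoryReporterManager "resident" attribute and the threshold value below are assumptions; the eventual mechanism may look quite different.

  // Hypothetical guard: abort when resident memory passes a limit
  const MAX_RESIDENT_BYTES = 1024 * 1024 * 1024;  // assumed 1 GB threshold

  function checkResidentMemory() {
    // Query the platform memory reporter (chrome-privileged code only)
    var manager = Components.classes["@mozilla.org/memory-reporter-manager;1"]
                            .getService(Components.interfaces.nsIMemoryReporterManager);
    var resident = manager.resident;

    if (resident > MAX_RESIDENT_BYTES) {
      // Fail fast instead of letting the run grind on
      throw new Error("Resident memory " + resident + " bytes exceeds threshold");
    }

    return resident;
  }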

Resources