QA/Automation/Projects/Endurance Tests/Documentation

Introduction

The Endurance Tests are intended for replicating and discovering issues related to degradation of application performance over time. This is achieved by running a snippet of test code repeatedly, gathering details on system resources, and reporting these metrics for review. Check out the project page for more details on Endurance Tests.

Running Endurance Tests

Mozmill Crowd

The simplest way to run Endurance Tests is via the Mozmill Crowd add-on. Once you have installed the add-on, follow these steps to run the Endurance Tests:

  1. Launch Mozmill Crowd
  2. Select an application to test (the current application will be selected by default)
  3. Select the Endurance Test-run
  4. Enter a number of iterations and a delay as required in the 'Preferences' dialog
  5. Click 'Start Test-run'

Note: If this is the first time you have run tests using Mozmill Crowd, you will be prompted to select a location to download the Mozmill environment to.

Reports

To submit results to a report server, check the box labeled 'Submit test results to a couchdb server' in the 'Preferences' dialog. By default the value will be http://mozmill-crowd.brasstacks.mozilla.com/db/, which is the preferred target for results.

Command Line

For more advanced usage you can also run the Endurance Tests via the command line. You will need to clone Mozmill's Automation Testruns repository and run the testrun_endurance.py script. Run the script with the --help flag to see a list of command line options; an example invocation follows the list:

./testrun_endurance.py --help
Usage: testrun_endurance.py [options] (binary|folder)

Options:
  --version                 show program's version number and exit
  -h, --help                show this help message and exit
  -a PATH, --addons=PATH    Addons to install
  --delay=DELAY             Duration (in seconds) to wait before each iteration
  --entities=ENTITIES       Number of entities to create within a test snippet
  --iterations=ITERATIONS   Number of times to repeat each test snippet
  --logfile=PATH            Path to the log file
  --no-restart              Do not restart application between tests
  -r URL, --report=URL      Send results to the report server
  --reserved=RESERVED       Specify a reserved test to run
  --repository=URL          URL of a custom remote or local repository
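
For example, a run of ten iterations with a five second delay before each iteration might look like the following; the Firefox binary path is only an illustration and will differ per platform:

./testrun_endurance.py --iterations=10 --delay=5 /path/to/firefox  # binary path is a placeholder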

Entities

Each test snippet is written such that it ends in the same state in which it started. For example, a test that opens a new tab will also close that tab, on the assumption that any resources consumed can potentially be released. For tests that use entities, it is possible to influence how many resources accumulate within the snippet. Using the new tab example, specifying an entities value of 10 would open ten tabs and then close them all within a single iteration.
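
As an illustration, the following command would run each test snippet for five iterations, creating ten entities per iteration (the binary path is again only a placeholder):

./testrun_endurance.py --iterations=5 --entities=10 /path/to/firefox  # binary path is a placeholder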

Add-ons

You can install additional add-ons for the duration of the testrun. These can be specified by local path or URL using the -a PATH or --addons=PATH parameter, where PATH must point to the add-on's XPI file.
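
For example, a run that installs an add-on from a local XPI for the duration of the testrun might look like this (both paths are placeholders):

./testrun_endurance.py --addons=/path/to/addon.xpi /path/to/firefox  # paths are placeholders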

Restarts

By default the Endurance Tests will restart the application between each test. If you want to disable these restarts then provide the optional --no-restart parameter.
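
For example, to run the full testrun without restarting the application between tests:

./testrun_endurance.py --no-restart /path/to/firefox  # binary path is a placeholder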

Reports

To submit results to a report server, add the -r URL or --report=URL command line parameter. The preferred target URL for the results is http://mozmill-crowd.brasstacks.mozilla.com/db/
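
For example, the following submits results to the preferred report server:

./testrun_endurance.py --report=http://mozmill-crowd.brasstacks.mozilla.com/db/ /path/to/firefox  # binary path is a placeholder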

Reserved Tests

Reserved tests are tests that are not run as part of the general endurance testrun. They may exist to confirm fixes or replicate specific issues, and may break some of the rules that apply to regular endurance tests. An example of a reserved test that has been checked in is the 'Mem Buster' test, which loads popular websites in separate tabs. The value of this argument must match the name of the target directory beneath the 'reserved' directory.
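
For example, a reserved test run might be invoked as follows; note that the directory name shown here is hypothetical and must be replaced with the actual directory name beneath 'reserved':

./testrun_endurance.py --reserved=membuster /path/to/firefox  # directory name and binary path are placeholders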

Writing Endurance Tests

All Endurance Tests need to instantiate an EnduranceManager from the endurance.js shared module. The only other requirement is that the test calls the run method with a single argument: the function that will be repeated for each iteration.

Without Entities

var Endurance = require("../../../lib/endurance");

function setupModule() {
  // Get a controller for the browser window and create the endurance manager
  controller = mozmill.getBrowserController();
  enduranceManager = new Endurance.EnduranceManager(controller);
}

function testExample() {
  // The function passed to run() is repeated once per iteration
  enduranceManager.run(function () {
    // Action 1
    enduranceManager.addCheckpoint("Action 1");
    // Action 2
    enduranceManager.addCheckpoint("Action 2");
  });
}

With Entities

var Endurance = require("../../../lib/endurance");

function setupModule() {
  // Get a controller for the browser window and create the endurance manager
  controller = mozmill.getBrowserController();
  enduranceManager = new Endurance.EnduranceManager(controller);
}

function testExample() {
  // The function passed to run() is repeated once per iteration
  enduranceManager.run(function () {
    // The function passed to loop() is repeated once per entity within each iteration
    enduranceManager.loop(function () {
        // Action 1
        enduranceManager.addCheckpoint("Action 1");
        // Action 2
        enduranceManager.addCheckpoint("Action 2");
    });
  });
}

Reports

If you have sent the results to the preferred Mozmill Test Results Dashboard instance, then you can see your test result reports at http://mozmill-crowd.blargon7.com/#/endurance/reports.

Daily Results

Endurance tests are run daily on our QA infrastructure. The reports can be seen on our release dashboard.