Performance sheriffing/Raptor

From MozillaWiki
Jump to: navigation, search

Raptor

Raptor is a new performance-testing framework for running browser page-load and browser benchmark tests. The core of Raptor is designed as a browser extension, which makes Raptor cross-browser compatible; it is currently running in production on Firefox Desktop, Firefox Android GeckoView, and Google Chromium.

Raptor currently supports two types of performance tests: page-load tests and standard benchmark tests. A third type, tentatively named either "resource(s)" or "load" (TBD), should be landing soon. It will build on the power.py test and set us and development teams on a documented, easier path to measuring resource usage, including battery drain/levels, CPU usage, memory usage, etc.

Page-Load Tests

Page-load tests load a specific web page and measure its load performance (i.e. time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive). The page-load measurements are 'warm load' in that a new tab is opened only at the start of the test for each new page, and each page-cycle is a reload in the same browser tab.

For page-load tests, instead of using live web pages, Raptor uses a tool called [Mitmproxy]. Mitmproxy allows us to record and play back test pages via a local proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.

Benchmark Tests

Standard benchmarks are third-party tests (e.g. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.

Running Locally

Prerequisites

In order to run Raptor on a local machine you need:

  • Git must be available in the PATH of the terminal/shell in which you build Firefox and run Raptor, as Raptor uses Git to check out a local copy of the source for some of the performance benchmarks
  • If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and must know the path to the Chrome binary
  • If you plan on running Raptor on Android, your Android device must already be set up (see the Android section below)

Getting a List of Raptor Tests

To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:

    mozilla-central$ ./mach raptor-test --print-tests

That will output all available tests for each supported app, as well as each subtest available in each suite (e.g. all the pages in a specific page-load tp6* suite).

Running on Firefox

To run Raptor locally, just build Firefox and then run:

    mozilla-central$ ./mach raptor-test --test <raptor-test-name>

For example, to run the raptor-tp6 pageload test locally, just use:

    mozilla-central$ ./mach raptor-test --test raptor-tp6-1

You can also run individual subtests (e.g. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:

    mozilla-central$ ./mach raptor-test --test raptor-tp6-amazon-firefox

Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.
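A quick way to inspect that results file is with a short script. The layout assumed below (a list of suite entries with 'name' and 'replicates' keys) is purely illustrative; check your own raptor.json for the actual schema:

```python
import json
import statistics

# Sketch: summarize a Raptor results file. The schema here is an assumption
# for illustration; inspect your own raptor.json to confirm it.
def summarize(suites):
    """Map each suite name to the median of its replicate values."""
    return {s["name"]: statistics.median(s["replicates"]) for s in suites}

# Hypothetical sample matching the assumed layout; with a real file you
# would use: suites = json.load(open("testing/mozharness/build/raptor.json"))
sample = [{"name": "raptor-tp6-amazon-firefox", "replicates": [510, 498, 505]}]
print(summarize(sample))
```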

Running on the Android GeckoView Example App

When running Raptor tests on a local Android device, Raptor expects the device to already be set up and ready to go.

First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:

    mozilla-central$ adb devices

If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):

    mozilla-central$ ./mach bootstrap

Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).

Next, make sure your Android device is ready to go. Local Android-device prerequisites are:

  • Device is rooted

Note: If you are using Magisk to root your device, use version 17.3

  • Device is in 'superuser' mode
    • [stephend] - I want to explain this a bit more, so leaving this comment as a reminder
  • The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate Android build on treeherder, then install it on your device, e.g.:
    mozilla-central$ adb install -g ../Downloads/geckoview_example.apk

The '-g' flag will automatically turn ON all application permissions, which is required. Note: when updating the geckoview example app, you MUST uninstall the existing one first, e.g.:

    mozilla-central$ adb uninstall org.mozilla.geckoview_example

Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:

    mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"

Note: Speedometer on Android GeckoView currently runs on two devices in production - the Google Pixel 2 and the Moto G5 - so it is not guaranteed to run successfully on other, untested Android devices. There is an intermittent failure on the Moto G5 where Speedometer just stalls (Bug 1492222).

To run a Raptor page-load test (e.g. tp6m-1) on the GeckoView Example app, use this command line:

    mozilla-central$ ./mach raptor-test --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"

A couple notes about debugging:

  • Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:
    mozilla-central$ adb logcat | grep GeckoConsole
  • You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer..' tab process

Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.

Running on Google Chrome

To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:

    mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"

For example, to run the raptor-speedometer benchmark on Google Chrome use:

    mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"

Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.

Page-Timeouts

On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.

You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:

 ./mach raptor-test --test raptor-tp6-1 --page-timeout 120000

If an iteration of a benchmark test is not finishing within the allocated time, increase the timeout, e.g.:

 ./mach raptor-test --test raptor-speedometer --page-timeout 600000

Page-Cycles

Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, it is the total number of iterations of the entire benchmark test. The default page-cycles value is defined in each Raptor test INI file.

You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:

 ./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2
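Both of these defaults live in the test's INI file. A sketch of what the relevant section might look like (the option names below are assumed to mirror the command-line args; check the real INI, e.g. raptor-tp6-1.ini, for the actual keys and values):

```ini
# Sketch only: option names assumed from the command-line args above;
# check the real raptor-tp6-1.ini for the actual keys and values.
[DEFAULT]
type = pageload
# number of times each page is loaded per test
page_cycles = 25
# per-page-load timeout in milliseconds (cf. --page-timeout)
page_timeout = 60000
```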

Running Page-Load Tests on Live Sites

By default, Raptor page-load performance tests load the test pages from a recording (see Raptor and Mitmproxy). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.

To use live pages instead of page recordings, just edit the Raptor tp6* test INI file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:

 use_live_pages = true

With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.

When `use_live_pages = true` and a page-load test measures a hero element (set in the test INI 'measure' option), the hero element measurement is automatically dropped, because the hero elements only exist in our Mitmproxy recordings, not in live pages.

The word 'live' is appended to the test name in the PERFHERDER_DATA, so live-site runs can be identified in Perfherder for try runs.

Important: This is fine for running on try, but we don't want to enable live sites in the production repos - because we don't want live site data being ingested by perfherder and used for regression alerting etc. Therefore as a safety catch, if using live sites the test won't even run unless running locally or on try.

Running Raptor on Try

Raptor tests can be run on try for both Firefox and Google Chrome (see the test list below for which browsers each test supports).

Note: Raptor is currently 'tier 2' on Treeherder, which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.

The easiest way to run Raptor tests on try is to use mach try fuzzy:

    mozilla-central$ ./mach try fuzzy --full

Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.

To see the Raptor test results on your try run:

  1. In treeherder select one of the Raptor test jobs (e.g. 'sp' in 'Rap-e10s' or 'Rap-C-e10s')
  2. Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed
  3. If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact

Raptor Hardware in Production

The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [hardware used in automation wiki page] for more details.

Profiling Raptor Jobs

Raptor tests are able to create Gecko profiles which can be viewed in perf-html.io. This is currently only supported when running Raptor on Firefox desktop.

Nightly Profiling Jobs in Production

We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the "Rap-Prof" treeherder group on Mozilla Central.

Profiling Locally

To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, e.g.:

    mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile

When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:

    mozilla-central/testing/mozharness/build/blobber_upload_dir/

Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line.

Raptor will automatically launch Firefox and load the latest Gecko profile in perf-html.io. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.

If auto-launch doesn't work for some reason, start Firefox manually, browse to perf-html.io, click "Browse", and select the Raptor profile ZIP file noted above.

If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the directions on MDN.

Profiling on Try Server

To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push, e.g.:

   mozilla-central$ ./mach try fuzzy --gecko-profile

Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.

See below for how to view the gecko profiles from within treeherder.

Add Profiling to Previously Completed Jobs

Note: You might need treeherder 'admin' access for the following.

Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (e.g. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but with gecko profiling added, do the following:

  1. In treeherder, select the symbol for the completed Raptor test job (e.g. 'ss' in 'Rap-e10s')
  2. Below, and to the left of the 'Job Details' tab, select the '...' to show the menu
  3. On the pop-up menu, select 'Create Gecko Profile'

The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.

Viewing Profiles on Treeherder

When the Raptor jobs are finished, to view the gecko profiles:

  1. In treeherder, select the symbol for the completed Raptor test job (e.g. 'ss' in 'Rap-e10s')
  2. Click on the 'Job Details' tab below
  3. The Raptor profile ZIP files will be listed as job artifacts.
  4. Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right

Recording Pages for Raptor Pageload Tests

Raptor pageload tests ('tp6' and 'tp6m' suites) use the Mitmproxy tool to record and play back page archives. For more information on creating new page playback archives, please see Raptor and Mitmproxy.

Raptor Test List

Currently the following Raptor tests are available. Note: check the test details below to see which browsers (i.e. Firefox, Google Chrome, Android) each test supports.

Page-Load Tests

For all Raptor page-load tests, the pages are played back from [Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our perf-automation github repo.

All the pages in a test suite can be run by calling the top-level test name, e.g.:

 ./mach raptor-test --test raptor-tp6-1

Individual test pages can be run by calling the subtest, e.g.:

 ./mach raptor-test --test raptor-tp6-google-firefox

Some of the page recordings contain [hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).

Below are the details for each page-load suite, and the test pages contained within each.

raptor-tp6-1

  • contact: :rwood, :jmaher
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, hero element, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-1.ini.
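The reporting scheme described above (drop the first page-cycle, then take the median of the rest) can be sketched as:

```python
import statistics

# Drop the first page-cycle (extra initial loading noise), then report the
# median of the remaining values (in ms), as described above.
def summarize_page(replicates):
    return statistics.median(replicates[1:])

# Hypothetical first-contentful-paint values for 5 page-cycles:
print(summarize_page([900, 510, 498, 505, 502]))  # 503.5
```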

Test pages in tp6-1 (* = firefox or chrome):

[raptor-tp6-amazon-*]

[raptor-tp6-facebook-*]

[raptor-tp6-google-*]

[raptor-tp6-youtube-*]

raptor-tp6-2

  • contact: :rwood, :jmaher
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, hero element, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-2.ini.

Test pages in tp6-2 (* = firefox or chrome):

[raptor-tp6-docs-*]

[raptor-tp6-sheets-*]

[raptor-tp6-slides-*]

raptor-tp6-3

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-3.ini.

Test pages in tp6-3 (* = firefox or chrome):

[raptor-tp6-imdb-*]

[raptor-tp6-imgur-*]

[raptor-tp6-wikia-*]

raptor-tp6-4

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-4.ini.

Test pages in tp6-4 (* = firefox or chrome):

[raptor-tp6-bing-*]

[raptor-tp6-yandex-*]

raptor-tp6-5

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-5.ini.

Test pages in tp6-5 (* = firefox or chrome):

[raptor-tp6-apple-*]

[raptor-tp6-microsoft-*]

raptor-tp6-6

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-6.ini.

Test pages in tp6-6 (* = firefox or chrome):

[raptor-tp6-reddit-*]

[raptor-tp6-yahoo-news-*]

raptor-tp6-7

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-7.ini.

Test pages in tp6-7 (* = firefox or chrome):

[raptor-tp6-instagram-*]

[raptor-tp6-twitter-*]

[raptor-tp6-yahoo-mail-*]

raptor-tp6-8

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-8.ini.

Test pages in tp6-8 (* = firefox or chrome):

[raptor-tp6-ebay-*]

[raptor-tp6-wikipedia-*]

raptor-tp6-9

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-9.ini.

Test pages in tp6-9 (* = firefox or chrome):

[raptor-tp6-google-mail-*]

[raptor-tp6-pinterest-*]

raptor-tp6-10

  • contact: :rwood, :jmaher, :bebe
  • type: page-load
  • browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)
  • measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • measuring on Chrome: first-contentful-paint, loadtime
  • page-cycles: 25
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6-10.ini.

Test pages in tp6-10 (* = firefox or chrome):

[raptor-tp6-paypal-*]

raptor-tp6m-1

  • contact: :rwood, :davehunt
  • type: page-load
  • browsers: Firefox Android Geckoview Example App
  • measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime
  • page-cycles: 15
  • reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra initial loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).
  • test INI: raptor-tp6m-1.ini.

Test pages in tp6m-1 (* = geckoview):

[raptor-tp6m-amazon-*]

[raptor-tp6m-facebook-*]

[raptor-tp6m-google-*]

[raptor-tp6m-youtube-*]

Benchmark Tests

raptor-assorted-dom

  • contact: :bholley
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop
  • TODO

raptor-motionmark-animometer, raptor-motionmark-htmlsuite

  • contact: ?
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop
  • measuring: benchmark measuring the time to animate complex scenes
  • summarization:
    • subtest: the FPS for the subtest; each subtest runs for 15 seconds, is repeated 5 times, and the median value is reported
    • suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)
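The summarization above can be sketched as follows (the FPS numbers are made up for illustration):

```python
import statistics

# Per subtest: median FPS over the 5 repeats; per suite: geometric mean of
# the subtest values, as described above.
def subtest_score(fps_repeats):
    return statistics.median(fps_repeats)

def suite_score(subtest_scores):
    return statistics.geometric_mean(subtest_scores)

scores = [subtest_score([58, 60, 59, 57, 60]),  # hypothetical repeats
          subtest_score([30, 31, 29, 30, 32])]
print(suite_score(scores))
```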

raptor-speedometer

  • contact: :selena
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview
  • measuring: responsiveness of web applications
  • reporting: runs/minute score
  • data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.
  • summarization:
    • subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.
    • score: geometric mean of the 16 sums

This is the upstream Speedometer JavaScript benchmark, slightly modified to work with the Raptor harness.
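The scoring described above can be sketched as follows (this stops at the geometric mean; the conversion to a final runs/minute score is not shown):

```python
import statistics

# Per subtest: the sum of its 9 internal benchmark results; overall score:
# the geometric mean of the 16 subtest sums, as described above.
def subtest_sum(internal_results):
    return sum(internal_results)

def speedometer_score(subtest_sums):
    return statistics.geometric_mean(subtest_sums)

# 16 hypothetical subtests, each with 9 internal results of 10.0:
sums = [subtest_sum([10.0] * 9) for _ in range(16)]
print(speedometer_score(sums))  # approximately 90.0
```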

raptor-stylebench

  • contact: :emilio
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop
  • measuring: speed of dynamic style recalculation
  • reporting: runs/minute score

raptor-sunspider

  • contact: ?
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop
  • TODO

raptor-unity-webgl

  • contact: ?
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview
  • TODO

raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion

  • contact: ?
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop
  • TODO

raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion

  • contact: ?
  • type: benchmark
  • browsers: Firefox desktop only
  • TODO

raptor-webaudio

  • contact: ?
  • type: benchmark
  • browsers: Firefox desktop, Chrome desktop
  • TODO

Debugging the Raptor Web Extension

When developing on Raptor and debugging, there's often a need to look at the output coming from the Raptor Web Extension. Here are some pointers to help.

Raptor Debug Mode

The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, e.g. for Firefox:

 ./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode

Or on Chrome, for example:

 ./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode

Running Raptor with debug mode will:

  • Automatically cap the number of test page-cycles at 2
  • Reduce the post-browser-startup delay from 30 seconds to 3 seconds
  • On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension
  • On Chrome, the devtools console will automatically open
  • The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.

Manual Debugging on Firefox Desktop

The main Raptor runner is 'runner.js' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code 'measure.js'.

In order to retrieve the console.log() output from the Raptor runner, do the following:

  1. Invoke Raptor locally via ./mach raptor-test
  2. During the 30-second Raptor pause which happens right after Firefox has started up, in the ALREADY OPEN current tab, enter "about:debugging" as the URL.
  3. On the debugging page that appears, make sure "Add-ons" is selected on the left (default).
  4. Turn ON the "Enable add-on debugging" check-box
  5. Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.
  6. A new window will open shortly; click the "Console" tab

To retrieve the console.log() output from the Raptor content 'measure.js' code:

  1. As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console" and select the "Console" tab.

Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. To have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:

  1. In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of code is here.
  2. Add a return statement at the top of the Raptor control server method that shuts-down the browser, the browser shut-down method is here.

For benchmark-type tests (e.g. speedometer, motionmark) Raptor doesn't inject 'measure.js' into the test page content; instead it injects 'benchmark-relay.js' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.

Note: Bug 1470450 is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.

Debugging TP6 and Killing the Mitmproxy Server

This section covers debugging Raptor page-load tests that use Mitmproxy (e.g. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:

    INFO -  Error starting proxy server: OSError(48, 'Address already in use')
    INFO -  raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1

That just means the Mitmproxy server was left running from before, so it could not start up again. In this case, you need to kill the existing Mitmproxy server processes, e.g.:

    mozilla-unified rwood$ ps -ax | grep mitm
    5439 ttys000    0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp
    5440 ttys000    0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp
    5509 ttys000    0:00.01 grep mitm

Then kill the first mitm process in the list; that is sufficient:

    mozilla-unified rwood$ kill 5439

Now when you run Raptor again, the Mitmproxy server will be able to start.
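Alternatively, a sketch that finds and kills any leftover mitmdump processes in one go (assuming pgrep is available, as it is on macOS and most Linux distributions):

```shell
# Kill leftover mitmdump processes by exact process name; the loop is a
# no-op when none are running.
for pid in $(pgrep -x mitmdump); do
  kill "$pid"
done
echo "mitmdump cleanup done"
```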

Manual Debugging on Firefox Android

Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.

When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:

  1. With your Android device (e.g. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor-test
  2. Startup a local copy of the Firefox Nightly Desktop browser
  3. In Firefox Desktop choose "Tools => Web Developer => WebIDE"
  4. In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.
  5. Under "USB Devices" on the top right your Android device should be listed (e.g. "Firefox Custom on Android Pixel 2") - click on your device.
  6. The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.
  7. On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.

Also note: when debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including Raptor's) is prefixed with "GeckoConsole", so this command is very handy:

    adb logcat | grep GeckoConsole

Manual Debugging on Google Chrome

Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.