TestEngineering/Performance/Raptor (MozillaWiki, revision of 2020-01-20)<br />
<hr />
<div>[[Image:Raptor.png|frameless|right]]<br />
<br />
Raptor is a performance-testing framework for running browser page-load and browser benchmark tests. The core of Raptor was designed as a browser extension, so Raptor is cross-browser compatible; it currently runs in production on Firefox Desktop, Firefox Android GeckoView, and Google Chromium.<br />
<br />
* Contact: Rob Wood [rwood]<br />
* Source code: https://searchfox.org/mozilla-central/source/testing/raptor<br />
* Good first bugs: https://codetribute.mozilla.org/projects/automation?project%3DRaptor<br />
<br />
Raptor currently supports three test types: 1) page-load performance tests, 2) standard benchmark-performance tests, and 3) "scenario"-based tests, such as power, CPU, and memory-usage measurements on Android (and desktop?).<br />
<br />
Locally, Raptor can be invoked with either of the following commands (raptor-test may be deprecated in the future):<br />
./mach raptor<br />
./mach raptor-test<br />
<br />
=== Page-Load Tests ===<br />
<br />
Page-load tests involve loading a specific web page and measuring the load performance (i.e. [https://wiki.mozilla.org/TestEngineering/Performance/Glossary#First_Non-Blank_Paint_.28fnbpaint.29 time-to-first-non-blank-paint], [https://wiki.mozilla.org/TestEngineering/Performance/Glossary#First_Contentful_Paint_.28fcp.29 first-contentful-paint] , [https://wiki.mozilla.org/TestEngineering/Performance/Glossary#DOM_Content_Flushed_.28dcf.29 dom-content-flushed], [https://wiki.mozilla.org/TestEngineering/Performance/Glossary#Time_To_First_Interactive_.28ttfi.29 ttfi]).<br />
<br />
For page-load tests, by default, instead of using live web pages for performance testing, Raptor uses a tool called [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy]. Mitmproxy allows us to record and play back test pages via a local proxy. The Mitmproxy recordings are stored on [https://github.com/mozilla/build-tooltool tooltool] and are automatically downloaded by Raptor when they are required for a test. Raptor uses mitmproxy via the [https://searchfox.org/mozilla-central/source/testing/mozbase/mozproxy mozbase mozproxy] package.<br />
<br />
There are two different types of Raptor page-load tests: warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or Android browser app) is started only once, so the browser is already warm on each page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in the overall results metrics because of first-load noise; however, they are still listed in the perfherder-data.json/raptor.json artifacts<br />
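The reporting scheme described above (the first page-load is dropped as warm-up noise, and the remaining replicates feed the overall result) can be sketched in Python. This is an illustrative sketch, not Raptor's actual code; the function name and sample values are made up:<br />

```python
# Sketch of the per-metric summary: drop the first page-cycle's replicate,
# then summarize the rest with the median (the statistic Raptor reports
# for page-load suites, per the test list below).
from statistics import median

def summarize_pageload(replicates):
    """Summarize per-cycle measurements (in ms), dropping the first cycle."""
    if len(replicates) < 2:
        raise ValueError("need at least two page-cycles")
    return median(replicates[1:])

# e.g. five warm page-load replicates for a single metric such as fcp;
# the noisy first load (980 ms) is excluded from the summary.
print(summarize_pageload([980, 640, 655, 630, 648]))  # -> 644.0
```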
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in the overall results metrics because of first-load noise; however, they are still listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or Android browser app) is shut down and restarted between page-load cycles, so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shut down<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the Android device sdcard<br />
* The Firefox Android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The Android app is shut down<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, thus avoiding having to run the 'certutil' tool and re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of using the mitmproxy recordings. This option is only available when running on Try, as we don't want to submit data from live pages to Perfherder (since live page content will always be changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
And push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
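When working with logs like the one above, the PERFHERDER_DATA blob can be pulled out of a log line and decoded as JSON. The helper below is an illustrative sketch, not part of Raptor:<br />

```python
# Sketch: extract and decode the PERFHERDER_DATA JSON blob from a raptor
# log line of the form shown above.
import json

MARKER = "PERFHERDER_DATA: "

def extract_perfherder(log_line):
    """Return the decoded PERFHERDER_DATA dict from a log line, or None."""
    idx = log_line.find(MARKER)
    if idx == -1:
        return None
    return json.loads(log_line[idx + len(MARKER):])

line = ('22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: '
        '{"framework": {"name": "raptor"}, "suites": '
        '[{"name": "raptor-scn-power-idle-bg-fenix-cpu", "type": "cpu", "unit": "%"}]}')
data = extract_perfherder(line)
print(data["suites"][0]["type"])  # -> cpu
```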
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# A rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2; see the internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc]<br />
# Set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the Android app under test (Fenix, Fennec, GeckoView example, or Reference Browser)<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken (via `adb shell dumpsys batterystats`)<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
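The `scenario_time` / `page_timeout` relationship above can be sanity-checked with a short sketch. The constants are the values quoted above; the helper function is illustrative, not Raptor's code:<br />

```python
# Sketch: page_timeout must exceed scenario_time, or measurement tests
# will fail/bail early (per the note above).
SCENARIO_TIME_MS = 20 * 60 * 1000   # 20 minutes = 1200000 ms
PAGE_TIMEOUT_MS = 22 * 60 * 1000    # 22 minutes = 1320000 ms

def timings_valid(scenario_time_ms, page_timeout_ms):
    """True when the page timeout leaves room for the full scenario."""
    return page_timeout_ms > scenario_time_ms

print(timings_valid(SCENARIO_TIME_MS, PAGE_TIMEOUT_MS))  # -> True
```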
<br />
In the Perfherder (or Firefox Health) dashboards for these power-usage tests, all data points have milliampere-hour (mAh) units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers, proportionally "smeared"/distributed across all open applications.<br />
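One plausible reading of "proportionally smeared" can be sketched as follows; this is an illustration of proportional distribution only (the function, app names, and numbers are made up, and Android's actual accounting may differ):<br />

```python
# Sketch: distribute the hidden battery sippers' total (mAh) across open
# apps in proportion to each app's own measured usage.
def smear_proportional(app_usage_mah, hidden_sippers_mah):
    """Return per-app usage with the sipper total smeared proportionally."""
    total = sum(app_usage_mah.values())
    return {
        app: usage + hidden_sippers_mah * (usage / total)
        for app, usage in app_usage_mah.items()
    }

apps = {"fenix": 30.0, "other-app": 10.0}
print(smear_proportional(apps, 4.0))
# fenix gets 3.0 of the 4.0 mAh sipper total, other-app gets 1.0
```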
<br />
==== Running Locally ====<br />
<br />
To run locally on a phone tethered via USB to a macOS host:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important that you include'' '''`--power-test`''' ''when running power-usage measurement tests, as that will help ensure that local test-measurement data doesn't accidentally get submitted to Perfherder.''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both the G5 Android 7 (hw-g5-7-0-arm7-api-16/*) ''and'' the Pixel 2 Android 8 (p2-8-0-android-aarch64/) target platforms<br />
* PGO builds tend to take, from limited empirical evidence, about 10-15 minutes longer to complete than their opt counterparts<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the PATH in the terminal/window in which you build Firefox / run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and the path to the Chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The GeckoView example app is installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag automatically grants all application permissions, which is required.<br />
<br />
Note: when you want to run the Gecko profiler, or need a build with symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production (the Google Pixel 2 and the Moto G5), therefore it is not guaranteed to run successfully on other, untested Android devices. There is an intermittent failure on the Moto G5 where Speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer.." tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase the timeout, e.g.:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_pages = true` and a page-load test is measuring a hero element (set in the test INI 'measure' option), the hero-element measurement will automatically be dropped, because the hero elements only exist in our Mitmproxy recordings and not in live pages.<br />
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA, so live-site runs can be specifically identified in Perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos, because we don't want live-site data being ingested by Perfherder and used for regression alerting, etc. Therefore, as a safety catch, tests using live sites won't even run unless running locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome.<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
==== Running Fennec ESR 68 tests ====<br />
<br />
Fennec 68 tests are set up to run on the latest Fennec ESR 68 build.<br />
<br />
To start a try run on Fennec ESR 68 run:<br />
<br />
$ ./mach try fuzzy -q="fennec68" --full<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles, which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of page-cycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line.<br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse", and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it will be helpful to also measure threads which are not part of the default set, like the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to ''mach'', once for each extra thread to profile<br />
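For example, the two mechanisms above might look like the following. This is an illustrative sketch: the manifest entry name and flag come from the list above, and '''MediaPlayback''' is the thread mentioned there, but the exact test name is a placeholder.<br />

```ini
# 1) Illustrative test-manifest fragment: profile the MediaPlayback thread
#    in addition to the default set.
gecko_profile_threads = MediaPlayback
```

Equivalently on the command line (hypothetical invocation, one flag per extra thread): `./mach raptor --test raptor-tp6-1 --gecko-profile --gecko-profile-thread MediaPlayback`<br />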
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When a test is run on Android, Raptor executes a series of performance-tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus<br />
* device remains on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, CPU throttling)<br />
* I/O scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
All pageload tests can be found at [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/ raptor-tp6 tests]<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 27 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27].<br />
<br />
===== raptor-tp6m-1 to 9-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 9-fennec68].<br />
<br />
===== raptor-tp6m-cold-1 to 27-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27-fennec68].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS; each subtest is run for 15 seconds, this is repeated 5 times, and the median value is reported<br />
** suite: the geometric mean of all the subtests (9 for animometer, 11 for the html suite)<br />
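The summarization above could be sketched as follows (illustrative names; `motionmark_suite_score` is not the harness's actual code):<br />

```python
from math import prod
from statistics import median

def motionmark_suite_score(subtest_fps):
    """subtest_fps: dict of subtest name -> the 5 FPS values from the
    5 repeats of that 15-second subtest."""
    medians = [median(fps) for fps in subtest_fps.values()]
    # suite score: geometric mean of the per-subtest medians
    return prod(medians) ** (1.0 / len(medians))
```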
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, taken from upstream and slightly modified to work with the Raptor harness.<br />
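Sketched in code, the summarization described above works out to roughly this (illustrative; `speedometer_summary` is not the harness's actual implementation):<br />

```python
from math import prod

def speedometer_summary(subtest_results):
    """subtest_results: list of 16 lists, each holding the 9 internal
    benchmark results of one subtest."""
    sums = [sum(internal) for internal in subtest_results]  # per-subtest sums
    return prod(sums) ** (1.0 / len(sums))                  # geometric mean of the sums
```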
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames, as well as the percentage of dropped frames, is recorded. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the media files used, these tests are currently run as live-site tests, and are kept up to date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on GitHub.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests] benchmark, taken from upstream and slightly modified to work with the Raptor harness.<br />
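A sketch of the per-video bookkeeping and the overall mean described above (illustrative; `youtube_playback_summary` is not the harness's code):<br />

```python
def youtube_playback_summary(per_video_frames):
    """per_video_frames: list of (dropped, decoded) frame counts,
    one tuple per tested video file."""
    per_video = [
        {"dropped": d, "decoded": n, "percent_dropped": 100.0 * d / n}
        for d, n in per_video_frames
    ]
    # overall result: mean number of dropped frames across all videos
    overall = sum(v["dropped"] for v in per_video) / len(per_video)
    return per_video, overall
```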
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
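That constraint can be expressed as a simple check (a sketch; `check_scenario_timing` is not a real Raptor function):<br />

```python
def check_scenario_timing(scenario_time_ms, page_timeout_ms):
    """Encodes the rule above: page-timeout must exceed scenario-time,
    otherwise raptor would end the test before the scenario timer does."""
    if page_timeout_ms <= scenario_time_ms:
        raise ValueError(
            "page-timeout (%d ms) must be greater than scenario-time (%d ms)"
            % (page_timeout_ms, scenario_time_ms))

check_scenario_timing(1200000, 1320000)  # 20-minute scenario, 22-minute timeout: OK
```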
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power-measurement tests output three different PERFHERDER_DATA entries. The first contains the power usage of the test itself; the second contains the power usage of the Android OS (named os-baseline) over the course of 1 minute; and the third (named after the test with '%change-power' appended) combines the two measures, showing the percentage increase in power consumption while the test is running compared to when it is not. In these Perfherder data blobs, we report the power consumption attributed to the CPU, Wi-Fi, and screen in milliampere-hours (mAh).<br />
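One plausible reading of how the '%change-power' entry combines the two measurements is sketched below; the function name and the per-minute normalization are assumptions for illustration, not Raptor's actual implementation:<br />

```python
def percent_change_power(test_mah, test_minutes, baseline_mah, baseline_minutes=1):
    """Percentage increase in power draw during the test versus the
    os-baseline measurement (names here are illustrative, not Raptor's)."""
    test_rate = test_mah / test_minutes              # mAh per minute while testing
    baseline_rate = baseline_mah / baseline_minutes  # mAh per minute for the idle OS
    return 100.0 * (test_rate - baseline_rate) / baseline_rate
```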
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup settle delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control-server method that shuts down the browser; the browser shutdown [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it basically relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note: [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor page-load tests that use Mitmproxy (i.e. tp6, gdocs): if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running, so a new one couldn't start up. In this case, you need to kill the existing Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-releng/cron-task-mozilla-mobile-fenix%2Fraptor Taskcluster force hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On each push<br />
|}<br />
<br />
Tests are now defined in a similar fashion compared to what exists in mozilla-central. Task definitions are expressed in Yaml:<br />
* https://github.com/mozilla-mobile/fenix/blob/1c9c5317eb33d92dde3293dfe6a857c279a7ab12/taskcluster/ci/raptor/kind.yml<br />
* https://github.com/mozilla-mobile/reference-browser/blob/4560a83cb559d3d4d06383205a8bb76a44336704/taskcluster/ci/raptor/kind.yml<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/pull/5565/files. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the raptor revision is tied to the latest Nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia<br />
<br />
== Code formatting on Raptor ==<br />
As Raptor is a Mozilla project, we follow the general Python coding style:<br />
* https://firefox-source-docs.mozilla.org/tools/lint/coding-style/coding_style_python.html<br />
<br />
[https://github.com/psf/black/ black] is the tool used to reformat the Python code.</div>
<hr />
<div><br />
There are two different types of Raptor page-load tests; warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or android browser app) is started up just once, so the browser is warm on each page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or android browser app) is shutdown and re-started between page load cycles; so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shutdown<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The android app is shutdown<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, thus avoiding the need to use the 'certutil' tool to re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of the mitmproxy recordings. This option is available when running on Try only, as we don't want to submit data from live pages to Perfherder (since live page content will always be changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
Then push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
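When digging through logs by hand, the blob can be pulled out and inspected with a few lines of Python (a convenience sketch, not part of the harness):<br />

```python
import json

def parse_perfherder(log_line):
    """Extract the PERFHERDER_DATA JSON blob from a raptor log line."""
    marker = "PERFHERDER_DATA: "
    start = log_line.index(marker) + len(marker)
    return json.loads(log_line[start:])

blob = parse_perfherder(
    'raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": []}')
# blob["framework"]["name"] == "raptor"
```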
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2: internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc.]<br />
# set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
* adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the {Fenix, Fennec, GeckoView, Reference Browser} on-Android app<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
* adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
In the Perfherder (or Firefox Health) dashboards for these power-usage tests, all data points have milliampere-hour (mAh) units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important that you include'' '''--power-test''' ''when running power-usage measurement tests, as that will help ensure that local test-measurement data doesn't accidentally get submitted to Perfherder.''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) *and* Pixel 2 Android 8 (p2-8-0-android-aarch64/) target platforms<br />
* PGO builds tend to take about 10-15 minutes longer to complete than their opt counterparts, based on limited empirical evidence<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the path in the terminal/window in which you build Firefox / run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and to know the path to the Chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor is expecting the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
** Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app must be installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, e.g.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag automatically grants all application permissions, which is required.<br />
<br />
Note: if you want to run the Gecko profiler, or you need a build with symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView currently runs on two devices in production - the Google Pixel 2 and the Moto G5 - so it is not guaranteed to run successfully on other, untested Android devices. There is an intermittent failure on the Moto G5 where Speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the "localhost: Speedometer..." tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase the timeout, e.g.:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_pages = true` and a page-load test measures a hero element (set via the test INI 'measure' option), the hero-element measurement will automatically be dropped - the hero elements only exist in our Mitmproxy recordings, not in live pages.<br />
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA so live sites can be specifically seen in perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos - we don't want live-site data being ingested by perfherder and used for regression alerting, etc. Therefore, as a safety catch, tests using live sites won't run at all unless running locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome.<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
==== Running Fennec ESR 68 tests ====<br />
<br />
Fennec 68 tests are set up to run on the latest Fennec ESR 68 build.<br />
<br />
To start a try run on Fennec ESR 68 run:<br />
<br />
$ ./mach try fuzzy -q="fennec68" --full<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse", and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it will be helpful to also measure threads which are not part of the default set, like the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to ''mach'', once for each extra thread to profile<br />
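For example, a hypothetical manifest fragment (the key name comes from the option above; the thread list values are purely illustrative) could look like:

```ini
; illustrative raptor test manifest fragment - not copied from an actual test INI
gecko_profile_threads = GeckoMain, Compositor, MediaPlayback
```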
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remain on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in MS).<br />
<br />
All pageload tests can be found at [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/ raptor-tp6 tests].<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI's: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10 ].<br />
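To make the reporting described above concrete, here is a small illustrative sketch (not Raptor's actual code) of how a per-page metric could be reduced to a single reported value:

```python
import statistics

def summarize_pagecycles(values_ms):
    """Drop the first page-cycle (initial extra loading time/noise), then
    report the median of the remaining per-cycle measurements (in ms)."""
    return statistics.median(values_ms[1:])

# Hypothetical first-contentful-paint values from five page-cycles:
print(summarize_pagecycles([900, 410, 430, 420, 415]))  # 417.5
```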
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI's: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4 ].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 27 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27].<br />
<br />
===== raptor-tp6m-1 to 9-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 9-fennec68].<br />
<br />
===== raptor-tp6m-cold-1 to 27-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27-fennec68].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS of the subtest; each subtest is run for 15 seconds, repeated 5 times, and the median value is reported<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
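As an illustrative sketch of the summarization above (not the benchmark's actual code), the score is a geometric mean of the per-subtest sums:

```python
import math

def geometric_mean(values):
    """Geometric mean, as used to combine per-subtest sums into a score."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

# Two hypothetical subtest sums (a real Speedometer run has 16):
print(geometric_mean([2.0, 8.0]))  # approximately 4.0
```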
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames, as well as the percentage of dropped frames, is recorded. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the media files used, these tests are currently run as live-site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on Github.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests] benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
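For instance (values purely illustrative; check an existing scenario test INI for the exact key spellings), a 20-minute scenario could allow a slightly larger page-timeout:

```ini
; hypothetical scenario test manifest fragment (times in ms)
scenario_time = 1200000
page_timeout = 1260000
```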
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power-measurement tests output three different PERFHERDER_DATA entries:<br />
<br />
# the power usage of the test itself;<br />
# the power usage of the Android OS (named 'os-baseline') over the course of 1 minute;<br />
# a combination of the two (the test name with '%change-power' appended), showing the percentage increase in power consumption while the test is running compared to when it is not.<br />
<br />
In these perfherder data blobs, we provide the power consumption attributed to the cpu, wifi, and screen in milliampere-hours (mAh).<br />
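A back-of-the-envelope sketch of how the '%change-power' figure described above could be derived (made-up numbers, not Raptor's actual code):

```python
def percent_change_power(test_mah, baseline_mah):
    """Percentage increase of the test's power draw over the os-baseline."""
    return (test_mah - baseline_mah) / baseline_mah * 100.0

# Hypothetical cpu+wifi+screen totals in milliampere-hours (mAh):
print(percent_change_power(12.0, 8.0))  # 50.0
```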
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30-second Raptor pause that happens right after Firefox starts up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open in a moment; click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note: [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
This section covers debugging Raptor pageload tests that use Mitmproxy (e.g. tp6, gdocs). If Raptor doesn't exit cleanly and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was still running from before, so a new instance couldn't start up. In this case, you need to kill the Mitmproxy server processes, e.g.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools; if that link appears, click it and let the tools install.<br />
# Under "USB Devices" on the top right your Android device should be listed (e.g. "Firefox Custom on Android Pixel 2"); click on your device.<br />
# The debugger opens. On the left side click on "Main Process", then click the "Console" tab below; the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that, and the Raptor content console.log() output should be included there.<br />
<br />
Also note: when debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including Raptor's) is prefixed with "GeckoConsole", so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from what exists in mozilla-central. Taskcluster payloads are expressed in Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the Raptor revision is tied to the latest Nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia</div>
<hr />
<div><br />
<br />
=== Page-Load Tests ===<br />
<br />
There are two different types of Raptor page-load tests: warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or Android browser app) is started up just once, so the browser is warm on each page-load after the first.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in the overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in the overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or android browser app) is shutdown and re-started between page load cycles; so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shutdown<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The android app is shutdown<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, thus avoiding having to use the 'certutil' tool to re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of using the mitmproxy recordings. This option is available when running on Try only, as we don't want to submit data from live pages to Perfherder (since live page content is always changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
And push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
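To pull these blobs out of a raw log for inspection, a few lines of Python suffice (an illustrative sketch only, not part of the harness; the `PERFHERDER_DATA: ` marker is the one shown above):<br />

```python
import json

MARKER = "PERFHERDER_DATA: "

def parse_perfherder_blobs(log_text):
    """Return the parsed JSON payload of every PERFHERDER_DATA line."""
    blobs = []
    for line in log_text.splitlines():
        idx = line.find(MARKER)
        if idx != -1:
            blobs.append(json.loads(line[idx + len(MARKER):]))
    return blobs

def suite_names(blob):
    """List the suite names reported in one blob."""
    return [suite["name"] for suite in blob["suites"]]
```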
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2: internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc.]<br />
# set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
* adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the {Fenix, Fennec, GeckoView, Reference Browser} on-Android app<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
* adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
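The timing relationship in the steps above can be sanity-checked up front; here is a minimal sketch (the function is hypothetical, the values are the ones quoted above):<br />

```python
def check_scenario_timeouts(scenario_time_ms, page_timeout_ms):
    """Scenario runs need page_timeout to exceed scenario_time,
    otherwise the measurement fails/bails out early."""
    if page_timeout_ms <= scenario_time_ms:
        raise ValueError(
            "page_timeout (%d ms) must exceed scenario_time (%d ms)"
            % (page_timeout_ms, scenario_time_ms))

# The values quoted above: 20-minute scenario, 22-minute page timeout.
check_scenario_timeouts(1200000, 1320000)
```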
<br />
In the Perfherder (or Firefox Health) dashboards for these power usage tests, all data points have milli-Ampere-hour units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
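As an illustration of how a proportional smear works (a sketch of the concept only, not Android's actual batterystats algorithm), the hidden-sipper total can be apportioned to each app in proportion to its own measured usage:<br />

```python
def smear_proportional(app_usage_mah, hidden_total_mah):
    """Distribute the hidden battery-sipper total across apps,
    proportional to each app's own measured usage (values in mAh).
    Conceptual sketch only."""
    total = sum(app_usage_mah.values())
    return {app: usage + hidden_total_mah * usage / total
            for app, usage in app_usage_mah.items()}
```

For example, with two apps drawing 30 and 10 mAh and a 4 mAh hidden total, the apps are charged 33 and 11 mAh respectively.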
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important that you include'' '''`--power-test`''' ''when running power-usage measurement tests, as that will help ensure that local test-measurement data doesn't accidentally get submitted to Perfherder''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) '''and''' Pixel 2 Android 8 (p2-8-0-android-aarch64/) target platforms<br />
* PGO builds tend to take about 10-15 minutes longer to complete than their opt counterparts, from limited empirical evidence<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the path in the terminal/window in which you build Firefox / run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor is expecting the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The GeckoView example app must be installed on the device. Download geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required.<br />
<br />
Note: when the Gecko profiler should be run, or a build with symbols is needed, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer.." tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase the timeout, e.g.:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_pages = true` and a page-load test measures a hero element (set in the test INI 'measure' option), the hero element measurement will automatically be dropped, because the hero elements only exist in our Mitmproxy recordings and not in live pages.<br />
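That filtering step could look roughly like this (a hypothetical helper; the real logic lives in the Raptor harness):<br />

```python
def effective_measurements(measure, use_live_sites):
    """Hero-element timings only exist in the Mitmproxy recordings,
    so drop 'hero' from the measurement list when running live."""
    if use_live_sites:
        return [m for m in measure if m != "hero"]
    return list(measure)
```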
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA so live sites can be specifically seen in perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos - because we don't want live site data being ingested by perfherder and used for regression alerting etc. Therefore as a safety catch, if using live sites the test won't even run unless running locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
==== Running Fennec ESR 68 tests ====<br />
<br />
Fennec 68 tests are set up to run on the latest Fennec ESR 68 build.<br />
<br />
To start a try run on Fennec ESR 68 run:<br />
<br />
$ ./mach try fuzzy -q="fennec68" --full<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse", and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it will be helpful to also measure threads which are not part of the default set, such as the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to ''mach'', once for each extra thread to profile<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remain on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browsers (e.g. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, e.g.:<br />
<br />
 ./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, e.g.:<br />
<br />
 ./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in milliseconds).<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
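The reporting scheme used by these page-load suites (drop the first page-cycle, then report the median of the rest) can be sketched as follows. This is a minimal illustration with made-up values, not Raptor's actual implementation:<br />

```python
from statistics import median

def summarize_page_cycles(page_cycle_values):
    """Summarize one page-load measurement as described above: drop
    the first page-cycle (initial loading noise), then report the
    median of the remaining cycles. Values are in milliseconds."""
    if len(page_cycle_values) < 2:
        raise ValueError("need at least two page-cycles")
    return median(page_cycle_values[1:])

# Hypothetical fnbpaint values for 25 page-cycles; the noisy first
# cycle (900 ms) is excluded from the reported result.
cycles = [900] + [500, 510, 490, 505, 495] * 4 + [500, 510, 490, 505]
print(summarize_page_cycles(cycles))  # prints 500.0
```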
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 27 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27].<br />
<br />
===== raptor-tp6m-1 to 9-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 9-fennec68].<br />
<br />
===== raptor-tp6m-cold-1 to 27-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27-fennec68].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest, each subtest is run for 15 seconds, repeat this 5 times and report the median value<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
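The summarization above can be sketched like this (hypothetical subtest names and FPS values; not MotionMark's actual code):<br />

```python
import math
from statistics import median

def geometric_mean(values):
    """Geometric mean computed via logs for numerical stability."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

def summarize_motionmark(subtest_runs):
    """subtest_runs maps subtest name -> the FPS values from the
    repeated 15-second runs. As described above: take the median FPS
    per subtest, then the geometric mean across subtests for the
    suite score."""
    subtest_scores = {name: median(fps) for name, fps in subtest_runs.items()}
    return subtest_scores, geometric_mean(list(subtest_scores.values()))
```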
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, taken from upstream and slightly modified to work with the Raptor harness.<br />
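The sum-then-geometric-mean aggregation described above can be sketched like this (made-up numbers and only two subtests for brevity; the conversion to the final runs/minute score is omitted):<br />

```python
import math

def speedometer_aggregate(subtest_internal_results):
    """subtest_internal_results maps each subtest to the results of
    its internal benchmarks. As described above: sum each subtest's
    internal results, then take the geometric mean of the sums."""
    sums = [sum(values) for values in subtest_internal_results.values()]
    return math.exp(sum(math.log(s) for s in sums) / len(sums))
```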
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames is recorded, along with the percentage of dropped frames. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the media files involved, these tests are currently run as live-site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on GitHub.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests], taken from upstream and slightly modified to work with the Raptor harness.<br />
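The reporting described above can be sketched as follows (hypothetical video names and frame counts):<br />

```python
def summarize_playback(per_video_frames):
    """per_video_frames maps video name -> (dropped, decoded) frame
    counts. Per the description above: record the dropped percentage
    per video, and report the mean number of dropped frames across
    all videos as the overall result."""
    details = {
        name: {"dropped": dropped,
               "decoded": decoded,
               "percent_dropped": 100.0 * dropped / decoded}
        for name, (dropped, decoded) in per_video_frames.items()
    }
    overall = (sum(dropped for dropped, _ in per_video_frames.values())
               / len(per_video_frames))
    return details, overall
```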
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
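That constraint can be expressed as a quick sanity check (a hypothetical helper for illustration, not part of Raptor itself):<br />

```python
def check_scenario_timeouts(scenario_time_ms, page_timeout_ms):
    """Raise if the page timeout would fire before the scenario timer
    ends, per the requirement described above."""
    if page_timeout_ms <= scenario_time_ms:
        raise ValueError(
            "page-timeout (%d ms) must exceed scenario-time (%d ms)"
            % (page_timeout_ms, scenario_time_ms))

# e.g. a 20-minute scenario with a 22-minute page timeout is valid:
check_scenario_timeouts(1200000, 1320000)
```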
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power measurement tests output three different PERFHERDER_DATA entries. The first contains the power usage of the test itself; the second contains the power usage of the Android OS (named os-baseline), measured over the course of 1 minute; and the third (named after the test with '%change-power' appended) combines the two measures, showing the percentage increase in power consumption while the test is running compared to when it is not. In these perfherder data blobs, we provide the power consumption attributed to the CPU, wifi, and screen in milliampere-hours (mAh).<br />
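One plausible formulation of that percent-change measure is sketched below; the exact arithmetic Raptor uses (including any normalization for the different measurement durations) may differ:<br />

```python
def percent_change_power(test_mah, baseline_mah):
    """Percentage increase in power consumption while the test runs,
    relative to the os-baseline measurement. Both values are in
    milliampere-hours (mAh); this is an illustrative formula only."""
    return 100.0 * (test_mah - baseline_mah) / baseline_mah
```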
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, e.g. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically limit the number of test page-cycles to a maximum of 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shut down the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30 second Raptor pause which happens right after Firefox has started up, in the ALREADY OPEN current tab, type "about:debugging" for the URL.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "Console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / or the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "Console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (e.g. speedometer, motionmark) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor pageload tests that use Mitmproxy (e.g. tp6, gdocs): if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running, so a new instance couldn't start up. In this case, you need to kill the existing Mitmproxy server processes, e.g.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
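The PID lookup above can also be scripted; this stdlib-only sketch parses `ps -ax` output and returns the first mitmdump PID (pass the output explicitly, or let it invoke `ps` itself):<br />

```python
import subprocess

def first_mitmdump_pid(ps_output=None):
    """Return the PID of the first mitmdump process listed by
    `ps -ax`, or None if no mitmdump process is running. Matching on
    'mitmdump' naturally skips the 'grep mitm' line itself."""
    if ps_output is None:
        ps_output = subprocess.check_output(["ps", "-ax"], text=True)
    for line in ps_output.splitlines():
        if "mitmdump" in line:
            return int(line.split()[0])
    return None

# e.g. kill it with: subprocess.run(["kill", str(first_mitmdump_pid())])
```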
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your Android device (e.g. Google Pixel 2) all set up and connected via USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your Android device should be listed (e.g. "Firefox Custom on Android Pixel 2"); click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including that from Raptor) is prefixed with "GeckoConsole", so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from what exists in mozilla-central. Taskcluster payloads are expressed as Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the raptor revision is tied to the latest nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia</div>
<hr />
<div>
<br />
There are two different types of Raptor page-load tests; warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or Android browser app) is started up only once, so the browser is warm on each page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or android browser app) is shutdown and re-started between page load cycles; so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shutdown<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The android app is shutdown<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, thus avoiding having to use the 'certutil' tool to re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of using the mitmproxy recordings. This option is available when running on Try only, as we don't want to submit data from live pages to Perfherder (since live page content will always be changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
And push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
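When post-processing logs like the one above, the JSON blob can be pulled out of the line like this (a small sketch, not Raptor's own log parser):<br />

```python
import json

def parse_perfherder_line(log_line):
    """Extract and decode the PERFHERDER_DATA JSON blob from a log
    line, or return None if the line carries no perfherder data."""
    marker = "PERFHERDER_DATA: "
    index = log_line.find(marker)
    if index == -1:
        return None
    return json.loads(log_line[index + len(marker):])
```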
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2: internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc.]<br />
# set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
* adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the {Fenix, Fennec, GeckoView, Reference Browser} on-Android app<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
* adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
In the Perfherder (or Firefox Health) dashboards for these power usage tests, all data points are in milliampere-hour (mAh) units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important that you include'' '''`--power-test`''' ''when running power-usage measurement tests, as it helps ensure that local test-measurement data doesn't accidentally get submitted to Perfherder.''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) ''and'' Pixel 2 Android 8 (p2-8-0-android-aarch64/) target platforms<br />
* pgo builds tend to take about 10-15 minutes longer to complete than their opt counterparts (based on limited empirical evidence)<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the PATH of the terminal/shell in which you build Firefox and run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and must know the path to the Chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests for each supported app, as well as each subtest available in each suite (e.g. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
** Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The geckoview example app is already installed on the device (from ./mach bootstrap, above). Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required.<br />
<br />
Note: if you want to run the Gecko profiler, or need a build with symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple notes about debugging:<br />
<br />
* Raptor browser-extension console messages ''do'' appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the "localhost: Speedometer..." tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
 $ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase the timeout, e.g.:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_pages = true` and a page-load test measures a hero element (set via the test INI 'measure' option), the hero-element measurement is automatically dropped - because hero elements only exist in our Mitmproxy recordings, not in live pages.<br />
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA so live sites can be specifically seen in perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos - because we don't want live-site data being ingested by Perfherder and used for regression alerting etc. Therefore, as a safety catch, tests using live sites won't run at all unless running locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome.<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
==== Running Fennec ESR 68 tests ====<br />
<br />
Fennec 85 tests are set up to run on the latest Fennec ESR 68 build.<br />
<br />
To start a try run on Fennec ESR 68 run:<br />
<br />
$ ./mach try fuzzy -q="fennec68" --full<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles, which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, start Firefox manually, browse to [https://perf-html.io perf-html.io], click "Browse", and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it is helpful to also measure threads which are not part of the default set, such as the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to ''mach'', once for each extra thread to profile<br />
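For example, a hypothetical test manifest fragment (the '''gecko_profile_threads''' key name comes from the list above; the section name and thread names are illustrative):<br />

```ini
; Hypothetical manifest fragment -- section and thread names are
; illustrative; gecko_profile_threads is the documented entry.
[raptor-example-test]
gecko_profile_threads = GeckoMain,Compositor,MediaPlayback
```

Equivalently, extra threads can be added one at a time on the command line via the '''--gecko-profile-thread''' argument described above.<br />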
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remains on while on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
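As a rough sketch of what issuing such tuning commands over ADB looks like (the actual commands and sysfs paths live in raptor.py and differ per device; the values below are illustrative, not Raptor's):<br />

```python
import subprocess

# Illustrative tuning commands only -- the real set lives in raptor.py
# and is partly device-specific.
TUNING_COMMANDS = [
    # device-agnostic
    "settings put global stay_on_while_plugged_in 3",  # stay awake on USB
    "echo 0 > /proc/sys/vm/swappiness",                # virtual memory
    "stop thermal-engine",                             # thermal throttling
    # device-specific (path/value are illustrative)
    "echo performance > /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor",
]

def tune_device(serial, dry_run=True):
    """Run each tuning command via `adb -s <serial> shell <cmd>`.
    With dry_run=True we only return the argv lists instead of executing."""
    calls = [["adb", "-s", serial, "shell", cmd] for cmd in TUNING_COMMANDS]
    if not dry_run:
        for argv in calls:
            subprocess.run(argv, check=True)
    return calls

calls = tune_device("emulator-5554")
print(len(calls))  # 4
```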
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
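The reporting scheme above (drop the first page-cycle, report the median of the rest) can be sketched as:<br />

```python
from statistics import median

def summarize_pageload(cycle_values):
    """Per the reporting scheme above: drop the first page-cycle
    (startup noise) and take the median of the remaining values (ms)."""
    if len(cycle_values) < 2:
        raise ValueError("need at least two page-cycles")
    return median(cycle_values[1:])

# e.g. fcp measurements (ms) from 5 page-cycles of one test page
print(summarize_pageload([900, 410, 380, 430, 395]))  # 402.5
```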
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 27 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27].<br />
<br />
===== raptor-tp6m-1 to 9-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 9-fennec68].<br />
<br />
===== raptor-tp6m-cold-1 to 27-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27-fennec68].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest, each subtest is run for 15 seconds, repeat this 5 times and report the median value<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
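The summarization above can be sketched as follows (subtest names and FPS values are made up for illustration):<br />

```python
from statistics import median
from math import prod

def motionmark_summary(subtest_runs):
    """subtest_runs: {subtest_name: [fps from each of the 5 repeats]}.
    Subtest value = median of its repeats; suite value = geometric
    mean of the subtest medians, per the description above."""
    medians = {name: median(runs) for name, runs in subtest_runs.items()}
    gmean = prod(medians.values()) ** (1.0 / len(medians))
    return medians, gmean

medians, score = motionmark_summary({
    "Multiply": [30.0, 32.0, 31.0, 29.0, 33.0],
    "Leaves":   [58.0, 60.0, 59.0, 61.0, 57.0],
})
print(medians["Multiply"], round(score, 2))  # 31.0 42.77
```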
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
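The summarization described above (sum each subtest's internal results, then take the geometric mean of the sums) can be sketched as follows; this shows only that aggregation step, not Speedometer's full runs/minute scoring, and the subtest values are illustrative:<br />

```python
from math import prod

def speedometer_summary(subtests):
    """subtests: {name: [internal benchmark results]} -- per the
    description above, each subtest reports the sum of its internal
    results and the score is the geometric mean of those sums."""
    sums = {name: sum(vals) for name, vals in subtests.items()}
    score = prod(sums.values()) ** (1.0 / len(sums))
    return sums, score

# Two of the 16 subtests, with made-up values for the 9 internal benchmarks
sums, score = speedometer_summary({
    "VanillaJS-TodoMVC": [12.0] * 9,   # sums to 108.0
    "React-TodoMVC":     [27.0] * 9,   # sums to 243.0
})
print(round(score, 1))  # 162.0
```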
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames is recorded, along with the percentage of frames dropped. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the media files used, these tests are currently run as live-site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on GitHub.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests] benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
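The per-video and overall summarization described above can be sketched as (video names and frame counts are illustrative):<br />

```python
from statistics import mean

def summarize_playback(videos):
    """videos: {name: (dropped_frames, decoded_frames)}. Records the
    dropped percentage per video; the overall result is the mean
    number of dropped frames across all videos, per the text above."""
    per_video = {
        name: {"dropped": d, "decoded": n, "pct": 100.0 * d / n}
        for name, (d, n) in videos.items()
    }
    overall = mean(d for d, _ in videos.values())
    return per_video, overall

per_video, overall = summarize_playback({
    "H264.1080p60": (6, 3600),
    "VP9.1080p60":  (2, 3600),
})
print(overall)
```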
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
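A hypothetical scenario-test manifest fragment illustrating that constraint (the `scenario_time` and `page_timeout` names follow the spelling used in the power-test description earlier on this page; the section name and other keys are illustrative):<br />

```ini
; Hypothetical fragment -- 20-minute scenario with a 22-minute page timeout
[raptor-scn-power-idle-example]
type = scenario
scenario_time = 1200000
page_timeout = 1320000   ; must exceed scenario_time or the run bails early
```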
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power-measurement tests output 3 different PERFHERDER_DATA entries. The first contains the power usage of the test itself. The second contains the power usage of the Android OS (named os-baseline) over the course of 1 minute. The third (named after the test with '%change-power' appended) combines these two measures to show the percentage increase in power consumption when the test is run, compared to when it is not running. In these perfherder data blobs, we provide power consumption attributed to the cpu, wifi, and screen in milliampere-hours (mAh).<br />
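The exact formula behind the '%change-power' entry is not spelled out here, so the following is only a plausible sketch under an assumed scheme: scale the 1-minute os-baseline to the test duration, then report the increase relative to that baseline:<br />

```python
def pct_change_power(test_mah, baseline_mah_1min, test_minutes):
    """ASSUMPTION: the 1-minute os-baseline is scaled linearly to the
    test duration, and the increase is reported relative to it. The
    real %change-power computation may differ."""
    scaled_baseline = baseline_mah_1min * test_minutes
    return 100.0 * (test_mah - scaled_baseline) / scaled_baseline

# e.g. 26 mAh used during a 20-minute test vs a 1 mAh/minute idle baseline
print(pct_change_power(26.0, 1.0, 20))  # 30.0
```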
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup settle pause from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30 second Raptor pause which happens right after Firefox has started up, in the ALREADY OPEN current tab, type "about:debugging" for the URL.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open in a minute, and click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. To have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
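As a rough illustration of that second hack, the early return acts as a guard that skips the shutdown logic entirely (the class and attribute names below are made up for illustration, not Raptor's actual code):<br />

```python
class ControlServer:
    """Toy stand-in for Raptor's control server; names are hypothetical."""

    def __init__(self):
        self.browser_running = True

    def shutdown_browser(self):
        # Temporary debugging hack: bail out before any shutdown logic
        # runs, so the browser and its debug consoles stay open.
        return
        self.browser_running = False  # original shutdown logic, now unreachable

server = ControlServer()
server.shutdown_browser()
# the browser is left running for inspection
```

Remember to revert the hack once you're done debugging.<br />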
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
This section concerns debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That means a Mitmproxy server was still running from the previous session, so the new one couldn't start up. In this case, you need to kill the stale Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
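If you find yourself doing this often, the clean-up can be scripted. A small illustrative Python helper (not part of Raptor) that picks the first mitmdump PID out of `ps` output, as in the session above:<br />

```python
import subprocess

def first_mitm_pid(ps_output):
    """Return the PID of the first mitmdump process listed, or None."""
    for line in ps_output.splitlines():
        if "mitmdump" in line and "grep" not in line:
            return int(line.split()[0])
    return None

def kill_stale_mitmproxy():
    # Equivalent of `ps -ax | grep mitm` followed by `kill <pid>`.
    ps = subprocess.run(["ps", "-ax"], capture_output=True, text=True).stdout
    pid = first_mitm_pid(ps)
    if pid is not None:
        subprocess.run(["kill", str(pid)])
```
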
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: when debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from what exists in mozilla-central. Taskcluster payloads are expressed in Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the raptor revision is tied to the latest nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia<br />
<br />
There are two different types of Raptor page-load tests: warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or android browser app) is started up just once, so the browser is warm on each page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in the overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
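The replicate handling above can be sketched as follows (illustrative only; the exact summary statistic Raptor applies to the remaining replicates is assumed here to be a median):<br />

```python
from statistics import median

def summarize_pageload(replicates, discard_first=True):
    """Collapse per-page-load replicates into one suite value.

    Warm page-load tests drop the first replicate (first-load noise);
    cold page-load tests keep every browser cycle.
    """
    values = replicates[1:] if discard_first else replicates
    return median(values)

# Warm: the noisy first load is excluded from the metric.
warm_value = summarize_pageload([900, 510, 505, 498, 502])
# Cold: all browser cycles contribute.
cold_value = summarize_pageload([900, 510, 505, 498, 502], discard_first=False)
```
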
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in the overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or android browser app) is shut down and re-started between page-load cycles, so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shutdown<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The android app is shutdown<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, thus avoiding having to use the 'certutil' tool and re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of using the mitmproxy recordings. This option is available when running on Try only, as we don't want to submit data from live pages to Perfherder (since live page content will always be changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
And push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
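For example, the attribute can be added either suite-wide or under a single page/subtest (the section names below are illustrative):<br />

```ini
[DEFAULT]
# applies to every page in the suite
use_live_sites = true

[raptor-tp6-amazon-firefox]
# or scope it to a single page/subtest instead
use_live_sites = true
```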
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
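These PERFHERDER_DATA entries are plain JSON after the marker, so they can be pulled out of a log programmatically. A small illustrative sketch (not part of Raptor) using the CPU blob above:<br />

```python
import json

MARKER = "PERFHERDER_DATA: "

def parse_perfherder(log_line):
    """Extract the JSON blob that follows the PERFHERDER_DATA marker."""
    start = log_line.index(MARKER) + len(MARKER)
    return json.loads(log_line[start:])

line = ('raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, '
        '"suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", '
        '"lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, '
        '"subtests": [{"lowerIsBetter": true, "unit": "%", '
        '"name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], '
        '"type": "cpu", "unit": "%"}]}')
data = parse_perfherder(line)
```
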
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2: internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc.]<br />
# set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
* adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the on-Android app (Fenix, Fennec, GeckoView, or Reference Browser)<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
* adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
In the Perfherder (or Firefox Health) dashboards for these power usage tests, all data points have milli-Ampere-hour units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* It is important that you include '''--power-test''' when running power-usage measurement tests, as that helps ensure that local test-measurement data doesn't accidentally get submitted to Perfherder.<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) '''and''' Pixel 2 Android 8 (p2-8-0-android-aarch64/) target platforms<br />
* PGO builds tend to take about 10-15 minutes longer to complete than their opt counterparts (from limited empirical evidence)<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the path in the terminal/window in which you build Firefox / run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor is expecting the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The geckoview example app must be installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required.<br />
<br />
Note: if you need to run the Gecko profiler, or need a build with build symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer...' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
 $ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_pages = true` and a page-load test is measuring a hero element (set in the test INI 'measure' option), the hero-element measurement will automatically be dropped - because the hero elements only exist in our Mitmproxy recordings and not in live pages.<br />
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA so live sites can be specifically seen in perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos - because we don't want live site data being ingested by perfherder and used for regression alerting etc. Therefore as a safety catch, if using live sites the test won't even run unless running locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it will be helpful to also measure threads which are not part of the default set, such as the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to '''mach''' for each extra thread to profile<br />
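For example, the two options above might be used like this (a sketch; whether the manifest entry and the flag can be combined exactly as shown is an assumption):<br />

```ini
; In a test's manifest entry (comma-separated thread names):
gecko_profile_threads = MediaPlayback

; Or on the command line, one flag per extra thread:
;   ./mach raptor --test raptor-tp6-1 --gecko-profile --gecko-profile-thread MediaPlayback
```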
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remain on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, e.g.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, e.g.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
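The reporting scheme above can be sketched in a few lines of Python (an illustration of the described summarization, not Raptor's actual implementation; `cycles` is a hypothetical list of per-page-cycle measurements for one metric):<br />

```python
import statistics

def summarize_pagecycles(cycles):
    """Summarize raw per-page-cycle values (in ms) as described above:
    drop the first page-cycle (startup noise), then report the median
    of the remaining values."""
    if len(cycles) < 2:
        raise ValueError("need at least two page-cycles")
    return statistics.median(cycles[1:])

# Example: five hypothetical fcp measurements (ms); the noisy first
# cycle (900) is excluded from the reported median.
print(summarize_pagecycles([900, 510, 498, 505, 530]))  # 507.5
```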
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 27 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27].<br />
<br />
===== raptor-tp6m-1 to 9-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 9-fennec68].<br />
<br />
===== raptor-tp6m-cold-1 to 27-fennec68 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Fennec ESR 68 App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the remaining page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 14-fennec68].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest, each subtest is run for 15 seconds, repeat this 5 times and report the median value<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, taken from upstream and slightly modified to work with the Raptor harness.<br />
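The summarization above can be sketched as follows (an illustration only, not Raptor's code; `demo` uses two hypothetical subtests instead of Speedometer's sixteen, and the final conversion to a runs/minute score is omitted):<br />

```python
import math

def speedometer_score(subtests):
    """Sketch of the summarization described above: for each subtest,
    sum its internal benchmark results; the suite value is the
    geometric mean of those per-subtest sums."""
    sums = [sum(results) for results in subtests.values()]
    return math.exp(sum(math.log(s) for s in sums) / len(sums))

# Two hypothetical subtests with made-up internal results (ms):
demo = {"VanillaJS-TodoMVC": [12, 10, 11], "React-TodoMVC": [20, 22, 19]}
print(round(speedometer_score(demo), 2))  # 44.87
```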
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames is recorded, as well as the percentage of frames dropped. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the media files used, these tests are currently run as live-site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on Github.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests] benchmark, taken from upstream and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
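A minimal scenario-test manifest entry might look like this (a sketch with an invented test name; the key names follow the settings described above, and the values mirror the 20/22-minute example used later on this page):<br />

```ini
; Hypothetical scenario test entry; scenario_time must be less than
; page_timeout so Raptor doesn't exit before the scenario timer ends.
[raptor-scn-example-idle]
scenario_time = 1200000   ; 20 minutes, in milliseconds
page_timeout = 1320000    ; 22 minutes, in milliseconds
```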
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power measurement tests output 3 different PERFHERDER_DATA entries:<br />
* the power usage of the test itself<br />
* the power usage of the Android OS (named os-baseline), measured over the course of 1 minute<br />
* the percentage increase in power consumption when the test is run, compared to when it is not running (named after the test with '%change-power' appended); this combines the first two measures<br />
In these perfherder data blobs, we provide power consumption attributed to the CPU, wifi, and screen in milliampere-hours (mAh).<br />
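The '%change-power' combination described above amounts to a relative comparison against the idle OS baseline, which can be sketched as (illustrative only, not Raptor's actual computation; inputs are hypothetical mAh totals over comparable durations):<br />

```python
def power_percent_change(test_mah, baseline_mah):
    """Percentage increase in power use while the test runs, relative
    to the idle OS baseline (both in milliampere-hours)."""
    return (test_mah - baseline_mah) / baseline_mah * 100.0

# Hypothetical totals: 26 mAh during the test vs a 20 mAh OS baseline
print(power_percent_change(26.0, 20.0))  # 30.0
```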
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, e.g. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30-second Raptor pause which happens right after Firefox has started up, in the already-open tab, enter "about:debugging" as the URL.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "Console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console", and select the "Console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it basically relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor pageload tests that use Mitmproxy (e.g. tp6, gdocs): if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That means the Mitmproxy server was still running from the previous session, so a new instance couldn't start. In this case, you need to kill the Mitmproxy server processes, e.g.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your Android device should be listed (e.g. "Firefox Custom on Android Pixel 2"); click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from what exists in mozilla-central. Taskcluster payloads are expressed as Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the Raptor revision is tied to the latest Nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia</div>
<hr />
<div>=== Page-Load Tests ===<br />
<br />
There are two different types of Raptor page-load tests; warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or Android browser app) is started up only once, so the browser is warm on each subsequent page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or Android browser app) is shut down and restarted between page-load cycles, so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shut down<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The Android app is shut down<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, avoiding the need to use the 'certutil' tool to re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of the mitmproxy recordings. This option is available when running on Try only, as we don't want to submit data from live pages to Perfherder (since live page content will always be changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
And push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (e.g. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
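The PERFHERDER_DATA blob shown above is plain JSON embedded in a log line, so it is easy to extract when post-processing logs. A minimal sketch (not part of Raptor itself; the helper name is ours):<br />

```python
import json

def parse_perfherder_data(log_line):
    """Extract the PERFHERDER_DATA JSON blob from a raptor log line,
    or return None if the line doesn't contain one."""
    marker = "PERFHERDER_DATA: "
    index = log_line.find(marker)
    if index == -1:
        return None
    return json.loads(log_line[index + len(marker):])

# a trimmed-down version of the log line shown above
line = ('22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: '
        '{"framework": {"name": "raptor"}, "suites": '
        '[{"name": "raptor-scn-power-idle-bg-fenix-cpu", '
        '"type": "cpu", "unit": "%", "value": 0, "subtests": []}]}')

data = parse_perfherder_data(line)
print(data["suites"][0]["name"])  # raptor-scn-power-idle-bg-fenix-cpu
```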
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2: internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc.]<br />
# set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
** adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the {Fenix, Fennec, GeckoView, Reference Browser} on-Android app<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
** adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
In the Perfherder (or Firefox Health) dashboards for these power-usage tests, all data points are in milliampere-hour (mAh) units, with lower values being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important to include'' '''`--power-test`''' ''when running power-usage measurement tests, as that helps ensure that local test-measurement data doesn't accidentally get submitted to Perfherder''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) '''and''' Pixel 2 Android 8 (p2-8-0-android-aarch64/) target platforms<br />
* PGO builds tend to take about 10-15 minutes longer to complete than their opt counterparts (based on limited empirical evidence)<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the PATH of the terminal/window in which you build Firefox / run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and the path to the chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor is expecting the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The geckoview example app is already installed on the device (from ./mach bootstrap, above). Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required.<br />
<br />
Note: if you want to run the Gecko profiler, or need a build with symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then on "Main Process" or the "localhost: Speedometer..." tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
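For example, a hypothetical tp6 INI excerpt might look like this (the section and page names are illustrative only, not taken from the real files):<br />

```ini
[DEFAULT]
; applies to every page in the suite
use_live_pages = true

[raptor-tp6-amazon-firefox]
; or set it under a single page/subtest heading instead
use_live_pages = true
```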
<br />
When `use_live_pages = true` and a page-load test is measuring a hero element (set in the test INI 'measure' option), the hero element measurement is automatically dropped, because the hero elements only exist in our Mitmproxy recordings and not in live pages.<br />
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA, so live-site results can be identified in Perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos, because we don't want live-site data being ingested by Perfherder and used for regression alerting etc. Therefore, as a safety catch, a test using live sites will only run locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually, browse to [https://perf-html.io perf-html.io], click on "Browse", and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it is helpful to also measure threads which are not part of the default set, such as the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to '''mach''', once for each extra thread to profile<br />
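As a sketch, the manifest form might look like this (only the '''gecko_profile_threads''' entry name comes from the docs above; the test section name is illustrative):<br />

```ini
[raptor-tp6-amazon-firefox]
; comma-separated list of extra thread names to profile
gecko_profile_threads = MediaPlayback
```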
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remains on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
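The per-page summarization described above (drop the first page-cycle, take the median of the rest) can be sketched as a simplified illustration, not the actual Raptor code:<br />

```python
from statistics import median

def summarize_page(replicates):
    """Summarize one page's per-page-cycle measurements (in ms):
    drop the first page-cycle (startup noise), then report the
    median of the remaining cycles."""
    return median(replicates[1:])

# e.g. five first-contentful-paint replicates, one per page-cycle
print(summarize_page([900, 510, 530, 520, 540]))  # 525.0
```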
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4 ].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 27 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 27].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest; each subtest is run for 15 seconds, repeated 5 times, and the median value is reported<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
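The score summarization above (geometric mean of the 16 per-subtest sums) can be sketched as follows; this is a simplified illustration of the math, not the actual harness code:<br />

```python
from math import prod

def speedometer_score(subtest_sums):
    """Geometric mean of the per-subtest sums (16 of them in the
    real benchmark)."""
    return prod(subtest_sums) ** (1.0 / len(subtest_sums))

# toy numbers, not real benchmark data
print(round(speedometer_score([100.0, 400.0]), 6))  # 200.0
```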
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames, as well as the percentage of dropped frames, is recorded. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the used media files those tests are currently run as live site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on Github.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
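The reporting described above can be sketched as (a simplified illustration with made-up frame counts, not the actual harness code):<br />

```python
def summarize_playback(per_video):
    """per_video: list of (dropped, decoded) frame counts per video.
    Returns the per-video dropped-frame percentages and the overall
    mean number of dropped frames (the reported result)."""
    percentages = [100.0 * dropped / decoded for dropped, decoded in per_video]
    mean_dropped = sum(dropped for dropped, _ in per_video) / len(per_video)
    return percentages, mean_dropped

pcts, mean_dropped = summarize_playback([(5, 1000), (0, 800)])
print(pcts, mean_dropped)  # [0.5, 0.0] 2.5
```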
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests] benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power-measurement tests output 3 different PERFHERDER_DATA entries. The first contains the power usage of the test itself; the second contains the power usage of the Android OS (named os-baseline) over the course of 1 minute; and the third (named after the test with '%change-power' appended) combines these two measures to show the percentage increase in power consumption when the test is run, compared to when it is not running. In these Perfherder data blobs, we provide the power consumption attributed to the CPU, WiFi, and screen, in milliampere-hours (mAh).<br />
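The '%change-power' entry is a percentage increase; a minimal sketch of such a calculation follows. Note this is our illustration only: the exact normalization Raptor applies (e.g. scaling the 1-minute baseline to the test duration) may differ.<br />

```python
def percent_change_power(test_mah, baseline_mah):
    """Percentage increase of the test's power usage over the
    os-baseline, with both values expressed in mAh over the same
    duration (an assumption of this sketch)."""
    return 100.0 * (test_mah - baseline_mah) / baseline_mah

print(percent_change_power(3.0, 2.0))  # 50.0
```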
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to a maximum of 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open after a moment; click the "Console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / or the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "Console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it basically relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs): if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running before, so it couldn't start up. In this case, you need to kill the Mitmproxy server processes, i.e:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
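The process listing and kill above can also be scripted. Here is a minimal sketch (assuming a Unix-like host where `ps -ax` is available) that finds and terminates any stray mitmdump processes; it kills all of them rather than just the first, which is a safe superset of the manual step above:<br />

```python
import os
import signal
import subprocess

def find_mitmdump_pids(ps_output):
    """Return the PIDs of mitmdump processes found in `ps -ax` output.

    Lines containing 'grep' are skipped, since a `grep mitm` pipeline
    entry is not a real mitmdump process.
    """
    pids = []
    for line in ps_output.splitlines():
        if "mitmdump" in line and "grep" not in line:
            pids.append(int(line.split()[0]))  # first column is the PID
    return pids

if __name__ == "__main__":
    ps = subprocess.run(["ps", "-ax"], capture_output=True, text=True)
    for pid in find_mitmdump_pids(ps.stdout):
        os.kill(pid, signal.SIGTERM)  # terminate each stray mitmdump
```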
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from what exists in mozilla-central. Taskcluster payloads are expressed in Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the Raptor revision is tied to the latest Nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia<br />
<br />
=== Page-Load Tests ===<br />
<br />
There are two different types of Raptor page-load tests; warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or android browser app) is just started up once; so the browser is warm on each page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics because of first-load noise; however, they are listed in the perfherder-data.json/raptor.json artifacts<br />
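The first-load exclusion described above can be sketched as follows; the median here is only an illustrative summary statistic (Raptor's actual filtering/summarization code may differ):<br />

```python
from statistics import median

def summarize_warm_pageload(replicates):
    """Drop the noisy first page-load, then summarize the remainder.

    `replicates` is the list of per-page-cycle measurements (e.g. fcp
    values in ms); the first entry is discarded as first-load noise.
    """
    if len(replicates) < 2:
        raise ValueError("need at least two page-cycles")
    return median(replicates[1:])

# the 900 ms first load is ignored; the median of the rest is reported
print(summarize_warm_pageload([900, 505, 510, 515]))  # → 510
```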
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or android browser app) is shutdown and re-started between page load cycles; so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shutdown<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The android app is shutdown<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, thus avoiding having to use the 'certutil' tool to re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
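The cold-load steps above amount to a simple loop; in this sketch the `driver` hooks (`clear_app_data`, `start_app`, etc.) are hypothetical stand-ins for illustration, not Raptor's actual API:<br />

```python
def run_cold_cycles(browser_cycles, driver):
    """Run one full app start/measure/shutdown per browser cycle.

    `driver` is any object providing the hypothetical hooks used below;
    measurements from *all* cycles feed the overall results.
    """
    results = []
    for _cycle in range(browser_cycles):
        driver.clear_app_data()          # adb shell pm clear ...
        driver.copy_profile_to_device()  # cert DB reused after cycle 1
        driver.start_app()
        driver.settle(30)                # post-startup settle pause
        results.append(driver.load_test_url())
        driver.stop_app()
    return results
```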
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of using the mitmproxy recordings. This option is available when running on Try only, as we don't want to submit data from live pages to Perfherder (since live page content will always be changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
And push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
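For illustration, here is how such a boolean attribute behaves when placed in the suite's default section. This is only a sketch using the stdlib INI parser; Raptor reads its test manifests with its own loader:<br />

```python
from configparser import ConfigParser

# A DEFAULT-section value applies to every section (i.e. every test
# page) in the file unless a section overrides it.
ini_text = """
[DEFAULT]
use_live_sites = true

[raptor-tp6-amazon-firefox]
"""

cfg = ConfigParser()
cfg.read_string(ini_text)
print(cfg.getboolean("raptor-tp6-amazon-firefox", "use_live_sites"))  # → True
```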
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
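Since the blob is plain JSON after the `PERFHERDER_DATA: ` marker, it can be pulled out of a log line with a few lines of Python (a sketch; the payload shape is taken from the example above):<br />

```python
import json

MARKER = "PERFHERDER_DATA: "

def parse_perfherder(log_line):
    """Decode the JSON payload that follows the PERFHERDER_DATA marker."""
    payload = log_line.split(MARKER, 1)[1]
    return json.loads(payload)

line = ('raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, '
        '"suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "value": 0, '
        '"type": "cpu", "unit": "%", "subtests": []}]}')
data = parse_perfherder(line)
print(data["suites"][0]["name"])  # → raptor-scn-power-idle-bg-fenix-cpu
```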
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2: internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc.]<br />
# set up to run Raptor from a Firefox source tree (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
* adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceed `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the {Fenix, Fennec, GeckoView, Reference Browser} on-Android app<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
* adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
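One of the steps above stresses that `page_timeout` must exceed `scenario_time`. A quick guard for that invariant, using the default values listed above (a sketch, not Raptor's actual validation code):<br />

```python
SCENARIO_TIME = 1_200_000  # 20 minutes, in milliseconds
PAGE_TIMEOUT = 1_320_000   # 22 minutes, in milliseconds

def check_timeouts(scenario_time, page_timeout):
    """Fail early if the page timeout would cut the scenario short."""
    if page_timeout <= scenario_time:
        raise ValueError("page_timeout must exceed scenario_time")
    return page_timeout - scenario_time  # headroom, in milliseconds

print(check_timeouts(SCENARIO_TIME, PAGE_TIMEOUT))  # → 120000
```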
<br />
In the Perfherder (or Firefox Health) dashboards for these power usage tests, all data points have milli-Ampere-hour units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
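As a toy illustration of that proportional smearing (the numbers are made up; the real accounting is done by Android's batterystats):<br />

```python
def smear_proportional(hidden_mah, per_app_mah):
    """Distribute hidden battery-sipper usage across apps, proportionally
    to each app's own measured usage (all values in mAh)."""
    total = sum(per_app_mah.values())
    return {app: used + hidden_mah * used / total
            for app, used in per_app_mah.items()}

apps = {"browser": 30.0, "launcher": 10.0}
# the browser drew 3/4 of the measured power, so it absorbs 3/4 of the
# hidden 8.0 mAh as well
print(smear_proportional(8.0, apps))  # → {'browser': 36.0, 'launcher': 12.0}
```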
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important that you include'' '''--power-test''' ''when running power-usage measurement tests, as that will help ensure that local test-measurement data doesn't accidentally get submitted to Perfherder''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) ''and'' Pixel 2 Android 8 (p2-8-0-android-aarch64/*) target platforms<br />
* PGO builds tend to take about 10-15 minutes longer to complete than their opt counterparts, based on limited empirical evidence<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the path in the terminal/window in which you build Firefox / run Raptor, as Raptor uses Git to check out a local copy of some of the performance benchmarks' sources.<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor is expecting the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The geckoview example app is already installed on the device (from ./mach bootstrap, above). Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required.<br />
<br />
Note: when you need to run the Gecko profiler, or need a build with symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer...' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
 $ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_pages = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_pages = true` and a page-load test measures a hero element (set in the test INI 'measure' option), the hero element measurement will automatically be dropped, because hero elements only exist in our Mitmproxy recordings and not in live pages.<br />
<br />
The word 'live' will be appended to the test name in the PERFHERDER_DATA so live sites can be specifically seen in perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos - because we don't want live site data being ingested by perfherder and used for regression alerting etc. Therefore as a safety catch, if using live sites the test won't even run unless running locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of page-cycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line.<br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, start Firefox manually, browse to [https://perf-html.io perf-html.io], click on "Browse", and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push, e.g.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of page-cycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it is helpful to also measure threads which are not part of the default set, such as the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to ''mach'' for each extra thread to profile<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts;<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remain on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
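The device-agnostic and device-specific tweaks above all boil down to issuing 'adb shell' commands against the device. A minimal sketch of the idea (this is NOT Raptor's actual code; the function name, sysfs paths, and values are illustrative and device-dependent):<br />

```python
def tuning_commands(serial, gpu_governor="performance"):
    """Build example 'adb shell' commands of the kind Raptor issues when
    tuning an Android device before a test run.  The sysfs paths and
    values here are illustrative; real ones vary per device."""
    writes = {
        "/proc/sys/vm/swappiness": "0",                             # virtual memory
        "/sys/class/kgsl/kgsl-3d0/devfreq/governor": gpu_governor,  # GPU governor
    }
    cmds = [f"adb -s {serial} shell 'echo {val} > {path}'"
            for path, val in writes.items()]
    # Keep the device awake while it is on USB power.
    cmds.append(f"adb -s {serial} shell svc power stayon usb")
    return cmds

for cmd in tuning_commands("emulator-5554"):
    print(cmd)
```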
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browsers (e.g. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, e.g.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, e.g.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time (in ms) until the hero element appears on the page.<br />
<br />
Below are the details for page-load suites:<br />
<br />
===== raptor-tp6-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
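The reporting scheme above (drop the first page-cycle, report the median of the rest) can be sketched as follows; this is an illustration only, not Raptor's actual code, and the function name is hypothetical:<br />

```python
from statistics import median

def summarize_page(replicates_ms):
    """Summarize one tp6 page metric: drop the noisy first page-cycle,
    then report the median of the remaining replicates (in ms)."""
    if len(replicates_ms) < 2:
        raise ValueError("need at least two page-cycles")
    return median(replicates_ms[1:])

# e.g. five first-contentful-paint replicates; the first load is noisy
print(summarize_page([900, 510, 498, 505, 502]))  # 503.5
```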
<br />
===== raptor-tp6-cold-1 to 4 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-cold-1 to 4].<br />
<br />
===== raptor-tp6m-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
===== raptor-tp6m-cold-1 to 10 =====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load, cold<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-cold-1 to 10].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: the median FPS over 5 repeats, where each subtest run lasts 15 seconds<br />
** suite: the geometric mean of all the subtest values (9 subtests for animometer, 11 for htmlsuite)<br />
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
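The summarization above (sum each subtest's internal results, then take the geometric mean of the sums) can be sketched as follows. This illustrates only the described roll-up, not the harness's actual code, and omits the conversion to a runs/minute score:<br />

```python
def speedometer_score(subtests):
    """Roll up Speedometer-style results: each subtest contributes the
    sum of its internal benchmark results, and the overall value is the
    geometric mean of those sums."""
    sums = [sum(internal) for internal in subtests]
    product = 1.0
    for s in sums:
        product *= s
    return product ** (1.0 / len(sums))

# e.g. two (of the 16) subtests, each with a few internal results
print(round(speedometer_score([[10, 20], [30, 10]]), 2))  # 34.64
```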
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, taken from upstream and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames is recorded, along with the percentage of frames dropped. The overall reported result is the mean number of dropped video frames across all tested video files.<br />
* data: Given the size of the media files involved, these tests are currently run as live-site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on GitHub.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
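The reporting described above can be sketched as follows (illustrative only; the function name, data shapes, and video names are hypothetical, not the harness's actual code):<br />

```python
def youtube_playback_summary(results):
    """Per video, record dropped/decoded frame counts and the dropped
    percentage; the overall suite value is the mean number of dropped
    frames across all tested videos."""
    per_video = {
        name: {"dropped": dropped,
               "decoded": decoded,
               "percent_dropped": 100.0 * dropped / decoded}
        for name, (dropped, decoded) in results.items()
    }
    overall = sum(v["dropped"] for v in per_video.values()) / len(per_video)
    return per_video, overall

videos = {"H264.1080p60fps": (6, 3600), "VP9.2160p30fps": (18, 1800)}
print(youtube_playback_summary(videos)[1])  # 12.0
```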
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests], taken from upstream and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
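That constraint can be expressed as a simple sanity check (illustrative only; the function name is hypothetical):<br />

```python
def validate_scenario(page_timeout_ms, scenario_time_ms):
    """Raptor tears a test down when page-timeout fires, so for scenario
    tests page-timeout must exceed scenario-time."""
    if page_timeout_ms <= scenario_time_ms:
        raise ValueError("page-timeout must be greater than scenario-time")
    return True

# e.g. a 20-minute scenario with a 22-minute page timeout (both in ms)
print(validate_scenario(1320000, 1200000))  # True
```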
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power-measurement tests output 3 different PERFHERDER_DATA entries. The first contains the power usage of the test itself; the second contains the power usage of the Android OS over the course of 1 minute (named os-baseline); the third (named after the test with '%change-power' appended) combines the two, showing the percentage increase in power consumption when the test is running compared to when it is not. In these Perfherder data blobs, we report the power consumption attributed to the CPU, wifi, and screen in milliamp-hours (mAh).<br />
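A sketch of how a '%change-power' figure can be derived from the other two entries. This illustrates only the combination described above, not Raptor's actual implementation; the function name and numbers are hypothetical:<br />

```python
def pct_change_power(test_mah, baseline_mah_per_min, test_minutes):
    """Scale the 1-minute os-baseline power usage up to the test's
    duration, then report the percentage increase in consumption
    caused by running the test."""
    baseline_mah = baseline_mah_per_min * test_minutes
    return 100.0 * (test_mah - baseline_mah) / baseline_mah

# e.g. 26 mAh used during a 20-minute test vs. a 1 mAh/min idle baseline
print(pct_change_power(26.0, 1.0, 20))  # 30.0
```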
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, e.g. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control-server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (e.g. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it basically relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor page-load tests that use Mitmproxy (e.g. tp6, gdocs), note: if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running, so it couldn't start up. In this case, you need to kill the Mitmproxy server processes, e.g.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your Android device should be listed (e.g. "Firefox Custom on Android Pixel 2"); click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from how they are in mozilla-central. The Taskcluster payloads are expressed as Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the Raptor revision is tied to the latest Nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia</div>
<div><br />
<br />
=== Page-Load Tests ===<br />
<br />
There are two different types of Raptor page-load tests; warm page-load and cold page-load.<br />
<br />
==== Warm Page-Load ====<br />
For warm page-load tests, the desktop browser (or android browser app) is just started up once; so the browser is warm on each page-load.<br />
<br />
'''Raptor warm page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 24 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics b/c of first load noise; however they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
'''Raptor warm page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The tab is reloaded 14 more times; measurements taken each time<br />
* The measurements from the first page-load are not included in overall results metrics b/c of first load noise; however they are listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
==== Cold Page-Load ====<br />
For cold page-load tests, the desktop browser (or android browser app) is shutdown and re-started between page load cycles; so the browser is cold on each page-load. This is what happens for Raptor cold page-load tests:<br />
<br />
'''Raptor cold page-load test process when running on Firefox/Chrome/Chromium desktop:'''<br />
<br />
* A new browser profile is created<br />
* The desktop browser is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* A new tab is opened<br />
* The test URL is loaded; measurements taken<br />
* The tab is closed<br />
* The desktop browser is shutdown<br />
* Entire process is repeated for the remaining browser cycles (25 cycles total)<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
'''Raptor cold page-load test process when running on Firefox android browser apps:'''<br />
<br />
* The android app data is cleared (via `adb shell pm clear firefox.app.binary.name`)<br />
* A new browser profile is created<br />
* The new browser profile is copied onto the android device sdcard<br />
* The Firefox android app is started up<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* The test URL is loaded; measurements taken<br />
* The android app is shutdown<br />
* Entire process is repeated for the remaining browser cycles (15 cycles total)<br />
* Note that the SSL cert DB is only created once (browser cycle 1) and copied into the profile for each additional browser cycle, avoiding having to use the 'certutil' tool to re-create the DB on each cycle<br />
* The measurements from all browser cycles are used to calculate overall results<br />
<br />
==== Using Live Sites ====<br />
It is possible to use live web pages for the page-load tests instead of the mitmproxy recordings. This option is available only when running on Try, as we don't want to submit data from live pages to Perfherder (since live page content is always changing).<br />
<br />
To run a particular Raptor tp6 page-load test with live sites, open the raptor-tp6*.ini file ([https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tests folder]), and for the test default (or under a single page/subtest) just add this attribute:<br />
<br />
use_live_sites = true<br />
<br />
Then push that change to Try (./mach try fuzzy --full) and run the Raptor page-load test.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
=== Scenario Tests ===<br />
<br />
Currently, there are three subtypes of Raptor-run "scenario" tests, all on (and only on) Android:<br />
# '''power-usage tests'''<br />
# '''memory-usage tests'''<br />
# '''CPU-usage tests'''<br />
<br />
For a combined-measurement run with distinct Perfherder output for each measurement type, you can do:<br />
<br />
./mach raptor-test --test raptor-scn-power-idle-bg-fenix --app fenix --binary org.mozilla.fenix.performancetest --host 10.0.0.16 --power-test --memory-test --cpu-test<br />
<br />
Each measurement subtype (power-, memory-, and cpu-usage) will have a corresponding PERFHERDER_DATA blob:<br />
<br />
<pre>22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: {"framework": {"name": "raptor"}, "suites": [{"name": "raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, "alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, "unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, "alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}<br />
22:31:05 INFO - raptor-output Info: cpu results can also be found locally at: /Users/sdonner/moz_src/mozilla-unified/testing/mozharness/build/raptor-cpu.json<br />
</pre><br />
(repeat for power, memory snippets)<br />
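As an illustration, the measurements can be pulled out of such a log line by splitting on the PERFHERDER_DATA marker and parsing the JSON. The following is a minimal sketch (not part of Raptor itself), using the CPU blob above as input:<br />
<br />
```python
import json

def parse_perfherder_line(log_line):
    """Extract and parse the PERFHERDER_DATA JSON blob from a raptor log line."""
    marker = "PERFHERDER_DATA: "
    start = log_line.index(marker) + len(marker)
    return json.loads(log_line[start:])

# The CPU-usage log line shown above, verbatim
line = ('22:31:05 INFO - raptor-output Info: PERFHERDER_DATA: '
        '{"framework": {"name": "raptor"}, "suites": [{"name": '
        '"raptor-scn-power-idle-bg-fenix-cpu", "lowerIsBetter": true, '
        '"alertThreshold": 2.0, "value": 0, "subtests": [{"lowerIsBetter": true, '
        '"unit": "%", "name": "cpu-browser_cpu_usage", "value": 0, '
        '"alertThreshold": 2.0}], "type": "cpu", "unit": "%"}]}')

data = parse_perfherder_line(line)
for suite in data["suites"]:
    print(suite["name"], suite["type"], suite["unit"])
    # -> raptor-scn-power-idle-bg-fenix-cpu cpu %
```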
<br />
==== Power-Use Tests (Android) ====<br />
===== Prerequisites =====<br />
<br />
# a rooted (i.e. superuser-capable), bootloader-unlocked Moto G5 or Google Pixel 2; see the internal (for now) [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit test-device setup doc]<br />
# a Firefox source tree set up to run Raptor (see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_Locally Running Locally])<br />
# [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_GeckoView_Example_App GeckoView-bootstrapped] environment<br />
<br />
'''Raptor power-use measurement test process when running on Firefox Android browser apps:'''<br />
<br />
* The Android app data is cleared, via:<br />
** adb shell pm clear firefox.app.binary.name<br />
* The new browser profile is copied onto the Android device's sdcard<br />
* We set `scenario_time` to '''20 minutes''' (1200000 milliseconds), and `page_timeout` to '''22 minutes''' (1320000 milliseconds)<br />
** It's crucial that `page_timeout` exceeds `scenario_time`; if not, measurement tests will fail/bail early<br />
* We launch the Android app under test (Fenix, Fennec, GeckoView Example, or Reference Browser)<br />
* Post-startup browser settle pause of 30 seconds<br />
* On Fennec only, a new browser tab is created (other Firefox apps use the single/existing tab)<br />
* Power-use/battery-level measurements (app-specific measurements) are taken, via:<br />
** adb shell dumpsys batterystats<br />
* Raw power-use measurement data is listed in the perfherder-data.json/raptor.json artifacts<br />
<br />
In the Perfherder (or Firefox Health) dashboards for these power usage tests, all data points have milli-Ampere-hour units, with a lower value being better.<br />
Proportional power usage is the total power usage of hidden battery sippers that is proportionally "smeared"/distributed across all open applications.<br />
<br />
==== Running Locally ====<br />
<br />
To run on a tethered phone via USB from a macOS host, on:<br />
<br />
===== Fennec =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fennec --app fennec --binary org.mozilla.firefox --power-test --host 10.252.27.96<br />
<br />
===== Fenix =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-fenix --app fenix --binary org.mozilla.fenix.performancetest --power-test --host 10.252.27.96<br />
<br />
===== GeckoView =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-geckoview --app geckoview --binary org.mozilla.geckoview_example --power-test --host 10.252.27.96<br />
<br />
===== Reference Browser =====<br />
<br />
./mach raptor --test raptor-scn-power-idle-refbrow --app refbrow --binary org.mozilla.reference.browser.raptor --power-test --host 10.252.27.96<br />
<br />
'''NOTE:'''<br />
* ''It is important to include'' '''--power-test''' ''when running power-usage measurement tests, as it helps ensure that local test-measurement data doesn't accidentally get submitted to Perfherder.''<br />
<br />
==== Writing New Tests ====<br />
<br />
==== Pushing to Try server ====<br />
As an example, a relatively good cross-sampling of builds can be seen in https://hg.mozilla.org/try/rev/6c07631a0c2bf56b51bb82fd5543d1b34d7f6c69.<br />
* Include both G5 Android 7 (hw-g5-7-0-arm7-api-16/*) '''and''' Pixel 2 Android 8 (p2-8-0-android-aarch64/*) target platforms<br />
* PGO builds tend to take roughly 10-15 minutes longer to complete than their opt counterparts<br />
<br />
==== Perf Dashboards ====<br />
<br />
* Perfherder example (GeckoView): https://treeherder.mozilla.org/perf.html#/graphs?timerange=2592000&series=mozilla-central,2027286,1,10&series=mozilla-central,2027291,1,10&series=mozilla-central,2027296,1,10<br />
* [https://github.com/mozilla-frontend-infra/firefox-health-dashboard/issues/420 Coming soon] to https://health.graphics/android<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine, you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
* Git needs to be in the PATH of the terminal/window in which you build Firefox and run Raptor, as Raptor uses Git to check out a local copy of some performance benchmarks' sources<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and the path to the Chrome binary<br />
* If you plan on running Raptor on Android, your Android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the 'print-tests' option, e.g.:<br />
<br />
$ ./mach raptor --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally, just build Firefox and then run:<br />
<br />
$ ./mach raptor --test <raptor-test-name><br />
<br />
For example, to run the raptor-tp6 pageload test locally, just use:<br />
<br />
$ ./mach raptor --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
$ ./mach raptor --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android GeckoView Example App ====<br />
<br />
When running Raptor tests on a local Android device, Raptor is expecting the device to already be set up and ready to go.<br />
<br />
First, ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your Android device to USB and running:<br />
<br />
$ adb devices<br />
<br />
If your device serial number is listed, then you're all set. If ADB is not found, you can install it by running (in your local mozilla-development repo):<br />
<br />
$ ./mach bootstrap<br />
<br />
Then, in bootstrap, select the option for "Firefox for Android Artifact Mode," which will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your Android device is ready to go. Local Android-device prerequisites are:<br />
<br />
* Device is [https://docs.google.com/document/d/1XQLtvVM2U3h1jzzzpcGEDVOp4jMECsgLYJkhCfAwAnc/edit rooted]<br />
** Note: If you are using Magisk to root your device, use [https://github.com/topjohnwu/Magisk/releases/tag/v17.3 version 17.3]<br />
<br />
* Device is in 'superuser' mode<br />
** [stephend] - I want to explain this a bit more, so leaving this comment as a reminder<br />
<br />
* The GeckoView Example app must be installed on the device. Download geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android%2Cbuild android build on treeherder], then install it on your device, i.e.:<br />
<br />
$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required.<br />
<br />
Note: if you need to run the Gecko profiler, or need a build with symbols, use a [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=nightly%2Candroid Nightly build of geckoview_example.apk].<br />
<br />
When updating the geckoview example app, you MUST uninstall the existing one first, i.e.:<br />
<br />
$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your Android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on Android GeckoView is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed to run successfully on other, untested Android devices. There is an intermittent failure on the Moto G5 where Speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the GeckoView Example app, use this command line:<br />
<br />
$ ./mach raptor --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple notes about debugging:<br />
<br />
* Raptor browser-extension console messages *do* appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on Android using the Firefox WebIDE; click on the Android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer...' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
$ ./mach raptor --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example, to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
$ ./mach raptor --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command-line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test isn't finishing within the allocated time, increase the page-timeout, e.g.:<br />
<br />
./mach raptor --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command-line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
==== Running Page-Load Tests on Live Sites ====<br />
By default, Raptor page-load performance tests load the test pages from a recording (see [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Raptor and Mitmproxy]). However, it is possible to tell Raptor to load the test pages from the live internet instead of using the recorded page playback.<br />
<br />
To use live pages instead of page recordings, just edit the [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests Raptor tp6* test INI] file and add the following attribute either at the top (for all pages in the suite) or under an individual page/subtest heading:<br />
<br />
use_live_sites = true<br />
<br />
With that setting, Raptor will not start the playback tool (i.e. Mitmproxy) and will not turn on the corresponding browser proxy, therefore forcing the test page to load live.<br />
<br />
When `use_live_sites = true` and a page-load test is measuring a hero element (set via the test INI 'measure' option), the hero element measurement is automatically dropped, because the hero elements only exist in our Mitmproxy recordings and not in live pages.<br />
<br />
The word 'live' is appended to the test name in the PERFHERDER_DATA, so live-site results can be identified in Perfherder for try runs.<br />
<br />
'''Important:''' This is fine for running on try, but we don't want to enable live sites in the production repos, because we don't want live-site data being ingested by Perfherder and used for regression alerting etc. Therefore, as a safety catch, tests using live sites will only run locally or on try.<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Not all Raptor tests are supported on Google Chrome; check the test details below.)<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (e.g. 'sp' in 'Rap-e10s' or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create Gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with Gecko-profiling enabled running Nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of Gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create Gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
$ ./mach raptor --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of page-cycles to 3. If you wish to override this, add the --page-cycles argument to the raptor command line.<br />
<br />
Raptor will automatically launch Firefox and load the latest Gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile ZIP file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on Gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Customizing the profiler ====<br />
If the default profiling options are not enough and further information is needed, the Gecko profiler can be customized.<br />
<br />
===== Enable profiling of additional threads =====<br />
In some cases it is helpful to also measure threads which are not part of the default set, such as the '''MediaPlayback''' thread. This can be accomplished by using:<br />
<br />
# the '''gecko_profile_threads''' manifest entry, specifying the thread names as a comma-separated list<br />
# the '''--gecko-profile-thread''' argument to ''mach'', once for each extra thread to profile<br />
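For example, a test-manifest fragment could look like the following (the section name is a hypothetical placeholder, and '''Compositor''' is just an illustrative second thread name):<br />
<br />
```ini
# hypothetical test INI fragment; section name is a placeholder
[raptor-tp6-example-firefox]
gecko_profile_threads = MediaPlayback, Compositor
```
<br />
Equivalently, extra threads can be passed on the command line, with one ''--gecko-profile-thread'' argument per thread.<br />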
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You might need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile ZIP files will be listed as job artifacts<br />
# Select a Raptor profile ZIP artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and play back page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
=== Performance Tuning for Android devices ===<br />
<br />
When the test is run against Android, Raptor executes a series of performance tuning commands over the ADB connection.<br />
<br />
Device agnostic:<br />
<br />
* memory bus <br />
* device remain on when on USB power<br />
* virtual memory (swappiness)<br />
* services (thermal throttling, cpu throttling)<br />
* i/o scheduler<br />
<br />
Device specific:<br />
<br />
* cpu governor<br />
* cpu minimum frequency<br />
* gpu governor<br />
* gpu minimum frequency<br />
<br />
For a detailed list of current tweaks, please refer to [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/raptor.py#676 this] Searchfox page.<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/TestEngineering/Performance/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for page-load suites:<br />
<br />
==== raptor-tp6-1 to 10 ====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox desktop, Chromium, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INIs: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/desktop raptor-tp6-1 to 10].<br />
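The per-page summarization described above (drop the first page-cycle, then take the median of the rest) can be sketched as follows; the sample values are invented, not from a real run:<br />
<br />
```python
from statistics import median

def summarize_pagecycles(values_ms):
    """Summarize per-page-cycle measurements as described above:
    drop the first page-cycle (startup noise), report the median (ms)."""
    if len(values_ms) < 2:
        raise ValueError("need at least two page-cycles")
    return median(values_ms[1:])

# e.g. five fcp measurements (ms) from five page-cycles; the first is inflated
fcp_ms = [1450, 980, 1010, 995, 1002]
print(summarize_pagecycles(fcp_ms))  # -> 998.5
```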
<br />
<br />
==== raptor-tp6m-1 to 10 ====<br />
* contact: :rwood, :davehunt, :bebe<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/tp6/mobile raptor-tp6m-1 to 10].<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest, each subtest is run for 15 seconds, repeat this 5 times and report the median value<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
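The score computation described above (sum each subtest's internal benchmark results, then take the geometric mean of the 16 sums) can be sketched as follows; the sums are made-up values, not real Speedometer output, and the real harness reports a runs/minute score derived from this:<br />
<br />
```python
from math import prod

def speedometer_score(subtest_sums):
    """Geometric mean of the per-subtest sums, per the summarization above."""
    n = len(subtest_sums)
    return prod(subtest_sums) ** (1.0 / n)

# hypothetical sums of the 9 internal benchmark results for each of 16 subtests
sums = [120.0] * 8 + [150.0] * 8
print(round(speedometer_score(sums), 2))  # -> 134.16 (i.e. sqrt(120 * 150))
```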
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-youtube-playback ====<br />
* contact: ?<br />
* type: benchmark<br />
* details: [[/Youtube_playback_performance|YouTube playback performance]]<br />
* browsers: Firefox desktop, Firefox Android Geckoview<br />
* measuring: media streaming playback performance (dropped video frames)<br />
* reporting: For each video, the number of dropped and decoded frames, as well as the percentage of dropped frames, is recorded. The overall reported result is the mean value of dropped video frames across all tested video files.<br />
* data: Given the size of the media files used, these tests are currently run as live-site tests, and are kept up-to-date via the [https://github.com/mozilla/perf-youtube-playback/ perf-youtube-playback] repository on Github.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-youtube-playback.ini raptor-youtube-playback.ini]<br />
<br />
These are the [https://ytlr-cert.appspot.com/2019/main.html?test_type=playbackperf-test Playback Performance Tests] benchmark taken verbatim and slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
=== Scenario Tests ===<br />
<br />
This test type runs browser tests that use idle pages for a specified amount of time to gather resource usage information such as power usage. The pages used for testing do not need to be recorded with mitmproxy.<br />
<br />
When creating a new scenario test, ensure that the `page-timeout` is greater than the `scenario-time` to make sure raptor doesn't exit the test before the scenario timer ends.<br />
<br />
This test type can also be used for specialized tests that require communication with the control-server to do things like sending the browser to the background for X minutes.<br />
<br />
==== Power-Usage Measurement Tests ====<br />
These Android power-measurement tests output 3 different PERFHERDER_DATA entries. The first contains the power usage of the test itself; the second contains the power usage of the Android OS (named os-baseline) over the course of 1 minute; the third (named after the test with '%change-power' appended) combines these two measures to show the percentage increase in power consumption when the test is run, compared to when it is not running. In these perfherder data blobs, we provide power consumption attributed to the cpu, wifi, and screen in milli-ampere-hours (mAh).<br />
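The '%change-power' idea can be sketched as below. This is an illustration only: the exact normalization Raptor applies (e.g. scaling the 1-minute baseline to the test duration) may differ, and the mAh values are invented:<br />
<br />
```python
def percent_change_power(test_mah, baseline_mah):
    """Percentage increase in power use of the test run over the os-baseline.
    Sketch of the '%change-power' measure described above; Raptor's actual
    normalization of the 1-minute baseline may differ."""
    return (test_mah - baseline_mah) / baseline_mah * 100.0

# hypothetical mAh totals (cpu + wifi + screen) for the test and the baseline
print(percent_change_power(12.5, 10.0))  # -> 25.0 (percent more power used)
```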
<br />
===== raptor-scn-power-idle =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app foregrounded, over a 20-minute duration<br />
<br />
===== raptor-scn-power-idle-bg =====<br />
* contact: stephend, sparky<br />
* type: scenario<br />
* browsers: Android: Fennec 64.0.2, GeckoView Example, Fenix, and Reference Browser<br />
* measuring: Power consumption for idle Android browsers, with about:blank loaded and app backgrounded, over a 10-minute duration<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, in the ALREADY OPEN current tab, type "about:debugging" for the URL.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open in a minute; click the "Console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console", and select the "Console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. To have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
This applies to debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That means a Mitmproxy server was already running, so a new one couldn't start up. In this case, you need to kill the existing Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then kill the first mitm process in the list; that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
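The recovery steps above can be sketched in Python: check whether the proxy port is already bound, then parse `ps` output for stray mitmdump processes. This is only an illustrative sketch; the port number (8080) and the `mitmdump` process name come from the examples above, and Raptor itself does not ship this helper.

```python
import socket

def port_in_use(port, host="127.0.0.1"):
    # True if something is already listening on host:port
    # (e.g. a leftover mitmdump server from a previous run).
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        return s.connect_ex((host, port)) == 0

def find_mitm_pids(ps_output):
    # Parse `ps -ax | grep mitm` style output and return the PIDs of
    # mitmdump processes, skipping the `grep mitm` line itself.
    pids = []
    for line in ps_output.splitlines():
        if "mitmdump" in line and "grep" not in line:
            pids.append(int(line.split()[0]))
    return pids
```

Killing `pids[0]` with `os.kill(pids[0], signal.SIGTERM)` would be the programmatic equivalent of the `kill 5439` shown above.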
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2"); click on your device.<br />
# The debugger opens. On the left side click on "Main Process", then click the "console" tab below; the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page; select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: when debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including Raptor's) is prefixed with "GeckoConsole", so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
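If you have saved a logcat dump to a file, the same filtering can be done in a few lines of Python (a trivial sketch; the sample line format below is illustrative, as real logcat output varies by device):

```python
def gecko_console_lines(logcat_text):
    # Return only the lines carrying GeckoConsole output, mirroring
    # `adb logcat | grep GeckoConsole`.
    return [line for line in logcat_text.splitlines() if "GeckoConsole" in line]
```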
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.<br />
<br />
== Raptor on Mobile projects (Fenix, Reference-Browser) == <br />
<br />
=== Add new tests ===<br />
<br />
For mobile projects, Raptor tests are on the following repositories:<br />
<br />
{| class="wikitable"<br />
|-<br />
! Project !! Repository !! Tests results !! Schedule<br />
|-<br />
| Fenix (aka Firefox Preview) || [https://github.com/mozilla-mobile/fenix/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=fenix Treeherder view] || Every 24 hours [https://tools.taskcluster.net/hooks/project-mobile/fenix-raptor Taskcluster Hook]<br />
|-<br />
| Reference-Browser || [https://github.com/mozilla-mobile/reference-browser/ Github] || [https://treeherder.mozilla.org/#/jobs?repo=reference-browser Treeherder view] || On demand [https://tools.taskcluster.net/hooks/project-mobile/reference-browser-raptor Taskcluster Hook]<br />
|}<br />
<br />
Tests are defined differently from what exists in mozilla-central. Taskcluster payloads are expressed as Python functions in:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/lib/tasks.py#L478-L616<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/lib/tasks.py#L455-L561<br />
<br />
Once defined, you must call these functions:<br />
* https://github.com/mozilla-mobile/reference-browser/blob/f2ae31e23e36a749b937ff9728c28d53760242eb/automation/taskcluster/decision_task.py#L83-L96<br />
* https://github.com/mozilla-mobile/fenix/blob/8928822e99ff09ab45bce8ebab63aead10b7ebde/automation/taskcluster/decision_task.py#L82-L91<br />
<br />
If you want to test your changes on a PR, before they land, you need to apply a patch like this one: https://github.com/mozilla-mobile/fenix/commit/4cc16d4268240393f57b3711ab423c2407aeffb7. Don't forget to revert it before merging the patch. <br />
<br />
On Fenix and Reference-Browser, the Raptor revision is tied to the latest Nightly of mozilla-central.<br />
<br />
For more information, please reach out to :jlorenzo or :mhentges in #cia.</div>Bebef 1987https://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Raptor/Mitmproxy&diff=1207314TestEngineering/Performance/Raptor/Mitmproxy2019-02-06T14:52:58Z<p>Bebef 1987: </p>
<hr />
<div>__TOC__<br />
=== Mitmproxy ===<br />
<br />
Instead of using live web pages for performance testing, Raptor uses a tool called [https://mitmproxy.org/ Mitmproxy]. Mitmproxy allows us to record a live web page and save it as a playback archive. Then during the Raptor pageload test (i.e. raptor-tp6) we use Mitmproxy's 'mitmdump' tool to play back the archive through a local proxy. Raptor automatically configures Firefox to use the proxy, and when the test browses to the test page URL, it loads the page from the Mitmproxy playback archive.<br />
<br />
For more information about Mitmproxy installation, etc. see the [https://mitmproxy.org/ documentation]. Mitmproxy is an open source tool and the source is [https://github.com/mitmproxy/mitmproxy found here on github].<br />
<br />
==== Test Page Recordings ====<br />
Test pages used for Raptor pageload tests (i.e. raptor-tp6, raptor-gdocs) are mitmproxy recordings that are played back during the test (and ultimately loaded in Firefox via the local proxy). Each test page is a separate mitmproxy recording (*.mp) file, and all the page recordings for each suite are contained in a single zip for that suite (i.e. mitmproxy-recordings-raptor-tp6.zip) on tooltool.<br />
<br />
When the Raptor pageload test is run, the mitmproxy recording archive for use during the test is automatically downloaded from tooltool.<br />
<br />
==== Custom Playback Script ====<br />
When the mitmproxy recording is played back in production, we use a [https://searchfox.org/mozilla-central/rev/39cb1e96cf97713c444c5a0404d4f84627aee85d/testing/raptor/raptor/playback/alternate-server-replay.py custom playback script]. The script will return 404s for unknown URLs instead of dropping the entire connection.<br />
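The key behaviour can be modelled with a toy handler (an illustration of the idea only, not the actual code in alternate-server-replay.py):

```python
def replay_response(url, recorded):
    # Serve the recorded body when the URL is in the archive; otherwise
    # answer 404 instead of dropping the connection, which is what the
    # custom playback script does for unknown URLs.
    if url in recorded:
        return 200, recorded[url]
    return 404, b""
```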
<br />
This is an example of the command line used in production (Linux x64) to start mitmproxy and playback one of the recording archives, using the custom playback script:<br />
<br />
/home/cltbld/tasks/task_1541153570/testing/raptor/mitmdump -k -q -s /home/cltbld/tasks/task_1541153570/build/tests/raptor/raptor/playback/alternate-server-replay.py /home/cltbld/tasks/task_1541153570/testing/raptor/facebook.mp<br />
<br />
== How to Record a Mitmproxy Test Page on Firefox Desktop ==<br />
The following process was used to record the mitmproxy page archives (on OSX):<br />
<br />
1. Install Mitmproxy 2.X following the mitmproxy [http://docs.mitmproxy.org/en/stable/install.html installation instructions]. We use version 2.0.2 in production (and that was the version used to record the current pagesets). Note that we are unable to upgrade to a newer Mitmproxy because of some non-backwards-compatible changes they made; see [https://bugzilla.mozilla.org/show_bug.cgi?id=1457274 Bug 1457274].<br />
<br />
2. Setup a local proxy in Firefox:<br />
* Start Firefox<br />
* Preferences => General<br />
* Network Proxy => Settings<br />
* On the "Connection Settings" screen, select "Manual proxy configuration"<br />
* For "HTTP Proxy" type in "127.0.0.1" with port "8080"<br />
* For "SSL Proxy" use the same "127.0.0.1" with port "8080"<br />
* Click the "OK" button to save the proxy settings<br />
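The manual settings above correspond to standard Firefox proxy preferences. As a sketch, here is how the same configuration could be written out as a user.js snippet (the pref names are standard Firefox prefs; that this exact set is what Raptor configures is an assumption):

```python
# Firefox prefs equivalent to the manual proxy settings described above.
PROXY_PREFS = {
    "network.proxy.type": 1,        # 1 = manual proxy configuration
    "network.proxy.http": "127.0.0.1",
    "network.proxy.http_port": 8080,
    "network.proxy.ssl": "127.0.0.1",
    "network.proxy.ssl_port": 8080,
}

def to_user_js(prefs):
    # Render prefs as user.js lines, quoting strings and leaving ints bare.
    lines = []
    for name, value in prefs.items():
        rendered = f'"{value}"' if isinstance(value, str) else str(value)
        lines.append(f'user_pref("{name}", {rendered});')
    return "\n".join(lines)
```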
<br />
3. Install the Mitmproxy CA certificate:<br />
* Open a terminal window<br />
* Start Mitmproxy in host mode:<br />
mitmproxy --host<br />
* In Firefox, browse to "mitm.it" and follow the directions on how to accept the CA certificate<br />
* Shut down the Mitmproxy tool (in the terminal hit "Q", then "Y" to quit)<br />
<br />
4. Record a new page:<br />
* Start Firefox with the proxy still enabled<br />
* Clear the browser history/cache<br />
* In a terminal window start the mitmdump recording tool:<br />
mitmdump -w /path/to/recording.mp<br />
* Inside Firefox browse to the URL that you want to record (i.e. www.spacex.com)<br />
* Wait for the page to be fully loaded and displayed<br />
* In the mitmdump terminal window press "ctrl + c" to stop the recording<br />
<br />
5. To test playing back your recorded page:<br />
* Be sure you have the [https://searchfox.org/mozilla-central/rev/39cb1e96cf97713c444c5a0404d4f84627aee85d/testing/raptor/raptor/playback/alternate-server-replay.py custom playback script] available<br />
* Start Firefox with the proxy still enabled<br />
* With Mitmproxy NOT running, browse to your recorded URL (i.e. www.spacex.com); you'll just get an error saying that the proxy server is refusing connections<br />
* In a terminal window, start Mitmproxy playback, using the custom playback script:<br />
mitmdump -k -s /path/to/alternate-server-replay.py /path/to/recording.mp<br />
<br />
For example:<br />
mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor $ ./mitmdump -k -s "/Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/Desktop/new_recordings/no_hero/google-search-no-hero.mp"<br />
<br />
NOTE: On some platforms you will need quotes around the args as used above.<br />
<br />
* In Firefox browse to the URL that you recorded already (i.e. www.spacex.com). This time the page will load successfully; it is actually loading the page from the local mitmdump archive file (*.mp) and not the external site<br />
* You can actually turn off your local WiFi connection if you want and verify the page still loads<br />
* In the terminal window press "ctrl + c" to stop the playback<br />
<br />
6. When you're finished remember to turn off your Firefox proxy:<br />
* Preferences => General<br />
* Network Proxy => Settings<br />
* Select "No proxy" and click the "OK" button<br />
<br />
== Adding Hero Elements ==<br />
<br />
Hero elements are special HTML attributes that can be inserted into existing HTML elements in pages, so that we can measure pageload up to the time that a specific element is displayed. You basically just add an 'element_timing' attribute to an existing HTML element, i.e. <img element_timing='hero1'> for an image element.<br />
<br />
Raptor supports multiple measurements per single pageload, including hero elements. To have a Raptor test measure an existing hero element, you simply add 'hero' to the 'measure =' line in the Raptor test INI file, and below that add a 'hero = hero1' line that specifies the hero element attribute text (i.e. 'hero1') to look for.<br />
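For example, a hypothetical test section with hero measurement enabled could look like this (the section name and page_cycles value are placeholders; the 'measure' and 'hero' keys follow the description above):

```ini
[raptor-tp6-example-firefox]
page_cycles = 25
measure = fnbpaint, hero
hero = hero1
```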
<br />
Since Raptor uses Mitmproxy to play back web pages, in order to use hero elements they must be manually added to the web page archive. Tarek created a script ('[https://github.com/tarekziade/mitmflow mitmflow]') to add hero elements to existing Mitmproxy page recordings. See the [https://github.com/tarekziade/mitmflow mitmflow repo] for more information, but the basic steps to add a hero element to an existing Mitmproxy page archive are:<br />
<br />
1. Copy [https://github.com/tarekziade/mitmflow/blob/master/replace.py Tarek's mitmflow replace script] into the same folder where you have the mitmdump binary.<br />
<br />
2. Startup Firefox, turn on the proxy (see settings above).<br />
<br />
3. Use Mitmproxy (mitmdump) to playback the web page recording of the page you wish to add the hero element to (see above for mitmdump playback command line syntax).<br />
<br />
4. Use the Firefox dev tools page inspector and find an element in the test page where you wish to add the hero element. It should be a unique element like a picture, something with a unique id for example.<br />
<br />
5. Update the Mitmproxy replacement script accordingly to indicate which element you want to add the hero element to.<br />
<br />
6. Use Mitmproxy (mitmdump) to read your page recording, run it through the Mitmflow replace script, and write out a new Mitmproxy page recording with the element having been added. i.e. with Mitmproxy 2.x:<br />
<br />
./mitmdump -dd -s "./replace.py" -r /Users/rwood/Desktop/recordings/google.mp -w /Users/rwood/Desktop/recordings/google-hero.mp<br />
<br />
7. Be sure to use Mitmproxy (mitmdump) to playback your new page recording and verify with inspector that the hero element was added successfully (and only once - if there are other elements with the same id then the hero element could be added to multiple elements by mistake).<br />
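The core transformation performed by the replace script can be sketched as a one-shot HTML rewrite (a hedged illustration; the real replace.py operates on mitmproxy flows and its matching logic may differ):

```python
import re

def add_hero_attribute(html, element_id, hero_name="hero1"):
    # Insert an element_timing attribute into the element carrying the
    # given id. count=1 guards against tagging multiple elements by
    # mistake (the pitfall noted in step 7).
    pattern = re.compile(r'(id=["\']%s["\'])' % re.escape(element_id))
    return pattern.sub(r"\1 element_timing='%s'" % hero_name, html, count=1)
```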
<br />
== How to Record a Mitmproxy Test Page on Android ==<br />
<br />
For Raptor page-load tests that run on android (i.e. tp6m-1) the mitmproxy recordings were actually made on an android device (Google Pixel 2) with the geckoview example app.<br />
<br />
Recording a mitmproxy page on android is very similar to desktop, except it's easier to run an existing android page-load test (i.e. tp6m-1) first to get the device set up before recording. You also need to `adb reverse` a port so the device can access mitmdump running on the host machine. Here's how to create a mitmproxy recording using the android geckoview example app:<br />
<br />
1. Ensure your android device (i.e. GP2) is already set up to run Raptor on the geckoview example app; see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_Geckoview_Example_App Running Raptor on the Geckoview Example App]<br />
<br />
2. In order to get mitmdump installed on your host machine, and the android device ready to record (i.e. the mitmdump CA certificate installed in the geckoview example app, proxy turned on, etc.), run the Raptor tp6m-1 test. With your android device attached to USB, run:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example" --page-cycles 1<br />
<br />
Wait for that to finish. The geckoview example app will remain open on the device.<br />
<br />
3. Clear geckoview_example app data and cache: adb shell pm clear org.mozilla.geckoview_example<br />
<br />
4. In your terminal, change into the obj../testing/raptor folder; that is where mitmdump is located, i.e.:<br />
<br />
Roberts-MacBook-Pro-1927:mozilla-unified rwood$ cd obj.../testing/raptor<br />
<br />
5. ADB reverse the port so that the android device can talk to the mitmproxy server on the host, by running this command in a terminal:<br />
<br />
adb reverse tcp:8080 tcp:8080<br />
<br />
6. On the android device in the geckoview example app, browse to "about:blank".<br />
<br />
7. From within the obj../testing/raptor folder, start the mitmdump recording and specify the path and name for the new recording file, i.e.:<br />
<br />
Roberts-MacBook-Pro-1927:raptor rwood$ ./mitmdump -w "/Users/rwood/mozilla-unified/obj-ff-dbg/testing/raptor/new-mobile-recording.mp"<br />
<br />
8. With the recording running, in the geckoview example app browse to the <new recording's https mobile url> that you wish to record. Wait for the page to load and display fully in the geckoview example app, then in the terminal where mitmdump is running press `ctrl + c` to stop the recording.<br />
<br />
To test the new mobile recording:<br />
<br />
1. Leave the new recording in the obj../testing/raptor dir, and add a section in the tp6m-1 test INI:<br />
<br />
[raptor-tp6-new-mobile-recording-geckoview]<br />
page_cycles = 15<br />
apps = geckoview<br />
test_url = <new recording's https mobile url><br />
playback_recordings = new-mobile-recording.mp<br />
measure = fnbpaint, fcp, dcf, ttfi, loadtime<br />
<br />
2. In your terminal, change back into the root of your repo (i.e. mozilla-central) and run the modified tp6m-1 on geckoview with the command:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-new-mobile-recording-geckoview --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Watch the test run on the android device and verify that the test page is loaded correctly in the geckoview example app. Wait for Raptor to finish and report the results - verify that all of the measurements were successfully retrieved.<br />
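As a quick sanity check on a section like the one in step 1, the 'measure' list can be parsed with Python's standard configparser (a sketch only; Raptor's real manifest handling uses its own parsing, and the placeholder test_url line is omitted here because its value is test-specific):

```python
import configparser

# Sample section mirroring the tp6m-1 INI fragment above (test_url omitted).
TEST_INI = """
[raptor-tp6-new-mobile-recording-geckoview]
page_cycles = 15
apps = geckoview
playback_recordings = new-mobile-recording.mp
measure = fnbpaint, fcp, dcf, ttfi, loadtime
"""

def parse_measures(ini_text, section):
    # Return the list of measurements declared for a test section.
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    return [m.strip() for m in cfg[section]["measure"].split(",")]
```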
<br />
<br />
== Raptor-Studio ==<br />
<br />
A simple command line tool for recording and replaying web traffic for Raptor. <br />
<br />
Note: Raptor-Studio is still under development and is currently limited to record/replay on the GeckoView example app for Android.<br />
<br />
===Source===<br />
Source code can be found in github at [https://github.com/davehunt/raptor-studio https://github.com/davehunt/raptor-studio]<br />
<br />
===Installation===<br />
Use pipenv to install dependencies:<br />
$ pipenv install<br />
<br />
===Usage===<br />
See the command line options for how to configure and run the app:<br />
 $ pipenv run python studio.py --help</div>
<hr />
<div>__TOC__<br />
=== Mitmproxy ===<br />
<br />
Instead of using live web pages for performance testing, Raptor uses a tool called [https://mitmproxy.org/ Mitmproxy]. Mitmproxy allows us to record a live web page and save it as a playback archive. Then, during the Raptor pageload test (e.g. raptor-tp6), we use Mitmproxy's 'mitmdump' tool to play back the archive through a local proxy. Raptor automatically configures Firefox to use the proxy, and when the test browses to the test page URL, it loads the page from the Mitmproxy playback archive.<br />
<br />
For more information about Mitmproxy installation, etc. see the [https://mitmproxy.org/ documentation]. Mitmproxy is an open source tool and the source is [https://github.com/mitmproxy/mitmproxy found here on github].<br />
<br />
==== Test Page Recordings ====<br />
Test pages used for Raptor pageload tests (i.e. raptor-tp6, raptor-gdocs) are mitmproxy recordings that are played back during the test (and ultimately loaded in Firefox via the local proxy). Each test page is a separate mitmproxy recording (*.mp) file, and all the page recordings for each suite are contained in a single zip for that suite (i.e. mitmproxy-recordings-raptor-tp6.zip) on tooltool.<br />
<br />
When the Raptor pageload test is run, the mitmproxy recording archive for use during the test is automatically downloaded from tooltool.<br />
<br />
==== Custom Playback Script ====<br />
When the mitmproxy recording is played back in production, we use a [https://searchfox.org/mozilla-central/rev/39cb1e96cf97713c444c5a0404d4f84627aee85d/testing/raptor/raptor/playback/alternate-server-replay.py custom playback script]. The script will return 404s for unknown URLs instead of dropping the entire connection.<br />
<br />
This is an example of the command line used in production (Linux x64) to start mitmproxy and playback one of the recording archives, using the custom playback script:<br />
<br />
/home/cltbld/tasks/task_1541153570/testing/raptor/mitmdump -k -q -s /home/cltbld/tasks/task_1541153570/build/tests/raptor/raptor/playback/alternate-server-replay.py /home/cltbld/tasks/task_1541153570/testing/raptor/facebook.mp<br />
<br />
== How to Record a Mitmproxy Test Page on Firefox Desktop ==<br />
<br />
The following process was used to record the mitmproxy page archives (on OSX):<br />
<br />
1. Install Mitmproxy 2.X following the mitmproxy [http://docs.mitmproxy.org/en/stable/install.html installation instructions]. We use version 2.0.2 in production (and that was the version used to record the current pagesets). Note that we are unable to upgrade to a newer Mitmproxy because of some non-backwards-compatible changes they made (see [https://bugzilla.mozilla.org/show_bug.cgi?id=1457274 Bug 1457274]).<br />
<br />
2. Setup a local proxy in Firefox:<br />
* Start Firefox<br />
* Preferences => General<br />
* Network Proxy => Settings<br />
* On the "Connection Settings" screen, select "Manual proxy configuration"<br />
* For "HTTP Proxy" type in "127.0.0.1" with port "8080"<br />
* For "SSL Proxy" use the same "127.0.0.1" with port "8080"<br />
* Click the "OK" button to save the proxy settings<br />
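<br />
The same proxy configuration can be applied non-interactively by appending prefs to a user.js file in your Firefox profile. This is only a sketch: the pref names are standard Firefox network-proxy prefs, but the profile path below is an assumption - substitute your own.<br />
<br />
```shell
# Sketch: set the mitmproxy proxy prefs via user.js instead of the
# preferences UI. FF_PROFILE is a placeholder -- point it at your real profile.
FF_PROFILE="${FF_PROFILE:-$HOME/.mozilla/firefox/myprofile.default}"
mkdir -p "$FF_PROFILE"
cat >> "$FF_PROFILE/user.js" <<'EOF'
user_pref("network.proxy.type", 1);
user_pref("network.proxy.http", "127.0.0.1");
user_pref("network.proxy.http_port", 8080);
user_pref("network.proxy.ssl", "127.0.0.1");
user_pref("network.proxy.ssl_port", 8080);
EOF
```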
<br />
3. Install the Mitmproxy CA certificate:<br />
* Open a terminal window<br />
* Startup Mitmproxy in host mode:<br />
mitmproxy --host<br />
* In Firefox, browse to "mitm.it" and follow the directions on how to accept the CA certificate<br />
* Shutdown the Mitmproxy tool (in terminal hit "Q", then "Y" to quit)<br />
<br />
4. Record a new page:<br />
* Start Firefox with the proxy still enabled<br />
* Clear the browser history/cache<br />
* In a terminal window start the mitmdump recording tool:<br />
mitmdump -w /path/to/recording.mp<br />
* Inside Firefox browse to the URL that you want to record (i.e. www.spacex.com)<br />
* Wait for the page to be fully loaded and displayed<br />
* In the mitmdump terminal window press "ctrl + c" to stop the recording<br />
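<br />
The recording steps above can be wrapped in a small helper script. This is a sketch that assumes mitmdump is on your PATH; record_page is a hypothetical name, not part of Raptor.<br />
<br />
```shell
# Hypothetical helper wrapping the manual recording steps: start mitmdump,
# let you browse to the page in Firefox, then stop the recording cleanly.
record_page() {
  local out="$1"
  mitmdump -w "$out" &
  local pid=$!
  echo "Recording to $out -- load the page fully in Firefox, then press Enter"
  read -r _
  kill -INT "$pid"        # same effect as pressing ctrl + c in the terminal
  wait "$pid" 2>/dev/null
}
# usage: record_page /path/to/recording.mp
```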
<br />
5. To test playing back your recorded page:<br />
* Be sure you have the [https://searchfox.org/mozilla-central/rev/39cb1e96cf97713c444c5a0404d4f84627aee85d/testing/raptor/raptor/playback/alternate-server-replay.py custom playback script] available<br />
* Start Firefox with the proxy still enabled<br />
* With Mitmproxy NOT running, browse to your recorded URL (i.e. www.spacex.com); you'll just get an error saying that the proxy server is refusing connections<br />
* In a terminal window, start Mitmproxy playback, using the custom playback script:<br />
mitmdump -k -s /path/to/alternate-server-replay.py /path/to/recording.mp<br />
<br />
For example:<br />
mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor $ ./mitmdump -k -s "/Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/Desktop/new_recordings/no_hero/google-search-no-hero.mp"<br />
<br />
NOTE: On some platforms you will need quotes around the args as used above.<br />
<br />
* In Firefox browse to the URL that you recorded already (i.e. www.spacex.com). This time the page will load successfully; it is actually loading the page from the local mitmdump archive file (*.mp) and not the external site<br />
* You can actually turn off your local WiFi connection if you want and verify the page still loads<br />
* In the terminal window press "ctrl + c" to stop the playback<br />
<br />
6. When you're finished remember to turn off your Firefox proxy:<br />
* Preferences => General<br />
* Network Proxy => Settings<br />
* Select "No proxy" and click the "OK" button<br />
<br />
== Adding Hero Elements ==<br />
<br />
Hero elements are special HTML attributes that can be inserted into existing HTML elements in pages, so that we can measure pageload up to the time that specific element is displayed. You simply add an 'element_timing' attribute to an existing HTML element, e.g. element_timing='hero1'.<br />
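<br />
For example, adding the attribute to an existing image element might look like this (the attribute name follows the convention above; the element and filename are purely illustrative):<br />
<br />
```html
<!-- before -->
<img src="masthead.jpg">
<!-- after: pageload can now be measured up to when this image is displayed -->
<img src="masthead.jpg" element_timing="hero1">
```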
<br />
Raptor supports multiple measurements per single pageload, including hero elements. To have a Raptor test measure an existing hero element, add 'hero' to the 'measure =' line in the Raptor test INI file, and below that add a 'hero = hero1' line specifying the hero element attribute text (e.g. 'hero1') to look for.<br />
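<br />
For example, a hypothetical test section measuring a hero element alongside another metric might look like this (the section name, URL, and recording file are illustrative, modeled on the existing test sections):<br />
<br />
```ini
[raptor-tp6-example-firefox]
page_cycles = 25
test_url = https://www.example.com/
playback_recordings = example.mp
measure = fnbpaint, hero
hero = hero1
```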
<br />
Since Raptor uses Mitmproxy to playback web pages, in order to use hero elements they must be manually added to the web page archive. Tarek created a script ('[https://github.com/tarekziade/mitmflow mitmflow]') to add hero elements to existing Mitmproxy page recordings. See the [https://github.com/tarekziade/mitmflow mitmflow repo] for more information, but the basic steps to add a hero element to an existing Mitmproxy page archive are:<br />
<br />
1. Copy [https://github.com/tarekziade/mitmflow/blob/master/replace.py Tarek's mitmflow replace script] into the same folder where you have the mitmdump binary.<br />
<br />
2. Startup Firefox, turn on the proxy (see settings above).<br />
<br />
3. Use Mitmproxy (mitmdump) to playback the web page recording of the page you wish to add the hero element to (see above for mitmdump playback command line syntax).<br />
<br />
4. Use the Firefox dev tools page inspector and find an element in the test page where you wish to add the hero element. It should be a unique element like a picture, something with a unique id for example.<br />
<br />
5. Update the Mitmproxy replacement script accordingly to indicate which element you want to add the hero element to.<br />
<br />
6. Use Mitmproxy (mitmdump) to read your page recording, run it through the Mitmflow replace script, and write out a new Mitmproxy page recording with the element having been added. i.e. with Mitmproxy 2.x:<br />
<br />
./mitmdump -dd -s "./replace.py" -r /Users/rwood/Desktop/recordings/google.mp -w /Users/rwood/Desktop/recordings/google-hero.mp<br />
<br />
7. Be sure to use Mitmproxy (mitmdump) to play back your new page recording and verify with the inspector that the hero element was added successfully (and only once - if there are other elements with the same id, the hero element could be added to multiple elements by mistake).<br />
<br />
== How to Record a Mitmproxy Test Page on Android ==<br />
<br />
For Raptor page-load tests that run on android (i.e. tp6m-1) the mitmproxy recordings were actually made on an android device (Google Pixel 2) with the geckoview example app.<br />
<br />
Recording a mitmproxy page on android is very similar to desktop, except that it's easiest to run an existing android page-load test (e.g. tp6m-1) first to get the device set up before recording. You also need to `adb reverse` a port so the device can reach mitmdump running on the host machine. Here's how to create a mitmproxy recording using the android geckoview example app:<br />
<br />
1. Ensure your android device (e.g. GP2) is already set up to run Raptor on the geckoview example app; see [https://wiki.mozilla.org/Performance_sheriffing/Raptor#Running_on_the_Android_Geckoview_Example_App Running Raptor on the Geckoview Example App]<br />
<br />
2. To install mitmdump on your host machine and get the android device ready to record (i.e. the mitmdump CA certificate installed in the geckoview example app, the proxy turned on, etc.), run the Raptor tp6m-1 test. With your android device attached to USB, run:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example" --page-cycles 1<br />
<br />
Wait for that to finish. The geckoview example app will remain open on the device.<br />
<br />
3. Clear geckoview_example app data and cache: adb shell pm clear org.mozilla.geckoview_example<br />
<br />
4. In your terminal, change into the obj../testing/raptor folder, which is where mitmdump is located, e.g.:<br />
<br />
Roberts-MacBook-Pro-1927:mozilla-unified rwood$ cd obj.../testing/raptor<br />
<br />
5. ADB reverse the port so that the android device can talk to the mitmproxy server on the host, by running this command in a terminal:<br />
<br />
adb reverse tcp:8080 tcp:8080<br />
<br />
6. On the android device in the geckoview example app, browse to "about:blank".<br />
<br />
7. From within the obj../testing/raptor folder, start the mitmdump recording, specifying the path and name for the new recording file, e.g.:<br />
<br />
Roberts-MacBook-Pro-1927:raptor rwood$ ./mitmdump -w "/Users/rwood/mozilla-unified/obj-ff-dbg/testing/raptor/new-mobile-recording.mp"<br />
<br />
8. With the recording running, in the geckoview example app browse to the <new recording's https mobile url> that you wish to record. Wait for the page to load and display fully in the geckoview example app, then in the terminal where mitmdump is running press `ctrl + c` to stop the recording.<br />
<br />
To test the new mobile recording:<br />
<br />
1. Leave the new recording in the obj../testing/raptor dir, and add a section in the tp6m-1 test INI:<br />
<br />
[raptor-tp6-new-mobile-recording-geckoview]<br />
page_cycles = 15<br />
apps = geckoview<br />
test_url = <new recording's https mobile url><br />
playback_recordings = new-mobile-recording.mp<br />
measure = fnbpaint, fcp, dcf, ttfi, loadtime<br />
<br />
2. In your terminal, change back into the root of your repo (i.e. mozilla-central) and run the modified tp6m-1 on geckoview with this command:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-new-mobile-recording-geckoview --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Watch the test run on the android device and verify that the test page is loaded correctly in the geckoview example app. Wait for Raptor to finish and report the results - verify that all of the measurements were successfully retrieved.<br />
<br />
<br />
=== Raptor-Studio ===<br />
<br />
A simple command line tool for recording and replaying web traffic for Raptor. <br />
<br />
Note: Raptor-Studio is still under development and is currently limited to record/replay on the GeckoView example app for Android.<br />
<br />
==Source==<br />
Source code can be found on GitHub at [https://github.com/davehunt/raptor-studio https://github.com/davehunt/raptor-studio]<br />
<br />
==Installation==<br />
Use pipenv to install the dependencies:<br />
$ pipenv install<br />
<br />
==Usage==<br />
See the command line options for how to configure and run the app:<br />
 $ pipenv run python studio.py --help</div>
<hr />
<div>== Raptor ==<br />
<br />
Raptor is a new performance testing framework for running browser pageload and browser benchmark tests. The core of Raptor was designed as a browser extension, therefore Raptor is cross-browser compatible and is currently running in production on Firefox Desktop, Firefox Android Geckoview, and on Google Chromium.<br />
<br />
Raptor supports two types of performance tests: page-load tests, and standard benchmark tests.<br />
<br />
=== Page-Load Tests ===<br />
<br />
Page-load tests involve loading a specific web page and measuring the load performance (e.g. time-to-first-non-blank-paint, dom-content-flushed, ttfi). The pageload measurements are 'warm load' in that a new tab is opened only at the start of the test for each new page, and each page cycle is a reload in the same browser tab.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
For page-load tests, instead of using live web pages for performance testing, Raptor uses a tool called [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]. Mitmproxy allows us to record and play back test pages via a local Firefox proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
<br />
* GIT needs to be in the path in the terminal that you build Firefox / run Raptor from, as Raptor uses GIT to check out a local copy of some of the source for some of the performance benchmarks<br />
<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
<br />
* If you plan on running Raptor on android, your android device must already be setup (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see what Raptor performance tests are currently available on all platforms use the 'print-tests' option, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally just build Firefox and then run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name><br />
<br />
For example to run the raptor tp6 pageload test locally just use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on the Android Geckoview Example App ====<br />
<br />
When running Raptor tests on a local android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your android device to USB and running:<br />
<br />
mozilla-central$ adb devices<br />
<br />
If your device serial number is listed then you're set. If ADB is not found you can install it by running (in your local mozilla development repo):<br />
<br />
mozilla-central$ ./mach bootstrap<br />
<br />
Then in bootstrap select the option for "Firefox for Android Artifact Mode" and that will install the required tools (no need to do an actual build).<br />
<br />
Next, make sure your android device is ready to go. Local android device prerequisites are:<br />
<br />
* Device is rooted<br />
Note: If you are using Magisk to root your device, use version 17.3<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android android build on treeherder], then install it on your device i.e.:<br />
<br />
mozilla-central$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required. Note: when updating the geckoview example app, you must uninstall the existing one first, i.e.:<br />
<br />
mozilla-central$ adb uninstall org.mozilla.geckoview_example<br />
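The uninstall/install steps above can be wrapped in a small helper; this is only a sketch, assuming adb is on your PATH and a device is attached (the package name and flags are taken from the commands above):<br />

```python
import subprocess

APP_ID = "org.mozilla.geckoview_example"  # package name from the commands above

def reinstall_commands(apk_path, app_id=APP_ID):
    """Return the two adb invocations needed to update the geckoview
    example app: uninstall the old copy first, then install with -g so
    all application permissions are granted (required by Raptor)."""
    return [
        ["adb", "uninstall", app_id],
        ["adb", "install", "-g", apk_path],
    ]

def reinstall(apk_path):
    # Sketch only: assumes adb is on PATH and a device is attached.
    for cmd in reinstall_commands(apk_path):
        subprocess.run(cmd, check=False)  # uninstall may fail if app is absent
```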
<br />
Once your android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on android geckoview is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
To run a Raptor page-load test (i.e. tp6m-1) on the geckoview example app, use this command line:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6m-1 --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser extension console messages do appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
mozilla-central$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on android using the Firefox WebIDE: click on the android device listed under "USB Devices" and then select "Main Process" or the 'localhost: Speedometer...' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout', which means the page load, or the benchmark test iteration, didn't complete within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor-test --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase the page-timeout, e.g.:<br />
<br />
./mach raptor-test --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Check the test details below to see which tests are supported on Google Chrome.)<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
mozilla-central$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with gecko profiling enabled running nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile zip file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
mozilla-central$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You may need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile zip files will be listed as job artifacts;<br />
# Select a Raptor profile zip artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests ('tp6' and 'tp6m' suites) use the [https://mitmproxy.org/ Mitmproxy] tool to record and playback page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in MS).<br />
<br />
Below are the details for each page-load suite, and the test pages contained within each.<br />
<br />
==== raptor-tp6-1 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-1.ini raptor-tp6-1.ini].<br />
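The reporting scheme described above (drop the first page-cycle, report the median of the rest) can be sketched in a few lines; this mirrors the description only, not Raptor's actual implementation:<br />

```python
import statistics

def suite_page_result(page_cycle_values):
    """Summarize one measurement for one test page, per the reporting
    scheme described above: drop the first page-cycle (warm-up noise)
    and report the median of the remaining values (in ms).

    Sketch of the described scheme, not Raptor's own code."""
    if len(page_cycle_values) < 2:
        raise ValueError("need at least two page-cycles")
    return statistics.median(page_cycle_values[1:])
```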
<br />
''' Test pages in tp6-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-amazon-*]<br />
* URL: https://www.amazon.com/s/url=search-alias%3Daps&field-keywords=laptop<br />
* Hero: string description element for first laptop in search results<br />
<br />
[raptor-tp6-facebook-*]<br />
* URL: https://www.facebook.com (logged into a user account)<br />
* Hero: on the Facebook 'Home' icon<br />
<br />
[raptor-tp6-google-*]<br />
* URL: https://www.google.com/search?hl=en&q=barack+obama&cad=h<br />
* Hero: bigger photo of Obama in search results towards top right<br />
<br />
[raptor-tp6-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: YouTube logo on the top left<br />
<br />
==== raptor-tp6-2 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-2.ini raptor-tp6-2.ini].<br />
<br />
''' Test pages in tp6-2 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-docs-*]<br />
* URL: https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-sheets-*]<br />
* URL: https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-slides-*]<br />
* URL: https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
==== raptor-tp6-3 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-3.ini raptor-tp6-3.ini].<br />
<br />
''' Test pages in tp6-3 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-imdb-*]<br />
* URL: https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2<br />
<br />
[raptor-tp6-imgur-*]<br />
* URL: https://imgur.com/gallery/m5tYJL6<br />
<br />
[raptor-tp6-wikia-*]<br />
* URL: http://fandom.wikia.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase<br />
<br />
==== raptor-tp6-4 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-4.ini raptor-tp6-4.ini].<br />
<br />
''' Test pages in tp6-4 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-bing-*]<br />
* URL: https://www.bing.com/search?q=barack+obama<br />
<br />
[raptor-tp6-yandex-*]<br />
* URL: https://yandex.ru/search/?text=barack%20obama&lr=10115<br />
<br />
==== raptor-tp6-5 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-5.ini raptor-tp6-5.ini].<br />
<br />
''' Test pages in tp6-5 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-apple-*]<br />
* URL: https://www.apple.com/macbook-pro/<br />
<br />
[raptor-tp6-microsoft-*]<br />
* URL: https://www.microsoft.com/en-us/windows/get-windows-10<br />
<br />
==== raptor-tp6-6 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-6.ini raptor-tp6-6.ini].<br />
<br />
''' Test pages in tp6-6 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-reddit-*]<br />
* URL: https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/<br />
<br />
[raptor-tp6-yahoo-news-*]<br />
* URL: https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html<br />
<br />
==== raptor-tp6-7 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-7.ini raptor-tp6-7.ini].<br />
<br />
''' Test pages in tp6-7 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-instagram-*]<br />
* URL: https://www.instagram.com/<br />
<br />
[raptor-tp6-twitter-*]<br />
* URL: https://twitter.com/BarackObama<br />
<br />
[raptor-tp6-yahoo-mail-*]<br />
* URL: https://mail.yahoo.com/<br />
<br />
==== raptor-tp6-8 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-8.ini raptor-tp6-8.ini].<br />
<br />
''' Test pages in tp6-8 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-ebay-*]<br />
* URL: https://www.ebay.com/<br />
<br />
[raptor-tp6-wikipedia-*]<br />
* URL: https://en.wikipedia.org/wiki/Barack_Obama<br />
<br />
==== raptor-tp6-9 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-9.ini raptor-tp6-9.ini].<br />
<br />
''' Test pages in tp6-9 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-google-mail-*]<br />
* URL: https://mail.google.com/<br />
<br />
[raptor-tp6-pinterest-*]<br />
* URL: https://pinterest.com/<br />
<br />
==== raptor-tp6-10 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-10.ini raptor-tp6-10.ini].<br />
<br />
''' Test pages in tp6-10 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-paypal-*]<br />
* URL: https://www.paypal.com/myaccount/summary/<br />
<br />
==== raptor-tp6m-1 ====<br />
* contact: :rwood, :davehunt<br />
* type: page-load<br />
* browsers: Firefox Android Geckoview Example App<br />
* measuring: time-to-first-non-blank-paint, first-contentful-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* page-cycles: 15<br />
* reporting: Each pagecycle measures all the values (in MS). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in MS).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6m-1.ini raptor-tp6m-1.ini].<br />
<br />
''' Test pages in tp6m-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6m-amazon-*]<br />
* URL: https://www.amazon.com<br />
* Hero: None<br />
<br />
[raptor-tp6m-facebook-*]<br />
* URL: https://m.facebook.com (logged into a user account)<br />
* Hero: None<br />
<br />
[raptor-tp6m-google-*]<br />
* URL: https://www.google.com<br />
* Hero: None<br />
<br />
[raptor-tp6m-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: None<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: bholley<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: the FPS from the subtest; each subtest runs for 15 seconds, is repeated 5 times, and the median value is reported<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: for each of the 16 subtests, we collect the sum of its internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] javascript benchmark, slightly modified to work with the Raptor harness.<br />
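The summarization above can be sketched as follows; note this reproduces only the described aggregation (sum per subtest, geometric mean of the 16 sums), not Speedometer's internal timing or runs/minute conversion:<br />

```python
import math

def speedometer_score(subtest_sums):
    """Aggregate Speedometer as described above: each subtest contributes
    the sum of its internal benchmark results, and the overall result is
    the geometric mean of those sums.

    Sketch of the described summarization only."""
    if not subtest_sums:
        raise ValueError("no subtest sums given")
    # Geometric mean computed in log space to avoid overflow.
    return math.exp(sum(math.log(s) for s in subtest_sums) / len(subtest_sums))
```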
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to 2 maximum<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shut down the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor-test<br />
# During the 30 second Raptor pause which happens right after Firefox has started up, in the ALREADY OPEN current tab, type "about:debugging" for the URL.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / or the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally in order to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it basically relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs): if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running, so the new instance couldn't start up. In this case, you need to kill the Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (e.g. Google Pixel 2) fully set up and connected via USB, invoke the Raptor test normally via ./mach raptor-test<br />
# Startup a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears, click it and let the tools install.<br />
# Under "USB Devices" on the top right your android device should be listed (e.g. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.</div>Bebef 1987https://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Raptor&diff=1206206TestEngineering/Performance/Raptor2019-01-15T12:35:50Z<p>Bebef 1987: /* raptor-tp6-7 */</p>
<hr />
<div>== Raptor ==<br />
<br />
Raptor is a new performance testing framework for running browser pageload and browser benchmark tests. The core of Raptor was designed as a browser extension, therefore Raptor is cross-browser compatible and is currently running in production on Firefox Desktop, Firefox Android Geckoview, and on Google Chromium.<br />
<br />
Raptor supports two types of performance tests: page-load tests, and standard benchmark tests.<br />
<br />
=== Page-Load Tests ===<br />
<br />
Page-load tests involve loading a specific web page and measuring the load performance (i.e. time-to-first-non-blank-paint, dom-content-flushed, ttfi). The pageload measurements are 'warm load' in that a new tab is opened only at the start of the test for each new page, and each pagecycle is a reload in the same browser tab.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
For page-load tests, instead of using live web pages for performance testing, Raptor uses a tool called [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]]. Mitmproxy allows us to record and playback test pages via a local Firefox proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
<br />
* Git needs to be in the PATH in the terminal that you build Firefox / run Raptor from, as Raptor uses Git to check out a local copy of the source for some of the performance benchmarks<br />
<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
<br />
* If you plan on running Raptor on android, your android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see what Raptor performance tests are currently available on all platforms use the 'print-tests' option, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally just build Firefox and then run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name><br />
<br />
For example to run the raptor tp6 pageload test locally just use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Android Geckoview ====<br />
<br />
When running Raptor tests on a local android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your android device to USB and running:<br />
<br />
mozilla-central$ adb devices<br />
<br />
If your device serial number is listed then you're set. If ADB is not found you can install it by running (in your local mozilla development repo):<br />
<br />
mozilla-central$ ./mach bootstrap<br />
<br />
Then in bootstrap select the option for "Firefox for Android Artifact Mode" and that will install the required tools (no need to do an actual build).<br />
<br />
Next make sure your android device is ready to go. Local android device prerequisites are:<br />
<br />
* Device is rooted<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android android build on treeherder], then install it on your device i.e.:<br />
<br />
mozilla-central$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag automatically grants all application permissions, which is required. Note, when updating the geckoview example app, you must uninstall the existing one first, i.e.:<br />
<br />
mozilla-central$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on android geckoview is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser extension console messages do appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
mozilla-central$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on android using the Firefox WebIDE: click on the android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer...' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout', which means the page-load timed out (or the benchmark test iteration didn't complete) within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor-test --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor-test --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, it is the total number of times the entire benchmark test is run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome.<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
mozilla-central$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with gecko profiling enabled running nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile zip file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
mozilla-central$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You may need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile zip files will be listed as job artifacts;<br />
# Select a Raptor profile zip artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests (currently 'tp6', and 'gdocs') use the [https://mitmproxy.org/ Mitmproxy] tool to record and playback page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for each page-load suite, and the test pages contained within each.<br />
<br />
==== raptor-tp6-1 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-1.ini raptor-tp6-1.ini].<br />
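That summarization (drop the first page-cycle as warm-up noise, then report the median of the remaining cycles) can be sketched as follows. This is an illustrative sketch only, not Raptor's actual implementation; the function name is made up:

```python
import statistics

def summarize_pageload(cycle_values):
    """Summarize one page-load measure for a tp6-style test.

    cycle_values: per-pagecycle measurements in ms. The first cycle
    is dropped (initial extra loading time/noise) and the median of
    the remaining cycles is reported.
    """
    if len(cycle_values) < 2:
        raise ValueError("need at least two page-cycles")
    return statistics.median(cycle_values[1:])

# Example: fnbpaint values (ms) from 5 page-cycles; the noisy first
# cycle (1200 ms) is discarded before the median is taken.
print(summarize_pageload([1200, 640, 655, 650, 700]))  # -> 652.5
```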
<br />
''' Test pages in tp6-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-amazon-*]<br />
* URL: https://www.amazon.com/s/url=search-alias%3Daps&field-keywords=laptop<br />
* Hero: string description element for first laptop in search results<br />
<br />
[raptor-tp6-facebook-*]<br />
* URL: https://www.facebook.com (logged into a user account)<br />
* Hero: on the Facebook 'Home' icon<br />
<br />
[raptor-tp6-google-*]<br />
* URL: https://www.google.com/search?hl=en&q=barack+obama&cad=h<br />
* Hero: bigger photo of Obama in search results towards top right<br />
<br />
[raptor-tp6-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: YouTube logo on the top left<br />
<br />
==== raptor-tp6-2 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-2.ini raptor-tp6-2.ini].<br />
<br />
''' Test pages in tp6-2 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-docs-*]<br />
* URL: https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-sheets-*]<br />
* URL: https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-slides-*]<br />
* URL: https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
==== raptor-tp6-3 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-3.ini raptor-tp6-3.ini].<br />
<br />
''' Test pages in tp6-3 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-imdb-*]<br />
* URL: https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2<br />
<br />
[raptor-tp6-imgur-*]<br />
* URL: https://imgur.com/gallery/m5tYJL6<br />
<br />
[raptor-tp6-wikia-*]<br />
* URL: http://fandom.wikia.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase<br />
<br />
==== raptor-tp6-4 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-4.ini raptor-tp6-4.ini].<br />
<br />
''' Test pages in tp6-4 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-bing-*]<br />
* URL: https://www.bing.com/search?q=barack+obama<br />
<br />
[raptor-tp6-yandex-*]<br />
* URL: https://yandex.ru/search/?text=barack%20obama&lr=10115<br />
<br />
==== raptor-tp6-5 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-5.ini raptor-tp6-5.ini].<br />
<br />
''' Test pages in tp6-5 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-apple-*]<br />
* URL: https://www.apple.com/macbook-pro/<br />
<br />
[raptor-tp6-microsoft-*]<br />
* URL: https://www.microsoft.com/en-us/windows/get-windows-10<br />
<br />
==== raptor-tp6-6 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-6.ini raptor-tp6-6.ini].<br />
<br />
''' Test pages in tp6-6 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-reddit-*]<br />
* URL: https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/<br />
<br />
[raptor-tp6-yahoo-news-*]<br />
* URL: https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html<br />
<br />
==== raptor-tp6-7 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-7.ini raptor-tp6-7.ini].<br />
<br />
''' Test pages in tp6-7 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-instagram-*]<br />
* URL: https://www.instagram.com/<br />
<br />
[raptor-tp6-twitter-*]<br />
* URL: https://twitter.com/BarackObama<br />
<br />
[raptor-tp6-yahoo-mail-*]<br />
* URL: https://mail.yahoo.com/<br />
<br />
==== raptor-tp6-8 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-8.ini raptor-tp6-8.ini].<br />
<br />
''' Test pages in tp6-8 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-ebay-*]<br />
* URL: https://www.ebay.com/<br />
<br />
[raptor-tp6-wikipedia-*]<br />
* URL: https://en.wikipedia.org/wiki/Barack_Obama<br />
<br />
==== raptor-tp6-9 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-9.ini raptor-tp6-9.ini].<br />
<br />
''' Test pages in tp6-9 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-google-mail-*]<br />
* URL: https://mail.google.com/<br />
<br />
[raptor-tp6-pinterest-*]<br />
* URL: https://pinterest.com/<br />
<br />
==== raptor-tp6-10 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each pagecycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each pagecycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-10.ini raptor-tp6-10.ini].<br />
<br />
''' Test pages in tp6-10 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-paypal-*]<br />
* URL: https://www.paypal.com/myaccount/summary/<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: bholley<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS for the subtest; each subtest is run for 15 seconds, this is repeated 5 times, and the median value is reported<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
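A minimal sketch of that summarization, assuming the per-subtest FPS repeats have already been collected (the names are illustrative, not Raptor's actual code):

```python
import statistics

def subtest_score(fps_repeats):
    """Median FPS across the 5 repeats of one 15-second subtest."""
    return statistics.median(fps_repeats)

def suite_score(per_subtest_repeats):
    """Geometric mean of the subtest medians (9 subtests for
    animometer, 11 for the html suite)."""
    medians = [subtest_score(r) for r in per_subtest_repeats]
    return statistics.geometric_mean(medians)

# Toy example with three hypothetical subtests:
fps = [[30, 32, 31, 29, 30],
       [60, 58, 59, 61, 60],
       [45, 44, 46, 45, 43]]
print(round(suite_score(fps), 2))  # -> 43.27
```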
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these are made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] javascript benchmark, slightly modified to work with the Raptor harness.<br />
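The aggregation described above (sum each subtest's internal benchmark results, then take the geometric mean of the 16 sums) can be sketched like this. It is a toy illustration of the summarization only, not the benchmark's real code, and it omits the conversion to a runs/minute score:

```python
import statistics

def speedometer_score(subtest_results):
    """subtest_results: one list of internal benchmark results per
    subtest (16 subtests of 9 internal benchmarks in Speedometer).
    Each subtest's value is the sum of its internal results; the
    overall score is the geometric mean of those sums."""
    sums = [sum(internal) for internal in subtest_results]
    return statistics.geometric_mean(sums)

# Toy example with 2 subtests of 3 internal results each:
score = speedometer_score([[10, 20, 30], [15, 25, 35]])
print(round(score, 2))  # -> 67.08
```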
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Cap the number of test page-cycles at a maximum of 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, automatically open the DevTools browser console, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, automatically open the DevTools console<br />
* Keep the browser open after the Raptor test has finished; you will be prompted in the terminal to manually shut down the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor-test<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (the default).<br />
# Turn ON the "Enable add-on debugging" checkbox<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click its "Console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console", and select the "Console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily modified locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
This section concerns debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running, so a new one couldn't start up. In this case, you need to kill the leftover Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then kill the first mitm process in the list; that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
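The ps/grep/kill steps above can also be scripted. Here is a small sketch (the helper name is hypothetical, not part of Raptor) that parses `ps -ax` output for leftover mitmdump processes, skipping the grep line itself:<br />
<br />
```python
import re
import subprocess

def find_mitmdump_pids(ps_output):
    """Return the PIDs of mitmdump processes found in `ps -ax` output."""
    pids = []
    for line in ps_output.splitlines():
        # Skip the "grep mitm" line, which matches its own search term
        if "mitmdump" in line and "grep" not in line:
            match = re.match(r"\s*(\d+)", line)  # PID is the first column
            if match:
                pids.append(int(match.group(1)))
    return pids

if __name__ == "__main__":
    ps = subprocess.run(["ps", "-ax"], capture_output=True, text=True).stdout
    for pid in find_mitmdump_pids(ps):
        # Kill each with e.g. os.kill(pid, signal.SIGTERM), or `kill <pid>` in a shell
        print("leftover mitmdump process:", pid)
```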
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor-test<br />
# Startup a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' output is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.</div>
https://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Raptor&diff=1206202TestEngineering/Performance/Raptor2019-01-15T09:56:30Z<p>Bebef 1987: /* Page-Load Tests */</p>
<hr />
<div>== Raptor ==<br />
<br />
Raptor is a performance-testing framework for running browser pageload and browser benchmark tests. The core of Raptor was designed as a browser extension, therefore Raptor is cross-browser compatible and is currently running in production on Firefox Desktop, Firefox Android Geckoview, and on Google Chromium.<br />
<br />
Raptor supports two types of performance tests: page-load tests, and standard benchmark tests.<br />
<br />
=== Page-Load Tests ===<br />
<br />
Page-load tests involve loading a specific web page and measuring the load performance (e.g. time-to-first-non-blank-paint, dom-content-flushed, ttfi). The page-load measurements are 'warm load' in that a new tab is opened only at the start of the test for each new page, and each page-cycle is a reload in the same browser tab.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
For page-load tests, instead of using live web pages for performance testing, Raptor uses a tool called [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]. Mitmproxy allows us to record and play back test pages via a local Firefox proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
<br />
* Git needs to be in the PATH of the terminal that you build Firefox / run Raptor from, as Raptor uses Git to check out a local copy of the source for some of the performance benchmarks<br />
<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
<br />
* If you plan on running Raptor on android, your android device must already be set up (see more below in the Android section)<br />
<br />
==== Getting a List of Raptor Tests ====<br />
<br />
To see which Raptor performance tests are currently available on all platforms, use the '--print-tests' option, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --print-tests<br />
<br />
That will output all available tests on each supported app, as well as each subtest available in each suite (i.e. all the pages in a specific page-load tp6* suite).<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally just build Firefox and then run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name><br />
<br />
For example to run the raptor tp6 pageload test locally just use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-1<br />
<br />
You can run individual subtests too (i.e. a single page in one of the tp6* suites). For example, to run the amazon page-load test on Firefox:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-amazon-firefox<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
 mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Android Geckoview ====<br />
<br />
When running Raptor tests on a local android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your android device to USB and running:<br />
<br />
mozilla-central$ adb devices<br />
<br />
If your device serial number is listed then you're set. If ADB is not found you can install it by running (in your local mozilla development repo):<br />
<br />
mozilla-central$ ./mach bootstrap<br />
<br />
Then in bootstrap select the option for "Firefox for Android Artifact Mode" and that will install the required tools (no need to do an actual build).<br />
<br />
Next make sure your android device is ready to go. Local android device prerequisites are:<br />
<br />
* Device is rooted<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android android build on treeherder], then install it on your device i.e.:<br />
<br />
mozilla-central$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag automatically sets all application permissions ON, which is required. Note: when updating the geckoview example app, you must uninstall the existing one first, i.e.:<br />
<br />
mozilla-central$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on android geckoview is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed to run successfully on other, untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser extension console messages do appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
mozilla-central$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on android using the Firefox WebIDE: click on the android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer..' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor-test --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor-test --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
mozilla-central$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with gecko profiling enabled running nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse", and select the Raptor profile zip file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
mozilla-central$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You may need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile zip files will be listed as job artifacts;<br />
# Select a Raptor profile zip artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests (currently 'tp6', and 'gdocs') use the [https://mitmproxy.org/ Mitmproxy] tool to record and playback page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for each page-load suite, and the test pages contained within each.<br />
<br />
==== raptor-tp6-1 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-1.ini raptor-tp6-1.ini].<br />
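The reporting scheme above - drop the first page-cycle, then take the median of the remaining values - can be sketched as follows (a minimal illustration with hypothetical values, not Raptor's actual code):<br />
<br />
```python
import statistics

def summarize_pagecycles(replicates):
    """Summarize one page-load measurement across page-cycles.

    replicates: per-page-cycle values in ms. The first page-cycle is
    dropped (initial extra loading time/noise); the median of the
    remaining values is reported.
    """
    if len(replicates) < 2:
        raise ValueError("need at least two page-cycles")
    return statistics.median(replicates[1:])

# Hypothetical fnbpaint values (ms) from 5 page-cycles; the noisy
# first load (900) is dropped, leaving the median of the other four.
print(summarize_pagecycles([900, 510, 540, 520, 530]))  # -> 525.0
```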
<br />
''' Test pages in tp6-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-amazon-*]<br />
* URL: https://www.amazon.com/s/url=search-alias%3Daps&field-keywords=laptop<br />
* Hero: string description element for first laptop in search results<br />
<br />
[raptor-tp6-facebook-*]<br />
* URL: https://www.facebook.com (logged into a user account)<br />
* Hero: on the Facebook 'Home' icon<br />
<br />
[raptor-tp6-google-*]<br />
* URL: https://www.google.com/search?hl=en&q=barack+obama&cad=h<br />
* Hero: bigger photo of Obama in search results towards top right<br />
<br />
[raptor-tp6-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: YouTube logo on the top left<br />
<br />
==== raptor-tp6-2 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-2.ini raptor-tp6-2.ini].<br />
<br />
''' Test pages in tp6-2 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-docs-*]<br />
* URL: https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-sheets-*]<br />
* URL: https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-slides-*]<br />
* URL: https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
==== raptor-tp6-3 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-3.ini raptor-tp6-3.ini].<br />
<br />
''' Test pages in tp6-3 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-imdb-*]<br />
* URL: https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2<br />
<br />
[raptor-tp6-imgur-*]<br />
* URL: https://imgur.com/gallery/m5tYJL6<br />
<br />
[raptor-tp6-wikia-*]<br />
* URL: http://fandom.wikia.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase<br />
<br />
==== raptor-tp6-4 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-4.ini raptor-tp6-4.ini].<br />
<br />
''' Test pages in tp6-4 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-bing-*]<br />
* URL: https://www.bing.com/search?q=barack+obama<br />
<br />
[raptor-tp6-yandex-*]<br />
* URL: https://yandex.ru/search/?text=barack%20obama&lr=10115<br />
<br />
==== raptor-tp6-5 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-5.ini raptor-tp6-5.ini].<br />
<br />
''' Test pages in tp6-5 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-apple-*]<br />
* URL: https://www.apple.com/macbook-pro/<br />
<br />
[raptor-tp6-microsoft-*]<br />
* URL: https://www.microsoft.com/en-us/windows/get-windows-10<br />
<br />
==== raptor-tp6-6 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-6.ini raptor-tp6-6.ini].<br />
<br />
''' Test pages in tp6-6 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-reddit-*]<br />
* URL: https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/<br />
<br />
[raptor-tp6-yahoo-news-*]<br />
* URL: https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html<br />
<br />
==== raptor-tp6-7 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-7.ini raptor-tp6-7.ini].<br />
<br />
''' Test pages in tp6-7 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-instagram-*]<br />
* URL: https://www.instagram.com/<br />
<br />
[raptor-tp6-twitter-*]<br />
* URL: https://twitter.com/BarackObama<br />
<br />
==== raptor-tp6-8 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-8.ini raptor-tp6-8.ini].<br />
<br />
''' Test pages in tp6-8 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-ebay-*]<br />
* URL: https://www.ebay.com/<br />
<br />
[raptor-tp6-wikipedia-*]<br />
* URL: https://en.wikipedia.org/wiki/Barack_Obama<br />
<br />
<br />
==== raptor-tp6-9 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-9.ini raptor-tp6-9.ini].<br />
<br />
''' Test pages in tp6-9 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-google-mail-*]<br />
* URL: https://mail.google.com/<br />
<br />
[raptor-tp6-pinterest-*]<br />
* URL: https://pinterest.com/<br />
<br />
==== raptor-tp6-10 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-10.ini raptor-tp6-10.ini].<br />
<br />
''' Test pages in tp6-10 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-paypal-*]<br />
* URL: https://www.paypal.com/myaccount/summary/<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: bholley<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest, each subtest is run for 15 seconds, repeat this 5 times and report the median value<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
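The summarization above can be sketched as follows (a minimal illustration of the described scheme, not the actual harness code; statistics.geometric_mean requires Python 3.8+):<br />

```python
from statistics import median, geometric_mean

def motionmark_score(subtest_runs_fps):
    # Each subtest contributes the median FPS over its 5 repeated
    # 15-second runs; the suite score is the geometric mean of the
    # per-subtest medians.
    medians = [median(runs) for runs in subtest_runs_fps]
    return geometric_mean(medians)

# Two hypothetical subtests, 5 runs each:
print(motionmark_score([[10, 10, 10, 10, 10], [40, 40, 40, 40, 40]]))
```

The geometric mean keeps one fast subtest from dominating the suite score the way an arithmetic mean would.<br />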
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these are made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
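The Speedometer summarization can be sketched similarly (a minimal illustration of the described scheme; the real benchmark then expresses this as a runs/minute score):<br />

```python
from statistics import geometric_mean

def speedometer_summary(subtests):
    # Each of the 16 subtests contributes the sum of its internal
    # benchmark results; the summary is the geometric mean of the sums.
    sums = [sum(results) for results in subtests]
    return geometric_mean(sums)
```

For example, two hypothetical subtests with internal results [1, 3] and [2, 2] both sum to 4, giving a summary of 4.0.<br />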
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, slightly modified to work with the Raptor harness.<br />
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically cap the number of test page-cycles at 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor-test<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, in the ALREADY OPEN current tab, enter "about:debugging" in the URL bar.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running or the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible) that will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
This section applies to debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
This means a Mitmproxy server was already running, so a new one couldn't start up. In this case, you need to kill the leftover Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitmdump process in the list; that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor-test<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", then click the "console" tab below; the Raptor runner output will appear there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.</div>Bebef 1987https://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Raptor&diff=1205914TestEngineering/Performance/Raptor2019-01-08T09:29:02Z<p>Bebef 1987: /* raptor-tp6-8 */</p>
<hr />
<div>== Raptor ==<br />
<br />
Raptor is a new performance testing framework for running browser pageload and browser benchmark tests. The core of Raptor was designed as a browser extension, therefore Raptor is cross-browser compatible and is currently running in production on Firefox Desktop, Firefox Android Geckoview, and on Google Chromium.<br />
<br />
Raptor supports two types of performance tests: page-load tests, and standard benchmark tests.<br />
<br />
=== Page-Load Tests ===<br />
<br />
Page-load tests involve loading a specific web page and measuring the load performance (i.e. time-to-first-non-blank-paint, dom-content-flushed, ttfi). The pageload measurements are 'warm load' in that a new tab is opened only at the start of the test for each new page, and each page-cycle is a reload in the same browser tab.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
For page-load tests, instead of using live web pages for performance testing, Raptor uses a tool called [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]]. Mitmproxy allows us to record and playback test pages via a local Firefox proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
<br />
* GIT needs to be in the path in the terminal that you build Firefox / run Raptor from, as Raptor uses GIT to check out a local copy of some of the source for some of the performance benchmarks<br />
<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
<br />
* If you plan on running Raptor on android, your android device must already be setup (see more below in the Android section)<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally just build Firefox and then run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name><br />
<br />
For example to run the raptor tp6 pageload test locally just use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-1<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
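The results file can be inspected directly (a minimal sketch; the exact layout of raptor.json may vary between Raptor versions, so examine the keys before relying on them):<br />

```python
import json

def load_raptor_results(path="testing/mozharness/build/raptor.json"):
    # Load the local Raptor results file produced by ./mach raptor-test;
    # the default path is relative to your mozilla-central checkout.
    with open(path) as f:
        return json.load(f)
```

For a quick look from the shell, `python -m json.tool` on the same file works equally well.<br />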
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Android Geckoview ====<br />
<br />
When running Raptor tests on a local android device, Raptor is expecting the device to already be setup and ready to go.<br />
<br />
First ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your android device to USB and running:<br />
<br />
mozilla-central$ adb devices<br />
<br />
If your device serial number is listed then you're set. If ADB is not found you can install it by running (in your local mozilla development repo):<br />
<br />
mozilla-central$ ./mach bootstrap<br />
<br />
Then in bootstrap select the option for "Firefox for Android Artifact Mode" and that will install the required tools (no need to do an actual build).<br />
<br />
Next make sure your android device is ready to go. Local android device pre-requisites are:<br />
<br />
* Device is rooted<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android android build on treeherder], then install it on your device i.e.:<br />
<br />
mozilla-central$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically grant all application permissions, which is required. Note, when updating the geckoview example app, you must uninstall the existing one first, i.e.:<br />
<br />
mozilla-central$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on android geckoview is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser extension console messages do appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
mozilla-central$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on android using the Firefox WebIDE: click on the android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer..' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor-test --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor-test --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
mozilla-central$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with gecko profiling enabled running nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile zip file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
mozilla-central$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You may need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile zip files will be listed as job artifacts;<br />
# Select a Raptor profile zip artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests (currently 'tp6', and 'gdocs') use the [https://mitmproxy.org/ Mitmproxy] tool to record and playback page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for each page-load suite, and the test pages contained within each.<br />
<br />
==== raptor-tp6-1 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-1.ini raptor-tp6-1.ini].<br />
<br />
''' Test pages in tp6-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-amazon-*]<br />
* URL: https://www.amazon.com/s/url=search-alias%3Daps&field-keywords=laptop<br />
* Hero: string description element for first laptop in search results<br />
<br />
[raptor-tp6-facebook-*]<br />
* URL: https://www.facebook.com (logged into a user account)<br />
* Hero: on the Facebook 'Home' icon<br />
<br />
[raptor-tp6-google-*]<br />
* URL: https://www.google.com/search?hl=en&q=barack+obama&cad=h<br />
* Hero: bigger photo of Obama in search results towards top right<br />
<br />
[raptor-tp6-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: YouTube logo on the top left<br />
<br />
==== raptor-tp6-2 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-2.ini raptor-tp6-2.ini].<br />
<br />
''' Test pages in tp6-2 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-docs-*]<br />
* URL: https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-sheets-*]<br />
* URL: https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-slides-*]<br />
* URL: https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
==== raptor-tp6-3 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values reported for each page-cycle (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-3.ini raptor-tp6-3.ini].<br />
<br />
''' Test pages in tp6-3 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-imdb-*]<br />
* URL: https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2<br />
<br />
[raptor-tp6-imgur-*]<br />
* URL: https://imgur.com/gallery/m5tYJL6<br />
<br />
[raptor-tp6-wikia-*]<br />
* URL: http://fandom.wikia.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase<br />
<br />
==== raptor-tp6-4 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-4.ini raptor-tp6-4.ini].<br />
<br />
''' Test pages in tp6-4 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-bing-*]<br />
* URL: https://www.bing.com/search?q=barack+obama<br />
<br />
[raptor-tp6-yandex-*]<br />
* URL: https://yandex.ru/search/?text=barack%20obama&lr=10115<br />
<br />
==== raptor-tp6-5 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-5.ini raptor-tp6-5.ini].<br />
<br />
''' Test pages in tp6-5 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-apple-*]<br />
* URL: https://www.apple.com/macbook-pro/<br />
<br />
[raptor-tp6-microsoft-*]<br />
* URL: https://www.microsoft.com/en-us/windows/get-windows-10<br />
<br />
==== raptor-tp6-6 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-6.ini raptor-tp6-6.ini].<br />
<br />
''' Test pages in tp6-6 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-reddit-*]<br />
* URL: https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/<br />
<br />
[raptor-tp6-yahoo-news-*]<br />
* URL: https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html<br />
<br />
==== raptor-tp6-7 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-7.ini raptor-tp6-7.ini].<br />
<br />
''' Test pages in tp6-7 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-instagram-*]<br />
* URL: https://www.instagram.com/<br />
<br />
[raptor-tp6-twitter-*]<br />
* URL: https://twitter.com/BarackObama<br />
<br />
==== raptor-tp6-8 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-8.ini raptor-tp6-8.ini].<br />
<br />
''' Test pages in tp6-8 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-ebay-*]<br />
* URL: https://www.ebay.com/<br />
<br />
[raptor-tp6-wikipedia-*]<br />
* URL: https://en.wikipedia.org/wiki/Barack_Obama<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: bholley<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: FPS from the subtest; each subtest is run for 15 seconds, repeated 5 times, and the median value is reported<br />
** suite: we take a geometric mean of all the subtests (9 for animometer, 11 for html suite)<br />
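The two-step summarization above can be sketched as follows (hypothetical helper with illustrative subtest names and FPS values, not MotionMark's own scoring code):<br />
<br />
```python
from math import prod
from statistics import median

def motionmark_suite_score(subtest_runs):
    # Per subtest: median FPS of its 5 repeated 15-second runs.
    # Suite score: geometric mean of the per-subtest medians.
    medians = [median(runs) for runs in subtest_runs.values()]
    return prod(medians) ** (1.0 / len(medians))

runs = {
    "multiply": [58, 60, 59, 60, 57],  # illustrative FPS values
    "arcs": [30, 29, 31, 30, 28],
}
print(round(motionmark_suite_score(runs), 2))  # -> 42.07
```
<br />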
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
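The summarization above can be sketched like this (hypothetical helper with illustrative subtest names and values, not Speedometer's own scoring code):<br />
<br />
```python
from math import prod

def speedometer_summary(subtests):
    # Per subtest: sum of its internal benchmark results.
    # Then take the geometric mean of the per-subtest sums
    # (16 subtests in the real benchmark).
    sums = [sum(results) for results in subtests.values()]
    return prod(sums) ** (1.0 / len(sums))

subtests = {
    "VanillaJS-TodoMVC": [10.0, 12.0, 11.0],  # illustrative values
    "React-TodoMVC": [20.0, 22.0, 21.0],
}
print(round(speedometer_summary(subtests), 2))  # -> 45.6
```
<br />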
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, taken from upstream with only slight modifications to work with the Raptor harness.<br />
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically limit the number of test page-cycles to a maximum of 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shut down the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor-test<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new debugger window will open shortly; click its "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / or the page starts loading), in Firefox just choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which closes any open debug consoles. To have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control-server method that shuts down the browser; the browser shutdown [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds; it relays the test results from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
This applies to debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs). If Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it may fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means a Mitmproxy server was already running, so the new one couldn't start up. In this case, you need to kill the leftover Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
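If you'd rather script the clean-up, a small hypothetical helper can pull the mitmdump PIDs out of the `ps` output shown above (skipping the grep line itself):<br />
<br />
```python
def mitmdump_pids(ps_output):
    # Return the PIDs (first column) of mitmdump processes,
    # ignoring the `grep mitm` process from the pipeline itself.
    pids = []
    for line in ps_output.splitlines():
        fields = line.split()
        if fields and "mitmdump" in line and "grep" not in line:
            pids.append(int(fields[0]))
    return pids

# Abbreviated sample of the `ps -ax | grep mitm` output above
sample = """5439 ttys000 0:00.09 /obj/testing/raptor/mitmdump -k -q
5440 ttys000 0:01.64 /obj/testing/raptor/mitmdump -k -q
5509 ttys000 0:00.01 grep mitm"""
print(mitmdump_pids(sample))  # -> [5439, 5440]
```
<br />
The returned PIDs can then be passed to kill (or os.kill) as described above.<br />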
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor-test<br />
# Startup a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools; if that link appears, click it and let the tools install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2"); click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' output is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.</div>Bebef 1987https://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Raptor&diff=1205913TestEngineering/Performance/Raptor2019-01-08T09:28:23Z<p>Bebef 1987: /* Page-Load Tests */</p>
<hr />
<div>== Raptor ==<br />
<br />
Raptor is a new performance testing framework for running browser pageload and browser benchmark tests. The core of Raptor was designed as a browser extension, therefore Raptor is cross-browser compatible and is currently running in production on Firefox Desktop, Firefox Android Geckoview, and on Google Chromium.<br />
<br />
Raptor supports two types of performance tests: page-load tests, and standard benchmark tests.<br />
<br />
=== Page-Load Tests ===<br />
<br />
Page-load tests basically involve loading a specific web page and measuring the load performance (i.e. time-to-first-non-blank-paint, dom-content-flushed, ttfi). The pageload measurements are 'warm load' in that a new tab is opened only at the start of the test for each new page, and each pagecycle is a reload in the same browser tab.<br />
<br />
=== Benchmark Tests ===<br />
<br />
Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
For page-load tests, instead of using live web pages for performance testing, Raptor uses a tool called [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]. Mitmproxy allows us to record and play back test pages via a local Firefox proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
<br />
* Git needs to be in the PATH in the terminal that you build Firefox / run Raptor from, as Raptor uses Git to check out a local copy of the source for some of the performance benchmarks<br />
<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
<br />
* If you plan on running Raptor on android, your android device must already be setup (see more below in the Android section)<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally just build Firefox and then run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name><br />
<br />
For example to run the raptor tp6 pageload test locally just use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-1<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Android Geckoview ====<br />
<br />
When running Raptor tests on a local android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your android device to USB and running:<br />
<br />
mozilla-central$ adb devices<br />
<br />
If your device serial number is listed then you're set. If ADB is not found you can install it by running (in your local mozilla development repo):<br />
<br />
mozilla-central$ ./mach bootstrap<br />
<br />
Then in bootstrap select the option for "Firefox for Android Artifact Mode" and that will install the required tools (no need to do an actual build).<br />
<br />
Next make sure your android device is ready to go. Local android device pre-requisites are:<br />
<br />
* Device is rooted<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android android build on treeherder], then install it on your device i.e.:<br />
<br />
mozilla-central$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required. Note, when updating the geckoview example app, you must uninstall the existing one first, i.e.:<br />
<br />
mozilla-central$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on android geckoview is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser extension console messages do appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
mozilla-central$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on android using the Firefox WebIDE, click on the android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer.." tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor-test --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor-test --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
mozilla-central$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation Talos hardware used in automation wiki page] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Nightly Profiling Jobs in Production ====<br />
We have Firefox desktop Raptor jobs with gecko profiling enabled running nightly in production on Mozilla Central (on Linux64, Win10, and OSX). This provides a steady cache of gecko profiles for the Raptor tests. Search for the [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=Rap-Prof "Rap-Prof" treeherder group on Mozilla Central].<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile zip file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
mozilla-central$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You may need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile zip files will be listed as job artifacts;<br />
# Select a Raptor profile zip artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests (currently 'tp6', and 'gdocs') use the [https://mitmproxy.org/ Mitmproxy] tool to record and playback page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for each page-load suite, and the test pages contained within each.<br />
<br />
==== raptor-tp6-1 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the values from the remaining page-cycles (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-1.ini raptor-tp6-1.ini].<br />
<br />
''' Test pages in tp6-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-amazon-*]<br />
* URL: https://www.amazon.com/s/url=search-alias%3Daps&field-keywords=laptop<br />
* Hero: string description element for first laptop in search results<br />
<br />
[raptor-tp6-facebook-*]<br />
* URL: https://www.facebook.com (logged into a user account)<br />
* Hero: on the Facebook 'Home' icon<br />
<br />
[raptor-tp6-google-*]<br />
* URL: https://www.google.com/search?hl=en&q=barack+obama&cad=h<br />
* Hero: bigger photo of Obama in search results towards top right<br />
<br />
[raptor-tp6-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: YouTube logo on the top left<br />
<br />
==== raptor-tp6-2 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, hero element, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-2.ini raptor-tp6-2.ini].<br />
<br />
''' Test pages in tp6-2 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-docs-*]<br />
* URL: https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-sheets-*]<br />
* URL: https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-slides-*]<br />
* URL: https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
==== raptor-tp6-3 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-3.ini raptor-tp6-3.ini].<br />
<br />
''' Test pages in tp6-3 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-imdb-*]<br />
* URL: https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2<br />
<br />
[raptor-tp6-imgur-*]<br />
* URL: https://imgur.com/gallery/m5tYJL6<br />
<br />
[raptor-tp6-wikia-*]<br />
* URL: http://fandom.wikia.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase<br />
<br />
==== raptor-tp6-4 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-4.ini raptor-tp6-4.ini].<br />
<br />
''' Test pages in tp6-4 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-bing-*]<br />
* URL: https://www.bing.com/search?q=barack+obama<br />
<br />
[raptor-tp6-yandex-*]<br />
* URL: https://yandex.ru/search/?text=barack%20obama&lr=10115<br />
<br />
==== raptor-tp6-5 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-5.ini raptor-tp6-5.ini].<br />
<br />
''' Test pages in tp6-5 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-apple-*]<br />
* URL: https://www.apple.com/macbook-pro/<br />
<br />
[raptor-tp6-microsoft-*]<br />
* URL: https://www.microsoft.com/en-us/windows/get-windows-10<br />
<br />
==== raptor-tp6-6 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-6.ini raptor-tp6-6.ini].<br />
<br />
''' Test pages in tp6-6 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-reddit-*]<br />
* URL: https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/<br />
<br />
[raptor-tp6-yahoo-news-*]<br />
* URL: https://www.yahoo.com/lifestyle/police-respond-noise-complaint-end-playing-video-games-respectful-tenants-002329963.html<br />
<br />
==== raptor-tp6-7 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-7.ini raptor-tp6-7.ini].<br />
<br />
''' Test pages in tp6-7 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-instagram-*]<br />
* URL: https://www.instagram.com/<br />
<br />
[raptor-tp6-twitter-*]<br />
* URL: https://twitter.com/BarackObama<br />
<br />
==== raptor-tp6-8 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive, loadtime<br />
* measuring on Chrome: first-contentful-paint, loadtime<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all the values (in ms). The first page-cycle is dropped due to the extra loading time/noise of the initial load. The overall result reported for each test page is the median (in ms) of the values from the remaining page-cycles.<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-8.ini raptor-tp6-8.ini].<br />
<br />
''' Test pages in tp6-8 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-ebay-*]<br />
* URL: https://www.ebay.com/<br />
<br />
[raptor-tp6-wikipedia-*]<br />
* URL: https://en.wikipedia.org/wiki/Barack_Obama<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: bholley<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: the FPS of the subtest; each subtest runs for 15 seconds, is repeated 5 times, and the median value is reported<br />
** suite: the geometric mean of all the subtest values (9 for animometer, 11 for htmlsuite)<br />
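The two-level summarization above (a median over the 5 repeats per subtest, then a geometric mean across subtests) can be sketched as follows. This is an illustration of the described math with made-up names, not the harness's code:<br />
<br />
```python
import math
import statistics

def motionmark_suite_score(subtest_fps_runs):
    """`subtest_fps_runs` maps each subtest name to its repeated FPS
    measurements (5 runs per subtest, per the description above).

    Each subtest reports the median of its runs; the suite value is
    the geometric mean of the subtest medians.
    """
    medians = [statistics.median(runs) for runs in subtest_fps_runs.values()]
    # geometric mean: exp of the mean of the logs
    return math.exp(sum(math.log(m) for m in medians) / len(medians))
```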
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: For all of the 16 subtests, we collect the sum of all their internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, slightly modified to work with the Raptor harness.<br />
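The scoring described above (sum the internal results per subtest, then take a geometric mean of the 16 sums) can be sketched as follows. This is a simplified illustration of the summarization only, with made-up names; it is not Speedometer's own code and omits the conversion to a runs/minute score:<br />
<br />
```python
import math

def speedometer_summarize(subtest_results):
    """`subtest_results` maps each subtest name to the list of its
    internal benchmark results (16 subtests with 9 internal results
    each, per the description above).

    Each subtest's value is the sum of its internal results; the
    summarized value is the geometric mean of those sums.
    """
    sums = [sum(times) for times in subtest_results.values()]
    # geometric mean: exp of the mean of the logs
    return math.exp(sum(math.log(s) for s in sums) / len(sums))
```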
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically limit the number of test page-cycles to a maximum of 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shutdown the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor-test<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, type "about:debugging" into the URL bar of the ALREADY OPEN current tab.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console", and select the "console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. In order to have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note, [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs): if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That just means the Mitmproxy server was already running, so a new instance couldn't start up. In this case, you need to kill the Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Then just kill the first mitm process in the list and that's sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
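The ps / kill steps above can also be scripted; below is a minimal sketch assuming a POSIX ps. The helper name is made up; pass it "mitmdump" to match the processes shown in the listing above:<br />
<br />
```python
import os
import signal
import subprocess

def kill_matching(pattern):
    """Send SIGTERM to every process whose command line contains
    `pattern` (skipping this process itself); returns the PIDs killed."""
    # pid=,command= suppresses the header row
    out = subprocess.run(["ps", "-ax", "-o", "pid=,command="],
                         capture_output=True, text=True).stdout
    killed = []
    for line in out.splitlines():
        pid_s, _, cmd = line.strip().partition(" ")
        if not pid_s.isdigit():
            continue
        if pattern in cmd and int(pid_s) != os.getpid():
            os.kill(int(pid_s), signal.SIGTERM)
            killed.append(int(pid_s))
    return killed

# e.g. kill_matching("mitmdump")
```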
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor-test<br />
# Start up a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", and click the "console" tab below - and the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: when debugging Raptor on Android, 'adb logcat' is very useful. More specifically for 'geckoview', the output (including Raptor's) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.</div>Bebef 1987https://wiki.mozilla.org/index.php?title=TestEngineering/Performance/Raptor&diff=1204563TestEngineering/Performance/Raptor2018-11-29T09:23:04Z<p>Bebef 1987: Added raptor-tp6-3 to 7 test info</p>
<hr />
<div>== Raptor ==<br />
<br />
Raptor is a new performance testing framework for running browser pageload and browser benchmark tests. The core of Raptor was designed as a browser extension, therefore Raptor is cross-browser compatible and is currently running in production (tier 2) on Firefox and Google Chrome.<br />
<br />
Raptor supports two types of performance tests: page-load tests, and standard benchmark tests. Page-load tests basically involve loading a specific web page and measuring the load performance (i.e. time-to-first-non-blank-paint). Standard benchmarks are third-party tests (i.e. Speedometer) that we have integrated into Raptor to run per-commit in our production CI.<br />
<br />
For page-load tests, instead of using live web pages for performance testing, Raptor uses a tool called [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]]. Mitmproxy allows us to record and playback test pages via a local Firefox proxy. The Mitmproxy recordings are stored on tooltool and are automatically downloaded by Raptor when they are required for a test.<br />
<br />
=== Running Locally ===<br />
<br />
==== Prerequisites ====<br />
<br />
In order to run Raptor on a local machine you need:<br />
* A local mozilla repository clone with a [https://developer.mozilla.org/en-US/docs/Mozilla/Developer_guide/Build_Instructions successful Firefox build] completed<br />
<br />
* Git needs to be in the PATH in the terminal that you build Firefox / run Raptor from, as Raptor uses Git to check out a local copy of the source for some of the performance benchmarks<br />
<br />
* If you plan on running Raptor tests on Google Chrome, you need a local install of Google Chrome and know the path to the chrome binary<br />
<br />
* If you plan on running Raptor on android, your android device must already be setup (see more below in the Android section)<br />
<br />
==== Running on Firefox ====<br />
<br />
To run Raptor locally just build Firefox and then run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name><br />
<br />
For example to run the raptor tp6 pageload test locally just use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-tp6-1<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Google Chrome ====<br />
<br />
To run Raptor locally on Google Chrome, make sure you already have a local version of Google Chrome installed, and then from within your mozilla-repo run:<br />
<br />
mozilla-central$ ./mach raptor-test --test <raptor-test-name> --app=chrome --binary="<path to google chrome binary>"<br />
<br />
For example to run the raptor-speedometer benchmark on Google Chrome use:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome"<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Running on Android Geckoview ====<br />
<br />
When running Raptor tests on a local android device, Raptor expects the device to already be set up and ready to go.<br />
<br />
First ensure your local host machine has the Android SDK/Tools (i.e. ADB) installed. Check if it is already installed by attaching your android device to USB and running:<br />
<br />
mozilla-central$ adb devices<br />
<br />
If your device serial number is listed then you're set. If ADB is not found you can install it by running (in your local mozilla development repo):<br />
<br />
mozilla-central$ ./mach bootstrap<br />
<br />
Then in bootstrap select the option for "Firefox for Android Artifact Mode" and that will install the required tools (no need to do an actual build).<br />
<br />
Next make sure your android device is ready to go. Local android device prerequisites are:<br />
<br />
* Device is rooted<br />
<br />
* Device is in 'superuser' mode<br />
<br />
* The geckoview example app is already installed on the device. Download the geckoview_example.apk from the appropriate [https://treeherder.mozilla.org/#/jobs?repo=mozilla-central&searchStr=android android build on treeherder], then install it on your device i.e.:<br />
<br />
mozilla-central$ adb install -g ../Downloads/geckoview_example.apk<br />
<br />
The '-g' flag will automatically set all application permissions ON, which is required. Note, when updating the geckoview example app, you must uninstall the existing one first, i.e.:<br />
<br />
mozilla-central$ adb uninstall org.mozilla.geckoview_example<br />
<br />
Once your android device is ready, and attached to local USB, from within your local mozilla repo use the following command line to run speedometer:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-speedometer --app=geckoview --binary="org.mozilla.geckoview_example"<br />
<br />
Note: Speedometer on android geckoview is currently running on two devices in production - the Google Pixel 2 and the Moto G5 - therefore it is not guaranteed that it will run successfully on all/other untested android devices. There is an intermittent failure on the Moto G5 where speedometer just stalls ([https://bugzilla.mozilla.org/show_bug.cgi?id=1492222 Bug 1492222]).<br />
<br />
A couple of notes about debugging:<br />
<br />
* Raptor browser extension console messages do appear in adb logcat via the GeckoConsole - so this is handy:<br />
<br />
mozilla-central$ adb logcat | grep GeckoConsole<br />
<br />
* You can also debug Raptor on android using the Firefox WebIDE: click on the android device listed under "USB Devices" and then "Main Process" or the 'localhost: Speedometer..' tab process<br />
<br />
Raptor test results will be found locally in <your-repo>/testing/mozharness/build/raptor.json.<br />
<br />
==== Page-Timeouts ====<br />
<br />
On different machines the Raptor tests will run at different speeds. The default page-timeout is defined in each Raptor test INI file. On some machines you may see a test failure with a 'raptor page-timeout' which means the page-load timed out, or the benchmark test iteration didn't complete, within the page-timeout limit.<br />
<br />
You can override the default page-timeout by using the --page-timeout command line arg. In this example, each test page in tp6-1 will be given two minutes to load during each page-cycle:<br />
<br />
./mach raptor-test --test raptor-tp6-1 --page-timeout 120000<br />
<br />
If an iteration of a benchmark test is not finishing within the allocated time, increase it by:<br />
<br />
./mach raptor-test --test raptor-speedometer --page-timeout 600000<br />
<br />
==== Page-Cycles ====<br />
<br />
Page-cycles is the number of times a test page is loaded (for page-load tests); for benchmark tests, this is the total number of iterations that the entire benchmark test will be run. The default page-cycles is defined in each Raptor test INI file.<br />
<br />
You can override the default page-cycles by using the --page-cycles command line arg. In this example, the test page will only be loaded twice:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox --page-cycles 2<br />
<br />
=== Running Raptor on Try ===<br />
<br />
Raptor tests can be run on [https://treeherder.mozilla.org/#/jobs?repo=try try] on both Firefox and Google Chrome. (Raptor pageload-type tests are not supported on Google Chrome yet, as mentioned above).<br />
<br />
'''Note:''' Raptor is currently 'tier 2' on [https://treeherder.mozilla.org/#/jobs?repo=try Treeherder], which means to see the Raptor test jobs you need to ensure 'tier 2' is selected / turned on in the Treeherder 'Tiers' menu.<br />
<br />
The easiest way to run Raptor tests on try is to use mach try fuzzy:<br />
<br />
mozilla-central$ ./mach try fuzzy --full<br />
<br />
Then type 'raptor' and select which Raptor tests (and on what platforms) you wish to run.<br />
<br />
To see the Raptor test results on your try run:<br />
<br />
# In treeherder select one of the Raptor test jobs (i.e. 'sp' in 'Rap-e10s', or 'Rap-C-e10s')<br />
# Below the jobs, click on the "Performance" tab; you'll see the aggregated results listed<br />
# If you wish to see the raw replicates, click on the "Job Details" tab, and select the "perfherder-data.json" artifact<br />
<br />
==== Raptor Hardware in Production ====<br />
<br />
The Raptor performance tests run on dedicated hardware (the same hardware that the Talos performance tests use). See the [[https://wiki.mozilla.org/Performance_sheriffing/Talos/Misc#Hardware_Profile_of_machines_used_in_automation|Talos hardware used in automation wiki page]] for more details.<br />
<br />
=== Profiling Raptor Jobs ===<br />
<br />
Raptor tests are able to create gecko profiles which can be viewed in [https://perf-html.io/ perf-html.io]. This is currently only supported when running Raptor on Firefox desktop.<br />
<br />
==== Profiling Locally ====<br />
<br />
To tell Raptor to create gecko profiles during a performance test, just add the '--gecko-profile' flag to the command line, i.e.:<br />
<br />
mozilla-central$ ./mach raptor-test --test raptor-sunspider --gecko-profile<br />
<br />
When the Raptor test is finished, you will be able to find the resulting gecko profiles (ZIP) located locally in:<br />
<br />
mozilla-central/testing/mozharness/build/blobber_upload_dir/<br />
<br />
Note: While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 3. If you wish to override this, add the --page-cycles argument to the raptor-test command line. <br />
<br />
Raptor will automatically launch Firefox and load the latest gecko profile in [https://perf-html.io perf-html.io]. To turn this feature off, just set the DISABLE_PROFILE_LAUNCH=1 env var.<br />
<br />
If auto-launch doesn't work for some reason, just start Firefox manually and browse to [https://perf-html.io perf-html.io], click on "Browse" and select the Raptor profile zip file noted above.<br />
<br />
If you're on Windows and want to profile a Firefox build that you compiled yourself, make sure it contains profiling information and you have a symbols zip for it, by following the [https://developer.mozilla.org/en-US/docs/Mozilla/Performance/Profiling_with_the_Built-in_Profiler_and_Local_Symbols_on_Windows#Profiling_local_talos_runs directions on MDN].<br />
<br />
==== Profiling on Try Server ====<br />
<br />
To turn on gecko profiling for Raptor test jobs on try pushes, just add the '--gecko-profile' flag to your try push i.e.:<br />
<br />
mozilla-central$ ./mach try fuzzy --gecko-profile<br />
<br />
Then select the Raptor test jobs that you wish to run. The Raptor jobs will be run on try with profiling included. While profiling is turned on, Raptor will automatically reduce the number of pagecycles to 2.<br />
<br />
See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Add Profiling to Previously Completed Jobs ====<br />
<br />
Note: You may need treeherder 'admin' access for the following.<br />
<br />
Gecko profiles can now be created for Raptor performance test jobs that have already completed in production (i.e. mozilla-central) and on try. To repeat a completed Raptor performance test job on production or try, but add gecko profiling, do the following:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Below, and to the left of the 'Job Details' tab, select the '...' to show the menu<br />
# On the pop-up menu, select 'Create Gecko Profile'<br />
<br />
The same Raptor test job will be repeated but this time with gecko profiling turned on. A new Raptor test job symbol will be added beside the completed one, with a '-p' added to the symbol name. Wait for that new Raptor profiling job to finish. See below for how to view the gecko profiles from within treeherder.<br />
<br />
==== Viewing Profiles on Treeherder ====<br />
When the Raptor jobs are finished, to view the gecko profiles:<br />
<br />
# In treeherder, select the symbol for the completed Raptor test job (i.e. 'ss' in 'Rap-e10s')<br />
# Click on the 'Job Details' tab below<br />
# The Raptor profile zip files will be listed as job artifacts;<br />
# Select a Raptor profile zip artifact, and click the 'view in perf-html.io' link to the right<br />
<br />
=== Recording Pages for Raptor Pageload Tests ===<br />
<br />
Raptor pageload tests (currently 'tp6', and 'gdocs') use the [https://mitmproxy.org/ Mitmproxy] tool to record and playback page archives. For more information on creating new page playback archives, please see [[Performance_sheriffing/Raptor/Mitmproxy|Raptor and Mitmproxy]].<br />
<br />
== Raptor Test List ==<br />
<br />
Currently the following Raptor tests are available. Note: Check the test details below to see which browser (i.e. Firefox, Google Chrome, Android) each test is supported on.<br />
<br />
=== Page-Load Tests ===<br />
<br />
For all Raptor page-load tests, the pages are played back from [[https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy Mitmproxy]] recordings. If you need the HTML page source (outside of the Mitmproxy recording) for debugging, the raw HTML can be found in our [https://github.com/mozilla/perf-automation/tree/master/pagesets perf-automation github repo].<br />
<br />
All the pages in a test suite can be run by calling the top-level test name, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-1<br />
<br />
Individual test pages can be run by calling the subtest, i.e.:<br />
<br />
./mach raptor-test --test raptor-tp6-google-firefox<br />
<br />
Some of the page recordings contain [https://wiki.mozilla.org/Performance_sheriffing/Raptor/Mitmproxy#Adding_Hero_Elements hero elements]. When hero elements are measured, the value is the time until the hero element appears on the page (in ms).<br />
<br />
Below are the details for each page-load suite, and the test pages contained within each.<br />
<br />
==== raptor-tp6-1 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint, hero element<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-1.ini raptor-tp6-1.ini].<br />
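The reporting scheme above (drop the first page-cycle, take the median of the rest) can be sketched as follows. This is an illustrative sketch only, not the actual Raptor summarization code; the function name is hypothetical:<br />

```python
from statistics import median

def summarize_page(cycle_values):
    """Summarize one measurement across page-cycles: the noisy first
    page-cycle is dropped, and the median of the rest is reported (ms)."""
    return median(cycle_values[1:])

# e.g. 5 page-cycles of fnbpaint times in ms; the first (900) is discarded
print(summarize_page([900, 510, 498, 505, 502]))  # -> 503.5
```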
<br />
''' Test pages in tp6-1 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-amazon-*]<br />
* URL: https://www.amazon.com/s/url=search-alias%3Daps&field-keywords=laptop<br />
* Hero: string description element for first laptop in search results<br />
<br />
[raptor-tp6-facebook-*]<br />
* URL: https://www.facebook.com (logged into a user account)<br />
* Hero: the Facebook 'Home' icon<br />
<br />
[raptor-tp6-google-*]<br />
* URL: https://www.google.com/search?hl=en&q=barack+obama&cad=h<br />
* Hero: bigger photo of Obama in search results towards top right<br />
<br />
[raptor-tp6-youtube-*]<br />
* URL: https://www.youtube.com<br />
* Hero: YouTube logo on the top left<br />
<br />
==== raptor-tp6-2 ====<br />
* contact: :rwood, :jmaher<br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, hero element, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint, hero element<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-2.ini raptor-tp6-2.ini].<br />
<br />
''' Test pages in tp6-2 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-docs-*]<br />
* URL: https://docs.google.com/document/d/1US-07msg12slQtI_xchzYxcKlTs6Fp7WqIc6W5GK5M8/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-sheets-*]<br />
* URL: https://docs.google.com/spreadsheets/d/1jT9qfZFAeqNoOK97gruc34Zb7y_Q-O_drZ8kSXT-4D4/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
[raptor-tp6-slides-*]<br />
* URL: https://docs.google.com/presentation/d/1Ici0ceWwpFvmIb3EmKeWSq_vAQdmmdFcWqaiLqUkJng/edit?usp=sharing<br />
* Hero: blue 'Sign In' button on the top right<br />
<br />
==== raptor-tp6-3 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-3.ini raptor-tp6-3.ini].<br />
<br />
''' Test pages in tp6-3 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-imdb-*]<br />
* URL: https://www.imdb.com/title/tt0084967/?ref_=nv_sr_2<br />
<br />
[raptor-tp6-imgur-*]<br />
* URL: https://imgur.com/gallery/m5tYJL6<br />
<br />
[raptor-tp6-wikia-*]<br />
* URL: http://fandom.wikia.com/articles/fallout-76-will-live-and-die-on-the-creativity-of-its-playerbase<br />
<br />
==== raptor-tp6-4 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-4.ini raptor-tp6-4.ini].<br />
<br />
''' Test pages in tp6-4 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-bing-*]<br />
* URL: https://www.bing.com/search?q=barack+obama<br />
<br />
[raptor-tp6-yandex-*]<br />
* URL: https://yandex.ru/search/?text=barack%20obama&lr=10115<br />
<br />
==== raptor-tp6-5 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-5.ini raptor-tp6-5.ini].<br />
<br />
''' Test pages in tp6-5 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-apple-*]<br />
* URL: https://www.apple.com/macbook-pro/<br />
<br />
[raptor-tp6-microsoft-*]<br />
* URL: https://www.microsoft.com/en-us/windows/get-windows-10<br />
<br />
==== raptor-tp6-6 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-6.ini raptor-tp6-6.ini].<br />
<br />
''' Test pages in tp6-6 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-reddit-*]<br />
* URL: https://www.reddit.com/r/technology/comments/9sqwyh/we_posed_as_100_senators_to_run_ads_on_facebook/<br />
<br />
==== raptor-tp6-7 ====<br />
* contact: :rwood, :jmaher, :bebe <br />
* type: page-load<br />
* browsers: Firefox desktop, Google Chrome desktop (Windows and OSX only at the moment)<br />
* measuring on Firefox: time-to-first-non-blank-paint, dom-content-flushed, time-to-first-interactive<br />
* measuring on Chrome: first-contentful-paint<br />
* page-cycles: 25<br />
* reporting: Each page-cycle measures all of the values (in ms). The first page-cycle is dropped due to the initial extra loading time/noise. The overall result reported for each test page is the median of the per-page-cycle values (in ms).<br />
* test INI: [https://searchfox.org/mozilla-central/source/testing/raptor/raptor/tests/raptor-tp6-7.ini raptor-tp6-7.ini].<br />
<br />
''' Test pages in tp6-7 (* = firefox or chrome):'''<br />
<br />
[raptor-tp6-instagram-*]<br />
* URL: https://www.instagram.com/<br />
<br />
<br />
=== Benchmark Tests ===<br />
<br />
==== raptor-assorted-dom ====<br />
* contact: bholley<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-motionmark-animometer, raptor-motionmark-htmlsuite ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: benchmark measuring the time to animate complex scenes<br />
* summarization:<br />
** subtest: each subtest is run for 15 seconds; this is repeated 5 times and the median FPS value is reported<br />
** suite: we take the geometric mean of all the subtests (9 for animometer, 11 for htmlsuite)<br />
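That summarization can be sketched roughly as follows. This is illustrative only, not the harness code; the function name and FPS values are made up:<br />

```python
from math import prod
from statistics import median

def motionmark_suite_score(subtest_runs):
    """subtest_runs maps each subtest name to its 5 repeated FPS values.
    Per-subtest value: median of the repeats; suite value: geometric
    mean of the per-subtest medians."""
    per_subtest = [median(runs) for runs in subtest_runs.values()]
    return prod(per_subtest) ** (1.0 / len(per_subtest))

runs = {"multiply": [30, 32, 31, 29, 33], "leaves": [58, 60, 59, 61, 57]}
print(round(motionmark_suite_score(runs), 2))  # -> 42.77
```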
<br />
==== raptor-speedometer ====<br />
* contact: :selena<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* measuring: responsiveness of web applications<br />
* reporting: runs/minute score<br />
* data: there are 16 subtests in Speedometer; each of these is made up of 9 internal benchmarks.<br />
* summarization:<br />
** subtest: for each of the 16 subtests, we collect the sum of its internal benchmark results.<br />
** score: geometric mean of the 16 sums<br />
<br />
This is the [http://browserbench.org/Speedometer/ Speedometer] JavaScript benchmark, slightly modified to work with the Raptor harness.<br />
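A rough sketch of that scoring (sum each subtest's internal results, then take the geometric mean of the sums) is below. This is illustrative only; the actual runs/minute score is derived from this geometric mean rather than being the mean itself, and the subtest names and values here are invented:<br />

```python
from math import prod

def speedometer_summary(subtests):
    """subtests maps each subtest name to its internal benchmark
    results (ms). Each subtest's value is the sum of its internal
    results; the suite score derives from the geometric mean of
    those sums."""
    sums = [sum(vals) for vals in subtests.values()]
    return prod(sums) ** (1.0 / len(sums))

demo = {"VanillaJS-TodoMVC": [10.0, 12.0, 11.0], "React-TodoMVC": [20.0, 22.0, 21.0]}
print(round(speedometer_summary(demo), 2))  # -> 45.6
```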
<br />
==== raptor-stylebench ====<br />
* contact: :emilio<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* measuring: speed of dynamic style recalculation<br />
* reporting: runs/minute score<br />
<br />
==== raptor-sunspider ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-unity-webgl ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop, Firefox Android Geckoview<br />
* TODO<br />
<br />
==== raptor-wasm-misc, raptor-wasm-misc-baseline, raptor-wasm-misc-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
==== raptor-wasm-godot, raptor-wasm-godot-baseline, raptor-wasm-godot-ion ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop only<br />
* TODO<br />
<br />
==== raptor-webaudio ====<br />
* contact: ?<br />
* type: benchmark<br />
* browsers: Firefox desktop, Chrome desktop<br />
* TODO<br />
<br />
== Debugging the Raptor Web Extension ==<br />
<br />
When developing on Raptor and debugging, there's often a need to look at the output coming from the [https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor Raptor Web Extension]. Here are some pointers to help.<br />
<br />
=== Raptor Debug Mode ===<br />
<br />
The easiest way to debug the Raptor web extension is to run the Raptor test locally and invoke debug mode, i.e. for Firefox:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-firefox --debug-mode<br />
<br />
Or on Chrome, for example:<br />
<br />
./mach raptor-test --test raptor-tp6-amazon-chrome --app=chrome --binary="/Applications/Google Chrome.app/Contents/MacOS/Google Chrome" --debug-mode<br />
<br />
Running Raptor with debug mode will:<br />
<br />
* Automatically set the number of test page-cycles to a maximum of 2<br />
* Reduce the post-browser-startup delay from 30 seconds to 3 seconds<br />
* On Firefox, the devtools browser console will automatically open, where you can view all of the console log messages generated by the Raptor web extension<br />
* On Chrome, the devtools console will automatically open<br />
* The browser will remain open after the Raptor test has finished; you will be prompted in the terminal to manually shut down the browser when you're finished debugging.<br />
<br />
=== Manual Debugging on Firefox Desktop ===<br />
<br />
The main Raptor runner is '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/runner.js runner.js]' which is inside the web extension. The code that actually captures the performance measures is in the web extension content code '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/measure.js measure.js]'.<br />
<br />
In order to retrieve the console.log() output from the Raptor runner, do the following:<br />
<br />
# Invoke Raptor locally via ./mach raptor-test<br />
# During the 30-second Raptor pause that happens right after Firefox has started up, in the ALREADY OPEN current tab, type "about:debugging" in the URL bar.<br />
# On the debugging page that appears, make sure "Add-ons" is selected on the left (default).<br />
# Turn ON the "Enable add-on debugging" check-box<br />
# Then scroll down the page until you see the Raptor web extension in the list of currently-loaded add-ons. Under "Raptor" click the blue "Debug" link.<br />
# A new window will open shortly; click the "Console" tab<br />
<br />
To retrieve the console.log() output from the Raptor content 'measure.js' code:<br />
# As soon as Raptor opens the new test tab (and the test starts running / the page starts loading), in Firefox choose "Tools => Web Developer => Web Console", and select the "Console" tab.<br />
<br />
Raptor automatically closes the test tab and the entire browser after test completion, which will close any open debug consoles. To have more time to review the console logs, Raptor can be temporarily hacked locally to prevent the test tab and browser from being closed. Currently this must be done manually, as follows:<br />
<br />
# In the Raptor web extension runner, comment out the line that closes the test tab in the test clean-up. That line of [https://searchfox.org/mozilla-central/rev/3c85ea2f8700ab17e38b82d77cd44644b4dae703/testing/raptor/webext/raptor/runner.js#357 code is here].<br />
# Add a return statement at the top of the Raptor control server method that shuts down the browser; the browser shut-down [https://searchfox.org/mozilla-central/rev/924e3d96d81a40d2f0eec1db5f74fc6594337128/testing/raptor/raptor/control_server.py#120 method is here].<br />
<br />
For '''benchmark type tests''' (i.e. speedometer, motionmark, etc.) Raptor doesn't inject 'measure.js' into the test page content; instead it injects '[https://searchfox.org/mozilla-central/source/testing/raptor/webext/raptor/benchmark-relay.js benchmark-relay.js]' into the benchmark test content. Benchmark-relay is as it sounds: it relays the test results coming from the benchmark test to the Raptor web extension runner. Viewing the console.log() output from benchmark-relay is done the same way as noted for the 'measure.js' content above.<br />
<br />
Note: [https://bugzilla.mozilla.org/show_bug.cgi?id=1470450 Bug 1470450] is on file to add a debug mode to Raptor that will automatically grab the web extension console output and dump it to the terminal (if possible), which will make debugging much easier.<br />
<br />
=== Debugging TP6 and Killing the Mitmproxy Server ===<br />
<br />
When debugging Raptor pageload tests that use Mitmproxy (i.e. tp6, gdocs), note that if Raptor doesn't finish naturally and doesn't stop the Mitmproxy tool, the next time you attempt to run Raptor it might fail with this error:<br />
<br />
INFO - Error starting proxy server: OSError(48, 'Address already in use')<br />
INFO - raptor-mitmproxy Aborting: mitmproxy playback process failed to start, poll returned: 1<br />
<br />
That means a Mitmproxy server was still running from a previous session, so a new one couldn't start up. In this case, you need to kill the leftover Mitmproxy server processes, i.e.:<br />
<br />
mozilla-unified rwood$ ps -ax | grep mitm<br />
5439 ttys000 0:00.09 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5440 ttys000 0:01.64 /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/mitmdump -k -q -s /Users/rwood/mozilla-unified/testing/raptor/raptor/playback/alternate-server-replay.py /Users/rwood/mozilla-unified/obj-x86_64-apple-darwin17.7.0/testing/raptor/amazon.mp<br />
5509 ttys000 0:00.01 grep mitm<br />
<br />
Killing the first mitm process in the list is sufficient:<br />
<br />
mozilla-unified rwood$ kill 5439<br />
<br />
Now when you run Raptor again, the Mitmproxy server will be able to start.<br />
<br />
=== Manual Debugging on Firefox Android ===<br />
<br />
Be sure to read the above section first on how to debug the Raptor web extension when running on Firefox Desktop.<br />
<br />
When running Raptor tests on Firefox on Android (i.e. geckoview), to see the console.log() output from the Raptor web extension, do the following:<br />
<br />
# With your android device (i.e. Google Pixel 2) all set up and connected to USB, invoke the Raptor test normally via ./mach raptor-test<br />
# Startup a local copy of the Firefox Nightly Desktop browser<br />
# In Firefox Desktop choose "Tools => Web Developer => WebIDE"<br />
# In the Firefox WebIDE dialog that appears, look under "USB Devices" listed on the top right. If your device is not there, there may be a link to install remote device tools - if that link appears click it and let that install.<br />
# Under "USB Devices" on the top right your android device should be listed (i.e. "Firefox Custom on Android Pixel 2") - click on your device.<br />
# The debugger opens. On the left side click on "Main Process", then click the "console" tab below - the Raptor runner output will be included there.<br />
# On the left side under "Tabs" you'll also see an option for the active tab/page, select that and the Raptor content console.log() output should be included there.<br />
<br />
Also note: When debugging Raptor on Android, the 'adb logcat' is very useful. More specifically for 'geckoview', the output (including for Raptor) is prefixed with "GeckoConsole" - so this command is very handy:<br />
<br />
adb logcat | grep GeckoConsole<br />
<br />
=== Manual Debugging on Google Chrome ===<br />
<br />
Same as on Firefox desktop above, but use the Google Chrome console: View ==> Developer ==> Developer Tools.</div>Bebef 1987https://wiki.mozilla.org/index.php?title=B2G/QA/Automation/UI/Scrum/Sprint_4&diff=1034047B2G/QA/Automation/UI/Scrum/Sprint 42014-11-17T17:17:42Z<p>Bebef 1987: /* Other */</p>
<hr />
<div>== Sprint 4 2014-11-10 -> 2014-11-21 ==<br />
<br />
<onlyinclude><br />
=== Focuses ===<br />
<br />
* Continued cleanup of carryovers<br />
* New app tests<br />
<br />
=== Tasks ===<br />
<br />
==== Bugzilla ====<br />
<br />
* [https://bugzilla.mozilla.org/buglist.cgi?f1=cf_qa_whiteboard&list_id=11496372&o1=substring&query_format=advanced&v1=fxosqa-auto-s4 Bugzilla Query (QA Whiteboard contains fxosqa-auto-s4)]<br />
<br />
<bugzilla>{<br />
"f1": "cf_qa_whiteboard",<br />
"o1": "substring",<br />
"v1": "fxosqa-auto-s4",<br />
"include_fields": "id, assigned_to, summary, resolution, priority, cf_qa_whiteboard"<br />
}</bugzilla><br />
<br />
==== Other ====<br />
<br />
* [Bebe] Find out about suggested reviewers<br />
* [Bebe] Cross-check past automation reports to see whether the reported failures (intermittent and expected) are consistent with manual fails<br />
* PARTIAL, CARRY: Fix queries for Scrum wiki pages [geo]<br />
* CARRY: Fill out rest of info for team wiki pages [geo]<br />
* CARRY: Create legacy pages for and tag Sprint 2 and Sprint 1 tasks [geo]<br />
* DONE: Create performance acceptance report for 11/07 builds [geo]<br />
* Create performance acceptance report for 11/14 builds [geo]<br />
<br />
</onlyinclude><br />
<br />
== Sprint Planning ==<br />
<br />
* [https://etherpad.mozilla.org/fxosqa-auto-s4 Etherpad]<br />
<br />
== Sprint Meetings ==<br />
<br />
* TBD<br />
<br />
== Retrospective ==<br />
<br />
* TBD</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-06-12&diff=988481QA/Execution/Web Testing/Meetings/2014-06-122014-06-12T15:54:43Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: Rebecca <br />
** .<br />
* Scribe: Johan (J-Lo)<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** Bob - Make input-tests unfriendly for contributors (!)<br />
*** Done - just awaiting a merge of https://github.com/mozilla/input-tests/pull/170<br />
** Dave - grant merge access to Vio and Robbie C [done]<br />
** Zac - Speak to a11y team about running the tests on device<br />
** Rebecca - Create xfail test day next week [done]<br />
** Robert C - Update Saucelabs mobile selenium version [done]<br />
** Bebe - Send email about etherpad for grid issues<br />
<br />
= Discussion Items / Updates =<br />
* We have an Xfail Automation day tomorrow [!] from 8-noon PST. Please be available if possible.<br />
* The Create Event ReMo test is failing:<br />
**Here is the pull: https://github.com/mozilla/remo-tests/pull/114<br />
**The failure = ".env/lib/python2.7/site-packages/selenium/webdriver/remote/errorhandler.py:164: MoveTargetOutOfBoundsException"<br />
** Identical error: https://github.com/mozilla/marketplace-tests/issues/471<br />
** It seems this was fixed by adding New Relic Browser agent v378<br />
** I asked the Reps team and Giorgos responded by saying they use NR, but not the NR Browser Agent- that the agent is JS that runs on the client side only. Also that it's a python error. Does anyone have the Marketplace bug, or the v387 version of NR agent or any helpful info that I can pass on to Giorgos??<br />
* Do we have budget to make buttons for a test event? [rbillings]<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** Development work post-relaunch has slowed to nil<br />
*** thank you to everyone who helped test and get this release out the door<br />
** many test automation tasks that people can involve themselves on -- come join us<br />
*** https://github.com/mozilla/Affiliates-Tests/issues?labels=Community&state=open<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** 1/2 the commtools team is at a workweek in Greece<br />
** meeting notes + web qa presentation [given remotely] http://secretmustache.com/2014/06/commtools-meetup.html<br />
** new geo location feature should land on stage for testing today<br />
** post-manual testing additional automation test coverage tasks will be identified and captured in http://secretmustache.com/2014/06/commtools-meetup.html<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
* Moztrap<br />
** Improved https://github.com/camd/jira2moztrap/pull/3 script coming<br />
*Socorro<br />
** Meeting agenda and notes from this week<br />
*** https://wiki.mozilla.org/Breakpad/Status_Meetings/2014-06-11<br />
*** https://etherpad.mozilla.org/breakpad-2014-06-11<br />
*** https://etherpad.mozilla.org/webeng-q22014<br />
** Q3 goal planning has begun https://etherpad.mozilla.org/webeng-q32014<br />
** due-diligence is being done to move to true Continuous Deployment by the dev team.<br />
*** test automation is scheduled to be reviewed for coverage gaps as well as unneeded as coverage as part of the move to CD<br />
** new hardware has been approved and will be ordered<br />
* PluginCheck<br />
** Team will meet next week to discuss potential test automation solutions. If you are interested in joining please reach out to :espressive in #plugincheck<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* bsilverberg - PTO - June 13 - 20<br />
* AndreiH - PTO - June 12 - 16<br />
* Bebe - PTO - June 19 - 20<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-06-05&diff=986204QA/Execution/Web Testing/Meetings/2014-06-052014-06-05T16:03:27Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: <br />
** Viorela<br />
* Scribe:<br />
** Zac<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
**bob: re-open discussion w/stephend & willkg regarding reviewers + input tests<br />
<br />
= Discussion Items / Updates =<br />
* Viorela/RobertC merge access on mozilla-b2g/gaia<br />
* Create a Flame/UI job for a11y tests? We won't maintain the tests, but we can send the reports to a11y team.<br />
* Given that http://mozilla.github.io/mozwebqa-dashboard is fixed and seems up-to-date, can we try our "xfail" (or whatever we choose to call it) Friday, and reach out to the community? [stephend]<br />
* 1 more Flame? (is 1 Hamachi enough?)<br />
*Bebe: saucelabs selenium version<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** New site is up and live -- lots of movement within the community to help add additional automated test coverage<br />
*** open tasks https://github.com/mozilla/Affiliates-Tests/issues?labels=Community&state=open<br />
** bug verification Spring cleaning has occurred<br />
*Bouncer<br />
** No updates<br />
*Engagement<br />
** Newsletter release<br />
*Firefox Health Report<br />
** No updates<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** bug verification Spring cleaning has occurred<br />
** mapbox integration should land on dev sometime this week and will be ready for community <br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Deploy scripts for stage were broken, this has been remedied<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
09 June all SV teams will be out of the office:<br />
http://en.wikipedia.org/wiki/Whitsuntide<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-06-05&diff=986203QA/Execution/Web Testing/Meetings/2014-06-052014-06-05T16:03:15Z<p>Bebef 1987: /* Action Items / Takeaways from Last Week */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: <br />
** Viorela<br />
* Scribe:<br />
** Zac<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
**bob: re-open discussion w/stephend & willkg regarding reviewers + input tests<br />
<br />
= Discussion Items / Updates =<br />
* Viorela/RobertC merge access on mozilla-b2g/gaia<br />
* Create a Flame/UI job for a11y tests? We won't maintain the tests, but we can send the reports to a11y team.<br />
* Given that http://mozilla.github.io/mozwebqa-dashboard is fixed and seems up-to-date, can we try our "xfail" (or whatever we choose to call it) Friday, and reach out to the community? [stephend]<br />
* 1 more Flame? (is 1 Hamachi enough?)<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** New site is up and live -- lots of movement within the community to help add additional automated test coverage<br />
*** open tasks https://github.com/mozilla/Affiliates-Tests/issues?labels=Community&state=open<br />
** bug verification Spring cleaning has occurred<br />
*Bouncer<br />
** No updates<br />
*Engagement<br />
** Newsletter release<br />
*Firefox Health Report<br />
** No updates<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** bug verification Spring cleaning has occurred<br />
** mapbox integration should land on dev sometime this week and will be ready for community <br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Deploy scripts for stage were broken, this has been remedied<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
09 June all SV teams will be out of the office:<br />
http://en.wikipedia.org/wiki/Whitsuntide<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-06-05&diff=986202QA/Execution/Web Testing/Meetings/2014-06-052014-06-05T16:02:54Z<p>Bebef 1987: /* Action Items / Takeaways from Last Week */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: <br />
** Viorela<br />
* Scribe:<br />
** Zac<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
**bob: re-open discussion w/stephend & willkg regarding reviewers + input tests<br />
**Bebe: saucelabs selenium version<br />
<br />
= Discussion Items / Updates =<br />
* Viorela/RobertC merge access on mozilla-b2g/gaia<br />
* Create a Flame/UI job for a11y tests? We won't maintain the tests, but we can send the reports to a11y team.<br />
* Given that http://mozilla.github.io/mozwebqa-dashboard is fixed and seems up-to-date, can we try our "xfail" (or whatever we choose to call it) Friday, and reach out to the community? [stephend]<br />
* 1 more Flame? (is 1 Hamachi enough?)<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** New site is up and live -- lots of movement within the community to help add additional automated test coverage<br />
*** open tasks https://github.com/mozilla/Affiliates-Tests/issues?labels=Community&state=open<br />
** bug verification Spring cleaning has occurred<br />
*Bouncer<br />
** No updates<br />
*Engagement<br />
** Newsletter release<br />
*Firefox Health Report<br />
** No updates<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** bug verification Spring cleaning has occurred<br />
** mapbox integration should land on dev sometime this week and will be ready for community <br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Deploy scripts for stage were broken, this has been remedied<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
On 09 June all SV teams will be out of the office:<br />
http://en.wikipedia.org/wiki/Whitsuntide<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-06-05&diff=986201QA/Execution/Web Testing/Meetings/2014-06-052014-06-05T16:01:43Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: <br />
** Viorela<br />
* Scribe:<br />
** Zac<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
**bob: re-open discussion w/stephend & willkg regarding reviewers + input tests<br />
<br />
<br />
= Discussion Items / Updates =<br />
* Viorela/RobertC merge access on mozilla-b2g/gaia<br />
* Create a Flame/UI job for a11y tests? We won't maintain the tests, but we can send the reports to a11y team.<br />
* Given that http://mozilla.github.io/mozwebqa-dashboard is fixed and seems up-to-date, can we try our "xfail" (or whatever we choose to call it) Friday, and reach out to the community? [stephend]<br />
* 1 more Flame? (is 1 Hamachi enough?)<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** New site is up and live -- lots of movement within the community to help add additional automated test coverage<br />
*** open tasks https://github.com/mozilla/Affiliates-Tests/issues?labels=Community&state=open<br />
** bug verification Spring cleaning has occurred<br />
*Bouncer<br />
** No updates<br />
*Engagement<br />
** Newsletter release<br />
*Firefox Health Report<br />
** No updates<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** bug verification Spring cleaning has occurred<br />
** mapbox integration should land on dev sometime this week and will be ready for community <br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Deploy scripts for stage were broken, this has been remedied<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
09 June all SV teams will be out of the office:<br />
http://en.wikipedia.org/wiki/Whitsuntide<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-05-22&diff=981647QA/Execution/Web Testing/Meetings/2014-05-222014-05-22T13:25:17Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Stephen Donner<br />
* Scribe:<br />
** Andrei Hutusaurus<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** Zac to email qa-staff, see who is using ActiveSync accounts. Re-enable tests if possible.<br />
<br />
= Discussion Items / Updates =<br />
<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
* [davehunt] iPython and iPython Notebook<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
<br />
*Engagement<br />
<br />
*Firefox Health Report<br />
<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
<br />
* Plugin Check:<br />
<br />
* Socorro <br />
*SUMO<br />
<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* Bebe PTO 23.05<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
* Action Items:<br />
<br />
* Next owner / scribe: <br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-05-15&diff=977363QA/Execution/Web Testing/Meetings/2014-05-152014-05-15T16:03:16Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Matt <br />
* Scribe:<br />
** Zac<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** Future work - Build out a new Jenkins instance just for b2g testing (stephend, retornam)<br />
* Action Items:<br />
** Raymond and Stephen to work with jabba to get ldap auth back on Jenkins<br />
** Dave - Lightning talk about iPython and iPython Notebook<br />
** Raymond - add a Flame to the current Jenkins<br />
*** Switch out one Hamachi for the Flame<br />
*** Zac to set up the job(s) for the Flame<br />
** bitgeeky to move forward with oneanddone-tests using the "old way"<br />
*** tests in their own repo and using the API for data population<br />
<br />
= Discussion Items / Updates =<br />
* [zac] Krupa, Marketplace dev is not on master/v1.4, blocks smoketest https://bugzilla.mozilla.org/show_bug.cgi?id=1011016<br />
* [zac] Can we get ActiveSync accounts for all nodes (each node to have a unique account). Jsmith requested we try harder because without it we lose smoketest coverage.<br />
* [zac] Flame seems to be holding up well but still no carrier functionality so no serious results.<br />
* [stephend] [https://bugzilla.mozilla.org/show_bug.cgi?id=1008413 Bouncer/its tests] finally seem stable<br />
* [stephend] I'll be getting Flames to the rest of the team + Softvision (need to double-check that I have enough for all)<br />
* [stephend] Lab wi-fi update: I'll switch us to "ateam" node which is a) faster b) more supported c) stability tweaks have been made to it, and I've tested locally<br />
* [bebe] I sent a proposal about bug priority to gaia-ui-tests mail list please send us feedback<br />
<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
* [davehunt] iPython and iPython Notebook<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** launch of redesign has been bumped to 2014-05-21<br />
** refactoring of test automation has begun in earnest https://github.com/mozilla/Affiliates-Tests<br />
*Bouncer<br />
<br />
*Engagement<br />
<br />
*Firefox Health Report<br />
<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** Weekly status - https://mozillians.etherpad.mozilla.org/status-2014-05-15<br />
*Mozilla.com<br />
*MDN<br />
<br />
* Plugin Check:<br />
<br />
* Socorro <br />
** No updates<br />
*SUMO<br />
<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* Julian - PTO: May 13 - 19<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
* Action Items:<br />
<br />
* Next owner / scribe: <br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-05-08&diff=974898QA/Execution/Web Testing/Meetings/2014-05-082014-05-08T16:04:23Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Stephen <br />
* Scribe:<br />
** Matt<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
<br />
* Action Items:<br />
** {{done|}}Dave - organise meeting between mbrandt, dave, psiinon about ZAP job on Jenkins<br />
** Stephen - Follow up with Krupa about keeping Mac job for AMO<br />
** Dave - Lightning talk about iPython and iPython Notebook<br />
<br />
= Discussion Items / Updates =<br />
* The Intro to Web Testing day went well! [rbillings]<br />
* Jenkins user accounts + flame devices<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
* [davehunt] iPython and iPython Notebook<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
<br />
*Bouncer<br />
<br />
*Engagement<br />
<br />
*Firefox Health Report<br />
<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
<br />
*Mozilla.com<br />
*MDN<br />
<br />
* Plugin Check:<br />
<br />
* Socorro <br />
<br />
*SUMO<br />
<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* Julian - PTO: May 13 - 19<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-05-01&diff=971449QA/Execution/Web Testing/Meetings/2014-05-012014-04-30T15:18:18Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** bsilverberg<br />
* Scribe:<br />
** zac<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
**Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
***Just emailed; awaiting a response :-)<br />
**[carryover] Bebe - Update nodes to use selenium 2.41<br />
<br />
* Action Items:<br />
** retornam: first check FM signal manually on hamachi, and then string the earbud pieces outside the metal racks<br />
** rbillings: set up meeting with bob & liz- coordinate upcoming intern work<br />
** rbillings: hit up dev-quality & mozwebqa w/Intro test day event info<br />
<br />
= Discussion Items / Updates =<br />
* [davehunt] ZAP's back!<br />
** Should we start running it on a pilot project?<br />
** We'd need to inform the dev team (and IT?) before running any active scans<br />
** We'd need it to be led by a member of the Web QA team, ideally the selected project lead<br />
** Is Simon available to assist the lead regarding the results?<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* rbillings has meeting conflict & will be late<br />
* 1st May all SV teams are off<br />
** http://en.wikipedia.org/wiki/International_Workers%27_Day <br />
* Bebe - PTO 2 May<br />
* Viorela - PTO 2 May<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968416QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:55:40Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: Stephen<br />
** .<br />
* Scribe: Rebecca<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** No more support for v1.3, only marketplace tests should run on v1.3<br />
* Action Items:<br />
** Stephen - create QMO credentials for Julian<br />
** Stephen - talk to Tony regarding the gaia reports for 21 April when the SV team will be PTO<br />
** Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
** Bebe - Update nodes to use selenium 2.41 <br />
*** [carryover] - we still don't have access to the nodes<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
** Glow testing<br />
** Newsletter bug research<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
* Intro to Web testing [https://quality.mozilla.org/2014/04/intro-to-web-testing/ next Friday]<br />
<br />
= Time off / Out-of-office =<br />
* 1st May all SV teams are off<br />
** http://en.wikipedia.org/wiki/International_Workers%27_Day<br />
* Bebe 2 May - PTO<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968415QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:55:09Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: Stephen<br />
** .<br />
* Scribe: Rebecca<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** No more support for v1.3, only marketplace tests should run on v1.3<br />
* Action Items:<br />
** Stephen - create QMO credentials for Julian<br />
** Stephen - talk to Tony regarding the gaia reports for 21 April when the SV team will be PTO<br />
** Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
** Bebe - Update nodes to use selenium 2.41 <br />
*** [carryover] - we still don't have access to the nodes<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
** Glow testing<br />
** Newsletter bug research<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
* Intro to Web testing [https://quality.mozilla.org/2014/04/intro-to-web-testing/ next Friday]<br />
<br />
= Time off / Out-of-office =<br />
* 1st May - http://en.wikipedia.org/wiki/International_Workers%27_Day<br />
* Bebe 2 May - PTO<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968414QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:54:59Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: Stephen<br />
** .<br />
* Scribe: Rebecca<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** No more support for v1.3, only marketplace tests should run on v1.3<br />
* Action Items:<br />
** Stephen - create QMO credentials for Julian<br />
** Stephen - talk to Tony regarding the gaia reports for 21 April when the SV team will be PTO<br />
** Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
** Bebe - Update nodes to use selenium 2.41 <br />
*** [carryover] - we still don't have access to the nodes<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
** Glow testing<br />
** Newsletter bug research<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
* Intro to Web testing [https://quality.mozilla.org/2014/04/intro-to-web-testing/ next Friday]<br />
<br />
= Time off / Out-of-office =<br />
** 1st May - http://en.wikipedia.org/wiki/International_Workers%27_Day<br />
** Bebe 2 May - PTO<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968409QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:31:19Z<p>Bebef 1987: /* Action Items / Takeaways from Last Week */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: Stephen<br />
** .<br />
* Scribe: Rebecca<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** No more support for v1.3, only marketplace tests should run on v1.3<br />
* Action Items:<br />
** Stephen - create QMO credentials for Julian<br />
** Stephen - talk to Tony regarding the gaia reports for 21 April when the SV team will be PTO<br />
** Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
** Bebe - Update nodes to use selenium 2.41 <br />
*** [carryover] - we still don't have access to the nodes<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968408QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:27:37Z<p>Bebef 1987: /* Lead / Scribe */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead: Stephen<br />
** .<br />
* Scribe: Rebecca<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** No more support for v1.3, only marketplace tests should run on v1.3<br />
* Action Items:<br />
** Stephen - create QMO credentials for Julian<br />
** Stephen - talk to Tony regarding the gaia reports for 21 April when the SV team will be PTO<br />
** Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
** Bebe - Update nodes to use selenium 2.41<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968407QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:27:19Z<p>Bebef 1987: /* Action Items / Takeaways from Last Week */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** .<br />
* Scribe:<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** No more support for v1.3, only marketplace tests should run on v1.3<br />
* Action Items:<br />
** Stephen - create QMO credentials for Julian<br />
** Stephen - talk to Tony regarding the gaia reports for 21 April when the SV team will be PTO<br />
** Stephen - see if Tony is available to talk to the teams regarding the future of gaia branches<br />
** Bebe - Update nodes to use selenium 2.41<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-24&diff=968406QA/Execution/Web Testing/Meetings/2014-04-242014-04-24T14:26:33Z<p>Bebef 1987: Created page with "<small>Web QA Home</small> = Meeting Details = {{:QA/Execution/Web Testing/Meetings}} = Lead / Scribe = * Lead: ** . * Scribe: ** . ** Scribe i..."</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** .<br />
* Scribe:<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-10&diff=963092QA/Execution/Web Testing/Meetings/2014-04-102014-04-10T16:03:40Z<p>Bebef 1987: /* Time off / Out-of-office */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Julian<br />
* Scribe:<br />
** Stephen<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** Stephen to look into finding how to get UX vetting of dev changes in Gaia (to reduce churn)<br />
*** Done; recommendation is to need-info? :jsmith, and/or ping him in IRC<br />
** Krupa to follow up with Raymond on why http://mozilla.github.io/mozwebqa-dashboard/#/marketplace isn't updating<br />
** Bob to start on the Marketplace Payments automation next week<br />
*** Started: Three tests complete (search for paid app, create/confirm PIN, purchase app). The first two are already in production; the third is still in review.<br />
** Disable the 1.4 jobs (and message that out) - or find resources for it<br />
*** New decision not to disable these and to continue running and reporting on them.<br />
** rbillings to send One and Done blurb for next Reps newsletter [done]<br />
** rbillings to contact Ben Sullins re: adding new MozTrap metrics to the dashboard [done]<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q2 Goals Check-in] =<br />
<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Bouncer<br />
*Engagement<br />
*Firefox Health Report<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* All SV team members will be out on 21 April and 1 May<br />
** http://en.wikipedia.org/wiki/Easter<br />
** http://en.wikipedia.org/wiki/International_Workers%27_Day<br />
* bsilverberg - PyCon April 10 - 16<br />
* Viorela - PTO: April 17 - 18<br />
* Andrei - PTO: April 22 - 25<br />
* Robert - PTO: April 22 - 23<br />
* Bebe - PTO: May 2<br />
* Madalin - PTO: April 22 - 25<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-04-03&diff=960802QA/Execution/Web Testing/Meetings/2014-04-032014-04-03T16:00:23Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** bsilverberg<br />
* Scribe:<br />
** stephend<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** [bebe] gave feedback after the Testing camp Event in Iasi<br />
*** https://reps.mozilla.org/e/firefox-os-automation-testing-tdt-iasi/<br />
*** http://www.meetup.com/Tabara-de-Testare-Iasi/events/162803322/<br />
** We have wicked-cool team badges; if you have suggestions for changes, ping rbillings.<br />
** A big thank you to Ivana - http://imgur.com/AuZYKB0<br />
** Webmaker project - if you have time for community driven work the Webmaker project code reviews<br />
** Elastic Search Cluster<br />
** Grid Cluster - Raymond brought one of the machines back online; Sauce Labs is still the best fallback<br />
<br />
* Action Items:<br />
** Jenkins CI and Grid<br />
*** [retornam] continue working on getting grid online in MV<br />
*** [everyone] watch for incoming pull requests that update configs/etc for our grid and help get them quickly merged<br />
** [everyone] send stephend links to where each project tracks the work that is being accomplished and what is in the pipeline for the future, e.g. a project's kanban board<br />
** [everyone] The Reps project is a good avenue for highlighting projects that students can get involved in. Ping rbillings with project ideas and/or your projects if you would like them included in upcoming newsletters.<br />
** [everyone] Continue to add suggestions to https://etherpad.mozilla.org/QA-Q2Goals-Brainstorm<br />
<br />
= Discussion Items / Updates =<br />
* Check out our awesome new badges! https://badges.mozilla.org/en-US/badges/badge/Web-QA-Enthusiast<br />
** Retweet Ivana's post: https://twitter.com/IvanaCatovic<br />
* [mbrandt] Should we put in for a Web QA Discourse instance? https://discourse.mozilla-community.org/categories<br />
* New priority on 1.3 <br />
** Will the grid support the new load (2 more test runs at the same hour)<br />
** 1.4 priority and report<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** redesign is still in progress; not much there, yet:<br />
*** http://affiliates-dev.allizom.org/<br />
** near-final mockups are in https://bugzilla.mozilla.org/show_bug.cgi?id=981070<br />
** Last week's status meeting notes: https://affiliates.etherpad.mozilla.org/status-2014-03-28<br />
** A series of test events and refactoring of the current automated tests are scheduled for late this month (or early next month)<br />
*** current schedule - potentially optimistic - https://app.smartsheet.com/b/publish?EQBCT=85fca9767833481ea75885a7de7a8d39<br />
*** Waiting for the redesigned site to land on dev for testing<br />
*Bouncer<br />
** https://bugzilla.mozilla.org/show_bug.cgi?id=916181 landed - Update firefox-latest, firefox-stub, firefox-latest-euballot bouncer aliases as a part of post-release builder (and same for beta)<br />
*** all seems well (Mozilla.org and automation are happy)<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** Last week's status meeting notes: https://mozillians.etherpad.mozilla.org/status-2014-03-27<br />
** The team is working on creating solutions for:<br />
*** Signup refactor proposed bugs: https://mozillians.etherpad.mozilla.org/signup-change-bugs-2014-03-12<br />
*** API authorization proposed bugs: https://mozillians.etherpad.mozilla.org/api-change-bugs-2014-03-17<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Continuous deployment.<br />
** Team is finishing off Q1 goals<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Webmaker<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
* There is a Community Champions work week 4/7-9/14 in SF<br />
<br />
= Time off / Out-of-office =<br />
* rbillings @ Community Champions work week Monday-Wednesday next week<br />
* bsilverberg - April 10 - 16 - PyCon in Montreal<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-03-20&diff=954055QA/Execution/Web Testing/Meetings/2014-03-202014-03-20T15:23:07Z<p>Bebef 1987: /* Action Items / Takeaways from Last Week */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** .<br />
* Scribe:<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** rbillings to create an xFail etherpad for tracking, and will send info out to mailing list [done] https://etherpad.mozilla.org/webqa-xfails<br />
** Retornam to send an email on Monday morning about the Work Week room schedule<br />
** Bebe to give feedback after the Testing camp Event in Iasi<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-03-20&diff=954054QA/Execution/Web Testing/Meetings/2014-03-202014-03-20T15:22:17Z<p>Bebef 1987: Created page with "<small>Web QA Home</small> = Meeting Details = {{:QA/Execution/Web Testing/Meetings}} = Lead / Scribe = * Lead: ** . * Scribe: ** . ** Scribe i..."</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** .<br />
* Scribe:<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-03-13&diff=950048QA/Execution/Web Testing/Meetings/2014-03-132014-03-13T16:30:41Z<p>Bebef 1987: /* Takeaways and Action Items */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Viorela<br />
* Scribe:<br />
** Bebe<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways: none<br />
* Action Items:<br />
** Stephen to get Softvision accounts for One and Done [carryover]<br />
*** rbillings to look into this. UPDATE: rbillings can create accounts, just send an email to get one as was noted in the email to the mailing list, or log in with your Persona account. [done]<br />
** rbillings to create an xFail etherpad for tracking, and will send info out to mailing list<br />
** rbillings will send info to retornam about the Takaro build note [done]<br />
** retornam will update the grid<br />
** mbrandt will write a blog post<br />
<br />
= Discussion Items / Updates =<br />
* Team meet Madalin Coteiu - he will join Victor and Iulian, working with Krupa testing AMO and Marketplace<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** Waiting for first iteration of site redesign to land on dev for manual testing + rejiggering of automation to match new workflows<br />
*Engagement<br />
*Firefox Health Report<br />
** Android l10n released yesterday - https://bugzilla.mozilla.org/show_bug.cgi?id=982366<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** Weekly status meeting notes - https://mozillians.etherpad.mozilla.org/status-2014-03-13<br />
** Work will begin shortly on the new vouching workflow - bugs are being created now<br />
*** Blog post http://dailycavalier.com/2014/02/plans-to-improve-the-vouching-process-for-mozillians-org/<br />
*** Proposal<br />
**** https://wiki.mozilla.org/Mozillians/Vouching<br />
**** https://mozillians.etherpad.mozilla.org/signup-change-bugs-2014-03-12<br />
** Researching adding a geolocation field to profiles https://bugzilla.mozilla.org/show_bug.cgi?id=920651<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Continuous deployment, no updates<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
** rbillings to create an xFail etherpad for tracking, and will send info out to mailing list <br />
** Retornam to send an email on Monday morning about the Work Week room schedule<br />
** Bebe to give feedback after the Testing camp Event in Iasi<br />
* Next owner / scribe:Andrei/Matt<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-03-13&diff=950031QA/Execution/Web Testing/Meetings/2014-03-132014-03-13T16:03:00Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Viorela<br />
* Scribe:<br />
** Bebe<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways: none<br />
* Action Items:<br />
** Stephen to get Softvision accounts for One and Done [carryover]<br />
*** rbillings to look into this. UPDATE: rbillings can create accounts, just send an email to get one as was noted in the email to the mailing list, or log in with your Persona account. [done]<br />
** rbillings to create an xFail etherpad for tracking, and will send info out to mailing list<br />
** rbillings will send info to retornam about the Takaro build note [done]<br />
** retornam will update the grid<br />
** mbrandt will write a blog post<br />
<br />
= Discussion Items / Updates =<br />
* Team meet Madalin Coteiu - he will join Victor and Iulian, working with Krupa testing AMO and Marketplace<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** Waiting for first iteration of site redesign to land on dev for manual testing + rejiggering of automation to match new workflows<br />
*Engagement<br />
*Firefox Health Report<br />
** Android l10n released yesterday - https://bugzilla.mozilla.org/show_bug.cgi?id=982366<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** Weekly status meeting notes - https://mozillians.etherpad.mozilla.org/status-2014-03-13<br />
** Work will begin shortly on the new vouching workflow - bugs are being created now<br />
*** Blog post http://dailycavalier.com/2014/02/plans-to-improve-the-vouching-process-for-mozillians-org/<br />
*** Proposal<br />
**** https://wiki.mozilla.org/Mozillians/Vouching<br />
**** https://mozillians.etherpad.mozilla.org/signup-change-bugs-2014-03-12<br />
** Researching adding a geolocation field to profiles https://bugzilla.mozilla.org/show_bug.cgi?id=920651<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Continuous deployment, no updates<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-03-13&diff=950024QA/Execution/Web Testing/Meetings/2014-03-132014-03-13T15:44:37Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** Viorela<br />
* Scribe:<br />
** Bebe<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways: none<br />
* Action Items:<br />
** Stephen to get Softvision accounts for One and Done [carryover]<br />
*** rbillings to look into this. UPDATE: rbillings can create accounts, just send an email to get one as was noted in the email to the mailing list, or log in with your Persona account. [done]<br />
** rbillings to create an xFail etherpad for tracking, and will send info out to mailing list<br />
** rbillings will send info to retornam about the Takaro build note [done]<br />
** retornam will update the grid<br />
** mbrandt will write a blog post<br />
<br />
= Discussion Items / Updates =<br />
* Team meet Madalin Coteiu - he will join Victor and Iulian, working with Krupa testing AMO and Marketplace<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** Waiting for first iteration of site redesign to land on dev for manual testing + rejiggering of automation to match new workflows<br />
*Engagement<br />
*Firefox Health Report<br />
** Android l10n released yesterday - https://bugzilla.mozilla.org/show_bug.cgi?id=982366<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** Weekly status meeting notes - https://mozillians.etherpad.mozilla.org/status-2014-03-13<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Continuous deployment, no updates<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-02-27&diff=941653QA/Execution/Web Testing/Meetings/2014-02-272014-02-27T10:17:02Z<p>Bebef 1987: Created page with "<small>Web QA Home</small> = Meeting Details = {{:QA/Execution/Web Testing/Meetings}} = Lead / Scribe = * Lead: ** . * Scribe: ** . ** Scribe i..."</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= Lead / Scribe =<br />
* Lead:<br />
** .<br />
* Scribe:<br />
** .<br />
** Scribe is responsible for:<br />
*** adding the action items and takeaways to the current meeting's minutes<br />
*** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
*** sending out a notice of the availability of that agenda along with the meeting invite<br />
*** completing all of the above by the end of the week during which the meeting occurred<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-02-13&diff=926152QA/Execution/Web Testing/Meetings/2014-02-132014-02-13T16:56:35Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div>= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
* stephend / AndreiH<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
** Think about ways to start measuring and reporting on the goals for the team<br />
* Action Items:<br />
** Bob to consider giving an overview/demo of the new mozwebqa dashboard<br />
*** [bsilverberg] My apologies but I'm not sure what the action here is. I believe I demoed the Marketplace Tests dashboard at a previous meeting, but would be happy to show the issues dashboard. Is the intended audience of this a group other than Web QA?<br />
<br />
= Discussion Items / Updates =<br />
* Let's discuss and then make team decisions to end the confusion around:<br />
** Whose job it is to send next week's meeting invite, and by when<br />
** Who populates/carries over the previous week's action items and takeaways<br />
*** [bsilverberg] A suggestion: The scribe is responsible for:<br />
**** adding the action items and takeaways to the current meeting's minutes<br />
**** creating the agenda for the next meeting and carrying the action items and takeaways over to it<br />
**** sending out a notice of the availability of that agenda along with the meeting invite<br />
**** completing all of the above by the end of the week during which the meeting occurred<br />
* I'd like -- where possible -- Travis jobs for our automation<br />
** see https://travis-ci.org/mozilla/remo-tests and https://travis-ci.org/mozilla/mdn-tests for examples<br />
** [bsilverberg] +1<br />
* We started tracking our work on WebQA in:<br />
** https://docs.google.com/spreadsheet/ccc?key=0AgbJACdAek5ndFg0WjVvZWZMMGktd0lLLWkxZU5Ec0E&usp=drive_web#gid=4<br />
** What do you think?<br />
<br />
<br />
* Tutorial for getting started with Firefox OS is almost complete:<br />
** https://docs.google.com/document/d/1FFgITAa6trKkCypkvu81Dg-E0EEp8OdzmgFRk0yFdAk/edit<br />
** Please read and give feedback (annotate in the doc)<br />
** If you want to actually *do* the tutorial, that would be even better; it should take 1-2 hours.<br />
** It is aimed at being a hands-off way to give new contributors skills in test automation and Firefox OS<br />
** Plan to move it to MDN, where we can translate it and also track page metrics through each section of the test (i.e. how far the person gets)<br />
** Want to have it uploaded next week<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q1 Goals Check-in] =<br />
<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
*<br />
= Lightning talk / Show 'n Tell =<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** Team is scoping feature work/redesign goals<br />
*Engagement<br />
** Newsletter testing released update<br />
*Firefox Health Report<br />
** A potentially large release is scheduled for the coming week(s); the project will likely reach out to the team/community for help with testing<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** This week's meeting notes - https://mozillians.etherpad.mozilla.org/status-2014-02-13<br />
** Successfully released the new groups feature, curated groups<br />
*** https://bugzilla.mozilla.org/show_bug.cgi?id=936569<br />
** Big thank you to :justinpotts for working on automation coverage for the groups features<br />
** Scoping work for geolocation - https://mozillians.etherpad.mozilla.org/true-geolocation-bugs-2014-02-12<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Continuous deployment<br />
** Stability team is hosting their workweek March 24 - 28<br />
*** QA will try to have a presence at this event -- need to redefine risk areas, coverage needs, and quality expectations<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
= Community Update =<br />
* Created a blog post trying to fill the gap between getting automation running & making a first pull request: [https://quality.mozilla.org/2014/02/make-the-leap/ https://quality.mozilla.org/2014/02/make-the-leap/]<br />
<br />
= Time off / Out-of-office =<br />
* [mbrandt] Java Posse Roundup February 24 - 28: http://www.mindviewinc.com/Conferences/JavaPosseRoundup/<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-02-06&diff=918748QA/Execution/Web Testing/Meetings/2014-02-062014-02-06T15:04:31Z<p>Bebef 1987: /* User / Scribe */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
Bebe / Rebecca<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** [Stephen] Share with the team a document of the goal ideas for the team to comment on/add suggestions<br />
*** Including jgmize <br />
** Bob to take a look at having all the git issues in one place<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-02-06&diff=918744QA/Execution/Web Testing/Meetings/2014-02-062014-02-06T15:01:29Z<p>Bebef 1987: /* Action Items / Takeaways from Last Week */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** [Stephen] Share with the team a document of the goal ideas for the team to comment on/add suggestions<br />
*** Including jgmize <br />
** Bob to take a look at having all the git issues in one place<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-02-06&diff=918743QA/Execution/Web Testing/Meetings/2014-02-062014-02-06T15:00:57Z<p>Bebef 1987: Created page with "<small>Web QA Home</small> = Meeting Details = {{:QA/Execution/Web Testing/Meetings}} = User / Scribe = = Action Items / Takeaways from Last Wee..."</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
<br />
= Discussion Items / Updates =<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-01-30&diff=911089QA/Execution/Web Testing/Meetings/2014-01-302014-01-30T18:09:20Z<p>Bebef 1987: /* Takeaways and Action Items */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
* Stephen / Bebe<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
* Action Items:<br />
** [stephend] setup separate meeting with the team to discuss goals and how to deliver on them<br />
*** welcome to the hijacked meeting :-)<br />
** [stephend] send a webqa zimbra-vite to jgmize from webprod<br />
*** done!<br />
** [stephend][bsilverberg][bitgeeky] to work on building a GSOC’14 project proposal<br />
*** have reached out to bitgeeky + bob -- work in progress<br />
** [mbrandt] work with Ashley Wilson to put together a Meetup event during our UnWorkWeek<br />
<br />
= Discussion Items / Updates =<br />
* Q1 Goals we're responsible for:<br />
** https://wiki.mozilla.org/QA/Goals/2014q1<br />
*** Questions to ask & answer:<br />
**** What can we reasonably measure, across projects, that's valuable to us + upper management?<br />
* Update on One and Done? [mbrandt][bebe]<br />
** Mozillians + Affiliates would like to use it to track high-level automation needs. Example - advocate for good locators while the UX redesign is taking place.<br />
* [mbrandt] Mozillians + Affiliates - Have asked team RO to work on the more difficult automation tasks and leave the /good starter/ tasks for community.<br />
** Would the label "difficulty beginner" work?<br />
* Are we using this to prioritize automation work? If not, why not? (We should be.)<br />
** http://bobsilverberg.github.io/mozwebqa-dashboard/#/xfails<br />
<br />
= [https://wiki.mozilla.org/QA/Execution/Web_Testing/Goals/2013/Q4 Goals Check-in] =<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/testday-20140207<br />
** Stephen scheduling this today, for February 7th<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** Show notes from the last meeting - Affiliates 2.0 redesign wireframes were discussed - https://affiliates.etherpad.mozilla.org/status-2014-01-24<br />
*Bouncer<br />
** No updates<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** Curated Groups set to launch next week<br />
** Seeking feedback from the community on the https://mozillians.etherpad.mozilla.org/signup-change-proposal<br />
** Hosted a targeted testing event for the new Curate Groups feature - 15 people attended - https://mozillians.etherpad.mozilla.org/curated-groups-ux-qa<br />
** Team RO - can you take a stab at https://github.com/mozilla/mozillians-tests/issues/108<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** Milestone 72 released yesterday - https://bugzilla.mozilla.org/show_bug.cgi?id=960656<br />
*** Major refactoring of production configs shipped, getting us further down the road to getting the majority of configs open sourced.<br />
*** [stage]Bug 965244 - release 72 has broken the cron submitter - https://bugzilla.mozilla.org/show_bug.cgi?id=965244<br />
** Moving to a daily release window of 10am - we will ship daily if we have something worthy<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
* 1/31 mbrandt Birthday pto<br />
* 2/5 - 2/6 bsilverberg TRIBE in Toronto<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
** [Stephen] Share with the team a document of the goal ideas for the team to comment on/add suggestions<br />
*** Including jgmize <br />
** Bob to take a look at having all the git issues in one place<br />
* Next owner / scribe:<br />
Bebe/ Bob<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-01-23&diff=903870QA/Execution/Web Testing/Meetings/2014-01-232014-01-23T17:12:00Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
Bob / Matt<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
**Add goals to the etherpad [1](https://etherpad.mozilla.org/webqa-Q1-2014-goals) - everyone [carry over]<br />
***These are largely supplanted by https://wiki.mozilla.org/QA/Goals/2014q1<br />
**Stephen schedule Vidyo "TestDay" to respond to community questions [carry over]<br />
**Dave to ask Naoki about putting the flash scripts into the builds [carry over]<br />
* Action Items:<br />
**Stephen to inform the whole team how to get issues into the Oneanddone project [carry over]<br />
*** Send them to kthiessen@mozilla.com directly, please (cc'ing Rebecca and me is fine, too)<br />
<br />
= Discussion Items / Updates =<br />
* Guys, please meet Robert<br />
* Volunteer to help lead/co-lead a "Testday" or Meetup, during our don't-call-it-a-work-week "team meetup," March 17th - 21st?<br />
** Also, a badge for participating in said event? :-)<br />
* [bitgeeky] Discuss potential projects for GSOC'14 (projects which are important to the team and can be outsourced to the community)<br />
* [davehunt] {{bug|879192}} - App update notifications are finally disabled in gaiatest thanks to gerard-majax. We should be able to remove a bunch of hacks! :)<br />
* [davehunt] {{bug|942840}} - Beware of background Phone launches if you have the wrong headset plugged in. Thanks Stephen for fixing the device pool.<br />
* [davehunt] {{bug|959217}} - It looks like switching to the Marionette Wait class will save us around 30 minutes on device testruns.<br />
* [mbrandt] Mozillians "Dazzling & New!" curated groups feature has landed on stage and is ready for hungry exploratory testers - https://mozillians.etherpad.mozilla.org/curated-groups-ux-qa<br />
* [mbrandt] mkelly wanted to remind us that Josh Mize from Webprod is the QA champion - https://wiki.mozilla.org/Webdev/Web_Production/Champions<br />
* [Bebe] 1 On 1 meetings https://etherpad.mozilla.org/WeQA-1-on-1<br />
* [Bebe] Selenium grid issues<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q1 Goals Check-in] =<br />
* These are/will be real, but not-yet set in stone<br />
** Need the team's help to vet and deliver, where we can, and reset expectations, where we can't<br />
<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
* Marketplace-tests dashboard - http://bobsilverberg.github.io/mozwebqa-dashboard/marketplace_dashboard.html [bsilverberg]<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** QA kick-off meeting for the Affiliates redesign took place 2014-01-21 - https://affiliates.etherpad.mozilla.org/qa-kickoff-2014-01-21<br />
** Prioritized features for this quarter- https://bugzilla.mozilla.org/buglist.cgi?list_id=9257820&classification=Other&query_format=advanced&component=affiliates.mozilla.org&product=Firefox%20Affiliates&target_milestone=5<br />
** This would be an excellent project for community members to collaborate on.<br />
*Bouncer<br />
** No planned release this week<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** The first iteration of the new Curated Groups feature has landed on stage (mozillians.allizom.org).<br />
*** https://bugzilla.mozilla.org/show_bug.cgi?id=936569 - [tracker] Allow users to create/manage groups ("Curated Groups")<br />
** Mozillians "Dazzling & New!" curated groups feature has landed on stage and is ready for hungry exploratory testers - https://mozillians.etherpad.mozilla.org/curated-groups-ux-qa<br />
** Proposal for refactoring mozillians signup - https://mozillians.etherpad.mozilla.org/signup-change-proposal<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** No release this week. Most of the team is in Canada at a conference<br />
** Milestone 72 slated to be released next week - https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&target_milestone=72&product=Socorro&list_id=9257790<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
* Raymond's installed "YourKit Java Profiler" atop the Grid Hub; monitoring for memory usage<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
*AndreiH - next week 2 days off (30.01 and 31.01)<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987https://wiki.mozilla.org/index.php?title=QA/Execution/Web_Testing/Meetings/2014-01-23&diff=903842QA/Execution/Web Testing/Meetings/2014-01-232014-01-23T16:49:11Z<p>Bebef 1987: /* Discussion Items / Updates */</p>
<hr />
<div><small>[[QA/Execution/Web_Testing|Web QA Home]]</small> <br />
<br />
= Meeting Details =<br />
{{:QA/Execution/Web Testing/Meetings}}<br />
= User / Scribe =<br />
Bob / Matt<br />
<br />
= Action Items / Takeaways from Last Week =<br />
* Takeaways:<br />
**Add goals to the etherpad [1](https://etherpad.mozilla.org/webqa-Q1-2014-goals) - everyone [carry over]<br />
***These are largely supplanted by https://wiki.mozilla.org/QA/Goals/2014q1<br />
**Stephen schedule Vidyo "TestDay" to respond to community questions [carry over]<br />
**Dave to ask Naoki about putting the flash scripts into the builds [carry over]<br />
* Action Items:<br />
**Stephen to inform the whole team how to get issues into the Oneanddone project [carry over]<br />
*** Send them to kthiessen@mozilla.com directly, please (cc'ing Rebecca and me is fine, too)<br />
<br />
= Discussion Items / Updates =<br />
* Guys, please meet Robert<br />
* Volunteer to help lead/co-lead a "Testday" or Meetup, during our don't-call-it-a-work-week "team meetup," March 17th - 21st?<br />
** Also, a badge for participating in said event? :-)<br />
* Discuss potential projects for GSOC'14 (projects which are important to the team and can be outsourced to the community)<br />
* [davehunt] {{bug|879192}} - App update notifications are finally disabled in gaiatest thanks to gerard-majax. We should be able to remove a bunch of hacks! :)<br />
* [davehunt] {{bug|942840}} - Beware of background Phone launches if you have the wrong headset plugged in. Thanks Stephen for fixing the device pool.<br />
* [davehunt] {{bug|959217}} - It looks like switching to the Marionette Wait class will save us around 30 minutes on device testruns.<br />
* [mbrandt] Mozillians "Dazzling & New!" curated groups feature has landed on stage and is ready for hungry exploratory testers - https://mozillians.etherpad.mozilla.org/curated-groups-ux-qa<br />
* [mbrandt] mkelly wanted to remind us that Josh Mize from Webprod is the QA champion - https://wiki.mozilla.org/Webdev/Web_Production/Champions<br />
<br />
= [https://wiki.mozilla.org/QA/Goals/2014q1 Goals Check-in] =<br />
* These are/will be real, but not-yet set in stone<br />
** Need the team's help to vet and deliver, where we can, and reset expectations, where we can't<br />
<br />
= Testday Ideas =<br />
* https://etherpad.mozilla.org/webqa-testdays<br />
<br />
= Blog Ideas =<br />
If you have any ideas for blog posts, share them here.<br />
* <br />
<br />
= Lightning talk / Show 'n Tell =<br />
<br />
= Project Status / goals for next week (keep it brief) =<br />
*Affiliates<br />
** QA kick-off meeting for the Affiliates redesign took place 2014-01-21 - https://affiliates.etherpad.mozilla.org/qa-kickoff-2014-01-21<br />
** Prioritized features for this quarter- https://bugzilla.mozilla.org/buglist.cgi?list_id=9257820&classification=Other&query_format=advanced&component=affiliates.mozilla.org&product=Firefox%20Affiliates&target_milestone=5<br />
** This would be an excellent project for community members to collaborate on.<br />
*Bouncer<br />
** No planned release this week<br />
*Engagement<br />
*Firefox OS<br />
*Marketplace / AMO<br />
*Mozillians<br />
** The first iteration of the new Curated Groups feature has landed on stage (mozillians.allizom.org).<br />
*** https://bugzilla.mozilla.org/show_bug.cgi?id=936569 - [tracker] Allow users to create/manage groups ("Curated Groups")<br />
** Mozillians "Dazzling & New!" curated groups feature has landed on stage and is ready for hungry exploratory testers - https://mozillians.etherpad.mozilla.org/curated-groups-ux-qa<br />
** Proposal for refactoring mozillians signup - https://mozillians.etherpad.mozilla.org/signup-change-proposal<br />
*Mozilla.com<br />
*MDN<br />
** Continuous deployment, no updates<br />
*Socorro<br />
** No release this week. Most of the team is in Canada at a conference<br />
** Milestone 72 slated to be released next week - https://bugzilla.mozilla.org/buglist.cgi?query_format=advanced&target_milestone=72&product=Socorro&list_id=9257790<br />
*SUMO<br />
** Continuous deployment, no updates<br />
*MozTrap<br />
*Wiki<br />
<br />
= CI Updates =<br />
If you've worked on Jenkins or Selenium Grid in the last week, add the necessary info in the Wiki<br />
* Raymond's installed "YourKit Java Profiler" atop the Grid Hub; monitoring for memory usage<br />
<br />
= Community Update =<br />
<br />
= Time off / Out-of-office =<br />
<br />
<br />
= Takeaways and Action Items =<br />
* Takeaways:<br />
**<br />
* Action Items:<br />
**<br />
* Next owner / scribe:<br />
* Next week's meeting notes:</div>Bebef 1987