
Talos Data

Raw data is generated by Talos. We apply some filters to summarize and reduce the data, then we post it to a server:

  • Graphserver
  • Perfherder



When a build is completed we run a series of test jobs against it (unittest and performance "talos"). Each job reserves a machine for itself, then runs a script which sets up the environment, installs the build, executes the test, generates the results, and cleans up after itself. In general we try to ensure that jobs complete in 30 minutes or less.

For Talos we have a series of jobs, and each job runs one or more test suites (with the exception of tp and xperf, jobs run 2-4 suites at a time). A suite is something like 'ts_paint', 'Canvasmark', or 'tp5'. Each suite runs its respective subtests and provides a summary number which represents a meaningful aggregation of the individual subtest results. When all the suites in a job have completed, the results are output (uploaded in some cases), and we can look for regressions, view the data in a graph, and query the summarized data.


Suite

A collection of subtests which run together. These are often referred to simply as 'tests'. Some examples are "tresize", "TART", "tp5", "ts_paint".

  • in graph server this is the lowest level of granularity available in the UI
  • in Perfherder suite-level results are called a 'summary' (e.g "tp5o summary opt")


Subtest

A specific test (usually a webpage to load) from which we collect data points. We typically run many cycles of each subtest to build up a representative collection of data points and make sure the data is meaningful.

  • in graph server Talos upload a single number for each subtest, the data points are summarized by Talos prior to uploading.
  • in Perfherder the subtest data is preserved as raw data points as well as summarized by Talos. We use the summarizations when showing a graph.

Data Points (aka Replicates)

Data points are the single numbers, or replicates, we collect while executing a Talos test. We collect a series of numbers (usually 20 or more) for each subtest; each of these 20+ numbers is a data point.

We filter the data points, mainly because the first few are not a representative sample of the remaining data points we collect. The one exception is internal benchmarks (generally suites which measure something other than time); for benchmarks, a special formula is usually applied to the data points.
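As a concrete illustration, here is a minimal sketch (not the actual Talos source) of the two most common subtest filters: dropping the first few warmup data points, then taking the median of the rest.

```python
def ignore_first(points, x=5):
    """Drop the first x data points (warmup runs)."""
    return points[x:]

def median(points):
    """Return the median of a list of data points."""
    ordered = sorted(points)
    n = len(ordered)
    mid = n // 2
    if n % 2:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2.0

# Typical pipeline: 20 replicates -> drop 5 warmups -> one filtered value.
replicates = [30.0, 26.0, 25.0, 24.0, 23.5,
              23.0, 22.0, 24.0, 23.0, 22.5,
              23.5, 22.0, 23.0, 24.5, 23.0,
              22.5, 23.5, 23.0, 22.0, 24.0]
filtered = median(ignore_first(replicates, 5))  # -> 23.0
```

Note how the warmup values (noticeably higher than the rest) never influence the filtered result.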

Subtest Filters

We have a variety of filters defined for Talos. I will explain what each filter is, and you can see the exact settings used for each filter by looking at the individual tests.


ignore_first

This filter ignores the first 'X' data points, allowing us to discard warmup runs.

  • input: an array of subtest data points
  • returns: an array of data points
  • source:
  • used in most tests with X=1, X=2, or X=5 (5 is the normal case)


median

This filter takes an array of data points and returns their median (a single value).

  • input: an array of subtest data points
  • returns: a single value
  • source:
  • used in most tests


mean

This filter takes an array of data points and returns their mean (a single value).

  • input: an array of subtest data points
  • returns: a single value
  • source:
  • used in kraken for subtests


dromaeo

This filter is specific to Dromaeo and respects the structure of the data: every 5 data points represent a different metric being measured.

  • input: an array of dromaeo (DOM|CSS) subtest data points
  • returns: a single number (geometric_mean of the metric summarization)
  • source:
  • used in dromaeo_dom and dromaeo_css to build a single value for the subtests
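A hedged sketch of how such a chunked filter could work. The per-chunk summarization shown here (a geometric mean of each 5-point chunk, then a geometric mean across the chunk summaries) is an assumption for illustration, not a copy of the Talos source:

```python
import math

CHUNK = 5  # per the Dromaeo format, every 5 points belong to one metric

def geometric_mean(values):
    # Computed in log space to avoid overflow on large products.
    return math.exp(sum(math.log(v) for v in values) / len(values))

def dromaeo_filter(points):
    """Summarize each 5-point chunk, then combine across chunks."""
    chunks = [points[i:i + CHUNK] for i in range(0, len(points), CHUNK)]
    summaries = [geometric_mean(c) for c in chunks]
    return geometric_mean(summaries)
```

With two chunks of constant values 2 and 8, the result is their geometric mean, 4.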


v8_subtest

  • input: an array of v8_7 subtest data points
  • returns: a single value representing the benchmark weighted score for the subtest (see for details)
  • source:
  • used in v8_7 for the subtests

NOTE: this deviates from the exact definition of v8 as we retain the Encrypt and Decrypt as subtests (instead of combining them into Crypto) as well as keeping Earley and Boyer (instead of combining them into EarleyBoyer). There is a slight tweak in the final suite score, but it is <1% different.

Suite Summarization Filters

Once we have a single number from each of the subtests, we need to generate a single number for the suite. There are 4 specific calculations used.


geometric_mean

This is a standard geometric mean of the data:

  • inputs: array of subtest summarized data points (one point per subtest)
  • returns: a single value representing the geometric mean of all the subtests
  • source:
  • used for most tests
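A minimal sketch of the suite-level geometric mean, computed in log space for numerical robustness (illustrative, not the Talos source):

```python
import math

def geometric_mean(subtest_values):
    """One summarized value per subtest in, one suite number out.

    Equivalent to the n-th root of the product of the values.
    """
    return math.exp(sum(math.log(v) for v in subtest_values)
                    / len(subtest_values))

suite_score = geometric_mean([100.0, 400.0])  # -> 200.0
```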


v8_metric

This is a custom metric which takes the geometric_mean of the subtests and multiplies it by 100.

  • inputs: array of v8 subtest summaries
  • returns: a single v8 score
  • source:
  • used for v8 version 7 only
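The calculation can be sketched in a few lines (the helper name is illustrative):

```python
import math

def v8_metric(subtest_values):
    """Geometric mean of the subtest summaries, scaled by 100."""
    geo = math.exp(sum(math.log(v) for v in subtest_values)
                   / len(subtest_values))
    return 100 * geo
```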


canvasmark_metric

This is the metric used to calculate the Canvasmark score from the summarized subtest results. Essentially it is a sum of the subtests; it is identical to the js_metric.


js_metric

This is the metric used to calculate the Kraken score from the summarized subtest results. Essentially it is a sum of the subtests.

  • inputs: array of Kraken subtest results
  • returns: a single Kraken score
  • source:
  • used for Kraken only
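Since both the Kraken and Canvasmark metrics are essentially a sum of the summarized subtests, the sketch is a one-liner (the function name is illustrative):

```python
def sum_metric(subtest_values):
    """Suite score = sum of the summarized subtest results."""
    return sum(subtest_values)
```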


Perfherder ingests data from Talos by parsing the raw log, then stores the data in a database while preparing it for regression detection and display on graphs.

Raw Data

In the log files, we look for "TALOSDATA: " text followed by a valid json blob. An example TALOSDATA blob looks like:

 [{"talos_counters": {}, "results": {"tresize": [23.26174999999999, 22.99621666666672, 22.66563333333331, 23.99620000000002, 22.940849999999948, 22.26951666666664, 22.975350000000006, 24.96453333333337, 23.6878333333334, 23.21740000000001, 24.743699999999976, 23.507333333333282, 22.927800000000033, 22.292066666666653, 23.28364999999999, 23.361950000000004, 22.18191666666666, 22.996466666666684, 23.54029999999997, 22.873883333333342]}, "summary": {"suite": 23.21740000000001, "subtests": {"tresize": {"std": 0.7716690474213389, "min": 22.18191666666666, "max": 24.96453333333337, "median": 23.21740000000001, "filtered": 23.21740000000001, "mean": 23.254913333333334}}}, "test_machine": {"platform": "x86", "osversion": "Ubuntu 12.04", "os": "linux", "name": "talos-linux32-ix-040"}, "testrun": {"date": 1440091515, "suite": "tresize", "options": {"responsiveness": false, "cycles": 20, "tpmozafterpaint": true, "shutdown": false, "rss": false}}, "test_build": {"name": "Firefox", "version": "43.0a1", "id": "20150820095841", "branch": "Mozilla-Inbound-Non-PGO", "revision": "bb85ec539217b9d3a5e83c40538d8565d292e72b"}}, {"talos_counters": {}, "results": {"Plasma - Maths- canvas shapes": [545.0, 572.0, 598.0, 662.0, 588.0], "Asteroids - Shapes- shadows- blending": [748.0, 737.0, 720.0, 742.0, 743.0], "Asteroids - Bitmaps- shapes- text": [1031.0, 1011.0, 913.0, 1063.0, 888.0], "Arena5 - Vectors- shadows- bitmaps- text": [892.0, 738.0, 900.0, 920.0, 806.0], "Asteroids - Vectors": [675.0, 735.0, 659.0, 789.0, 768.0], "3D Rendering - Maths- polygons- image transforms": [306.0, 434.0, 388.0, 426.0, 389.0], "Pixel blur - Math- getImageData- putImageData": [1291.0, 1435.0, 1553.0, 1461.0, 1521.0], "Asteroids - Bitmaps": [435.0, 418.0, 410.0, 403.0, 380.0]}, "summary": {"suite": 6204.0, "subtests": {"Plasma - Maths- canvas shapes": {"std": 34.19064199455752, "min": 572.0, "max": 662.0, "median": 593.0, "filtered": 593.0, "mean": 605.0}, "Asteroids - Shapes- shadows- blending": {"std": 
9.233092656309694, "min": 720.0, "max": 743.0, "median": 739.5, "filtered": 739.5, "mean": 735.5}, "Asteroids - Bitmaps- shapes- text": {"std": 71.23333138355947, "min": 888.0, "max": 1063.0, "median": 962.0, "filtered": 962.0, "mean": 968.75}, "Arena5 - Vectors- shadows- bitmaps- text": {"std": 73.40980860893181, "min": 738.0, "max": 920.0, "median": 853.0, "filtered": 853.0, "mean": 841.0}, "Asteroids - Vectors": {"std": 49.37294299512639, "min": 659.0, "max": 789.0, "median": 751.5, "filtered": 751.5, "mean": 737.75}, "3D Rendering - Maths- polygons- image transforms": {"std": 20.94486810653149, "min": 388.0, "max": 434.0, "median": 407.5, "filtered": 407.5, "mean": 409.25}, "Pixel blur - Math- getImageData- putImageData": {"std": 46.82680856090878, "min": 1435.0, "max": 1553.0, "median": 1491.0, "filtered": 1491.0, "mean": 1492.5}, "Asteroids - Bitmaps": {"std": 14.16642156650719, "min": 380.0, "max": 418.0, "median": 406.5, "filtered": 406.5, "mean": 402.75}}}, "test_machine": {"platform": "x86", "osversion": "Ubuntu 12.04", "os": "linux", "name": "talos-linux32-ix-040"}, "testrun": {"date": 1440091515, "suite": "tcanvasmark", "options": {"responsiveness": false, "tpmozafterpaint": false, "tpchrome": true, "tppagecycles": 1, "tpcycles": 5, "tprender": false, "shutdown": false, "cycles": 1, "rss": false}}, "test_build": {"name": "Firefox", "version": "43.0a1", "id": "20150820095841", "branch": "Mozilla-Inbound-Non-PGO", "revision": "bb85ec539217b9d3a5e83c40538d8565d292e72b"}}]
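A hedged sketch of how a log parser might locate and decode such a blob: find the "TALOSDATA: " marker on a line and JSON-decode the remainder. The real Perfherder ingestion code is more involved; the helper below is illustrative only.

```python
import json

MARKER = "TALOSDATA: "

def parse_talosdata(log_text):
    """Return the first TALOSDATA blob found in the log, or None."""
    for line in log_text.splitlines():
        idx = line.find(MARKER)
        if idx != -1:
            return json.loads(line[idx + len(MARKER):])
    return None

log = ('INFO | TALOSDATA: '
       '[{"testrun": {"suite": "tresize"}, "summary": {"suite": 23.2174}}]')
data = parse_talosdata(log)  # -> list with one suite result dict
```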

Filtering & Calculations

When the raw data comes in, we look for the summary tag in the json.

{"suite": 23.21740000000001, ... }

In this case we would use 23.22 for the value inside of perfherder (perfherder rounds to two decimal places). This is the value that will be used for calculating alerts, displaying points on the graph, and for data when comparing two revisions.
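The extraction and rounding can be sketched as follows (illustrative, not Perfherder's actual code):

```python
# Take the suite value from the summary blob and round to two
# decimal places, as Perfherder does for display.
summary = {"suite": 23.21740000000001}
displayed = round(summary["suite"], 2)  # -> 23.22
```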

In all cases there should also be a 'subtests' field that lists each page loaded along with a set of values:

"subtests": {"tresize": {"std": 0.7716690474213389, "min": 22.18191666666666, "max": 24.96453333333337, "median": 23.21740000000001, "filtered": 23.21740000000001, "mean": 23.254913333333334}

These values are used in the subtest-specific view (not the suite summary). When viewing a graph, you can switch between the different values for each data point to see the mean, median, etc.; this is where those fields come from. The default value is 'filtered', which applies the subtest filters (ignore first 'X' data points, median|mean, etc.) to the raw data, so the summarized data is calculated at a single point.

Each suite can set custom filters, and keeping this logic inside Talos ensures it is always done in a single place, in source code, where developers can easily find it.

Graph Server

Data is packaged as a file in an HTTP POST request.


There are two different types of data to be sent:

  1. A single value to be stored as the 'average' in the test_runs table
  2. A set of (interval, value) pairs to be stored in the test_run_values table, 'average' to be calculated by collector script

The first type is called 'AVERAGE', the second 'VALUES'. All data is formatted using comma-separated notation.

date_run = seconds since epoch (Linux timestamp)
page_name = unique to pages when combined with the pageset_id from the test table

  • for sending interval, value pairs
  • for sending a single value


values input:

machine_1, test_1, branch_1, changeset_1, 13, 1229477017


Content-type: text/plain 


average input:

machine_1, test_1, branch_1, changeset_1, 13, 1229477017


Content-type: text/plain
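The comma-separated payload above can be assembled with a small helper. The field order (machine, test, branch, changeset, value, date_run) is inferred from the example line and is an assumption, not a confirmed spec:

```python
def build_average_line(machine, test, branch, changeset, value, date_run):
    """Assemble one comma-separated AVERAGE record (field order assumed)."""
    fields = (machine, test, branch, changeset, value, date_run)
    return ", ".join(str(f) for f in fields)

line = build_average_line("machine_1", "test_1", "branch_1",
                          "changeset_1", 13, 1229477017)
# -> "machine_1, test_1, branch_1, changeset_1, 13, 1229477017"
```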



The data is harvested from browser_output.txt: