

ActiveData is a collection of about 8 billion records (Feb 2016) covering unit tests, Buildbot jobs, performance data, and Mercurial. The collection is publicly available and can be queried directly, much like any database.

ActiveData is built on top of ElasticSearch, a fast, distributed, redundant document store. ActiveData provides the benefits of familiar and succinct SQL by translating SQL-like queries to ElasticSearch queries.


In order to improve our testing infrastructure, we require data on how that infrastructure is performing. That information can be extracted from the raw logs, but doing so requires downloading samples, parsing the data, and inserting it into a database (or worse, writing queries in an imperative language, like Python). By the time an analysis is done, we have effectively built an ETL pipeline that does not scale and is too specific to be reused elsewhere. The next project does this work all over again.


ActiveData serves as a reusable ETL pipeline, annotating test results with as much relevant data as possible. It also provides a query service for exploring and aggregating the data, so minimal setup is required to access it.


ActiveData has a Redash connector for STMO that accepts SQL; see the main documentation for details.


ActiveData is fast enough to support dashboards.

Build times

  • End to End Times - Shows overall time from when a build is first requested to the time tests on that build are complete.
  • Build Times - Time-series view of build times by platform and build type. Click on a bar to get a scatter-plot view.
  • Detailed Build Times - Scatter plot of build times. Use the left navigation panel to choose a combination. Click on a data point to see the Buildbot step times and Mozharness step times.
  • Buildbot Simulator - An incomplete Buildbot scheduling simulator. It can be used to see past wait times, queue size, and inter-job delays.
  • Test Runtimes - Choose a test suite and machine pool to get an average run time for each of the Buildbot steps and Mozharness steps.

Unit Test Visualization

With all unit test results in ActiveData, we can get accurate estimates of failure rates and focus on the most frequently failing tests.

  • Top Intermittent Failures - Lists of the top 30 most-failing unit tests and the top 30 most recently failing tests. Click on a link to get a scatter plot.
  • Find Test Results - Use the search bar to find a test. A list of matching test and platform combinations will show the unit test failures and durations.
  • Neglected Oranges - Cross-references OrangeFactor and Bugzilla to give a list of frequent intermittents that have no bug activity.


ActiveData aims to give the public the benefits of a readily available database, only larger and faster.


An ActiveData instance distinguishes itself from a static resource, a database, or a big-data solution by delivering a particular set of features:

  • A service, open to third-party clients - By providing the service, clients don't need to set up their own datastore.
  • Fast filtering - Sub-second filtering over the contents of the whole datastore, independent of size, saves the application developer from declaring and managing indexes that do the same: there is sufficient information in the queries to determine which indexes should be built to deliver a quick response.
  • Fast aggregates - Sub-second calculation of statistics over the whole datastore saves the application developer from building and managing caches of those aggregates.
  • API is a query language (SQL? MDX?) - Building upon the formalisms, and familiarity, of existing query languages, we reduce the learning curve, give ActiveData implementations more insight into the intent of the client application, and allow them to optimize for its use cases.
  • Uniform, Cartesian space of values - Mozilla has a mandate of data-driven decision making. Data-analysis tools like spreadsheets, R, SciPy, NumPy, and Pandas all require uniform data in multi-dimensional arrays, commonly known as "pivot tables" or "data frames". ActiveData's objective is to provide query results in these formats.
  • Metadata on dimensions and measures - ActiveData also provides context for the data it holds. This allows exploration and discovery by third parties: it describes units of measure, how dimensions relate to one another, and gives human-readable descriptions of the stored columns. This metadata is also invaluable in automating the orientation and formatting of dashboard charts: knowing the domain of an axis allows code to choose the best (default) chart form and to offer logically reasonable aggregate options.
  • Has a security model - Simpler applications can avoid the complications of a security model if it is baked into the ActiveData solution. If ActiveData is to become mainstream, it is important that it can manage sensitive data and PII.
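As a concrete sketch, a query combining fast filtering with a fast aggregate might look like the following. The clause names follow the SQL-like query language described above, but the specific field names here are assumptions for illustration, not verified against the live schema:

```python
# A sketch of a JSON Query Expression: count failed unittest results,
# grouped by platform. Field names ("result.ok", "run.machine.platform")
# are hypothetical examples.
failure_counts = {
    "from": "unittest",                     # the datastore/cube to query
    "where": {"eq": {"result.ok": False}},  # fast filter: failures only
    "groupby": "run.machine.platform",      # one group per platform
    "select": {"aggregate": "count"},       # fast aggregate over the store
}
```

Because the query declares both the filter and the aggregate, the service has enough information to decide which indexes and caches to maintain on the client's behalf.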


The unittest data is limited to those test suites that generate structured logs. Currently (Feb 2016) the following do NOT have structured logs, and are NOT in ActiveData:

  • cppunittest
  • any of the JS-based Gaia suites (e.g. Gij)

Specifically, you can check whether a structured log is being generated: in Treeherder, click a job and, under the "Job details" pane at the bottom, look for a line similar to:

artifact uploaded: <suite>_raw.log

If that line is present, the suite is using structured logging.

ActiveData makes specific tradeoffs to achieve its goals. It has the following limitations:

  • large memory requirements
  • low add/update/remove speeds
  • strict data model (snowflake schema, hierarchical relations only)
  • non-relational
  • ETL work required to de-normalize data
  • ETL work required to provide dimension metadata

Non Goals

ActiveData is not meant to replace an application database. Applications often track significantly more data related to good interface design, process sequences, complex relations, and object life cycles. ActiveData's simple model makes it difficult to track object life cycles and impossible to model many-to-many relations. Data is not live, and it definitely does not track "pending jobs" the way Treeherder or TaskCluster do. Test results may take a day, or more, to be indexed.

Dependencies / Who will use this


ActiveData's ETL pipeline ingests data from a variety of sources:

  • Structured Logs from Unittests
  • TaskCluster tasks
    • including mozharness timings
  • PerfHerder at a per-replicate level
  • Treeherder
  • branches and revisions
  • code coverage (all tests, all files, all lines)
  • various logs from ETL pipeline
  • all Firefox files and related components
  • Firefox testing results
  • old buildbot jobs


ActiveData's primary goal is to support dashboards that give Mozilla useful perspectives on its large amount of data:

  • ActiveData Recipes has a variety of use cases
  • Individual unit test results
  • TaskCluster test timing
  • Firefox compile times
  • Recently added, removed, and disabled tests
  • Buildbot wait times
  • CodeCoverage aggregates and per-file detail

Let's Use It!

The service listens for HTTP POST requests and accepts queries in JSON Query Expression format:

   curl -XPOST -d "{\"from\":\"unittest\"}"
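The same query can be sent from Python. The endpoint URL below is a placeholder, not the real service address, and the network call itself is shown but not executed:

```python
import json
from urllib import request

# Placeholder endpoint; substitute the actual ActiveData service URL.
ACTIVEDATA_URL = "https://activedata.example.org/query"

def query_activedata(expr, url=ACTIVEDATA_URL):
    """POST a JSON Query Expression and return the decoded JSON response."""
    body = json.dumps(expr).encode("utf8")
    req = request.Request(url, data=body,
                          headers={"Content-Type": "application/json"})
    with request.urlopen(req) as response:
        return json.load(response)

# Same query as the curl example above, limited to a handful of rows:
example = {"from": "unittest", "limit": 5}
# rows = query_activedata(example)  # requires network access
```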

The Query Tool

The ActiveData service is intended for use by automated clients, not humans. The Query Tool is a minimal web page that lets humans do some exploration and test query phrasing.



Development is still in the early stages; for setting up your own service, contact:


  • Kyle Lahnakoski
    • IRC:
    • Email:
    • Bugzilla: :ekyle

More Context

Mostly rambling, optional reading.


This project is inspired by the data warehouse and data mart technology common inside large corporations. These warehouses are useful because they are "active" services: the data is not only available, it can be explored interactively by a large audience using a query language.

General Problem

A significant portion of any application is its backend database/datastore, the work for which includes:

  • Managing resources and machines to support the datastore
  • Data migrations on schemas during application lifetime
  • Manually defining database indexes for responsive data retrieval
  • Coding caching logic to reduce application latency

The manual effort put toward these features becomes significant as the data grows in size and complexity. More importantly, this effort is spent over and over on a multitude of applications, each a trivial variation of the next.

General Solution

Abstractly, we want to reduce this redundant workload by adding a layer of abstraction called ActiveData: clients using ActiveData benefit from the features it provides and avoid the complexities of datastore management, while ActiveData implementers can focus on those common issues, given a simpler data model and a simpler query language upon which to compute optimizations.

Columnar datastores have solved many (but not all) problems with changing schemas. Query-directed indexing has been around for decades in Oracle's query-optimization algorithms, and is available for free in ElasticSearch. We now have the technology to build an ActiveData solution.

By defining an ActiveData standard, we can innovate on both sides of the ActiveData abstraction layer independently.

Client Architecture

Applications that leverage an active data warehouse can forgo much, if not all, server-side development and put the logic on the client side.