Firefox/Go Faster/Release Pipeline




See: mrrrgn

The initial pipeline should be very lightweight. Tests and builds can be run via a TaskCluster -> GitHub bridge. The idea:

  • Pull Requests trigger testing (autolanding may need to come later)
  • Untagged pushes trigger testing
  • Tagged pushes trigger builds which result in an upload being sent to AMO/Balrog
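
The trigger rules above can be sketched as a simple event-to-stages mapping. This is an illustrative sketch only; the event names mirror GitHub webhook events, and the stage labels are made up, not real TaskCluster task names.

```python
# Hypothetical sketch of the event -> pipeline mapping described above.
# Stage labels ("test", "build", "upload") are illustrative placeholders.

def tasks_for_event(event_type, ref=None):
    """Return the pipeline stages to run for a GitHub event."""
    if event_type == "pull_request":
        return ["test"]                      # PRs trigger testing only
    if event_type == "push":
        if ref and ref.startswith("refs/tags/"):
            return ["build", "upload"]       # tagged pushes build + upload
        return ["test"]                      # untagged pushes trigger testing
    return []                                # everything else is ignored

print(tasks_for_event("pull_request"))
print(tasks_for_event("push", "refs/tags/v1.0"))
```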

A/B testing and allowing public access to addons will need to be handled by hand after this process is completed.


  • Can we get away with doing only Linux initially? There is no good solution for Mac/Windows via TaskCluster at the moment, and buildbot changes are not generally within the realm of "lightweight." | No, we'll need to use the TaskCluster generic worker to support Windows first and foremost.

Pieces Needed

  • A test project: This will be Hello (see: rhelmer, standard8, dmose, abr)
  • TaskCluster -> GitHub bridge: this will allow us to set up automated testing and builds very quickly. (Note: why not Autolander? It only handles pull requests, so it could provide automated testing, but would not allow us to handle automated builds.)
  • Test/Build TaskCluster Tasks: this can be completed in parallel with the TC-GH bridge.
  • TaskCluster task for addon uploads: first we need to decide between Balrog and AMO. This task should be a child of a successful build (fetching the built addon from its parent task). It can also be worked on in parallel with TaskCluster -> GitHub bridge development.
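
The parent/child relationship in the last bullet amounts to the upload task deriving the artifact location from its parent build task. A minimal sketch, assuming a placeholder queue URL and artifact path (both hypothetical, not the real TaskCluster endpoints):

```python
# Hypothetical sketch: the upload task locates the signed addon produced
# by its parent build task. QUEUE and the artifact path are placeholders.
QUEUE = "https://queue.taskcluster.example"

def artifact_url(parent_task_id, path="public/build/addon.xpi"):
    """URL of an artifact produced by the parent build task."""
    return f"{QUEUE}/v1/task/{parent_task_id}/artifacts/{path}"

# The upload task would fetch this artifact, then push it to AMO/Balrog:
print(artifact_url("abc123"))
```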


(mrrrgn) TC->GitHub bridge

Release Triggering

see: ???

Developers will need a system for notifying us when they are ready for a release to be rolled out. We also need procedures for things like tagging.

  • Could this be a self-serve UI? (mrrrgn) : No, this is too heavy/slow. (mrrrgn)
  • Could this be related to .taskclusterrc and tied to either release tagging or pinning to a revision? (selenamarie) : Yes! :) (mrrrgn)

Shipping Infrastructure

We'll be using AMO + Balrog.

See: mrrrgn, bhearsum, ??? (we'll probably need help from other RelEng members)


Balrog will be in charge of making decisions about which updates to serve. Firefox will talk to it periodically, and its response will point at AMO and other places where appropriate (e.g., Kinto) to serve the actual updates.
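
The division of labor above (Balrog decides, AMO/Kinto serve the bits) can be illustrated with a toy response builder. The response shape and URL here are assumptions for illustration, not Balrog's actual update format:

```python
# Illustrative sketch: Balrog decides *whether* to serve an update; its
# response only points at the host (AMO, Kinto, ...) that serves the file.
# Keys and the URL are made up, not the real Balrog response schema.

def balrog_response(addon, version, serve_update):
    """Shape of a hypothetical answer to a periodic update ping."""
    if not serve_update:
        return {}                    # no update for this client
    return {
        "addon": addon,
        "version": version,
        # Balrog doesn't host files itself; it points elsewhere:
        "url": f"https://addons.example.org/{addon}-{version}.xpi",
    }

print(balrog_response("hello", "1.2", True))
```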

AMO Needed Features / Status:

There are some difficulties around using AMO, but it seems possible to create a working prototype without actually changing its code base (though it will require a workflow with human intervention).

tl;dr: We'll need a special "GoFast" addon type with improved permissions, and a few REST endpoints (instead of crufty old web forms).

Ability to host multiple addon versions at the same time.

Status: This is supported but only for "listed" [public] addons.

API for uploading addons

This is currently a multi-part web form; it will need to be turned into an API ASAP.
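
From a client's perspective, the desired replacement might look like a single authenticated request. Everything here (the endpoint path, auth scheme, and content type) is an assumption about what such an API could look like, not AMO's actual API:

```python
# Sketch of a hypothetical upload API replacing the multi-part web form.
# Endpoint, auth header, and content type are assumptions, not AMO's API.
import urllib.request

def build_upload_request(xpi_bytes, token, base="https://addons.example.org"):
    """Construct (but don't send) a PUT request uploading an addon."""
    return urllib.request.Request(
        f"{base}/api/addons/upload/",
        data=xpi_bytes,
        method="PUT",
        headers={
            "Authorization": f"Bearer {token}",       # hypothetical auth
            "Content-Type": "application/x-xpinstall",
        },
    )

req = build_upload_request(b"\x00", "secret-token")
print(req.get_method(), req.full_url)
```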


Ability to partially roll out updates

Balrog already supports rolling out changes at different rates. E.g., new Firefox versions are initially shipped to only 25% of update requests, then rolled out fully later. This allows for widespread testing without rolling out to our entire userbase.
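
The rate-limited rollout described above boils down to serving the new version to roughly N% of update requests. Real Balrog throttling is more involved; this is just a minimal sketch of the idea:

```python
# Minimal sketch of rate-limited rollout: serve the update to roughly
# `rate_percent` of incoming update requests.
import random

def should_serve(rate_percent, rng=random):
    """Decide whether this update request falls in the rollout bucket."""
    return rng.uniform(0, 100) < rate_percent

random.seed(0)
served = sum(should_serve(25) for _ in range(10_000))
print(f"served to ~{served / 100:.1f}% of requests")  # roughly 25%
```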

Balrog does not currently support true A/B testing, as it cannot identify unique browsers across multiple requests for updates. Morgan has some ideas about how we could build a proxy on top of Balrog that would allow us to do this, if desired.
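
One way such a proxy could work (an assumption for illustration, not necessarily Morgan's design): derive a stable cohort from a per-profile ID, so the same browser always lands in the same A/B bucket across update pings:

```python
# Sketch: deterministic A/B bucketing from a stable per-profile ID, so
# repeated update pings from one browser always hit the same cohort.
import hashlib

def cohort(client_id, buckets=("A", "B")):
    """Map a stable client ID to a deterministic experiment bucket."""
    digest = hashlib.sha256(client_id.encode("utf-8")).digest()
    return buckets[digest[0] % len(buckets)]

print(cohort("profile-1234"))  # same ID always yields the same bucket
```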

Multiple types of updates in a single request/response

Balrog's current architecture makes it very difficult to return updates for multiple things in the same request/response. Each update ping can only match one data object (which is responsible for generating the response), which means that each unique combination of things we may want to return requires a new copy of the data. To properly support all of the different types of updates we want to do (system addons, security policy, Fennec stuff, etc.) we will need to rearchitect Balrog to be able to map a single request to multiple pieces of data, and collapse those back into a single response.
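
The "map one request to multiple pieces of data, then collapse" step might look like the following toy merge. The key names and blob shapes are purely illustrative, not Balrog's actual data model:

```python
# Sketch of the rearchitecture described above: several per-type update
# blobs (system addons, security policy, ...) collapsed into one response.
# Keys and shapes are illustrative only.

def collapse(responses):
    """Merge per-type update blobs into one response document."""
    merged = {"updates": {}}
    for blob in responses:
        merged["updates"].update(blob)
    return merged

system_addons = {"system-addons": {"hello": "1.2"}}
security_policy = {"security-policy": {"version": 7}}
print(collapse([system_addons, security_policy]))
```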