This page will continue to host information about Balrog that doesn't make sense to put into the repository, such as meeting notes and things related to our hosted versions of Balrog.
Balrog is the software that runs the server side component of the update system used by Firefox and other Mozilla products. It is the successor to AUS (Application Update Service), which did not scale to our current needs nor allow us to adapt to more recent business requirements. Balrog helps us ship updates faster and with much more flexibility than we’ve had in the past.
Infrastructure

We have a number of different Balrog environments with different purposes:
|Environment|App|URL|Updated|Purpose|
|---|---|---|---|---|
|Production|Admin|https://aus4-admin.mozilla.org (VPN required)|Manually by CloudOps|Manage and serve production updates|
|Production|Public|https://aus5.mozilla.org and others (see the Client Domains page for details)|Manually by CloudOps|Manage and serve production updates|
|Stage|Admin|https://balrog-admin.stage.mozaws.net/ (VPN required)|When version tags (eg: v2.40) are created|A place to submit staging Releases and verify new Balrog code with automation|
|Dev|Admin|https://balrog-admin.dev.mozaws.net (VPN required)|Whenever new code is pushed to Balrog's master branch|Manual verification of Balrog code changes in a deployed environment|
Support & Escalation
If the issue may be visible to users, please make sure #moc is also notified. They can also assist with the notifications below.
RelEng is the first point of contact for issues. To contact them, follow the standard RelEng escalation path.
If RelEng is unable to correct the issue, they may escalate to CloudOps.
Monitoring & Metrics
Metrics from RDS, EC2, and Nginx are available in the Datadog Dashboard.
NOTE: These instructions were written before Amazon Athena existed. The next time we need to do such analysis, it's probably worth giving it a try. Other techniques may be better too - the instructions below are just something we've done in the past.
The ELB logs for the public-facing application are replicated to the balrog-us-west-2-elb-logs S3 bucket, located in us-west-2. Logs are rotated very quickly, and we end up with tens of thousands of separate files each day. Because of this, and the fact that S3 has a lot of overhead per-file, it can be tricky to do analysis on them. You're unlikely to be able to download the logs locally in any reasonable amount of time (ie, less than a day), but mounting them on an EC2 instance in us-west-2 should provide you with reasonably quick access. Here's an example:
- Launch an EC2 instance (you probably want a compute-optimized one, and at least 100GB of storage).
- Generate an access token for your CloudOps AWS account. If you don't have a CloudOps AWS account, talk to Ben Hearsum or Bensong Wong. Put the token in a plaintext file somewhere on the instance.
- If you've chosen local storage, you'll probably need to format and mount the volume.
- Install s3fs by following the instructions on https://github.com/s3fs-fuse/s3fs-fuse.
- Mount the bucket on your instance, eg:
s3fs balrog-us-west-2-elb-logs /media/bucket -o passwd_file=pw.txt
- Do some broad grepping directly on the S3 logs, and store it in a local file. This should speed up subsequent queries. Eg:
grep '/Firefox/.*WINNT.*/release/' /media/bucket/AWSLogs/361527076523/elasticloadbalancing/us-west-2/2016/09/17/* | gzip > /media/ephemeral0/sept-17-winnt-release.txt.gz
- Do additional queries on the new logfile.
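The last two steps above can be sketched as follows. The sample log lines and file name are illustrative stand-ins (real ELB access-log lines begin with an ISO 8601 timestamp, which is what makes the hour-bucketing query below work):

```shell
# Stand-in for the broad-grep output from the previous step; in the real
# workflow this file comes from grepping the mounted S3 logs.
printf '%s\n' \
  '2016-09-17T14:05:12.123456Z elb 1.2.3.4:55555 GET /update/3/Firefox/' \
  '2016-09-17T14:59:00.000001Z elb 1.2.3.4:55556 GET /update/3/Firefox/' \
  '2016-09-17T15:00:01.000002Z elb 1.2.3.4:55557 GET /update/3/Firefox/' \
  | gzip > sample-winnt-release.txt.gz

# Bucket requests by hour: the first 13 characters of the timestamp
# are "YYYY-MM-DDTHH".
zcat sample-winnt-release.txt.gz \
  | awk '{ print substr($1, 1, 13) }' \
  | sort | uniq -c | sort -rn
```

The same pattern (one broad grep into a compressed local file, then cheap repeated queries against that file) avoids re-reading the S3-mounted logs for every question you want to answer.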
Nginx logs for the admin app are available (on a ~1 day time delay) in the "net-mozaws-prod-us-west-2-logging-balrog" S3 bucket. These logs are small enough that downloading and querying them locally is generally the most efficient thing to do.
Balrog uses the built-in RDS backups. The database is snapshotted nightly, and incremental backups are taken throughout the day. If necessary, we can restore to within a 5 minute window. Database restoration is done by CloudOps, and they should be contacted immediately if needed.
Balrog's stage and production infrastructure is managed by the Cloud Operations team. This section describes how to go from a reviewed patch to deploying it in production. You should generally begin this process at least 24 hours before you want the new code live in production. This gives the new code a chance to bake in stage.
At a high level, the deployment process looks like this:
- Verify the new code in dev
- Bake the new code in stage
- Deploy to prod
Each part of this process is described in more detail below.
Is now a good time?
Before you deploy, consider whether or not it's an appropriate time to. Some factors to consider:
- Are we in the middle of an important release such as a chemspill? If so, it's probably not a good time to deploy.
- Is it Friday? You probably don't want to deploy on a Friday except in extreme circumstances.
- Do you have enough time to safely do a push? Most pushes take at most 60 minutes to complete once the production push has begun.
If you need to do a schema change you must ensure that either the current production code can run with your schema change applied, or that your new code can run with the old schema. Code and schema changes cannot be done at the same instant, so you must be able to support one of these scenarios. Generally, additive changes (column or table additions) should do the schema change first, while destructive changes (column or table deletions) should do the schema change second. You can simulate the upgrade with your local Docker containers to verify which is right for you.
When you file the deployment bug (see below), include a note about the schema change in it. Something like:
This push requires a schema change that needs to be done _prior_ to the new code going out. That can be performed by running the Docker image with the "upgrade-db" command, with DBURI set.
bug 1295678 is an example of a push with a schema change.
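As a concrete sketch, the migration invocation described above might look like the following. The image name, version tag, and DBURI value are placeholders (check the deployment bug template for the real values), and the command is echoed rather than executed so it can be reviewed first:

```shell
# Placeholders: substitute the real image tag and the real DBURI
# (from the deployment configuration) before running this for real.
DBURI="mysql://balrogadmin:PASSWORD@balrog-db.example.com/balrog"

# Print the migration command for review; drop the "echo" to run it.
echo docker run --rm -e DBURI="$DBURI" mozilla/balrog:vX.Y upgrade-db
```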
Verification in dev
The dev environment automatically deploys new code from the master branch of the Balrog repository (including any necessary schema changes). Before beginning the deployment procedure, you should do some functional testing there. At the very least, you should do explicit testing of all the new code that would be included in the push. Eg: if you're changing the format of a blob, make sure that you can add a new blob of that type, and that the XML response looks correct.
If you have schema changes, you must also ensure that the existing deployed code will work with the new schema. To do this, CloudOps will downgrade the dev apps. You should do some routine testing (make some changes to some objects, try some update requests) to ensure that everything works. If you hit any issues, you CANNOT proceed to production.
Baking in stage
To get the new code in stage you must create a new Release in Github as follows:
- Tag the repository with a "vX.Y" tag. Eg: "git tag -s vX.Y && git push --tags"
- Diff against the previous release tag. Eg: "git diff v2.24 v2.25"
- Look for anything unexpected, or any schema changes. If schema changes are present, see the above section for instructions on handling them.
- Create a new Release on Github. This creates new Docker images tagged with your version and deploys them to stage. It may take upwards of 30 minutes for the deployment to happen.
Once the changes are deployed to stage, let them bake for at least 24 hours. You can do additional targeted testing here if you wish, or simply wait for nightlies/releases to prod things along. It's a good idea to watch Sentry for new exceptions that may show up, and Datadog for any notable changes in the shape of the traffic.
Pushing to production
Pushing live requires CloudOps. For non-urgent pushes, you should begin this procedure a few hours in advance to give CloudOps time to notice and respond. For urgent pushes, file the bug immediately and escalate if no action is taken quickly. Either way, you must follow this procedure to push:
- File a bug to have the new version pushed to production.
- Wednesdays between 11am and 1pm are usually the best time to push to production, because that window is generally free of release events, nightlies, and cronjobs. Unless you have a specific need to deploy on a different day, you should request the prod push for a Wednesday between those hours.
- You should link any bugs being deployed in the "Blocks" field.
- Make sure you substitute the version number and choose the correct rollback option from the bug template.
- Once the push has happened, verify that the code was pushed to production by checking the __version__ endpoints on the Admin and Public apps.
- Bump the in-repo version to the next available one to ensure the next push gets a new version.
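For the verification step, the __version__ endpoint returns a small JSON document. The field names below follow Mozilla's Dockerflow convention and the values are illustrative; in practice you would fetch the real response with something like `curl -s https://aus5.mozilla.org/__version__`. A quick check might look like:

```shell
# Sample of what a __version__ endpoint returns (illustrative values).
cat > version.json <<'EOF'
{"source": "https://github.com/mozilla/balrog", "version": "vX.Y", "commit": "abcdef0"}
EOF

# Confirm the deployed version matches the tag you pushed.
grep -o '"version": "[^"]*"' version.json
```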
Meeting Notes

- CloudOps Migration Meeting - June 22, 2016
- Balrog/absearch Information Exchange - June 28, 2016
- CloudOps Migration Meeting - June 29, 2016
- CloudOps Migration Meeting - July 6, 2016
- CloudOps Final Cut Over Planning - July 12, 2016
- Balrog Worker Brainstorming - July 19, 2016
- CloudOps Meeting - July 27, 2016
- CloudOps Meeting - August 10, 2016
- CloudOps Meeting - September 7, 2016
- CloudOps Meeting - September 14, 2016
- CloudOps Meeting - September 21, 2016
- CloudOps Meeting - September 28, 2016
- CloudOps Meeting - October 19, 2016
- CloudOps Meeting - November 2, 2016
- CloudOps Meeting - November 9, 2016
- CloudOps Meeting - January 17, 2017
- CloudOps Meeting - February 22, 2017