Calendar:Release Build Instructions
- The l10n lead needs to approve or reject any existing sign-offs for the milestone on the l10n dashboard.
- L10n lead needs to make sure shipped-locales is up to date (in comm-beta/comm-release).
- Release driver needs to determine the changesets & relbranches for the release.
Preparing for the builds
- Someone needs to hit the ship-it button on the dashboard for the milestone.
- Creating a new l10n milestone no longer requires filing a bug, but if there is no milestone to ship, you can file a bug to request one.
- Update calendar/locales/shipped-locales from the l10n dashboard; make sure to remove en-US from the list.
- Modify calendar/lightning/install.rdf to use fixed min/maxVersions for SeaMonkey and Thunderbird.
- Push everything to comm-beta or comm-release. Remember this changeset for the following section.
- Land the l10n changesets.
- You need to update release_calendar.py with all the information mentioned in the first section:
- branchSuffix = 'release' or 'beta'
- sourceRepoRevision, mozillaRepoVersion
- version, milestone (= gecko version), buildNumber
- oldVersion and oldBuildNumber are usually updated as well, but this isn't vital in the current configuration.
- Push everything to buildbot-configs.
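The en-US removal step above can be sketched as a one-liner. This is a hypothetical illustration; the sample locale list here merely stands in for whatever the l10n dashboard exports.

```shell
# Hypothetical sketch: strip en-US from a shipped-locales list before
# committing calendar/locales/shipped-locales. The sample data below is
# made up; the real list comes from the l10n dashboard.
printf 'de\nen-US\nfr\n' > /tmp/shipped-locales.example
grep -v '^en-US' /tmp/shipped-locales.example
```

The `grep -v` filter keeps every line that does not start with `en-US`, so build-platform annotations on other locales survive untouched.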
Starting the builds
- Clobber the calendar release builders (currently: 'tag', 'source', 'win32_build', 'macosx64_build', 'linux64_build', 'linux_build').
- ssh into the Calendar buildbot master
- Pull the latest configs and reconfigure the master:
$ cd /buildbot/calendar
$ ./update.sh
- If you see "Reconfiguration appears to have completed successfully." then continue; otherwise wait 5 minutes and then continue.
- Kick off the build using release.sh. Note that beta/release is not the type of build, but the branch you are building from!
$ # Usage: release.sh <beta|release> <version> [<buildnr>] [<kicknr>]
$ ./release.sh beta 1.3b1
$ # or:
$ ./release.sh release 1.3
- At this stage, the master should show that the "tag" builder either has a build pending, or is running.
- If the tag build is pending, you might want to stop it using the buildbot web interface
- It is a good idea to check the comm-* and mozilla-* repositories during and after tagging to ensure the correct revisions were tagged and the version bump is correct. Also, stick around for the first 2 minutes of the platform builds in case they error out.
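The branch/version relationship that release.sh expects can be sketched as a small argument check. This is a hypothetical illustration of the kind of sanity check involved; the real script lives on the buildbot master, and the function name here is made up.

```shell
# Hypothetical sketch of release.sh-style argument validation.
# A beta version string (e.g. 1.3b1) should only ship from the beta branch.
check_release_args() {
  branch="$1"
  version="$2"
  case "$branch" in
    beta|release) ;;   # first argument names the branch, not the build type
    *) echo "branch must be beta or release" >&2; return 1 ;;
  esac
  case "$version" in
    *b*) [ "$branch" = "beta" ] || { echo "beta version on release branch" >&2; return 1; } ;;
  esac
  echo "ok: $branch $version"
}

check_release_args beta 1.3b1      # prints "ok: beta 1.3b1"
check_release_args release 1.3     # prints "ok: release 1.3"
```

As the note above says, the first argument is the branch you are building from, so `release.sh release 1.3b1` would be a mistake this kind of check catches.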
Post Build Step
- ssh into stage.mozilla.org and run the following to unify the Linux builds. (This is currently run in the home directory; it should really move to buildbot, and there's a bug on file for that somewhere.)
$ # <version> is the version number e.g. 1.1, <build> is the build number e.g. 1.
$ sh fix_linux_xpi.sh <version> <build>
- Test the Lightning builds in the nightly/*-candidates directory
- Make sure the maxVersion is set to <major>.* instead of <major>.0
- Upload to AMO
- Notify the right people to create the Solaris contrib builds
- ssh to stage.mozilla.org to move files from <version>-candidates to releases/<version>. There is a script in the home directory that takes the same parameters as fix_linux_xpi.sh:
$ # <version> is the version number e.g. 1.1, <build> is the build number e.g. 1.
$ sh move_to_release.sh <version> <build>
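The directory convention the two stage scripts operate on can be sketched from the steps above. This is a hypothetical illustration; the layout is inferred from the instructions, and the buildN subdirectory naming is an assumption.

```shell
# Hypothetical sketch of the candidates -> releases path convention
# (assumed from the steps above; the buildN suffix is a guess).
version=1.1
build=1
candidates="nightly/${version}-candidates/build${build}"
release="releases/${version}"
echo "move $candidates -> $release"
```

Testing the Lightning builds happens while they are still under the candidates path; only after sign-off does move_to_release.sh promote them to the releases path.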
What to do if something fails?
- The tag builder fails
- If the whole job failed, you can just restart it with a sendchange. Make sure you clobber again first.
- Check whether the only step that failed is the hg out step. If so, the push step may still have worked and the tagging may be correct. Use the 'Force Build' button on the release and source builders (linux_build, linux64_build, win32_build, macosx64_build, source).
- If tagging partially failed, modify l10n-calendar-changesets to only include locales that weren't tagged and restart the tag builder.
- A platform builder fails (win32_build, linux_build, ...)
- Just hit the 'Force Build' button on the builder (NOTE: resist using the 'Restart Build' button on failed jobs). The build properties from the failed build will be reused, so you don't have to reconfigure anything.
- You may even clobber a builder in between and then restart the build. This won't disrupt anything.
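The partial-tagging recovery above (trimming l10n-calendar-changesets to the untagged locales) can be sketched as a filter. This is a hypothetical illustration; the two-column "locale changeset" format and the sample data are assumptions.

```shell
# Hypothetical sketch: after a partial tag failure, keep only the locales
# the tag builder has NOT yet processed. File format and data are assumed.
cat > /tmp/l10n-calendar-changesets.example <<'EOF'
de abc123
fr def456
it 789abc
EOF
# Suppose only "de" was tagged successfully; drop it and keep the rest.
grep -v '^de ' /tmp/l10n-calendar-changesets.example
```

Restarting the tag builder with the trimmed file avoids re-tagging locales that already succeeded.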