It wasn't clear which was better from a price perspective, but from a performance perspective EBS is clearly superior, and AMI creation and everything related to it is significantly easier because Amazon provides GUI and snapshot tools for building EBS-backed AMIs. As a result, the AMIs prepared so far are EBS-backed rather than S3-backed.
==Architecture==

===Preliminary Design===
One server remains always-on (probably an in-house box of some sort). It holds the keypair needed to access Amazon. A Python script listens for Mozilla Pulse messages and simultaneously listens for requests on a socket port. A client program sends a changeset # to the socket server, and the server either spins up an instance (using synchronization primitives to keep track of how many resources are in use) or queues the job if too many instances are already running. When a job completes, the server downloads the built binary from the EC2 instance, shuts the instance down (freeing the resource, unless another queued job exists, in which case that job takes over the instance and executes), and serves the binary via HTTP. It announces the download URL via a Pulse message, which is how the client program knows its build is complete. Binaries older than 3 days are deleted by a cron job.
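The capacity-tracking logic above (run immediately if a slot is free, otherwise queue; on completion, hand the freed instance to a queued job) could be sketched as below. All names here (<code>Dispatcher</code>, <code>MAX_INSTANCES</code>, the string return values) are hypothetical, and the sketch deliberately omits the Pulse, socket, and EC2 plumbing:

```python
# Minimal sketch of the dispatcher's resource-tracking core (hypothetical
# names; the real server would also talk to Pulse, a socket port, and EC2).
import queue
import threading

MAX_INSTANCES = 4  # assumed cap on concurrently running EC2 instances


class Dispatcher:
    def __init__(self, max_instances=MAX_INSTANCES):
        self._lock = threading.Lock()       # synchronization primitive
        self._running = 0                   # instances currently in use
        self._max = max_instances
        self._pending = queue.Queue()       # jobs waiting for a free slot

    def submit(self, changeset):
        """Start a build now if capacity allows, otherwise queue it."""
        with self._lock:
            if self._running < self._max:
                self._running += 1          # would spin up an instance here
                return "started"
            self._pending.put(changeset)
            return "queued"

    def job_done(self):
        """A build finished: give the slot to a queued job, if any."""
        with self._lock:
            try:
                nxt = self._pending.get_nowait()
                return ("started", nxt)     # reuse the instance for this job
            except queue.Empty:
                self._running -= 1          # shut the instance down, free slot
                return ("idle", None)


d = Dispatcher(max_instances=1)
print(d.submit("abc123"))   # started
print(d.submit("def456"))   # queued
print(d.job_done())         # ('started', 'def456')
print(d.job_done())         # ('idle', None)
```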
== Usage ==
Straight up building a changeset with the server:
 mozcloudbuilder -s cloudbuilder.server.somesite.org -p 9999 --changeset=xxxxxxxxxx   << (returns URL to built binary)
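What <code>mozcloudbuilder</code> does on the wire is not pinned down above; a minimal client sketch, assuming a newline-delimited protocol (changeset id in, binary URL out) and using the server name and port from the example command, might look like:

```python
# Hypothetical sketch of the mozcloudbuilder client side: send a changeset
# id to the dispatcher's socket port and read back the URL of the built
# binary. The line-based protocol is an assumption, not a documented API.
import socket


def request_build(changeset, host="cloudbuilder.server.somesite.org", port=9999):
    """Send a changeset id; return the URL the server replies with."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(changeset.encode("ascii") + b"\n")
        # One-line reply: the HTTP URL where the binary will be served.
        return sock.makefile("r").readline().strip()
```

In the design above the URL also arrives via a Pulse message, so a real client might listen there instead of (or in addition to) reading the socket reply.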
Bisect, build, and run interactively via changelog:
 mozremotebuilder -g 2011-06-15 -b 2011-06-18