Within releng, the puppet master should respond at the unqualified hostname puppet. This is adjustable through manifests/settings.pp for other environments.
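As an illustration, the override is a one-line setting; the variable name below is hypothetical — check manifests/settings.pp for the real one:

```
# hypothetical; see manifests/settings.pp for the actual variable name
$puppetmaster = "puppet"
```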
Masters are defined by the 'puppetmaster' module. The module is designed to run on CentOS 6, but in principle there's no reason it couldn't run on any supported operating system (with some modifications). See that module for all of the gory details of how that works. This page just highlights some of the cooler parts.
The puppet manifests are checked out at /etc/puppet/production. The masters update their manifests from mercurial once every 5 minutes, with a bit of "splay" added (so the update does not always occur exactly on the 5-minute mark). Any errors during the update are emailed, as is a diff of the manifests when they change; the latter forms a kind of change control.
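The update job can be pictured as a short cron-driven script along these lines; the script name, paths, and mail recipient are illustrative assumptions, not the actual releng configuration:

```
# hypothetical /etc/cron.d entry:
#   */5 * * * * root /usr/local/bin/update-puppet-manifests
#
# update-puppet-manifests (sketch):
sleep $(( $(od -An -N1 -tu1 /dev/urandom) ))   # splay: up to 255 seconds
cd /etc/puppet/production || exit 1
old=$(hg identify -i)
hg pull -u || { echo "pull failed" | mail -s "puppet update error" root; exit 1; }
new=$(hg identify -i)
# mail a diff when the manifests actually changed (the "change control" record)
[ "$old" = "$new" ] || hg diff -r "$old" -r "$new" | mail -s "puppet manifests changed" root
```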
The puppet configuration includes 'node_name = cert' and 'strict_hostname_checking = true' to ensure that a host can only get manifests for the hostname in its certificate (which the deployment system gets from DNS).
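In puppet.conf terms this corresponds to something like the following on the master (a sketch; surrounding settings are omitted):

```
[master]
node_name = cert
strict_hostname_checking = true
```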
The closest master is available at the unqualified hostnames puppet and repos (assuming the DNS search path is set correctly), on ports 8140 (puppet), 80 (http), and 443 (https). The http/https URI space looks like this:
- / - see ReleaseEngineering/PuppetAgain/Data
- /deploy (HTTPS only) - deployment CGI script
Note: environments don't work yet in Puppet 3.2.0
For each member of release engineering, an environment is set up with, e.g.:

 [jford]
 modulepath = /etc/puppet/environments/jford/env/modules
 templatedir = /etc/puppet/environments/jford/env/templates
 manifestdir = /etc/puppet/environments/jford/env/manifests
 manifest = $manifestdir/site.pp
and per-user logins are enabled. A clone of the hg repository at this location, along with any necessary secrets and settings, can be used to test and develop changes to puppet. (See also HowTo: Set up a user environment)
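With such an environment in place, a host can be pointed at it for a one-off test run, roughly like this (the username is just an example):

```
# run the agent once against the jford environment (sketch)
puppet agent --test --server puppet --environment jford
```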
Releng users will all have sudo access on the puppet masters, allowing them to diagnose and solve any small issues that come up without depending on IT, although IT is happy to help (and will be required for any changes to the sysadmins puppet configs).
One master in a cluster is designated as the "distinguished master" (DM). This host serves as the hub in a hub-and-spoke synchronization model -- much easier to implement than a full mesh. If the distinguished master is down for a short time, no harm is done - masters can't synchronize, but agents can continue to generate catalogs and receive files.
Masters synchronize secrets by rsyncing the secrets file from the distinguished master periodically. Similarly, data is synchronized from the DM periodically using rsync. If desired, the DM can itself sync from http://puppetagain.pub.build.mozilla.org periodically.
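The periodic syncs can be pictured as cron-driven rsync jobs like the following; the hostnames, paths, and interval are illustrative assumptions:

```
# hypothetical cron jobs on a non-DM master; real hosts/paths differ
*/30 * * * * root rsync -a dm.example.com:/etc/puppet/secrets/ /etc/puppet/secrets/
*/30 * * * * root rsync -a dm.example.com:/data/ /data/
```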
All of the SSL key and certificate materials are synchronized using git. There are two git repositories (one bare, one for editing) under /var/lib/puppetmaster/ssl/. See the manifests for details on how all of this fits together.
All of our installation tools are scriptable. These tools are responsible for fetching a signed certificate from the puppet master and installing it on the client before its first boot. The transaction is authenticated with a shared secret (a password). For systems where the base image is access-restricted, the password is embedded in the image; for other systems (e.g., kickstart), it must be supplied by the person doing the imaging at the beginning of the process.
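From the client side, the certificate fetch might look roughly like this; the exact URL under /deploy, the parameter names, and the output filename are assumptions:

```
# hypothetical request to the deployment CGI; DEPLOY_PASS holds the
# shared imaging password, and the parameter names are assumptions
curl -fsS --data-urlencode "hostname=$(hostname -f)" \
     --data-urlencode "password=$DEPLOY_PASS" \
     https://puppet/deploy -o signed-cert.pem
```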
See Puppetization Process and Certificate Chaining for details on this system.