Web Operations/Reference Specification/Platform Blueprint

Networking

Each app functions in a completely isolated private network.

Instances

Instances connect to ports, which are in turn connected to networks.
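
As a rough sketch of how this wiring looks in a HOT template (the resource names and values here are illustrative, not part of the blueprint), an instance attaches to a port, and the port attaches to the app's private network:

app_net:
  type: OS::Neutron::Net

app_subnet:
  type: OS::Neutron::Subnet
  properties:
    network_id: { get_resource: app_net }
    cidr: 10.0.0.0/24

web_port:
  type: OS::Neutron::Port
  properties:
    network_id: { get_resource: app_net }

web_server:
  type: OS::Nova::Server
  properties:
    flavor: m1.small
    image: ubuntu-trusty.amd64-0.1.0-2014073016
    networks:
      # Attach the instance to the app's private network via the port.
      - port: { get_resource: web_port }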

Images

A hybrid golden-image approach is used for building base images. Images are built with a tool called diskimage-builder.

A tool called refspec-diskimage-builder is currently used to apply all Mozilla-specific customizations to the image. This tool does a few main things:

  • Install the base OS and all base system packages
  • Install the Mozilla puppet tree at a specific git reference
  • Install elements needed for connecting to the Heat API for further configuration.

With refspec-dib (refspec diskimage-builder), the base image distro and release name are specified by the MOZ_DIB_DISTRO and MOZ_DIB_RELEASE environment variables.

The puppet tree that is baked into the image is specified by the environment variable DIB_MOZ_PUPPET_REF.

Additional elements to bake into the image are specified via the MOZ_DIB_ELEMENTS environment variable.

The diskimage-builder tool lets you make arbitrary changes to the image being constructed by specifying elements to use. To read more about elements and how to build your own, see the diskimage-builder README.
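
For example, a minimal element is just a directory of numbered hook scripts that diskimage-builder runs at fixed phases of the build. A sketch, with a hypothetical element name and package:

 # elements/moz-example/install.d/50-moz-example
 #!/bin/bash
 # Runs inside the image chroot during the install phase.
 set -eux
 apt-get install -y some-package

Such an element would then be included in the image by listing it in MOZ_DIB_ELEMENTS.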

Building an image

An example invocation of the image building tool is as follows:

 DIB_MOZ_PUPPET_REF=0.1.0 MOZ_DIB_DISTRO=ubuntu MOZ_DIB_RELEASE=trusty ./build.sh 

This will build an image named ubuntu-trusty.amd64-0.1.0-2014073016. The image name follows the format:

<distro>-<distro-version>.<arch>-<mozpuppet-version>-<YYYYMMDDHH>

The architecture, mozpuppet-version, distro, distro-version, and image type can all be customized by setting environment variables.


Installing the image in the image store

By default, the ./build.sh command uploads the image to glance using the keystone profile active in the build shell. You can also control which project the image shows up in by passing a keystone rc file via the MOZ_KEYSTONE_PROFILE_PATH environment variable. For example:

MOZ_KEYSTONE_PROFILE_PATH=~/keystone_my_user MOZ_DIB_DISTRO=ubuntu MOZ_DIB_RELEASE=trusty ./build.sh

The build tools will source the file pointed to by $MOZ_KEYSTONE_PROFILE_PATH before attempting to upload the image to glance.
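
The rc file is assumed to be a standard OpenStack credentials file along these lines (all values are placeholders):

 # ~/keystone_my_user
 export OS_AUTH_URL=https://keystone.example.com:5000/v2.0
 export OS_TENANT_NAME=my-project
 export OS_USERNAME=my_user
 export OS_PASSWORD=changeme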

Service Configuration

The services which are installed on instances are defined in the service matrix (TODO: link to service matrix). A service is implemented in two halves: a front-end interface and a back-end implementation.

The back-end implementation of a service is done with puppet. The implementation should be completely standalone and not rely on Mozilla specifics. A good example of a back-end service implementation is the Puppet Labs apache module. Back-end implementations of a module should always live in their own git repository.

The front-end implementation consumes a back-end service so that users can use a service implementation more easily. For example, an apache vhost definition can take many parameters. There may be parameters that we always want to set for every vhost, so we capture that instantiation behind a front end and offer it to consumers of the platform for easier use. For example:

define mozpuppet::apache::python_vhost (
    $vhost_name,
    ...
    ...
) {
    apache::vhost { $vhost_name:
      port                        => '80',
      default_vhost               => true,
      wsgi_application_group      => '%{GLOBAL}',
      wsgi_daemon_process         => 'wsgi',
      wsgi_daemon_process_options => {
        processes      => '2',
        threads        => '15',
        'display-name' => '%{GROUP}',
      },
      ...
      ... (More wsgi stuff)
      ...
    }
}

Now, instead of tediously writing out a full apache vhost definition for running a WSGI app, a user can simply declare a mozpuppet::apache::python_vhost.
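
For example (the resource title is illustrative):

mozpuppet::apache::python_vhost { 'my_python_site':
    vhost_name => 'my.python.site.com',
}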

Some important notes about writing front ends for a service module:

  • Front ends can be used to make back-end modules easier to use.
  • As a guideline, Puppet built-in types should be contained within the external component modules. Exceptions should only be made when it is apparent that a back-end module is lacking a small bit of functionality that a front-end module (FEM) could easily provide.
  • Generally, front-end definitions should only use user-defined resource definitions.

The Service Matrix

All services listed in the service matrix will have a single corresponding back-end module. Each service can have multiple front-end implementations. For example:

Module   Back End Module     Front End Interfaces
Apache   puppetlabs-apache   mozpuppet::apache::python_vhost
                             mozpuppet::apache::php_vhost
                             mozpuppet::apache::ruby_vhost
Mysql    puppetlabs-mysql    mozpuppet::mysql::server
                             mozpuppet::mysql::python_client
...      ...                 ...

Location of puppet code

Fully featured back-end implementations of services will always live in external git repositories. There is also a need for a central puppet repository where front-end definitions can be written and maintained; for this purpose there exists a root puppet tree. Required back-end modules are defined in a Puppetfile. A tool called Librarian Puppet (https://github.com/rodjek/librarian-puppet) is used to resolve dependencies and install specific versions of back-end modules into the root puppet tree's modules/ directory. This compilation is done at image build time, as described in the "Building an image" section.
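
A Puppetfile pins each back-end module to a forge release or a git reference; the modules and versions below are illustrative:

forge "https://forge.puppetlabs.com"

# Pulled from the Puppet Forge at a fixed version.
mod 'puppetlabs/apache', '1.1.1'

# Pulled from git at a fixed ref.
mod 'mysql',
  :git => 'https://github.com/puppetlabs/puppetlabs-mysql.git',
  :ref => '2.3.0'

Running librarian-puppet install then resolves and installs these modules into the root tree's modules/ directory.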

Versioning of the puppet tree

The root puppet tree will be versioned with three numbers:

<major-release-number>.<minor-release-number>.<security-release-number>

This section needs more thought

Consuming Services

An application consumes application services. Consumption is described via the SoftwareConfig and SoftwareDeployment HOT resources. An in-depth explanation of HOT templates can be found here (TODO: make a subpage? This page should link to the documented heat template)

At this layer of the platform, an SC (SoftwareConfig) describes how a service module is applied to an instance. An SC usually describes its action using a service module front end, but it can use the service back end directly. The following is an example of a puppet SC that would apply the mozpuppet::apache::python_vhost front end:

apache_config:
  type: OS::Heat::SoftwareConfig
  properties:
    group: puppet
    inputs:
    - name: vhost_name
    ...
    ...
    outputs:
    - name: result
    config: |
      mozpuppet::apache::python_vhost { 'vhost':
        vhost_name => $::vhost_name,
        ...
      }


To actually deploy this SC onto an instance (called web_server in this example) you would define an SD (SoftwareDeployment) resource:

apache_deployment:
  type: OS::Heat::SoftwareDeployment
  properties:
    config:
      get_resource: apache_config
    server:
      get_resource: web_server
    input_values:
      vhost_name: my.python.site.com
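
The values in input_values are matched by name against the inputs declared on the SoftwareConfig, so the vhost_name supplied here is what the puppet run on web_server ultimately sees as $::vhost_name.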

Application Data

Early prototypes feed data directly into SoftwareDeployments via environment variables. Eventually we want to replace heat passing variables directly with heat writing the data to ZooKeeper, with configuration actions then consuming their configuration data from ZooKeeper.

Installing the Application Code

Applications will ship an install script in their application code that will be run when the code is installed onto an instance.
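
As a sketch (the file name and contents are hypothetical; the platform only requires that some install script exists), for a Python application this might be as simple as:

 #!/bin/bash
 # install.sh -- run when the application code is installed onto an instance.
 set -e
 pip install -r requirements.txt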

Deploying Code Changes to the Application

Captain Shove will be used to push changes to applications.

Design Principles