CloudServices/Sagrada/Metlog

Overview

The Metlog project is part of Project Sagrada, providing a service that allows applications to capture and inject arbitrary data into back end storage suitable for out-of-band analytics and processing.

Project

Engineers

  • Rob Miller
  • Victor Ng

User Requirements

The first version of the Metlog system will focus on providing an easy mechanism for the Sync and BrowserID projects (and any other internal Mozilla services) to efficiently send profiling data and any other arbitrary metrics information into one or more back end storage locations. Once the data has reached its final destination, those with appropriate access should be able to run analytics queries and generate reports on the accumulated data.

Requirements:

  • Services apps should be provided with an easy-to-use API that allows them to send arbitrary text data into the metrics and reporting infrastructure.
  • Processing and I/O load generated by the API calls made by the services apps must be extremely small, so that the impact on app performance remains minimal even when a very high volume of messages is being passed.
  • API should provide a mechanism for arbitrary metadata to be attached to every message payload.
  • Overall system should provide a sensible set of message categories so that commonly generated types of messages can be labeled as such, and so that the processing and reporting functionality can easily distinguish between the various types of message payloads.
  • Message taxonomy must be easily extendable to support message types that are not defined up front.
  • Message processing system must be able to distinguish between different message types, so the various types can be routed to the appropriate back end(s) for effective analysis and reporting.
  • Service app owners must have access to an interface (or interfaces) that will provide reporting and querying capabilities appropriate to the various types of messages that have been sent into the system.

Proposed Architecture

The proposed Services Metlog architecture will consist of three layers:

generator 
The generator portion of the system is the service application itself, which produces the data to be sent into the system. We will provide libraries (described below) that app authors can use to plug in easily. The libraries will take messages generated by the applications, serialize them, and send them out (using ZeroMQ as the transport, by default); a minimal sketch follows the platform list below. The metrics-generating apps that need to be supported initially are based on the following platforms:
  • Mozilla Services team's Python app framework (sync, reg, sreg, message queue, etc.)
  • Node.js (BrowserID).
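
As a rough illustration of the generator side, the sketch below serializes a message dictionary to JSON and pushes it over a ZeroMQ socket using pyzmq. The endpoint address, socket pattern, and field values are illustrative assumptions, not the actual client library implementation.

  import json
  import time

  import zmq  # pyzmq

  # Hypothetical router endpoint; the real address would come from app config.
  ROUTER_ENDPOINT = "tcp://127.0.0.1:5565"

  context = zmq.Context()
  socket = context.socket(zmq.PUSH)  # PUSH/PULL is one plausible pattern
  socket.connect(ROUTER_ENDPOINT)

  # A minimal message; see the "API" section below for the full envelope.
  message = {
      "timestamp": time.time(),
      "logger": "sync",
      "type": "timer",
      "payload": "127",  # e.g. elapsed milliseconds
  }

  # Serialize and hand off to the router with minimal overhead on the app side.
  socket.send(json.dumps(message).encode("utf-8"))
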
router 
The router is what will be listening for the messages sent out by the provided libraries. It will deserialize these messages and examine the metadata to determine the appropriate back end(s) to which each message should be delivered. The format and protocol for delivering these messages to the endpoints will vary from back end to back end. We plan to use logstash as the initial message router, because it is already slated to be installed on every Services server machine and it is built specifically for this type of event-based message routing.
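
Logstash will perform this routing through its own configuration; purely to illustrate the idea, the Python sketch below shows the kind of dispatch logic involved. The message type names and back end handlers are hypothetical.

  import json

  # Hypothetical back end handlers.
  def send_to_statsd(msg):
      pass

  def send_to_arcsight(msg):
      pass

  def send_to_hdfs(msg):
      pass

  # Map message types to back ends; see the endpoint list below.
  ROUTES = {
      "timer": send_to_statsd,
      "counter": send_to_statsd,
      "cef": send_to_arcsight,
  }

  def route(raw_bytes):
      """Deserialize a message and deliver it to the appropriate back end(s)."""
      msg = json.loads(raw_bytes)
      # Unrecognized message types fall through to HDFS for later analysis.
      handler = ROUTES.get(msg.get("type"), send_to_hdfs)
      handler(msg)
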
endpoints 
Different types of messages lend themselves to different types of presentation, processing, and analytics. We will start with a small selection of back end destinations, but we will be able to add to this over time as we generate more types of metrics data and we spin up more presentation and query layers. Proposed back ends are as follows:
  • ruby-statsd: (Phase 1) ruby-statsd is already slated to run on every Services machine.
  • HDFS: (Phase 1) Some data will be inserted into the Mozilla Metrics team's HDFS infrastructure, where it will be available for later Hive and/or map-reduce based queries.
  • ArcSight ESM: (Phase 1) A "security correlation engine" already in use throughout the Mozilla organization.
  • Sentry: (Phase 2) Sentry is an exception logging infrastructure that provides useful debugging tools to service app developers. Sentry is not yet planned to be provided by any Mozilla operations team; using it would require buy-in from and coordination with a Mozilla internal service provider (probably the Services Ops team).
  • Esper: (Phase 3?) A system for "complex event processing", i.e. one that watches various statistic streams in real time looking for anomalous behavior.
  • OpenTSDB: (Phase 3?) A "Time Series Database" providing fine-grained, real-time monitoring and graphing.

API

The atomic unit of the Metlog system is the "message". The structure of a message is inspired by that of the well-known syslog message standard, with some slight extensions to allow for richer metadata. Each message will consist of the following fields (an illustrative example follows the list):

  • timestamp: Time at which the message is generated.
  • logger: String token identifying the message generator, such as the name of the service application in question.
  • type: String token identifying the type of message payload.
  • severity: Numerical code from 0-7 indicating the severity of the message, as defined by RFC 5424.
  • payload: Actual message contents.
  • fields: Arbitrary set of key/value pairs that includes any additional data that may be useful for back end reporting or analysis.
  • env_version: API version number of the "message envelope", i.e. any changes to the message data structure (exclusive of message-type-specific changes that may be embedded within the fields or the payload) must increment the env_version value. The structure described in this document is envelope version 0.8.
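
For concreteness, a version 0.8 message might look like the following Python dictionary. The field values (and the timestamp format) are made up for illustration.

  message = {
      "timestamp": "2012-01-15T13:45:30Z",      # time the message was generated
      "logger": "sync",                         # generating application
      "type": "timer",                          # type of message payload
      "severity": 6,                            # RFC 5424 "Informational"
      "payload": "127",                         # e.g. elapsed milliseconds
      "fields": {"name": "auth", "rate": 0.1},  # arbitrary extra metadata
      "env_version": "0.8",                     # message envelope version
  }
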

We will provide Metlog client libraries that will both ease generation of these messages and handle packaging them up and delivering them into the message processing infrastructure. Implementations of this library are available in both Python and JavaScript for Node.js. Please see the documentation for these client libraries to learn more about the specific APIs available in each environment.
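
As a rough sketch only, a convenience wrapper in Python might look like the following. The class and method names here are hypothetical and may not match the actual interface of the Python or Node.js client libraries; consult their documentation for the real APIs.

  import json
  import time

  class MetlogClient:
      """Hypothetical wrapper that builds version 0.8 envelopes."""

      def __init__(self, logger, sender):
          self.logger = logger
          self.sender = sender  # callable that delivers serialized bytes

      def metlog(self, type, payload="", severity=6, fields=None):
          msg = {
              "timestamp": time.time(),
              "logger": self.logger,
              "type": type,
              "severity": severity,
              "payload": payload,
              "fields": fields or {},
              "env_version": "0.8",
          }
          self.sender(json.dumps(msg).encode("utf-8"))

  # e.g. client = MetlogClient("sync", sender=zmq_socket.send)
  # client.metlog(type="counter", payload="1", fields={"name": "logins"})
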

Use Cases

Python App Framework performance metrics

The Python framework that underlies the Services apps will be annotated with timer calls to automatically generate performance metrics for key activities such as authentication and execution of the actual view callable. The sample rate for these calls can be specified in the app configuration, and a value of 0 turns the timers off altogether. These metrics will ultimately feed into a ruby-statsd / graphite back end provided by Services Ops, where app owners will be able to see graphs of the captured data.
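
A hedged sketch of how such a timer might be wired up follows. The decorator, the configuration value, and the client object (assumed to expose a metlog() method like the hypothetical wrapper above) are illustrative assumptions, not the framework's actual implementation.

  import functools
  import random
  import time

  TIMER_SAMPLE_RATE = 0.25  # hypothetical app-config value; 0 disables timing

  def timed(name, client):
      """Report the wrapped callable's duration as a 'timer' message."""
      def decorator(func):
          @functools.wraps(func)
          def wrapper(*args, **kwargs):
              if TIMER_SAMPLE_RATE <= 0 or random.random() > TIMER_SAMPLE_RATE:
                  return func(*args, **kwargs)  # skipped by sampling
              start = time.time()
              try:
                  return func(*args, **kwargs)
              finally:
                  elapsed_ms = int((time.time() - start) * 1000)
                  client.metlog(type="timer", payload=str(elapsed_ms),
                                fields={"name": name,
                                        "rate": TIMER_SAMPLE_RATE})
          return wrapper
      return decorator
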

Python App Framework exception logging

In addition to timing information, the Python framework for services apps can automatically capture exceptions, sending a full traceback and some amount of local variable information as part of the message payload. This can ultimately be delivered to a Sentry installation for developer introspection and debugging.
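
As an illustration, an exception handler along these lines could capture the traceback and package it as a message. The message type, severity, and field names are assumptions, and client is again the hypothetical wrapper sketched above.

  import sys
  import traceback

  def capture_exception(client):
      """Package the currently handled exception as an 'exception' message."""
      exc_type, exc_value, _ = sys.exc_info()
      client.metlog(
          type="exception",                # assumed message type
          severity=3,                      # RFC 5424 "Error"
          payload=traceback.format_exc(),  # full traceback text
          fields={"exc_type": exc_type.__name__,
                  "exc_value": str(exc_value)},
      )

  # Usage inside the framework's request handling:
  # try:
  #     response = view(request)
  # except Exception:
  #     capture_exception(client)
  #     raise
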

Ad-Hoc service app metrics gathering

Any service app will have the ability to easily generate arbitrary message data and metadata for delivery into the Metlog system. Any messages not specifically recognized as being intended for another back end will be delivered to an HDFS cluster provided by the Metrics team, allowing for later analysis via custom map-reduce jobs or Hive queries.

CEF security logging

Several groups within Mozilla are already using ArcSight ESM to track events and evaluate them for patterns that may indicate attempted security breaches or abuse. ArcSight expects messages in the "Common Event Format" (CEF). Rather than talking to ArcSight directly, services developers could send messages of type "cef" through Metlog, decoupling service applications from a vendor-specific back end.
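
A sketch of what a "cef" message might look like follows. The CEF header layout (CEF:Version|Device Vendor|Device Product|Device Version|Signature ID|Name|Severity|Extension) is ArcSight's published format, but the concrete values and field names below are illustrative.

  cef_payload = ("CEF:0|Mozilla|sync|1.0|AuthFail|Authentication failure|5|"
                 "suser=jdoe src=10.0.0.1")

  cef_message = {
      "timestamp": "2012-01-15T13:45:30Z",
      "logger": "sync",
      "type": "cef",          # routed to the ArcSight back end
      "severity": 4,          # RFC 5424 "Warning"
      "payload": cef_payload,
      "fields": {},
      "env_version": "0.8",
  }
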


Setting up Metlog with Logstash

Setting up logstash to operate with metlog involves installing the logstash-metlog package.

You can find the latest version of the code on github and the latest documentation at logstash-metlog.rtfd.org.

We keep a working Vagrant instance as well; its logstash.conf configuration file is a useful reference point for setting up your own Metlog server instance.