Services/F1/Server/ServicesStatusDB


Goal

When a third-party service like Twitter goes down or becomes very slow, or our own servers are down, we need to inform the client of the outage and ask it to retry later. If possible, we should also tell the client what is going on.

Outage definition

The Share server can have three types of outages:

  • a Mozilla infrastructure outage
  • a third-party outage, like "Twitter is down"
  • a scheduled maintenance window

Client UX on outage

When an outage happens, the server returns a 503 with a Retry-After header, possibly an X-Strict-Retries header, and possibly an explanation in the response body.

When a client tries to send a request and gets back a 503, a bar pops up:

  Twitter seems to be unresponsive. We will try again automatically in 5 minutes. [Force retry]

The "Force retry" button in the pop-up lets the end user force an immediate retry.

When possible, the server explains in the response body why the service is down, and the client can display that information:

  As scheduled, our system is currently in a maintenance window that will be over in 25 minutes.

When the X-Strict-Retries header is present, as in this example, the "Force retry" button does not appear.

After three attempts, automatic or forced, the message is discarded and the user gets a pop-up bar:

  Failed to send the message after 3 attempts. Discarded.
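
For illustration, the client-side behaviour described above could look roughly like the sketch below. This is not the actual F1 client code; the endpoint, the payload and the show_bar() helper are hypothetical, and the Python requests library stands in for whatever HTTP layer the real client uses.

 # Illustrative sketch only: the endpoint and show_bar() are hypothetical.
 import time
 import requests
 
 MAX_ATTEMPTS = 3
 
 def show_bar(message, force_retry=False):
     # Stand-in for the pop-up bar in the real client UI.
     print(message + (" [Force retry]" if force_retry else ""))
 
 def share(payload, service="twitter.com"):
     headers = {"X-Target-Service": service}
     for attempt in range(MAX_ATTEMPTS):
         resp = requests.post("https://share.example.com/send",
                              data=payload, headers=headers)
         if resp.status_code != 503:
             return resp                      # sent (or failed for another reason)
 
         retry_after = int(resp.headers.get("Retry-After", 300))
         strict = "X-Strict-Retries" in resp.headers
         reason = resp.text                   # optional explanation from the server
 
         # Pop up the bar; hide "Force retry" when X-Strict-Retries is set.
         show_bar(reason or "Service unresponsive, retrying in %ds" % retry_after,
                  force_retry=not strict)
         time.sleep(retry_after)              # a "Force retry" would cut the wait short
 
     show_bar("Failed to send the message after 3 attempts. Discarded.")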


Principle

This section describes the flow when a client makes a request:

1. On every request the client adds an X-Target-Service header containing the domain of the service it wants to reach. For example, if the client wants to share on Twitter, it adds "X-Target-Service: twitter.com".

2. The web server (NGinx) that receives the request asks the Services DB for the status of that service (as described later) and decides whether the request should go through or not.

3. If the request is rejected, the client receives a 503 with a Retry-After header and has to wait before retrying. It may also get a reason in the body and an X-Strict-Retries header.

4. If the request is accepted, it is passed to the upstream server (Python) that does the job.

5. If the upstream server succeeds, it notifies the Services DB asynchronously.

6. If the upstream server fails, e.g. because the third-party service is considered down, it notifies the Services DB asynchronously and sends back a 503 with a Retry-After header (see the sketch below).
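
A minimal sketch of steps 5 and 6 on the Python side, assuming the python-memcached client (membase speaks the memcached protocol); the key names and helper functions are illustrative, and the real notification is asynchronous, which the sketch skips for brevity:

 # Sketch of steps 5-6: report the outcome of a request to the Services DB.
 # Key names and helpers are illustrative only.
 import memcache
 
 db = memcache.Client(["statusdb.example.com:11211"])
 
 def notify(service, success, ttl=300):
     key = "%s:%s" % (service, "GR" if success else "BR")
     # incr() returns None when the counter does not exist yet,
     # so seed it with the configured TTL on first use.
     if db.incr(key) is None:
         db.add(key, 1, time=ttl)
 
 # After a successful share:
 notify("twitter.com", success=True)
 # After a timeout or a 5xx from the third party:
 notify("twitter.com", success=False)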


Database

The DB is a membase key/value store; for each service it stores:

  • GR: the number of good requests (TTL-ed)
  • BR: the number of bad requests (TTL-ed)
  • Disable: a flag marking the service as disabled
  • Retry-After: the value of the Retry-After header
  • TTL: the time after which GR and BR are reset to 0
  • MinReqs: the minimum number of requests (GR+BR) before the ratio is considered meaningful
  • MinRatio: the success ratio, between 0 and 1, under which the service is considered unreliable

The DB is replicated in several places and is eventually consistent.
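
For illustration only, these values could be laid out as one key per field and fetched in a single round trip; the "service:field" key layout below is an assumption, not part of the design:

 # Sketch: fetch the status values for one service in a single round trip.
 # Assumes a memcached-compatible client; key names are illustrative.
 import memcache
 
 db = memcache.Client(["statusdb.example.com:11211"])
 
 FIELDS = ("GR", "BR", "Disable", "Retry-After", "TTL", "MinReqs", "MinRatio")
 
 def service_status(service):
     keys = ["%s:%s" % (service, field) for field in FIELDS]
     found = db.get_multi(keys)               # missing keys are simply absent
     return dict((field, found.get("%s:%s" % (service, field))) for field in FIELDS)
 
 status = service_status("twitter.com")
 # e.g. {"GR": 120, "BR": 3, "Disable": None, "Retry-After": 301, ...}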

Back-off decision process

For every service, the parameters that can be configured are:

  • Retry-After: the value of the header
  • TTL: the time after which GR and BR are reset to 0
  • MinReqs: the minimum number of requests (GR+BR) before the ratio is considered meaningful
  • Threshold: the success ratio, between 0 and 1, under which the service is considered unreliable (MinRatio in the DB)

These parameters have default values stored in configuration files but are pushed into the DB. In other words, they can be changed dynamically by the workers or an admin application.
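
One way to seed those defaults without clobbering values changed at runtime is to use add(), which only writes a key that does not exist yet. This is a sketch under the same key-layout assumption as above; the numbers match the scenario further down:

 # Sketch: push per-service defaults from the configuration into the DB.
 # add() is a no-op when the key already exists, so values changed dynamically
 # by workers or an admin application are left alone.
 import memcache
 
 db = memcache.Client(["statusdb.example.com:11211"])
 
 DEFAULTS = {"Retry-After": 301, "TTL": 300, "MinReqs": 3, "MinRatio": 0.3}
 
 def push_defaults(service, defaults=DEFAULTS):
     for field, value in defaults.items():
         db.add("%s:%s" % (service, field), value)
 
 push_defaults("twitter.com")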

Every time a request comes in, the web server gets the Disable, GR and BR values and calculates the success ratio to decide whether the request should be rejected.

In pseudo-code:

 if disabled:
     # the service was explicitly disabled, e.g. a maintenance window
     res = 503()
     res.headers = {"Retry-After": retry_after}
     if disabling_reason:
         # strict mode: no "Force retry" on the client, reason in the body
         res.headers["X-Strict-Retries"] = 'on'
         res.body = disabling_reason
     raise res
 
 num_reqs = GR + BR
 if num_reqs < min_reqs:
     # not enough requests yet for the ratio to be meaningful
     return
 
 ratio = GR / float(num_reqs)
 if ratio < threshold:
     # too many failures: back off the client
     res = 503()
     res.headers = {"Retry-After": retry_after}
     raise res

What are "Good" and "Bad" responses from the third-party services is to the workers discretion.

Rationale for the threshold

When a third-party service like Twitter goes down or becomes very slow, clients keep retrying, sending more and more requests to our servers, and our infrastructure gets overloaded and potentially unresponsive.

The goal of the threshold is to give every front web server a way to pre-emptively back off, for a limited time, any new request to a service that is down, in order to avoid piling up unnecessary work and triggering infrastructure alarms. This regulation lets our servers return to normal flow once Twitter is up again.

What happens without a threshold

Let's say we get 100 requests per second (RPS) per server, each request being held open for as long as Twitter takes to answer. Each server is able to handle a maximum of 500 RPS.

Since Twitter is down, all those requests time out after 30 s. After 30 s, our Twitter server is handling 3,000 concurrent requests (100 RPS × 30 s). The system has already started to send back errors because it is unable to hold that many concurrent requests.

Since we don't do any monitoring of Twitter, we're unable to determine whether the outage is a problem on our side (like a sudden spike of activity) or on Twitter's side.

We end up rejecting all requests, and the client gets a message saying that it did not work; we are unable to tell the client why it is happening. Meanwhile, Nagios turns RED on all those servers because the heartbeat is not responding anymore. Ops have to intervene and check what's going on. "Grrr, Twitter is down... not my fault."

What happens with a threshold

Under the same conditions.

Since Twitter is down, requests still time out after 30 s, but after a few seconds the success/failure ratio drops below the threshold and we're able to tell end users that there's a Twitter outage. Only a limited number of requests to Twitter still reach our servers, so we don't trigger any infrastructure alerts.

We're still backing off all requests and the client gets a message saying that it did not work, but we can regulate and handle this automatically, and we can tell the end user that Twitter is down.

Scenario

Here's a full example. Let's say we have these values:

  • ttl = 5 minutes
  • retry-after = 5 minutes and 1 second
  • [R] ratio under which we back off clients (MinRatio/Threshold): 0.3
  • [T] total number of requests before we check the ratio (MinReqs): 3
  • [G] good requests
  • [B] bad requests

Let's assume we start with G at 1 and B at 0, and that 10 requests are made while Twitter is down.

Sequence:

0s: Twitter goes down. G = 1, B = 0. The TTL starts.
2s: G = 1, B = 1, total = 2 (below MinReqs, no check yet)
3s: G = 1, B = 2, total = 3, ratio = 0.33 (still above 0.3)
4s: G = 1, B = 3, total = 4, ratio = 0.25 (below 0.3)
--- from this point on we back off requests ---
5s: client 4 backed off, retry at 5m6s
15s: client 5 backed off, retry at 5m16s
30s: client 6 backed off, retry at 5m31s
45s: client 7 backed off, retry at 5m46s
50s: client 8 backed off, retry at 5m51s
52s: client 9 backed off, retry at 5m53s
---
60s: Twitter is back online
--- calls are still being rejected ---
4m59s: client 10 backed off, retry at 10m
5m: G = 0, B = 0. The TTL starts again.
5m6s: client 4 retries, success, G = 1, B = 0
5m16s: client 5 retries, success, G = 2, B = 0
5m31s: client 6 retries, success, G = 3, B = 0
5m46s: client 7 retries, success, G = 4, B = 0
5m51s: client 8 retries, success, G = 5, B = 0
5m53s: client 9 retries, success, G = 6, B = 0
...
10m: client 10 retries, success, G = 7, B = 0
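
The threshold crossing at 4s can be checked directly against the pseudo-code above:

 # Quick check of the back-off decision at 2s, 3s and 4s in the sequence above.
 def backed_off(G, B, min_reqs=3, threshold=0.3):
     total = G + B
     if total < min_reqs:
         return False                  # ratio not meaningful yet
     return G / float(total) < threshold
 
 print(backed_off(1, 1))   # 2s: False (only 2 requests, below MinReqs)
 print(backed_off(1, 2))   # 3s: False (ratio 0.33, still above 0.3)
 print(backed_off(1, 3))   # 4s: True  (ratio 0.25, below 0.3 -> back off)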


As you can see, one particularity of the system is that once Twitter is back online, there is a bit of inertia before all clients hit our servers again.

One small caveat: when the TTL-ed values are reset to 0, if the service is still down, a small number of requests will get through before the threshold becomes meaningful again.

The configuration values really depend on what kind of outage we want to target: long ones (hours) or medium ones (a few minutes).