
The current state of Sync crypto

See Labs/Weave/Developer/Crypto for thorough details.

In short:

  • Generate an RSA key pair
  • Generate a symmetric key from the passphrase, using PBKDF2 and a random salt, and encrypt ("wrap") the private key using that symmetric key
  • Upload public key, wrapped private key + IV + salt to server
  • For each collection, generate a random symkey, encrypt it using the public key, and upload it to the server.
  • Each encrypted object contains a relative URI pointing to the key that can decrypt it (in 99.99999% of cases this is the same key; the exception is WBO IDs containing slashes, which gets clients very confused).
  • Fetching a decrypted object involves:
    • Fetching the encrypted object from the server
    • Looking at the key URI in that JSON blob to find the symkey URI
    • Fetching (if necessary) from the server and RSA-decrypting the symkey
    • Using the symkey to AES-decrypt the object.

Goal and motivation

We want to drop the PKI layer. We don't use it (for the original speculative sharing scenarios), and it costs client computation, server storage, and network bandwidth (~ 16% of our API transactions are key fetches).


tl;dr: replace the passphrase with an AES key, which will be schlepped around using J-PAKE (so typing it is likely unnecessary). Use this key to indirectly encrypt the 'bulk' symkeys. No RSA involved.

Existing passphrases will be upgraded to this scheme using PBKDF2.

Passphrase / Sync Key

Rather than have a user enter a passphrase (which will likely be weak), we have already transitioned to having them generate a "sync key" (which they can replace if they so choose). This is 20 alphanumeric characters.

We propose to expand this to 26 characters, enough for a base32-encoded 128-bit AES key. This avoids the use of PBKDF2 to routinely bootstrap the sync key into an AES key. Remove the ability for users to enter a key; it's always generated (giving us more confidence in the amount of entropy), and can be regenerated if desired.

The length of this key is not a big issue: we intend to use J-PAKE for the (infrequent) migration of keys between devices. In any case, 26 is not significantly worse than 20 if typing it does enter the picture, and the use of a nice base32 alphabet makes keyboard entry less error-prone.

As before, the Sync Key is stored on the client. The encryption and HMAC keys are derived from it.

Deriving encryption and HMAC keys from the Sync Key

First we base32-decode the syncKey from 26 characters to 16 bytes (128 bits). The base32 alphabet is the one specified in RFC 4648 except with l replaced by 8 and o replaced by 9:

     prk = decodeKeyBase32(syncKey)

The resulting key is then expanded to an encryption key T(1) and an HMAC key T(2) using the HKDF-Expand algorithm described in RFC 5869:

     info = "Sync-AES_256_CBC-HMAC256" + username
     T(1) = HMAC-SHA256(prk, "" + info + 0x01)
     T(2) = HMAC-SHA256(prk, T(1) + info + 0x02)
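A minimal Python sketch of this derivation, using only the standard library. The helper names are ours, not from the Sync codebase; the friendly-alphabet handling (8 for l, 9 for o) follows the decoding rule described above.

```python
import base64
import hashlib
import hmac

def decode_key_base32(sync_key: str) -> bytes:
    """Decode a 26-character friendly-base32 Sync Key to 16 bytes.
    The friendly alphabet substitutes 8 for l and 9 for o; undo that,
    then pad to a multiple of 8 characters for the standard decoder."""
    normalized = sync_key.upper().replace("8", "L").replace("9", "O")
    return base64.b32decode(normalized + "======")

def derive_keys(sync_key: str, username: str):
    """Expand the decoded Sync Key into an encryption key T(1) and an
    HMAC key T(2), per the two HMAC-SHA256 steps shown above."""
    prk = decode_key_base32(sync_key)
    info = b"Sync-AES_256_CBC-HMAC256" + username.encode("utf-8")
    t1 = hmac.new(prk, info + b"\x01", hashlib.sha256).digest()
    t2 = hmac.new(prk, t1 + info + b"\x02", hashlib.sha256).digest()
    return t1, t2
```

Note that because the Sync Key is already uniformly random, the HKDF-Extract step of RFC 5869 is skipped and the decoded key is used directly as the PRK.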

Upgrading existing Sync Keys to the new AES key

Existing users will have their passphrase bootstrapped into an AES key using PBKDF2:

  • Spot old version
  • Get a salt (Services.syncID from the meta/global object. The client will be bumping this…)
  • Apply PBKDF2 to salt and passphrase to yield our new AES key
  • Generate bulk keys, encrypt
  • Attempt to store, using appropriate race-avoidance technique in case there are multiple clients attempting to upgrade.
  • Wipe old key data.
 -- Note that we leave key data in pubkey/privkey, because earlier versions of Sync unfortunately check for these prior to checking storage version. We opted to leave the keys there to get better user experience when an upgrade is needed.

So long as the salt is available, other clients can apply PBKDF2 to their stored passphrase and the salt to yield the new key without any re-entry or J-PAKE-style key distribution.

Note that each major storage change alters the syncID, and thus PBKDF2 will only work for a single such change -- afterwards, one's passphrase is upgraded into a different sync key, and that won't work.
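The upgrade derivation can be sketched as follows. The hash function and iteration count here are illustrative assumptions, not the values mandated by the spec:

```python
import hashlib

def upgrade_passphrase(passphrase: str, sync_id: str,
                       iterations: int = 4096) -> bytes:
    """Derive a new 128-bit AES Sync Key from an old passphrase, using
    the syncID as the PBKDF2 salt. PRF and iteration count are
    assumptions for illustration."""
    return hashlib.pbkdf2_hmac("sha1",
                               passphrase.encode("utf-8"),
                               sync_id.encode("utf-8"),
                               iterations,
                               dklen=16)
```

Because the derivation is deterministic, any client holding the old passphrase and the shared syncID salt arrives at the same new key independently.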

Bulk keys

The server stores one or more bulk keys: one default, and an optional set of keys associated with specific collections. This allows rudimentary sharing scenarios (provide your bookmarks collection key to a web app, and your passwords remain secure). A single default key is simpler than requiring per-engine/collection keys when there is no obvious need for them.

Bulk keys are stored in the single WBO storage/crypto/keys.

Bulk keys are encrypted and HMACed using the sync key outputs, and cached on the client. (Current caching is per-session, but they're stored as identities to make persistence easier to implement.)

The timestamp on the collections record allows clients to invalidate their key cache when a new key is associated with a collection: the 'crypto' collection will appear to have changed.
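That invalidation check can be as simple as comparing a cached timestamp against the server's collection info; the function name and data shapes here are illustrative, not Sync API:

```python
def keys_need_refresh(cached_crypto_modified: float,
                      info_collections: dict) -> bool:
    """Return True when the 'crypto' collection's last-modified
    timestamp has moved past what we cached, meaning the bulk keys
    may have changed and the local key cache must be refreshed."""
    return info_collections.get("crypto", 0.0) > cached_crypto_modified
```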


It's a good practice to use separate keys for HMAC and for encryption. Bulk keys are really pairs of keys, each of which is randomly generated.

This approach was selected over having a single HMAC key because of the convenience for implementing some sharing-like scenarios.
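A sketch of generating a bulk key pair and assembling a keys payload before it is encrypted with the Sync-Key-derived keys. The field names and encoding here are illustrative, not the wire format:

```python
import base64
import json
import os

def new_bulk_key_pair() -> dict:
    """A bulk key is really a pair of independently random 256-bit
    keys: one for AES encryption, one for HMAC."""
    return {"encryption": os.urandom(32), "hmac": os.urandom(32)}

def keys_payload(default_pair: dict, collection_pairs: dict = None) -> str:
    """Shape of the storage/crypto/keys plaintext: one default pair plus
    optional per-collection pairs (field names are illustrative)."""
    b64 = lambda b: base64.b64encode(b).decode("ascii")
    record = {
        "default": [b64(default_pair["encryption"]),
                    b64(default_pair["hmac"])],
        "collections": {
            name: [b64(p["encryption"]), b64(p["hmac"])]
            for name, p in (collection_pairs or {}).items()
        },
    }
    return json.dumps(record)
```

Handing out only a collection's pair (e.g. bookmarks) exposes nothing about the default pair or other collections, which is what enables the sharing scenario above.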

Proposed flows

New user

  • Generate a 128-bit Sync Key (26 characters in our base32 alphabet). Store it as an Identity (as we do now.)
  • Generate a random default encryption/HMAC key pair. Encrypt it using the keys derived from the Sync Key, upload it to the server. Store it as an Identity.
  • Encrypt and upload collections in the obvious way.
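The key-generation step above might look like this sketch (the helper name and lowercase display form are assumptions):

```python
import base64
import os

def generate_sync_key() -> str:
    """Generate a random 128-bit Sync Key and render it as 26
    characters of the friendly base32 alphabet (l replaced by 8,
    o replaced by 9)."""
    raw = os.urandom(16)
    encoded = base64.b32encode(raw).decode("ascii").rstrip("=")
    return encoded.replace("L", "8").replace("O", "9").lower()
```

Because the key is always machine-generated from `os.urandom`, we get the full 128 bits of entropy that a typed passphrase could not guarantee.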

Existing user

(See above.)

Fetching objects

  • (On startup: invalidate/refresh key cache if keys collection has changed. I believe we make this fetch anyway...)
  • Retrieve object from collection.
  • Look up key for collection name (defaulting to "keys/default"). Fetch if necessary.
  • Verify HMAC using appropriate per-collection or default key. On failure, check for changed keys.
  • Decrypt object.
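The HMAC-verification step can be sketched as follows, assuming (as in current Sync records) that the HMAC is computed over the base64-encoded ciphertext string; the function name is ours:

```python
import hashlib
import hmac

def verify_record(hmac_key: bytes, ciphertext_b64: str,
                  record_hmac_hex: str) -> bool:
    """Verify a WBO's HMAC before attempting decryption, using the
    per-collection (or default) HMAC key. Constant-time comparison
    avoids leaking information through timing."""
    computed = hmac.new(hmac_key, ciphertext_b64.encode("ascii"),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(computed, record_hmac_hex)
```

A verification failure is the cue to re-fetch storage/crypto/keys: the record may have been encrypted under a newer bulk key.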

Version bump

This change is incompatible with older clients: not only due to reorganizing the storage namespace, but also because existing clients will be unaware of the simpler encryption mechanism. That means a storage version bump (from 3 to 4).

An error in the argument order of the PBKDF2 function necessitated another version bump, from 4 to 5. Only a small set of nightly builds and add-ons used version 4; these will all prompt to upgrade on launch if a v5 storage version is found on the server.