Labs/Identity/VerifiedEmailProtocol

Note: The Verified Email Protocol has been deprecated. Please check the BrowserID protocol.
Note: this document has been merged with others, see Identity/Verified_Email_Protocol/Latest for the latest version.

Verified Email Protocol: Overview and Introduction



Feedback is very welcome; please send to Michael Hanson (mhanson at mozilla dot com)

The really short version

The goal of this proposal is to provide a very simple web-centric binding to a well-understood identity token.

Specifically: this proposal defines a way for a user to prove to a website that they control an email address.

It does not require that email providers support the system, but provides a better experience and more control if they do. Since most user logins have an email-based password reset option, this system is effectively a universal login system for most of the web; websites that have stronger login requirements are free to use them, and have a more secure way to federate their logins to other sites. It provides more control over the duration and scope of a user login than is currently available in browser-based systems.

Introduction

A number of web-scale identity proposals start by creating a new identity token - for example a user ID or personal URL - and go on to describe how to use that token to authenticate the user. What we've learned from several years of experience with OpenID (and related protocols) is that this isn't quite good enough: establishing an identity token, in isolation from the rest of the web, doesn't actually help a site engage with its users.

This proposal instead focuses on an identity that is universally understood and useful for users and service operators: the email address. Email is already a fully-distributed system, with millions of participating hosts and billions of accounts. It is deeply interdependent with the Domain Name System, which provides a globally-distributed name lookup system. It is understood that a single human may have more than one address, and that an address may represent shared authority between several persons. Email already supports pseudonymous identity, through anonymous remailers. And, most importantly, users understand what an email address represents.

It is understood that "alice@site.com" means that there is a person, here called "alice", who has agreed to trust "site.com" to test her identity and to act as a secure relay for messages. The fact that we use this identifier only for SMTP mail delivery is an accident of history; there is no reason we can't bootstrap from this identifier to other protocols (as recent proposals like Webfinger have made clear).

Many systems that are based on username/password authentication have an email password reset mechanism. This fact means that, for those users, the username/password pair is effectively a shorthand reference for "control of this email address". This proposal asserts that, for those use cases, proof of control of the email address is a better authentication mechanism than a username/password; if the site then chooses to provide a pseudonymous "user name" for their site, they are free to do so, but the verified email address is paramount.

This system is completely neutral on the question of how an email host authenticates a user to prove control of the identity. If the host wants to use hardware, biometric, or social keys for their authentication scheme, they are completely free to do so. What this proposal describes is a way for this host to tell other hosts what identity the user has, and even to explain to them how that identity was established.

It is an implicit feature of the modern mail delivery system that a mail host administrator can impersonate any account on that host -- that is, the owner of site.com can read all the mail of any user of site.com, which includes any password reset messages delivered to those users. This system does not change that fact. Protecting a user from malicious host administrators is beyond the scope of this proposal (though this system could be part of a more universal system that would help with it).

It is understood that a system like this cannot expect anything close to universal deployment overnight, or ever. This system therefore makes use of another special property of the email address -- the fact that it can be independently verified by a third party -- to propose a system of "secondary identity authorities". These are servers, operated by trusted entities, that authenticate an email address on behalf of a server that does not participate in the protocol described here. A relying party can choose which, if any, secondary authorities they trust. These secondary authorities can help bootstrap the system by providing verified email addresses to relying parties, even if the mail host isn't able to support the system.

Scope of the system

The system proposed here allows a user agent (that is, a web browser) to:

  • Maintain a list of the user's verified email addresses
  • and demonstrate control of an email address to a website through cryptographic means

The system allows an identity authority (that is, a mail server host) to:

  • Store a secure, revocable key representing a user authentication into a browser
  • Indicate to the browser the terms of use of the key, so that it can be expired, invalidated, and refreshed as needed
  • With some additional work, create pseudonymous identities that allow a user to provide a different address to each relying site
  • Optionally certify a user authentication to allow the browser to present proof of authentication to a relying site without a pingback to the authority.

The system allows a relying site (that is, any website) to:

  • Indicate to the browser that they can accept verified email addresses
  • Ask the browser to prompt the user to select an address
  • Receive a cryptographically-verifiable assertion of the user's identity
  • Verify the identity assertion locally, without leaking information about the user to third parties, or, optionally, use a trusted verification service.

Operation of the system

The basic message flow that makes this system work is independent of the exact cryptographic protocols and message formats that encode the messages. For purposes of clarity, however, it is described here using a specific set of protocols. The reader is asked to understand that those choices are for illustrative purposes, and that multiple encodings of the trust relationships described herein are possible.

Specifically: The explanation contained here will assume that user data lookups occur through the Webfinger protocol, that site-level metadata is retrieved through HTTPS using the .well-known/host-meta mechanism described in IETF RFC 5785 and draft-hammer-hostmeta, that assertions are generated and signed according to the JSON Web Tokens draft, and that asymmetric cryptography is performed using either RSA or ECDSA keypairs. When reference to a public key certificate is made, it is usually assumed that this would be an X509 certificate but there is no strong requirement that it be.

To avoid describing the system all at once, this description will build up, starting with primary authorities, then adding a relying party, then adding secondary authorities, and then describing verification services. Finally, the certification process, which removes a number of the message exchanges from the protocol, is added.

Our dramatis personae for this explanation will be:

  • Alice, a web user. She has email addresses "alice@mailhost.com" and "alice@othermail.com"
  • mailhost.com: a server that has implemented the requirements of a primary identity authority in this scheme
  • destination.com: a server that Alice wants to visit and establish an identity with
  • othermail.com: a different server that Alice also has an account with; they have not implemented this scheme
  • trusted.org: a secondary verifier, which is trusted by both Alice and destination.com

Here is a simplified overview of the message exchange; read on for more detail:

  1. Alice logs into mailhost.com. Alice's browser creates a cryptographic keypair, storing the private key in the browser and returning the public key to mailhost.com.
  2. Alice visits destination.com, which asks for her address. Alice picks "alice@mailhost.com" from a list presented by her browser.
  3. Alice's browser uses the private key to sign an identity assertion that includes "alice@mailhost.com", along with a timestamp and audience restriction, and uploads the assertion to destination.com.
  4. destination.com retrieves Alice's public key from mailhost.com by using a webfinger lookup over SSL.
  5. destination.com verifies the identity assertion, and is now confident that the browser is being used by Alice.

(Read on for more detail, including a certification scheme that avoids the webfinger lookup, discussion of how to handle Alice's multiple browsers, expiration of the private key, secondary verifiers, and more…)

Primary Authorities

The authentication flow starts with a primary identity authority talking to a user agent -- that is, a user logging in to their email account. The user does what he or she needs to do to log in - perhaps a username/password, perhaps more.

The authority then executes a method in a web page served from its domain:

navigator.id.saveVerifiedAddress( "alice@mailhost.com", <callback>, [{optionalKey: value, ... }])
If you're passing multiple items, you should instead use something like
 navigator.id.saveInfo({"email":"alice@mailhost.com", "expire-timestamp":..., ...}, <callback>)
which would provide the most flexibility for future changes.

The browser will then:

  1. Create an asymmetric cryptography keypair (for example, an RSA or ECDSA keypair)
  2. Store "alice@mailhost.com", along with the private key, in secure, private, local storage
  3. Return the public key to the web page through the callback
Primary authority is determined by a webfinger lookup on the provided email address, to ensure that the domain is allowed. An info block that fails this check is not stored; the user is notified of the attempt, and the call returns false.

The page will then upload the public key to the server, who will store it in a database, keyed on the user's address, for later retrieval (see section 6.4 for a discussion of what else the server could provide with it).

Note that optional arguments to saveVerifiedAddress could include:

  • expire-timestamp: a timestamp beyond which the private key is no longer valid
  • expire-session: an indication that the private key should be destroyed when the current browser session ends (and, probably, that the key should only be held in memory rather than ever put to disk)
  • require-challenge: an indication that the user agent should require the user to provide a "master password" or similar browser-level authentication before using the private key
  • require-encryption: an indication that the user agent should only persist the private key in a browser-level encrypted format
  • refresh-url: a URL that the browser can visit to re-establish a verified address, if the current key has expired
  • refresh-method: a snippet of JavaScript that can be executed to re-establish a verified address, if the current key has expired
See the note above for a suggestion on how this data should be passed.

With a little bit more work, one could specify an argument that would allow the user agent to load a page containing a login form (including anti-CSRF parameters), fill it out, and submit it, on the user's behalf.

Relying Parties

The user then navigates to a website that wants to use his or her identity.

The website serves a page that includes this JavaScript:

navigator.id.onVerifiedEmail = function(identity_assertion) {
  // upload assertion to server...
};

This JavaScript tells the browser that this page understands the Browser-Verified Identity scheme. The browser may, if it wishes, use this information to enable a "sign in" or "connect" button in the browser user interface. Alternatively, the website may invoke this JavaScript function:

navigator.id.getVerifiedEmail();

… which informs the browser that it should start the "sign in" user interface flow immediately. This function could be called from, for example, a "Sign In" button in the web page.

Sadly, I can see most sites opting for the latter, possibly as part of the page's onLoad, so that they can get "High User Engagement!"®. We should probably not directly expose the getVerifiedEmail() call, in favor of a less blocking method.

When getVerifiedEmail is triggered, the browser:

  1. Presents to the user a list of the addresses that have been previously stored in the browser (note that this is a good place for browser to enhance informed consent for personal data disclosure - see the User Experience section below).
  2. When the user selects one of these addresses, retrieves the private key associated with that address
  3. If the key has expired, initiates key refresh - potentially a large topic; more needs to be written on that (see also Open Issues - perhaps the UA needs to check that the key is still valid here). Please include a list of those issues for the design; I would not like to have missed any.
  4. Once a key is found, the browser creates an assertion containing the email address, an audience, and a valid-until timestamp, and signs it with the private key. This is the identity assertion.
  5. The assertion is delivered to the onVerifiedEmail callback, which (optional? required?) uploads it to the relying site (for validation? storage? giggles?).

In our example, this means that the following assertion is provided (signed with Alice's "alice@mailhost.com" private key):

{
  audience: "destination.com",
  valid-until: <format TBD>,
  email: "alice@mailhost.com"
}

Once this assertion is delivered to destination.com's server, it verifies it. This means that it:

  1. Retrieves the email address from the assertion
  2. Performs discovery on alice@mailhost.com. For example, it performs webfinger discovery on the address, which leads to an XRDS document that contains one or more public-key LINK elements.
    1. There is probably more than one key here because Alice has more than one device, and may have reset her browser at some point without letting the server know. See 6.3 below for how a key identifier can be attached to the assertion to pick the right public key; otherwise the relying site needs to try all of them.
    2. See 6.4 for a discussion of how the authority could provide more detail about the user's credentials to the relying party, e.g. what kind of authentication was performed.
  3. Retrieves alice@mailhost.com's public key(s).
  4. Verifies the assertion with the public key(s).
  5. If the assertion is valid, destination.com can now be confident that alice@mailhost.com has successfully authenticated using this browser.

At this point, Alice has logged into destination.com. Destination.com can immediately use alice@mailhost.com as a trusted, unique identifier for Alice; if the privacy policy for the site permits it, destination.com can perform data discovery using this identifier. Alice can return to the site from any of her devices and immediately be reconnected with her server-side state by providing the same address (potentially with a different private key).

The user agent can easily record the fact that the user consented to a getVerifiedEmail() call during the initial login step, and save that into secure private local storage. It can then (with the user's consent) automatically provide the identity assertion when the user returns to the site - or, if that identity has expired, smoothly guide the user through the authentication refresh step. It can also store a list of all identities that have been provided to the site and provide a "fast user switch" feature between them (though some additional work on session termination is required - see some thoughts at MozillaID/Spec)

Secondary Authorities

As noted above, it is unrealistic to expect every mail host on the internet to adopt this protocol. A secondary authority is a trusted intermediary who verifies an email address on behalf of a relying party. Secondary authorities could be operated by entities that make strong guarantees about user privacy and authentication accuracy, and are perceived by users and developers to be both technically competent and commercially disinterested.

A secondary authority could verify an identity in whatever way it sees fit, but in one scenario, the user would simply provide their email address to the authority in a web page. The authority would then engage in a multi-stage authentication process, where it stores a cookie in the user's browser, sends a message to the provided email address, and, when the user clicks a link in the provided email message, establishes that this browser is being used by a user who controls that email address.

At that point, the same cryptographic handshake described in Part 1 applies, except that an "issuer" field is included with the call:

navigator.id.saveVerifiedAddress( 
  "alice@mailhost.com", 
  <callback>, 
  {issuer: "https://trusted.org"} 
)

The browser creates a keypair, stores the keypair and issuer in local storage, and returns the public key to the secondary authority, who stores it in a database for later retrieval.

Now, when our user navigates to relying party, the user interface flow is identical. The assertion has one additional field:

{
audience: "destination.com",
valid-until: <format TBD>,
email: "alice@othermail.com",
issuer: "https://trusted.org"
}

The presence of the issuer field tells the relying party that this is not a primary identity assertion. The relying party should decide whether they trust the issuer listed there; if they do, then they perform discovery on the provided email against the issuer's domain. The secondary authority relationship is probably pre-arranged; it is unrealistic to think that a relying party would trust an authority they had never heard of before. Although the lookup protocol could be issuer-specific, it would be simpler and more portable to just use the same lookup method that is used for the primary authority, (e.g.) webfinger.
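A sketch of how a relying party might gate on the issuer field, with a pre-arranged trust list as the paragraph above suggests (the function name, trust-list shape, and error handling are illustrative, not part of the proposal):

```javascript
// The set of secondary authorities this relying party has agreed to trust.
const TRUSTED_ISSUERS = new Set(['https://trusted.org']);

// Decide which domain to perform public-key discovery against.
function keyLookupDomain(assertion) {
  if (assertion.issuer === undefined) {
    // Primary assertion: discover keys at the email's own domain.
    return assertion.email.split('@')[1];
  }
  if (!TRUSTED_ISSUERS.has(assertion.issuer)) {
    throw new Error('untrusted secondary authority: ' + assertion.issuer);
  }
  // Secondary assertion: discover keys against the issuer's domain.
  return new URL(assertion.issuer).hostname;
}

console.log(keyLookupDomain({ email: 'alice@mailhost.com' })); // "mailhost.com"
console.log(keyLookupDomain({
  email: 'alice@othermail.com',
  issuer: 'https://trusted.org',
})); // "trusted.org"
```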

The lookup method(s) should be part of the assertion (just like the encryption method). People always assume incorrectly.

When it retrieves a public key, the relying party performs assertion verification as normal.

Verification Services

It is hoped that this scheme could be adopted by sites of all sizes, but experience shows that small sites might have a hard time executing all of the cryptographic steps to perform verification.

Therefore the scheme supports the use of "verification services", which are trusted third parties that can perform identity assertion verification on behalf of a caller. These services obviously have tremendous power and would need to be constructed with both technical and legal care.

The verification step would be quite straightforward: the relying party would simply POST an assertion to a verifier over SSL along with their expected audience string, the verifier would verify the assertion as in 4.2, and return a result code. The audience test is necessary, as it prevents replay attacks using assertions captured at other sites.

This requires that the server enforce that audience matches the reverse DNS of the requesting site?
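A sketch of the service's core check, including the caller-binding question raised above (the result shape, parameter names, and caller-binding behavior are assumptions; the proposal only requires a POST over SSL and a result code):

```javascript
// Sketch of a verification service's audience handling. Signature and
// expiry checks would follow the normal verification flow and are elided.
function verificationServiceResponse(assertion, claimedAudience, requestOrigin) {
  // The audience the RP says it expects must match the assertion's
  // audience; this is what prevents replaying assertions captured
  // at other sites.
  if (assertion.audience !== claimedAudience) {
    return { status: 'failure', reason: 'audience mismatch' };
  }
  // Optionally bind the request to the caller as well, so a site cannot
  // ask the verifier to check assertions scoped to someone else.
  if (requestOrigin && requestOrigin !== claimedAudience) {
    return { status: 'failure', reason: 'audience does not match caller' };
  }
  return { status: 'okay', email: assertion.email };
}

const a = { audience: 'destination.com', email: 'alice@mailhost.com' };
console.log(verificationServiceResponse(a, 'destination.com', 'destination.com').status); // "okay"
console.log(verificationServiceResponse(a, 'evil.example', 'evil.example').status);       // "failure"
```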

Certification

The flows described above all included a step, during the verification process, where the relying party contacted the holder of the public key directly to retrieve the user's public key(s). This simplifies the flow for the relying party - they have only one assertion to verify, and because they are retrieving the public key over SSL, they can take advantage of the site-identification-and-integrity guarantees built into the SSL protocol.

It has a number of undesirable properties as well. It introduces (potentially large) extra latency to the verification step. And it leaks information about which sites the user is visiting back to their identity authority.

The protocol can be enhanced with a certification process to remove these two problems. To wit:

When the public key is returned to the authority during the saveVerifiedAddress flow, the authority:

  1. Creates a bundle of the user's public key, the user's email address, a validity interval, and some optional metadata, and signs it with their private key (and, in practice, it would make sense to include a key identifier since a site will have more than one public key; most digital signature specifications provide this capability)
  2. Returns this signed bundle (which is now an identity certificate) back to the user agent, who adds it to secure private local storage with navigator.id.saveVerifiedAddressCertificate(<identifier>, <certificate>).

When the user agent provides a verifiedEmail to a relying party, the certificate is included with the identity assertion:

{
  audience: "destination.com",
  valid-until: <format TBD>,
  email: "alice@mailhost.com",
  certificate: {
    email: "alice@mailhost.com",
    public-key: <alices-public-key>,
    valid-until: <format TBD>
  }-signed-with-mailhost.com-key
}

The relying party, when it sees that a certificate is present, may choose to skip the retrieval of the user's public key by instead verifying the certificate. That flow would be:

  1. Resolve a site-level public key for the issuer by performing host discovery on the email in the certificate (for example, by performing an RFC 5785 "well-known" lookup on an HTTPS server, or talking to a trusted directory server) Does this mean that there are more than one method used to do resource lookups? (WebFinger & "well-known"?) Isn't that potentially confusing to users and developers?)
  2. Verify the signature on the certificate using the public key

If the certificate is verified, the relying party can proceed with identity assertion verification using the public key contained in the certificate. The public key for the host can be cached or distributed out-of-band; there is no requirement for the relying party to communicate with the issuing authority directly at all.

A certificate can also be created by a secondary issuer; just as in the non-certificate case, the relying party will decide whether they trust the issuer. The presence of an "issuer" attribute would indicate that the certificate was issued by an entity other than the primary authority. The issuer key can be discovered through the same HTTPS lookup described above or distributed out-of-band.

Unfortunately, adding revocation complicates this flow and reduces its privacy-enhancing properties. Just as with site-identifying certificates, the RP is required to either retrieve a revocation list or use an online status check (that is, a CRL or OCSP) to make sure an identity certificate is still valid. These steps have proven to be problematic for the site-identifying CAs that power the SSL site-identification infrastructure, and there is little reason to think that email hosts would be any more capable of handling them at larger scale. It may not be realistic to think that the internet could support identity certificate revocation at scale; perhaps we should focus our attention instead on limiting the scope of breaches, for example by encouraging short-lived identity certificates and automated certificate refresh.

How best should this be addressed in the short term for the system we are creating? Should certificates have a mandatory expiration period of <5 minutes?

User Experience Discussion

User experience before an identity is registered

If the user agent does not have any verified addresses stored already, the getVerifiedEmail call is an opportunity to bootstrap the user into the system. User agents should be free to adopt whatever flow they think is most effective for this.

Multiple devices and synchronization

Most users have more than one internet-enabled device. This scheme does not require that the same private keys be used on each device; neither does it disallow it. A user could independently log in to each device, creating a new private key (and registering a new public key with the server) on each, or they could use a synchronization service to securely transfer the private keys from the origin device to another device.

Informed Consent during Identity Disclosure

The user agent is in a unique position to assist the user during the getVerifiedEmail flow. During this call, the user agent could present information relating to the privacy policy and terms of service of the relying site. The site could, for example, present a machine-readable or simplified version of their privacy policy to the user agent in the page.

Creation of pseudonymous addresses

The user agent could, if a primary or secondary authority supported it, maintain a map of pseudonymous forwarding addresses on a per-site basis. New identities could be requested from the authority when a previously-unknown domain was encountered. This map would need to be synchronized across a user's devices or accessed in realtime from the server.
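A per-site pseudonym map of this kind might look like the following sketch. The address shapes, function names, and the authority callback are purely illustrative, since the proposal does not define how aliases are requested.

```javascript
// Sketch of a per-site map of pseudonymous forwarding addresses.
const pseudonyms = new Map();

// Return the alias for a site, asking the authority for a fresh one the
// first time a previously-unknown domain is encountered.
function addressFor(site, requestNewAlias) {
  if (!pseudonyms.has(site)) {
    pseudonyms.set(site, requestNewAlias(site));
  }
  return pseudonyms.get(site);
}

// A stand-in for the authority's alias-issuing endpoint.
let counter = 0;
const issueAlias = (site) => `alias${++counter}@mailhost.com`;

console.log(addressFor('destination.com', issueAlias)); // "alias1@mailhost.com"
console.log(addressFor('destination.com', issueAlias)); // same alias, reused
console.log(addressFor('othersite.com', issueAlias));   // "alias2@mailhost.com"
```

This map would need to be synchronized across devices (or fetched from the authority) so the same site always sees the same alias.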

Potential for registration abuse

XX sites could call saveVerifiedAddress aggressively to get their (potentially bogus) identities into the browser's list; potential for "identity hijack?"

Automatic key refresh

XX want the browser to be able to re-submit credentials, but also want the ability to have strong (e.g. two-factor) authentication flows

Security Discussion

Registration of addresses across domains:

In the scheme as proposed here, any domain can make a claim to verify any address. This is an attempt to preserve flexibility for operators who have complicated domain name schemes (for example, a "login.site.com" server that handles logins for "sitemail.com"). Attempts to counterfeit the system would be defeated at the assertion verification step, where an attempt to retrieve the public key of the user (e.g. at sitemail.com) would fail.

Only secondary authorities are allowed to provide a public key outside of the normal public key lookup flow, and relying parties are required to explicitly trust secondary authorities.

In the primary authority certificate-based flow, the authority is required to serve the public key for the email domain from the email domain itself. The verification of the certificate with this key proves that the certificate-issuing host had the public key for the email domain, which proves the delegation relationship.

Synchronization of keys

This protocol does not require the user's private key to be synchronized across user agents or devices; it is expected that authorities will present more than one public key when queried. It does not forbid synchronization, however, and the system should work fine in that case. User agents should be prepared to deal with expired keys at any time.

Use of Key Identifiers

Retrieving all of the user's public keys is workable but involves extra work for the RP client and the authority server. The addition of a simple key identifier to the public key registration, identity assertion, and identity certificate would allow the RP to request just the public key they want. In X.509 parlance, this could be a Subject Key Identifier - though other schemes could be used.

Identity Authority authentication context records

The identity authority could advertise additional authentication metadata along with the user's public key. This metadata would be intended to allow a relying party to have greater confidence in the use of the authentication and could include the client IP address, the mode of authentication, the timestamp of the authentication or expected expiry, and so forth. In some cases, this metadata would represent sensitive data and the relying service would be expected to authenticate itself to the authority.

It would make sense to either include this data as part of the certificate, or as additional data included with the public key lookup.

This metadata would probably be substantially identical to the SAML Authentication Context; see more at http://docs.oasis-open.org/security/saml/v2.0/saml-authn-context-2.0-os.pdf

Enumerability of identities

A naive identity server implementation would allow an attacker to trivially enumerate all identities on the server by brute-force queries. This problem exists for all personal identity delivery protocols (see, for example, the SMTP VRFY command), but can be mitigated by ensuring that the authority server presents a valid page for every public key lookup request (containing, perhaps, a random but stable public key).

Acknowledgements

Many thanks to Dick Hardt for a clear summary of the actual goals of RPs and inspiration on the architecture, and to the organizers and members of the Identity Commons 2011 event for constructive commentary and feedback.


OPEN ISSUES:

Key expiration:

No mechanism is specified to allow the authority (whether primary or secondary) to tell the client that a key has expired/been revoked - right now, I just have an expiry argument to the register call. There needs to be some way for the client to be told; this may be a ping from the client to the authority.

RP errors:

There is no way for the RP to tell the UA that an address was rejected (so no way to tell the UA why it was rejected). This puts the UA in a bad position of submitting data which might work or might not, but with no way to recover if it doesn't work. Might need some markup or something to help with that.