Find My Device Security Review

This is the result of a high-level security review of Find My Device (FMD).

Stefan Arentz, 2014-09-15

Introduction

The Find My Device application consists of a backend, an API that devices talk to, and a web application that end users use. For this audit, most time was spent on the public-facing web application and less on the backend parts. The backend was already reviewed previously and all issues encountered back then have been addressed.

What is Find My Device

Find My Device is a service that lets people find (track) the geographic location of their devices and remotely ring, lock or erase those devices.

The web application is a Backbone application that talks to a Go backend. The Go application only serves a single initial page template, index.html; Backbone does the rest via partial HTML templates that are requested and processed on the client side.

The application is protected with a Firefox Accounts OAuth login flow. When the user hits the sign-in button, the application redirects to accounts.firefox.com/oauth/signing, where the user can log in with existing or new Firefox Accounts credentials. After logging in, the user is redirected back to the FMD application.

The FMD application sets a server-side encrypted, Secure and HttpOnly, non-persistent session cookie that contains the user id, email, device id and access token. This state information is only used server-side and, because the cookie is HttpOnly and its value is encrypted, it cannot be obtained client-side.
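
As a rough sketch of what setting such a cookie looks like from the Go side (the cookie name, the field names and the encryptSession helper are assumptions for illustration, not the actual FMD code):

```go
package web

import (
	"encoding/json"
	"net/http"
)

// sessionState mirrors the kind of data the cookie carries; the field names
// here are illustrative only.
type sessionState struct {
	UserID      string `json:"uid"`
	Email       string `json:"email"`
	DeviceID    string `json:"device_id"`
	AccessToken string `json:"access_token"`
}

// setSessionCookie serializes and encrypts the session state, then stores it
// in a Secure, HttpOnly session cookie (no Expires/MaxAge, so it is not
// persisted). encryptSession stands in for the server-side encryption.
func setSessionCookie(w http.ResponseWriter, s sessionState, encryptSession func([]byte) (string, error)) error {
	plain, err := json.Marshal(s)
	if err != nil {
		return err
	}
	value, err := encryptSession(plain)
	if err != nil {
		return err
	}
	http.SetCookie(w, &http.Cookie{
		Name:     "session",
		Value:    value,
		Path:     "/",
		Secure:   true, // only sent over HTTPS
		HttpOnly: true, // not readable from client-side JavaScript
	})
	return nil
}
```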

Audit Results Summary

Some non-critical issues were discovered by reading the Find My Device code and looking at a staging deployment of the web application.

Web Application Issues

No Content Security Policy is used

Risk: medium

Ideally applications, especially new applications, have a strict Content Security Policy. This is not a complicated application and applying a Content Security Policy should be simple. The only page with inline (CSS) code is 404.html, which should be easy to move to a separate CSS file.

The risk is marked medium only because we would really like to see apps use CSP. I don’t think there is a direct danger of not having CSP on this site.
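
A sketch of what applying a policy could look like as Go middleware; the exact directives are an assumption of what this application might need, not a tested policy:

```go
package web

import "net/http"

// withCSP attaches a strict Content Security Policy to every response.
func withCSP(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Security-Policy",
			"default-src 'self'; "+
				"script-src 'self'; "+
				"style-src 'self'; "+ // requires moving the inline CSS out of 404.html
				"img-src 'self'; "+
				"connect-src 'self'")
		next.ServeHTTP(w, r)
	})
}
```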

Robots.txt present but not used

Risk: low

The project contains a robots.txt file in the content directory (static/app) but it is not actually served. Requests for /robots.txt serve index.html instead. It is not clear whether this is intentional or a server misconfiguration.
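
If the intent is to serve it, an explicit route in front of the catch-all index.html handler would do it. A sketch (the handler wiring is an assumption about how the Go server registers routes):

```go
package web

import "net/http"

// registerRobots serves the robots.txt that already lives in static/app,
// so requests for /robots.txt no longer fall through to index.html.
func registerRobots(mux *http.ServeMux) {
	mux.HandleFunc("/robots.txt", func(w http.ResponseWriter, r *http.Request) {
		http.ServeFile(w, r, "static/app/robots.txt")
	})
}
```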

Unused .htaccess file

Risk: low

The content directory contains a .htaccess file. It is not clear whether this project will ultimately be served with Apache, but since most deployments happen with NGINX, this seems unlikely.

Backend Issues

No replay protection for Hawk-signed requests

Risk: medium

From the github.com/hueniverse/hawk project: “Without replay protection, an attacker can use a compromised (but otherwise valid and authenticated) request more than once, gaining access to a protected resource. To mitigate this, clients include both a nonce and a timestamp when making requests. This gives the server enough information to prevent replay attacks.”

The Find My Device server currently does not prevent replay attacks. This is likely not a very high risk, but it is an important part of the Hawk spec and it would be nice to see a full server-side Hawk implementation that does this, even more so because other applications can then reuse it.
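
To make the missing piece concrete, here is a minimal sketch of the server-side state replay protection needs: remember (key id, nonce, timestamp) tuples for a short window and reject anything already seen or too far outside the window. This is illustrative only, not the FMD server's Hawk code:

```go
package hawk

import (
	"sync"
	"time"
)

// nonceCache remembers recently seen (key id, nonce) pairs.
type nonceCache struct {
	mu     sync.Mutex
	seen   map[string]time.Time
	maxAge time.Duration
}

func newNonceCache(maxAge time.Duration) *nonceCache {
	return &nonceCache{seen: make(map[string]time.Time), maxAge: maxAge}
}

// CheckAndStore returns false if the timestamp is outside the allowed skew
// or if the (keyID, nonce) pair was already used within the window.
func (c *nonceCache) CheckAndStore(keyID, nonce string, ts time.Time) bool {
	now := time.Now()
	if ts.Before(now.Add(-c.maxAge)) || ts.After(now.Add(c.maxAge)) {
		return false // stale or future-dated request
	}
	key := keyID + ":" + nonce
	c.mu.Lock()
	defer c.mu.Unlock()
	// Expire old entries so the cache does not grow without bound.
	for k, t := range c.seen {
		if now.Sub(t) > c.maxAge {
			delete(c.seen, k)
		}
	}
	if _, replayed := c.seen[key]; replayed {
		return false
	}
	c.seen[key] = ts
	return true
}
```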

Hawk Unit Tests are not complete enough

Risk: medium

It would be nice to see more unit tests for the Hawk module. For example, there is currently no test that validates that a Hawk signature over a POST request is handled correctly, and specifically that a malicious/invalid/modified POST request fails the Hawk authentication checks.
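
A sketch of the missing test case, using a simplified Hawk-style MAC (HMAC-SHA256 over method, path and payload hash) as a stand-in for the module's own signing code:

```go
package hawk

import (
	"crypto/hmac"
	"crypto/sha256"
	"testing"
)

// mac computes a simplified Hawk-style MAC over the request method, path and
// payload hash; it stands in for the FMD Hawk module's real signing code.
func mac(secret, method, path string, payload []byte) []byte {
	payloadHash := sha256.Sum256(payload)
	h := hmac.New(sha256.New, []byte(secret))
	h.Write([]byte(method + "\n" + path + "\n"))
	h.Write(payloadHash[:])
	return h.Sum(nil)
}

// TestTamperedPOSTBodyFailsValidation checks that a MAC computed over the
// original POST body no longer validates once the body has been modified.
func TestTamperedPOSTBodyFailsValidation(t *testing.T) {
	secret := "device-secret"
	signed := mac(secret, "POST", "/cmd", []byte(`{"cmd":"lock"}`))

	// Recompute the MAC over a tampered body, as the server would on receipt.
	recomputed := mac(secret, "POST", "/cmd", []byte(`{"cmd":"erase"}`))
	if hmac.Equal(signed, recomputed) {
		t.Fatal("tampered POST body still passed MAC validation")
	}
}
```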

Certificate Pinning for Firefox Accounts Assertion Verifier

After the user logs in via accounts.firefox.com, they are given an assertion that the Find My Device application then needs to verify by calling oauth.accounts.firefox.com/authorization. It POSTs the assertion there and receives its status back.

As an extra security measure, we could pin the oauth.accounts.firefox.com certificate so that we know we are talking to the correct verification server. This is something we already do in Firefox Desktop.

Note that exploiting this would require a complicated attack, since the POST request is made from our infrastructure. Nonetheless, this is an application that risks exposing users' locations or wiping their devices if compromised.
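
One way to do the pin in Go is to check the leaf certificate's public key hash after the TLS handshake, on top of the normal chain verification. A sketch (the pin value is a placeholder; this is not the FMD implementation):

```go
package fxa

import (
	"crypto/sha256"
	"crypto/subtle"
	"crypto/tls"
	"errors"
)

// pinnedSPKIHash is the SHA-256 of the expected SubjectPublicKeyInfo of the
// verifier's certificate. The zero value here is a placeholder, not the real
// pin for oauth.accounts.firefox.com.
var pinnedSPKIHash [32]byte

// dialPinned opens a TLS connection and additionally requires the server's
// public key to match the pin.
func dialPinned(addr string) (*tls.Conn, error) {
	conn, err := tls.Dial("tcp", addr, nil)
	if err != nil {
		return nil, err
	}
	state := conn.ConnectionState()
	if len(state.PeerCertificates) == 0 {
		conn.Close()
		return nil, errors.New("no peer certificate presented")
	}
	sum := sha256.Sum256(state.PeerCertificates[0].RawSubjectPublicKeyInfo)
	if subtle.ConstantTimeCompare(sum[:], pinnedSPKIHash[:]) != 1 {
		conn.Close()
		return nil, errors.New("server public key does not match the pin")
	}
	return conn, nil
}
```

Pinning the public key rather than the whole certificate keeps the pin valid across routine certificate renewals, as long as the key is reused.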

Make sure the application is built with Go 1.3 or higher

There is a bug in Go versions older than 1.3 where TLS verification is broken. More details at http://tip.golang.org/doc/go1.3#major_library_changes, but it is not immediately clear what the code changes should be. This needs more investigation to make sure it does not affect us.
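
In the meantime, a cheap guard is to fail the test suite when the binary was built with an older release. A sketch (the package name is arbitrary):

```go
package fmd

import (
	"runtime"
	"strings"
	"testing"
)

// tooOldForTLSFix reports whether the given runtime version string is from a
// Go release older than 1.3.
func tooOldForTLSFix(v string) bool {
	if v == "go1" {
		return true
	}
	for _, old := range []string{"go1.0", "go1.1", "go1.2"} {
		if v == old || strings.HasPrefix(v, old+".") {
			return true
		}
	}
	return false
}

// TestGoVersion fails the run if the toolchain is older than Go 1.3.
func TestGoVersion(t *testing.T) {
	if v := runtime.Version(); tooOldForTLSFix(v) {
		t.Fatalf("built with %s; Go 1.3 or newer is required for correct TLS verification", v)
	}
}
```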