Security/Reviews/OWA-F1


Items to be reviewed

  1. Web Activities

Introduce Feature

  • lightweight service discovery for web services that also helps with interaction with those services
  • web addressable provider that will provide the service or content
  • could be multi-message or fire and forget
  • service providers taken from the list of web-apps the user has chosen to install
    • similar to Web Intents from Google, which uses markup as you browse
      • we think that approach is a little weak, so this is more stringent
  • as currently specified, system is anonymous in both directions (thus the mediator)

example

(See the second page of the PDF; pretend there's no "chrome oauth API" business.) This diagram is a pseudo-threat model to illustrate the paths through which data flows. A third-party website contains a button that initiates "getting a photo". The button calls the activity API, which is injected into web content. If there is a mediator for the activity, the browser initiates it and lets it do its thing. The mediator is told what services are available and loads iframes to launch them.

  • Browser acts as a message-passing interface to connect sites' desired API calls to apps that provide those APIs.
  • special activity called getCredential
    • this gets the currently logged-in user and credentials to use for the service
    • can only be initiated by the browser
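
To make the flow above concrete, here is a minimal sketch of how a third-party page might initiate the "get a photo" activity. The startActivity name appears later in this review, but the namespace and the activity descriptor fields shown here are illustrative assumptions; the real surface is defined in the ACTIVITIES.md document linked below.

    // Hypothetical client-side invocation of a "get a photo" activity.
    // navigator.apps.startActivity and the descriptor fields are illustrative, not the real API.
    document.getElementById("get-photo").addEventListener("click", function () {
      var activity = {
        action: "image.get",              // what the client wants done
        type: "image/jpeg",               // optional type hint
        data: { maxWidth: 1024 }
      };
      navigator.apps.startActivity(
        activity,
        function onResult(result) {
          // Whatever the user-selected provider posted back via the mediator.
          console.log("got photo:", result.url);
        },
        function onError(err) {
          // Includes the user cancelling in the mediator UI.
          console.log("activity failed or was cancelled:", err);
        }
      );
    }, false);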

Reviewer Provided Info

I suggest that you start by reading this writeup -  https://github.com/mozilla/openwebapps/blob/master/docs/ACTIVITIES.md

Relevant code is in

The "Share" use case was previously reviewed at https://intranet.mozilla.org/SecurityReview/F1

Web Activities is a service discovery mechanism and lightweight RPC system between web apps and browsers. In the system, a client creates an Activity describing the action it wants to be handled, and submits it to the browser for resolution. The browser presents an interface that allows the user to select which service to use, and then submits the Activity to the service provider selected by the user. The service may return data, which is passed back to the client. The browser may optionally inspect or modify the data as it flows between the client and the service.

In this review, we would like to talk about two different modes of usage for this mechanism:

  1. Chrome-level invocation of activities to support browser features (e.g. F1/Share)
  2. Content-level discovery and invocation of activities to support opportunistic connections between web content (image access, file storage, search, contacts, social graph, profile data)

We describe two styles of activity: "headless" and panel-embedded.  In both cases, the activity is invoked by constructing an iframe to the activity's endpoint.  In the "headless" case, the browser passes a message into this iframe without ever displaying it.  In the panel-embedded case, the browser allows the activity's iframe to display in a panel that is displayed over the current page.
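
A corresponding sketch of the provider side, as we understand the design: the page named in the manifest registers a handler and posts its result back through the mediator. The registerHandler/postResult names are assumptions for illustration; the actual calls are part of the navigator.apps.services.* API discussed later in this review and documented in ACTIVITIES.md.

    // Hypothetical provider endpoint for the "image.get" activity.
    // registerHandler/postResult are illustrative names, not the exact API surface.
    navigator.apps.services.registerHandler("image.get", function (activity) {
      // In the panel-embedded case this frame is visible and can show a picker UI;
      // in the headless case it must answer without displaying anything.
      // pickImage: hypothetical helper that lets the user choose an image.
      pickImage(activity.type, function (imageUrl) {
        // Hand the result to the mediator, which relays it to the requesting client.
        navigator.apps.services.postResult({ url: imageUrl });
      });
    });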


Discussion areas

Invocation and data flow from chrome to activity provider

Invocation and data flow from web content to activity provider

Content can use this to request data and services from other sites; while it is intended to be safe, browser-visible, and user-consenting, content could still find ways to abuse it.

  1. Content could lie about what it will do with the data it receives.
  2. Content could try to send malicious content to the service, counting on the browser to post it into the target frame (any different from postMessage? not really). Are there important distinctions between structured clone and JSON.stringify that we should be paying extra attention to here? (See the sketch below.)
  3. Content could pop a consent panel at an unexpected time to get the user to grant access.
  4. If we allow a pref for "don't ask the user", content could cause cross-site interactions without user confirmation; can limiting this to events in a user-initiated chain mitigate this sufficiently?
  5. We are injecting APIs into content pages using a Sandbox object; is our implementation safe, and should we be using other techniques (see bugs about APIs for navigator.* and window.* injection)?
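
On point 2, a small illustration (with a placeholder provider frame and origin) of why structured clone deserves extra attention compared to JSON.stringify:

    // Structured clone preserves cycles, Dates and typed arrays; JSON.stringify does not.
    var payload = { created: new Date(), bytes: new Uint8Array([1, 2, 3]) };
    payload.self = payload;                            // cyclic reference

    // JSON path: throws "Converting circular structure to JSON".
    try { JSON.stringify(payload); } catch (e) { console.log(e.message); }

    // postMessage path: the structured clone delivers the cycle, the Date and the
    // typed array intact, so a provider receives object graphs that a JSON-based
    // channel would have rejected, i.e. a larger surface to validate.
    var providerWindow = document.getElementById("provider-frame").contentWindow;
    providerWindow.postMessage(payload, "https://provider.example.com");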

Discovery of the service - how can I be sure I'm talking to a trusted provider?

The preferred approach is an installed application manifest, containing the action and an optional type, resolving to a path. The manifest is installed directly from a site or from a trusted directory. This is a tight linkage to the Web Apps work (a hypothetical manifest sketch follows the pro/con lists below); we like this because it:

  • Provides a clear action by the user indicating their desire to use this app
  • Has obvious branding characteristics making it clear to the user who they are talking to
  • Requires developers to list all their activities in a single place
  • Offers the possibility of server-assisted discovery - i.e. we can offer a just-in-time list of providers.

Cons of this approach include:

  • Could be harder for JS implementations to interact (though we offer one)
  • Could be seen as heavy-weight or more work for developer; we think it's not that bad
  • Still could be used for a long-pole phishing attack
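
Before turning to the alternatives, here is the hypothetical manifest sketch mentioned above, showing an app declaring an activity with an action, an optional type, and a path. The field names ("activities", "path") are illustrative assumptions; the actual schema is defined in the ACTIVITIES.md document linked earlier.

    {
      "name": "PhotoBox",
      "launch_path": "/index.html",
      "activities": {
        "image.get": {
          "type": "image/jpeg",
          "path": "/activities/image-get.html"
        }
      }
    }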

Other options discussed include:

  1. Page markup indicates availability of the activity, e.g. <intent>, <meta type="activity">, etc.

Pros:

  • Very easy for web developers

Cons:

  • No obvious way for user to understand where activities list came from
  • Possibility of "drive-by" registration, leading to phishing
  • No obvious way for users to chunk activities with authorizations and authentication
  • Extra work for browser to keep activity registry up-to-date (on load?  poll for updates?)

  2. A JS API allows a page to register itself as a provider of an activity, e.g. apps.registerAsActivityHandler("someactivity").

Pros and cons are pretty similar to #1.

Other threats:

  • An attacker could modify manifests on disk or add a new application on disk
  • An attacker could subvert just-in-time provider discovery mechanisms (currently, we envision a whitelisted set of directories/app stores that can answer queries for "where can I do X?") - could MITM, get on the whitelist somehow, subvert the server


Inclusion of the source domain?

We see pros and cons to including the origin domain of the request in the Activity object.  Our current design is to not include it. The motivation is a) privacy-enhancement, b) desire to focus on the user as the authorizing actor rather than the origin site.

Obviously the origin can include its domain in the Activity data if it wants, but this is not trustworthy.  If services want to have a trustworthy relationship with the origin they will need to arrange an out-of-band secret exchange.


Potential to frame-proxy Activity call?

Activity initiators and providers live in different frames. Data passes between these frames via navigator.apps.services.* API calls and callback registrations that receive these calls. With a browser implementation, these channels are straightforward to secure. With a pure JavaScript implementation, we must ensure the integrity of these channels: can an attacker passively interpose himself in the call and view confidential details of the request and/or response? Can an attacker play MITM to the cross-frame communication?

Our use of JSChannel and the underlying postMessage() should ensure the confidentiality and authenticity of the channel, assuming we make full use of postMessage() destination specification and origin checking. We should ensure that we are doing these two tasks properly, and that our abstractions on top of postMessage() don't interfere with proper destination specification and origin checking.
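
A minimal sketch of those two checks, with placeholder origins and a hypothetical message handler: the sender names an explicit destination origin, and the receiver verifies event.origin before acting.

    // Sender side: always specify the destination origin explicitly, never "*".
    var requestMessage = JSON.stringify({ action: "image.get" });
    var providerFrame = document.getElementById("provider-frame");
    providerFrame.contentWindow.postMessage(requestMessage, "https://provider.example.com");

    // Receiver side: verify the sender's origin before handling the message.
    window.addEventListener("message", function (event) {
      if (event.origin !== "https://mediator.example.com") {
        return;                          // ignore messages from unexpected origins
      }
      handleActivityMessage(event.data); // handleActivityMessage: hypothetical handler
    }, false);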

Abuse of the panel-embedded interface by malicious providers

The dependence on branded web applications is intended to reduce the risk of phishing by asking the user to complete an explicit "install" step before a domain can display an activity. If the user could be convinced to install a malicious application, however, they could be phished by an application which claims to need their credentials in the frame.

Mitigating this is challenging; our design intent is to avoid any display of login forms in the embedded frame through the use of the credential API, but a phisher could ignore us and do it anyway. That is, even though the good guys will use a popup, the bad guys can still phish in an insecure frame.

This relates to...

Security-UX affordances of panel-embedded interface

Our current visual prototype does not have a location bar or a lock icon, because we are explicitly discouraging the use of the embedded panel for authentication.  Obvious tradeoffs here; comments & discussion?

Inappropriate use of Activity data before user indicates completion

To support preview, the mediator will pass the Activity object to a provider with the expectation that the provider will call postResult later. The user could switch to a different provider before that point.

If the first provider's frame is still around (just hidden), it could send data to the server despite the user not submitting the activity. We don't see a straightforward way to prevent this, and focus our attention on intercepting malicious providers.

[note: in F1 we only give the data to the provider iframe when the user clicks send; IMO data should never be given to a provider until the user takes an action that requires the provider to have that data. This is a large reason the headless apps are so important.]

Entropy/fingerprinting implications

As currently described, a non-existent provider is the same as a user-cancelled request.  This should mean that the only useful fingerprinting data is the presence of the API.

[by "presence of the API" do you mean the activities api?] [yes]

Client-Side OAuth vs. Some Other API

If we are unable to get partners to adopt a more secure account manager API, we can do client-side OAuth to get a token.  Depending on the vendor, this might include a real client secret, or just be a fake "anonymous" token.

The OAuth token will be persisted... where?... and carries the authorization to post a share for a given user to a given service.

  • security of the token storage
  • which vendors support anonymous app OAuth?  if it's not anonymous, do we have a key in the code?  yikes?
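
For reference, a minimal sketch of what the client-side ("implicit grant") OAuth flow looks like; the endpoint, client id, redirect URI and scope below are placeholders, and real vendors differ in parameters and in whether an "anonymous" client id is allowed.

    // Hypothetical implicit-grant flow: open the provider's authorization page in a
    // popup; the provider redirects back with the token in the URL fragment.
    var CLIENT_ID = "anonymous-or-registered-client-id";   // placeholder
    var authUrl = "https://provider.example.com/oauth/authorize" +
                  "?response_type=token" +
                  "&client_id=" + encodeURIComponent(CLIENT_ID) +
                  "&redirect_uri=" + encodeURIComponent("https://app.example.com/oauth-return") +
                  "&scope=share";
    window.open(authUrl, "oauth", "width=600,height=500");

    // On the oauth-return page, the token is read from the fragment and handed back:
    //   var token = location.hash.match(/access_token=([^&]+)/)[1];
    //   window.opener.postMessage({ accessToken: token }, "https://app.example.com");
    // Where (and how securely) that token is then persisted is the open question above.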

The Account Management API

This is the part that is rawest and feedback on it is very much desired. The "Credential" object is a sort of super-cookie, if you like - an opaque blob that the browser manages to allow session resumption. We attach a displayName and thumbnailURL to it because we expect the user to be choosing one from a list. These objects are high-value - they are like strong, long-duration session cookies. They should be protected as such, and wiped when the user would expect them to be wiped. Currently no path exists to expose credentials to a domain other than the origin that created one.
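
A hypothetical sketch of the shape of such a Credential object; only displayName and thumbnailURL come from the description above, and the remaining field name is illustrative.

    // Hypothetical Credential object as described above.
    var credential = {
      displayName: "Alice",                                           // shown in the account picker
      thumbnailURL: "https://provider.example.com/avatars/alice.png",
      // Opaque blob the browser stores to allow session resumption; never interpreted
      // by the browser and never exposed to any origin other than the one that created it.
      blob: "opaque-session-token-issued-by-provider"
    };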

[IMO the management API should only be injected into management apps, and not into all iframes/browsers as it currently is. Part of the consent of installing a management app is that the app has special privileges, whatever those are.]

Threat Brainstorming

  • Either chrome or content can implement provider, chrome<->content interaction/confusion
    • provider should always be from a webapp which means the user installed it
    • only formally installed web-apps can be a provider
    • need to keep an eye towards implicit permissions that users may not understand
  • Does installing as a web app indicate user's intent to allow the app access that normal web content does not have? (I.e. why differ from Web Intents?)
  • Clickjacking since content can cause the UI to be displayed
  • Content injection
  • Cannot enforce privacy/security constraints if web apps can define their own activities.
  • Impersonation attacks (e.g. "paypal app") - this includes the installation dialog/experience, what information is provided to user about what activities are being registered for etc.
  • Driveby manifest installs (like what we are fixing for addons) - for example, an addon or other installer creating a web app manifest on disk (e.g. the Comcast issue with their toolbar)
    • there's a "ceremony" to install web-apps. user is presented with UI to approve "installation" of each web app.
  • Are all the special cases for login (race condition protection, limited invocability) only needed for login actions?
  • Phishing -- does this feature make phishing easier? We are depending on the address bar being visible to protect against phishing.
    • all the normal indicators remain for the login scenario to help users not get phished (i.e. address bar)
    • But aren't we getting rid of the address bar in many cases?
      • the login popup for auth would always have the url bar
  • Redirect handling - must make sure to use final target of redirect when comparing domains (manifests must come from same url as web app etc)
  • SSL/security/trust indicators with hidden interactions that are happening off the current page
  • Structured clone is more powerful than JSON and a much bigger risk; Mike is not sure it is necessary to allow more than JSON, but what about images? video?
  • Unifying local data access security model and UX with web data access and UX

Conclusions / Action Items

  • [bsterne] will need to assign someone for penetration testing
  • [bsterne] threat model, further small discussion
  • [dchan] code review
  • [Sid] privacy review
  • More threat brainstorming/modelling needed
  • Talk to jst about popup blocker code

Items to be reviewed:

  1. F1 :: retooled version of the link sharing service we looked at in May (https://wiki.mozilla.org/Security/Reviews/F1)

Agenda:

Introduce Feature

  • method for allowing users to share content on their social networks & later email
    • currently only Twitter and FB
      • Twitter currently requires OAuth
  • F1 is now a mediator for the "share" activity
    • installs specialized webapps for facebook and twitter to bootstrap sharing

Differs from OWA (Open Web Apps) because:

  • mediator for F1 is more elaborate than default for OWA
  • provides OAuth as an authentication api
    • pops up a login dialog
  • we would prefer to not use OAuth where possible (depends on service providers)
    • Yes, just "native" OWA if possible.

Goal of Feature, what is trying to be achieved (problem solved, use cases, etc)

  • Attempting to make the sharing of web data easier for users
    • remove the NASCAR effect of sharing buttons on an item

What solutions/approaches were considered other than the proposed solution?

  • The client-server architecture of the previous version has been abandoned for a browser-only solution using OWA
  • possibly build F1 into OWA directly to avoid cross-application issues

Why was this solution chosen?

  • better privacy protection for users
  • does not put Mozilla in a position to hold possibly private data / auth secrets for the user

Any security threats already considered in the design and why?

  • ^^ see previous discussion & OWA items

Threat Brainstorming

  • Screenshot image leakage (potentially sensitive data shows up in screenshots that are shared)
    • only works for email, which is not in the current implementation; might be dropped due to privacy concerns
    • Shane says probably it will just be pulled out.
  • Can arbitrary content invoke the OAuth flow/dialog ?
    • as of right now yes, this is a property of the injector that needs to be fixed
    • by design no, this is due to reuse of injector code
      • good thing to test during implementation review/penetration testing
  • potential clickjacking due to dialog being displayed over content, possibly phishing also by mimicking the experience (particularly in full screen mode)
    • potential mitigation - exit fullscreen mode when dialog is shown
  • Starting Share/F1 (or any activity) could be the "new window.open()"
    • jstenback is the person to talk to about trusted events being required for startActivity

Conclusions / Action Items

  • [scaraveo] Need to figure out if the temporary part for Twitter OAuth will end up in the product, or if we can cut it out before the first release.
  • [scaraveo] Final decision on screenshot thumbnail sharing
    • This decision will need to be communicated back to secteam
  • [scaraveo] bug to track fixing the OAuth flow/dialog/injector

Threat Modeling

Additional data in privacy review: https://wiki.mozilla.org/Privacy/Reviews/F1A; alpha plan: https://wiki.mozilla.org/Labs/F1/AlphaPlan

  • SMTP Threats
    • Some addons might be tempted to use this addon to send spam
  • How much of the UI/implementation is dynamically loaded over the network?
    • There is no remotely-loaded content in the Alpha release; resources are loaded from the addon itself into sub-iframes
    • In future releases, some parts of the UI will be dynamically loaded, e.g. icons for service providers?
  • Thumbnails
    • Page screenshot thumbnail code has been removed for this alpha release
  • Follow-up Things
    • Review for Injection attacks --> bsterne to file bug
      • Data from content is being shared, but it isn't shared *by* content
      • Content can influence what data is pre-filled using OGP tags, makes it easier to mount injection attacks if there are any vulnerabilities
      • Fuzz testing?
    • SMTP code: https://github.com/mozilla/fx-share-addon/tree/feature/gmail/lib/email
      • Need to check SMTP code against injection attacks / proper escaping (see the sketch after this list)
    • Make sure that the JetPack panel (used for preview) uses type="content" - verified, type is content.
    • Share preview addon with secteam@mozilla.com
    • Come up with a way to sign this addon (not necessary for alpha release)
  • Pages cannot trigger the sharing process in this alpha release
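
On the SMTP escaping item above, a minimal illustration of the kind of check needed; the helper and the shareData object are hypothetical. The idea is to reject CR/LF in header values so content-influenced data, such as OGP-derived titles, cannot smuggle extra headers or commands into the SMTP stream.

    // Hypothetical header-injection guard for content-influenced values.
    function sanitizeHeaderValue(value) {
      if (/[\r\n]/.test(value)) {
        throw new Error("header value contains CR/LF; possible injection attempt");
      }
      return value;
    }

    // shareData: hypothetical object holding the pre-filled share fields.
    var subject = sanitizeHeaderValue(shareData.title);
    var to = sanitizeHeaderValue(shareData.recipient);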

page scraping: https://github.com/mozilla/fx-share-addon/blob/feature/gmail/lib/panel.js#L271