Firefox OS Accessibility Roadmap for 2016

Introduction

The information below provides a rough timeline for the features we plan to introduce into the FxOS accessibility stack during 2016.

Magnification

Estimated implementation timeframe: Q3 2016

We are uniquely positioned as a platform to provide a very compelling magnification experience. Web content can be scaled in a lossless manner: all non-raster content remains sharp. Several challenges need to be overcome before we can support this feature.

Gestures

Currently, the screen reader intercepts all content touch events and translates them to gestures. The touch events are not propagated to the content. Instead, all interaction is redirected through the screen reader.

Users expect to use the magnifier with or without the screen reader. This requires us to support two different gesture capture paradigms: with the screen reader, the magnification zoom and pan gestures need to be intercepted like any other gesture; without the screen reader, the magnifier gestures should work similarly to our async pan zoom controller (APZ) implementation, where touch events are either used to pan and zoom content or passed through to interact with it.

These two scenarios may require two different code paths; a rough sketch of the split follows.
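As an illustration only, here is a minimal TypeScript sketch of how touch events might be routed under the two paradigms. The `Mode`, `TouchRouter`, and handler interfaces are hypothetical names for this sketch, not part of the actual FxOS implementation.

```typescript
// A sketch only: hypothetical types for routing touch input under the
// two capture paradigms described above.
type Mode = "screen-reader" | "magnifier-only";

interface TouchLikeEvent {
  type: "touchstart" | "touchmove" | "touchend";
  touches: { x: number; y: number }[];
}

class TouchRouter {
  constructor(
    private mode: Mode,
    private screenReader: { handleGesture(e: TouchLikeEvent): void },
    private apz: { handlePanZoom(e: TouchLikeEvent): void },
    private content: { dispatch(e: TouchLikeEvent): void },
  ) {}

  route(e: TouchLikeEvent): void {
    if (this.mode === "screen-reader") {
      // With the screen reader running, every touch event is consumed
      // and interpreted as a gesture; content never sees it.
      this.screenReader.handleGesture(e);
      return;
    }
    // Without the screen reader, mimic the APZ split: multi-finger
    // input pans/zooms the magnifier, single-finger input goes
    // through to content untouched.
    if (e.touches.length > 1) {
      this.apz.handlePanZoom(e);
    } else {
      this.content.dispatch(e);
    }
  }
}
```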

Zoom / Pan

The work required to zoom/pan the top-level content frame is not trivial. This will require working with the graphics team.
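To give a flavor of the geometry involved, here is a minimal sketch of clamping a magnifier viewport so it never shows area outside the content frame. All names are illustrative; the real work will live in the compositor alongside APZ, not in script.

```typescript
// Illustrative viewport model for a fullscreen magnifier.
interface Viewport {
  scale: number; // magnification factor, >= 1
  x: number;     // top-left of the magnified window, in content pixels
  y: number;
}

function clampViewport(
  vp: Viewport,
  contentWidth: number,
  contentHeight: number,
): Viewport {
  const scale = Math.max(1, vp.scale);
  // At scale s, the visible window covers 1/s of the content per axis.
  const visibleW = contentWidth / scale;
  const visibleH = contentHeight / scale;
  return {
    scale,
    x: Math.min(Math.max(0, vp.x), contentWidth - visibleW),
    y: Math.min(Math.max(0, vp.y), contentHeight - visibleH),
  };
}
```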

Screen Reader Tutorial

Estimated implementation timeframe: Q2 & Q3 2016

We have identified two kinds of learning tools we may want to implement. Both require the screen reader to give the content being read special status and certain exceptions to its normal behavior. We see these two tools being developed as stand-alone apps. One or the other will start when the screen reader is launched for the first time. They will also be accessible through the screen reader settings panel.

Practice Tool

In this tool, the user can perform gestures on a blank screen. The screen reader will tell the user what gesture they performed and what function it serves. This is similar to the VoiceOver practice tool. It allows the user to experiment in a controlled environment and get a feel for what different interactions do.
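A minimal sketch of the feedback loop this implies, assuming a hypothetical `speak` callback and an illustrative gesture set (not the real FxOS gesture names):

```typescript
// Illustrative mapping from recognized gestures to spoken help.
const gestureHelp: Record<string, string> = {
  "swipe-right": "Swipe right: move to the next item.",
  "swipe-left": "Swipe left: move to the previous item.",
  "double-tap": "Double tap: activate the current item.",
  "explore": "Touch and drag: explore the screen by touch.",
};

function announceGesture(
  speak: (text: string) => void,
  gesture: string,
): void {
  // In practice mode the gesture is named and explained, but its
  // normal screen-reader action is suppressed.
  speak(gestureHelp[gesture] ?? "Unrecognized gesture.");
}
```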

This tool would be relatively simple to create, and we estimate it could be done in the first quarter of 2016.

Tutorial

A tutorial walks the user through the basic operations needed for simple tasks, such as activating items, swiping, and exploring by touch. It would be built on top of the work done for the practice tool above, as the sketch below suggests.
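One way this reuse could look, again with purely illustrative names: each tutorial step prompts for a gesture and checks it against the practice tool's gesture reports.

```typescript
// Hypothetical tutorial step structure built on the practice tool.
interface TutorialStep {
  prompt: string;           // spoken instruction for the user
  expectedGesture: string;  // gesture that completes the step
}

const steps: TutorialStep[] = [
  { prompt: "Touch and drag one finger to explore the screen.", expectedGesture: "explore" },
  { prompt: "Swipe right to move to the next item.", expectedGesture: "swipe-right" },
  { prompt: "Double tap to activate the current item.", expectedGesture: "double-tap" },
];

function onGesture(
  speak: (text: string) => void,
  step: TutorialStep,
  gesture: string,
): boolean {
  if (gesture === step.expectedGesture) {
    speak("Well done.");
    return true; // caller advances to the next step
  }
  // Name the gesture that was performed, then repeat the prompt.
  speak(`That was ${gesture}. ${step.prompt}`);
  return false;
}
```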

This app is more complex than the practice tool. We estimate there could be an initial version by the end of Q2 2016.

Text Navigation

Estimated implementation timeframe: Q2 & Q4 2016

Quicknav integration

We will first introduce reading by character/word as an option in the quick nav menu (double tap + hold). Swiping up or down will then move by character/word. This will be a relatively simple addition, and could be completed by the end of Q1.
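For illustration, a minimal sketch of what movement by granularity could look like over a flat text buffer; the function name and signature are assumptions for this sketch, not the actual implementation:

```typescript
type Granularity = "character" | "word";

// Move the reading position by one character or word. Positions are
// character offsets into the text.
function move(
  text: string,
  pos: number,
  granularity: Granularity,
  direction: 1 | -1,
): number {
  if (granularity === "character") {
    return Math.min(Math.max(pos + direction, 0), text.length);
  }
  // Word granularity: step over any whitespace, then over the word.
  const isSpace = (c: string) => /\s/.test(c);
  let i = pos;
  if (direction === 1) {
    while (i < text.length && isSpace(text[i])) i++;
    while (i < text.length && !isSpace(text[i])) i++;
  } else {
    while (i > 0 && isSpace(text[i - 1])) i--;
    while (i > 0 && !isSpace(text[i - 1])) i--;
  }
  return i;
}
```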

Advanced gesture

We are investigating an advanced gesture for moving by character/word. This is something that may be introduced in the latter half of 2016. It will require some dexterity from the user, so it will not replace the quick nav method above.

Caret & Clipboard Control

Estimated implementation timeframe: Q1 2017

Text editing with the screen reader is still a fragile experience. We plan to make these features more robust throughout the year.