Accessibility/Planning/Android
Revision as of 14:50, 1 June 2011
Note that there is a recent Google I/O video on Android accessibility.
Questions
- What is Google's approach to Android browser accessibility? (TalkBack vs. ChromeVox)
- Are new APIs planned that would allow non-focused elements to send AccessibilityEvents, or to query information about arbitrary Views?
Implementation ideas
- We can use three different approaches:
  - Inject ChromeVox into Firefox Mobile, which uses the Android framework TTS directly for self-voicing (both Chrome OS and the Android native browser work this way); see the first sketch after this list
  - Inject FireVox
  - Use our own accessibility layer and, based on our accessible events, fire AccessibilityEvents so that TalkBack speaks; see the second sketch after this list
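For the ChromeVox option, the injected script's speech requests ultimately have to reach the Android framework TTS engine. Here is a minimal sketch of driving that engine from Java; TextToSpeech and its methods are real framework APIs, while the Activity and the spoken string are only illustrative:

 import android.app.Activity;
 import android.os.Bundle;
 import android.speech.tts.TextToSpeech;
 
 import java.util.Locale;
 
 public class SelfVoicingSketch extends Activity implements TextToSpeech.OnInitListener {
     private TextToSpeech tts;
 
     @Override
     protected void onCreate(Bundle savedInstanceState) {
         super.onCreate(savedInstanceState);
         // Engine construction is asynchronous; speech is only available
         // once onInit() reports SUCCESS.
         tts = new TextToSpeech(this, this);
     }
 
     @Override
     public void onInit(int status) {
         if (status == TextToSpeech.SUCCESS) {
             tts.setLanguage(Locale.US);
             // QUEUE_FLUSH interrupts anything currently being spoken,
             // matching typical screen-reader behaviour on focus changes.
             tts.speak("Link, Mozilla home page", TextToSpeech.QUEUE_FLUSH, null);
         }
     }
 
     @Override
     protected void onDestroy() {
         // Release the engine along with the activity.
         if (tts != null) {
             tts.shutdown();
         }
         super.onDestroy();
     }
 }

Since initialization is asynchronous, any ChromeVox speech calls arriving before onInit() would have to be queued or dropped.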
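For the third option, the framework side would look roughly like this: populate an AccessibilityEvent and hand it to the AccessibilityManager, which delivers it to TalkBack. The announce() helper below is hypothetical; the event plumbing is the real framework API:

 import android.content.Context;
 import android.view.accessibility.AccessibilityEvent;
 import android.view.accessibility.AccessibilityManager;
 
 public class AccessibilityEventSketch {
 
     // announce() is a hypothetical helper; in Firefox Mobile the event
     // would be populated from our accessible tree, not a plain string.
     public static void announce(Context context, CharSequence text) {
         AccessibilityManager manager = (AccessibilityManager)
                 context.getSystemService(Context.ACCESSIBILITY_SERVICE);
         if (!manager.isEnabled()) {
             return; // No accessibility service (e.g. TalkBack) is running.
         }
         AccessibilityEvent event =
                 AccessibilityEvent.obtain(AccessibilityEvent.TYPE_VIEW_FOCUSED);
         event.getText().add(text);
         event.setClassName(AccessibilityEventSketch.class.getName());
         event.setPackageName(context.getPackageName());
         // Hand the event to the framework, which delivers it to TalkBack.
         manager.sendAccessibilityEvent(event);
     }
 }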
Tasks
In order to evaluate these options, we need to figure out the following:
- Check if we can easily expose Java objects from the main application to our JS engine (for TTS calls inside ChromeVox); see the bridge sketch after this list
- Check if the interaction model of ChromeVox works well for us (does it speak mostly when something is focused? If so, are we moving focus in the same way WebKit does, allowing all elements to be focused?)
- Explore whether we can achieve similar results by having our accessible code fire AccessibilityEvents
- Check the accessibility of Firefox chrome widgets (dialogs, preferences, etc.)
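On the Java-objects-to-JS question: Firefox Mobile runs on Gecko, not Android's WebView, so the sketch below is only an analogy. It uses the stock WebView bridge (addJavascriptInterface, a real framework API) to show the shape of what ChromeVox injection would need; the AndroidTts name and the SpeechBridge class are hypothetical:

 import android.app.Activity;
 import android.os.Bundle;
 import android.speech.tts.TextToSpeech;
 import android.webkit.JavascriptInterface;
 import android.webkit.WebView;
 
 public class BridgeSketch extends Activity {
     private TextToSpeech tts;
 
     // Annotated public methods on this object become callable from page JS.
     class SpeechBridge {
         @JavascriptInterface
         public void speak(String text) {
             tts.speak(text, TextToSpeech.QUEUE_FLUSH, null);
         }
     }
 
     @Override
     protected void onCreate(Bundle savedInstanceState) {
         super.onCreate(savedInstanceState);
         tts = new TextToSpeech(this, status -> { /* engine ready */ });
         WebView webView = new WebView(this);
         webView.getSettings().setJavaScriptEnabled(true);
         // Injected ChromeVox code could then call: AndroidTts.speak("hello")
         webView.addJavascriptInterface(new SpeechBridge(), "AndroidTts");
         setContentView(webView);
     }
 }

The open question for us is whether Gecko's embedding layer offers an equivalent hook for exposing such an object to injected chrome-privileged JS.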