Firefox/Projects/Multitouch Polish/DOM Events
This wiki page describes the current state of the touch events implementation, and is a place to discuss what the format of these events should be, what information they should provide, etc.
The current implementation uses the Windows 7 touch API, but the design should be platform-agnostic.
Currently the events inherit from MouseEvent and add a streamId property that uniquely identifies a tracking point. On Win7 this id is provided by the OS/driver layer and is valid only while the same touch point is being tracked; after the finger is released, the id can be (and will be) reused.
- Can we expect every platform to give us the id?
Each event relates to a single touch point, so if three fingers are touching the screen, up to three MozTouchMove events may be dispatched per iteration of the message loop.
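Since each event carries one point, a handler can use streamId to tell fingers apart. A minimal sketch of this, assuming the MozTouchDown/MozTouchMove/MozTouchRelease names and streamId/clientX/clientY fields described above (the handler operates on plain event-like objects; the trail structure is purely illustrative):

```javascript
// Keep a per-finger trail of positions, keyed by streamId.
const trails = new Map(); // streamId -> [{x, y}, ...]

function handleTouch(e) {
  if (e.type === "MozTouchDown") {
    trails.set(e.streamId, [{ x: e.clientX, y: e.clientY }]);
  } else if (e.type === "MozTouchMove") {
    const trail = trails.get(e.streamId);
    if (trail) trail.push({ x: e.clientX, y: e.clientY });
  } else if (e.type === "MozTouchRelease") {
    trails.delete(e.streamId); // the id may be reused for a new finger later
  }
}

// In a page this would be wired up per event type, e.g.:
// window.addEventListener("MozTouchMove", handleTouch, true);
```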
Things to add
Number of touch points
Some uses of touch events need to track several points at once. This can be handled by observing MozTouchDown/MozTouchRelease, but a field with the current number of touch points could be added to simplify things.
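As a baseline, the count such a field would expose can already be maintained in script. A sketch, assuming only the event names above:

```javascript
// Maintain the number of active touch points from down/release events.
let activeTouches = 0;

function touchDown()    { activeTouches += 1; }
function touchRelease() { activeTouches = Math.max(0, activeTouches - 1); }

// window.addEventListener("MozTouchDown", touchDown, true);
// window.addEventListener("MozTouchRelease", touchRelease, true);
```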
Size and pressure
Touch input may also provide detailed information about the contact area or pressure, but this depends on the platform and the type of screen. MouseEvent already has a MozPressure attribute, which is currently used only in some GTK code. Win7 provides the width and height of the contact area.
Questions to ask
For some applications, getting information about all of the touch points at the same time is important. Since we send a separate event for each touch, this information is not directly available, but it can easily be provided by a simple JS library that keeps track of the currently active points. Should we make this information always available, or keep the events simple and let a JS library do the work when needed?
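The "simple JS library" mentioned above could look roughly like this sketch (all names illustrative; it relies on the standard DOM feature that an object with a handleEvent method can be passed to addEventListener):

```javascript
// A tiny tracker that turns per-point events into an "all current
// touches" snapshot, similar to what a built-in field would provide.
function createTouchTracker() {
  const active = new Map(); // streamId -> {x, y}
  return {
    handleEvent(e) {
      if (e.type === "MozTouchRelease") active.delete(e.streamId);
      else active.set(e.streamId, { x: e.clientX, y: e.clientY });
    },
    // Snapshot of every active point, usable from any handler.
    touches() { return Array.from(active.values()); },
  };
}

// const tracker = createTouchTracker();
// for (const t of ["MozTouchDown", "MozTouchMove", "MozTouchRelease"])
//   window.addEventListener(t, tracker, true);
```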
Compatibility with WebKit
WebKit implemented multitouch events on the iPhone, and they will be on Android as well. How should we take these into account? Their model is quite different from the typical event model: a single event carries the lists of all touches, so values like event.clientX do not exist on the event itself. There are three lists (touches, targetTouches and changedTouches) with different rules for the target nodes; some of them keep sending events to the original target, which can break the model if the page changes dynamically. There has also not been much effort to turn these events into standards.
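For comparison, a WebKit-style handler reads the point lists off the event itself; each touch entry carries an identifier and its own coordinates. A sketch against plain event-like objects (in WebKit these would be TouchList objects on a TouchEvent):

```javascript
// WebKit-style model: one event describes every touch at once.
function summarizeTouchEvent(e) {
  // e.clientX does not exist; coordinates live on the individual touches.
  return {
    total: e.touches.length,          // all touches currently on the screen
    changed: e.changedTouches.length, // touches that triggered this event
    ids: Array.from(e.touches, (t) => t.identifier),
  };
}
```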
Touch gestures vs. touch input vs. mouse events
Using gestures and raw touch input at the same time is ambiguous. For example, if a finger is moved from the bottom to the top of the screen:
- How can we know whether the desired action is to pan (scroll) the page, or to have touch events sent about the movement?
- Is this up to the web page to decide? How can it switch modes, and which modes can reasonably work at the same time?
- Is preventDefault() the right mechanism for this?
- Do we also send click and mousemove events? Should it be possible to prevent those as well?
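One model these questions suggest, sketched here purely as a possible design and not as implemented behavior: the page opts out of the default panning by calling preventDefault() on the touch event, and the browser pans only when no listener cancelled it. The dispatch simulation below is entirely hypothetical:

```javascript
// HYPOTHETICAL opt-out model: pan the page unless some listener
// cancels the touch event, mirroring how preventDefault() usually
// suppresses a default action.
function dispatchTouchMove(listeners, event) {
  let cancelled = false;
  event.preventDefault = () => { cancelled = true; };
  for (const listener of listeners) listener(event);
  return cancelled ? "deliver-to-page" : "pan-page"; // default action: pan
}
```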
These questions need to be taken to the DOM list for comments, with the goal of eventually producing a spec.