Firefox/Projects/Multitouch Polish/DOM Events: Difference between revisions

Touch input may also provide detailed information about the contact area or pressure, but this depends on the platform and the type of screen. We already have a MozPressure attribute on MouseEvent, which is currently only used in some GTK code. Win7 provides the width and height of the contact area.
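As a sketch of how a page could consume this today (assuming Gecko's <code>event.mozPressure</code>, which reports a value in the 0.0–1.0 range on supporting hardware and may be 0 or undefined elsewhere):

```javascript
// Read the pressure of a pointer event, falling back to a default
// when the platform does not report one. mozPressure is Gecko-specific;
// the fallback value here is an arbitrary choice for illustration.
function readPressure(event, fallback) {
  var p = event.mozPressure;
  return (typeof p === "number" && p > 0) ? p : fallback;
}

// Usage sketch: scale a brush by pressure on mouse movement.
// document.addEventListener("mousemove", function (e) {
//   brush.size = 10 * readPressure(e, 0.5);
// });
```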


== Questions to ask ==


==== Aggregated values ====
For some applications, getting the information for all of the touch points at the same time is important. We send separate events for each touch, so this information is not directly available. But it can easily be supported by a simple JS library which keeps track of the currently active points. Do we leave it simple and let a JS library do the work if needed? Or should we make this information always available?
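Such a library could be very small. A minimal sketch, assuming per-touch events that carry a point identifier and coordinates (the field names <code>pointerId</code>, <code>clientX</code>, and <code>clientY</code> are illustrative assumptions, not a settled API):

```javascript
// Aggregates per-touch down/move/up events into a map of the
// currently active touch points, so a page can ask for all of
// them at once even though each event describes a single touch.
function TouchTracker() {
  this.active = {}; // pointerId -> { x, y }
}
TouchTracker.prototype.down = function (e) {
  this.active[e.pointerId] = { x: e.clientX, y: e.clientY };
};
TouchTracker.prototype.move = function (e) {
  if (this.active[e.pointerId]) {
    this.active[e.pointerId] = { x: e.clientX, y: e.clientY };
  }
};
TouchTracker.prototype.up = function (e) {
  delete this.active[e.pointerId];
};
TouchTracker.prototype.points = function () {
  var out = [];
  for (var id in this.active) out.push(this.active[id]);
  return out;
};
```

The page would wire the three methods to the corresponding touch events and call <code>points()</code> whenever it needs the aggregate view.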


WebKit implemented some multitouch events on the iPhone, which will be available on Android as well. How should we take these into account? Their model is quite different from the typical event model: they provide the list of all touches on a single event, and values like event.clientX don't exist on the event itself. There are also three lists with different rules for the target nodes, some of which keep sending events to the original target, and this can break the model if there are dynamic changes on the page.
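To illustrate the difference: in the WebKit model the coordinates live on the individual Touch entries inside <code>event.touches</code> (alongside <code>targetTouches</code> and <code>changedTouches</code>), not on the event. A sketch of reading them:

```javascript
// In the WebKit multitouch model, one event carries every current
// contact in event.touches; each Touch entry has its own clientX/Y.
// The event object itself has no clientX, unlike a MouseEvent.
function touchPositions(event) {
  var out = [];
  for (var i = 0; i < event.touches.length; i++) {
    var t = event.touches[i];
    out.push({ x: t.clientX, y: t.clientY });
  }
  return out;
}
```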


==== Touch gestures vs. touch input ====
Using gestures and input at the same time is an ambiguous interaction. For example, if a finger is moved from the bottom to the top of the screen, how can we know whether the desired action is to pan (scroll) the page, or to have touch events sent about the movement? Is this up to the web page to decide? How can it switch modes, and which modes could work at the same time?
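One possible convention (an assumption for illustration, not a decided design) is to let the page claim the movement by cancelling the event, so the browser only performs the pan gesture when the page declines:

```javascript
// Sketch: the page opts out of the default pan/scroll gesture by
// cancelling the touch-move event; otherwise the browser handles it.
// The pageWantsRawTouch flag stands in for whatever mode switch the
// page uses and is purely hypothetical.
function handleTouchMove(event, pageWantsRawTouch) {
  if (pageWantsRawTouch) {
    event.preventDefault(); // claim the movement; no pan occurs
    return "page";
  }
  return "browser"; // let the browser pan the page
}
```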