Gecko:CrossProcessLayers



= Important cases on fennec =
Will the browser process need to see all layers in a content process, or just a single container/image/screen layer?  Need plan for optimal performance (responsiveness and frame rate) for the following cases.
* '''Panning''': browser immediately translates coords of either single screen layer or container layer before delivering event to content process.  Content (or browser) later uses event to update region painting heuristics.
* '''Volume rocker zoom''': browser immediately sets scaling matrix for either single screen layer or container layer (fuzzy zoom).  Content (or browser) later uses event to update region painting heuristics.
* '''Double-tap zoom'''
** Question: How long does it typically take to determine the zoom target?
** Single screen layer: need to propagate event into content process before repainting so that it can determine target of zoom.
** Container layer: can we use layer-tree heuristics to do a fuzzy zoom while content process figures out target?  (Better perceived responsiveness)
* '''Video'''
** Single screen layer: decoding needs to be done in content process (?).  Possibly better parallelism for SW-only decoding.  Content process controls frame rate allocation for multiple videos.  Harder to adjust frame rates across browser/content because it relies on the OS for CPU scheduling.
** Container layer: can extract video layers from content container and schedule centrally.  Browser (decoding thread) decodes all videos.  Possibly more efficient with HW accelerated decoding b/c can batch commands for several videos.  Easier to allocate frame rates across all visible videos.
* '''CSS transforms and SVG filters''': scheduling work browser/content probably needs to be viewed as distributed optimization problem.  For SW-only transforms/filters, probably want to do as much work as possible in content process.  Unclear for HW acceleration, but that's future work.
* '''Animations''': cjones doesn't know enough to comment on this.
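The "immediately translates/scales" idea in the panning and zoom cases above can be sketched as a browser-side transform applied to a content process's layer before content has repainted.  This is a minimal illustration, not Gecko code; the names (`ShadowLayer`, `PanBy`, `ZoomBy`) and the uniform-scale transform are assumptions for the sketch.

```cpp
#include <cassert>

// Minimal 2D transform the browser process could apply immediately to a
// content process's container/screen layer, before the content process has
// repainted (the "fuzzy" pan/zoom above).  Illustrative only.
struct Transform2D {
  double scale = 1.0;   // uniform scale (volume rocker / double-tap zoom)
  double tx = 0.0;      // pan offset, device pixels
  double ty = 0.0;
};

struct ShadowLayer {
  Transform2D transform;

  // Browser thread: respond to a pan gesture without waiting on content.
  void PanBy(double dx, double dy) {
    transform.tx += dx;
    transform.ty += dy;
  }

  // Browser thread: fuzzy zoom about a screen focus point (fx, fy),
  // keeping that point fixed while scaling.
  void ZoomBy(double factor, double fx, double fy) {
    transform.scale *= factor;
    transform.tx = fx + (transform.tx - fx) * factor;
    transform.ty = fy + (transform.ty - fy) * factor;
  }
};
```

The point of the sketch: both operations only touch the published layer's transform, so the browser can stay responsive while the event is (later) forwarded to content to update its region-painting heuristics.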
General comment: I think we'll want the browser process to be able to see each content process's full published layer tree (which may not be equivalent to local layer tree).  Good scheduling of work is a tricky problem that likely changes per device and possibly per page; probably want to be flexible about which gfx operations are done in browser/content.  E.g., content process should be able to partially composite layer subtrees.
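One way to picture "published tree may not equal local tree": the content process can collapse a subtree it has already partially composited into a single leaf before publishing.  The sketch below assumes a hypothetical `precomposited` flag on layers; none of these names are real Gecko APIs.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative published-layer-tree node.  A subtree the content process has
// already composited collapses to a single leaf in the published tree.
struct PublishedLayer {
  bool precomposited = false;  // content already flattened this subtree
  std::vector<std::unique_ptr<PublishedLayer>> children;
};

// Number of layers the browser process would actually see.
int PublishedLayerCount(const PublishedLayer& l) {
  if (l.precomposited) return 1;  // whole subtree appears as one leaf
  int n = 1;
  for (const auto& c : l.children) n += PublishedLayerCount(*c);
  return n;
}
```

So a local tree of six layers, with one three-child subtree precomposited by content, publishes as only three layers to the browser.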
Question: for a given layer subtree, can we reasonably estimate how much CPU and GPU time the transformation/compositing operations will take?  Could use this information for distributed scheduling.
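A crude version of that estimate could just walk the subtree and weight each layer's pixel area by an operation cost factor.  Everything here (the `Layer` fields, the weights, "weighted pixels" as a cost unit) is an assumption for illustration.

```cpp
#include <cassert>
#include <memory>
#include <vector>

// Illustrative layer node carrying just enough state for a cost guess.
struct Layer {
  int width = 0, height = 0;
  double opWeight = 1.0;  // assumed higher for e.g. SVG filters/transforms
  std::vector<std::unique_ptr<Layer>> children;
};

// Rough compositing cost of a subtree, in "weighted pixels":
// sum over layers of (area * per-operation weight).
double EstimateCompositeCost(const Layer& root) {
  double cost = static_cast<double>(root.width) * root.height * root.opWeight;
  for (const auto& child : root.children)
    cost += EstimateCompositeCost(*child);
  return cost;
}
```

A scheduler could compare such estimates per subtree to decide which gfx work stays in content and which moves to the browser.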
cjones is in favor of a first implementation where the content process only publishes a single "screen layer".  Unsure how video fits into this, although decoding in content process seems fine.  Tentatively in favor of initially assuming content process can use GPU so that we can get baseline perf numbers to compare to if we decide to take away GPU access from content.