For context, see [[Gecko:Layers]]

= Proposal =
Assume we have a master process M and a slave process S. M and S maintain their own local layer trees M_l and S_l. M_l may have a leaf RemoteContainer layer R into which updates from S are published. The contents of R are immutable wrt M, but M may freely modify R itself (e.g. its position in M_l). R contains the "shadow layer tree" R_s published by S. R_s is semantically a copy of a (possibly) partially-composited S_l.
Updates to R_s are atomic wrt painting. When S wishes to publish updates to M, it sends an "Update(cset)" message to M containing all R_s changes to be applied. This message is processed in its own "task" ("event") in M. This task will (?? create a layer tree transaction and ??) apply cset. cset will include layer additions, removals, and attribute changes. Initially we probably want Update(cset) to be synchronous. (Asynchronous layer updates seem counterproductive wrt perf and introduce too many concurrency problems; synchronous seems to be the way to go permanently.) Under the covers (opaque to M), in-place updates will be made to existing R_s layers.
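The atomic-application idea can be sketched in C++. Everything here — the <code>Edit</code> variants, integer layer ids, and the <code>ShadowTree</code> type — is a hypothetical stand-in for the real layer structures, not Gecko code:

```cpp
#include <map>
#include <variant>
#include <vector>

// Hypothetical stand-ins for real layer edits carried in a cset.
struct Insert { int layerId; int afterId; };
struct Remove { int layerId; };
struct SetOpacity { int layerId; double opacity; };
using Edit = std::variant<Insert, Remove, SetOpacity>;

// Stand-in for the shadow tree R_s, tracking one attribute per layer.
struct ShadowTree {
  std::map<int, double> opacity;

  // Applies the whole changeset within a single task, so the compositor
  // never observes a half-applied update (atomic wrt painting).
  void ApplyUpdate(const std::vector<Edit>& cset) {
    for (const Edit& e : cset) {
      std::visit([this](auto&& op) { Apply(op); }, e);
    }
  }

 private:
  void Apply(const Insert& op) { opacity[op.layerId] = 1.0; }
  void Apply(const Remove& op) { opacity.erase(op.layerId); }
  void Apply(const SetOpacity& op) { opacity[op.layerId] = op.opacity; }
};
```

In-place mutation of existing R_s layers (rather than rebuilding the tree) is what makes the "semantically a copy" claim cheap: M only ever sees the tree between tasks, so it cannot distinguish in-place updates from a fresh copy.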
Question: how should M publish updates of R_s to its own master MM? One approach is to apply Update(cset) to R_s, then synchronously publish Update(cset union M_cset) to its master MM. This is an optimization that allows us to maintain copy semantics without actually copying.
Video decoding fits into this model: decoders will be a slave S that's a thread in a content process, and the decoders publish updates directly to a master M that's the compositor process. The content and browser main threads will publish special "placeholder" video layers that reference "real" layers in the compositor process.
= Implementation =

The process architecture for the first implementation will look something like the following.

[[File:Cross-process-layers-v0.png|thumb|650px|v0 process architecture]]

The big boxes are processes and the circles next to squiggly lines are threads. <code>PFoo</code> refers to an IPDL protocol, and <code>PFoo(Parent|Child)</code> refers to a particular actor.

Layer trees shared across processes may look something like

[[File:Layer-trees-v0.png|thumb|650px|v0 layer tree]]

Like the first diagram, big boxes are processes. The circled item in the Content process is a decoder thread. The arrows drawn between processes indicate the direction that layer-tree updates will be pushed. All updates will be made through a <code>PLayers</code> protocol. This protocol will likely roughly correspond to a "remote layer subtree manager". A <code>ShadowContainerLayer</code> corresponds to the "other side" of a remote subtree manager; it's the entity that receives layer tree updates from a remote process. The dashed line between the decoder thread and the Content main thread just indicates their relationship; it's not clear what Layers-related information they'll need to exchange, if any.

NB: this diagram assumes that a remote layer tree will be published and updated in its entirety. However, it sometimes might be beneficial to partially composite a subtree before publishing it. This exposition ignores that possibility because it's the same problem wrt IPC.

It's worth calling out in this picture the two types of (non-trivial) shared leaf layers: <code>ShmemLayer</code> and <code>PlaceholderLayer</code>. A <code>ShmemLayer</code> will wrap IPDL <code>Shmem</code>, using whichever backend is best for a particular platform (might be POSIX, SysV, VRAM mapping, ...). Any process holding on to a <code>ShmemLayer</code> can read and write the buffer contents within the constraints imposed by [[IPDL/Shmem|<code>Shmem</code> single-owner semantics]]. This means, e.g., that if Plugin pushes a <code>ShmemLayer</code> update to Content, Content can twiddle with Plugin's new front buffer before pushing the update to Chrome, and similarly for Chrome before pushing Chrome-->Compositor. We probably won't need this feature.
A <code>PlaceholderLayer</code> refers to a "special" layer with a buffer that's inaccessible to the process holding the <code>PlaceholderLayer</code> reference. Above, it refers to the video decoder thread's frame layer. When the decoder paints a new frame, it can immediately push its buffer to Compositor, bypassing Content and Chrome (which might be blocked for a "long time"). In Compositor, however, the <code>PlaceholderLayer</code> magically turns into a <code>ShmemLayer</code> with a readable and writeable front buffer, so new frames can be painted there immediately with proper Z-ordering wrt content and chrome layers. Content can arbitrarily fiddle with the <code>PlaceholderLayer</code> in its subtree while new frames are still being drawn, and on the next Content-->Chrome-->Compositor update push, the changes to <code>PlaceholderLayer</code> position/attributes/etc. will immediately take effect.
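The placeholder-to-real-layer handoff in Compositor can be sketched as a registry lookup. The <code>CompositorRegistry</code> type and its method names below are hypothetical illustrations, not the actual interface:

```cpp
#include <cstdint>
#include <map>
#include <memory>

// Stand-in for the real shared-buffer layer pushed by a decoder thread.
struct ShmemLayer { uint64_t frameCount = 0; };

// Hypothetical Compositor-side registry: resolves a PlaceholderLayer's
// stable id to the "real" layer, so a placeholder in Content's published
// subtree composites with the decoder's latest frame in correct Z-order.
class CompositorRegistry {
 public:
  // Called when the decoder first publishes its frame layer.
  void RegisterRealLayer(uint64_t id, std::shared_ptr<ShmemLayer> layer) {
    mReal[id] = std::move(layer);
  }

  // Called while compositing Content's tree: swap the placeholder for
  // the real layer if the decoder has published one.
  std::shared_ptr<ShmemLayer> Resolve(uint64_t placeholderId) const {
    auto it = mReal.find(placeholderId);
    return it == mReal.end() ? nullptr : it->second;
  }

 private:
  std::map<uint64_t, std::shared_ptr<ShmemLayer>> mReal;
};
```

This indirection is what lets decoder frame pushes bypass Content and Chrome entirely: only the id travels through the Content-->Chrome-->Compositor update path, while the buffer goes straight to Compositor.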
== Strawman PLayers protocol ==

The rough idea here is to capture all modifications made to a layer subtree during a <code>RemoteLayerManager</code> transaction, package them up into a "changeset" IPC message, send the message to the "other side", then have the other side replay that changeset on its shadow layer tree.
 // XXX: not clear whether we want a single PLayer managee or PColorLayer, PContainerLayer, et al.
 include protocol PLayer;
 
 // Remotable layer tree
 struct InternalNode {
   PLayer container;
   Node[] kids;
 };
 
 union Node {
   InternalNode;
   PLayer;  // leaf
 };
 
 // Tree operations comprising an update
 struct Insert { Node x; PLayer after; };
 struct Remove { PLayer x; };
 struct Paint { PLayer x; Shmem frontBuffer; };
 struct SetOpaque { PLayer x; bool opaque; };
 struct SetClipRect { PLayer x; gfxRect clip; };
 struct SetTransform { PLayer x; gfxMatrix transform; };
 // ...
 
 union Edit {
   Insert;
   Remove;
   Paint;
   SetOpaque;
   SetClipRect;
   SetTransform;
   // ...
 };
 
 // Reply to an Update()
 // buffer-swap reply sent in response to Paint()
 struct SetBackBuffer { PLayer x; Shmem backBuffer; };
 // ...?
 union EditReply {
   SetBackBuffer;
   // ...?
 };
 
 // From this spec, singleton PLayersParent/PLayersChild actors will be
 // generated. These will be singletons-per-protocol-tree roughly
 // corresponding to a "RemoteLayerManager" or somesuch
 sync protocol PLayers {
   // all the protocols with a "layers" mix-in
   manager PCompositor or PContent or PMedia or PPlugin;
   manages PLayer;
 
 parent:
   sync Publish(Node root);
   sync Update(Edit[] cset)
     returns (EditReply[] reply);
   // ... other lifetime management stuff here
 
 state INIT:
   recv Publish goto UPDATE;
 state UPDATE:
   recv Update goto UPDATE;
   // ... other lifetime management stuff here
 };
What comprises a changeset is eminently fungible. For example, some operations may only apply to certain types of layers; IPDL's type system can capture this, if desired.

 union ClippableLayer { PImageLayer; PColorLayer; /*...*/ };
 struct SetClip { ClippableLayer x; gfxRect clipRect; };

Need lots of input from roc/Bas/jrmuizel here.
== Recording modifications made to layer trees during transactions ==
It's not clear yet what the best way to implement this is. One approach would be to add an optional <code>TransactionObserver</code> member to <code>LayerManager</code>s; the observer would be notified on each layer tree operation. For remoting layers, we would add a <code>TransactionObserver</code> implementation that bundles up modifications into an <code>nsTArray<Edit></code> per above. A second approach would be for <code>LayerManager</code>s to optionally record all tree modifications internally, then invoke an optional callback on transaction End(). For remoting, this callback would transform the manager's internal changeset format into <code>nsTArray<Edit></code>.
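The first approach (an observer hook) might look something like the following sketch. The class and method names are hypothetical, standard containers stand in for <code>nsTArray</code>, and only one mutation is shown:

```cpp
#include <string>
#include <utility>
#include <vector>

// Stand-in for the Edit union sent over PLayers.
struct Edit { std::string op; int layerId; };

// Hypothetical observer interface: notified on each tree mutation made
// during a transaction.
class TransactionObserver {
 public:
  virtual ~TransactionObserver() = default;
  virtual void OnEdit(const Edit& e) = 0;
};

// Remoting implementation: bundles mutations into a changeset array
// that EndTransaction() would ship in a single Update(cset) message.
class RecordingObserver : public TransactionObserver {
 public:
  void OnEdit(const Edit& e) override { mCset.push_back(e); }
  std::vector<Edit> TakeChangeset() { return std::move(mCset); }
 private:
  std::vector<Edit> mCset;
};

// Sketch of a LayerManager with the optional observer member; only a
// single attribute setter is shown.
class LayerManager {
 public:
  void SetObserver(TransactionObserver* obs) { mObserver = obs; }
  void SetOpaque(int layerId, bool /*opaque*/) {
    // ... perform the local mutation, then notify, if anyone is listening.
    if (mObserver) mObserver->OnEdit({"SetOpaque", layerId});
  }
 private:
  TransactionObserver* mObserver = nullptr;  // null for local-only managers
};
```

The tradeoff between the two approaches is where the changeset format lives: with an observer, the remoting code owns it outright; with internal recording plus a callback, every <code>LayerManager</code> pays the bookkeeping cost even when nothing is remoted, unless recording is gated on a flag.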
== Strawman complete example ==

'''TODO''': assume processes have been launched per diagram above. Walk through layer tree creation and Publish()ing. Walk through transaction and Update() propagation.
= Platform-specific sharing issues =

== X11 ==

When X11 clients die, the X server appears to free their resources automatically (apparently when the Display* socket closes?). This is generally good but a problem for us because we'd like child processes to conceptually "own" their surfaces, with parents just keeping an "extra ref" that can keep the surface alive after the child's death. We also want a backstop that ensures all resources across all processes are cleaned up when Gecko dies, either normally or on a crash. For Shmem image surfaces, this is easy because "sharing" a surface to another process is merely a matter of dup()ing the shmem descriptor (whatever that means per platform), and the OS manages these descriptors automatically.
'''Wild idea''': if the X server indeed frees client resources when the client's Display* socket closes, then we could have the child send the parent a dup() of the child's Display socket. The parent would close this dup on ToplevelActor::ActorDestroy() (like what happens automatically with Shmems), and the OS would close the dup automatically on crashes.
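The descriptor-lifetime mechanics this idea relies on can be sketched with plain POSIX calls; the function names are hypothetical, and this assumes a POSIX platform with F_DUPFD_CLOEXEC:

```cpp
#include <fcntl.h>
#include <unistd.h>

// Hypothetical sketch: the parent holds a dup() of the child's X Display
// socket fd. As long as any dup is open, the socket stays open and the
// X server keeps the child's resources alive.
int DupDisplayFd(int displayFd) {
  // CLOEXEC keeps the dup from leaking into further exec'd children.
  return fcntl(displayFd, F_DUPFD_CLOEXEC, 0);
}

// Called from something like ToplevelActor::ActorDestroy(): closing the
// last reference to the socket lets the X server reclaim the client's
// resources. If the parent crashes instead, the OS closes the fd for us,
// giving the crash backstop for free.
void ReleaseOnActorDestroy(int dupFd) {
  if (dupFd >= 0) {
    close(dupFd);
  }
}
```

The appeal is that no explicit cleanup protocol is needed: descriptor lifetime is managed by the kernel, exactly as it already is for Shmem descriptors.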