SMIL:Animation Model

Animation model

An overview of the classes that make up the animation model is shown below:

animation-model.png

nsSMILAnimationController

The animation controller maintains the animation timer, so it is this object that determines the sample times and sample rate. There is at most one animation controller per nsPresContext so frame-rate tuning can be performed at this level. The additional interface nsIAnimationController is the one used by nsPresContext to minimise coupling of /layout/base with SMIL whilst still allowing some control of animations to be performed at this level. (Perhaps this is an interface that should be public, for example so extensions could pause all the animations on a web page? Or is there some other interface for stopping GIF animations that should be extended to include this?)

Currently the member of nsPresContext that points to this controller will be NULL for all documents. When a document containing SVG is loaded, the nsSVGSVGElement checks whether this member is NULL and, if it is, sets it to a new nsISMILAnimationController object. The controller object is required even for un-animated SVG in order to initialise the nsSMILTimedDocumentRoot with the appropriate start time for the document in case animation is later added via the DOM. (While this is strictly how it should work, I'm not sure how important it is. We could create the controller truly on demand. Animations created via the DOM might then produce technically incorrect results, as the document start would correspond to when the first animation element was added rather than to SVGLoad, but this might be acceptable.)

See also roc's original description of the requirements for an animation controller.
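
The following is a minimal sketch of the lazy-creation check described above. It is not the actual Mozilla code: the XPCOM machinery is omitted, the member and method names (animationController, BindToTree, DocumentStart) are placeholders, and plain standard C++ types stand in for nsPresContext and nsSVGSVGElement.

 // Simplified illustration only (not the real classes): the outermost <svg>
 // element installs a controller on the pres context if one is not already
 // present, so that the document start time is captured even when no
 // animation elements exist yet.
 #include <chrono>
 #include <memory>
 
 class SMILAnimationController {
 public:
   SMILAnimationController()
       : mDocumentStart(std::chrono::steady_clock::now()) {}  // document begin time
 
   std::chrono::steady_clock::time_point DocumentStart() const { return mDocumentStart; }
 
 private:
   std::chrono::steady_clock::time_point mDocumentStart;
 };
 
 struct PresContext {
   // Null for documents that contain no SVG.
   std::unique_ptr<SMILAnimationController> animationController;
 };
 
 class SVGSVGElement {
 public:
   explicit SVGSVGElement(PresContext& aPresContext) : mPresContext(aPresContext) {}
 
   // Called when the element is bound to the document tree.
   void BindToTree() {
     if (!mPresContext.animationController) {
       mPresContext.animationController = std::make_unique<SMILAnimationController>();
     }
   }
 
 private:
   PresContext& mPresContext;
 };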

nsSMILAnimationRegistry

The animation registry is not present in Schmitz's design. I've included it for three reasons.

1. It simplifies registration for animation elements and the outermost SVG element. One feature of Schmitz's design is the separation of the timing and animation models. With my implementation, however, this separation means it is necessary to register once with the time container (responsible for the timing model) and once with the list of compositors (Schmitz's design doesn't describe how this part works). This is a bit tedious, and needs to be performed not only by the <animate> element but also by every other animation element we implement and by the outermost <svg> element that owns the registries. To simplify all this I've tied the timing and animation models together with this one registry.

2. It allows per-sample operations to be performed at the appropriate time. Schmitz's model does not delve into integration issues such as suspending and unsuspending redrawing. This is, of course, a deliberate part of the design, but at some point the model must meet the real world, and I've chosen to do that here through the nsISMILAnimationObserver interface. This interface provides a few methods called at pertinent times so that operations such as suspending and unsuspending redrawing can be performed.

3. It allows the compositing to be controlled 'from above'. This is probably the most significant deviation from Schmitz's design. In his design the timing and animation models are very elegantly kept at arm's length through the time client interface (nsISMILTimeClient in my implementation). So how does the compositor know when to perform compositing? Well, the composable objects hand their results 'up' to the compositor and it counts them until it figures it has enough to proceed. Of course, some exceptions have to be accounted for, such as 'to' animation and relative values. Schmitz suggests callbacks could be used for this.

The implementation I've produced here operates in the opposite direction. The composables simply store the sample parameters provided through the nsISMILTimeClient interface. These parameters include information such as the simple time of the last sample. After all composables have been sampled the registry is told to start compositing. The compositor then iterates through the composables requesting their results as necessary.
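
To make this flow concrete, here is a minimal sketch of the registry driving things 'from above'. It is an illustration under assumptions, not the real interfaces: the ns prefixes and XPCOM details are dropped, the method names (RegisterAnimation, SampleAll, CompositeAll, StoreSample) are invented for the example, and a plain double stands in for the animated value type.

 // Illustrative only (not the real nsSMIL* classes): a registry that ties the
 // timing and animation models together. Animation elements register once; at
 // each tick every composable stores its sample parameters, then compositing
 // is driven 'from above', one compositor per target attribute.
 #include <map>
 #include <string>
 #include <vector>
 
 struct SampleParams {
   double simpleTime = 0.0;  // e.g. the simple time of the last sample
 };
 
 class Composable {
 public:
   virtual ~Composable() = default;
   virtual void StoreSample(const SampleParams& aParams) = 0;  // timing-model callback
   virtual void ComposeResult(double& aSandwichValue) = 0;     // add to or replace the value
 };
 
 class Compositor {
 public:
   void Add(Composable* aComposable) { mComposables.push_back(aComposable); }
 
   // Iterate from lowest to highest priority, asking each composable for its
   // result as necessary.
   void Composite(double& aBaseValue) {
     for (Composable* composable : mComposables) {
       composable->ComposeResult(aBaseValue);
     }
   }
 
 private:
   std::vector<Composable*> mComposables;
 };
 
 class AnimationRegistry {
 public:
   // One call registers with both the timing model (sampling) and the
   // animation model (compositing for the target attribute).
   void RegisterAnimation(const std::string& aTargetAttr, Composable* aComposable) {
     mComposables.push_back(aComposable);
     mCompositors[aTargetAttr].Add(aComposable);
   }
 
   // Step 1: the timing model samples every composable; each one simply
   // stores the parameters it is given.
   void SampleAll(const SampleParams& aParams) {
     for (Composable* composable : mComposables) {
       composable->StoreSample(aParams);
     }
   }
 
   // Step 2: after all composables have been sampled, compositing starts.
   void CompositeAll(std::map<std::string, double>& aBaseValues) {
     for (auto& entry : mCompositors) {
       entry.second.Composite(aBaseValues[entry.first]);
     }
   }
 
 private:
   std::vector<Composable*> mComposables;
   std::map<std::string, Compositor> mCompositors;
 };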

Some of the advantages of this approach are:

  • No special handling is required for 'to' animations, other than that they be composited at the appropriate point in the sandwich
  • The compositor does not need to combine results, or even know about the additive behaviour of its composable children (although it probably will as an optimisation)
  • The compositor is free to optimise as it sees fit by only requesting those composables that will actually affect the final result to calculate their results
  • Relative values can be recalculated in a more natural fashion (although I haven't yet implemented this)
  • Animations that are filling don't need to be resampled (they will simply re-use the parameters passed to them last time)
  • No problems with counts of composables getting out of sync
  • Knowledge of how different types of animations prioritise is confined to the composables themselves (and not the compositor)


The main disadvantage is coupling between the timing model and animation model. This coupling appears between the animation registry and the timed document root. However, I think the simplicity afforded by this approach warrants the extra coupling.

The registry also provides the implementation for several animation-related methods of the SVGSVGElement DOM interface.

nsISMILAnimationObserver

This interface allows a client to be informed of steps in the animation process. This is used by nsSVGSVGElement to suspend and unsuspend redrawing before and after compositing, as well as to batch the enumeration of animation nodes. (Without this batching it would re-enumerate the animation nodes in the entire tree for each node that was attached--a very costly operation if a subtree with several animation elements was grafted in, something on the order of O(n²).)
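
A rough sketch of the kind of hooks such an observer might provide is shown below. The method names (WillComposite, DidComposite, WillAddAnimations, DidAddAnimations) are guesses for illustration, not the actual nsISMILAnimationObserver interface, and the redraw and enumeration calls are left as stubs.

 // Illustrative only: an observer that lets the outermost <svg> element
 // suspend redraw around compositing and batch expensive work such as
 // re-enumerating animation nodes when a subtree is attached.
 class SMILAnimationObserver {
 public:
   virtual ~SMILAnimationObserver() = default;
   virtual void WillComposite() = 0;      // e.g. suspend redraw
   virtual void DidComposite() = 0;       // e.g. unsuspend redraw
   virtual void WillAddAnimations() = 0;  // start batching node enumeration
   virtual void DidAddAnimations() = 0;   // enumerate once for the whole batch
 };
 
 class OutermostSVGObserver final : public SMILAnimationObserver {
 public:
   void WillComposite() override { SuspendRedraw(); }
   void DidComposite() override { UnsuspendRedraw(); }
   void WillAddAnimations() override { mBatching = true; }
   void DidAddAnimations() override {
     mBatching = false;
     EnumerateAnimationNodes();  // a single pass instead of one per attached node
   }
 
 private:
   void SuspendRedraw() {}
   void UnsuspendRedraw() {}
   void EnumerateAnimationNodes() {}
   bool mBatching = false;
 };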

nsSMILCompositor

A compositor manages a collection of animations that target the same attribute. Each of these animations implements the nsISMILComposable interface. The compositor is responsible for calling these objects in order from lowest to highest priority according to the animation sandwich.

Each time an nsISMILComposable object is called it is passed the underlying value of the sandwich to which it may add its result or replace it (depending on the additive behaviour of the animation).

The compositor is responsible for re-compositing when a relative value changes (although this is not yet implemented) and performs optimisations such as not calling those objects that it determines will not contribute to the final result.
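
The sketch below illustrates this compositing loop under simplified assumptions: the ns prefixes are dropped, a double stands in for the animated value, and the method names follow those mentioned in this document (IsActive, WillReplace, CompareTo, ComposeResult) without claiming to match their real signatures.

 // Illustrative only: the compositing loop for one target attribute.
 #include <algorithm>
 #include <vector>
 
 class Composable {
 public:
   virtual ~Composable() = default;
   virtual bool IsActive() const = 0;     // does it contribute to the current sample?
   virtual bool WillReplace() const = 0;  // does it replace rather than add?
   virtual int CompareTo(const Composable& aOther) const = 0;  // sandwich priority
   virtual void ComposeResult(double& aSandwichValue) const = 0;
 };
 
 class Compositor {
 public:
   void Add(const Composable* aComposable) { mComposables.push_back(aComposable); }
 
   void ComposeAttribute(double& aValue) const {
     // Keep only the composables that affect this sample and sort them from
     // lowest to highest priority; the composables do the comparing themselves.
     std::vector<const Composable*> active;
     for (const Composable* composable : mComposables) {
       if (composable->IsActive()) {
         active.push_back(composable);
       }
     }
     std::sort(active.begin(), active.end(),
               [](const Composable* a, const Composable* b) {
                 return a->CompareTo(*b) < 0;
               });
 
     // Optimisation: results below the highest-priority replacing animation
     // cannot affect the final value, so start compositing from that point.
     size_t first = 0;
     for (size_t i = 0; i < active.size(); ++i) {
       if (active[i]->WillReplace()) {
         first = i;
       }
     }
 
     // Each composable is handed the underlying value of the sandwich, which
     // it may add to or replace.
     for (size_t i = first; i < active.size(); ++i) {
       active[i]->ComposeResult(aValue);
     }
   }
 
 private:
   std::vector<const Composable*> mComposables;  // all animations targeting one attribute
 };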

In implementing <animateMotion> we may register a single animation function against several target attributes. In this case it may be necessary to pass the target attribute to the composable during ComposeSample so that it can identify which attribute is currently being composited.

nsISMILComposable

This interface is implemented by animation function objects so that they can be manipulated by the compositor. The key method is ComposeResult, which takes the underlying value of the animation sandwich as a parameter and adds to or replaces this value.

Two further sets of methods are provided.

The first set, consisting of methods such as IsToAnimation, GetBeginTime, and GetDocumentPosition, is used by other nsISMILComposable objects to implement the CompareTo method so that composable objects can be sorted by the compositor. This allows the compositor to be ignorant of how to prioritise composable objects.

The other set of methods, IsActive and WillReplace, provides the compositor with extra information needed to optimise its operations by filtering out composable objects that will not affect the current sample.
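
As an illustration of how the first set of methods might feed into CompareTo, the sketch below orders composables by begin time and then by document position. This is only a guess at the prioritisation rules; in particular it ignores whatever role IsToAnimation plays in the real ordering, and the time and position types are placeholders.

 // Illustrative only: composables order themselves so that the compositor
 // need not know how different types of animation are prioritised.
 #include <cstdint>
 
 class Composable {
 public:
   virtual ~Composable() = default;
   virtual int64_t GetBeginTime() const = 0;          // begin time of the current interval
   virtual uint32_t GetDocumentPosition() const = 0;  // position of the element in the document
   virtual bool IsToAnimation() const = 0;            // not consulted in this simplified ordering
 
   // Returns <0 if this animation has lower priority than aOther, >0 if higher.
   int CompareTo(const Composable& aOther) const {
     // Later begin times take priority...
     if (GetBeginTime() != aOther.GetBeginTime()) {
       return GetBeginTime() < aOther.GetBeginTime() ? -1 : 1;
     }
     // ...with ties broken by document order.
     if (GetDocumentPosition() != aOther.GetDocumentPosition()) {
       return GetDocumentPosition() < aOther.GetDocumentPosition() ? -1 : 1;
     }
     return 0;
   }
 };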

nsSMILAnimationFunction

This interface and implementation provide the calculation of animation values for animation elements that interpolate, such as <animate> and <animateColor>. Later, when <set> is implemented, this class and interface may be split into nsSMILSimpleAnimFunc and nsSMILInterpolatingAnimFunc. <animateTransform> and <animateMotion> may be implemented as subclasses of this class or by adding extra parameters.

Not shown in the diagram is an UnsetXXX method corresponding to each of the SetXXX methods. All attribute parsing and handling, such as providing default values, is performed within this class. This allows the logic to be shared between all animation elements.
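
Below is a small sketch of the SetXXX/UnsetXXX pattern using calcMode as the example attribute. The parsing, the change flag, and the enum are simplified for illustration and are not the real nsSMILAnimationFunction code; the default of "linear" follows the SMIL specification.

 // Illustrative only: attribute parsing and defaulting live in the animation
 // function so that every animation element shares the same logic.
 #include <string>
 
 class SMILAnimationFunction {
 public:
   enum class CalcMode { Discrete, Linear, Paced, Spline };
 
   // SetXXX parses the attribute value and flags the function as changed.
   bool SetCalcMode(const std::string& aValue) {
     if (aValue == "discrete")      mCalcMode = CalcMode::Discrete;
     else if (aValue == "linear")   mCalcMode = CalcMode::Linear;
     else if (aValue == "paced")    mCalcMode = CalcMode::Paced;
     else if (aValue == "spline")   mCalcMode = CalcMode::Spline;
     else return false;             // unrecognised value: keep the default
     mHasChanged = true;
     return true;
   }
 
   // UnsetXXX restores the default value for the attribute.
   void UnsetCalcMode() {
     mCalcMode = CalcMode::Linear;  // default calcMode is "linear"
     mHasChanged = true;
   }
 
 private:
   CalcMode mCalcMode = CalcMode::Linear;
   bool mHasChanged = false;
 };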

nsISMILAnimAttr

This interface sits above the nsISMILAnimValue interface to wrap the animated and base value of an attribute together for querying by SMIL. It roughly corresponds to an nsSVGAnimatedXXX object whereas nsISMILAnimValue corresponds to the nsSVGXXX object. This interface could possibly be removed but I'm currently waiting to see how animated values will be implemented to determine if this is possible. Also, keeping this interface allows nsISMILAnimValue to be implemented as a lightweight object separate from the nsSVGXXX type. This approach is also supported by the methods of the interface. For example, only a copy of the base value is returned and the animated value is never accessed directly.

nsISMILAnimValue

This interface is the basic layer of indirection used by the animation model to manipulate different data types. The methods allow all the necessary calculations such as addition and repetition to be performed. Objects of this type are used frequently and so should be fairly lightweight. For example, when parsing values="20; 30; 15; 20; 60; 70; 80; 90" a new nsISMILAnimValue is created for each value in the array (by calling the factory methods in the nsISMILAnimAttr interface).
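
The sketch below illustrates this parsing path together with the base-value copying mentioned for nsISMILAnimAttr above: an attribute wrapper acts as a factory for lightweight values, and the values list is split into one value per entry. The class and method names (AnimAttr, AnimValue, CreateValue, GetBaseValueCopy, ParseValues) are placeholders, and a double stands in for the real type-specific storage.

 // Illustrative only: parsing a values list into lightweight animation values
 // via a factory on the attribute wrapper. Error handling is omitted.
 #include <memory>
 #include <sstream>
 #include <string>
 #include <vector>
 
 class AnimValue {
 public:
   explicit AnimValue(double aValue) : mValue(aValue) {}
   void Add(const AnimValue& aOther) { mValue += aOther.mValue; }  // additive animation
   double Get() const { return mValue; }
 private:
   double mValue;
 };
 
 class AnimAttr {
 public:
   // Factory method: the attribute knows its own data type, so it creates
   // values of the matching lightweight type.
   std::unique_ptr<AnimValue> CreateValue(const std::string& aSpec) const {
     return std::make_unique<AnimValue>(std::stod(aSpec));
   }
 
   // Only a copy of the base value is handed out; the animated value is
   // never accessed directly.
   AnimValue GetBaseValueCopy() const { return AnimValue(mBaseValue); }
 
 private:
   double mBaseValue = 0.0;
 };
 
 // Parse values="20; 30; 15; 20; 60; 70; 80; 90" into one AnimValue each.
 std::vector<std::unique_ptr<AnimValue>> ParseValues(const AnimAttr& aAttr,
                                                     const std::string& aSpec) {
   std::vector<std::unique_ptr<AnimValue>> result;
   std::stringstream stream(aSpec);
   std::string token;
   while (std::getline(stream, token, ';')) {
     result.push_back(aAttr.CreateValue(token));  // std::stod skips leading spaces
   }
   return result;
 }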

nsISMILAnimElement

This interface is not used within the SMIL module but provides a consistent manner for identifying elements that have attributes that can be animated and accessing those attributes. This consistent interface will be important in multi-namespace situations.

Currently this interface is implemented in nsSVGElement with the idea that specific SVG elements can explicitly disallow animation of certain attributes by overriding this interface.
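
A minimal sketch of that idea is given below. The names (SMILAnimElement, GetAnimatedAttr, LookupAttr, and the example element that disallows animation of one attribute) are hypothetical and only illustrate the override mechanism, not the real nsISMILAnimElement interface.

 // Illustrative only: a consistent way of asking an element whether an
 // attribute can be animated and of obtaining its animation wrapper.
 #include <string>
 
 class AnimAttr;  // stand-in for the per-attribute animation wrapper
 
 class SMILAnimElement {
 public:
   virtual ~SMILAnimElement() = default;
   // Returns the wrapper for an animatable attribute, or nullptr if this
   // element does not allow the attribute to be animated.
   virtual AnimAttr* GetAnimatedAttr(const std::string& aAttrName) = 0;
 };
 
 class SVGElementBase : public SMILAnimElement {
 public:
   AnimAttr* GetAnimatedAttr(const std::string& aAttrName) override {
     return LookupAttr(aAttrName);  // default: any registered attribute may be animated
   }
 
 protected:
   // Hypothetical lookup of the element's attribute table.
   virtual AnimAttr* LookupAttr(const std::string& /*aAttrName*/) { return nullptr; }
 };
 
 // A specific element can explicitly disallow animation of certain attributes
 // by overriding the interface.
 class RestrictedSVGElement final : public SVGElementBase {
 public:
   AnimAttr* GetAnimatedAttr(const std::string& aAttrName) override {
     if (aAttrName == "some-attribute") {  // hypothetical non-animatable attribute
       return nullptr;
     }
     return SVGElementBase::GetAnimatedAttr(aAttrName);
   }
 };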