= Animation model =


An overview of the classes that make up the animation model is shown below:


http://brian.sol1.net/svg/designs/current/animation-model.png


= The parts in detail =
== nsSMILAnimationController ==
 
The animation controller maintains the animation timer so it is this object that
determines the sample times and sample rate. There is at most one animation
controller per
[http://lxr.mozilla.org/seamonkey/source/layout/base/nsPresContext.h nsPresContext] so frame-rate tuning can be performed at this level. The
additional interface <tt>nsIAnimationController</tt> is the one used by
nsPresContext to minimise coupling of <tt>/layout/base</tt> with SMIL whilst
still allowing some control of animations at this level.
(Perhaps this is an interface that should be public, for example so extensions
could pause all the animations on a web page? Or is there some other interface
for stopping GIF animations that should be extended to include this?)
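As a rough idea of what such a minimal interface might offer nsPresContext, here is a sketch; the method names are assumptions for illustration, not the actual interface:

<pre>
// Hypothetical sketch only: a minimal controller interface that layout could
// hold without depending on SMIL internals. Method names are illustrative.
class nsIAnimationController
{
public:
  virtual ~nsIAnimationController() {}

  // Pause and resume all animations managed by this controller.
  virtual void Pause() = 0;
  virtual void Resume() = 0;

  // Allow the pres context to tune the sample rate (samples per second).
  virtual void SetSampleRate(double aSamplesPerSecond) = 0;

  // Called by the animation timer to advance the model to a document time (ms).
  virtual void Sample(double aDocumentTimeMs) = 0;
};
</pre>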
 
Currently the member of nsPresContext that points to this controller will be NULL
for all documents. When a document containing SVG is loaded, the
<tt>nsSVGSVGElement</tt> checks whether this member is NULL and, if it is, sets it
to a new <tt>nsISMILAnimationController</tt> object. The controller object is
required even for un-animated SVG in order to initialise the <tt>[[SMIL:Timing Model#nsSMILTimedDocumentRoot|nsSMILTimedDocumentRoot]]</tt> with the appropriate start time for the
document in case animation is later added via the DOM. (While this is strictly
how it should work, I'm not sure how important it is. We could create the
controller truly on demand. Then animations created via the DOM might produce
technically incorrect results as the document start would correspond to when the
first animation element was added and not SVGLoad, but this might be
acceptable.)
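A sketch of the lazy-creation check described above; the member name and factory function are assumptions and the real code will differ:

<pre>
// Hypothetical sketch only; real member and function names will differ.
class nsIAnimationController;                        // assumed interface
nsIAnimationController* NS_NewAnimationController(); // assumed factory

struct PresContextStub {                             // stand-in for nsPresContext
  nsIAnimationController* mAnimationController = nullptr;
};

// Called when a document containing SVG is loaded.
void EnsureAnimationController(PresContextStub* aPresContext)
{
  if (!aPresContext->mAnimationController) {
    // Created even for un-animated SVG so that the timed document root is
    // initialised with the correct document start time.
    aPresContext->mAnimationController = NS_NewAnimationController();
  }
}
</pre>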
 
See also, [[SMIL:Animation Controller|roc's original description of the requirements for an animation controller]].


== nsSMILAnimationRegistry ==


The animation registry is not present in
[http://www.ludicrum.org/plsWork/papers/BatikSMILsupport.htm Schmitz's design]. I've included it for three reasons.
 
'''1. It simplifies registering for animation elements and the outermost SVG element.''' One feature of [http://www.ludicrum.org/plsWork/papers/BatikSMILsupport.htm Schmitz's design] is the separation of the timing and animation model. With my
implementation, however, this means it is necessary to register once with the time
container (responsible for the timing model) and once with the list of
compositors ([http://www.ludicrum.org/plsWork/papers/BatikSMILsupport.htm Schmitz's design] doesn't describe how this part works). This is a bit tedious,
and needs to be performed not only by the <tt>&lt;animate&gt;</tt> element but also every other animation element we implement and the outermost <tt>&lt;svg&gt;</tt> element that owns the registries. To simplify all this I've tied the timing and animation model together with this one registry.
 
'''2. It allows per-sample operations to be performed at the appropriate time.''' [http://www.ludicrum.org/plsWork/papers/BatikSMILsupport.htm Schmitz's model] does not delve into integration issues such as suspending and
unsuspending redrawing. This is, of course, a deliberate part of the design, but at
some point the model must meet the real world, and I've chosen to do that here
through the [[#nsISMILAnimationObserver|nsISMILAnimationObserver]] interface. This
interface provides a few methods called at pertinent times so that operations
such as suspending and unsuspending redrawing can be performed.
 
'''3. It allows the compositing to be controlled 'from above'.''' This is probably the most significant deviation from
[http://www.ludicrum.org/plsWork/papers/BatikSMILsupport.htm Schmitz's design]. In his design the timing and animation model are very elegantly kept at arm's
length through the time client interface ([[SMIL:Timing Model#nsISMILTimeClient|nsISMILTimeClient]] in my implementation). So how does the compositor
know when to perform compositing? Well, the composable objects hand their
results 'up' to the compositor and it counts them until it figures it has enough
to proceed. Of course, some exceptions have to be accounted for such as 'to
animation' and relative values.
[http://www.ludicrum.org/plsWork/papers/BatikSMILsupport.htm Schmitz] suggests
callbacks could be used for this.
 
The implementation I've produced here operates in the opposite direction. The
composables simply store the sample parameters provided through the
[[SMIL:Timing Model#nsISMILTimeClient|nsISMILTimeClient]] interface. These parameters include
information such as the simple time of the last sample. After all composables
have been sampled the registry is told to start compositing. The compositor then
iterates through the composables requesting their results as necessary.
 
Some of the advantages of this approach are:
 
* No special handling is required for to animations, other than that they be composited at the appropriate point in the sandwich
* The compositor does not need to combine results, or even know about the additive behaviour of its composable children (although it probably will as an optimisation)
* The compositor is free to optimise as it sees fit by only requesting those composables that will actually affect the final result to calculate their results
* Relative values can be recalculated in a more natural fashion (although I haven't yet implemented this)
* Animations that are filling don't need to be resampled (they will simply re-use the parameters passed to them last time)
* No problems with counts of composables getting out of sync
* Knowledge of how different types of animations prioritise is confined to the composables themselves (and not the compositor)
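A minimal sketch of this sample-then-composite flow, with invented names standing in for the composables and compositor described above (the maths in ComposeResult is purely a placeholder):

<pre>
// Hypothetical two-phase sketch of the 'pull' model. Names are illustrative.
#include <vector>

struct SampleParams { double simpleTime = 0.0; };

class Composable {
public:
  // Phase 1: the timing model just stores the sample parameters here.
  void StoreSample(const SampleParams& aParams) { mLastSample = aParams; }

  // Phase 2: the result is only calculated when the compositor asks for it.
  // (Placeholder maths; a real animation function would interpolate.)
  double ComposeResult(double aUnderlying) const {
    return aUnderlying + mLastSample.simpleTime;
  }

private:
  SampleParams mLastSample;
};

// Phase 2 entry point: called by the registry after *all* composables have
// been sampled; walks the sandwich from lowest to highest priority.
double CompositeSandwich(const std::vector<Composable>& aSandwich, double aBaseValue)
{
  double result = aBaseValue;
  for (const Composable& c : aSandwich)
    result = c.ComposeResult(result);
  return result;
}
</pre>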
 
<!-- Also, don't put this on the Wiki, but include it when writing the final
report: The whole approach of counting how many clients have provided results
and comparing this against how many *should* provide results doesn't seem
robust. It certainly could be made to work through proper testing but in an
environment like Mozilla where there are expected to be many changes to the code
by independent parties there is a good possibility of introducing hard to find
bugs under such a model. -->


The main disadvantage is coupling between the timing model and animation model.
This coupling appears between the animation registry and the timed document
root. However, I think the simplicity afforded by this approach warrants the
extra coupling.


The registry also provides the implementation for several animation-related
methods of the SVGSVGElement DOM interface.
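For example, DOM calls such as SVGSVGElement.pauseAnimations() and setCurrentTime() could simply forward to the registry; a sketch under that assumption (the registry-side method names are invented):

<pre>
// Hypothetical sketch only: SVGSVGElement DOM animation methods forwarding to
// the registry. The registry-side method names are assumptions.
class RegistryStub {                        // stand-in for nsSMILAnimationRegistry
public:
  void PauseAnimations()              { /* pause every registered animation */ }
  void SetCurrentTime(float aSeconds) { /* seek the document timeline */ (void)aSeconds; }
};

class SVGSVGElementSketch {
public:
  // SVGSVGElement.pauseAnimations() from the SVG DOM.
  void PauseAnimations() { mRegistry.PauseAnimations(); }

  // SVGSVGElement.setCurrentTime(seconds) from the SVG DOM.
  void SetCurrentTime(float aSeconds) { mRegistry.SetCurrentTime(aSeconds); }

private:
  RegistryStub mRegistry;
};
</pre>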
 
== nsISMILAnimationObserver ==
 
This interface allows a client to be informed of steps in the animation process.
This is used by <tt>nsSVGSVGElement</tt> to suspend and unsuspend redrawing
before and after compositing as well as to batch the enumeration of the animation
nodes. (Without this batching it would re-enumerate the animation nodes in the
entire tree for each node that was attached, which is very costly if a subtree
with several animation elements was grafted in: roughly quadratic in the number
of nodes involved.)
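A sketch of what such an observer interface might provide; the method names are my own invention:

<pre>
// Hypothetical observer interface; method names are illustrative only.
class nsISMILAnimationObserverSketch
{
public:
  virtual ~nsISMILAnimationObserverSketch() {}

  // Called before compositing begins for a sample, e.g. to suspend redrawing
  // and to enumerate the animation nodes once for a whole batch of changes.
  virtual void WillComposite() = 0;

  // Called after compositing finishes, e.g. to unsuspend redrawing.
  virtual void DidComposite() = 0;
};
</pre>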


== nsSMILCompositor ==


A compositor manages a collection of animations that target the same attribute.
Each of these animations implements the [[#nsISMILComposable|nsISMILComposable]] interface. The
compositor is responsible for calling these objects in order from lowest
priority to highest priority according to the
[http://www.w3.org/TR/2001/REC-smil-animation-20010904/#AnimationSandwichModel animation sandwich].


Each time an [[#nsISMILComposable|nsISMILComposable]] object is called it is passed the underlying value of the sandwich, to which it may add its result or which it may replace (depending on the additive behaviour of the animation).
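A sketch of that add-or-replace step for a single composable; the class name and the single additive flag are assumptions for illustration:

<pre>
// Hypothetical sketch of the sandwich step described above. A non-additive
// animation replaces the underlying value; an additive one adds to it.
struct AnimValue { double value = 0.0; };

class ComposableSketch {
public:
  ComposableSketch(double aResult, bool aAdditive)
    : mResult(aResult), mAdditive(aAdditive) {}

  // Called by the compositor with the underlying value of the sandwich so far.
  void ComposeResult(AnimValue& aUnderlying) const {
    if (mAdditive)
      aUnderlying.value += mResult;   // additive="sum"
    else
      aUnderlying.value = mResult;    // additive="replace" (the default)
  }

private:
  double mResult;    // this animation's value for the current sample
  bool   mAdditive;
};
</pre>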


The compositor is responsible for re-compositing when a relative value changes
(although this is not yet implemented) and performs optimisations such as not
calling those objects that it determines will not contribute to the final
result.
 
In implementing <tt>&lt;animateMotion&gt;</tt> we may register a single animation
function against several target attributes. In this case it may be necessary to
pass the target attribute to the composable during <tt>ComposeSample</tt> so
that it can identify which attribute is currently being composited.


== nsISMILComposable ==


This interface is implemented by animation function objects so that they can be
manipulated by the compositor. The key method is <tt>ComposeResult</tt>, which
takes the underlying value of the
[http://www.w3.org/TR/2001/REC-smil-animation-20010904/#AnimationSandwichModel animation sandwich] as a parameter and adds to or replaces this value.
 
Two further sets of methods are provided.


The first set, consisting of methods such as <tt>IsToAnimation</tt>,
<tt>GetBeginTime</tt>, and <tt>GetDocumentPosition</tt>, is used by other
nsISMILComposable objects to implement the <tt>CompareTo</tt> method so that
composable objects can be sorted by the compositor. This allows the compositor
to be ignorant of how to prioritise composable objects.


The other set of methods, <tt>IsActive</tt> and <tt>WillReplace</tt>, provides the
compositor with extra information needed to optimise its operations by filtering
out composable objects that will not affect the current sample.
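Putting the two sets together, the interface might look something like the following sketch; the signatures are assumptions, not the actual declaration:

<pre>
// Hypothetical sketch of nsISMILComposable; signatures are illustrative.
class nsISMILComposableSketch
{
public:
  virtual ~nsISMILComposableSketch() {}

  // Core method: add to, or replace, the underlying sandwich value.
  virtual void ComposeResult(double& aUnderlyingValue) = 0;

  // First set: used by other composables to implement CompareTo so the
  // compositor can sort the sandwich without knowing the priority rules.
  virtual bool   IsToAnimation() const = 0;
  virtual double GetBeginTime() const = 0;
  virtual int    GetDocumentPosition() const = 0;
  virtual int    CompareTo(const nsISMILComposableSketch& aOther) const = 0;

  // Second set: lets the compositor skip composables that cannot affect
  // the current sample.
  virtual bool IsActive() const = 0;
  virtual bool WillReplace() const = 0;
};
</pre>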


== nsSMILAnimationFunction ==


This interface and implementation provide the calculation of animation values
for animation elements that interpolate, such as <tt>&lt;animate&gt;</tt> and
<tt>&lt;animateColor&gt;</tt>. Later, when <tt>&lt;set&gt;</tt> is implemented,
this class and interface may be split into nsSMILSimpleAnimFunc and
nsSMILInterpolatingAnimFunc. <tt>&lt;animateTransform&gt;</tt> and
<tt>&lt;animateMotion&gt;</tt> may be implemented as subclasses of this class or
by adding extra parameters.
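To make the interpolation step concrete, here is a rough sketch of sampling an evenly spaced values list at a point in the simple duration; keyTimes, keySplines and calcMode handling are deliberately ignored:

<pre>
#include <cstddef>
#include <vector>

// Hypothetical sketch: linearly interpolate an evenly spaced values="..."
// list at a given point in the simple duration (0.0 <= aT <= 1.0).
double InterpolateValues(const std::vector<double>& aValues, double aT)
{
  if (aValues.empty()) return 0.0;
  if (aValues.size() == 1 || aT >= 1.0) return aValues.back();

  // Each pair of adjacent values covers an equal share of the simple duration.
  const double scaled = aT * (aValues.size() - 1);
  const std::size_t i = static_cast<std::size_t>(scaled);
  const double frac = scaled - i;
  return aValues[i] + (aValues[i + 1] - aValues[i]) * frac;
}
</pre>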


Not shown in the diagram is an UnsetXXX method corresponding to each of the
SetXXX methods. All attribute parsing and handling, such as providing default
values, is performed within this class, allowing this logic to be shared
between all animation elements.
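A small sketch of the SetXXX/UnsetXXX pattern with a default value; calcMode is just an example, and the names and error handling are simplified assumptions:

<pre>
// Hypothetical sketch of paired SetXXX/UnsetXXX methods with a default value.
#include <string>

class AnimationFunctionSketch {
public:
  // Parse and store the attribute; fall back to the default on bad input.
  void SetCalcMode(const std::string& aValue) {
    if (aValue == "discrete" || aValue == "linear" ||
        aValue == "paced" || aValue == "spline") {
      mCalcMode = aValue;
    } else {
      UnsetCalcMode();                 // unknown value: behave as if unset
    }
  }

  // Removing the attribute reverts to the default ("linear").
  void UnsetCalcMode() { mCalcMode = "linear"; }

private:
  std::string mCalcMode = "linear";
};
</pre>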


== nsISMILAnimAttr ==


This interface sits above the [[#nsISMILAnimValue|nsISMILAnimValue]] interface to wrap the
animated and base value of an attribute together for querying by SMIL. It
roughly corresponds to an nsSVGAnimatedXXX object, whereas [[#nsISMILAnimValue|nsISMILAnimValue]]
corresponds to the nsSVGXXX object. This interface could possibly be removed,
but I'm currently waiting to see how animated values will be implemented to
determine whether this is possible. Also, keeping this interface allows
nsISMILAnimValue to be implemented as a lightweight object separate from the
nsSVGXXX type. This approach is also supported by the methods of the interface.
For example, only a copy of the base value is returned and the animated value is
never accessed directly.
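The shape of the wrapper might be something like the following sketch (names assumed); note that the base value is only handed out by copy and the animated value is only ever written:

<pre>
// Hypothetical sketch of an attribute wrapper pairing base and animated value.
class nsISMILAnimValueSketch;   // the underlying value abstraction (see below)

class nsISMILAnimAttrSketch
{
public:
  virtual ~nsISMILAnimAttrSketch() {}

  // Returns a *copy* of the base value; callers never see the live object.
  virtual nsISMILAnimValueSketch* GetBaseValueCopy() const = 0;

  // Sets the animated (presentation) value; the animated value is never
  // read back directly through this interface.
  virtual void SetAnimValue(const nsISMILAnimValueSketch& aValue) = 0;

  // Factory for values parsed from animation attributes such as values="...".
  virtual nsISMILAnimValueSketch* CreateValue(const char* aStringValue) const = 0;
};
</pre>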


== nsISMILAnimValue ==


This interface is the basic layer of indirection used by the animation model to
manipulate different data types. The methods allow all the necessary
calculations such as addition and repetition to be performed. Objects of this
type are used frequently and so should be fairly lightweight. For example, when
parsing <tt>values="20; 30; 15; 20; 60; 70; 80; 90"</tt> a new nsISMILAnimValue
is created for each value in the array (by calling the factory methods in the
[[#nsISMILAnimAttr|nsISMILAnimAttr]] interface).
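A sketch of how parsing such a values list might create one lightweight value object per entry; the splitting and the factory call are illustrative only:

<pre>
// Hypothetical sketch: split values="20; 30; 15; ..." and create one
// lightweight value object per entry via an assumed factory function.
#include <sstream>
#include <string>
#include <vector>

struct AnimValueSketch { double value; };            // stand-in for nsISMILAnimValue

AnimValueSketch MakeAnimValue(const std::string& aToken)
{
  return AnimValueSketch{ std::stod(aToken) };       // assumed factory
}

std::vector<AnimValueSketch> ParseValuesList(const std::string& aValues)
{
  std::vector<AnimValueSketch> result;
  std::istringstream stream(aValues);
  std::string token;
  while (std::getline(stream, token, ';')) {
    result.push_back(MakeAnimValue(token));          // one object per value
  }
  return result;
}
</pre>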


== nsISMILAnimElement ==


This interface is not used within the SMIL module but provides a consistent
means of identifying elements that have attributes that can be animated and of
accessing those attributes. This consistent interface will be important in
multi-namespace situations.


Currently this interface is implemented in nsSVGElement with the idea that
specific SVG elements can explicitly disallow animation of certain attributes by
overriding this interface.
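A sketch of how an element might veto animation of a particular attribute by overriding the interface; the method name and the vetoed attribute are assumptions:

<pre>
// Hypothetical sketch: a base implementation that permits animation of any
// attribute, and a subclass that vetoes one specific attribute.
#include <string>

class AnimElementSketch {
public:
  virtual ~AnimElementSketch() {}
  virtual bool CanAnimateAttribute(const std::string& aName) const { return true; }
};

class StrictElementSketch : public AnimElementSketch {
public:
  // Explicitly disallow animation of one attribute (example only).
  bool CanAnimateAttribute(const std::string& aName) const override {
    return aName != "id";
  }
};
</pre>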