From MozillaWiki

The Plan

The current plan to hardware accelerate Gecko and Firefox is to use OpenGL. This seems like a good starting point because it's supported (to varying degrees) on all the platforms we care about (including mobile platforms, in the form of OpenGL ES). (Note that it will be necessary to support both software and hardware render paths, because not all computers will be capable of GPU acceleration.)

Follow-on work for this might include making a Direct3D/Direct2D backend, especially if it's found that OpenGL stability/availability on Windows isn't sufficient.

The tentative timetable for this work is to have it working in an at least opt-in state (pref or compile-time) by EOY 2009.

The Steps

Cairo OpenGL Backend

NOTE: Our work plan might become obsolete, since there are at least two groups already working on OpenGL backends for Cairo. Note also that this is not Glitz: Glitz does not take advantage of GPUs or OpenGL properly; it is basically a pixman implementation in OpenGL, which is sub-optimal.

A Cairo OpenGL backend is the first basic requirement for hardware support. Cairo is our basic underlying 2D rendering library, and it needs to support drawing to OpenGL (probably in the form of drawing directly to textures via FBO).

The starting point for a Cairo backend will be simply drawing pixman's software output into a texture. This is the worst of both worlds: we pay for CPU rasterization and for the texture upload on every frame.

Next, we want to offload the work that CPUs are worst at and GPUs are best at: blending. This probably means representing every surface as a separate texture, and then blending the textures together to produce the final output.
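To make the blending step concrete, here is a minimal sketch (not from the plan itself) of the Porter-Duff OVER operator on premultiplied-alpha pixels. This is the same math the GPU's blend unit performs in fixed function via glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA); doing it per pixel on the CPU is exactly the cost we want to move to the GPU.

```cpp
#include <cstdint>

// One premultiplied-alpha RGBA pixel; each channel in [0,255].
struct Pixel { uint8_t r, g, b, a; };

// Porter-Duff OVER for premultiplied alpha: out = src + dst * (1 - src.a).
// On the GPU this is a single fixed-function blend:
//   glBlendFunc(GL_ONE, GL_ONE_MINUS_SRC_ALPHA)
static uint8_t over_channel(uint8_t s, uint8_t d, uint8_t sa) {
    // +127 rounds the fixed-point division to nearest.
    return static_cast<uint8_t>(s + (d * (255 - sa) + 127) / 255);
}

Pixel over(Pixel src, Pixel dst) {
    return { over_channel(src.r, dst.r, src.a),
             over_channel(src.g, dst.g, src.a),
             over_channel(src.b, dst.b, src.a),
             over_channel(src.a, dst.a, src.a) };
}
```

With one texture per surface, compositing the layer tree reduces to drawing each texture with this blend state enabled, back to front.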

After that, it's a matter of figuring out how to implement the remaining Cairo operations in OpenGL. The paramount rule is to never "go backwards": don't read data from the GPU back into main memory, because readback stalls the pipeline and is the worst thing you can do for performance.

Mozilla OpenGL Widget Support

There needs to be separate widget (nsIWindow and related) support for every platform, because each platform has its own way of creating windows, handling pixel depths, and so on.

  • WGL (Windows)
    • Will probably want Jim Mathies' help on this.
  • GLX (X11)
  • AGL (OS X)
  • EGL (cross-platform)?

The tentative plan is to start with AGL, since it's really simple to get an OpenGL context out of an OS X window.


Scrolling

The basic approach to scrolling will be to take the texture of the displayed content, offset it by the scrolled amount, and re-render with that texture, with the remainder filled in by the usual painting method.

To enable super-smooth scrolling, we probably want some opportunistic drawing in the background; at the least, holding on to the 10% above and below the current viewport would help the slow scrolling case. A better solution than this would be tiling the page, with tiles being filled in as necessary.
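As a sketch of the blit-and-repaint approach above (the struct and function names here are hypothetical, not from Gecko), the scroll handler only needs to compute which band of the existing texture can be reused and which newly exposed band must go through the normal paint path:

```cpp
#include <algorithm>
#include <cstdlib>

// Given a viewport of `height` pixel rows scrolled by `dy` (positive =
// scrolling down), compute how many rows of the existing content texture
// can be reused and which band must be repainted.
struct ScrollPlan {
    int reusable_rows;   // rows we can blit from the old texture, offset by dy
    int repaint_start;   // first row of the newly exposed band
    int repaint_rows;    // rows the usual painting method must fill
};

ScrollPlan plan_scroll(int height, int dy) {
    int shift = std::abs(dy);
    int reusable = std::max(0, height - shift);
    ScrollPlan p;
    p.reusable_rows = reusable;
    p.repaint_rows = height - reusable;
    // Scrolling down exposes a band at the bottom; scrolling up, at the top.
    p.repaint_start = (dy > 0) ? reusable : 0;
    return p;
}
```

Holding extra content above and below the viewport (or tiling the page) simply shrinks repaint_rows to zero for small scroll deltas.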

Text Rendering

Barring glyph rasterization on the GPU (which will probably happen at some point), text rendering will be performed by the CPU and uploaded to the GPU in the form of a texture. There will need to be some smarts in texture management, because we don't want one texture per glyph (too much overhead, and many drivers/GPUs limit the number of active textures). One texture per code page might be a bad idea too, due to the time that would be spent rasterizing an entire code page up front. The best solution will probably be a happy medium: a moderately sized glyph-cache texture, with each new glyph rendered incrementally into it.

It should also be possible to evict glyphs that haven't been used recently, making this a proper cache.
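One plausible shape for the "happy medium" glyph cache is a shelf-packed atlas texture; the sketch below (hypothetical names, not Gecko code) packs glyph rectangles left to right in rows, opening a new row when the current one fills up. When allocate returns false the cache is full and the caller would evict stale glyphs or open another atlas texture:

```cpp
// Minimal shelf packer for a glyph-cache texture: glyphs are placed left to
// right on the current shelf; when one doesn't fit, a new shelf is opened
// below it. Returns false when the atlas is full.
class ShelfAtlas {
public:
    ShelfAtlas(int width, int height)
        : width_(width), height_(height), x_(0), y_(0), shelf_h_(0) {}

    bool allocate(int w, int h, int* out_x, int* out_y) {
        if (w > width_ || h > height_) return false;  // can never fit
        if (x_ + w > width_) {                        // shelf exhausted
            y_ += shelf_h_;
            x_ = 0;
            shelf_h_ = 0;
        }
        if (y_ + h > height_) return false;           // atlas full
        *out_x = x_;
        *out_y = y_;
        x_ += w;
        if (h > shelf_h_) shelf_h_ = h;
        return true;
    }

private:
    int width_, height_;
    int x_, y_, shelf_h_;   // cursor position and height of the open shelf
};
```

Each allocated rectangle would be filled with glTexSubImage2D, so adding a glyph never requires re-uploading the whole cache.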

  • Subpixel anti-aliasing

The only important point about subpixel AA is that we need to support it. Subpixel AA produces a separate coverage value for each color component, so glyph textures will need to be RGBA rather than single-channel alpha.
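Per-component coverage is why a single alpha channel isn't enough. A sketch of the component-alpha blend for one channel (assuming non-premultiplied text color and a per-channel coverage mask; on the GPU this needs either a multi-pass blend or dual-source blending):

```cpp
#include <cstdint>

// Component-alpha text blend for one color channel: `coverage` is that
// channel's own coverage value from the RGBA glyph texture, so red, green,
// and blue can each blend by a different amount.
uint8_t blend_component(uint8_t text, uint8_t dst, uint8_t coverage) {
    // out = text * coverage + dst * (1 - coverage), rounded to nearest.
    return static_cast<uint8_t>(
        (text * coverage + dst * (255 - coverage) + 127) / 255);
}
```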

Image management

As with text rendering, we will need the ability to cache images as textures in VRAM. This depends on the (at the time of this writing) ongoing Imagelib refactoring to support decode-on-draw and discarding decoded data (here, textures), so we aren't forced to hold on to large textures. There will probably also need to be some form of tiling to make this work properly, because OpenGL implementations have a maximum texture size.

An interesting problem we'll need to solve is how to do filtering across texture boundaries when the image is too large to fit into a single texture.
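One common answer to the seam-filtering problem is to pad each tile with a 1-pixel border duplicated from its neighbors, so bilinear sampling near a tile edge reads the same texels it would in one big texture. A sketch of the tile-grid computation (hypothetical names; assumes max_tex > 2):

```cpp
#include <algorithm>
#include <vector>

// Split an image into tiles no bigger than `max_tex` on a side, where each
// tile's upload region is padded with a 1-pixel border copied from its
// neighbors (clamped at the image edges) so GL_LINEAR filtering matches
// across tile seams.
struct Tile {
    int x, y, w, h;      // region of the image this tile displays
    int src_x, src_y;    // upload origin, including the border
    int src_w, src_h;    // upload size, including the border
};

std::vector<Tile> tile_image(int img_w, int img_h, int max_tex) {
    // Interior pixels per tile, leaving room for a 1px border on each side.
    const int step = max_tex - 2;
    std::vector<Tile> tiles;
    for (int y = 0; y < img_h; y += step) {
        for (int x = 0; x < img_w; x += step) {
            Tile t;
            t.x = x;
            t.y = y;
            t.w = std::min(step, img_w - x);
            t.h = std::min(step, img_h - y);
            t.src_x = std::max(0, x - 1);
            t.src_y = std::max(0, y - 1);
            t.src_w = std::min(img_w, x + t.w + 1) - t.src_x;
            t.src_h = std::min(img_h, y + t.h + 1) - t.src_y;
            tiles.push_back(t);
        }
    }
    return tiles;
}
```

Each tile is then drawn with texture coordinates inset by the border, so only the interior region is ever displayed.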


Plugins

A giant bag of pain.

On Windows, plugins (specifically, Flash) want to be able to hardware accelerate themselves using Direct3D. This might be possible by creating a top-level window which itself has Direct3D and OpenGL children. This might cause some small troubles because we'd be overlaying two different 3D contexts on each other, but I expect those troubles to be few.

Plugin vendors are mostly concerned about performance, so in order for them to accept windowless plugins (which are pretty important to get this to work), we will need to hand them a 3D context of some sort. We could potentially grab the contents of this context; if we are very lucky, it would already exist in VRAM, so we'd just need to reference it.

Another option would be to use some form of tricky hack to create a Direct3D context from an OpenGL context; it is unknown how well-supported this sort of trick would be.

A bigger problem exists if we want to separate plugins into their own process. It's almost impossible to share OpenGL contexts across processes, although it is apparently possible using Direct3D.