From MozillaWiki

Information related to commonly used benchmarks and compliance measures for the web


Bold means commonly used in media


Compliance

  • HTML5Test
    • Targeted at "HTML5 Ready" apps/usage
    • Tests for the existence of certain APIs rather than actual compliance; easy to game, and actively being gamed.
    • We disagree that the FileSystem API should be part of this suite: no vendor other than Google implements it, and Google seems to have given up on the spec as well.
  • Ringmark
  • Acid3
    • All major browsers pass this
    • Should not be used as a performance test
  • Test262
    • ECMAScript (JavaScript) conformance test suite
  • CSS3 Selectors Test
    • All major browsers should pass
  • Browserscope
    • community-driven project initiated by Google
    • includes Ringmark
    • the rich-text numbers are somewhat arbitrary, and a higher score doesn't necessarily mean a better implementation
  • W3C CSS Test Suites
    • hard to run, but results from others are available for some of the test suites (separated by browser engine, though not by browser version)
    • in-development test suites aren't separated from the stable ones
  • Content Security Policy Compliance Test Suite




Graphics

  • GUIMark2
    • Vector Charting Test
      • Dominated by line stroking performance
      • 2D canvas
    • Bitmap Gaming Test
      • 2D canvas
    • Text Column Test
  • GUIMark3
    • A basic test of canvas and video performance, largely focused on comparing similar workloads between Flash and HTML5.
    • The result of each test is intended to be a framerate, with frames rendered via setInterval.
    • On a fast machine the results are meaningless: given the simple workload, every browser just hits the throttled setInterval rate.
    • Bitmap Test
      • 2D canvas
    • Vector Test
      • 2D canvas
      • Dominated by radial gradient performance when not running with Direct2D
    • Compute Test
    • Video Test
  • FishIE
  • PenguinMark
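GUIMark3's setInterval-driven rendering is why fast machines all report the same number. A minimal sketch of the framerate ceiling a clamped timer imposes (`reportedFps` is a hypothetical helper for illustration, not GUIMark3's actual code):

```javascript
// Why a setInterval-driven benchmark caps out on fast machines: browsers
// clamp setInterval to a minimum delay (historically ~10 ms; 4 ms for
// nested timers in HTML5), so the reported FPS can never exceed
// 1000 / clamp no matter how fast the engine renders.
function reportedFps(frameWorkMs, timerClampMs) {
  // The next frame fires no sooner than the timer clamp, and no sooner
  // than the time the frame's own work takes.
  const effectiveIntervalMs = Math.max(frameWorkMs, timerClampMs);
  return 1000 / effectiveIntervalMs;
}

// A trivial 1 ms/frame workload could run at 1000 fps, but with a 10 ms
// clamp every sufficiently fast engine reports the same ~100 fps:
console.log(reportedFps(1, 10));  // 100
// Only machines slower than the clamp actually differentiate:
console.log(reportedFps(25, 10)); // 40
```

Any two workloads that both finish under the clamp are indistinguishable, which is exactly the "meaningless on fast machines" problem noted above.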


JavaScript

  • Kraken
    • Mozilla developed
    • Essentially unmaintained
  • Sunspider
    • The tests run very fast, which makes measurement error significant and means the tasks are not scaled to a meaningful amount of work.
      • Has 26 tests that run in 200-300 ms total, depending on the machine, so about 10 ms per test. The differences between current browsers are now on the order of running a test in 9 ms instead of 10 ms, so the practical significance is limited. Another problem with SunSpider is that at 10 ms per test, making a more advanced JIT doesn't improve your score, because the added compile time outweighs the improved run time. This doesn't matter so much for comparing browsers, but it means SunSpider doesn't really drive JS engines to get faster on big apps. If anything, it pushes browsers to create new startup modes so they can run small programs a shade faster. (dmandelin)
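The measurement-error point can be made concrete: a coarse clock quantizes short measurements, so close scores on short tests are invisible. A sketch with illustrative numbers (`measure` is a hypothetical helper, not SunSpider code):

```javascript
// Sketch: a timer of resolution `timerResolutionMs` reports elapsed time
// only in whole ticks, so a single measurement of a short test loses the
// fraction of a tick. Numbers below are illustrative, not SunSpider's.
function measure(trueMs, timerResolutionMs) {
  // A real clock reads low: elapsed time is rounded down to a tick.
  return Math.floor(trueMs / timerResolutionMs) * timerResolutionMs;
}

// With a 15.6 ms timer (an old Windows default), a 9 ms test and a 10 ms
// test both measure as 0 ms -- the 10% real difference is invisible:
console.log(measure(9, 15.6), measure(10, 15.6)); // 0 0
// A 1000 ms task, by contrast, measures within ~1.5% of its true duration:
console.log(measure(1000, 15.6));
```

Benchmarks sidestep this by running each task long enough that the timer resolution is a small fraction of the measurement, which is precisely what SunSpider's ~10 ms tests fail to do.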


DOM

  • Dromaeo
    • Mozilla developed.
    • Maintenance status unclear; no obvious contact for getting the deployed benchmark updated.
    • Source lives at
    • Microbenchmarks that are not representative of any real workload, and in fact are often completely bogus.
    • Easily gamed (and has been actively gamed in the past by UAs).
    • Incentivizes making already-fast operations faster over making slow ones faster.
    • Often doesn't measure what it claims to be measuring.
    • The best DOM benchmark around (because it's pretty much the only one, not because it's especially good).
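The "completely bogus" criticism usually means the timed work has no observable effect, so an optimizing JIT may discard it and the score measures nothing. Both functions below are hypothetical illustrations, not actual Dromaeo tests:

```javascript
// A bogus microbenchmark: the loop body's result is never used, so an
// optimizing engine is free to eliminate the work entirely. The "score"
// then reflects loop overhead (or nothing), not the operation being timed.
function bogusMicrobenchmark(iterations) {
  const t0 = Date.now();
  for (let i = 0; i < iterations; i++) {
    Math.sqrt(i); // dead code: result discarded
  }
  return Date.now() - t0;
}

// A more honest variant keeps the result observable so it can't be
// optimized away, and returns it alongside the elapsed time:
function honestMicrobenchmark(iterations) {
  let sink = 0;
  const t0 = Date.now();
  for (let i = 0; i < iterations; i++) sink += Math.sqrt(i);
  const elapsedMs = Date.now() - t0;
  return { elapsedMs, sink }; // returning sink keeps the loop live
}
```

Even the honest variant still has the "already fast stuff" problem: it rewards shaving nanoseconds off a tight loop, which rarely translates to real page workloads.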

Page Load

  • WebPageTest
    • seems very good
    • cross-browser
    • simulates different networks
  • iBench
    • relies on the onload event, which is not a credible measure of page-load performance
    • no longer available


Responsiveness

  • Eideticker
    • Mozilla developed
  • Chalkboard
    • Does CSS transforms on a single SVG image
      • Tests rasterization of SVG images
    • Not that interesting for real pan and zoom


Benchmark Aggregators