Benchmarks

Revision as of 14:59, 18 September 2012

Information related to commonly used benchmarks and compliance measures for the web

Benchmarks

Bold indicates benchmarks commonly cited in the media

Compliance

  • HTML5Test
    • Targeted for "HTML5 Ready" apps/usage
    • Tests for the existence of certain APIs but not for full compliance (see the sketch after this list)
    • We disagree that the FileSystem API should be part of this suite; no vendor other than Google implements it, and Google seems to have given up on the spec as well.
  • Ringmark
    • Targeted for mobile HTML5 app capabilities
  • Acid3
    • All major browsers pass this
    • Should not be used as a performance test
  • Test262
    • JavaScript compliance: the official ECMAScript conformance test suite
  • CSS3 Selectors Test
    • All major browsers should pass
  • Browserscope
    • Community-driven project initiated by Google
    • Includes Ringmark
    • The rich-text numbers are somewhat arbitrary, and a higher score doesn't necessarily mean a better implementation
  • W3C CSS Test Suites
    • Hard to run, but results from others are available for some of the test suites (separated by browser engine, though not by version)
    • Test suites still in development aren't separated from the stable ones
  • Content Security Policy Compliance Test Suite
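
A minimal sketch of the difference between testing for an API's existence and testing its behavior, as noted for HTML5Test above. The first function only confirms that an entry point is present (roughly the level a feature-detection suite scores at); the second actually exercises the API. The Web Storage API used here is real, but the checks are illustrative and are not how HTML5Test itself is implemented.

    // Existence check: only proves the entry point is present.
    function hasWebStorage() {
      return typeof window.localStorage !== "undefined";
    }

    // Behavioral check: a (still shallow) compliance-style test that
    // actually round-trips a value through the API.
    function webStorageWorks() {
      try {
        window.localStorage.setItem("__probe", "1");
        var ok = window.localStorage.getItem("__probe") === "1";
        window.localStorage.removeItem("__probe");
        return ok;
      } catch (e) {
        // Some configurations expose the API but throw on use.
        return false;
      }
    }

    console.log("exists:", hasWebStorage(), "works:", webStorageWorks());

A feature-detection suite would award the point whenever the first check passes, even if the second one fails.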

Performance

General

Graphics

JavaScript

  • Kraken
    • Developed by Mozilla
    • Essentially unmaintained
  • SunSpider
    • Tests run very fast, which makes measurement error significant and means that the tasks are not scaled to a meaningful amount of work (see the sketch after this list).
      • Has 26 tests that run in 200-300ms, depending on the machine, so about 10ms per test. The differences between current browsers are now on the order of running a test in 9ms instead of 10ms, so practical significance is limited. Another problem with SunSpider is that with 10ms per test, making a more advanced JIT doesn't improve your score, because the compile time ends up outweighing the improved run time. This doesn't matter so much for comparing browsers, but it means SunSpider doesn't really drive JS engines to get faster on big apps. If anything, it tells browsers to create new startup modes so they can run small programs a shade faster. (dmandelin)
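
A rough illustration of the timing problem described above, using a hypothetical workload() function rather than SunSpider's actual harness: when a test finishes in about 10ms, a 1ms timer or scheduling hiccup is roughly a 10% swing in the score, and any one-time JIT compilation cost is a large share of the total; scale the same work up and both effects shrink into the noise.

    // Hypothetical micro-workload standing in for one SunSpider-style test.
    function workload(iterations) {
      var x = 0;
      for (var i = 0; i < iterations; i++) {
        x += Math.sqrt(i);
      }
      return x;
    }

    function timeRun(iterations) {
      var start = Date.now();
      workload(iterations);
      return Date.now() - start; // whole milliseconds only
    }

    // Short run: on typical hardware this finishes in a handful of
    // milliseconds, so a 1ms measurement error is a large relative
    // error and compile time is a big share of the total.
    var shortRun = timeRun(1e6);

    // The same work scaled up 100x: the same 1ms error is negligible
    // and compilation cost is amortized over the longer run.
    var longRun = timeRun(1e8);

    console.log("short run (ms):", shortRun, "long run (ms):", longRun);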

DOM

Page Load

  • iBench
    • Uses onLoad(), which is not a credible measure of page-load performance (see the sketch after this list)
    • No longer available
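
As a sketch of why an onload-based number is a blunt instrument: the load event fires only after every subresource has finished, so a single stopwatch reading hides where the time actually went. The snippet below contrasts a crude onload stopwatch (started when the script runs) with the Navigation Timing API, which breaks the load into phases; it is illustrative only and is not how iBench measured pages.

    // Crude measurement: one number, taken when the load event fires.
    var scriptStart = Date.now();
    window.addEventListener("load", function () {
      console.log("time to onload since script start (ms):", Date.now() - scriptStart);

      // Navigation Timing (where supported) separates the phases,
      // which is far more informative than a single onload figure.
      if (window.performance && performance.timing) {
        var t = performance.timing;
        console.log("network (ms):", t.responseEnd - t.navigationStart);
        console.log("DOM ready (ms):", t.domContentLoadedEventEnd - t.navigationStart);
        console.log("full load (ms):", t.loadEventStart - t.navigationStart);
      }
    });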

Pan/Zoom

Other

Benchmark Aggregators