Performance/Evaluating Performance of New Features

There are a lot of good tools available now for studying Firefox performance. This is a list of steps to follow when evaluating the performance of your next Firefox feature.
  1. Make sure to test your feature on a low-end or mid-range computer; our dev machines are uncommonly powerful. Think machines with spinning hard drives, not SSDs. Also make sure to test on Windows, as it is used by the vast majority of our users.
    • The perf team, fx-team, and gfx team have Windows Asus T100 tablets available in multiple offices just for this purpose. Contact me, Gavin, or Milan Sreckovic if you need one.
  2. Ensure your feature does not touch storage on the main thread, either directly or indirectly.
    • If there's any chance it might cause main-thread IO, test it with the Gecko profiler. The profiler now has an option to show you all the IO done on the main thread, no matter how brief it is.
    • Also be careful about using SQLite: even simple queries can trigger main-thread IO if you use the synchronous storage API, so prefer the asynchronous Sqlite.jsm wrapper (a sketch follows this list).
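As a sketch of what "off the main thread" means in practice, the snippet below reads a file and queries a database with the asynchronous OS.File and Sqlite.jsm APIs instead of their synchronous counterparts. The file name, database name, and query are hypothetical; only the APIs are real.

  // Privileged (chrome) JavaScript. The paths, table, and query are made up
  // for illustration; the point is that OS.File and Sqlite.jsm do their IO
  // on background threads instead of blocking the main thread.
  Components.utils.import("resource://gre/modules/osfile.jsm");
  Components.utils.import("resource://gre/modules/Sqlite.jsm");
  Components.utils.import("resource://gre/modules/Task.jsm");

  let readFeatureData = Task.async(function* () {
    // Asynchronous file read: the main thread is never blocked.
    let path = OS.Path.join(OS.Constants.Path.profileDir, "my-feature.json");
    let bytes = yield OS.File.read(path);

    // Asynchronous SQLite: statements run off the main thread.
    let conn = yield Sqlite.openConnection({ path: "my-feature.sqlite" });
    try {
      let rows = yield conn.executeCached(
        "SELECT value FROM cache WHERE id = :id", { id: 42 });
      return { bytes, rows };
    } finally {
      yield conn.close();
    }
  });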
  3. Make sure to add Telemetry probes that measure how well your feature performs on real user machines (a minimal example follows this list).
    • Check the Telemetry numbers again after your feature reaches the release channel. The release channel has a diversity of configurations that simply don't exist on any of the pre-release channels.
      • You can check for regressions in the Telemetry dash, or you can ask the perf-team to show you how to do a custom analysis (e.g. performance on a particular gfx card type) using MapReduce or Spark.
        • The learning curve can be a bit steep, so the perf team can do one-off analyses for you.
      • We have additional performance dashboards; they are listed in the "More Dashboards" sidebar on telemetry.mozilla.org
    • Always set the "alert_emails" field for your histogram in Histograms.json so you get automatic e-mail notifications of performance regressions and improvements.
      • Ideally, this email address should point to an alias for your team.
      • Note that the Telemetry regression detector has an extremely low false-positive rate, so you won't be getting any emails unless performance has changed significantly.
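As a minimal sketch, here is what such a probe might look like. The histogram name MY_FEATURE_STARTUP_MS, the team alias, and the bucket parameters are invented for illustration; the Histograms.json fields, TelemetryStopwatch, and Services.telemetry are real.

  // Hypothetical Histograms.json entry (shown as a comment). "alert_emails"
  // is the field that triggers the automatic regression/improvement mail:
  //
  //   "MY_FEATURE_STARTUP_MS": {
  //     "alert_emails": ["my-team@mozilla.com"],
  //     "expires_in_version": "never",
  //     "kind": "exponential",
  //     "high": 10000,
  //     "n_buckets": 50,
  //     "description": "Time (ms) to initialize my feature"
  //   }

  Components.utils.import("resource://gre/modules/TelemetryStopwatch.jsm");
  Components.utils.import("resource://gre/modules/Services.jsm");

  function initMyFeature() {
    // Time a section of code and record it into the histogram in one step...
    TelemetryStopwatch.start("MY_FEATURE_STARTUP_MS");
    doTheExpensiveSetup();  // hypothetical; whatever your feature initializes
    TelemetryStopwatch.finish("MY_FEATURE_STARTUP_MS");
  }

  // ...or record a value you computed yourself:
  // Services.telemetry.getHistogramById("MY_FEATURE_STARTUP_MS").add(valueMs);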
  4. Keep an eye on the Talos scores.
    • The Talos tests are much less noisy now than they used to be, and more sensitive as well. This is thanks to the efforts of Avi Halachmi, Joel Maher, and others.
      • Partly as a result of this, we now have a stricter Talos sheriffing policy. The patch author has three business days to respond to a Talos regression bug (before getting backed out), and two weeks to decide what to do with the regression.
    • Joel Maher will file a regression bug against you if you regress a Talos test.
      • The list of unresolved regressions in each release is tracked in the meta bugs: Firefox 36, Firefox 37, Firefox 38, etc.
      • Joel tracks all the improvements together with all the regressions in a dashboard.
    • If you cause a regression that you can't reproduce on your own machine, you can capture a profile directly inside the Talos environment: https://wiki.mozilla.org/Buildbot/Talos/Profiling
    • Some Talos tests can be run locally as extensions, others may require you to set up a Talos harness. Instructions for doing this will be provided in the Talos regression bugs from now on.
    • The graph server can show you a history of test scores and test noise to help you determine if the reported regression is real.
      • William Lachance is working on a new & improved graphing UI for treeherder.
  5. Consider adding a new Talos test.
    • Add a new Talos test if the performance of your feature is important and it is not covered by existing tests. The Perf team would be happy to help you design a meaningful and reliable test.
    • Make sure your test measures the right things, isn't noisy, and is able to detect real regressions (see the measurement sketch below).
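A common way to keep a test from being noisy is to repeat the measured operation, throw away the warm-up runs, and report a robust statistic such as the median rather than a single sample. A minimal sketch, where runMyFeatureOperation() is a placeholder for whatever the test exercises:

  // Repeat the operation, discard warm-up iterations (cold caches, JIT),
  // and report the median, which is robust against outlier samples.
  function measure(iterations = 25, warmup = 5) {
    let samples = [];
    for (let i = 0; i < iterations + warmup; i++) {
      let start = performance.now();
      runMyFeatureOperation();  // placeholder for the operation under test
      let elapsed = performance.now() - start;
      if (i >= warmup) {
        samples.push(elapsed);
      }
    }
    samples.sort((a, b) => a - b);
    return samples[Math.floor(samples.length / 2)];
  }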