Do you want to know if your change impacts Fenix or Focus performance? If so, here are the methods you can use, in order of preference:
# [[#Benchmark|'''Benchmark:''']] use an automated test to measure the change in duration
# [[#Timestamp benchmark|'''Timestamp benchmark:''']] add temporary code and manually measure the change in duration. Practical for non-UI measurements or ''very'' simple UI measurements
# [[#Profile|'''Profile:''']] take a profile, identify the start and end points of your measurement, and measure the change in duration

We don't necessarily recommend the following techniques, though they have their place:
# '''Screen recording, side-by-side:''' take a screen recording of before and after your change, synchronize the videos, and put them side-by-side with timestamps using [https://github.com/mozilla-mobile/perf-tools/blob/6422e190ae0cb3380fb1bd6069240a7a6c5ed8b2/combine-videos-side-by-side.sh the <code>perf-tools/combine-videos-side-by-side.sh</code> script].
The trade-offs of each technique are mentioned in its respective section.
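As a rough illustration of the timestamp benchmark technique, here is a minimal sketch of the kind of temporary code you might add. The measured block below is only a stand-in: in practice you would wrap the code path your change affects, and on Android you would more likely read <code>SystemClock.elapsedRealtime()</code>; <code>System.nanoTime()</code> is used here just to keep the sketch runnable on a plain JVM.

```java
// Temporary timestamp benchmark sketch: record a start time, run the code
// under test, and print the elapsed wall-clock duration in milliseconds.
public class TimestampBenchmark {
    static long timeMs(Runnable block) {
        long start = System.nanoTime();
        block.run();
        return (System.nanoTime() - start) / 1_000_000;
    }

    public static void main(String[] args) {
        long elapsed = timeMs(() -> {
            // Stand-in for the code path you want to measure.
            try { Thread.sleep(25); } catch (InterruptedException e) { throw new RuntimeException(e); }
        });
        System.out.println("work took " + elapsed + " ms");
    }
}
```

Remember to remove the timestamps before landing the change; they are measurement scaffolding, not production code.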
== Benchmark locally ==
A benchmark is an automated test that measures performance, usually the duration from point A to point B. Automated benchmarks have similar trade-offs to automated functionality tests when compared to one-off manual testing: they can continuously catch regressions and they minimize human error. For manual benchmarks in particular, it can be tricky to be consistent about how we aggregate each test run into the results. However, automated benchmarks are time-consuming and difficult to write, so sometimes it's better to perform manual tests.
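On the aggregation point above: one way to stay consistent across runs is to always report the same summary statistic. The sketch below (a hypothetical helper, not part of any Fenix tooling) reports the median of several runs, which a single outlier run skews far less than a mean would.

```java
import java.util.Arrays;

// Hypothetical helper: aggregate repeated benchmark runs into one number
// by reporting the median duration.
public class RunAggregator {
    static double medianMs(long[] durationsMs) {
        long[] sorted = durationsMs.clone();
        Arrays.sort(sorted);
        int mid = sorted.length / 2;
        return sorted.length % 2 == 1
                ? sorted[mid]                          // odd count: middle value
                : (sorted[mid - 1] + sorted[mid]) / 2.0; // even count: mean of middle two
    }

    public static void main(String[] args) {
        long[] runs = {812, 790, 1450, 805, 798}; // ms; note the one outlier run
        System.out.println("median = " + medianMs(runs) + " ms"); // prints "median = 805.0 ms"
    }
}
```

Whatever statistic you pick, record it alongside the raw per-run numbers so others can re-aggregate your results the same way.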
Unfortunately, we don't yet support benchmarks in CI, so you'll have to run them manually. '''Please use a low-end device.'''
'''To benchmark, do the following:'''