Performance/Fenix/Performance reviews

Do you want to know if your change impacts Fenix or Focus performance? If so, here are the methods you can use, in order of preference:

# '''Benchmark in CI:''' not yet available. However, it would be the preferred method because it's the most consistent
# [[#Benchmark locally|'''Benchmark locally:''']] use an automated test to measure the change in duration
# [[#Timestamp benchmark|'''Timestamp benchmark:''']] add temporary code and manually test to measure the change in duration (see the sketch below)
# [[#Profile|'''Profile:''']] use a profile to measure the change in duration
The trade-offs of each technique are described in its respective section.
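
For illustration, a timestamp benchmark's temporary instrumentation can be as simple as logging elapsed time around the code path under test. This is a minimal sketch, not code from the Fenix repository; the function under test, log tag, and message are made up:

<syntaxhighlight lang="kotlin">
import android.os.SystemClock
import android.util.Log

// Temporary code for a timestamp benchmark: remove it before landing.
fun measureWorkUnderTest() {
    // Point A: capture a monotonic timestamp before the code under test.
    val startMs = SystemClock.elapsedRealtime()

    doTheWorkUnderTest() // hypothetical placeholder for the code path being measured

    // Point B: log the elapsed duration and read it from logcat over several runs.
    Log.i("PerfReview", "duration: ${SystemClock.elapsedRealtime() - startMs} ms")
}
</syntaxhighlight>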

== Benchmark locally ==
A benchmark is an automated test that measures performance, usually the duration from point A to point B. Automated benchmarks have trade-offs similar to those of automated functionality tests when compared to one-off manual testing: they can continuously catch regressions and they minimize human error. For manual benchmarks in particular, it can be tricky to aggregate each test run into the results consistently. However, automated benchmarks are time-consuming and difficult to write, so sometimes it's better to perform manual tests.
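
For example, a local duration benchmark on Android can be written with Jetpack Macrobenchmark. The sketch below measures cold-startup duration; it is a generic illustration under assumed names (the test class, iteration count, and target package), not Fenix's actual benchmark harness:

<syntaxhighlight lang="kotlin">
import androidx.benchmark.macro.StartupMode
import androidx.benchmark.macro.StartupTimingMetric
import androidx.benchmark.macro.junit4.MacrobenchmarkRule
import androidx.test.ext.junit.runners.AndroidJUnit4
import org.junit.Rule
import org.junit.Test
import org.junit.runner.RunWith

@RunWith(AndroidJUnit4::class)
class StartupBenchmark {
    @get:Rule
    val benchmarkRule = MacrobenchmarkRule()

    // Measures from app launch (point A) to the first rendered frame (point B),
    // repeating the run to smooth out device noise.
    @Test
    fun coldStartup() = benchmarkRule.measureRepeated(
        packageName = "org.mozilla.fenix", // assumed application id
        metrics = listOf(StartupTimingMetric()),
        iterations = 10,
        startupMode = StartupMode.COLD
    ) {
        pressHome()
        startActivityAndWait()
    }
}
</syntaxhighlight>

Because run-to-run variance on mobile devices is high, it helps to run many iterations and compare distributions (e.g. medians) rather than single runs.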

