Performance/Fenix/Performance reviews

Do you want to know if your change impacts Fenix or Focus performance? If so, here are the methods you can use, with the preferred methods at the top:


# [[#Benchmark|'''Benchmark:''']] use an automated test to measure the change in duration
# [[#Timestamp benchmark|'''Timestamp benchmark:''']] add temporary code and manually test to measure the change in duration
# [[#Profile|'''Profile:''']] measure the change in duration from a profile
 
The trade-offs for each technique are mentioned in their respective section.
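As a rough illustration of the "timestamp benchmark" approach from the list above, the sketch below shows the kind of temporary code you might add locally to time a suspected-slow section. The helper name and the timed work are hypothetical, not actual Fenix code; in the app you would log via <code>android.util.Log</code> rather than <code>println</code>.

```kotlin
// Hypothetical sketch of a timestamp benchmark: temporary code added
// locally to measure how long a block of code takes, then removed
// before landing the change.
fun <T> measureDurationMs(label: String, block: () -> T): T {
    val startNs = System.nanoTime()
    val result = block()
    val elapsedMs = (System.nanoTime() - startNs) / 1_000_000.0
    // In an Android app, prefer android.util.Log over println.
    println("$label took $elapsedMs ms")
    return result
}

fun main() {
    // Stand-in for the code under test.
    val items = measureDurationMs("buildList") {
        List(1_000) { it * it }
    }
    println("built ${items.size} items")
}
```

Run the measurement several times on a device, before and after your change, since single runs are noisy.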
 
== Benchmark ==
TODO
 
=== Testing Start Up code ===


To test start up code, the approach is usually simple:
An example of using these steps to review a PR can be found [https://github.com/mozilla-mobile/fenix/pull/20642#pullrequestreview-748204153 here].


=== Testing non start-up changes ===


Testing non start-up changes is a bit different from the steps above, since the performance team doesn't currently have tools to test other parts of the browser.
## [https://wiki.mozilla.org/Performance/Fenix/Profilers_and_Tools#Profilers_and_performance_tooling Profiles] can be a good visual representation of performance changes. A simple way to find your code and its changes is through the call tree, the flame graph, or the stack chart. '''NOTE: some code may be missing from the stack because ProGuard may inline it, or because the code runs faster than the profiler's sampling interval.'''
## Another useful tool to find changes in performance is markers. Markers can be good to show the time elapsed between point A and point B, or to pinpoint when a certain action happens.
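To illustrate the idea behind markers (measuring the elapsed time between two named points), here is a simplified, hypothetical recorder. The real Firefox Profiler marker API (for example GeckoView's <code>ProfilerController</code>) is different; this sketch only demonstrates the concept.

```kotlin
// Hypothetical, simplified marker recorder: record when named events
// happen, then compute the time elapsed between point A and point B.
// This is NOT the Firefox Profiler API, just an illustration.
class MarkerRecorder {
    private val markers = mutableMapOf<String, Long>()

    // Record the current time (nanoseconds) under a marker name.
    fun mark(name: String) {
        markers[name] = System.nanoTime()
    }

    // Time elapsed between two recorded markers, in milliseconds.
    fun elapsedMs(from: String, to: String): Double {
        val start = requireNotNull(markers[from]) { "missing marker: $from" }
        val end = requireNotNull(markers[to]) { "missing marker: $to" }
        return (end - start) / 1_000_000.0
    }
}

fun main() {
    val recorder = MarkerRecorder()
    recorder.mark("pageLoadStart")
    Thread.sleep(10)  // stand-in for the work being measured
    recorder.mark("pageLoadEnd")
    println("elapsed: ${recorder.elapsedMs("pageLoadStart", "pageLoadEnd")} ms")
}
```

In a real profile, markers recorded like this show up on the timeline, so you can read the A-to-B duration directly in the profiler UI instead of logging it.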
== Timestamp benchmark ==
TODO
== Profile ==
TODO