Performance/Fenix/Performance reviews

## <code>Cold view nav start (cold_view_nav_start in the script)</code>. This path is taken when the browser is opened through an outside link (e.g., a link opened through Gmail).
## <code>Cold main session restore (cold_main_session_restore in the script)</code>. This path is taken when the browser was closed with an open tab. When reopening, the application will automatically restore that session.
# After determining the path your changes affect, running the scripts is the next step. These are the steps to follow:
## The usual iteration count used is 25. Running fewer iterations might affect the results due to noise.
## Make sure the application you're testing is a fresh install. '''If testing the Main intent (which is where the browser ends up on its homepage), make sure to clear the onboarding process before testing.'''
## Run <code>measure_start_up.py</code>, located in perf-tools:
  python3 measure_start_up.py {path_changes_affect} {path_to_repo} {release_channel} -p fenix -c {how_many_iterations_to_test} --no_start_up_cache
 
# Once you have gathered your results, you can analyze them using [https://github.com/mozilla-mobile/perf-tools/blob/main/analyze_durations.py <code>analyze_durations.py</code>], also found in the perf-tools repository:
  python3 analyze_durations.py {path_to_output_of_measure_start_up.py}
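To get an intuition for what this analysis involves, here is a minimal, hypothetical Python sketch that summarizes a list of start-up durations the way a duration-analysis tool might (iteration count, mean, median, standard deviation). This is not the actual <code>analyze_durations.py</code> script, and the sample numbers are invented:

```python
import statistics

def summarize_durations(durations_ms):
    """Summarize start-up durations (in ms) gathered over repeated iterations."""
    return {
        "iterations": len(durations_ms),
        "mean_ms": statistics.mean(durations_ms),
        "median_ms": statistics.median(durations_ms),
        "stdev_ms": statistics.stdev(durations_ms),
    }

# Hypothetical measurements from 5 iterations (real runs use ~25 to tame noise).
sample = [812, 798, 845, 803, 822]
summary = summarize_durations(sample)
print(summary)
```

The median is often more useful than the mean here because a single noisy outlier iteration skews it less.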
 
 
'''NOTE''': For testing before and after to compare changes made to Fenix, repeat these steps for the code before the changes. To do so, you can check out the parent commit (e.g., using <code>git rev-parse ${SHA}^</code>, where <code>${SHA}</code> is the first commit on the branch where the changes are).
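As a concrete illustration of that checkout step, the sketch below builds a throwaway git repository with two commits and resolves the parent of the newer one with <code>rev-parse</code>; the repository, author identity, and commit messages are invented for the example:

```shell
#!/bin/sh
set -e

# Build a throwaway repo with two commits (all names here are illustrative).
repo=$(mktemp -d)
cd "$repo"
git init -q
git -c user.email=a@b -c user.name=tmp commit -q --allow-empty -m "before changes"
git -c user.email=a@b -c user.name=tmp commit -q --allow-empty -m "first commit with changes"

SHA=$(git rev-parse HEAD)          # the first commit on the branch with the changes
PARENT=$(git rev-parse "${SHA}^")  # its parent: the code before the changes

# Check out the parent to measure the "before" state.
git checkout -q "$PARENT"
git log -1 --format=%s
```

In a real review you would run <code>measure_start_up.py</code> once on the branch tip and once on this parent checkout, then compare the two analyses.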


An example of using these steps to review a PR can be found [https://github.com/mozilla-mobile/fenix/pull/20642#pullrequestreview-748204153 here].