Whenever you submit a PR for Fenix or Focus and believe the changed code could have a positive (or negative) impact on performance, there are a few things you can do to measure that impact.
== Testing Start Up code ==
#After determining the path your changes affect, these are the steps to follow:
* Run <code>measure_start_up.py</code> located in perf-tools. '''Note''':
**The usual iteration count used is 25. Running fewer iterations might affect the results due to noise.
**Make sure the application you're testing is a fresh install (see the sketch after this list for one way to reset the app with adb). '''If testing the Main intent (which is where the browser ends up on its homepage), make sure to clear the onboarding process before testing.'''
Example:
 python3 measure_start_up.py cold_view_nav_start /Users/johndoe/repositories/fenix/ nightly -p fenix -c 50 --no_start_up_cache
where <code>-p</code> is the product and <code>-c</code> is the iteration count.
* Once you have gathered your results, you can analyze them using <code>analyze_durations.py</code> in perf-tools.
 python3 analyze_durations.py /Users/johndoe/output/measure_start_up_results.txt
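For reference, here is a rough end-to-end sketch of one way to compare a baseline build against a build containing your change. It is not part of perf-tools: the package name (<code>org.mozilla.fenix</code>, assuming a Nightly build), the APK paths, and the results file names below are placeholders to adjust for your own setup, while the <code>measure_start_up.py</code> and <code>analyze_durations.py</code> invocations mirror the examples above.
 # Rough sketch only -- package name, APK paths, and results file names are placeholders.
 # 1. Start from a fresh install so leftover profile/onboarding state does not skew the numbers.
 adb uninstall org.mozilla.fenix
 adb install /Users/johndoe/builds/fenix-baseline.apk
 # If measuring the Main intent, launch the app once and dismiss onboarding before continuing.
 # 2. Measure the baseline build (25 iterations, as recommended above).
 python3 measure_start_up.py cold_view_nav_start /Users/johndoe/repositories/fenix/ nightly -p fenix -c 25 --no_start_up_cache
 # Save this run's results file (e.g. as baseline_results.txt) so it is not mixed with the next run.
 # 3. Reset again with the build that contains your change and repeat the measurement.
 adb uninstall org.mozilla.fenix
 adb install /Users/johndoe/builds/fenix-with-change.apk
 python3 measure_start_up.py cold_view_nav_start /Users/johndoe/repositories/fenix/ nightly -p fenix -c 25 --no_start_up_cache
 # 4. Summarize each set of results and compare the reported durations.
 python3 analyze_durations.py /Users/johndoe/output/baseline_results.txt
 python3 analyze_durations.py /Users/johndoe/output/with_change_results.txt
Comparing the two summaries side by side makes it easier to tell whether a difference is larger than the run-to-run noise.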