
Performance/Fenix/Best Practices

1,170 bytes added, 19:12, 20 May 2021
== Use the profiler to understand problems, not to assert their absence ==
The profiler is useful for understanding what might cause a perf problem, but '''it's imperfect for determining whether a perf problem exists.''' For example, if you've made a code change with the intention of improving performance, you may notice that the problem point is gone in your profile. Success, right? Maybe not: '''the code change may have moved the performance problem elsewhere,''' and it's easy to overlook this in the profiler view. For example, perhaps you removed a long call to load <code>SharedPreferences</code>, but the next call to <code>SharedPreferences</code> grows in duration to compensate, and startup is just as slow.

To see if a code change creates a perf regression or improvement, you should '''ideally run a known benchmark''' – i.e. a duration measurement from a start point to a stop point – and compare performance before and after your code change. With benchmarks, you can't overlook regressions the way you can in the profiler. If you don't have a benchmark, you can create your own with timestamp logs, though this should be done carefully to keep the measurements consistent – writing good benchmarks is hard.
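As a rough illustration of the timestamp-log approach, here is a minimal sketch in plain Java. The name <code>expensiveOperation()</code> is a hypothetical stand-in for the real start-to-stop code path, and the warmup loop and averaging over multiple runs are examples of the kind of care the measurement needs to stay consistent; this is a sketch, not a substitute for a proper benchmarking framework.

```java
public class StartupBenchmark {
    // Hypothetical workload standing in for the code path under measurement.
    private static void expensiveOperation() {
        long sum = 0;
        for (int i = 0; i < 1_000_000; i++) sum += i;
        if (sum < 0) throw new IllegalStateException("unreachable");
    }

    public static void main(String[] args) {
        final int warmups = 5;  // let the JIT settle before measuring
        final int runs = 20;    // average several runs to reduce noise
        for (int i = 0; i < warmups; i++) expensiveOperation();

        long totalNanos = 0;
        for (int i = 0; i < runs; i++) {
            long start = System.nanoTime(); // start point
            expensiveOperation();
            long stop = System.nanoTime();  // stop point
            totalNanos += stop - start;
        }
        System.out.println("mean ms: " + (totalNanos / runs) / 1_000_000.0);
    }
}
```

On Android itself you would log these timestamps (e.g. via <code>Log</code>) rather than print them, but the principle – fixed start point, fixed stop point, repeated consistent runs – is the same.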