Project Fission/Memory

Due to the drastic increase in the number of processes required to support [[Project Fission]], we must focus on reducing the memory overhead of each content process. As a baseline, we are working on reducing the amount of memory used to load an about:blank page.
 
We should also look at per-content-process overhead in the parent process. One example of this is IPDL protocols, which we are not currently measuring.
 
== Measurement tools ==
=== about:memory ===
This is the default tool for looking at memory usage. One approach to finding targets for memory reduction is to look at the about:memory report for a small content process (for instance, example.com) and see what is using up memory. In addition to using it to guide memory reduction work, we can work to reduce the heap-unclassified number in tests of interest. Adding new reporters often leads to finding things to improve.
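
For example, about:memory's "Measure and save..." button writes a gzipped JSON report, and a small script along the following lines can break one process's usage down by top-level path. This is only a rough sketch: it assumes the saved report has a top-level "reports" list whose entries have "process", "path", "units", and "amount" fields.

<syntaxhighlight lang="python">
#!/usr/bin/env python3
# Rough sketch: summarize a memory report saved from about:memory.
# Assumption: the file is gzipped JSON with a top-level "reports" list whose
# entries have "process", "path", "units", and "amount" fields.

import gzip
import json
import sys
from collections import defaultdict

def summarize(report_file, process_substr):
    with gzip.open(report_file, "rt", encoding="utf-8") as f:
        data = json.load(f)

    totals = defaultdict(int)
    for r in data["reports"]:
        if process_substr not in r["process"]:
            continue
        if r["units"] != 0:  # assume 0 means bytes; skip counts/percentages
            continue
        # Group by the first two path components, e.g. "explicit/js-non-window".
        prefix = "/".join(r["path"].split("/")[:2])
        totals[prefix] += r["amount"]

    for prefix, amount in sorted(totals.items(), key=lambda kv: -kv[1]):
        print(f"{amount / (1024 * 1024):9.2f} MiB  {prefix}")

if __name__ == "__main__":
    summarize(sys.argv[1], sys.argv[2])
</syntaxhighlight>

Run against a saved report with a substring matching the content process of interest, this prints per-category MiB totals, with explicit/heap-unclassified showing up directly as one of the buckets.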
 
=== GC logs ===
GC logs, which can be saved via the about:memory page, record information about live JS objects. As JS is a major source of memory overhead, having a detailed understanding of where the memory is going is very useful. There are a [https://github.com/amccreight/heapgraph/tree/master/g number of scripts available] to help analyze these log files; a minimal example of this kind of parsing is sketched below.
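
A minimal sketch of that kind of analysis, counting live objects by label, might look like the following. The log format is assumed here: each object appears on a line starting with its hex address followed by a mark color and a label, while edge lines start with ">" and comment lines with "#". The heapgraph scripts do this parsing far more completely.

<syntaxhighlight lang="python">
#!/usr/bin/env python3
# Rough sketch: count GC log nodes by label.
# Assumption: object lines start with a hex address, then a mark color, then a
# label; edge lines start with ">" and comment lines with "#".

import sys
from collections import Counter

def count_labels(log_file):
    counts = Counter()
    with open(log_file, encoding="utf-8", errors="replace") as f:
        for line in f:
            if not line.startswith("0x"):
                continue  # skip edges, comments, and section separators
            parts = line.split(None, 2)
            if len(parts) == 3:
                # parts = [address, color, label]
                counts[parts[2].strip()] += 1

    for label, n in counts.most_common(20):
        print(f"{n:8d}  {label}")

if __name__ == "__main__":
    count_labels(sys.argv[1])
</syntaxhighlight>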
 
One interesting variation of this is to run with MOZ_GC_LOG_SIZE=1, which leverages the DevTools UbiNode work to record the size of individual objects in the GC log. You can then use those logs with the dominator-tree-based analysis in [https://github.com/amccreight/heapgraph/blob/master/g/dom_tree.py dom_tree.py] to get fine-grained information about the memory overhead of chrome JS, down to the individual function level.
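
For intuition, the core of a dominator-tree analysis is a retained-size rollup along these lines. This is only a minimal sketch: building the graph from the log and computing immediate dominators, which dom_tree.py handles, are omitted, and the function name here is illustrative.

<syntaxhighlight lang="python">
# Rough sketch of the retained-size rollup behind a dominator-tree analysis:
# an object's retained size is its own shallow size plus the retained sizes of
# everything it immediately dominates.

from collections import defaultdict

def retained_sizes(idom, shallow):
    """idom: node -> immediate dominator (roots map to None).
    shallow: node -> shallow size in bytes."""
    retained = dict(shallow)
    children = defaultdict(list)
    for node, dom in idom.items():
        if dom is not None:
            children[dom].append(node)

    # Post-order accumulation: add each node's retained size to its dominator.
    # (Recursive for brevity; a real heap may need an iterative traversal.)
    def visit(node):
        for child in children[node]:
            visit(child)
            retained[node] += retained[child]

    for root in (n for n, d in idom.items() if d is None):
        visit(root)
    return retained

# Tiny example: a root dominating two objects, one of which dominates a third.
idom = {"root": None, "a": "root", "b": "root", "c": "a"}
shallow = {"root": 0, "a": 100, "b": 50, "c": 25}
print(retained_sizes(idom, shallow))  # {'root': 175, 'a': 125, 'b': 50, 'c': 25}
</syntaxhighlight>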

== Metrics ==