Necko/MobileCache/MicroBenchmarks

=== test_stresstest_cache_with_timing.js ===


This test loads a number of '''different''' resources of uniform size. All responses are 200 OK, i.e. there are no redirects. The resources are loaded sequentially in a tight loop, and the test syncs with the cache-io thread after finishing the loop. Each loop is executed for five different cache configurations/setups:


#no cache enabled, loading all urls from the server, not writing to the cache (clearing the cache before this step)
#time measured by the test using Date.now() for the loop, including syncing with the cache-io thread at the end
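The Date.now()-based measurement above can be sketched roughly as follows. Note that <code>loadResource()</code> and <code>syncWithCacheIoThread()</code> are hypothetical stand-ins for the helpers the real xpcshell test uses, not actual test functions:

```javascript
// Sketch of the timing pattern: run all loads sequentially in a tight loop,
// then sync with the cache-io thread, timing the whole thing with Date.now().
async function timedLoop(urls, loadResource, syncWithCacheIoThread) {
  const start = Date.now();
  for (const url of urls) {
    await loadResource(url);       // sequential: next load starts after this one ends
  }
  await syncWithCacheIoThread();   // the sync step is included in the measured time
  return Date.now() - start;       // elapsed ms for loop + sync
}
```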


;timingchannel-<bytes>.dat
:One file for each datasize run by the test. The format is multi-data, with one datablock for each configuration. Each row represents one resource load, in actual order. The datapoints in each row are


#time from responseStart to responseEnd
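A multi-data file like this could be read back along the following lines. The separator convention is an assumption (blocks split on blank lines, whitespace-separated numeric fields); the actual files may differ in detail:

```javascript
// Parse a multi-data .dat file: one datablock per cache configuration,
// one row per resource load, one numeric field per datapoint.
function parseDatFile(text) {
  return text
    .trim()
    .split(/\n\s*\n/)                 // blocks are separated by blank lines (assumption)
    .map(block =>
      block
        .trim()
        .split("\n")                  // one row per resource load
        .map(row => row.trim().split(/\s+/).map(Number))
    );
}
```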


Referring to the Telemetry data accumulated by nsLoadGroup, (2+3) corresponds to Telemetry::HTTP_PAGE_OPEN_TO_FIRST_FROM_CACHE or, alternatively, Telemetry::HTTP_PAGE_OPEN_TO_FIRST_RECEIVED, and (2+3+4) corresponds to Telemetry::HTTP_PAGE_COMPLETE_LOAD.


==== Files to work with GnuPlot (requires Gnuplot >= 4.4) ====
:Plots summaries of average loading-times seen from JS. The term "INCLUDING sync" means that the time includes time to sync with the cache-io thread after finishing a loop.


;plot-timingchannel.gnu
:You need to copy the file "timingchannel-<size>.dat" to "timingchannel.dat" for the size you want to study. This plots detailed information for each resource loaded, one plot for each configuration. The format used is a rowstacked histogram where each column shows the total time from creating the channel until loading has finished. '''Note''' that columns show the time for each load separately (all times starting at 0), as opposed to a real time-line for the dataset.


==== Sample results ====

|}


The Nexus S seems to perform very well compared to the Linux server; in fact, it is generally faster! Note that the Nexus handles entries of sizes up to 8K pretty much as fast as 128-byte entries, whereas the Linux box maintains this performance only up to 4K. Note also the very first column in the Nexus plot - most likely there is one outlier in this dataset. Let's study it closer using "plot-timingchannel.gnu" with the "timingchannel-128.dat" file:


[[File:Nexuss-100iter-128byte-telemetry-nocache.png|800px]]


Observe that the first load takes a long time, caused by a delay between creating the channel and the call to asyncOpen. I'm not entirely sure what causes this, but I assume it is because it's the first load and certain things may have to be loaded and set up. It is not (should not be!) caused by the cache-service creating the disk cache, because caching is disabled in this test. It is easy to modify the test to load a couple of resources prior to starting the timed loop. The corresponding timingchannel plot from the Linux server looks like this:


[[File:Watson-100iter-128byte-telemetry-nocache.png|800px]]
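The warm-up modification suggested above (loading a couple of resources before the timed loop starts, so first-load setup cost does not show up as an outlier) could be sketched like this; <code>loadResource</code> is again a hypothetical stand-in for the test's load helper:

```javascript
// Warm-up sketch: load the first few resources outside the timed loop,
// so one-time setup cost is paid before measurement begins.
async function warmUp(urls, loadResource, count = 2) {
  for (const url of urls.slice(0, count)) {
    await loadResource(url);   // these loads are deliberately not timed
  }
}
```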