==== Real-life verification of results from the lab ====

Telemetry monitors the real-life efficiency of the cache, so by monitoring the right values we can ensure that the improvements we see in the lab also benefit real users. Exactly which measurements we need is not entirely clear yet (work is in progress to determine this).

An important point is that in order to verify results like this we must measure the same (or at least very similar) values in microbenchmarks and in telemetry. We should also, of course, measure other values from telemetry in order to cross-check our results, but in order to verify lab improvements in real life we must align measurements. The rationale for this is as follows: say we want to improve characteristic A. We write a benchmark to measure A, fix the code, and see a clear improvement of A in the lab, so we push the code-fix. Now suppose telemetry does ''not'' measure A but rather some other characteristic B. What if B didn't improve (or even degraded)? Is it because A and B are unrelated? Or is it because A didn't actually improve in real life at all? We must first verify that A actually ''did'' improve in real life; ''then'' we can discuss why B did ''not'' improve, and finally we can decide whether the code-change is worth keeping. Such results will also increase our understanding of how the caching works in general.

Thus, the suggested strategy is to first introduce a telemetry-probe to gather baseline data for some characteristic we want to improve, then introduce a code-fix which improves the characteristic in the lab, and finally monitor changes from telemetry to verify the improvement we saw in the lab. In the last step, the particular characteristic should be monitored as well as other (more general) performance-probes for cross-checking.

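For illustration, the probe side of this strategy could look roughly like this from JavaScript. This is a minimal sketch: the histogram name HTTP_CACHE_EXAMPLE_LOAD_TIME is hypothetical, and a real probe must first be registered with the telemetry module before it can be used.

 // Sketch: mirror a lab measurement into a (hypothetical) telemetry histogram.
 const Cc = Components.classes;
 const Ci = Components.interfaces;
 var telemetry = Cc["@mozilla.org/base/telemetry;1"].getService(Ci.nsITelemetry);
 function recordCacheLoadTime(msec) {
   // getHistogramById() throws if the probe has not been declared
   telemetry.getHistogramById("HTTP_CACHE_EXAMPLE_LOAD_TIME").add(msec);
 }
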
== The benchmarks ==

=== test_stresstest_cache_with_timing.js ===

This test loads a number of '''different''' resources with uniform size. All responses are 200 OK, i.e. there are no redirections. The resources are loaded sequentially in a tight loop, and the test syncs with the cache-IO thread after finishing the loop. Each loop is executed for a number of different cache-configurations/setups (a sketch of how such a configuration can be set up is shown after the list):
#no cache enabled, loading all urls from server, not writing to cache (clearing cache before this step)
#empty memory-cache, loading all urls from server, writing them to cache (clearing cache before this step)
#memory-cache primed by previous step, reading all urls from cache again
#empty disk-cache, loading all urls from server, writing them to cache (clearing cache before this step)
#disk-cache primed by previous step, reading all urls from cache
#memory-cache filled with small (128-byte) entries, loading all urls from server, writing them to cache (clearing and pre-loading cache before this step)
#memory-cache primed by previous step, reading all urls from cache
#memory-cache filled with large (128 KB) entries, loading all urls from server, writing them to cache (clearing and pre-loading cache before this step)
#memory-cache primed by previous step, reading all urls from cache
#disk-cache filled with small (128-byte) entries, loading all urls from server, writing them to cache (clearing and pre-loading cache before this step)
#disk-cache primed by previous step, reading all urls from cache
#disk-cache filled with large (128 KB) entries, loading all urls from server, writing them to cache (clearing and pre-loading cache before this step)
#disk-cache primed by previous step, reading all urls from cache

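Enabling or disabling the two caches and clearing them between steps can be done through preferences and the cache service. The sketch below uses the standard preference names and the nsICacheService API in use at the time; the helper name itself is illustrative:

 // Sketch: enable only the wanted cache and evict everything so the
 // step starts from a known state. The helper name is illustrative.
 const Cc = Components.classes;
 const Ci = Components.interfaces;
 var prefs = Cc["@mozilla.org/preferences-service;1"].getService(Ci.nsIPrefBranch);
 var cacheService = Cc["@mozilla.org/network/cache-service;1"].getService(Ci.nsICacheService);
 function setupCache(useMemCache, useDiskCache) {
   prefs.setBoolPref("browser.cache.memory.enable", useMemCache);
   prefs.setBoolPref("browser.cache.disk.enable", useDiskCache);
   cacheService.evictEntries(Ci.nsICache.STORE_ANYWHERE);
 }
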
This whole procedure is repeated over a given list of datasizes, clearing the cache between each repetition.

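The uniformly sized responses for each datasize can be produced by a handler in the test's embedded HTTP server (httpd.js). A minimal sketch, where the handler and variable names are illustrative rather than taken from the actual test:

 // Sketch: serve a body of exactly gDataSize bytes from httpd.js.
 var gDataSize = 1024; // illustrative; the test iterates over a list of sizes
 function uniformSizeHandler(metadata, response) {
   var body = new Array(gDataSize + 1).join("a"); // gDataSize bytes of "a"
   response.setStatusLine(metadata.httpVersion, 200, "OK");
   response.setHeader("Content-Type", "text/plain", false);
   response.bodyOutputStream.write(body, body.length);
 }
 // In the test setup:
 //   server.registerPathHandler("/resource", uniformSizeHandler);
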
Time is measured in two ways: 1) for each individual load, by using the nsITimedChannel interface of the channel, and 2) by measuring the run-time of the complete loop from the test itself.

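A sketch of reading the per-load timestamps off a finished channel follows. The attribute names are from nsITimedChannel; timestamps are PRTime values in microseconds, and timingEnabled must be set on the channel before asyncOpen() for them to be collected.

 // Sketch: report timing for one finished load via nsITimedChannel.
 const Ci = Components.interfaces;
 function reportTiming(channel) {
   var timed = channel.QueryInterface(Ci.nsITimedChannel);
   var totalMs = (timed.responseEndTime - timed.channelCreationTime) / 1000;
   dump("total load time: " + totalMs + " ms\n");
   if (timed.cacheReadStartTime) { // non-zero only when served from cache
     var cacheMs = (timed.cacheReadEndTime - timed.cacheReadStartTime) / 1000;
     dump("cache read time: " + cacheMs + " ms\n");
   }
 }
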
Informally, one can say that this test triggers loading of a large number of uniformly sized resources at the same time, somewhat similar to what happens when starting up Firefox with lots of open tabs. Time to finish loading all resources is measured over combinations of memory-cache vs disk-cache, empty cache vs full cache, cache containing small entries vs large entries, and all items found in cache vs no items found in cache.

==== Generated datafiles ====

The test writes data into a number of files. '''Note that''' all files are appended to, meaning that if they exist prior to the run you may get unexpected results.

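To avoid mixing data from different runs, stale datafiles can simply be removed before starting, e.g. as sketched here (the path is illustrative):

 // Sketch: delete a stale datafile so appended data starts from scratch.
 const Cc = Components.classes;
 const Ci = Components.interfaces;
 var file = Cc["@mozilla.org/file/local;1"].createInstance(Ci.nsILocalFile);
 file.initWithPath("/tmp/summary.dat"); // illustrative path
 if (file.exists())
   file.remove(false); // false = not recursive
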
;summary.dat
:Format is multi-data, with one datablock for each datasize run by the test. Each datablock has a header describing the datasize, then one row for each of the configurations described above, summing up the results from the loop. Datapoints are
#description of the configuration

;plot-timingchannel.gnu
:You need to copy the file "timingchannel-<size>.dat" to "timingchannel.dat" for the size you want to study. The script plots detailed information for each resource loaded, one plot for each configuration. The format used is a rowstacked histogram where each column shows the total time from creating the channel until loading has finished. '''Note''' that the columns show time for each load separately (all times starting at 0), as opposed to a real time-line for the dataset.

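The copy step can be scripted if preferred; a sketch using nsIFile (paths are illustrative):

 // Sketch: put the 128-byte dataset where the GnuPlot script expects it.
 const Cc = Components.classes;
 const Ci = Components.interfaces;
 var src = Cc["@mozilla.org/file/local;1"].createInstance(Ci.nsILocalFile);
 src.initWithPath("/tmp/timingchannel-128.dat"); // illustrative path
 var dest = src.parent.clone();
 dest.append("timingchannel.dat");
 if (dest.exists())
   dest.remove(false);
 src.copyTo(src.parent, "timingchannel.dat");
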
==== Sample results (debug-builds with logging enabled) ====

This benchmark has currently been run successfully on two different Linux systems and on a Nexus S. Below are extracts from the results, focusing on time to load resources as seen from JavaScript. The plots have been obtained by editing "summary.dat", sorting results by cache-type and splitting them up into "hits" and "misses", which are plotted separately. Modifying the generated files and the GnuPlot files to do this is trivial and I won't discuss it here.

The first set of graphs shows "misses", i.e. it indicates the time to search the cache for each resource, clear space for it, load it, and finally set it up to be saved. Resources are served from the built-in httpd.js; loading with the cache disabled is included first as a reference. Hence, the first group in each histogram shows resources loaded without caching, the next three groups show results with the disk-cache enabled, and the last three groups show results for the memory-cache.

{|
| [[File:Watson-summary-misses-100iter.png|400px]]
| [[File:HPEnvy-summary-misses-100iter.png|400px]]
| [[File:Nexuss-summary-misses-100iter.png|400px]]
|}

Note that a memory-cache full of small entries (sixth group) seems to perform badly for large resources (surprising, because you'd expect the disk-cache to always be slower).

Note that a disk-cache full of small entries (third group) is generally slower than the empty disk-cache (second group).

The second set of graphs shows "hits", i.e. the time spent loading the resource from the cache. It indicates the time to search for the resource as well as to load it. Columns are sorted as in the previous set of graphs.

| {| border="1" cellpadding="5" cellspacing="0" align="center" | | {| |
| |+'''Summary 2 of results over different platforms'''
| | | [[File:Watson-summary-hits.png|400px]] |
| |-
| | | [[File:HPEnvy-summary-hits.png|400px]] |
| ! colspan="2" | || 50 || 1024 || 51200 || 524288
| | | [[File:Nexuss-summary-hits.png|400px]] |
| |-
| |
| ! rowspan="5" | Ubuntu 64bit server
| |
| ! align="right" | Miss (no-cache)
| |
| | 27.30 || 29.56 || 98.01 || 734.07
| |
| |-
| |
| | align="right" | (mem-cache)
| |
| | 27.34 || 28.60 || 96.38 || 736.57
| |
| |-
| |
| | align="right" | (disk-cache)
| |
| | 28.10 || 29.12 || 97.21 || 734.50
| |
| |-
| |
| ! | Hit (mem-cache)
| |
| | 2.14 || 2.76 || 32.91 || 336.87
| |
| |-
| |
| | align="right" | (disk-cache)
| |
| | 2.13 || 2.83 || 33.40 || 337.13
| |
| |-
| |
| ! rowspan="5" | Nexus S
| |
| ! align="right" | Miss (no-cache)
| |
| | 31.21 || 30.27 || 84.57 || 556.42
| |
| |-
| |
| | align="right" | (mem-cache)
| |
| | 28.80 || 29.57 || 84.66 || 558.44
| |
| |- | |
| | align="right" | (disk-cache)
| |
| | 28.87 || 30.68 || 86.88 || 564.83
| |
| |- | |
| ! align="right" | Hit (mem-cache)
| |
| | 2.16 || 2.86 || 25.38 || 228.31
| |
| |- | |
| | align="right" | (disk-cache)
| |
| | 2.38 || 3.03 || 23.55 || 236.67
| |
| |- | |
| ! rowspan="5" | Samsung Galaxy S (from Geoff)
| |
| ! align="right" | Miss (no-cache)
| |
| | 31.33 || 27.48 || 74.04 || 524.20
| |
| |-
| |
| | align="right" | (mem-cache)
| |
| | 27.27 || 28.32 || 75.05 || 529.32
| |
| |-
| |
| | align="right" | (disk-cache)
| |
| | 28.04 || 28.43 || 93.92 || 612.59
| |
| |-
| |
| ! align="right" | Hit (mem-cache)
| |
| | 2.67 || 3.04 || 21.67 || 209.69
| |
| |-
| |
| | align="right" | (disk-cache)
| |
| | 3.03 || 3.37 || 24.37 || 218.61
| |
| |} | | |} |
Note the time to load 2K and 8K entries from a memory-cache filled with small entries (sixth group)... Could this be caused by the access-pattern, or is it generally true?

== Wanted tests ==