== Telemetry vs microbenchmarks ==
There has been some discussion about using xpcshell-based microbenchmarks instead of telemetry. IMO these approaches are complementary, but there '''may''' be a point in using the same code as telemetry does to capture data from xpcshell-tests (as opposed to using JS time-functions in the test). In other words, we may try to use the same mechanisms to harvest data for microbenchmarks as the telemetry-code uses.

The major benefit of this is getting real-life verification by telemetry '''after''' using synthetic, isolated and focused benchmarks in the lab. I.e. we can use synthetic test-patterns implemented as xpcshell-tests (the microbenchmarks) in the lab to identify and qualify code-changes; after landing those changes, we should be able to verify their effect on real-life usage-patterns via telemetry. If we measure differently in microbenchmarks and telemetry, we may quickly end up "comparing apples and oranges".
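To make the idea concrete, here is a minimal sketch of what such a microbenchmark could look like: an xpcshell-test that times a synthetic workload through the same Telemetry machinery (TelemetryStopwatch and a histogram) instead of raw JS time-functions like Date.now(). The histogram ID and the workload below are hypothetical placeholders, not existing probes:

<pre>
// Minimal sketch of an xpcshell-based microbenchmark that harvests its
// timing data through Telemetry rather than JS time-functions such as
// Date.now(). "MY_MICROBENCHMARK_MS" is a hypothetical histogram ID; a
// real test would use a histogram declared in Histograms.json.
const Cu = Components.utils;
Cu.import("resource://gre/modules/Services.jsm");
Cu.import("resource://gre/modules/TelemetryStopwatch.jsm");

function syntheticWorkload() {
  // Hypothetical synthetic test-pattern standing in for the code under test.
  for (let i = 0; i < 100000; i++) {
    JSON.parse('{"i": ' + (i % 10) + '}');
  }
}

function run_test() {
  // Harvest the measurement with the same stopwatch mechanism the
  // telemetry-code uses, instead of calling Date.now() in the test.
  TelemetryStopwatch.start("MY_MICROBENCHMARK_MS");
  syntheticWorkload();
  TelemetryStopwatch.finish("MY_MICROBENCHMARK_MS");

  // Read the accumulated value back out of the histogram.
  let snapshot = Services.telemetry
                         .getHistogramById("MY_MICROBENCHMARK_MS")
                         .snapshot();
  do_print("elapsed (ms): " + snapshot.sum);
}
</pre>

Because the lab measurement lands in a histogram rather than a local variable, the same probe can later report from the field, which is what makes the before/after comparison apples-to-apples.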
Below is a pro/con list for using telemetry-code vs JS time-functions to harvest data for microbenchmarks - feel free to add and comment.

'''Note that lab-experiments in both approaches will use synthetic test-patterns - the difference is in the way we harvest data.'''

{| border="1" cellpadding="5" cellspacing="0" align="center"