Firefox/Go Faster/Measurement

From MozillaWiki

Overview

Note: ongoing work can be found at https://wiki.mozilla.org/Firefox/Go_Faster/System_Add-ons/Verification_Chain#Verifying_delivery

This page documents the efforts to measure Go Faster deployments. This work is currently in the discovery phase.

Update 2016-10-20

(read from top to bottom)

(georg)

Here is a quick hack on re:dash, from the longitudinal table (a 1% sample of our clients): https://sql.telemetry.mozilla.org/queries/1472/source

(ckprice)

>https://sql.telemetry.mozilla.org/queries/1472/source

This is awesome. I've forked it and will play around a bit. Given this, I'm assuming we can make the following statement:

`Based on a 1% sample size of our clients, there are currently 1,925 Firefox clients with the d3d9fallback@mozilla.org system add-on installed.`

>what is the use-case here?

(probably should have led with this)

The most recent requests we've had for system add-ons have been to fix things (e.g. flipping prefs [0]). In these cases, relman wants to ensure that the fix is rolled out to everybody in a timely manner.

The other 'type' of system add-on is heavier 'feature' work (Hello (rip), Pocket). Metrics here may not need to be as timely, but a browser-level signal of how many users have the add-on installed is good to have to compare against the feature-level metrics being collected.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=1306465

(chutten)

I don't think that's a useful statement to make. (Also, given how the query is structured (it takes only each client's latest report), it isn't a correct statement either.) To make it useful, I think it would need some changes:

  • Constrain it by time.
  • Constrain it by release channel.
  • Report it as a percentage or proportion to give an impression of scale.

So you could say something like: "According to a 1% sample of Firefox clients reporting between Date1 and Date2, X% of Firefox release users have this add-on installed."

That, I think, would be the most concise, useful thing we could get from the longitudinal dataset.
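A constrained query along those lines might look something like the sketch below. This is only an illustration: the date bounds and the add-on id are placeholders, and the column names (client_id, normalized_channel, subsession_start_date, active_addons) are assumed to match the longitudinal schema used in the query above.

```sql
-- Sketch only: proportion of release-channel clients that reported the
-- add-on between two dates. '<the addon id>', '<Date1>', '<Date2>' are
-- placeholders to fill in.
WITH per_client AS (
  SELECT client_id,
         MAX(CASE WHEN element_at(t.addons, '<the addon id>') IS NOT NULL
                  THEN 1 ELSE 0 END) AS has_addon
  FROM longitudinal
  CROSS JOIN UNNEST(subsession_start_date, active_addons)
         AS t(ss_startDate, addons)
  WHERE normalized_channel = 'release'
    AND t.ss_startDate BETWEEN '<Date1>' AND '<Date2>'
  GROUP BY client_id
)
SELECT 100.0 * SUM(has_addon) / COUNT(*) AS pct_with_addon
FROM per_client
```

The inner query collapses each client to a single has/doesn't-have flag (so a client reporting many subsessions is only counted once), and the outer query turns that into the percentage.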

If what you want is to see the rollout of the system add-on across the populations, that would be a decent place to start. What you'd want then is to see numbers per day.

So....

SELECT
  t.ss_startDate AS d,
  CASE WHEN element_at(t.addons, '<the addon id>') IS NOT NULL
       THEN 'has the addon' ELSE 'nopes' END AS has_addon,
  normalized_channel,
  COUNT(DISTINCT client_id) AS num
FROM longitudinal
CROSS JOIN UNNEST(subsession_start_date, active_addons) AS t(ss_startDate, addons)
GROUP BY 1, 2, 3

That... _might_ do it? I'm not sure. But it should give you a list of dates with has/nope counts by channel. Then a Visualization (type: line, x-axis d, y-axis num, group by has_addon or channel or both) should give you the curves you want.
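If raw counts turn out to be hard to read across channels of very different sizes, the same query could be extended to report a per-day proportion instead. Again a sketch only, with the same assumed schema and a placeholder add-on id:

```sql
-- Sketch only: per-day, per-channel percentage of clients with the
-- add-on, instead of raw has/nope counts. '<the addon id>' is a
-- placeholder.
WITH daily AS (
  SELECT t.ss_startDate AS d,
         normalized_channel,
         COUNT(DISTINCT CASE WHEN element_at(t.addons, '<the addon id>') IS NOT NULL
                             THEN client_id END) AS with_addon,
         COUNT(DISTINCT client_id) AS total
  FROM longitudinal
  CROSS JOIN UNNEST(subsession_start_date, active_addons)
         AS t(ss_startDate, addons)
  GROUP BY 1, 2
)
SELECT d,
       normalized_channel,
       100.0 * with_addon / total AS pct_with_addon
FROM daily
```

A line visualization of pct_with_addon over d, grouped by channel, would then show the rollout curves directly as percentages.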

History

TODO:ckprice

  • How do we measure rollout of existing add-ons?
  • What prior work has been done for system add-ons?

Examples of System Add-ons

https://bugzilla.mozilla.org/show_bug.cgi?id=1307108 (pref flip)
https://bugzilla.mozilla.org/show_bug.cgi?id=1306465 (pref flip)

More: https://trello.com/b/moJCpVCD/go-faster-system-add-on-pipeline

Useful Measurements

  • How many instances have been offered a system add-on over time period X?
  • How many instances have been updated over time period X?
  • Metrics on users who have disabled system add-ons.
  • (Ritu)
    • Do we want some metrics on failures?
      • Are there instances where a system add-on update is downloaded but fails to apply?
    • Do we want location-specific measurements? For example, we rolled out a system add-on update for RU-locale users; the metric here would be "how quickly was the system add-on update applied for the RU locale?"
    • How many system add-on updates were pushed for security fixes vs hot fixes vs others?

Note: In theory this isn't allowed, and work is ongoing to prevent it. However, we do need something that will tell us whether we've fixed it, or whether there are recurring issues.