User:Mevans/chofmannUserDataEmail


yeah, there are inflection points where we get new and interesting kinds of bug reports when we go beyond the 30,000 nightly testers.

those inflection points are around 100k, 200k, 300k, 600k, and a million users

I'm sure there are interesting points above that, on the way up to our full installed base, but we don't have as much experience in finding and testing those inflection points. I'm a big advocate for trying to test those points even at the expense of updating our users more slowly, and certainly for pounding away at building bigger beta populations.

https://bugzilla.mozilla.org/show_bug.cgi?id=525581 is an example of a bug that lay dormant for many months despite 30,000 active daily users, and only became visible on 11/6, 10 months after the change was introduced, when 3.6b1 went over 200k ADUs. This is a good case study and there are a few more we should dig up to highlight.
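
As a rough illustration of why a bug like this stays invisible in a smaller pool: if a crash hits only a small fraction of users per day, the expected number of daily reports stays below the noise floor of crash-stats until ADUs grow. A minimal back-of-the-envelope sketch in Python, where the crash rate and noise threshold are hypothetical numbers, not measurements from bug 525581:

 # Hypothetical model: expected daily crash reports vs. pool size.
 # CRASH_RATE and NOISE_FLOOR are illustrative assumptions, not values
 # measured for bug 525581.
 CRASH_RATE = 1 / 50_000   # assumed fraction of users hitting the bug per day
 NOISE_FLOOR = 3           # assumed reports/day needed to stand out in crash-stats
 
 for adus in (30_000, 100_000, 200_000, 300_000):
     expected = adus * CRASH_RATE
     status = "visible" if expected >= NOISE_FLOOR else "lost in the noise"
     print(f"{adus:>7} ADUs -> ~{expected:.1f} reports/day ({status})")

With these assumed numbers the expected report count only clears the threshold around 200k ADUs, which lines up with when the bug surfaced.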

Some argue that it's really not the size of the testing population, but the composition of the population that matters most. I'd argue that, yes, both are important. We have proven that we can control the size of the testing population with the right press and social messaging. We haven't yet proven that we can control the composition of the testing population to get the right feedback in the right testing time frame. Until we understand and can use the ideas about controlling the composition of the beta test pool better, we should drive the size of the testing pool as the main control for ensuring the right levels of feedback.

You can get the latest ADUs by release out of

https://metrics.mozilla.com/pentaho/content/pentaho-cdf/RenderHTML?solution=metrics&path=blocklist/blocklist_0&dashboard=template.html&template=new-metrics&title=Firefox%20Usage (currently behind LDAP).

it would be good to graph some of these to show the various pools of users that we have and the rate at which we seem to be able to grow those pools; a quick plotting sketch follows the ship-day numbers below.

3.7a5 has 17,781 ADUs and 3.7a6pre has 26,915; total trunk ADUs are an amazing 58,676, but we scatter those over many different experimental branches and test builds. Alphas tend to concentrate users on a single build a bit more, but it's still hard to get above 25-30k ADUs on any set of trunk builds.

typically when we just do the right messaging and start encouraging users to try out betas we can move quickly to 100k, 200k, and 300k.

April 20 -> 3.6.4 had 270,527
May 4 -> 3.6.4 had 400,805

Beyond that we need additional rounds of messaging to grow the beta population, and we need to keep the quality levels high so we get sustained daily use rather than just cycling new users through the betas.

May 19 -> around 500k 3.6.4 RC users
May 24 -> around 600k
June 1 -> around 700k
Day before 3.6.4 shipped -> 956,689

5,316,769 on 3.6.4 ship day
30,709,058 within 24 hours
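
As a starting point for the graphs suggested above, here is a minimal matplotlib sketch that plots the 3.6.4 beta/RC numbers quoted in this message (values marked "around" are approximate, and the exact ship date is assumed here since it isn't stated above):

 # Sketch: plot the 3.6.4 beta/RC ADU growth numbers quoted above.
 from datetime import date
 import matplotlib.pyplot as plt
 
 points = [
     (date(2010, 4, 20), 270_527),
     (date(2010, 5, 4), 400_805),
     (date(2010, 5, 19), 500_000),   # "around 500k" RC users
     (date(2010, 5, 24), 600_000),   # "around 600k"
     (date(2010, 6, 1), 700_000),    # "around 700k"
     (date(2010, 6, 21), 956_689),   # day before ship; date assumed
 ]
 
 dates, adus = zip(*points)
 plt.plot(dates, adus, marker="o")
 plt.ylabel("ADUs")
 plt.title("Firefox 3.6.4 beta/RC daily users")
 plt.gcf().autofmt_xdate()
 plt.show()

A second series for the trunk/alpha ADUs quoted earlier would show the different pools side by side.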

If we can get millions of users on *sustained daily use*, it's one of the best measures we have that the quality level is good enough to ship. With our current set of metrics we have a hard time differentiating between "sustained users" and "casual/cycling users"; we need better tools to track that, especially for betas.
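
One shape such a tool could take, sketched here with a hypothetical input format and a hypothetical 5-of-the-last-7-days threshold (none of this is an existing metrics-system API):

 # Hypothetical sketch: split "sustained" from "casual/cycling" users.
 # active_days maps a user id to the set of dates that user pinged on;
 # both the data shape and the 5-of-7-days threshold are assumptions.
 from datetime import date, timedelta
 
 def classify_users(active_days, window_end, window_len=7, min_active=5):
     """Return (sustained, casual) user-id lists for the window ending at window_end."""
     window = {window_end - timedelta(days=i) for i in range(window_len)}
     sustained, casual = [], []
     for user, days in active_days.items():
         (sustained if len(days & window) >= min_active else casual).append(user)
     return sustained, casual
 
 # Made-up ping data: u1 runs the beta daily, u2 tried it once and left.
 pings = {
     "u1": {date(2010, 6, 1) - timedelta(days=i) for i in range(7)},
     "u2": {date(2010, 5, 28)},
 }
 sustained, casual = classify_users(pings, window_end=date(2010, 6, 1))
 print(len(sustained), "sustained,", len(casual), "casual/cycling")

The real cut line would need tuning against actual ping data; the point is just that an activity count over a recent window can separate the two groups.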

the metrics system also has a way to dig out a breakdown of the localization beta test populations. That's another good thing to show, and we need to make those numbers easier to produce and make visible. That's just the start of what we need in order to control and measure the composition of our beta test populations. Laura and Mayumi have the list of ideas on trying to control and measure the composition of the beta test pool; cc'ing them so we can get that part of the message factored into this presentation too.

this should be the start of some pretty good slides. This is good information to get out into as many venues as we can.

Additional Information:

https://bugzilla.mozilla.org/show_bug.cgi?id=574725#c3 might also turn out to be a good case study in the need for a bigger beta audience and reducing surprises after shipping.

There is also some trade-off between the size of the test pool and the amount of time given for testing and feedback. More testing and feedback time means we can catch the same bugs as we might with more testers, but that doesn't help us go faster.

There are some minimum thresholds on the number of testers and the amount of time needed for testing and feedback. We need to articulate these better and use them in release planning.
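
One way to start articulating those thresholds is a simple detection model: if a bug hits a fraction p of users per day, the chance of getting at least one report from N testers over T days is roughly 1 - (1 - p)^(N*T), so pool size and testing time trade off through total user-days of exposure. A sketch with an assumed hit rate:

 # Illustrative model only: P is an assumed per-user, per-day hit rate.
 P = 1 / 1_000_000
 
 def detection_probability(testers, days, p=P):
     # Chance that at least one tester hits (and reports) the bug.
     return 1 - (1 - p) ** (testers * days)
 
 # Equal user-days of exposure give roughly equal detection odds, which is
 # the pool-size vs. testing-time trade-off described above.
 for testers, days in [(30_000, 7), (100_000, 7), (300_000, 7), (100_000, 21)]:
     prob = detection_probability(testers, days)
     print(f"{testers:>7} testers x {days:>2} days -> {prob:.0%} chance of a report")

With this assumed rate, 300k testers for a week and 100k testers for three weeks give the same odds; more time can substitute for more testers, but only by going slower, as noted above.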

-chofmann



----- Original Message -----

From: "Marcia Knous" <marcia@mozilla.org> To: "Chris Hofmann" <chofmann@mozilla.com> Sent: Friday, June 25, 2010 11:47:50 AM Subject: Data needed for Presentation at Whistler

chofmann: Matt wants to give a presentation regarding "QA on a Global Scale"

Part of what he wanted to talk about in the presentation is how we need to get crash data and other feedback from our testers in order to be successful. Part of that is reaching a certain threshold. Can you give us some data points we can use for our slides about what kinds of numbers of users we need in order to get useful data? You mentioned previously we don't get enough data from alpha releases.

Also, who would have relevant data about the following:

  • How many average daily users do we have?
  • How many nightly testers do we currently have?
  • Can we tell how many people are running the localized versions of Firefox?