Foundation/Metrics/Optimizely Process

This is our process for running front-end tests on Mozilla Foundation websites and tools. The process is new, so please give feedback on it so we can improve both these guides and the process itself.

Golden rules for A/B testing

  • Testing is continuous; it is never finished
  • Be brave: you can always be testing something else, so focus on potential impact
  • Traffic is opportunity. At any given time, we should have a test running on each of our sites and tools
  • There are no fails. Every test is a learning opportunity (especially if it makes something worse)
  • It's a team sport. Testing is only as good as the number of people who see the results

How to get something tested

  1. If you have quick ideas, dump them here: Webmaker test ideas Etherpad
  2. If you have something specific you definitely want actioned, scroll down and follow the guide on how to file the bug and fill out the report template

What tests are we running?

And what did we learn from previous tests?

How do we choose?

Choosing which test to run next is a judgement call combining the following:

  • How long a test will take to run
  • How difficult a test is to set up
  • The potential impact of making the change

A formula to calculate this would be artificial as the impact of a test is hard to predict. Just keep these three things in mind and continually look to have the biggest impact. And when it's hard to decide, remember that testing anything is better than testing nothing.

Our process to run a test

  1. Open a Bugzilla ticket under the component the test is being run in, to assign the test and track its progress
  2. Add [optimizely] to the ticket's whiteboard
  3. Estimate the time needed to complete the test and record this in the ticket: Test duration calculator (a rough sample-size sketch follows this list)
  4. Make a copy of this Google doc template and fill out the content: Mofo A/B testing report template
  5. Add the URL of your test report into your ticket
  6. Add your test into the Webmaker testing hub
  7. Set up your test in Optimizely, or ask someone to do this for you
  8. Add the preview URL from Optimizely into the ticket for review:
    • Technical review: make sure we haven't broken other things on the page
    • Content review: with design and copy people as appropriate
  9. Put the test live in Optimizely
  10. Update the status to live on the Webmaker Testing Hub
  11. People affected by testing should watch the wiki page for updates
  12. Announce new test to:
    • webmaker on IRC
    • webmaker@lists.mozilla.org
  13. Let the test run until results are statistically significant (Optimizely will do the calculations for you)
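
For a rough sense of what the duration calculator in step 3 (and the significance wait in step 13) is doing, here is a minimal sketch, assuming the standard two-proportion sample-size formula at 95% confidence and 80% power. The baseline rate, lift, and traffic figures are made-up illustrations, not Webmaker numbers.

  # Illustrative only: estimate visitors needed per variant.
  # Baseline conversion and expected lift are hypothetical.
  import math

  def sample_size_per_variant(baseline, relative_lift,
                              z_alpha=1.96, z_beta=0.84):
      """Visitors per variant for a two-sided test at
      alpha=0.05 with 80% power (hence the z constants)."""
      p1 = baseline
      p2 = baseline * (1 + relative_lift)
      pooled = (p1 + p2) / 2
      num = (z_alpha * math.sqrt(2 * pooled * (1 - pooled))
             + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
      return math.ceil(num / (p2 - p1) ** 2)

  # e.g. a 5% baseline rate and a hoped-for 10% relative lift:
  n = sample_size_per_variant(0.05, 0.10)   # ~31,000 visitors per variant
  days = math.ceil(2 * n / 4000)            # assuming ~4,000 visitors/day total
  print(n, "visitors per variant; roughly", days, "days")

If the estimate comes out at months rather than weeks, that is a signal to pick a bolder change or a busier page.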

Concluding a test

  1. Some tests don't produce significant results. If this happens, don't be afraid to close the test. There is always something else we can be testing that might have more impact. (A quick way to sanity-check significance is sketched after this list.)
  2. Add the results and Optimizely screenshots into your test write-up
  3. Move your test into the 'Completed Tests' section in the Webmaker testing hub
  4. Announce the write-up to:
    • webmaker on IRC
    • webmaker@lists.mozilla.org
  5. Close the ticket
  6. Share this on the next weekly team call and get peer review on your conclusions
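
Optimizely reports significance for you (step 13 above), but it can be useful to sanity-check the headline numbers before writing up. A minimal sketch, assuming a standard two-proportion z-test; the conversion counts below are hypothetical.

  # Illustrative only: two-proportion z-test on made-up counts.
  import math

  def z_statistic(conversions_a, visitors_a, conversions_b, visitors_b):
      """z statistic for the difference in conversion rates."""
      p_a = conversions_a / visitors_a
      p_b = conversions_b / visitors_b
      pooled = (conversions_a + conversions_b) / (visitors_a + visitors_b)
      se = math.sqrt(pooled * (1 - pooled)
                     * (1 / visitors_a + 1 / visitors_b))
      return (p_b - p_a) / se

  # e.g. 500/10,000 conversions on control vs 565/10,000 on the variant:
  z = z_statistic(500, 10000, 565, 10000)
  print("z =", round(z, 2))   # |z| > 1.96 means significant at the 95% level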

Weekly updates

In the weekly cross-team calls, include a short segment on testing:

  1. Announce any tests closed that week and link to the write-up for peer review
  2. Remind people to log new test ideas, and discuss whether any planned tests need to be prioritized