Identity/Firefox Accounts/Testing Notes

Notes

(jrgm asks - “where do the notes below come from?”)

stuart: discussions with karlof and some devs; they are messy and incoherent ;) I'll clean this up shortly

(jrgm asks - “does the FxA grouping include OAuth and the Profile server?”)

stuart: it does, yes; would it make more sense to break it up?

Sandbox env called 'stable'

  • devs are maintaining this
  • close to production

Ops is converting deployment to Jenkins by the end of Q1

  • want the stable env done the same way; it will be done by Ops as well

from dev -> staging, how do we orchestrate automated version selection for the various project versions and deps?

front-end code has functional tests
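For context, a front-end functional test typically drives a real browser through a user-visible flow. Here is a minimal sketch using Selenium; the base URL and element names are placeholders, not the actual FxA content-server suite or selectors:

```python
# Minimal front-end functional-test sketch: check that the sign-in form renders.
# Assumptions: BASE_URL and the element names below are illustrative only,
# not the real FxA content-server selectors.
from selenium import webdriver
from selenium.webdriver.common.by import By

BASE_URL = "https://stable.example.org"  # hypothetical stable environment


def test_signin_form_renders():
    driver = webdriver.Firefox()
    try:
        driver.get(BASE_URL + "/signin")
        # The sign-in page should expose email and password fields.
        email = driver.find_element(By.NAME, "email")
        password = driver.find_element(By.NAME, "password")
        assert email.is_displayed() and password.is_displayed()
    finally:
        driver.quit()
```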

does the load test look good? does the API look good?

perhaps release one project at a time, in order

the content server depends on the auth server

almost like they are separate projects, released independently

TeamCity currently runs tests on the site for 'latest'

  • ask jrgm or vlad

Able, an A/B testing service

  • ship a feature, turn it on and off
  • expand tests if needed
  • john is getting this onto staging

future: 1% deployments with monitoring and testing; currently it's a hard cutover
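As a generic illustration of how a feature toggle with a percentage rollout can avoid a hard cutover (this is not Able's actual API; the flag name and threshold are made up), a deterministic hash of the user id can bucket a configurable fraction of users into the new code path:

```python
# Generic sketch of a percentage rollout via deterministic bucketing.
# Not Able's actual API; the feature name and percentage are illustrative.
import hashlib


def in_rollout(user_id: str, feature: str, percent: float) -> bool:
    """Return True if this user falls inside the rollout percentage.

    Hashing (feature, user_id) keeps the decision stable across requests,
    so a given user does not flip between old and new behaviour.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # roughly uniform in [0, 1]
    return bucket < (percent / 100.0)


# Example: a 1% deployment of a hypothetical "new-signin-flow" feature.
enabled = in_rollout("user-1234", "new-signin-flow", 1.0)
```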

after a release hits stage, john runs load testing

might make sense to get from dev -> prod quickly to get things like load or performance tested
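To make the load question concrete ("how many rps?"), here is a tiny sketch that fires concurrent GETs at one endpoint and reports requests per second; the URL and numbers are placeholders, and a real load test would use a dedicated tool and an agreed target:

```python
# Tiny load-test sketch: measure requests per second against one endpoint.
# TARGET_URL, TOTAL_REQUESTS, and CONCURRENCY are placeholders.
import time
from concurrent.futures import ThreadPoolExecutor

import requests

TARGET_URL = "https://stable.example.org/"  # hypothetical endpoint
TOTAL_REQUESTS = 200
CONCURRENCY = 20


def hit(_):
    return requests.get(TARGET_URL, timeout=10).status_code


start = time.monotonic()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    statuses = list(pool.map(hit, range(TOTAL_REQUESTS)))
elapsed = time.monotonic() - start

ok = sum(1 for status in statuses if status == 200)
print(f"{ok}/{TOTAL_REQUESTS} OK, {TOTAL_REQUESTS / elapsed:.1f} rps")
```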

can always talk to the fxa-dev list to get more ideas from devs, or chat directly with ryan kelly, vlad, or shane tomlinson

Next Steps

Make stable/latest test coverage as close as possible to dev/stage (or vice versa)

Ensure we have good test coverage for:

  • api (functional tests for requests/queries; see the sketch after this list)
  • basic e2e (login/logout, create account, etc.)
  • load (how many rps?)
  • performance (does response time vary by account? can a really junky/large account be slower?)
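For the API bullet above, an API functional test could look like the sketch below; the base URL, endpoint, and payload are assumptions for illustration, so the real request shapes should come from the FxA API documentation:

```python
# Sketch of an API functional test. BASE_URL, the endpoint, and the payload
# are assumptions for illustration; consult the FxA API docs for real ones.
import requests

BASE_URL = "https://api.stable.example.org"  # hypothetical auth-server URL


def test_account_status_for_unknown_email():
    resp = requests.post(
        BASE_URL + "/v1/account/status",
        json={"email": "nobody@example.com"},
        timeout=10,
    )
    assert resp.status_code == 200
    assert resp.json().get("exists") is False
```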

How can we orchestrate automated version selecting for various projects, taking into account potential version and dependency conflicts?

Are there any manual steps that QA does that could be automated (basic validations like config checking, tagging, or trains)?

Thinking forward to when we have everything deployed nicely into production with percentage rollouts: what kinds of tests could we run in production to help verify how things are running before increasing the percentage? Monitoring tests are one option; a rough sketch follows.
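One possible shape for such a monitoring test, sketched with placeholder endpoints and thresholds: poll a health endpoint for a while and only recommend increasing the rollout percentage if availability and latency stay within bounds.

```python
# Sketch of a pre-rollout monitoring check. The endpoint, sample count,
# and thresholds are placeholders, not agreed-upon production values.
import time

import requests

HEALTH_URL = "https://accounts.example.org/health"  # hypothetical endpoint
SAMPLES = 30
MIN_SUCCESS_RATE = 0.97
MAX_P95_SECONDS = 0.5


def rollout_looks_healthy() -> bool:
    latencies, successes = [], 0
    for _ in range(SAMPLES):
        start = time.monotonic()
        try:
            resp = requests.get(HEALTH_URL, timeout=5)
            successes += resp.status_code == 200
        except requests.RequestException:
            pass
        latencies.append(time.monotonic() - start)
        time.sleep(1)
    latencies.sort()
    p95 = latencies[int(0.95 * (len(latencies) - 1))]  # rough 95th percentile
    return (successes / SAMPLES) >= MIN_SUCCESS_RATE and p95 <= MAX_P95_SECONDS


if __name__ == "__main__":
    print("looks safe to increase rollout %" if rollout_looks_healthy() else "hold")
```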