QA/Browser Technologies/JulyWorkWeek2011

From MozillaWiki


Date: July 18th-22, 2011
Attendees: TBD
What: Browser Technologies QA Workweek
Main Scrumpad:

Meeting Space


 171 Coronado Ave, HMB 
 Leslie: (650) 703-8993

Onsite: Zombocom


The workweek will consist of project planning, discussions, lightning talks, and work sessions. Use the offsite for peer discussions, but also use the onsite to catch up with devs, PMs, and other colleagues.

Schedule and Meeting Topics
  • Monday: Planning; offsite topics
  • Wednesday morning: Offsite topics; session topics
  • Other sessions: none scheduled yet




  • 9am: Continued Discussions
  • 10am: Continued Discussions
  • 11am: Pack up, clean, head out
  • 12pm: Lunch someplace, head back to office


  • 11am: Sync Bug Triage
  • 12pm: Farewell Lunch for Aakash. Pho Garden on Castro
  • 2pm: BT Project demo (video recorded on Air Mozilla, 10 Fwd)
  • 3pm: Sync Server Unit, Load, Automated Testing: Dev/Ops/QA Discussion
  • 3pm: mobile release test planning (aaron, kevin)
  • TBD: TPS vs. Funkload for Sync Server API automation/smokes: James, Owen, Jonathon, other interested parties


  • TBD: TPS vs. Funkload for Sync Server API automation/smokes: James, Owen, Jonathon, other interested parties
  • TBD: other stuff
  • Tracy returns to KC (7:10 am flight)
  • Aaron returns to TO (noon flight)

Action Items

Collaborative Notes from the Workweek (7/25)

  1. Vision & Goals
    • Summary:
      • Responsible for emerging technologies and environments like Sync, Mobile, Experimental Lab projects (Identity, Share, WebApps), and server environments
      • More resources to help with building out automation, supporting new projects, and making better use of time on tasks like project investigation, exploratory testing, interacting with other teams, and defining processes
        • Less focus on routine tasks like regression testing; beef up automation while leveraging outsourced tools
    • Takeaways:
      • (Tony) Defining process: what is the future of Sync and Mobile with respect to Browser Tech?
      • (Tony) Come up with a BT service agreement for emerging projects that can be largely reused when handing off to teams
      • (Tony) Communicate our goals to other teams (marketing, support, devs, l10n)
    • Sandboxed for Future:
      • What are our responsibilities, current vs future? As a team?
      • What's our hiring and resourcing plan?
  2. Community
    • Summary:
      • Community engagement is a challenge for Mobile and Services. Both have a very small user base now.
      • Provide a clear list of tasks and a regular schedule:
        • QMO cleanup (owen and rbillings are asking around)
        • Testday posts (send these earlier! They also need a direct list of tasks for those who want one, but exploratory testing for others)
          • Developers should assist; ideally they would attend events
      • Have a playground environment for experimenting with client/server testing and automation
        • Mobile can use Device Anywhere; create scripts for easier and faster setup
        • Services would use QA infrastructure
    • Takeaways:
      • (aaron, tracy) Use Mozilla Reps as an additional means of communication
        • Other focus channels (e.g. Reddit, Selenium, Android)
      • Provide a test environment that's easily accessible from outside
        • (kevin, aaron) Device Anywhere for mobile
        • (tracy, james) Sync server with outside access for Services
      • (All) Clearer Testday posts, earlier, and list of tangible tasks to execute
        • (All) Include Devs in testday channels, and ask them to be strategic on what to do
      • (james, owen, tracy) Set up VM environments and documentation for Services. Video walkthrough for Sync help
    • Sandboxed for Future:
      • Have a forum, survey, summary of bugs, feedback, and list of tasks outside of IRC; not limited to one day.
      • Introduce folks to emerging technologies and projects, so early adopters can play with them (e.g. Labs)
      • Hold physical meetups. Need to expand on the ideas listed and drive a purpose
      • How do we handle feedback from outside and incorporate it into testing in the most effective way?
  3. Waverley
    • Summary:
      • Waverley's feedback revolved around:
        • How can they help more?
        • Litmus tests - creating, updating, maintaining?
        • More regular interaction with us?
        • What documents are the source of truth?
        • Mozilla to provide more guidance, attention, and answers when needed
          • Having special office hours, more regular interaction outside of Tuesday mornings, and keeping them informed on projects and bugs
        • Encourage Waverley to interact with developers directly in bugzilla, irc, and any other avenues more often. Doing a great job now, but more of it is good.
        • Include Waverley on our Monday 9am triage calls?
          • Need to provide better feedback on Litmus test case writing, and on writing good feature test plans
    • Takeaways:
      • (Tony) Flag in-litmus on bugs and ask them/us to create tests
      • (aaron, Kevin) Capture our knowledge of bug and feature status in our test plans
      • (aaron, Kevin) Follow up with feature sign off / status. Look through their wiki page, also get in touch directly. They have a PM now, so we can work through that channel.
      • (aaron, Kevin) Need to provide better feedback on Litmus test case writing, and on writing good feature test plans
        • Create a template for a test plan
      • (Tony, Tracy) Introduce Services projects (sync triaging, client-side testing for now) to Waverley.
      • (All) regression-wanted; build config
    • Sandboxed for Future:
      • Future projects and expectations for Waverley
  4. Services Goals
    • Summary:
      • client-side automation, explore mozmill and TPS
      • server-side automation, explore funkload and TPS
      • Continuous integration is needed
      • Full QA against Load Cluster (combined Staging)
      • Pull/package a Firefox build with it. Build out a VM that anyone can use
    • Takeaways:
      • (tracy, owen) What sets of tests do we automate?
      • (tracy) develop mozmill automation for client sync smoketests
      • (Owen) Funkload tests with load. Add reporting
      • (James) Get production reports so we can analyze baselines and customized load tests
      • (James) Get two servers up and running so we can snapshot them as a reference system
      • (tracy, owen, James) Have proper documentation for all the above
    • Sandboxed for Future:
      • Sync Server distribution and support + Reference Setup
      • Creating VMs with reference setup openly
        • Building up all components for distribution (nginx, gunicorn, sync server, etc.)
        • Make available for community to contribute
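The load-testing goals above (Funkload tests with reporting, production reports to analyze baselines) can be sketched in outline. This is an illustration only, not the team's FunkLoad suite: it uses a stdlib-only concurrent smoke check as a stand-in, and the function name, worker counts, and target URL are assumptions made for this example.

```python
# Illustrative sketch only: a stdlib stand-in for the kind of smoke/load
# check discussed above. The real tooling is FunkLoad; smoke_load() and its
# parameters are assumptions for this example, not team code.
import threading
import time
import urllib.request

def smoke_load(url, workers=5, requests_per_worker=4):
    """Hit `url` concurrently; return a list of (ok, latency_seconds)."""
    results = []
    lock = threading.Lock()

    def worker():
        for _ in range(requests_per_worker):
            start = time.monotonic()
            with urllib.request.urlopen(url) as resp:
                resp.read()
                ok = (resp.status == 200)
            elapsed = time.monotonic() - start
            with lock:
                results.append((ok, elapsed))

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

Baseline analysis (per the "analyze baselines" takeaway) would then reduce the latency list to a summary statistic such as the median and compare it against a stored reference number from a known-good run.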
  5. Mobile Automation
    • Summary:
      • Android is a top-tier platform focus, so we should be adding automation to the mix
        • No client mobile framework is on our plate now.
        • QA can work with the A-team and releng on fixing broken mochitests
        • Crowdsource addon: get it out there
      • We know the dev team lacks automation now, and is aware of it
      • Discussed pros and cons of
    • Takeaways:
      • (martijn) continue to work with joel on getting the failed mochitests on tinderbox down (Q3 Goal)
      • (martijn, aaron) continue to get the crowdsource addon in the hands of testers quickly (Q3 Goal) (see wiki page)
      • (tchung, aaron) work with QA Automation team to create a wish-list of what a client tool should look like
        • In addition, talk to Clint's team to see what they have in mind
    • Sandboxed for Future:
      • A better understanding of the state of dev automation today, and its roadmap
        • What other tools can be used? (e.g. more test-harness-like; what about performance testing?)
      • Investigate usage of 3rd-party automation tools in DA, and going down the private-cloud route
      • Continue an in-house client automation solution
      • L10n coverage has nothing. What about memory usage, telemetry, performance?
      • Who can write more browser-chrome tests?
        • Ask developers for their input as well: mochitests are being written by them
  6. Sync Discussion
    • Summary:
      • Weekly train work, defined with a Monday client-side handoff and a Wednesday server-side handoff. Important to define and document what is being tested each week, in detail and in public.
      • Questions around the right automation tools for client. Mozmill or TPS? Need to weigh the two
      • Server side automation will continue with funkload. QA sign-off needs to happen on Funkload results/reports since we are not controlling the activity
      • Test/Automation of Server-Side
        • 1. Weekly Train - Stage
        • 2. Maintenance Releases
        • 3. Load Testing
        • 4. Continuous Integration
      • Staging VMs are being repurposed to make a CI environment for Dev and Ops. All QA work will be moved over to the physical LT cluster
    • Takeaways:
      • (james) build out Sync environment for QA, and host it with viewable IP
      • (tracy, owen) to talk to Henrik this week (hopefully) about the usefulness of Mozmill
      • (Owen) already in touch with Jonathon about what is/is not working with TPS
      • (Tracy) to look at updating his Sync Server test plan and the test cases (new and current) in Litmus. Helpful for community use on the Aurora/Beta branches
    • Sandboxed for Future:
      • We should talk to Jesse Ruderman about Firefox fuzz testing and using its ideas/techniques for Sync automation
      • What's left of Eggplant?
  7. Mobile Coffee Talk
    • Summary:
      • What's the best way to know when beta builds are coming? (Usually HG to get the changes; usually a handful of small fixes. See the Beta Changes link from Kevin.)
      • Haven't spent much time reviewing Waverley's results from beta and release testing. Need to spend more resources reviewing their work
      • Device Anywhere is pretty good with turnaround on their bugs. But we need to figure out ways to set up quicker, get a cleaner state, and deal with power issues
      • Crasher triage has challenges. libc bugs don't tell us much, and symbols are missing on Honeycomb
      • Device compatibility: how do we increase exposure and participation? We need better market feedback, QA being more stern about issues found, more aggressiveness in Bugzilla on devices, and better device tracking.
      • Feature sign-off should come earlier: not at Beta, but at the midpoint of Aurora. What do we do about late string freezes? Also, features getting backed out are tracked poorly; we need better tracking criteria.
      • Aurora in the marketplace
    • Takeaways:
      • (aaron, kevin) Write up device anywhere wiki page (how to use, etc.)
      • (naoki) Follow up with Thomas or Android and see how to get more symbols for Honeycomb
      • (All) Troubleshooting documentation for Waverley / Device Anywhere
      • (All) Help Waverley do some troubleshooting before submitting a ticket
      • (kevin, aaron) Weekly dig: add feedback analysis for Aurora
      • (martijn) deploy crowdsource addon!
    • Sandboxed for Future:
      • Device Anywhere automation
      • Is there a better release process we can push forward?
      • Integration with Desktop scheduling
  8. Beta Environment
    • Summary:
      • Defined: quick, experimental projects in an isolated dev environment (its own channel, repository, etc.)
      • QA's sign-off criteria: solid testing of the extension, and loose support of product sign-off (e.g. Identity, Account Portal)
        • QA level of support: monitor feedback, smoketest levels of the product
    • Takeaways:
      • (tony, james) How are we getting QA knowledge of incoming projects? What is our sign-off process?
    • Sandboxed for Future:
      • If Beta gets big, how do we integrate other team projects into this environment? (eg. web, mobile, desktop)
        • And how would QA signoff work then? Services team owns the addon and the infra
        • Should we also have our own QA beta environment?