QA/Browser Technologies/JulyWorkWeek2011

Overview

Date: July 18th-22nd, 2011
Attendees: TBD
What: Browser Technologies QA Workweek
Main Scrumpad: http://mozqa.sync.in/bt-work-week-july-2011

Meeting Space

Offsite:

 171 Coronado Ave, HMB 
 Leslie: (650) 703-8993


Onsite: Zombocom

Agenda

The Workweek will consist of Project planning, Discussions, Lightning Talks, and Work sessions. Use the offsite for peer discussions, but also use the onsite to catch up with devs, PMs, and other colleagues.

Schedule            Meeting         Topic
Monday              Planning        topics
Tuesday             Offsite         topics
Wednesday, morning  Offsite         topics
Thursday            Sessions        topics
Friday              None Scheduled  topics

Monday

Tuesday

Wednesday

  • 9am: Continued Discussions
  • 10am: Continued Discussions
  • 11am: Pack up, clean, head out
  • 12pm: Lunch someplace, head back to office

Thursday

  • 11am: Sync Bug Triage
  • 12pm: Farewell Lunch for Aakash. Pho Garden on Castro
  • 2pm: BT Project demo (video recorded on Air Mozilla, 10 Fwd)
  • 3pm: Sync Server Unit, Load, Automated Testing: Dev/Ops/QA Discussion
  • 3pm: mobile release test planning (aaron, kevin)
  • TBD: TPS vs. Funkload for Sync Server API automation/smokes: James, Owen, Jonathon, other interested parties

Friday

  • TBD: TPS vs. Funkload for Sync Server API automation/smokes: James, Owen, Jonathon, other interested parties
  • TBD: other stuff
  • Tracy returns to KC (7:10 am flight)
  • Aaron returns to TO (noon flight)

Action Items

Collaborative Notes from Workweek (7/25)

  1. Vision & Goals
    • Summary:
      • Responsible for emerging technologies and environments like Sync, Mobile, Experimental Lab projects (Identity, Share, WebApps), and server environments
      • More resources to help with building out automation, supporting new projects, and making better use of time on tasks like project investigation, exploratory testing, interacting with other teams, and defining processes
        • Less focus on routine tasks like regression testing; beef up automation while leveraging outsourced tools
    • Takeaways:
      • (Tony) Defining Process: what is the future of Sync and Mobile with respect to Browser Tech?
      • (Tony) Come up with a BT service agreement for emerging projects that can be largely carbon-copied when handing off to teams
      • (Tony) Define our goals to other teams (marketing, support, devs, l10n)
    • Sandboxed for Future:
      • What are our responsibilities, current vs future? As a team?
      • What's our hiring and resourcing plan?
  2. Community
    • Summary:
      • Community engagement is a challenge for Mobile and Services. Both have a very small userbase right now.
      • provide a clear list of tasks and a regular schedule:
        • QMO cleanup (owen and rbillings are asking around)
        • Testday posts (send these earlier! Also needs a direct list of tasks for those that want it, but exploratory work for others)
          • there should be assistance from dev; ideally they would attend events
      • Have a playground environment for experimenting with client/server testing and automation
        • Mobile can be device anywhere, create scripts for easier and faster setup
        • Services would be a QA infrastructure
    • Takeaways:
      • (aaron, tracy) use Moz Reps; additional means of communications
        • other focus channels (ie Reddit, selenium, Android)
      • provide a test environment that's easily accessible from outside
        • (kevin, aaron) Device Anywhere for mobile
        • (tracy, james) Sync server with access for Services
      • (All) Clearer Testday posts, earlier, and list of tangible tasks to execute
        • (All) Include Devs in testday channels, and ask them to be strategic on what to do
      • (james, owen, tracy) Setup VM environments and documentation for services. Video Sync help
    • Sandboxed for Future:
      • have a forum, survey, summary of bugs, feedback, and list of tasks outside of IRC; not limited to one day.
      • introduce folks to emerging technologies + projects, so early adopters can play with it (eg labs)
      • Having physical meetups. Need to expand on the ideas listed, and drive a purpose
      • How to handle feedback from outside, and incorporate it in the most effective way into testing
  3. Waverley
    • Summary:
      • Waverley's feedback revolved around:
        • How can they help more?
        • Litmus tests - creating, updating, maintaining?
        • More regular interaction with us?
        • Which documents are the source of truth?
        • Mozilla to provide more guidance, attention, and answers when needed
          • Having special office hours, more regular interaction outside of Tuesday mornings, and keeping them informed on projects and bugs
        • Encourage Waverley to interact with developers directly in Bugzilla, IRC, and any other avenues more often. Doing a great job now, but more of it is good.
        • Include Waverley on our Monday 9am triage calls?
          • Need to provide better feedback on Litmus test case writing. Also, writing good feature testplans
    • Takeaways:
      • (Tony) flagging in-litmus+ bugs and asking them/us to create tests
      • (aaron, Kevin) provide our knowledge of bug and feature status in our test plans
      • (aaron, Kevin) Follow up with feature sign off / status. Look through their wiki page, also get in touch directly. They have a PM now, so we can work through that channel.
      • (aaron, Kevin) Need to provide better feedback on Litmus test case writing. Also, writing good feature testplans
        • Create a template for a testplan
      • (Tony, Tracy) Introduce Services projects (sync triaging, client-side testing for now) to Waverley.
      • (All) regression-wanted; build config
    • Sandboxed for Future:
      • Future projects and expectations for Waverley
  4. Services Goals
    • Summary:
      • client-side automation, explore mozmill and TPS
      • server-side automation, explore funkload and TPS
      • Continuous integration is needed
      • Full QA against Load Cluster (combined Staging)
      • Pull/package FF build with it. Build out a VM that can be used by anyone
    • Takeaways:
      • (tracy, owen) What sets of tests do we automate?
      • (tracy) develop mozmill automation for client sync smoketests
      • (Owen) Funkload tests under load; add reporting (see the sketch after this list)
      • (James) Get production reports so we can analyze baselines and customized load tests
      • (James) Get two servers up and running so we can snapshot them as a reference system
      • (tracy, owen, James) Have proper documentation for all the above
    • Sandboxed for Future:
      • Sync Server distribution and support + Reference Setup
      • Creating VMs with reference setup openly
      • Building up all components for distribution (nginx, gunicorn, sync server, etc...)
        • Make available for community to contribute
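
To make the server-side automation items above concrete, below is a minimal sketch of the kind of Funkload test case Owen's takeaway describes. It assumes funkload is installed, that a SyncSmoke.conf file next to the test supplies the server URL, and that the endpoint path shown is a hypothetical stand-in for whatever the deployed Sync API actually exposes:

 # smoke_sync.py -- minimal Funkload sketch; the endpoint path is hypothetical
 from funkload.FunkLoadTestCase import FunkLoadTestCase

 class SyncSmoke(FunkLoadTestCase):
     """Smoke-test one Sync endpoint; the same unit can be benched for load."""

     def setUp(self):
         # target server is read from the [main] section of SyncSmoke.conf
         self.server_url = self.conf_get('main', 'url')

     def test_info_collections(self):
         # hypothetical route; substitute the real Sync API path
         ret = self.get(self.server_url + '/1.1/testuser/info/collections',
                        description='fetch collection timestamps')
         self.assert_(ret.code == 200, 'unexpected status %s' % ret.code)

Running it once with fl-run-test covers the smoke case; fl-run-bench drives the same test under load, and fl-build-report produces the HTML reporting the takeaway asks for.
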
  5. Mobile Automation
    • Summary:
      • Android is a top tier platform focus - so we should be adding automation to the mix
        • No current client mobile framework on our plate now.
        • QA can work with the A-team and releng on fixing broken mochitests
        • crowdsource addon: get it out there
      • We know the Dev team lacks automation right now, and they are aware of it
      • Discussed pros and cons of
    • Takeaways:
      • (martijn) continue to work with joel on getting the failed mochitests on tinderbox down (Q3 Goal)
      • (martijn, aaron) continue to get the crowdsource addon into the hands of testers quickly (Q3 Goal) (see wiki page; a deployment sketch follows this list)
      • (tchung, aaron) work with QA Automation team to create a wish-list of what a client tool should look like
        • In addition, talk to Clint's team to see what they have in mind
    • Sandboxed for Future:
      • A better understanding of how dev automation is today, and their roadmap
        • What other tools can be used? (eg. something more test-harness-like; what about performance testing?)
      • Investigate usage of 3rd party automation tools in DA, and going down the private cloud route
      • continued client automation solution in house
      • L10n coverage has nothing. What about memory usage, telemetry, performance?
      • Who can write more browser-chrome tests?
        • Ask developers as well for their input: mochitests are being written by them
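
As a small aid to the "get the crowdsource addon out there" items above, here is a hedged sketch of sideloading a build onto attached Android devices with adb, driven from Python. The APK/XPI file names, and the assumption that testers receive builds via adb at all, are illustrative rather than the team's actual deployment plan:

 # deploy_addon.py -- hypothetical sketch: push a Fennec build plus the
 # crowdsource addon to every Android device attached over adb
 import subprocess

 APK = 'fennec-nightly.apk'        # hypothetical file names
 XPI = 'crowdsource-addon.xpi'

 def attached_devices():
     """Parse `adb devices` output into a list of device serials."""
     out = subprocess.check_output(['adb', 'devices']).decode()
     return [line.split()[0] for line in out.splitlines()[1:]
             if line.strip().endswith('device')]

 for serial in attached_devices():
     # -r reinstalls in place, keeping existing profile data
     subprocess.check_call(['adb', '-s', serial, 'install', '-r', APK])
     # park the addon on the sdcard; testers install it from there
     subprocess.check_call(['adb', '-s', serial, 'push', XPI, '/sdcard/' + XPI])
     print('deployed to %s' % serial)
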
  6. Sync Discussion
    • Summary:
      • Weekly train work, defined with Monday client-side handoff and Wednesday server-side handoff. Important to define and document what is being tested each week, in detail and in public.
      • Questions around the right automation tools for client. Mozmill or TPS? Need to weigh the two
      • Server side automation will continue with funkload. QA sign-off needs to happen on Funkload results/reports since we are not controlling the activity
      • Test/Automation of Server-Side
        • 1. Weekly Train - Stage
        • 2. Maintenance Releases
        • 3. Load Testing
        • 4. Continuous Integration
      • Staging VMs are being repurposed to make a CI env for Dev and Ops. All QA work will be moved over to the physical LT cluster
    • Takeaways:
      • (james) build out Sync environment for QA, and host it with a viewable IP (see the config sketch after this list)
      • (tracy, owen) to talk to Henrik this week (hopefully) about the usefulness of Mozmill
      • (Owen) already in touch with Jonathon about what is/is not working with TPS
      • (Tracy) to look at updating his Sync Server TP and the test cases (new, current) in Litmus. Helpful for community use on Aurora/Beta branches
    • Sandboxed for Future:
      • We should talk to Jesse Ruderman about Firefox Fuzz Testing and using ideas/techniques for Sync automation
      • What's left of Eggplant?
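
For James's environment build-out takeaway above, the sketch below shows a minimal gunicorn configuration, assuming (per the Services "Sandboxed for Future" list) that the Sync server is a Python WSGI app fronted by nginx. The WSGI module path, port, and log locations are placeholders:

 # gunicorn.conf.py -- hypothetical sketch for a QA-facing Sync server;
 # start with: gunicorn -c gunicorn.conf.py syncserver.wsgi:application
 # (that WSGI module path is a placeholder, not the real package name)
 import multiprocessing

 bind = '127.0.0.1:5000'    # nginx proxies the public, viewable IP to here
 workers = multiprocessing.cpu_count() * 2 + 1
 worker_class = 'sync'      # plain synchronous workers
 timeout = 30
 accesslog = '/var/log/syncserver/access.log'   # raw material for QA reports
 errorlog = '/var/log/syncserver/error.log'

Snapshotting a VM once nginx, gunicorn, and the server are wired up this way would double as the reference system mentioned in the Services takeaways.
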
  7. Mobile Coffee Talk
    • Summary:
      • What's the best way to know when beta builds are coming? (Usually HG to get the changes; usually a handful of small fixes. See URL link for Beta Changes, from Kevin.)
      • Haven't spent much time reviewing Waverley's results from beta and release testing. Need to spend more resources reviewing their work
      • Device Anywhere is pretty good with turnaround on their bugs. But we need to figure out ways to set up quicker, get a cleaner state, and deal with power issues
      • Crasher Triage has challenges. libc bugs don't tell us much, and symbols are missing on Honeycomb
      • Device compatibility: how to increase exposure and participation? Need better market feedback, QA being more stern about issues found, more aggressive in Bugzilla on devices, and tracking devices better.
      • Feature signoff earlier. Shouldn't wait until Beta, but the midpoint of Aurora. What to do about late string freezes? Also, bad tracking of features getting backed out; need better tracking criteria.
      • Aurora in the marketplace
    • Takeaways:
      • (aaron, kevin) Write up a Device Anywhere wiki page (how to use, etc.)
      • (naoki) follow up with Thomas or Android and see how to get more symbols for Honeycomb
      • (All) troubleshooting documentation for Waverley / Device Anywhere
      • (All) help Waverley do some troubleshooting before submitting a ticket
      • (kevin, aaron) Weekly dig: add feedback analysis for Aurora
      • (martijn) deploy crowdsource addon!
    • Sandboxed for Future:
      • Device Anywhere automation
      • Is there a better release process we can push forward?
      • Integration with Desktop scheduling
  8. Beta Environment
    • Summary:
      • Defined: quick, experimental projects in an isolated dev environment (its own channel, repository, etc)
      • QA's signoff criteria: solid extension testing, and loose support of product sign-off (eg. Identity, Account Portal)
        • QA level of support: monitor feedback, smoketest levels of product
    • Takeaways:
      • (tony, james) How are we getting QA knowledge of incoming projects? What is our signoff process?
    • Sandboxed for Future:
      • If Beta gets big, how do we integrate other team projects into this environment? (eg. web, mobile, desktop)
        • And how would QA signoff work then? Services team owns the addon and the infra
        • Should we also have our own QA beta environment?