QA/Platform/DOM/Feature Testplan Template
= Summary =
''This template is to be edited as necessary. Provide a brief description of the area/feature(s) covered by this document, including links to any relevant documentation or bugs. The goal is to give someone new to the area enough information to gain a basic understanding of the feature, with links to any specifications or engineering documents for further reading. If this is a new feature, note which release it is targeted at.''


= Status =
{|
|-
| style="text-align:right" | '''Target Milestone:'''
| ''Firefox version and target release date''
|-
| style="text-align:right" | '''Bugs:'''
| ''links to key bugs/dependencies (# of blockers)''
|-
| style="text-align:right" | '''Metrics:'''
| ''links to any metrics''
|-
| style="text-align:right" | '''Status:'''
| ''current status/branch''
|-
|}


= People Involved =
''List any people involved and their role on the project.''
* ''Name (role)''


= Testing Approach =
; ''High-level overview of the testing methodologies used for each type of testing done on this project (manual and automated).''
The purpose of this section is to provide guidance on how this area can be tested, which methodologies are most likely to be productive, and examples of things to watch for. For example, when testing WebRTC the manual approach would be to initiate calls between two clients and verify audio and video quality, while the automated approach would be to use predictable data sources for the audio and video streams, allowing data analysis to be performed on the call statistics (see the sketch below). Additionally, provide some guidance on what can and cannot be tested.
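As a rough illustration of the automated approach, the sketch below generates a known sine tone, pushes it through a stand-in for the call under test, and checks that the dominant frequency recovered from the received samples matches what was sent. It assumes numpy is available; <code>send_through_call()</code> is a hypothetical placeholder for the real WebRTC pipeline, not an existing API.

<pre>
# Sketch of "predictable source" analysis for an automated media test.
# Assumptions: numpy is installed; send_through_call() is a hypothetical
# stand-in for the real call under test (here it returns the input unchanged).
import numpy as np

SAMPLE_RATE = 48000   # Hz, a common rate for WebRTC audio
TONE_HZ = 440.0       # known, predictable test signal
DURATION_S = 1.0

def make_tone(freq_hz, duration_s, rate):
    """Generate a pure sine tone so the expected spectrum is known in advance."""
    t = np.arange(int(duration_s * rate)) / rate
    return np.sin(2 * np.pi * freq_hz * t)

def send_through_call(samples):
    """Hypothetical placeholder for the system under test (a real call)."""
    return samples

def dominant_frequency(samples, rate):
    """Return the strongest frequency component of the received samples."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / rate)
    return freqs[np.argmax(spectrum)]

sent = make_tone(TONE_HZ, DURATION_S, SAMPLE_RATE)
received = send_through_call(sent)
measured = dominant_frequency(received, SAMPLE_RATE)
assert abs(measured - TONE_HZ) < 1.0, "received audio does not contain the expected tone"
print("dominant frequency in received audio: %.1f Hz" % measured)
</pre>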
== Risk Profile ==
''Describe the risks that exist in the project area and how those risks are mitigated. For example, Graphics carries the risk that the available test bed is not broad enough to cover edge cases, which may result in unexpected behavior when the feature is released to a wider audience.''
* Where is the spec documented, and how can we check that the code adheres to the spec?
* What are some common errors and issues that ''manual'' testing should target?
* What are some common errors and issues that ''automated'' testing is targeting, and where can we find those tests?
* When filing bugs, how should they be reported? What component(s) should they go under? What information makes a bug particularly actionable? What keywords, whiteboard tags, flags, or other verbiage are expected when reporting bugs? What are the criteria for a bug to track a release? What are the criteria for a bug to block a release?
* What are the acceptance criteria for Nightly? Aurora? Beta? Release?
* How easily can the code be backed out or disabled?
* What pref(s) exist and how should they be used? How will they change the behavior of the browser? What are their defaults?
* What other code/features are directly or indirectly affected by this code? What about if the code has to be backed out or pref'd off?


== Related Prefs ==
; ''Define any preferences or about:config options that are related to the behavior of this area.''
Describe what each pref or option does, which values should be used, and how those values change the behavior of the browser. Be sure to include what the default value should be.
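One lightweight way to exercise a pref during testing is a throwaway profile whose <code>user.js</code> sets the value before startup. In the sketch below, <code>media.peerconnection.enabled</code> is only an example pref and the <code>firefox</code> binary is assumed to be on the PATH; substitute the prefs that actually control this area.

<pre>
# Sketch: launch Firefox with a throwaway profile whose user.js flips a pref.
# Assumptions: "firefox" is on the PATH; media.peerconnection.enabled is just
# an example pref name -- use the prefs that actually control this feature.
import subprocess
import tempfile
from pathlib import Path

prefs = {
    "media.peerconnection.enabled": "false",  # example: turn WebRTC off
}

profile = Path(tempfile.mkdtemp(prefix="testplan-profile-"))
user_js = "\n".join('user_pref("%s", %s);' % (name, value) for name, value in prefs.items())
(profile / "user.js").write_text(user_js + "\n")

# -no-remote keeps this instance separate from any already-running Firefox.
subprocess.run(["firefox", "-profile", str(profile), "-no-remote"])
</pre>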
== Related Features ==
; ''What other features are either directly related to, or can be affected by, changes made to this feature?''
For instance, changes to the JavaScript engine can affect Emscripten and asm.js, and WebRTC has dependencies on graphics (OpenH264) and networking.
   
== Test Cases ==
; ''Define the test cases required to test this feature/area, including which tests can and should be automated, which framework(s) are used, and how often they should be executed.''
* Provide links to the repository(ies) for automated tests.
* Smoke
** Describe the basic smoke tests required to prove minimum acceptance.
* Exploratory
** Describe some related areas and user stories that may be useful to explore.
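If the smoke tier is automated, it can be as small as a couple of checks that prove the build is worth testing further. The sketch below uses Python's unittest module and assumes the <code>firefox</code> binary is on the PATH; a real suite would replace these placeholder checks with the feature's own minimum-acceptance scenarios.

<pre>
# Minimal smoke-test sketch using Python's unittest.
# Assumption: the "firefox" binary is on the PATH; these checks are
# placeholders for the feature's real minimum-acceptance scenarios.
import shutil
import subprocess
import unittest

class SmokeTests(unittest.TestCase):
    def test_binary_is_available(self):
        """The browser under test can be found at all."""
        self.assertIsNotNone(shutil.which("firefox"), "firefox not found on PATH")

    def test_binary_reports_a_version(self):
        """The browser starts far enough to report its version string."""
        result = subprocess.run(["firefox", "--version"],
                                capture_output=True, text=True, timeout=30)
        self.assertEqual(result.returncode, 0)
        self.assertIn("Firefox", result.stdout)

if __name__ == "__main__":
    unittest.main()
</pre>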
   
 
== Bug Triage ==
; ''Methodology for bug triage''
''Document any bug triage meetings and/or processes, including priorities for:''
* unconfirmed bugs
* development bugs
* fixed bugs
* regressions
* tracking bugs
* blocking bugs
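The kinds of lists above can also be pulled programmatically from the Bugzilla REST API, which makes it easy to generate a snapshot for a triage meeting. In this sketch the product and component values are only examples, and the <code>requests</code> package is assumed to be installed.

<pre>
# Sketch: fetch unconfirmed bugs for triage via the Bugzilla REST API.
# Assumptions: the "requests" package is installed; the product/component
# values are examples -- use the component(s) this test plan actually covers.
import requests

params = {
    "product": "Core",
    "component": "DOM",          # example component
    "status": "UNCONFIRMED",
    "include_fields": "id,summary,creation_time",
    "limit": 50,
}
resp = requests.get("https://bugzilla.mozilla.org/rest/bug", params=params, timeout=30)
resp.raise_for_status()

for bug in resp.json().get("bugs", []):
    print("%s: %s" % (bug["id"], bug["summary"]))
</pre>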


An example triage process:
* Hold twice-weekly triage sessions for the bug states below:
** Mondays from 4pm-5pm Eastern Standard Time (time slot intended for the US West coast)
** Fridays from 9am-10am Eastern Standard Time (time slot for the US East coast and Europe)
** In #qa on irc.mozilla.org

; Queries (in order of priority)
* Verification of FIXED bugs:
** There is only time for fixes that are part of a new feature, or a major rework of an existing one. The main task is determining whether the level of automated testing is sufficient in these focus areas.
*** If it is determined that a fix cannot have sufficient automation coverage, flip the flags to in-testsuite- and qe-verify?. These must be verified manually, so ensure there are clear STRs. (Once verified, flip the flag to qe-verify+; note that this is not part of this triage process.)
*** If sufficient automation exists, flip the flag to in-testsuite+.
*** If automation is possible but insufficient, flip the flags to in-testsuite? and qe-verify?. Manual verification, as above, will be needed if an automated test won't be added in a timely manner.
* [http://mzl.la/1BFmneX unconfirmed all]: see if there are any bugs that need reproducing or need clearer STRs.
* [http://mzl.la/1wSHQgk unconfirmed general]: move bugs into the appropriate sub-component.
* Intermittent failures: developers feel this is the least useful task we can be doing, but if time and interest allow, they suggest:
*# Getting an idea of how reproducible the issue is. For example, can you reproduce the failure in 20 test runs locally? Can you reproduce it on the try server? What if you got a loan on a build slave and ran the tests there? If the failure happens on Linux and you have a machine that engineers can log into remotely, capturing an [http://rr-project.org/ rr] trace of the failure would be tremendously helpful.
*# When did it start to happen? Has it happened for as long as the test has existed, or did it start well after the test was originally written? Can you go back on TBPL and retrigger test runs there to try to narrow down when the failure started? (Being able to reproduce locally obviously helps with bisection as well.)
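For the reproducibility question above, a quick local estimate can be as simple as the loop below; the <code>mach</code> invocation and test path are placeholders for the actual failing test.

<pre>
# Rough reproducibility check for an intermittent failure.
# Assumption: the command below is a placeholder -- substitute the real
# harness invocation for the failing test.
import subprocess

TEST_CMD = ["./mach", "mochitest", "path/to/failing_test.html"]  # placeholder
RUNS = 20

failures = 0
for i in range(RUNS):
    result = subprocess.run(TEST_CMD, capture_output=True, text=True)
    if result.returncode != 0:
        failures += 1
        print("run %d: FAILED" % (i + 1))
    else:
        print("run %d: passed" % (i + 1))

print("%d/%d runs failed" % (failures, RUNS))
</pre>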
 
== Reporting and Status ==
; ''Describe how test results are reported, and provide links to any automated test reports or dashboards.''
* List milestones and current progress.
* Include bug queries for tracked bugs.
* Include sign-off status for each release tested.


= Getting Involved =
''Provide instructions on the various ways to help with the project.''
* Links to One and Done tasks
* Links to Moztrap tests
* Good First Verify bugs in Bugzilla
* Links to any tutorials and other QA introductory material
* Contact information, meeting schedules, and information on how to join
* Minimum requirements for becoming involved (hardware, software, skills):
** Describe the required test environment and provide instructions on how to create it.
** If special skills are required, provide links to any tutorials that may be available on the subject.
** If special hardware is required, provide steps on how to verify that a tester's system meets the minimum requirements.