== Feature Test Cases Review ==

=== Overview ===
This document summarizes the process for executing an effective review of test cases for B2G features.
=== Internal QA Workflow ===

Internally, you should aim to have at least one reviewer review your test cases, analyzing two main themes:

* Quality of the test cases
* Understandability of the test cases

Quality of the test cases examines whether the test cases for the feature sufficiently cover the happy path and negative cases for each user story. Understandability of the test cases examines whether the test cases can be understood by a person who did not create them, so that the same person can run each test case without ambiguity.
To set up a review for feature test cases, generate a query off of MozTrap that contains the relevant test cases for the feature and provide it to the reviewer; a sketch of one way to assemble such a request follows the list below. The reviewer can then send their review results in the following form:

* Quality of test cases - review+ for pass, review- for needs work
* Understandability of test cases - review+ for pass, review- for needs work
* Additional comments explaining the rationale behind review decisions
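The query here is effectively a filtered MozTrap case list, and the review request is that query plus the checklist above. The sketch below shows one way a QA engineer might bundle the two together; it is a minimal illustration, and the MozTrap base URL, the <code>filter-tag</code> parameter name, and the reviewer address are assumptions made for the example rather than confirmed MozTrap details.

<source lang="python">
# Minimal sketch: assemble an internal review request for a feature's test cases.
# The base URL and the "filter-tag" query parameter are placeholders/assumptions,
# not confirmed MozTrap details.
from urllib.parse import urlencode

MOZTRAP_CASES_URL = "https://moztrap.mozilla.org/manage/cases/"  # assumed base URL


def build_review_request(feature_tag, reviewer):
    """Build a review request pointing the reviewer at a filtered case list."""
    # URL intended to show only the feature's test cases (parameter name assumed).
    query_url = MOZTRAP_CASES_URL + "?" + urlencode({"filter-tag": feature_tag})

    # Checklist mirroring the internal review result form described above.
    checklist = [
        "Quality of test cases: review+ for pass, review- for needs work",
        "Understandability of test cases: review+ for pass, review- for needs work",
        "Additional comments explaining the rationale behind review decisions",
    ]

    lines = [
        f"Hi {reviewer},",
        "",
        f"Please review the test cases for the '{feature_tag}' feature:",
        query_url,
        "",
        "Please reply with:",
    ]
    lines.extend(f"  * {item}" for item in checklist)
    return "\n".join(lines)


print(build_review_request("nfc", "reviewer@example.com"))
</source>

The same request could just as easily be pasted into a bug or an email; the important part is that the reviewer receives both the query and the explicit review+ / review- checklist.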
The feedback you should primarily seek from these parties is a review of the high-level definition of your test cases, validating that they sufficiently cover known development code, UX, and requirements flows. The details and understandability of the individual test cases are not important to expose to these parties: those aspects are already covered by the internal QA review workflow, and exposing them adds overhead for these parties, which could reduce the chance that the review actually takes place.
To set up a review of test cases with external parties, generate a high-level list of the test cases, titles only, in an etherpad or shared document that the external parties can interact with. Each external party can then send their review results in the following form, on a per-role basis:
* Development Code Flow Coverage by Developer Lead - review+ for sufficient coverage, review- for coverage needing improvement