B2G/QA/Automation/UI/Best Practices

Coding Standards

  • Use consistent coding standards to increase test clarity and simplify code reviews.
  • Python: follow PEP8.
  • JavaScript: follow the Mozilla Developer Guide coding style.

Synchronization

  • Waits should almost always be conditional, using Marionette Wait().until() constructs (see the sketch below).
  • Conversely, any sort of hard sleep() construct is discouraged: it fixes the best-case execution time of the test at the length of the sleep, instead of letting the test adapt its runtime to the best the AUT allows.
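
A minimal sketch of the difference, assuming an active Marionette session is passed in; the element id is illustrative:

  import time

  from marionette_driver import By, Wait

  def wait_for_summary(marionette):
      # Good: polls until the condition holds, so the wait ends as soon
      # as the AUT is ready, bounded by the timeout.
      Wait(marionette, timeout=10).until(
          lambda m: m.find_element(By.ID, 'summary').is_displayed())

  def wait_blindly():
      # Bad: always costs the full ten seconds, even when the AUT is
      # ready instantly.
      time.sleep(10)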

Abstraction

There are two broad types of UI tests. Ideally, the two are kept separate, as they call for significantly different levels of abstraction to optimize for maintenance and clarity.

Tests that check UI mechanics

These tests check that when a particular UI widget is manipulated, something happens. This might be verifying that a text field is disabled based on the state of a radio button, that a keyboard shortcut initiates a particular piece of functionality, that a particular button toggles a back-end state directly, or anything else similarly low-level.

These tests:

  • Are specifically concerned with the mechanics of the UI.
  • Will both call and verify the UI controls and back-end APIs directly in test code, since that is their specific concern.
  • Will consist of a string of UI calls such as calendar.newEntryButton.tap() and calendar.summary.text == ...
  • Should not delegate strings of UI calls used in a test to functional-style Page Objects or similar high-level abstraction mechanisms. They own that sequence.
  • Will be very fragile, as any change in a UI assumption will require changes to all tests making that assumption.
  • Should therefore be kept very short and isolated, with no extended sequences. Ideally, one changed assumption = one broken test.
  • Should still abstract selectors, as these change frequently and aren't directly important to the test.
  • Will not generally correspond to a traditionally scripted QA test. They'll be much shorter and simpler, more like a single test step.
  • Should have almost no logic whatsoever.

Even though these tests should not abstract the test code itself, they should abstract actions performed in setup and cleanup. They can do this by reusing functional test Page Objects, or with their own utility functions if not sharing code with functional behavior tests. A sketch of such a test follows.
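
A minimal sketch of a UI mechanics test in the style of the Python gaia-ui-tests; the import paths follow gaiatest conventions, and the selectors, ids, and app details are illustrative assumptions rather than real Gaia code:

  from gaiatest import GaiaTestCase
  from gaiatest.apps.calendar.app import Calendar

  from marionette_driver import By

  class TestNewEntryButton(GaiaTestCase):

      # Selectors are abstracted: they change frequently and aren't
      # directly important to the test.
      _new_entry_button = (By.ID, 'new-entry')
      _summary_field = (By.ID, 'summary')

      def setUp(self):
          GaiaTestCase.setUp(self)
          # Setup reuses a functional Page Object to reach the screen.
          self.calendar = Calendar(self.marionette)
          self.calendar.launch()

      def test_new_entry_button_opens_editor(self):
          # The test owns this short UI sequence directly; the
          # mechanics are its specific concern.
          self.marionette.find_element(*self._new_entry_button).tap()
          summary = self.marionette.find_element(*self._summary_field)
          self.assertTrue(summary.is_displayed())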

Tests that check functional behavior

These tests check that when a user goes through a sequence of high-level actions, they get an expected behavioral result.

This is subtly but importantly different from checking that when they manipulate the UI, they get a particular set of UI side effects. For example, a user action is "I create a calendar entry," not "I press the create button." An expected functional result is "I have a valid calendar entry with X defaults," not "this field reads this, this field reads that."

With that in mind, details of the UI are abstracted into objects used by the functional tests. These objects manipulate the UI controls to achieve the user action, but expose only the high-level actions as methods to the test. They do not perform verifications themselves, beyond what they need to guarantee their own state. A sketch of such an object follows.
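
A minimal Page Object sketch implementing one user action; the class, method names, selectors, and element details are illustrative assumptions:

  from marionette_driver import By, Wait

  class Calendar(object):

      _new_entry_button = (By.ID, 'new-entry')
      _summary_field = (By.ID, 'summary')
      _save_button = (By.ID, 'save')

      def __init__(self, marionette):
          self.marionette = marionette

      def create_entry(self, summary):
          # Exposes the user action, not the widget mechanics.
          self.marionette.find_element(*self._new_entry_button).tap()
          field = self.marionette.find_element(*self._summary_field)
          # Only enough verification to guarantee its own state.
          Wait(self.marionette).until(lambda m: field.is_displayed())
          field.send_keys(summary)
          self.marionette.find_element(*self._save_button).tap()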

This lets functional behavior tests complement the UI mechanics tests: the UI tests check the mechanics without concern for overall functional behavior, and the functional tests assume the mechanics are correct (having been verified by the UI tests) and check the functional behavior.

It's impossible to fully isolate functional behavior tests in terms of what UI or APIs they touch: any set of tests in a given area will touch all the basic actions in that area, and all tests will touch the most basic actions in the AUT. This makes them potentially very fragile. By maintaining a hands-off approach to UI mechanics, these tests no longer have to own the specific UI sequences within test code, and can maintain SPOT (a Single Point Of Truth) by moving UI and API assumptions into reusable objects.

These tests:

  • Should not be concerned with the UI or API details of how they perform their actions or verify results.
  • Should not call or verify UI controls or back-end APIs directly in test code.
  • Should encapsulate UI actions and checks in Page (aka App or Region) Objects. The Page Object should be the only place UI is used directly.
  • Should encapsulate API details in objects or reusable functions as well.
  • Should favor verifications that round-trip through the UI over verifications of back-end APIs. They should look for what the user would look for.
  • Will consist of a string of high-level calls, such as calendar.createEntry(...) and calendar.getEntry() == defaultEntry (see the sketch after this list).
  • Will be longer and more complex, so they need to be at a higher level of abstraction to promote reuse and keep clarity.
  • Will more closely correspond to a full QA-scripted test.
  • Can have basic logic to the extent a user would also perform that logic as part of a scenario. They should still optimize for straightforward scenarios.
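
A minimal functional behavior test sketch built on the Calendar Page Object sketched above; launch() and get_entry() are assumed helpers on that object, and the entry attributes are illustrative:

  from gaiatest import GaiaTestCase
  from gaiatest.apps.calendar.app import Calendar

  class TestCreateEntry(GaiaTestCase):

      def setUp(self):
          GaiaTestCase.setUp(self)
          self.calendar = Calendar(self.marionette)
          self.calendar.launch()

      def test_created_entry_gets_defaults(self):
          # High-level user actions only; no widget mechanics here.
          self.calendar.create_entry(summary='Dentist')
          entry = self.calendar.get_entry('Dentist')
          # Verify through the UI what the user would look for.
          self.assertEqual(entry.summary, 'Dentist')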

Explicitness

  • Tests are frequently the only documentation of application behavior.
  • Any extraneous detail in a test interferes with clarity and the ability to review it for correctness.
  • Test code should be absolutely explicit about behavior at its functional level of abstraction.
  • Test code should be implicit about behavior below its functional level of abstraction.
  • Reading a test should tell you as closely as possible what the test means to do, no more and no less (see the sketch below).
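
A sketch of the same check at two levels of abstraction, written pytest-style with assumed fixtures; the page object, selectors, and ids are illustrative:

  from marionette_driver import By

  def test_explicit_at_functional_level(calendar):
      # Reads as exactly what the test means to do, no more, no less.
      calendar.create_entry(summary='Dentist')
      assert calendar.get_entry('Dentist') is not None

  def test_buried_in_ui_detail(marionette):
      # The same behavior, but the intent is hidden below the
      # functional level of abstraction in widget mechanics.
      marionette.find_element(By.ID, 'new-entry').tap()
      marionette.find_element(By.ID, 'summary').send_keys('Dentist')
      marionette.find_element(By.ID, 'save').tap()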

Logic

  • Generally, minimize logic in tests. Tests do one thing and should rarely require branching on decisions.
  • However, there are exceptions. A much larger discussion on this can be found here.

State

  • The test harness should always start a test from a single known state.
  • Tests should not depend on other tests for their state. Every test should stand alone, and should be runnable in any order.
  • Tests should not try to keep application state, except to capture values to compare against in later verification.

Resetting the AUT to a known state is the single most important piece of functionality in a test suite or harness. It must be absolutely correct, and it must run as fast as possible. If it is unreliable, the tests are unreliable. If it is slow, testers will start combining tests to avoid the overhead between them, and the tests will become unmaintainable.

When given a choice between putting effort into tests and putting effort into improving the reset/known-state handling of the harness, pick the harness every time. A sketch of harness-owned reset follows.
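
A minimal sketch of harness-owned reset, assuming a gaiatest-style base class whose setUp restores the device to a known state; the seeding helper and data-layer call are hypothetical:

  from gaiatest import GaiaTestCase

  class CalendarTestCase(GaiaTestCase):

      def setUp(self):
          # The base class is assumed to reset the AUT (kill running
          # apps, clear test data) before every test; this path must
          # stay fast and absolutely correct.
          GaiaTestCase.setUp(self)
          # Seed only the known state this suite needs, avoiding the
          # UI so the reset stays quick and robust.
          self.seed_default_calendar()

      def seed_default_calendar(self):
          # Hypothetical helper: insert fixtures through a data layer
          # or API rather than by driving the UI.
          self.data_layer.insert_calendar_entry({'summary': 'seed'})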

Programming Best Practices

  • In general, programming best practices should be followed. Automation is still software development. These best practices are too many to list here.
  • Optimize for correctness to expected application behavior and clarity, in that order. The job of the test is both to verify correctly and communicate correct behavior to the reader.
  • Don't share ownership of test details. If a test needs a particular sequence in a particular order, the test should own that sequence in its own code, even if some other test also owns a similar or identical sequence elsewhere. Don't risk letting modifications to one test make another test incorrect.
    • UI mechanics tests that must perform a specific sequence of UI operations in a specific order are performing macros: sequences of UI actions. These are not candidates for moving into shared functions, because a function must not expose the details of its implementation without breaking its integrity as an abstraction. Macros can be reused, but should be kept local to the test file (see the sketch after this list).
    • Functional tests that abstract over UI still own a sequence of functional calls. Similarly, any reuse of that sequence should be kept local to the test file.
  • Do share ownership of anything that is not a test detail, including (and especially) almost any particular task done in setup and cleanup. Know your level of abstraction.
  • Sometimes setup/cleanup needs a function and the test needs a macro that do the same thing. Sharing code for these is risky. It's usually better to let setup/cleanup use a highly robust function, preferably without involving UI, that you can change at will to make faster or more robust later without risking test coverage.
  • Balance SPOT with your organizational structure. If it's not natural for two groups to communicate, those groups should not have shared ownership of code. Either refactor shared code into a module with single ownership, consumed by the others as a version-pinned library, or let both groups maintain their own copies. SPOT only helps when you all talk. Keep in mind that needing more than a few people to talk in order to make a given change is an antipattern.
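
A sketch of the macro-versus-function distinction; all names, selectors, and the data-layer call are illustrative assumptions:

  from marionette_driver import By

  def create_entry_macro(marionette, summary):
      # Local macro: an exact UI sequence owned by this test file. It
      # deliberately exposes its implementation (which widgets, in
      # which order), so it must not be shared across files.
      marionette.find_element(By.ID, 'new-entry').tap()
      marionette.find_element(By.ID, 'summary').send_keys(summary)
      marionette.find_element(By.ID, 'save').tap()

  def ensure_entry_exists(data_layer, summary):
      # Shared setup/cleanup function: hides its implementation and
      # avoids the UI, so it can be made faster or more robust later
      # without risking any test's coverage.
      data_layer.insert_calendar_entry({'summary': summary})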

Test Runs

  • Separate stable and unstable tests. A failure in a stable test can be assumed to be a genuine signal, and can be escalated automatically.
  • Keep test runs green by pruning known failures. It's important to know which tests have just started failing today (see the sketch below).
  • All test failures must be retested manually, even known intermittents.
  • Tests that are disabled must be re-added to manual test runs, or their coverage will be dropped.
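
A minimal sketch of pruning a known failure so the run stays green; the skip reason and bug reference are placeholders, and unittest is just one way to tag such tests:

  import unittest

  class TestCalendar(unittest.TestCase):

      @unittest.skip('Known intermittent (bug reference here); must '
                     'stay in manual test runs until re-enabled')
      def test_known_intermittent(self):
          pass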