QA/Platform/Graphics/Quantum/Renderer

From MozillaWiki

Documentation

Testplan

Risk Analysis

Web Compatibility

  Risk: (list any risks to web-compat that need to be mitigated)
  Mitigation: (eg. automation, fuzzing, manual testing, a/b testing)
  Timeline: (eg. when to implement and start monitoring each mitigation strategy)

Performance

  Risk: (list any risks to user-perceived performance and justification to switch to new system)
  Mitigation: (eg. automated tests, benchmarks, manual testing, user studies, measuring against "frame budget")
  Timeline: (eg. when to implement and start monitoring each mitigation strategy)

Stability

  Risk: (list any risks to crash rate, data loss, rendering correctness, etc)
  Mitigation: (eg. automated tests, data monitoring, static analysis, fuzzing, crawling, etc)
  Timeline: (eg. when to implement and start monitoring each mitigation strategy)

Memory

  Risk: (list any risks to memory footprint, installer size, etc)
  Mitigation: (eg. tests, data monitoring, etc)
  Timeline: (eg. when to implement and start monitoring each mitigation strategy)

Hardware Compatibility

  Risk: (list any risks to reduction in accelerated content related to hardware and blocklisting)
  Mitigation: (eg. automated tests, manual testing, data monitoring, etc)
  Timeline: (eg. when to implement and start monitoring each mitigation strategy)

Scope of Testing

  • Canvas
  • SVG
  • Scrolling & Zooming
  • Fonts
  • Images
  • PDF and other embedded document formats
  • Printing
  • Video
  • WebGL
  • Platform-specific considerations
  • Hardware compatibility
  • Software fallback
  • Performance and memory usage
  • A/B comparisons for improvement

Automation

Status

As of 2017-03-01 there are 138 failing reftests with WebRender enabled (tracking - metabug - wiki - treeherder). Once these are fixed we'll want to run all reftests both with WebRender enabled and with it disabled; since WebRender depends on e10s, that means running the reftests 3x (non-e10s, e10s+WR, e10s-WR).

From a platform perspective we will only be concerned with Windows 7 Platform Update and later with D3D11 enabled. During the transition we need to test accelerated WebRender (Windows 8), accelerated D3D11 (Windows 8), and unaccelerated (Windows 7). This means that, at least for now, we will not need additional tests on Mac, Linux, Android, and older Windows environments. However, we will have to double up the hardware-accelerated test path on Windows 7+, with other platforms following later, until such time as we drop support for hardware/platforms that WebRender does not support.
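The three-configuration reftest matrix above could be driven locally along these lines. This is a hedged sketch only: the exact mach flag names (--disable-e10s, --setpref) and the reftest directory are assumptions to be verified against `./mach reftest --help` in the current tree before wiring anything into CI.

```shell
# Sketch of the 3x reftest matrix (flag names are assumptions, verify locally)

# 1. non-e10s baseline
./mach reftest --disable-e10s layout/reftests

# 2. e10s with WebRender enabled
./mach reftest --setpref gfx.webrender.enabled=true layout/reftests

# 3. e10s with WebRender disabled
./mach reftest --setpref gfx.webrender.enabled=false layout/reftests
```

In automation the same split would instead be expressed as separate test platforms/variants in the CI configuration rather than ad-hoc pref flags.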

Pending Review
  • Code Coverage
  • Performance
  • Code Size
  • Static Analysis
  • Benchmarking (eg. first-party, third-party, comparison to non-WR, comparison to competition)

Milestones

M3 (Feb 24 - Apr 21, 2017, Nightly 56)

Required:

  • QR-enabled builds running all reftests, gfx-related mochitests (apz, gl, gpu), crashtests, and fuzzing plus any new QR tests on Windows 8 debug & opt per checkin
  • QR-disabled builds running all tests on all platforms as per normal per checkin

Nice to Have:

  • Hardware-enabled test environments (eg. AMD, Intel, and NVIDIA GPUs) running Windows 10
  • QR-enabled builds running all gfx-related tests with QR, D3D11, and Basic once per Nightly on Windows

M4 (Apr 21 - Jun 26, 2017, Nightly 57)

M5 (Jun 26 - Nov 14, 2017, Nightly 57)

  Needs: WebRender on by default in Nightly
Reference: Program Management Plan

Manual Testing

  • Exploratory (eg. top-sites, UI elements, high-contrast themes, hiDPI displays, switching GPUs, zooming & scrolling, a11y, rtl locales, printing, addon compat, security review, etc)
  • A/B testing of new vs old
  • New use cases which might apply
  • Hardware/driver/platform compatibility to inform expanding/retracting ship targets
  • Once bug 1342450 lands, verify builds by flipping the gfx.webrender.enabled pref (Win7+, D3D11+)
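For repeatable manual verification, the pref flip in the last bullet can be pinned in the test profile instead of toggled in about:config each run. A minimal user.js sketch (the pref name comes from the bullet above; placing a user.js in the profile directory is standard Firefox pref-override behaviour):

```
// user.js in the test profile: force WebRender on for this profile
user_pref("gfx.webrender.enabled", true);
```

Flip the value to false for the A/B comparison profile so the old and new compositor paths can be compared side by side on the same build.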

Integration Testing

  • Criteria for enabling on Nightly (eg. all automation passing)
  • Telemetry experimentation (eg. crash rates, user engagement via page views or scrolling, WR-specific probes)
  • Any blockers for running tests
  • Ensuring RelMan / RelQA sign-off test plan and execution prior to riding the trains
  • Does it impact other project areas (eg. WebVR, Stylo, etc)?

Out of Scope

  • What is not in scope for testing and/or release criteria?
  • Are there things we won't do, like testpilot or shield studies?
  • Are there things we won't test, like specific hardware we don't have access to?
  • Will we do a staged rollout vs normal rollout?
  • Do we care about edge-case behaviours and/or user profiles (eg. addons, themes, configurations, etc)?
  • Do we care about edge-case environments (eg. VMs, Boot Camp, old drivers, etc)?