QA/Platform/Graphics/Quantum/Renderer

! Timeline
|-
| '''Web Compatibility'''
(list any risks to web-compat that need to be mitigated)
| (e.g. automation, fuzzing, manual testing, A/B testing)
| (e.g. when to implement and start monitoring each mitigation strategy)
|-
| '''Performance'''
(list any risks to user-perceived performance, and the justification for switching to the new system)
| (e.g. automated tests, benchmarks, manual testing, user studies)
| (e.g. when to implement and start monitoring each mitigation strategy)
|-
| '''Stability'''
(list any risks to crash rate, data loss, rendering correctness, etc.)
| (e.g. automated tests, data monitoring, static analysis, fuzzing, crawling, etc.)
| (e.g. when to implement and start monitoring each mitigation strategy)
|-
| '''Memory'''
(list any risks to memory footprint, installer size, etc.)
| (e.g. tests, data monitoring, etc.)
| (e.g. when to implement and start monitoring each mitigation strategy)
|-
| '''Hardware Compatibility'''
(list any risks of reduced accelerated content due to hardware support and blocklisting)
| (e.g. automated tests, manual testing, data monitoring, etc.)
| (e.g. when to implement and start monitoring each mitigation strategy)
|}
=== Scope of Testing ===
* Platform coverage
* Hardware coverage
* Use-case coverage (see the coverage-matrix sketch after this list)
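The bullets above are placeholders; as one way to make "coverage" concrete, here is a minimal Python sketch of a platform × hardware × use-case test matrix. The platforms, GPU vendors, and use cases listed are illustrative assumptions, not the project's actual ship targets.

<pre>
from itertools import product

# Hypothetical coverage axes; the real plan would enumerate actual ship targets.
PLATFORMS = ["Windows 10", "macOS", "Linux"]
GPU_VENDORS = ["NVIDIA", "AMD", "Intel"]
USE_CASES = ["scrolling", "video playback", "WebGL", "zooming"]

# Every (platform, GPU, use case) combination starts out untested.
coverage = {combo: False for combo in product(PLATFORMS, GPU_VENDORS, USE_CASES)}

def record_result(platform, gpu, use_case):
    """Mark a configuration as exercised by manual or automated testing."""
    coverage[(platform, gpu, use_case)] = True

def report_gaps():
    """Return the configurations that still need coverage."""
    return [combo for combo, done in coverage.items() if not done]

record_result("Windows 10", "NVIDIA", "scrolling")
print(f"{len(report_gaps())} configurations still untested")
</pre>

Gaps reported this way could feed directly into the manual-testing schedule below.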
=== Automated Testing ===
* Test suites (e.g. reftests, mochitests, xpcom, crash tests, code coverage, fuzzing, perf, code size, etc.)
* Benchmarking (e.g. first-party, third-party, comparison to non-WR, comparison to competition)
* What are the questions we want to answer, why do we care, and how will we measure? (see the benchmark-comparison sketch after this list)
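To illustrate the "how will we measure" bullet, below is a minimal Python sketch that compares per-frame paint times from a WebRender run against a baseline run and flags a regression past a fixed threshold. The sample numbers and the 5% threshold are invented for illustration; real data would come from the perf automation.

<pre>
import statistics

# Hypothetical per-frame paint times in milliseconds from two benchmark runs.
baseline_ms = [8.2, 8.5, 8.1, 8.9, 8.4, 8.3, 8.7, 8.6]   # current renderer
webrender_ms = [7.1, 7.4, 7.0, 7.8, 7.2, 7.3, 7.5, 7.6]  # WebRender enabled

def summarize(samples):
    return statistics.mean(samples), statistics.stdev(samples)

base_mean, base_sd = summarize(baseline_ms)
wr_mean, wr_sd = summarize(webrender_ms)

# Flag a regression if WebRender is more than 5% slower than baseline (illustrative threshold).
REGRESSION_THRESHOLD = 1.05
regressed = wr_mean > base_mean * REGRESSION_THRESHOLD

print(f"baseline:  {base_mean:.2f} ms (sd {base_sd:.2f})")
print(f"webrender: {wr_mean:.2f} ms (sd {wr_sd:.2f})")
print("regression" if regressed else "no regression")
</pre>

The same shape of comparison applies whether the metric is paint time, checkerboarding, or benchmark score; only the samples and the threshold change.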
=== Manual Testing ===
* Exploratory (e.g. top sites, UI elements, high-contrast themes, HiDPI displays, switching GPUs, zooming & scrolling, a11y, RTL locales, printing, add-on compatibility, security review, etc.)
* A/B testing of new vs. old
* New use cases which might apply
* Hardware/driver/platform compatibility to inform expanding or retracting ship targets (see the sketch after this list)
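For the last bullet, here is a minimal sketch of how per-configuration failure data could inform expanding or retracting ship targets. The vendor/driver identifiers, session counts, and the 1% threshold are made up for illustration.

<pre>
# Hypothetical (vendor, driver version) -> (total sessions, sessions with rendering failures).
results = {
    ("NVIDIA", "388.13"): (12000, 18),
    ("AMD", "17.11.1"): (8000, 240),
    ("Intel", "20.19.15.4624"): (15000, 45),
}

FAILURE_RATE_LIMIT = 0.01  # illustrative: reconsider configurations failing in >1% of sessions

for (vendor, driver), (sessions, failures) in results.items():
    rate = failures / sessions
    decision = "keep in ship target" if rate <= FAILURE_RATE_LIMIT else "consider blocklisting"
    print(f"{vendor} {driver}: {rate:.2%} failure rate -> {decision}")
</pre>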
=== Integration Testing ===
* Criteria for enabling on Nightly (e.g. all automation passing)
* Telemetry experimentation (e.g. crash rates, user engagement via page views or scrolling, WR-specific probes); see the crash-rate comparison sketch after this list
* Any blockers for running tests
* Ensuring RelMan / RelQA sign off on the test plan and its execution prior to riding the trains
* Does it impact other project areas (e.g. WebVR, Stylo, etc.)?
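For the telemetry experimentation bullet, below is a minimal sketch of comparing crash rates between a WebRender-enabled cohort and a control cohort using a two-proportion z-test over sessions. The cohort sizes and crash counts are invented; a real analysis would run against the actual telemetry data.

<pre>
import math

# Hypothetical cohort data: (sessions with a crash, total sessions).
control = (450, 1_000_000)      # WebRender disabled
experiment = (520, 1_000_000)   # WebRender enabled

def crash_rate(crashes, sessions):
    """Crashes per 1000 sessions."""
    return crashes / sessions * 1000

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for the difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p2 - p1) / se

print(f"control:    {crash_rate(*control):.3f} crashes per 1000 sessions")
print(f"experiment: {crash_rate(*experiment):.3f} crashes per 1000 sessions")
z = two_proportion_z(control[0], control[1], experiment[0], experiment[1])
print(f"z = {z:.2f}  (|z| > 1.96 suggests a statistically significant difference)")
</pre>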
=== Out of Scope ===
* What is not in scope for testing and/or release criteria?
* Are there things we won't do, like Test Pilot or Shield studies?
* Are there things we won't test, like specific hardware we don't have access to?
* Will we do a staged rollout vs. a normal rollout?
* Do we care about edge-case behaviours and/or user profiles (e.g. add-ons, themes, configurations, etc.)?
* Do we care about edge-case environments (e.g. VMs, Boot Camp, old drivers, etc.)?