QA/Fennec/TestStrategy


QA Test Strategy – Fennec

Overview

This is a tracking document that outlines the test strategy Mozilla QA will follow for a thorough test pass of Fennec releases. It describes the general steps and process QA will take to reach a "sign off" on each release. Mozilla QA strives to uphold software quality: a successful pass means no blocking issues, a "green" result on test cases in both execution and automation, and a thorough verification window covering all blocking and critical bugs.


NOTE: This document will continue to be updated as Mozilla refines its development process.

Fennec Testing

As Fennec is a new project at Mozilla, much of the early test effort goes toward setting up infrastructure, defining basic processes, building up a QA team and community, and participating in the early decisions that shape the future of the project.

One big issue with Fennec will be testing the different platforms. We are supporting Maemo first, with a release for Windows Mobile Professional 6.1 shortly thereafter. Unlike testing on Linux, Windows, and Mac, each mobile platform has different OS requirements, build tools, screen sizes, window managers, and user input. These unique devices (and their attributes) will be covered as individual test areas.

As the QA team grows, we will divide up the work per device. This makes the most sense as there is core expertise and unique community involvement needed for each platform and device. There will be core issues to tackle (porting automation test harnesses, general usage, test planning, release management) that are not specific to devices and can be done by anybody on the team.

Areas Covered

Areas not Covered

  • Feature Unit Tests
    • These are normally covered by feature developers, not QA. We are open to discussing what QA could do to increase unit test coverage beyond the automation that currently exists. Needs more definition on what QA versus development should own.
  • String Localization Tests
    • See more in the L10n Test Deliverable section.
  • In depth functionality of Plugins, Extensions, and Themes
    • Many extensions are hosted on AMO but are not developed or tested by Mozilla; we rely on our volunteers to manage these third-party extensions. Mozilla QA will focus on high-level add-on management, error handling, and a few of the top supported extensions, but extension functionality will not be covered by Mozilla QA.

Schedule

The release schedule is broken up by milestones, determined by the Fennec team. Each milestone will contain feature enhancements, bug fixes, and updates. Most testing of focused areas and regression areas will come in the latter half of the schedule, during the latter parts of alpha up until the final release candidate.

We are going to branch from mozilla-central to create a 1.9.2 branch which we will ship the Maemo Fennec 1.0 release from. At the point of this branch we will clean it up and make a Beta3 which will be the last beta before RC1.

Test Tools

This is an ongoing set of test tools used by the QA team for automation and execution on the Fennec product. Some of these tools are developed in-house and are only accessible within the internal Mozilla network. The list below gives a short description of each tool; how the relevant tools are used is explained further under the specific test areas below.

  • Litmus Test Case Manager -
    • Manages a suite of test cases used for smoketests, BFTs, FFTs, regression, and functional areas. Public usage.
  • Talos - Performance Testing Project
    • Talos is a performance testing project. With a framework written in Python, it runs Ts (a startup test) and Tp (a page load test) while monitoring memory and CPU usage. Ts is a simple JavaScript web page that times its own loading and then exits. Tp is a JavaScript web page that cycles through a set of locally served web pages. The page set is a collection of the top 500 web pages as listed by Alexa; they have been 'cleaned' so that all of their content is served locally.
  • BuildBot -
    • A client/server test system for building, owned by Robcee. Clients request execution activities; the server sets up the tests and collects run results (e.g. a parallel make system for builds, able to build pieces independently and assemble them later). Like a next-generation Tinderbox.
  • Reftests -
    • A unit test framework that is easy to run in the browser and easy to write for. The test cases don't rely on external files. Mainly useful for layout tests.
  • MochiTests -
    • Built on the MochiKit platform (main developer: Robert Sayre). A unit test framework that provides an easy way to drive the browser: actions are scripted in JavaScript to quickly create test cases and tie them to executables. Also supports accessibility layers.
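Reftests are driven by a manifest that pairs each test page with a reference page expected to render identically (`==`) or differently (`!=`). A minimal manifest sketch, with illustrative file names:

```
== block-margin.html block-margin-ref.html
!= visible-text.html blank-ref.html
```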
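A Mochitest is an ordinary HTML page that pulls in the SimpleTest harness and reports results through calls like `ok()` and `is()`. A minimal sketch (the assertions themselves are illustrative):

```html
<!DOCTYPE html>
<html>
<head>
  <!-- SimpleTest is served by the mochitest harness itself -->
  <script src="/tests/SimpleTest/SimpleTest.js"></script>
</head>
<body>
<script>
  // ok() reports a boolean pass/fail; is() compares two values.
  ok(document.body, "the page has a body");
  is(1 + 1, 2, "basic arithmetic works");
</script>
</body>
</html>
```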
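The Tp cycle that Talos runs (load each page in the set, record the time, repeat, and aggregate per page) can be sketched outside the harness. This is an illustrative sketch only: Talos itself is a Python framework driving a real browser against the locally served page set, and the function names, page names, and median aggregation here are assumptions for illustration.

```javascript
// Sketch of a Tp-style page-cycling measurement (illustrative only).

// Time a single "page load"; loadFn stands in for the browser
// fetching and rendering one locally served page.
function timeLoad(loadFn) {
  const start = Date.now();
  loadFn();
  return Date.now() - start;
}

// Cycle through the page set a number of times and return the
// per-page median load time; medians smooth out one-off noise.
function runTpCycle(pages, cycles, loadFn) {
  const times = {};
  for (const page of pages) times[page] = [];
  for (let c = 0; c < cycles; c++) {
    for (const page of pages) {
      times[page].push(timeLoad(() => loadFn(page)));
    }
  }
  const medians = {};
  for (const page of pages) {
    const sorted = times[page].slice().sort((a, b) => a - b);
    medians[page] = sorted[Math.floor(sorted.length / 2)];
  }
  return medians;
}
```

In the real harness the "load function" is the browser itself, and the results feed into the performance graphs rather than a return value.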


Test Tools Not Used For Fennec

  • JS Test Suite -
    • Tests for the JavaScript engine (code name: SpiderMonkey). Owned by Bob Clary and written in JavaScript; used to test different JavaScript APIs and functions.
  • Security Test library -
    • Bob Clary finds ways to test security issues: automated "fuzz testers" throw randomized noise at the browser to trigger edge cases. We use it to target security issues (e.g. take the HTML structure of a web page, look at the tags, and find cases to execute within them), looking for ways to exploit memory, leaks, and execution. It tries to automate crash testing by generating HTML and JS code that crashes the browser.
  • Download Checker -
    • Bob Clary again. Takes a huge list of locales and OS installs, an area that is highly error prone. The script goes to the web page, downloads each build, installs it, runs a test, and closes it, fully automated. This is executed only after we go live (all.html - "other systems and languages").
  • Eggplant -
    • Graphical record-and-replay tool owned by Tracy; the Litmus smoke tests run on it. Identify a spot, string together a list of actions, and play them back.
  • Minotaur -
    • Minotaur is a testing framework that can automatically check the preferences, search settings, bookmarks, extensions, and update channels for a Firefox build. These settings are vetted against a previous gold standard of settings and/or verification files. This is used to simplify and reduce the manual testing currently required for Localization and Partner builds of Firefox.
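The crash-hunting approach described for the security test library above (generate randomized HTML/JS and feed it to the browser) can be sketched as a toy generator. The tag set and nesting depth below are arbitrary illustrative choices, far simpler than the real tool's grammar.

```javascript
// Toy sketch of a grammar-driven HTML fuzzer: pick tags at random,
// nest them, and emit a page to throw at the browser. The real tools
// are far more sophisticated about grammars and edge cases.
const TAGS = ["div", "span", "table", "iframe", "marquee"];

// Build a randomly chosen, randomly nested element tree.
function randomElement(depth) {
  const tag = TAGS[Math.floor(Math.random() * TAGS.length)];
  const child = depth > 0 ? randomElement(depth - 1) : "x";
  return `<${tag}>${child}</${tag}>`;
}

// Wrap one random tree in a complete page, ready to load.
function fuzzPage() {
  return `<html><body>${randomElement(4)}</body></html>`;
}
```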
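Minotaur's gold-standard comparison amounts to diffing a build's settings against a vetted reference. A minimal sketch, assuming a flat key/value settings object; the preference names and return shape are illustrative, not Minotaur's actual format:

```javascript
// Sketch of a Minotaur-style check: compare a build's settings
// against a vetted "gold standard" and report any deviations.
function diffSettings(gold, actual) {
  const deviations = [];
  for (const [key, expected] of Object.entries(gold)) {
    if (actual[key] !== expected) {
      deviations.push({ key, expected, actual: actual[key] });
    }
  }
  return deviations;
}
```

An empty result means the build matches the gold standard; anything else is a candidate bug in a localization or partner build.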

Categories

Feature Tests

Summary

There are very few new features inside of Fennec. That said, any major work item (a port from Firefox to Fennec, a port to a new platform, a new UI piece, or a new feature) will have a test plan and a tester who owns that area.

Planning & Scheduling

The Fennec dev team has a great online [tool] to track features by milestone. We will work from this list as a starting point for what we focus on in each release. We will create a basic test plan outlining our major focus areas, how we will test them, and the test cases. These will live in the [Test Plan].

The expectation is that the major features are well tested during the Alpha phase or maybe B1 at the latest. Then testing during later betas is focused on verifying bug fixes, testing additional edge cases that are identified (polish testing), and regression testing using smoke tests, BFTs, FFTs and automated runs.

Test Framework

Automation:

  • Mochitest
  • Reftests
  • Crashtests
  • Mochitest Chrome
  • XPCShell

Execution:

  • Litmus (BFTs, FFTs, smoketests)
  • Test cases within Bugzilla bugs - some test cases are written and executed manually within feature bugs.
Results

Test results for executed test cases will be tracked within Litmus. For automation results, we are still working with the Build team to get them running on Tinderbox. For the time being, we will post the results [here]

Web Compatibility Tests

Summary

Web Compatibility Testing focuses on regression and compatibility with popular websites and web applications. For mobile, we need to test both regular sites and their mobile versions, since some websites will recognize us as a mobile browser.

Planning & Scheduling

The QAE team is creating an online tool/database to track which websites have been visited and whether they work. We will use this tool for Fennec to gain as much coverage as possible.

For the Beta1 and Beta2 releases, we don't expect to have this tool readily available. In the meantime we will visit the top 25 Alexa sites as well as cover the top financial and shopping sites. For the 1.0 release, we will use the tool and define the number of sites we must visit before release.

Test Framework
Results

Test results will be in the web compatibility tool as well as various cases in litmus. For the beta releases, this will be ad hoc and tracked via bugs in bugzilla for failures.

User Performance Tests

Summary

The User Performance Tests are not to be confused with the browser performance tests run by automation. Instead, we will concentrate on day-to-day usage of Fennec and any performance issues we see. These include areas like startup, downloading, idling, and other common usage cases. In the future, we will expand this to include network performance, UI performance, and other low-level areas.

Planning & Scheduling

We already have a set of use cases in Litmus. Further tests are needed comparing how we perform against other browsers, such as startup time and time to complete a web transaction compared to Opera and Pocket IE.

Results

Test cases will exist in litmus. Any issues found will be tracked in Bugzilla.

Platform Tests

Summary

Configuration tests as run for Firefox are not necessary for Fennec. What we will do for Fennec is verify that we pass the smoketests, as defined in Litmus, on all supported platforms.

Planning & Scheduling

For Beta1, we will have a list of all platforms we support (versions of Maemo, Windows Mobile, Symbian). This list will include the different OS options we will officially support (in Windows Mobile, each device has unique OS components which could affect Fennec's dependencies). These will be outlined in a wiki table and possibly inside of Litmus. Limitations will depend on available hardware and open bugs.

Results

Issues found will be tracked in Bugzilla, and test results will be recorded in Litmus.

Security Tests

Summary

Security tests are critical for catching external attacks that reach the user through Fennec. Mozilla is constantly finding and issuing fixes for malware, leaks, and identity attacks to safeguard Fennec users. The goal of these tests is to catch as many new attacks as possible, as well as to catch regressions of past security issues that were fixed but have broken again.

Planning & Scheduling

We will use many of the same security testing tools that we use on Firefox. Currently many of the tools do not work on Fennec; we will identify the tools we plan to use by Beta1.

Test Framework for Firefox
  • Browser Security tests at Scanit
  • Bob Clary's security tests - Javascript cases created and automated on Eggplant.
  • Platforms with anti virus software, firewalls, global passwords, vista parental controls
  • Jesse’s Fuzzer Tools - jsparsefuzz.js
  • Security bugs in Bugzilla
Results

A few security tests are also tracked in Litmus to be run manually. A handful of security bugs in Bugzilla also have test cases to be run independently. (Example: Bug 389106)


Stress/Leak Tests

Summary

We will need to test Fennec in low memory conditions and for long durations. Currently in Fennec we frequently hit the warning that a script is taking too long on the page. For Leak testing, we will instrument the build and run it under heavy usage (stress conditions) looking for memory leaks.

Planning & Scheduling

As it stands, we are doing performance testing via Talos and automation via the Tinderbox unit tests. These tools look for leaks and give us some basic coverage. Post 1.0, we need to revisit this area and develop tools and metrics for proper testing. Those tests will be defined here as a set of config files and tools.

Test Framework for Firefox
  • bc's SpiderMonkey
Results

Failed results will result in bugs. Other results will be tracked in a release specific wiki page.

Accessibility Tests

Summary

Accessibility and mobile devices are worlds apart, but work is being done in the community to bridge that gap. Unfortunately this work is per device or OS, and the coverage it gets is limited. The good news is that a solution is available for Symbian and Windows Mobile, and there are also some basics we can do inside our own product.

Planning & Scheduling

There is little momentum on mobile devices for accessibility. For the time being we are not doing any specific tests, but post 1.0 we will revisit this to find what we can reasonably do with our product. We will publish a full testplan for Mobile Accessibility to outline our tests and track our progress.

Firefox A11y (current draft)

Test Framework
  • Accessibility via keyboard navigation for 1.0 using litmus test cases
  • www.nuance.com/talks
  • www.codefactory.es
Results

We will include any a11y test cases in Litmus and report the results through there.

Bug Verifications

Summary

Verifying fixed bugs in Bugzilla gives us additional testing on the specific areas that were fixed.

Planning & Scheduling

To prioritize and narrow the list of bugs, QA will focus primarily on Fennec bugs of the highest severity that carry the "blocking" flag. The goal is to verify every blocking and priority-1 bug, and this will be the benchmark to "ship" or "stop" the release. Starting with Beta1, testing will focus more on bug verification and on creating test cases (in Litmus or automation) for missing areas.

Results

Bugs will be marked verified or reactivated for regression. All results tracked in Bugzilla.

Distribution Tests

This is not an area that we need to test in Fennec for the 1.0 release. In the future we might have private builds or specialized versions that we need to test.

L10N Tests

Summary

Fennec will ship with a large quantity of l10n builds. Currently there are [about 20] languages under development with more coming on board each week.

The l10n team translates the strings, produces builds, and puts each localized build in a beta state. From there, others look for issues (translations, dialogs, messages, etc.) and run Litmus test cases. This is the process that has been used for Firefox and its 70 supported languages, and it works well.

Planning & Scheduling

We don't expect to have any official builds much before the 1.0 release. There are partial or beta versions of the l10n builds available, but they are under development. When we are getting ready to finalize an RC candidate for 1.0, we will assess the list of l10n builds available and build a list of spot checks that we will perform.

The QA team will ensure we have a proper l10n set of litmus test cases for Fennec by the Fennec Beta2 release.

When it comes time for the 1.0 release, we will spot check 5 or 6 languages to ensure they install and can browse the web.

Test Framework

Execution

Results

We will track the core results in litmus. For the spot checks, we will use a wiki page similar to [this]

Install/Updates Tests

Summary

As we are releasing a 1.0 release, there is not a previous release to update from. We will have Alpha and Beta releases in the field which will require some method of updating. Fennec installs will be different than installing on a normal PC which means our install and update testing will be very different from Firefox.

Planning & Scheduling

We will do similar testing to what we do for Firefox. For Maemo, we are targeting our main distribution from maemo.org and Mozilla-based repositories. There is no plan for beta/test channels. Per release, our test schedule will list the repositories to test.

The maemo test repository and binary ftp for release is located here:

We need to create a beta channel repository which will serve as the nightly update tool [bug 475799]

TODO: figure out the distribution method for windows mobile

Test Framework
  • Installs (normal, failed)
  • Beta -> 1.0 updates (normal, failed)
  • 1.0 -> Future updates (normal, failed)
  • Spot Checks, Configurations
Results

Results are reported via wiki and email.

Negative/Crash Tests

With Fennec, we do not have support for session restore or a solid crash reporting system in place. Knowing that, we will not focus on these types of tests for the 1.0 release.

In future releases, we will focus on resource constraints, crashing the browser, viewing bad content and verifying in these cases that the browser recovers just fine and reports the appropriate error to the user and Mozilla.

Collecting Feedback

Our main goal here is to make sure we document and test the feedback channels. For Fennec we will not have the community and tools built up as Firefox does.

Reporting tools

Regression Tests

Fennec is so new that we don't yet need dedicated regression tests. As the product matures and the QA team grows, we will start doing this. With that said, many of the Litmus and automated cases serve as regression tests for Firefox and shared code (XULRunner).

Community Involvement

The goal is to actively involve more community members in testing Fennec in the Beta stages up to the 1.0 release.

There are many active participants in the community that are enthusiastic about Fennec and want to contribute to our quality assurance efforts. However, getting them involved and maximizing the value of their time and effort requires a lot of coordination, planning, and enforcing of the correct processes and procedures. Since such activities create constant challenges for our QA team, we are working on a new strategy to promote more community involvement.

Current avenues of community involvement include:

  • Friday Test Days - Will utilize for testing Fennec
    • Invite the QA community to #testday with a specific topic such as triaging unconfirmed bugs, running tests in Litmus, Q&A/Help sessions.
  • QMO - quality.mozilla.org
    • Using the website to promote news and articles on what the Mozilla QA team is working on
    • Provide events and forums for the community to get involved.
    • Create closer relationships with regular contributors and help them become leaders for our events and projects.
  • Campus Evangelism - Mozilla University - Consider this for Fennec
    • Leverage the Campus Reps program to reach out to students and professors at schools.
    • Reach out to folks interested in open source and help them discover Mozilla project and get involved in Firefox development and testing.
  • Better "Beta" product pages and programs
    • Update Minefield and Gran Paradiso pages to direct folks to QMO and other useful channels for getting involved and providing feedback.
    • Coordinate community testing with marketing/pr plans for Fennec Beta and 1.0 release