== Overview ==
=== [[Image:Bigfirefly.png|left|82x110px]]Introduction ===
Project Firefly is a Mozilla project that uses code coverage data, bug data, and code churn data to identify where new test cases will have the highest impact on testing the browser and reduce gaps in test coverage.
[[Image:Cov-cycle.png|center|239x238px|Coverage Analysis Cycle]]
=== Why ===
==== Impact of Code Coverage as the sole instrument of decision analysis ====
Code coverage data in isolation is an incomplete metric. Lack of coverage in a given area may indicate a potential risk, but 100% coverage does not indicate that the code is safe and secure. Look at the example provided below: code coverage data grouped by component in the Firefox executable, collected from automated test suite runs only. A number of manual test suites provide additional coverage beyond these figures.
But which of those hundreds of files need attention first?
==== Additional pointers for a better decision analysis ====
If, for each file in a given component, we have additional data points such as the number of bugs fixed in that file, the number of regression bugs fixed, the number of security bugs fixed, the number of crash bugs fixed, manual code coverage, branch coverage, and so on, we can stack rank the files in that component by any of those measures.
That is what we have done in this exercise: for each file we have generated data points ranging from the number of times it changed, to the number of each kind of bug fixed in it, to its coverage numbers in the various test modes.
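As a rough sketch of this kind of stack ranking, the Python snippet below orders a set of files by one such data point. The file names, field names, and figures here are hypothetical examples for illustration, not the actual Firefly data format.

<pre>
# Hypothetical per-file records; the real Firefly data set has one record
# per source file with churn, bug, and coverage figures.
files = [
    {"name": "nsCSSParser.cpp", "changes": 42, "security_bugs": 5, "line_coverage": 61.0},
    {"name": "nsGfxScrollFrame.cpp", "changes": 17, "security_bugs": 1, "line_coverage": 88.5},
    {"name": "nsTextFrame.cpp", "changes": 33, "security_bugs": 3, "line_coverage": 47.2},
]

def stack_rank(records, key):
    """Order files so the riskiest (highest value for the chosen
    data point) come first."""
    return sorted(records, key=lambda r: r[key], reverse=True)

# Rank by number of security bugs fixed; any other data point
# (changes, line_coverage, ...) can be substituted as the key.
for f in stack_rank(files, "security_bugs"):
    print(f'{f["name"]}: {f["security_bugs"]} security bugs, '
          f'{f["line_coverage"]}% line coverage')
</pre>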
=== How: Test Development Workflow ===
Once we look at the current data, we need a way to identify and record decisions about which files will get special attention for test development.
This section describes the workflow for recording information about those priorities and decisions, and the process for starting, tracking, and completing the work of building test cases and getting them into automated test suites.
=== What: Code Coverage Reports and Tools ===
Code coverage runs happen on weekends, when extra build cycles are available. Shorter cycles will eventually be planned if they prove useful.
*[https://wiki.mozilla.org/QA:CodeCoverage DIY Firefox Instrumentation]
== Major Accomplishments Timeline ==
*8/6/09 - Discussion with JS team about code coverage
*8/5/09 - Discussion with Content team about code coverage
*7/??/09 - Completed first JS coverage run
*7/07/09 - Completed patch for serialized automated coverage runs, {{bug|502689}}
*3/25/09 - [http://people.mozilla.com/~timr/CodeCoverage/Sec%20bugs%20Analysis.htm Presentation] on security coverage and bugs analysis
*1/21/09 - First [http://people.mozilla.com/~timr/CodeCoverage/Codecoverage-Brownbag-01212009_v3.ppt Brownbag about Code Coverage]
== Enhancement Requests ==