QA/TDAI/Code Coverage

= Rationale =
We need a way to measure how effective our test strategy is.  We need to know how well we are actually covering the code base, and we need to determine whether there are areas of the code base that are not being adequately covered by the automated tests. 
 
We also want to analyze the coverage of the Litmus cases, and we want to combine this analysis with an analysis of Bugzilla to see which bug components are "hot" in terms of activity and other metrics.  The hope is to generate data that points to specific locations where focus is needed on improving the automated/regression testing systems.
 
Our focus here is more on showing where we '''don't''' have adequate coverage from our tests.  We aren't really going to use this to point out where we are "awesome" at testing, as that would be a bit [http://www-128.ibm.com/developerworks/java/library/j-cq01316?ca=dnw-704 wrong-headed].
 
== Things Code Coverage Won't Tell Us ==
* Whether the code actually works
* Whether the code is adequately tested (it only tells us whether the code '''is''' executed)
* Whether something with a high degree of coverage is well tested
 
== Things It Can Tell Us ==
* It can be used as a barometer (over time) to tell whether we are expanding our tests or duplicating effort over a single area.  Note that some duplication is very necessary in order to fully test various branches and pathways.
* It can be used to indicate areas of the codebase that are under-exercised by the current tests, and we can (over time) see if we are making progress on extending testing to those areas.
* It can give us a short list of areas in which to begin new test development efforts (see the sketch after this list).
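
To make that "short list" concrete, here is a minimal sketch (not part of the plan above) of how the aggregation could work.  It assumes line coverage has already been captured into an lcov tracefile; the file name coverage.info and the one-directory grouping depth are placeholder choices for illustration.

<pre>
#!/usr/bin/env python3
"""Aggregate lcov line-coverage totals per top-level source directory.

Assumes an lcov tracefile (e.g. from `lcov --capture`) in which each
record carries SF: (source file), LF: (lines found), and LH: (lines hit).
"""
from collections import defaultdict

def coverage_by_directory(tracefile, depth=1):
    found = defaultdict(int)  # instrumented lines, per directory bucket
    hit = defaultdict(int)    # lines executed at least once, per bucket
    current = None
    with open(tracefile) as f:
        for line in f:
            line = line.strip()
            if line.startswith("SF:"):
                # Bucket each source file under its first `depth` path parts.
                parts = line[3:].lstrip("/").split("/")
                current = "/".join(parts[:depth])
            elif line.startswith("LF:") and current is not None:
                found[current] += int(line[3:])
            elif line.startswith("LH:") and current is not None:
                hit[current] += int(line[3:])
            elif line == "end_of_record":
                current = None
    return found, hit

if __name__ == "__main__":
    found, hit = coverage_by_directory("coverage.info")  # placeholder name
    # Sort ascending by percent covered: the head of this list is the
    # "short list" of under-exercised areas.
    for d in sorted(found, key=lambda d: hit[d] / max(found[d], 1)):
        pct = 100.0 * hit[d] / max(found[d], 1)
        print("%6.1f%%  %-24s (%d of %d lines hit)" % (pct, d, hit[d], found[d]))
</pre>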
 
= Bug Analysis =
The idea for the bug analysis is to identify "hot" components in Bugzilla: components where lots of changes are landing and where there seems to be a need for help.  Our idea here is to look at the following items:
* bug arrival rates - the number of new bugs per unit of time, per component (a sketch of computing this follows the list).  This is a good indication of the stability of a component. ''Caveat:'' It could also be an indication of the triager's ability (or inability) to categorize something.  For example, the arrival rate on Firefox:General is completely meaningless.
* in-test-suite-? - This shows the number of bugs that people think need to be in the regression test suites.  It often indicates something above and beyond simple unit tests, since the developers are doing a great job of putting unit tests into their patches. ''Caveat:'' It could also just indicate areas where writing test cases is a pain (but that probably means those areas are under-covered).
* in-test-suite-+ - This is the other side of the in-test-suite question: bugs for which a test has actually been added to the suite.
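
As a rough illustration of the arrival-rate metric, the sketch below counts new bugs per component over a trailing window.  It assumes the Bugzilla REST API, a newer interface than the tooling this page originally had in mind; the server URL, the component list, and the 30-day window are placeholder choices.

<pre>
#!/usr/bin/env python3
"""Rough bug-arrival-rate counter using the Bugzilla REST API."""
from datetime import datetime, timedelta, timezone

import requests  # third-party: pip install requests

BUGZILLA = "https://bugzilla.mozilla.org/rest/bug"  # placeholder server

def arrival_count(product, component, days=30):
    """Count bugs filed against (product, component) in the last `days` days."""
    since = datetime.now(timezone.utc) - timedelta(days=days)
    resp = requests.get(BUGZILLA, params={
        "product": product,
        "component": component,
        # creation_time means "created at this time or later".
        "creation_time": since.strftime("%Y-%m-%dT%H:%M:%SZ"),
        "include_fields": "id",  # we only need to count the bugs
        "limit": 0,              # ask for all matches; some servers cap this
    })
    resp.raise_for_status()
    return len(resp.json()["bugs"])

if __name__ == "__main__":
    days = 30
    for product, component in [("Firefox", "General"), ("Core", "Layout")]:
        n = arrival_count(product, component, days)
        print(f"{product}::{component}: {n} new bugs in the last {days} days")
</pre>

Counting in-test-suite-? and in-test-suite-+ could be approached the same way with a flag-based search, though the exact query syntax should be checked against the server.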