
Community:SummerOfCode15

165 bytes added, 21:58, 12 February 2015
Automation & Tools: - fix mentor links for real
| Python, AngularJS, SQL, Javascript
| [[https://mozillians.org/en-US/u/wlach/ Will Lachance]], [[https://mozillians.org/en-US/u/jmaher/ Joel Maher]]
| The impact here is that developers and release managers can see the performance impact of their changes, while helping us track it.
|-
| Of the thousands of unit tests run for each platform and each push, we find many intermittent failures. This is a pain point for developers when they test their code on try server. Now that we have TreeHerder, it isn't much work to automatically annotate jobs as intermittent or as a regression/failure. In mochitest we have --bisect-chunk, which retries the given test and determines whether it is an intermittent or a real regression. The goal here is to do this automatically for all jobs on try server. Jobs will still turn orange, but with the outcome of this project, failures would have a different view in the UI.
| Python, Javascript
| [[https://mozillians.org/en-US/u/jmaher/ Joel Maher]]
| This will build off an existing set of tools while helping us bridge the gap towards a much better system for reviewing and automatically landing patches. In the short term, this will aid developers who see failures and either push multiple times, retrigger many jobs, or simply ignore them; in short, we will not need to worry as much about wasting resources on intermittents.
|-
| Among our thousands of test files, hundreds make dangerous API calls that result in leftover preferences, leftover permissions, and timing issues. A lot of work has been done here; we need to fix tests and expand our work on these resources to all our tests. In addition to cleaning up dangerous test code, we need to understand our tests and how reliable they are. We need to build tools that let us determine how safe and reliable our tests are, individually and as part of a suite. Upon completion of this project we should have the majority of tests cleaned up, along with a toolchain that can be run easily to generate a report on how stable each test is.
| Python, Javascript
| [[https://mozillians.org/en-US/u/mwargers/ Martijn Wargers]], [[https://mozillians.org/en-US/u/jmaher/ Joel Maher]]
| The impact is helping us clean up our tests to reduce intermittents, and giving us tools to write better tests and to understand our options for running tests in different configurations.
|}
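The retry-based classification that --bisect-chunk performs for mochitest can be sketched roughly as follows. This is a minimal illustration only, assuming a re-run strategy where any passing retry marks the failure as intermittent; run_test and the test path are hypothetical stand-ins, not the actual mochitest or --bisect-chunk API.

```python
def classify_failure(run_test, test_path, retries=5):
    """Re-run a failing test up to `retries` times.

    Return 'intermittent' if any retry passes, or 'regression' if the
    test fails on every attempt. `run_test` is a hypothetical callable
    that runs one test and returns True on pass, False on failure.
    """
    for _ in range(retries):
        if run_test(test_path):
            # A single pass after a failure suggests an intermittent.
            return "intermittent"
    # Consistent failure across all retries points at a real regression.
    return "regression"


# Example with a fake runner that fails twice, then passes.
outcomes = iter([False, False, True])
print(classify_failure(lambda path: next(outcomes), "dom/tests/test_example.html"))
```

In a real system, the result would then be fed back to TreeHerder as an annotation on the job rather than printed.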