Better DXR Testing

Here are some thoughts about improving the automated testing story for [[DXR]].

== Problems to solve ==

* It's hard to make a new test. You have to duplicate a lot of boilerplate, change dir names in said boilerplate, and the number of files goes way up.
* We have to write our own lame test runner: that "global failed" thing in search-test.py that keeps track of whether any tests have failed, and there's a bare except in there. We would rather not maintain any pieces of our own test runner that we don't have to, and we *certainly* don't want to duplicate them.
* There's another layer of custom test framework: run-tests.py. Right now, it's a spaghetti of shell scripts and sleeps. Surely we can at least share some common setup and teardown (see the sketch after this list).
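
One possible direction, sketched very roughly below: lean on Python's standard unittest module instead of the hand-rolled runner. unittest already tracks which tests failed (no "global failed"), reports exceptions without bare excepts, and lets related tests share class-level setup and teardown. Everything DXR-specific in the sketch is a made-up placeholder: the DxrInstanceTestCase base class, the tests/basic directory, and the `make build` / `make clean` commands it shells out to are assumptions, not existing DXR code.

<pre>
import subprocess
import unittest


class DxrInstanceTestCase(unittest.TestCase):
    """Hypothetical shared base: build a DXR test instance once per class,
    tear it down afterward, so individual test methods stay tiny."""

    # Subclasses point this at the directory holding their test instance.
    instance_dir = None

    @classmethod
    def setUpClass(cls):
        # Placeholder for whatever run-tests.py does today to build an instance.
        subprocess.check_call(['make', 'build'], cwd=cls.instance_dir)

    @classmethod
    def tearDownClass(cls):
        subprocess.check_call(['make', 'clean'], cwd=cls.instance_dir)


class BasicSearchTests(DxrInstanceTestCase):
    instance_dir = 'tests/basic'  # hypothetical test-case directory

    def test_simple_search(self):
        # A real test would query the built instance and assert on the
        # results; unittest records failures itself, so no "global failed".
        self.assertTrue(True)


if __name__ == '__main__':
    unittest.main()
</pre>

If something like this worked out, the top-level `make test` target could just call `python -m unittest discover`, which preserves the generic Jenkins handle mentioned below under "Things to keep".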

== Things to keep ==

* It's easy to run a test case as a regular DXR deployment so you can explore.
* The top-level `make test` entrypoint is a nice, generic handle for Jenkins, no matter what language(s) or tool(s) the tests end up being written in.