Better DXR Testing

== Problems to solve ==
* We have to write our own lame testrunner: that "global failed" thing in search-test.py that keeps track of whether any tests have failed. And there's a bare except in there. We would rather not maintain any pieces of our own testrunner that we don't have to, and we *certainly* don't want to duplicate them. (See the first sketch after this list.)
* There's another layer of custom test framework: run-tests.py. Right now, it's a spaghetti of shell scripts and sleeps. Surely we can at least share some common setup and teardown. (See the fixture sketch after this list.)
* The tested server runs in a different process from the testrunner, making dev clumsier. (We can't just set a breakpoint in a test and step from there into the search code, for instance. See the in-process sketch after this list.)
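
A minimal sketch of what dropping the hand-rolled runner could look like, assuming we standardize on Python's stock unittest module, which tracks failures and sets the exit status itself. The `search_for` helper and the query here are hypothetical stand-ins, not DXR's actual API:

```python
import unittest

def search_for(query):
    # Hypothetical stand-in for whatever issues a query against the
    # indexed tree; the real helper would hit the search backend.
    return [('main.c', 1)]

class SearchTests(unittest.TestCase):
    def test_simple_term(self):
        # unittest records the failure itself: no `global failed`
        # flag to maintain, and no bare `except` swallowing errors.
        results = search_for('main')
        self.assertTrue(any(path == 'main.c' for path, line in results))

if __name__ == '__main__':
    unittest.main()  # nonzero exit status on failure, for free
```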
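And one way run-tests.py's shell-and-sleep spaghetti could collapse into shared setup and teardown, again assuming unittest; `build_instance` is hypothetical, standing in for whatever indexes the test tree:

```python
import shutil
import tempfile
import unittest

class DxrInstanceTestCase(unittest.TestCase):
    # Build one test instance per class instead of re-running shell
    # scripts (and sleeping) before every single test.

    @classmethod
    def setUpClass(cls):
        cls.instance_dir = tempfile.mkdtemp()
        # build_instance(cls.instance_dir)  # hypothetical indexing step

    @classmethod
    def tearDownClass(cls):
        shutil.rmtree(cls.instance_dir)

    def test_something(self):
        # Every test in the class shares the already-built instance.
        self.assertTrue(self.instance_dir)
```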
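Finally, a sketch of running the server in-process, assuming it is (or can be wrapped as) a WSGI app; the tiny `app` and `get` helper here are stand-ins. Calling the app directly means no sockets and no second process, so a breakpoint in a test steps straight into the search code. (A WSGI test client like Werkzeug's would do the same job with less ceremony.)

```python
from io import BytesIO

def app(environ, start_response):
    # Stand-in for DXR's real WSGI app.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'results for ' + environ['QUERY_STRING'].encode('ascii')]

def get(app, path, query=''):
    # Minimal in-process GET: invoke the WSGI callable directly.
    captured = {}
    def start_response(status, headers):
        captured['status'] = status
    environ = {
        'REQUEST_METHOD': 'GET',
        'PATH_INFO': path,
        'QUERY_STRING': query,
        'SERVER_NAME': 'test',
        'SERVER_PORT': '80',
        'wsgi.url_scheme': 'http',
        'wsgi.input': BytesIO(),
    }
    body = b''.join(app(environ, start_response))
    return captured['status'], body

status, body = get(app, '/search', 'q=main')
assert status == '200 OK'
```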


== Things to keep ==
* It's easy to run a test case as a regular DXR deployment so you can explore.
* The top-level `make test` entrypoint is a nice, generic handle for Jenkins, no matter what language(s) or tool(s) the tests end up written with.