Performance Testing Reviews
This page provides a set of checks to perform when reviewing a patch, along with an outline of how to approach reviews.
Standard Checks and Process:
- Start by skimming through the patch to get a full picture of it.
- Check the try run. Make sure that the tests being run make sense for this change.
- Check that the try run performance numbers are similar to what we currently have. If they differ, discuss with the author (or r-) to determine whether those changes were expected.
- If the patch is a migration, make sure there is a documented plan (preferably with a dependent bug) of how it will proceed. Do not land unfinished migration patches; the new tests being added should be fully functional and tested.
- If the patch changes task configurations, ensure that the full taskgraph produces the desired results. We use this tool along with [diffchecker.com](https://www.diffchecker.com) to look at how a patch changes the tasks that will be run.
- Now go through the patch and scrutinize the code. Don't be afraid to make suggestions for changes; the author might not have considered what you're suggesting or what you're concerned about.
- If the patch is large or complex, don't try to review it all in one sitting; it's very likely that you'll miss something. Breaking the review into multiple passes has several benefits:
  - On the first pass, you don't need to dig very deep; outline all of the small or easy fixes here.
  - On the next pass, check that your earlier requests were addressed, then dig deeper into the code changes. Stepping away from the code and returning to it later means you'll likely find things you didn't consider on the first pass.
- Before providing the final review, double-check the reviewbot try run to make sure it didn't miss reporting anything on the patch. If there are failures related to the patch, mention them in the review and request changes to get them fixed.
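The "performance numbers are similar" check above can be sketched in code. This is a hypothetical helper, not a real tool from our infrastructure: the metric names, values, and the 2% threshold are all assumptions for illustration (real regression thresholds vary per suite and platform).

```python
# Hypothetical sketch: flag metrics whose try-run values deviate from the
# baseline by more than a chosen relative threshold. The 2% default and the
# metric names below are illustrative assumptions, not real policy.
def flag_regressions(baseline, try_run, threshold=0.02):
    """Return metrics whose relative change exceeds the threshold.

    baseline, try_run: dicts mapping metric name -> measured value.
    Returns a dict mapping metric name -> percent change (rounded).
    """
    flagged = {}
    for metric, base_value in baseline.items():
        if metric not in try_run or base_value == 0:
            continue  # can't compare a missing metric or divide by zero
        change = (try_run[metric] - base_value) / base_value
        if abs(change) > threshold:
            flagged[metric] = round(change * 100, 1)  # percent change
    return flagged

# Illustrative numbers only.
baseline = {"loadtime": 520.0, "fcp": 310.0, "speedindex": 1450.0}
try_run = {"loadtime": 545.0, "fcp": 311.0, "speedindex": 1448.0}
print(flag_regressions(baseline, try_run))  # only "loadtime" moved ~4.8%
```

A result like this is a prompt for discussion with the author, not an automatic r-; some changes are expected side effects of the patch.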
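For the taskgraph check, the core of the comparison is a set difference over task labels generated before and after the patch. The sketch below assumes you have the two label lists available (e.g. dumped from a local taskgraph generation); the label strings shown are made up for illustration.

```python
# Hypothetical sketch: compare two generated taskgraphs, represented as
# lists of task labels, to see which tasks a patch adds or removes.
# The labels below are illustrative, not taken from a real push.
def diff_task_labels(before, after):
    """Return (added, removed) task labels, each sorted."""
    before_set, after_set = set(before), set(after)
    added = sorted(after_set - before_set)
    removed = sorted(before_set - after_set)
    return added, removed

before = [
    "test-linux1804-64-qr/opt-browsertime-tp6-firefox-amazon",
    "test-linux1804-64-qr/opt-browsertime-tp6-firefox-google",
]
after = [
    "test-linux1804-64-qr/opt-browsertime-tp6-firefox-amazon",
    "test-linux1804-64-qr/opt-browsertime-tp6-firefox-youtube",
]
added, removed = diff_task_labels(before, after)
print(added)    # tasks introduced by the patch
print(removed)  # tasks no longer scheduled
```

A side-by-side diff tool like diffchecker.com gives the same information visually; the point is to confirm that every scheduling change is one the patch intended.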