Auto-tools/Projects/Autolog

Goal

The Autolog project seeks to implement a TBPL-like system for viewing test results produced by the a-team's various tools, at least those which aren't already hooked up to TBPL. Such projects potentially include mobile automation, Crossweave, profile manager, etc.

Project Phases

Phase 1 (Q2 2011)

  • front-end
    • essentially replicate TBPL, minus the tree-management functions
  • back-end
    • a python server that queries ElasticSearch and returns data in JSON
    • a bugzilla cache to make bugzilla queries fast
    • a hg cache to make hg queries fast
    • a documented REST API that test tools can use to submit test results
    • a python library for python test tools to make result submission easier
    • storage for test log files on brasstacks
  • integration
    • hook up at least one test tool (TPS) to autolog

Future Phases

  • front-end
    • intelligent display of stack traces, reftest images, and other extended data
    • ability to edit, delete orange comments
    • display test results by product, instead of just by tree
    • better display of TBPL data, including builds and talos runs
    • display cumulative stats for tests, possibly by way of OrangeFactor
  • back-end
    • a cache to reduce load on ElasticSearch and make the ES queries faster
    • storage for test log files on a metrics server
    • multi-threaded server to handle requests more efficiently

Implementation

Back-end

Data is stored in ElasticSearch, in the same instance that's used for OrangeFactor; Autolog data is kept in separate indices from WOO data. Data is published to the Autolog ES indices using the mozautolog python library.

See the data structure section, below.

Front-end

The UI is a modified version of TBPL. Unlike TBPL, the Autolog UI has a server-side component which queries ES and provides data to the Autolog JS code running in the browser. This is necessary because the ElasticSearch instance being used is inside the MPT network and can't be queried directly from public browsers. Using a server-side component provides other benefits, such as allowing us to use a Bugzilla cache, and to minimize the amount of data exchanged with the browser by performing some data processing on the server.

The Autolog UI displays data published into the Autolog ES indices by default, but can be used to display the buildbot data that is parsed for the War on Orange as well.

The Autolog UI is currently running at http://brasstacks.mozilla.com/autolog/.

Eventually it might be nice to move the JS code from its current ad-hoc jQuery style to a model-view-controller pattern, using the Web UX Platform. This would potentially make future maintenance and new features easier to write, although it would also mean any such changes would be impractical to back-port to TBPL.

Data Structure

Terminology

Testgroup: a collection of test suites that are run on the same machine as part of the same group. Each green or orange letter in TBPL is a testgroup. Each testgroup has one or more testsuites.

Testsuite: a group of tests that are run by the same process.

In TBPL, mochitest-other is a testgroup, while its constituent tests, like mochitest-a11y and mochitest-ipcplugins, are testsuites. Most testgroups (like reftest) have only one testsuite.
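The testgroup/testsuite relation described above can be illustrated as a simple mapping (the group and suite names are taken from this page):

```python
# Each testgroup contains one or more testsuites. mochitest-other is the
# notable case with several; most testgroups, like reftest, have exactly one.
testgroups = {
    "mochitest-other": ["mochitest-a11y", "mochitest-ipcplugins"],
    "reftest": ["reftest"],
}

# Every testgroup has at least one testsuite.
assert all(len(suites) >= 1 for suites in testgroups.values())
```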

Autolog data structure in ElasticSearch

Autolog data is separated into four different doc_types in ElasticSearch: testgroups, testsuites, testfailures, and perfdata. All these belong to the 'logs' index.
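As a sketch of how a client might search these documents, the following builds a request against the 'logs' index using the classic ElasticSearch REST API. The index name and doc_types are from this page; the host and the query shape are assumptions for a hypothetical local instance.

```python
import json

ES_SERVER = "http://localhost:9200"  # hypothetical local ES instance

def build_search(doc_type, field, value, size=10):
    """Return the URL and JSON body for a simple term search
    against one of the Autolog doc_types in the 'logs' index."""
    url = "%s/logs/%s/_search" % (ES_SERVER, doc_type)
    body = {"query": {"term": {field: value}}, "size": size}
    return url, json.dumps(body)

# e.g. find testgroup documents for a given tree:
url, body = build_search("testgroups", "tree", "mozilla-inbound")
# url -> "http://localhost:9200/logs/testgroups/_search"
```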

Testgroup structure

{
   buildtype: "opt"
   version: null
   total_test_failures: 0
   buildid: "20110626154522"
   testgroup_id: "1309132817-ca13f590057dddb932141eb837c72ad0820a82d9"
   harness: "buildbot"
   branch: null
   date: "2011-06-27"
   testrun: "4e87265b9c11.1"
   logurl: "http://stage.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mozilla-inbound-macosx64/1309128322/mozilla-inbound_leopard_test-jsreftest-build63.txt.gz"
   testgroup: "jsreftest"
   buildurl: "http://stage.mozilla.org/pub/mozilla.org/firefox/tinderbox-builds/mozilla-inbound-macosx64/1309128322/firefox-7.0a1.en-US.mac.dmg"
   builder: "mozilla-inbound_leopard_test-jsreftest"
   tree: "mozilla-inbound"
   productname: null
   machine: "talos-r3-leopard-020"
   platform: "macosx"
   testsuite_count: 1
   starttime: 1309132817
   frameworkfailures: null
   os: "leopard"
   total_perf_records: 0
   pending: false
   revision: "4e87265b9c11"
}

Testsuite structure

{
   testsuite: "reftest"
   testgroup: "reftest-ipc"
   starttime: 1309132966
   cmdline: "python reftest/runreftest.py --appname=firefox/firefox-bin --utility-path=bin --extra-profile-file=bin/plugins --symbols-path=symbols --setpref=browser.tabs.remote=true reftest/tests/layout/reftests/reftest-sanity/reftest.list"
   buildid: "20110626160901"
   todo: "12"
   testgroup_id: "1309132966-f71cdc0789f22a84f62aea84cc826b93a266bf1f"
   os: "fedora"
   tree: "mozilla-central"
   elapsedtime: "11"
   failed: "0"
   buildtype: "opt"
   machine: "talos-r3-fed-055"
   platform: "linux"
   passed: "58"
   date: "2011-06-27"
   testsuite_id: "1309132966-f71cdc0789f22a84f62aea84cc826b93a266bf1f-testsuite1"
   testfailure_count: 0
   revision: "6a3e7aebda53"
}
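Note that the passed/failed/todo counts in a testsuite document are stored as strings. A hypothetical helper that summarizes a suite would coerce them before doing arithmetic:

```python
def summarize_testsuite(doc):
    """Summarize a testsuite document's counts (stored as strings)."""
    passed = int(doc["passed"])
    failed = int(doc["failed"])
    todo = int(doc["todo"])
    # 'todo' tests are known failures and aren't counted as run here.
    return {"run": passed + failed, "passed": passed,
            "failed": failed, "todo": todo}

# Using the counts from the sample document above:
suite = {"passed": "58", "failed": "0", "todo": "12"}
print(summarize_testsuite(suite))
# -> {'run': 58, 'passed': 58, 'failed': 0, 'todo': 12}
```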

Testfailure structure

{
   buildtype: "opt"
   errors: [
       {
           status: "TEST-UNEXPECTED-FAIL"
           text: "test failed (with xpcshell return code: 0), see following log:"
       },
       {
           status: "TEST-UNEXPECTED-FAIL"
           text: "2147746065"
       }
   ]
   testfailure_id: "1309132818-6476508d66a9686a94e1d1c1b8671f5f73125e76-testfailure1.1"
   buildid: "20110626154522"
   os: "leopard"
   testgroup_id: "1309132818-6476508d66a9686a94e1d1c1b8671f5f73125e76"
   tree: "mozilla-inbound"
   machine: "talos-r3-leopard-013"
   platform: "macosx"
   starttime: 1309132818
   date: "2011-06-27"
   test: "xpcshell/tests/services/sync/tests/unit/test_syncengine_sync.js"
   testgroup: "xpcshell"
   revision: "4e87265b9c11"
   testsuite_id: "1309132818-6476508d66a9686a94e1d1c1b8671f5f73125e76-testsuite1"
   logurl: null
}
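A testfailure document carries a list of {status, text} error records. A hypothetical formatter could render each as one summary line in the familiar "STATUS | test | text" log style:

```python
def format_errors(failure):
    """Render a testfailure document's errors as log-style lines."""
    test = failure["test"]
    return ["%s | %s | %s" % (e["status"], test, e["text"])
            for e in failure["errors"]]

failure = {
    "test": "xpcshell/tests/services/sync/tests/unit/test_syncengine_sync.js",
    "errors": [{"status": "TEST-UNEXPECTED-FAIL", "text": "2147746065"}],
}
for line in format_errors(failure):
    print(line)
```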

Perfdata structure

{
   buildtype: "debug"
   testgroup: "xperfstartup"
   platform: "win32"
   buildid: "20110216155739"
   os: "win7"
   testgroup_id: "2xj4Fex1Qh-_H1g3kfpnRw"
   tree: "mozilla-central"
   machine: "test"
   perfdata: [
       {
           read_bytes: 1085440
           type: "diskIO"
           name: "\Device\HarddiskVolume1\test0511\firefox\xul.dll"
           reads: 44
       },
       {
           count: 314
           type: "pagefaults"
           name: "firefox.exe"
       },
       {
           count: 314159
           type: "pagefaults"
           name: "system"
       }
   ]
   starttime: 1297900396
   date: "2011-02-16"
   test: "xperfstartup"
   testsuite: "xperfstartup"
   testsuite_id: "4xaGI6ejSgSTWL0x3eENpw"
   revision: "0f777e59d48c"
}
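The perfdata array mixes record types (diskIO, pagefaults, ...), each with its own fields. A hypothetical aggregator summing the 'count' field per record type might look like:

```python
from collections import defaultdict

def total_counts(perfdata):
    """Sum the 'count' field of perfdata records, grouped by type.
    Records without a 'count' (e.g. diskIO) are skipped."""
    totals = defaultdict(int)
    for rec in perfdata:
        if "count" in rec:
            totals[rec["type"]] += rec["count"]
    return dict(totals)

# Using the records from the sample document above:
records = [
    {"type": "diskIO", "reads": 44, "read_bytes": 1085440},
    {"type": "pagefaults", "name": "firefox.exe", "count": 314},
    {"type": "pagefaults", "name": "system", "count": 314159},
]
# total_counts(records) -> {'pagefaults': 314473}
```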

Q: Why do we separate the data into separate document types; why not just use one big document?
A: Because searches in ElasticSearch are much faster and easier with basic data types; searching inside complex nested JSON is slower and the syntax is much more complex.

Q: Can't the python library automatically provide 'os' and 'platform'?
A: It would be nice, wouldn't it? Unfortunately, there are lots of things which can confuse the issue; e.g., if you're using mozilla-build on Windows, it will see your 64-bit version of Windows as win32, regardless of what you're testing. Similarly, we sometimes test 32-bit Mac stuff on macosx64. It seems safest to have the test tools provide this data instead of trying to guess.

Q: Why do we have both testgroup and testsuite?
A: It's entirely to support mochitest-other.  :( In most cases, each testgroup will have one testsuite.

Q: Where are the test runs in this structure?
A: We've been using the term 'testrun' to mean different things in different places. In this structure, I imagine 'testrun' to mean the same thing as it does in OrangeFactor: that is, a collection of testgroups that are run against the same primary changeset.

Q: Is this really the best way to include data about multiple products, or code from multiple repos?
A: I'm not sure. I suggested this structure because it's easy to use when searching ES. Other structures are possible. For instance, we could create a 'product' document type, and store all the products there, and then just include indexes to this document in the 'testgroup' document. The downside to this is that getting certain data out of ES would require multiple queries.

TBPL Data Structure

The basic unit of data in TBPL is the push. A push according to TBPL looks like this:

"b853c6efa929": {
 "id": 19218,
 "pusher": "dougt@mozilla.com",
 "date": "2011-03-17T20:50:37.000Z",
 "toprev": "b853c6efa929",
 "defaultTip": "b853c6efa929",
 "patches": [
  {
   "rev": "b853c6efa929",
   "author": "Doug Turner",
   "desc": "Bug 642291 - crash [@ nsBufferedInputStream::Write] demos.mozilla.org motovational poster. ipc serialization does not work here, removing it. r=bent a=blocking-fennec",
   "tags": {
    "length": 0,
    "prevObject": {
     "length": 0
    }
   }
  }
 ]
},

Additionally, each push can have a 'results' key, which contains all the results associated with that push. If the 'results' key exists, it looks like this:

'results': {
  'linux': {
    'opt': {
      'Reftest': [ an array of machineResults ],
      'Mochitest': [ an array of machineResults ],
      etc,
    },
    'debug': {}
  },
  'linux64': {}, etc
}
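The 'results' key nests platform → buildtype → test type → list of machineResults. As a sketch (in Python, for illustration, though TBPL itself is JS), walking the nesting to count results per platform looks like:

```python
def count_results(results):
    """Count machineResults per platform in a TBPL-style 'results' dict."""
    counts = {}
    for platform, buildtypes in results.items():
        n = 0
        for buildtype, groups in buildtypes.items():
            for name, machine_results in groups.items():
                n += len(machine_results)
        counts[platform] = n
    return counts

# A miniature example mirroring the structure above:
results = {
    "linux": {"opt": {"Reftest": ["r1", "r2"], "Mochitest": ["m1"]},
              "debug": {}},
    "linux64": {},
}
# count_results(results) -> {'linux': 3, 'linux64': 0}
```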

Each 'machineResult' looks like this:

"1300280775.1300281487.29409.gz": {
 "tree": "Firefox",
 "machine": {
  "name": "Rev3 WINNT 5.1 mozilla-central opt test mochitests-2/5",
  "os": "windowsxp",
  "type": "Mochitest",
  "debug": false,
  "latestFinishedRun": (a reference to the last finished run for this machine),
  "runs": 0,
  "runtime": 0,
  "averageCycleTime": 0
 },
 "slave": "talos-r3-xp-039",
 "runID": "1300280775.1300281487.29409.gz",
 "state": "success",
 "startTime": "2011-03-16T13:06:15.000Z",
 "endTime": "2011-03-16T13:19:01.000Z",
 "briefLogURL": "http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1300280775.1300281487.29409.gz",
 "fullLogURL": "http://tinderbox.mozilla.org/showlog.cgi?log=Firefox/1300280775.1300281487.29409.gz&fulltext=1",
 "summaryURL": "php/getSummary.php?tree=Firefox&id=1300280775.1300281487.29409.gz",
 "revs": {
  "mozilla-central": "ee18eff42c2e"
 },
 "notes": [],
 "errorParser": "unittest",
 "_scrape": [
  " s: talos-r3-xp-039",
  "<a href=http://hg.mozilla.org/mozilla-central/rev/ee18eff42c2e title=\"Built from revision  e18eff42c2e\">rev:ee18eff42c2e</a>",
  " mochitest-plain-2
11855/0/292"
 ],
 "push": (reference to the push this belongs to),
 "getTestResults": a function,
 "getScrapeResults": a function,
 "getUnitTestResults": a function,
 "getTalosResults": a function
},

All of this gets fed into UserInterface.js in the handleUpdatedPush() function.

Publishing data into Autolog

Tools wishing to publish data into Autolog should use the mozautolog python library (http://hg.mozilla.org/users/jgriffin_mozilla.com/mozautolog/). See documentation at http://hg.mozilla.org/users/jgriffin_mozilla.com/mozautolog/raw-file/tip/README.html.

A simple script that uploads some (static, for the sake of clarity) performance data to Autolog might look like:

 from mozautolog import RESTfulAutologTestGroup
 
 def main():
   testgroup = RESTfulAutologTestGroup(
     testgroup = 'xperfstartup',
     os = 'win7',
     platform = 'win32',
     machine = 'test',
     starttime = 1297900397,
     builder = 'mozilla-central_win7-debug_test-xperfstartup',
     server = '127.0.0.1:9200',
     restserver = 'http://127.0.0.1:8051/'
   )
   testgroup.set_primary_product(
     tree = 'mozilla-central',
     buildtype = 'debug',
     buildid = '20110216155739',
     revision = '0f777e59d48c',
   )
   testgroup.add_perf_data(
     test = 'xperfstartup',
     type = 'diskIO',
     name = '\\Device\\HarddiskVolume1\\test0511\\firefox\\xul.dll',
     reads = 44,
     read_bytes = 1085440
   )
   testgroup.add_perf_data(
     test = 'xperfstartup',
     type = 'pagefaults',
     name = 'firefox.exe',
     count = 314
   )
   testgroup.add_perf_data(
     test = 'xperfstartup',
     type = 'pagefaults',
     name = 'system',
     count = 314159
   )
   testgroup.submit()
 
 if __name__ == '__main__':
   main()

Setting up a Development Environment

Pre-requisites:

Steps:

  1. Set up a local instance of ElasticSearch for development purposes. (You can optionally populate it with test data; see README-testdata.txt, but this is broken as of 2012-11-28.) By default, this will operate on http://localhost:9200/
  2. Edit autolog_server.conf, changing both es_server and bz_cache_server in [autolog] to localhost:9200
  3. Edit js/Config.js, changing autologServer attribute of the Config object to "http://localhost:8051/". You may also need to add extra IDs to the OSNames.
  4. Start the autolog server in the autolog repo, using python autolog_server.py . (yes, include the dot at the end)
  5. Host the autolog repo using a webserver; I use Apache but presumably nginx or anything else would work equally well. Alternatively, you can load it locally with a file URL.
  6. Navigate to index.html in the autolog repo; depending on how you've configured your webserver this might look something like http://localhost/autolog/ or file:///path/to/autolog/index.html.

Notes:

  • The test data is inserted into the local ES using relative dates, i.e., "5 minutes ago". If you are testing code the day after you added the test data, you might want to reset the test data so that it appears with more recent dates. To do so, use these commands (from the autolog repo):
 python testdata.py --wipe
 python testdata.py