Auto-tools/Projects/Loderunner

We have a ton of duplicated code in our test harness infrastructure. Several components ought to be easy to reuse and aren't; some were even created to be reusable and still aren't. We need a better set of APIs for low-level harness tasks that all our code can share. That shared code should provide an API that is useful and simple to use, so that other people (and we) can build on top of it without reinventing the wheel or further duplicating and tailoring the existing code. This project aims to provide that low-level API.

Let's start a discussion on the API; in that vein, let's list out the requirements.

Some Requirements

I've broken down requirements into specific actions to make it easier to grapple with one problem space at a time.

Profile Management

  • Purpose: Used by the harness to manage the profile for the application under test
  • Create an empty profile
  • Create a profile with a set of default data
    • Able to specify a set of extensions for the profile
    • Able to specify a set of preferences for the profile
    • Able to specify a set of bookmark data for the profile (bookmarks, folders, tags, livemarks, annotations, etc)
    • Able to specify a set of stored password data for the profile
    • Able to specify a set of download data for the profile
    • Able to specify a set of history data for the profile
    • Able to specify a set of "sessionrestore" data for the profile
  • Extension point - allow people to easily create their own profiles with their own unique data
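
As a rough sketch of what such a profile API could look like (the ProfileManager name and the prefs format come from the API Discussion example below; the other method names here are hypothetical):

import os
import shutil
import tempfile

class ProfileManager(object):
    # Hypothetical sketch of the profile API described above.

    def __init__(self, prefs=None):
        # Create an empty profile in a temporary directory.
        self.profile_dir = tempfile.mkdtemp(suffix=".mozprofile")
        if prefs:
            self.writePref(prefs)

    def writePref(self, prefs):
        # prefs is a list of {"pref": name, "value": value} dictionaries;
        # values are written as strings here for simplicity.
        with open(os.path.join(self.profile_dir, "user.js"), "a") as f:
            for p in prefs:
                f.write('user_pref("%s", "%s");\n' % (p["pref"], p["value"]))

    def installExtension(self, xpi_path):
        # Copy the XPI into the profile's extensions directory; the real
        # install procedure varies by application version.
        ext_dir = os.path.join(self.profile_dir, "extensions")
        if not os.path.isdir(ext_dir):
            os.makedirs(ext_dir)
        shutil.copy(xpi_path, ext_dir)

    def cleanup(self):
        shutil.rmtree(self.profile_dir, ignore_errors=True)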

Process Management

  • Obtain running processes
  • Kill running processes
  • Start a process in a monitorable and killable way
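
A minimal sketch of these operations using Python's standard subprocess module; a complete implementation would need platform-specific code (or a library such as psutil) to enumerate arbitrary running processes or kill whole child-process trees:

import subprocess

def startProcess(cmd, env=None):
    # cmd is a list, e.g. ["firefox", "-P", "test"]; capturing output
    # lets the caller monitor the process as well as kill it.
    return subprocess.Popen(cmd, env=env,
                            stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT)

def isRunning(proc):
    # poll() returns None while the process is still alive.
    return proc.poll() is None

def killProcess(proc):
    # terminate() sends SIGTERM on POSIX and TerminateProcess on win32.
    if isRunning(proc):
        proc.terminate()
        proc.wait()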

Application Management

  • Purpose: Used by the harness to handle the specific details of the application under test
  • Starts an application
    • Able to specify any command line arguments to the application under test
    • Able to specify environment variables to the application under test
  • Determine if the application is hung
  • Determine if the application crashes
    • Able to log the crash stack if such a crash occurs
  • Provide the application's path
  • Provide the application's name
  • Kill the application and all child processes
  • Able to start the application with a given profile
  • Ability to work seamlessly on all platforms (currently win32, OS X, Linux)
  • Support for remote platforms either through integrating specific technologies or extensibility
  • Extension point - allow an application to use any command line attributes easily
  • Extension point - detect the application version, in case functionality of other components forks based on app version
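
As a rough sketch of how an ApplicationManager could tie the binary, profile, arguments, and environment together (the ApplicationManager name comes from the API Discussion below, though the constructor signature there differs; the -profile flag and the timeout-based hang detection are assumptions, not settled design):

import os
import subprocess
import time

class ApplicationManager(object):
    # Hypothetical sketch: starts the application under test with a
    # given profile and watches for hangs.

    def __init__(self, binary, profile, extra_args=None, env=None):
        self.binary = binary          # path to the application under test
        self.profile = profile        # a ProfileManager instance
        self.extra_args = extra_args or []
        self.env = dict(os.environ, **(env or {}))
        self.proc = None

    def start(self):
        # -profile is how Firefox takes a profile path; other
        # applications would plug in here via the extension point.
        cmd = [self.binary, "-profile", self.profile.profile_dir] + self.extra_args
        self.proc = subprocess.Popen(cmd, env=self.env)

    def wait(self, timeout=300):
        # Crude hang detection: poll until the process exits or the
        # timeout elapses; a real harness would also watch output.
        start = time.time()
        while self.proc.poll() is None:
            if time.time() - start > timeout:
                self.proc.kill()
                raise RuntimeError("application hung: no exit within %ss" % timeout)
            time.sleep(1)
        return self.proc.returncode  # nonzero may indicate a crash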

Log Management

  • Purpose: Provides ability to log information from both within the application and from the outside harness
  • Allow levels of logging
    • Example: DEBUG, INFO, ERROR, WARNING etc
  • Provides logging to stdout in a common format for all test harnesses, regardless of whether the logging occurs in the harness logic (Python) or in the application logic (JavaScript)
  • Handles writing to log files
  • Handles writing to stdout
  • Consider logging to web services or alternative formats. I am thinking of Talos uploading to a webserver, or uploading results to brasstacks. Extension point?
  • Extension point - provide both in-test logging and debug logging for use during development.
  • Extension point - should be able to reuse this logging functionality in new programs without bringing along unrelated pieces (should be standalone)
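
Python's standard logging module already covers levels, stdout, and file output, so one possible sketch of a shared LogHandler is a thin factory around it; the format string and the TEST-PASS/TEST-UNEXPECTED-FAIL message shapes below are illustrative, not decided:

import logging
import sys

def getLogger(name, logfile=None, level=logging.INFO):
    # One common format string keeps harness output uniform.
    fmt = logging.Formatter("%(asctime)s %(levelname)s | %(name)s | %(message)s")
    logger = logging.getLogger(name)
    logger.setLevel(level)

    stdout_handler = logging.StreamHandler(sys.stdout)
    stdout_handler.setFormatter(fmt)
    logger.addHandler(stdout_handler)

    if logfile:
        file_handler = logging.FileHandler(logfile)
        file_handler.setFormatter(fmt)
        logger.addHandler(file_handler)
    return logger

log = getLogger("myharness", logfile="harness.log")
log.debug("only shown at DEBUG level")
log.info("TEST-PASS | test_foo.js | took 12ms")
log.error("TEST-UNEXPECTED-FAIL | test_bar.js | assertion failed")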

See also the Log Parsing section.

Dependency Management

  • Purpose: Allow test harnesses to create dependencies that they need in order to run, for example, a webserver
  • Provide a way to start a dependent application
  • Provide a way to stop a dependent application
  • Provide a way to send command line arguments to the dependent application
  • Extensibility point - this has to be very extensible by nature. Each new dependent application will need its own "Dependency Manager" API, which the test harness can then use in conjunction with the Application Manager to ensure that the application has all needed dependencies when it starts up.
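
One way to get that extensibility is a small base class that each dependent application subclasses; the HttpdWebServer name comes from the API Discussion example below, while its command line here is a placeholder:

import subprocess

class DependencyManager(object):
    # Hypothetical base class: each dependent application (webserver,
    # proxy, device agent, ...) subclasses this and fills in its command.

    command = None  # list of command line arguments, set by subclasses

    def __init__(self, extra_args=None):
        self.extra_args = extra_args or []
        self.proc = None

    def start(self):
        self.proc = subprocess.Popen(self.command + self.extra_args)

    def stop(self):
        if self.proc and self.proc.poll() is None:
            self.proc.terminate()
            self.proc.wait()

class HttpdWebServer(DependencyManager):
    # Placeholder command; a real subclass would launch the actual
    # webserver the harness depends on.
    command = ["python", "simple_webserver.py", "--port", "8888"]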

Command Line Manager

  • Purpose: Maintains a common set of command line options across all the harnesses.
  • Handles parsing
  • Handles defaults
  • Handles help messages
  • Handles validation of command lines (which can be overridden by the harness)
  • Extensibility point - test harnesses will need to be able to add their own specific command line options
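
A sketch of the shared options plus the harness-specific extension point on top of Python's argparse; apart from logFileOption (used in the API Discussion below), the option names here are hypothetical:

import argparse

def buildBaseParser():
    # Shared options common to all harnesses; defaults and help
    # messages live here, in one place.
    parser = argparse.ArgumentParser()
    parser.add_argument("--binary", required=True,
                        help="path to the application under test")
    parser.add_argument("--log-file", dest="logFileOption", default=None,
                        help="also write the log to this file")
    parser.add_argument("--profile", default=None,
                        help="use an existing profile instead of a fresh one")
    return parser

# Extensibility point: a harness layers its own options on top.
parser = buildBaseParser()
parser.add_argument("--total-chunks", type=int, default=1,
                    help="hypothetical harness-specific option")
options = parser.parse_args(["--binary", "/path/to/firefox"])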

Test List Generator

  • Purpose: builds a list of tests to run
  • Handles reading and parsing a manifest file
  • Optionally accepts a list of criteria to use for including/excluding tests (e.g., "maemo", "android")
  • Extension point - derive the test list from other sources, e.g., a database
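
A sketch assuming a deliberately minimal manifest format (one test per line, optional tags); the real manifest format would be richer than this:

def readManifest(path, excluded_tags=()):
    # Each manifest line is "test_name [tag1 tag2 ...]" and "#" starts
    # a comment.
    tests = []
    with open(path) as f:
        for line in f:
            line = line.split("#", 1)[0].strip()
            if not line:
                continue
            parts = line.split()
            name, tags = parts[0], set(parts[1:])
            # Skip tests tagged for platforms we are excluding,
            # e.g. excluded_tags=("maemo", "android") on desktop.
            if tags & set(excluded_tags):
                continue
            tests.append(name)
    return tests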

Config File Manager (potentially)

  • Purpose: processes config files, which could be used to replace the current interpolation mechanism used in several test harnesses.
  • Handles reading configuration variables from a file
  • Makes the config variables available for use by other components
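
A sketch built on Python's standard configparser, assuming an INI-style file; the actual config format is an open question:

from configparser import ConfigParser

def loadConfig(path):
    # Read configuration variables from the file and expose them as a
    # flat dict for other components to consume.
    parser = ConfigParser()
    parser.read(path)
    config = {}
    for section in parser.sections():
        for key, value in parser.items(section):
            config["%s.%s" % (section, key)] = value
    return config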

Log Parser (potentially)

While this might be worth tackling separately (none of the harnesses parse logs themselves), pulling failures out of the logs is closely related, and having a unified log format will make it much easier. Today topfails as well as brasstacks (and probably more tools) duplicate this functionality. The parser should really ship with the writer once logging is a free-standing component.
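
Given a unified format, the parser could be as small as a regular expression over the log lines; the failure-line shape below is an assumption borrowed from the Log Management sketch above:

import re

# Assumes lines like "TEST-UNEXPECTED-FAIL | test_bar.js | assertion failed".
FAILURE_RE = re.compile(r"TEST-UNEXPECTED-\w+ \| (?P<test>\S+) \| (?P<message>.*)")

def parseFailures(log_lines):
    # Returns a list of {"test": ..., "message": ...} dictionaries that
    # tools like topfails or brasstacks could consume.
    matches = (FAILURE_RE.search(line) for line in log_lines)
    return [m.groupdict() for m in matches if m]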

API Discussion

So perhaps a harness that uses the above abstractions could look like this:

# Assume myCmdLineMgr is a Command Line Manager instance
mylogger = LogHandler(myCmdLineMgr.logFileOption)
myPrefs = [{"pref": "user.pref.foo", "value": "5"}]
profile = ProfileManager(myPrefs)
# Create a dependency manager object for Httpd
mywebsrv = HttpdWebServer()

# Do some set up
profile.writePref([{"pref": "some.new.pref", "value": "some new value"}])

# Start our webserver
mywebsrv.start()

myApplication = ApplicationManager(profile, myCmdLineMgr.getAllOptions(), mylogger)
myApplication.start()

Why a list of dictionaries for prefs instead of just a dictionary, e.g.:

myPrefs = { "user.pref.foo": "5" }
...
profile.writePref(**{'some.new.pref': 'some.new.value'})
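
One possible answer: a plain dictionary is terser, but a list of dictionaries preserves ordering and leaves room for per-pref metadata later (for example, a type field to distinguish integer from string prefs). Whichever form wins, it should probably be accepted consistently by every component that deals with prefs.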

Some Goals

  • Each piece of functionality should have a clear interface built into it. Each component should rely only on the published interfaces of the other components. That way, we can easily quantify the impact of a given change by knowing what depends on what.
    • In the example above, I assert that the Application Manager doesn't know a thing about the commandline manager, it simply uses a list returned by getAllOptions(), for example.
  • The interface definition for each component will be codified into a set of unit tests for the component. That way we can easily tell when we break an interface, and the tests double as documentation for how the component is used.
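
For example, the interface contract for the hypothetical ProfileManager sketch from the Profile Management section could be codified like this:

import os
import unittest

class TestProfileManagerInterface(unittest.TestCase):
    # The test both checks the contract and documents the usage.

    def test_writePref_lands_in_user_js(self):
        profile = ProfileManager([{"pref": "user.pref.foo", "value": "5"}])
        try:
            with open(os.path.join(profile.profile_dir, "user.js")) as f:
                self.assertIn('user_pref("user.pref.foo", "5");', f.read())
        finally:
            profile.cleanup()

if __name__ == "__main__":
    unittest.main()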

Some Non-Goals

  • We don't want to support every nuance of the existing harnesses in these components. This API should be extensible enough to support those harnesses without being hamstrung by them.

Plan

  1. Identify the set of components to generalize
  2. Identify the interfaces on the components (review the existing harnesses and see what basic functions need to be supported in these)
  3. Identify the API usage mechanism for extensibility and for harness use (review existing harnesses and see what specialized needs each will have and ensure those can be provided given the interface definitions in step 2 and the API usage mechanism of this step).
  4. Code the API use into a set of tests
  5. Code the interfaces and the modules.
  6. Determine a packaging solution for these components - some might want to be command line tools (the profile manager comes to mind); co-location in directories might work, but providing reusable Python modules might also be a good strategy
  7. Determine release criteria for how these components are used in the main tree with the main harnesses. How many tests must pass? What are the release steps? Where do they land in the main tree? What changes to the makefile targets are needed to pick them up, etc.?
  8. Integrate said modules into the existing harnesses, refactoring specialized data as needed (and moving basic functionality into these components where appropriate)
  9. Profit!

Identify the set of components to generalize

  • mozrunner
    • deals with profiles
      • creates new (temporary) profiles
      • installs addons
      • sets preferences
    • manages processes
    • runs applications
  • talos
    • deals with profiles (though tightly coupled with running tests)