Auto-tools/Projects/SpeedTests

== Results ==
 
See results at http://brasstacks.mozilla.com/speedtests.html.


== Description ==
Unfortunately, I think legal restrictions prevent us from distributing the Microsoft-originated tests themselves.


== Configuration ==
 
The main configuration task is unfortunately not easy.  All the browsers must have a stored profile that permits the browser to
* open a new window (all tests are run in a separate window to control window size)
* make AJAX calls, to send the results to the server
* load a page from localhost
 
Some browsers are more particular than others about what is allowed out of the box.


The profiles should also have empty caches or a clear-cache-on-exit setting.  This ensures that any changes to the tests on the server will be picked up the next time the clients are run.


I have been collecting some stock profiles, but unfortunately they are not always compatible between releases.  The most reliable way to set up the stored profiles is through the "archive" and "load" commands, followed by testing in test mode (see below).
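The profile handling itself isn't reproduced here, but the basic idea of a stock profile is simply to copy the archived profile over the browser's current profile directory before each run.  Roughly, with illustrative names and paths rather than the actual implementation:

<pre>
import os
import shutil

def install_stock_profile(stock_dir, profile_dir):
    # Overwrite the browser's current profile with the archived stock copy,
    # so every run starts from the same known-good, empty-cache state.
    if os.path.exists(profile_dir):
        shutil.rmtree(profile_dir)
    shutil.copytree(stock_dir, profile_dir)
</pre>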


Finally, in order to prevent updates to browsers that might affect performance while tests are being run, it is recommended that the network be partially disabled, with access only to the test server.  This can be done by disabling DNS and putting the server IP into the hosts file.  A standalone script, nw.py, has been provided to enable and disable DNS on Windows machines (it must be run as administrator).
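The nw.py script itself isn't reproduced here; roughly, though, a script along these lines can toggle DNS on Windows via netsh, used together with a hosts file entry pointing the test server's name at its IP.  The interface name below is an assumption and varies by machine, and the commands must be run from an administrator prompt:

<pre>
import subprocess
import sys

# Adjust to the machine's actual network interface name (assumption).
INTERFACE = "Local Area Connection"

def disable_dns():
    # Point DNS at an invalid server (127.0.0.1) so browsers cannot reach
    # update servers; the test server stays reachable via the hosts file.
    subprocess.check_call(["netsh", "interface", "ip", "set", "dns",
                           INTERFACE, "static", "127.0.0.1"])

def enable_dns():
    # Restore normal DNS from DHCP, e.g. before installing desired updates.
    subprocess.check_call(["netsh", "interface", "ip", "set", "dns",
                           INTERFACE, "dhcp"])

if __name__ == "__main__":
    if "enable" in sys.argv:
        enable_dns()
    else:
        disable_dns()
</pre>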
== Issues ==

As with any automation, there are finicky things that need to be worked out (or around):

* Window size.  For a proper comparison, we want all browsers to run in the same-sized window.  However, not all OSs appear to provide a way to dictate window size when launching an application (testing so far has been on OS X, for which I have not been able to find a suitable solution).
** RESOLVED A start page was added to open the tests themselves in a second window of a set size (currently 1024x768).  There are very slight differences in how browsers interpret sizes specified in window.open() (e.g. window.innerHeight and .innerWidth can vary), but not enough to make much of a difference.

* Security preferences.  Some browsers, notably Opera, don't like pages that try to redirect to localhost.  Some prompt the user to restore the previous session upon launch.  This requires some sort of initial setup; hopefully a saved profile will be sufficient.  However, Opera sometimes forgets security settings and reverts to user prompts, which is not useful in an automated environment.
** RESOLVED Using a stock profile to overwrite the current profile each time the tests are launched appears to have overcome these problems.

* Frozen tests.  It's entirely possible that a test won't finish, either because of a problem in the test or wonky browser behaviour.  The controller could kill the browser in this case, but the controller doesn't know how long the current test has been running (or even which test it's executing).  Furthermore, we'd probably lose the rest of the test results.<br><br>A possible solution would be to move the test-loading logic to the controller instead of having it in the test itself.  However, this prevents a casual user from running the test suite independently of a controller (by just going to http://.../nexttest/).  It also requires finer control over the browser.
** RESOLVED We give the controller a total test time of 10 minutes, after which it kills the browser and moves on (a sketch of this approach appears after this list).  This obviously can't be used when going straight to the tests URL without using the controller, but data gained the latter way is much harder to use anyway.

* Browser and system updates.  Some browsers check for updates and, either silently or with a user prompt, upgrade themselves.  Also, Windows updates might be downloaded and/or automatically installed.  This is not desirable, since it can throw off results if a test is running at the same time.
** RESOLVED We limit network access on the machine.  We do this by disabling DNS (by providing an invalid DNS server, e.g. 127.0.0.1) and adding the test server IP address to the hosts file.  Unfortunately this requires periodic manual maintenance in order to install desired updates (e.g. to test against recent browser releases), but it should be minimal.

* Sudden drops in performance.  After a few days of running well, several browsers suddenly reported extremely different results, generally much lower.  There were also service interruptions.  These problems continued unabated.
** RESOLVED (I THINK?) After a configuration change, the machine was no longer automatically logging in.  This caused some runs not to occur at all, and some ran through a virtual remote-desktop terminal.  Also, remote desktop can apparently do strange things; there is some evidence that the nightlies stopped working properly after they were launched from remote desktop.  VNC seems to have restored the correct behaviour.  VNC should be the only form of access to the machines, to eliminate strange problems like this.
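The controller code isn't shown here; a minimal sketch of the timeout idea, assuming the controller launches each browser as a subprocess (the function and variable names are illustrative):

<pre>
import subprocess
import time

MAX_RUN_TIME = 10 * 60  # total time allowed for one browser's test run, in seconds

def run_browser(cmd):
    # Launch the browser and kill it if the whole run exceeds MAX_RUN_TIME,
    # so a frozen test cannot hang the automation.
    proc = subprocess.Popen(cmd)
    start = time.time()
    while proc.poll() is None:
        if time.time() - start > MAX_RUN_TIME:
            proc.kill()
            break
        time.sleep(5)
</pre>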
== Usage ==

The client can be started by just running "python speedtests.py".  If you don't want to run all the browsers, you can append a list of desired browsers, e.g. "python speedtests.py nightly 'internet explorer' chrome".


It can also be run in test mode by appending the "-t" or "--testmode" option.  This causes the server to return simple test pages that do not run any tests but exercise the framework itself, namely, chaining the tests and pinging back the local controller.
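For example, using either spelling of the option:

<pre>
python speedtests.py -t
python speedtests.py --testmode
</pre>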
 
There are also two commands to help you set up and test the stored profiles:
* archive <browser>: stores the current profile for <browser>
* load <browser>: starts <browser> with the currently stored profile
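The exact invocation depends on how speedtests.py exposes these commands; assuming they are passed on the command line like the browser list (an assumption, as is the browser name used here), it would look something like:

<pre>
python speedtests.py archive nightly
python speedtests.py load nightly
</pre>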
 
== Outstanding Issues ==
 
* We limit network access on the machine in order to prevent browser and system updates, which might throw off results if a test is running at the same time.  Unfortunately this requires periodic manual maintenance in order to install desired updates (e.g. to test against recent browser releases), but it should be minimal.


== Major Tasks Remaining ==
* Set up a second Windows box with different hardware. (8-16 h, maybe more, depending on problems encountered)
** Modify reports slightly to account for different machines running the same platform.  Differentiating by IP or by MAC address may be sufficient.


* Set up a Mac box. (8-16 h)
* Set up a Linux box. (8-16 h)


* Various tweaks to reports as requested by customers (variable, say 16 h total).
* (to do eventually) The local controller could act as a proxy for the test server.  This would allow better logging on the client side in order to diagnose problems and to provide the tester with immediate results.