TestEngineering/Services/LoadsToolsAndTesting1


Loads V1 and Vaurien

This page covers two tools: Loads (V1) and Vaurien.

  • Most Stage deployment verification is handled, at least in part, through the Loads tool for stress/load (and eventually performance) testing. A minimal test sketch follows this list.
  • Vaurien is a TCP proxy that lets you simulate chaos between your application and a backend server.
    • One (possibly active) proof of concept for Vaurien is with GeoLocation (ichnaea); see the Vaurien section below for a usage sketch.
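
A minimal sketch of what a Loads (V1) test case looks like, based on the upstream Loads documentation; the module, class, and endpoint names here are hypothetical placeholders:

  # loadtest.py -- hypothetical example: the class, test, and URL are
  # placeholders, but loads.case.TestCase and self.session (a requests-style
  # session) follow the documented Loads v1 test API.
  from loads.case import TestCase

  class TestService(TestCase):
      def test_health(self):
          # hit a (hypothetical) health endpoint and check the response code
          res = self.session.get('http://localhost:8000/__heartbeat__')
          self.assertEqual(res.status_code, 200)

Such a test is typically run locally first, then pointed at the Stage environment and distributed to the cluster agents described below.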

Usage Rules

  • Note: There are a number of open bugs and issues (see below) that require Loads use to be focused and specific per project:
    • Do not overdo the load test - start with the default values in the config files (an example config sketch follows this list).
    • Do not run more than two tests in parallel.
    • Do not use more than 5 agents per load test unless the test genuinely requires more.
    • Do not run a load test for more than 8-10 hours.
    • There are more limitations/rules...
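
A hypothetical config sketch that stays within the rules above. The [loads] section and key names follow the ini conventions used by Loads v1 load-test configs, but the exact keys and the test FQN are assumptions and vary per project:

  [loads]
  # fully qualified name of the test to run (hypothetical module/class/test)
  fqn = loadtest.TestService.test_health
  # stay within the usage rules: at most 5 agents, well under the 8-10 hour cap
  agents = 5
  users = 10
  duration = 3600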

Repos

Bugs

Documentation

Loads Cluster Dashboard

Deployment and AWS Instances:

  • One master and two slaves in US West
  • loads-master (broker and agent processes)
  • loads-slave-1 (agent processes)
  • loads-slave-2 (agent processes)
  • Note: there is no CF stack or ELB for this cluster
  • Note: the Loads cluster state/health can be checked directly from the dashboard (see above)

Monitoring the cluster via Stackdriver

Monitoring the Loads Cluster

  • Via the dashboard: http://loads.services.mozilla.com/
  • Check the loads cluster state/health directly from the dashboard:
    • Agent statuses
    • Launch a health check on all agents (a quick reachability check of the dashboard itself is sketched below)
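
A quick way to confirm the dashboard itself is reachable (a plain HTTP check against the URL above; this is not a Loads API, just a sanity check):

  curl -sI http://loads.services.mozilla.com/ | head -n 1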

Monitoring the Stage environment during Load Tests

  • We have various dashboards created by Ops that capture and display all sorts of data via the Heka/Elasticsearch/Kibana pipeline
    • Heka dashboard
    • Kibana dashboard
    • Stackdriver

Load Test Results

  • Load test results are always listed in the dashboard.
  • Cleaning up the dashboard is a high priority for V2 - we want a more accurate representation of the test(s) run, with human-readable results that add meaning/context to the metrics provided by the various Ops dashboards.

Reporting (or lack of it)

  • There were plans to create some reporting in the style of what we had with the FunkLoad tool.
  • There are bugs open about getting some reporting out of Loads.
  • No action taken at this time, but a very good candidate for V2.

QA Wikis

Current projects using Loads

New/Planned projects using Loads

  • SimplePush (probably)
  • Tiles (maybe)

Other projects doing load testing

(which I think uses Siege)

Vaurien

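Vaurien runs as a proxy in front of the backend under test and injects faults on a percentage of requests. A hedged sketch of an invocation (the flags follow the Vaurien documentation, but exact options can differ between versions, and the hosts/ports here are placeholders):

  # proxy localhost:8000 to a (hypothetical) backend, returning errors on
  # roughly 20% of the requests to simulate a misbehaving dependency
  vaurien --proxy localhost:8000 --backend backend.example.com:80 --behavior 20:error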

Loads V2

Comparison of Load Test Tools

Tasks

Tarek says: "So what I was thinking: I can lead Loads v2 development with the help of QA and Ops and Benjamin for SimplePush, and then slowly transition ownership to the QA and Ops team - because at the end of the day that's the two teams that should benefit the most from this tool."

New Repos

New Documentation

September Brown Bag and Info Session


Brainstorming Loads and V2

  • What we need going forward
  • What we want going forward

Some issues (generalized - see the GitHub issues for details):

  1. Very long runs (>10 hours) are not really working. This is a design problem.
  2. Spinning up new slaves for big tests has not yet been automated. We have 2 slave boxes that run 10 agents each; this has been enough for most of our needs, though.
  3. The dashboard is sparse. It will tell you what's going on, but we don't have any real reporting features yet.
  4. Running a test in a language other than Python is a bit of a pain (you need to do some ZeroMQ messaging).

Stealing from Tarek's slide deck:

  • Day-long runs don't really work
  • Crappy dashboard
  • No direct link to Logs/CPU/Mem usage of stressed servers
  • No automatic slaves deployment yet
  • Python client only really supported
  • High bar to implement clients in Haskell/Go
  • Also, we have a lot of open bugs that need to get fixed. Some prevent better use of the tool for newer projects/services.
  • Get Loads "fixed" for Mac 10.9 and XCode 5.1.1: https://bugzilla.mozilla.org/show_bug.cgi?id=1010567
  • Figure out how to run loads from personal AWS instances

Some of this is already in progress...

  • Monitoring
    • What we currently have for Stage
    • What do we want/need?
  • Reporting
    • Loads dashboard
    • What about CPU/memory information (like from atop, top)? A small sketch of sampling these follows this list.
    • Links to some snapshotted graphs
    • Code version
    • Red/yellow/green states
  • Deployment
    • Deployment bug
    • Bugs opened
    • Bugs closed
  • Scaling
  • Quarterly results/trending
    • PM targets
    • Expected targets
    • Actual targets
  • Wiki design
    • One per service?
    • One per service per deployment?
  • Weekly reporting
    • What does the QE team want to see?
  • Getting load tests to work from an AWS instance as localhost
  • Getting the right data/metrics requirements from PMs, then extracting that information and displaying it on the Loads dashboard and/or in the Ops-built dashboards
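
For the CPU/memory reporting idea above, a minimal sketch of the kind of host metrics that could be sampled and fed to the Loads dashboard. This is a hypothetical helper (not part of Loads today) using the third-party psutil library plus os.getloadavg():

  # sample_host_metrics.py -- hypothetical sketch, not part of Loads
  import os
  import psutil

  def sample_host_metrics():
      """Return a snapshot of CPU and memory usage for dashboard reporting."""
      return {
          'cpu_percent': psutil.cpu_percent(interval=1),   # % CPU over a 1s window
          'mem_percent': psutil.virtual_memory().percent,  # % of RAM in use
          'load_avg_1m': os.getloadavg()[0],               # 1-minute load average
      }

  if __name__ == '__main__':
      print(sample_host_metrics())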