Socorro/Pre-PHX Smoketest Schedule
test plan
- What we are going to test and how in terms of load
  - what:
    - at what point do collectors fall over?
      - start with 10 test nodes at 24k crashes each, versus all socorro collectors
      - ramp up to 40 test nodes
      - if it can handle full traffic, take collectors out until it fails
      - back down nodes/crashes until we find a stable place
      - check ganglia to see where our bottlenecks are
      - test both direct-to-disk and direct-to-hbase crash storage systems
        - start with 10 test nodes at 24k crashes each, versus all socorro collectors
    - crashes are collected without error
      - all submitted crashes are collected and processed (see the counting sketch after this list)
      - check apache logs for collector (syslog not reliable)
      - check processor and collector logs for errors
      - confirm that all crashes are stored in hbase
  - how:
    - grinder (bug 619815) + 20 VMs (bug 619814)
      - Lars added stats and iteration to submitter.py for the initial smoke test (bug 622311); see the submission sketch after this list
    - 40 seamicro nodes standing by to test, using socorro-loadtest.sh
    - pool of 240k crashes, taken over 10 days from MPT prod (Jan 1st through 10th)
  - when:
    - waiting on deps in tracking bug 619811
    - tentative start date - Wednesday Jan 12 2011
    - minimum 2-3 days testing; as much as we can get
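
The load driver boils down to a loop that POSTs stored crashes at the collectors while counting successes and failures. A minimal sketch of that idea, assuming a directory of metadata/dump pairs and a collector URL (the URL, file layout, and form fields here are placeholders, not the actual submitter.py or socorro-loadtest.sh code):

 # Minimal crash-submission loop in the spirit of the load test
 # (illustrative only; URL, paths, and fields are placeholders).
 import glob
 import json
 import os
 import pycurl

 COLLECTOR_URL = "https://crash-reports.example.com/submit"   # placeholder

 def submit_crash(json_path, dump_path):
     """POST one crash (metadata fields plus minidump) to a collector."""
     with open(json_path) as f:
         meta = json.load(f)
     # naive str() on unicode metadata can raise the UnicodeEncodeError
     # noted under known problems below
     form = [(str(k), str(v)) for k, v in meta.items()]
     form.append(("upload_file_minidump", (pycurl.FORM_FILE, dump_path)))
     c = pycurl.Curl()
     c.setopt(pycurl.URL, COLLECTOR_URL)
     c.setopt(pycurl.HTTPPOST, form)
     c.perform()
     status = c.getinfo(pycurl.RESPONSE_CODE)
     c.close()
     return status

 submitted = errors = 0
 for json_path in glob.glob("/data/crashes/*.json"):   # pool of stored crashes
     dump_path = json_path.replace(".json", ".dump")
     if not os.path.exists(dump_path):
         continue
     try:
         submit_crash(json_path, dump_path)
         submitted += 1
     except (pycurl.error, UnicodeEncodeError):
         errors += 1
 print("submitted=%d errors=%d" % (submitted, errors))
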
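
Verification is mostly counting: crashes submitted by the load nodes should match POSTs accepted in the collectors' apache logs, which in turn should match what lands in hbase. A rough sketch of the apache-log half, assuming a standard combined log format and a /submit endpoint (the log path is an assumption):

 # Count crash submissions in a collector's apache access log
 # (illustrative; the real log path and URL depend on the deployment).
 import re

 ACCESS_LOG = "/var/log/httpd/access_log"            # assumed location
 POST_RE = re.compile(r'"POST /submit[^"]*" (\d{3})')

 accepted = failed = 0
 with open(ACCESS_LOG) as log:
     for line in log:
         m = POST_RE.search(line)
         if not m:
             continue
         if m.group(1) == "200":
             accepted += 1
         else:
             failed += 1
 print("accepted=%d failed=%d" % (accepted, failed))
 # compare 'accepted' against the number of crashes the load nodes submitted,
 # then against the crash IDs that show up in processor logs and hbase
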
- what:
  - what component failure tests we will run
    - disable entire components for 20min to test system recovery (see the outage sketch after this list)
      - hbase
        - kill off:
          - random region server
          - important region server
          - random thrift server
            - whoever is connected should get a blip
          - hbase master
          - name node
          - drbd (coordinate with IT)
            - specifically this means: killing the hardware for the primary set of admin nodes
      - kill off:
        - postgresql
        - monitor
        - all processors
        - hbase
    - disable individual nodes to test the ability of the other nodes to cope and at what point they get overloaded
      - one, two, and three collectors
      - one to five processors
    - postgresql failover test
      - failover master01->master02
        - will require manual failover of all components
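
Each component-failure test follows the same pattern: stop one component, leave it down for the 20-minute window while the load keeps running, bring it back, and watch how the pipeline recovers. A sketch of that pattern, assuming ssh access and init scripts (hostnames and service names are placeholders):

 # Take one component down for a fixed window, then bring it back
 # (illustrative; hostnames, service names, and ssh/sudo access are assumptions).
 import subprocess
 import time

 OUTAGE_SECONDS = 20 * 60          # the 20-minute recovery window from the plan

 def run(host, command):
     """Run a command on a remote host, failing loudly on error."""
     subprocess.check_call(["ssh", host, command])

 def outage_test(host, service):
     print("stopping %s on %s" % (service, host))
     run(host, "sudo /sbin/service %s stop" % service)
     time.sleep(OUTAGE_SECONDS)    # the load test keeps running against the gap
     print("restarting %s on %s" % (service, host))
     run(host, "sudo /sbin/service %s start" % service)
     # afterwards: check collector/processor logs and ganglia for recovery

 # examples in the spirit of the list above (node names are placeholders):
 # outage_test("hbase-rs03.example.com", "hbase-regionserver")
 # outage_test("pg-master01.example.com", "postgresql")
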
known problems
unicode-in-metadata problem
- crash submitter fails to insert JSON with unicode in it
  - "UnicodeEncodeError: 'ascii' codec can't encode character u'\u0142' in position 378: ordinal not in range(128)"
- ~1.8% of crashes in test data, reproducible
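
This is the usual Python 2 failure mode of pushing a unicode string through the default ascii codec; encoding to UTF-8 before the value hits the HTTP layer is one way around it. A small illustration (not the actual submitter.py code):

 # -*- coding: utf-8 -*-
 # Python 2 illustration of the failure and one way around it
 # (not the actual submitter.py code).
 comment = u"crash comment with \u0142 in it"   # metadata value containing U+0142

 try:
     str(comment)               # implicit ascii encoding -> UnicodeEncodeError
 except UnicodeEncodeError as e:
     print(e)                   # 'ascii' codec can't encode character u'\u0142' ...

 safe = comment.encode("utf-8") # explicit UTF-8 bytes survive the HTTP POST
 print(repr(safe))
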
unreliable syslog
- FIXED
- cannot get 100% reliable collector logs, syslog drops packets (bug 623410)
  - need to keep this in mind when running direct-to-hbase insert mode
- rare, intermittent pycurl SSL errors
  - "ERROR (77, 'Problem with the SSL CA cert (path? access rights?)')"
  - ~0.5% of crash submissions, not reproducible
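
Error 77 is libcurl's bad-CA-cert-file error; since it is rare and intermittent, one possible workaround is to pin the CA bundle explicitly and retry the request. A hedged sketch using pycurl (the bundle path is an assumption):

 # Retry wrapper for the intermittent pycurl error 77 (illustrative only).
 import time
 import pycurl

 CA_BUNDLE = "/etc/pki/tls/certs/ca-bundle.crt"   # assumed RHEL/CentOS path

 def perform_with_retry(curl, retries=3):
     """perform() a prepared Curl handle, retrying the intermittent CA-cert error."""
     curl.setopt(pycurl.CAINFO, CA_BUNDLE)        # pin the bundle explicitly
     for attempt in range(retries):
         try:
             curl.perform()
             return
         except pycurl.error as e:
             if e.args[0] != 77 or attempt == retries - 1:
                 raise
             time.sleep(1)                        # brief pause before retrying
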
pycurl random crashes
 *** longjmp causes uninitialized stack frame ***: python terminated
 ======= Backtrace: =========
 /lib/libc.so.6(__fortify_fail+0x4d)[0x274fed]
 /lib/libc.so.6[0x274f5a]
 /lib/libc.so.6(__longjmp_chk+0x49)[0x274ec9]
 /usr/lib/libcurl.so.4[0x5874b99]
 [0x641400]
 [0x641424]
- this stops the run on one node
- hacked around in the socorro-loadtest.sh for the "forever" infinite-loop case
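
The workaround amounts to restarting the submission command whenever it dies. The real hack lives in socorro-loadtest.sh; this is just the idea in Python (the command line is a placeholder):

 # Keep restarting the submission command when it dies
 # (the real workaround is in socorro-loadtest.sh; the invocation is a placeholder).
 import subprocess
 import time

 CMD = ["python", "submitter.py"]      # placeholder invocation

 while True:
     rc = subprocess.call(CMD)
     if rc == 0:
         break                         # clean exit: stop looping
     print("submitter exited with code %d, restarting" % rc)
     time.sleep(5)                     # small backoff before restarting
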