* details: Create a Proof of Concept “big data” project which will store information about every test file we run: test status, error details, test machine and test duration to begin with. We will use this project to develop schemas and queries that work with data this large, and we will use this data to normalize chunk sizes and provide details about which tests never fail.
* '''progress since last update''':
** Currently integrating the ETL code with Ahal's worker management code; it works in development, but still needs to be deployed on AWS.
** Enhancing the ETL to deal with 600 MB structured logs; ensuring streams are used everywhere, otherwise Python runs out of memory (a minimal streaming sketch follows this list).
** ETL is filling the ES cluster (still only one node), but it is far too slow to keep up.
** Started building the ActiveData service (https://github.com/klahnakoski/ActiveData); the focus is on test infrastructure, so the service can be tested and more tests can be added easily.
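The sketch below is not the actual ETL code; it is a minimal illustration of the streaming approach mentioned above, assuming one JSON object per line in the structured log. Records are parsed and transformed through generators, so memory use stays roughly constant regardless of log size; the field names (<code>test</code>, <code>status</code>, <code>duration</code>) are placeholders, not the real schema.

<syntaxhighlight lang="python">
import json

def stream_log_records(path):
    """Yield parsed records one line at a time so memory use stays bounded."""
    with open(path, "r") as log_file:
        for line in log_file:          # file objects iterate lazily, one line at a time
            line = line.strip()
            if not line:
                continue
            yield json.loads(line)     # assumes one JSON object per line (illustrative)

def etl(path):
    """Transform records as they arrive; the full log is never held in memory."""
    for record in stream_log_records(path):
        # field names are illustrative only, not the actual ActiveData schema
        yield {
            "test": record.get("test"),
            "status": record.get("status"),
            "duration": record.get("duration"),
        }
</syntaxhighlight>

Because every stage is a generator, a 600 MB log costs roughly the same memory as a small one; only one record is materialized at a time.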
=== Implement the ability to normalize chunk durations in mochitest [ahal] ===