CloudServices/Loop/MySQL

Revision as of 12:53, 24 October 2014

Loop migration to MySQL

The loop-server will be transitioning from using Redis for all its storage to using MySQL for the persistent data and Redis for the volatile data.

This is because, from an Ops point of view, it is far easier to operate a MySQL database for non-volatile data.
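
As a rough illustration of that split, the sketch below routes durable records to MySQL and short-lived ones to Redis with a TTL. It is written in Python for brevity (the real loop-server is Node.js), and the loop_data key/value table, column names and connection settings are hypothetical placeholders rather than the actual loop-server schema.

  import pymysql
  import redis

  # Hypothetical connections; the real service would take these from its config.
  mysql_conn = pymysql.connect(host="localhost", user="loop", password="secret",
                               database="loop", autocommit=True)
  redis_conn = redis.Redis(host="localhost", port=6379)

  def save_persistent(key, value):
      """Durable data goes to MySQL (hypothetical loop_data key/value table)."""
      with mysql_conn.cursor() as cur:
          cur.execute("REPLACE INTO loop_data (`key`, `value`) VALUES (%s, %s)",
                      (key, value))

  def save_volatile(key, value, ttl_seconds=300):
      """Short-lived data stays in Redis and expires on its own."""
      redis_conn.setex(key, ttl_seconds, value)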

Tentative schedule

We would like to have this landed on our prod stack before we hit the Firefox 35 release, if possible.

  • October 30 (next week): Finish the coding;
  • October 31: Deploy it to the staging server and see if we're able to migrate data correctly (QA!);
  • Once this is done, we plan to have some feature-freeze time, during which we can investigate how MySQL is behaving and apply any bugfixes that are needed.

Data migration

After a quick discussion with the Hello product folks, we *need* to actually migrate the existing Redis data to the MySQL database.

The good news is that this means we have more time than expected to actually do the migration; the only limiting factor is the size of the Redis storage (currently set to 4GB).

I don't think 4GB will be enough to handle the Firefox release user base, but I don't yet have any figures to back that up. With 10% of the beta user base we're currently using about 400MB of storage (this is for ~10K calls a day and represents ~9% of the space available in Redis).
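
As a back-of-envelope check using only the figures above (an estimate, not a measured projection): if ~10% of the beta population already uses ~400MB, the full beta population alone would need roughly 4GB, i.e. essentially the whole Redis instance, before the much larger release population is even counted.

  BETA_SAMPLE_USAGE_MB = 400     # observed usage for ~10% of the beta user base
  BETA_SAMPLE_FRACTION = 0.10
  REDIS_CAPACITY_MB = 4 * 1024   # current Redis storage limit (4GB)

  full_beta_estimate_mb = BETA_SAMPLE_USAGE_MB / BETA_SAMPLE_FRACTION
  print(full_beta_estimate_mb)                      # ~4000MB for the whole beta population
  print(full_beta_estimate_mb / REDIS_CAPACITY_MB)  # ~0.98 of the 4GB capacity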

The plan is to migrate the data automatically using the following process (a code sketch follows the list):

  • Read from MySQL; if the keys aren't present, read them from Redis and insert them into MySQL;
  • After some time, just drop the data in Redis because we don't need it anymore.
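
A minimal sketch of that read-through process, in Python for brevity (the real loop-server is Node.js); the loop_data key/value table, column names and connection settings are hypothetical placeholders:

  import pymysql
  import redis

  mysql_conn = pymysql.connect(host="localhost", user="loop", password="secret",
                               database="loop", autocommit=True)
  redis_conn = redis.Redis(host="localhost", port=6379)

  def get_record(key):
      """Prefer MySQL; on a miss, fall back to Redis and backfill MySQL."""
      with mysql_conn.cursor() as cur:
          cur.execute("SELECT `value` FROM loop_data WHERE `key` = %s", (key,))
          row = cur.fetchone()
      if row is not None:
          return row[0]

      # Key not migrated yet: read the legacy value from Redis ...
      raw = redis_conn.get(key)
      if raw is None:
          return None

      # ... and copy it into MySQL so subsequent reads are served from there.
      with mysql_conn.cursor() as cur:
          cur.execute("INSERT IGNORE INTO loop_data (`key`, `value`) VALUES (%s, %s)",
                      (key, raw))
      return raw

Once MySQL has been serving all reads for a while and the Redis fallback is no longer being hit, the remaining Redis keys can simply be dropped, matching the second step above.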