How to move a Mongo database without downtime
Over the weekend we had to migrate a database to another host, and wanted to do it without downtime. We searched for solutions on how to “live” migrate a Mongo database, but everything we found suggested adding the new host to the replica set to auto-sync, and then have it take over as master. Unfortunately for us, our new database host didn’t offer this level of access to the replica set.
Mongo offers two commands for bulk transferring a database: mongodump and mongorestore. These create a binary snapshot of the database and restore it with exact fidelity. But what about updates that happen while mongodump and mongorestore are running? Well, there’s a solution to that if your database provider offers you access to the Mongo oplog – ours did. So we wrote a script to “live sync” changes from the oplog to the new database.
Here are the steps:
- Start livesync.js and then transfer.sh in separate processes. Consider running these on separate, really big EC2 instances for maximum throughput.
- When transfer.sh finishes, switch your server environments over to point to the new database.
- Once livesync.js stops reporting changes, it means there is no more activity on that database and the script can be terminated.
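The bulk-transfer half of the recipe can be sketched as a small shell function. This is a hypothetical stand-in for transfer.sh, not the actual script: the host names and database name are placeholders, and it assumes your provider accepts plain `--host` connections. It streams the dump over a pipe so nothing has to touch disk.

```shell
#!/bin/sh
# Hypothetical sketch of transfer.sh. mongodump writes a binary archive
# to stdout; mongorestore reads the same archive format on stdin, so the
# two can be chained with a pipe.
transfer_db() {
  old_host="$1"; new_host="$2"; db="$3"
  mongodump --host "$old_host" --db "$db" --archive |
    mongorestore --host "$new_host" --archive
}

# Example invocation (placeholder hosts and database):
# transfer_db old-db.example.com new-db.example.com appdata
```

Add authentication flags as your provider requires; the pipe-based `--archive` form is what lets the copy run host-to-host in one pass.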
The purpose of livesync is to copy over whole database documents by _id while the bulk transfer is running. Fortunately, mongorestore will not overwrite a document that already exists with the same _id. So if a document is changed in the old database and copied over via livesync, it will not be clobbered when transfer.sh gets around to restoring it.
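The per-entry decision livesync makes can be sketched as a small pure function. This is a hypothetical reconstruction, not the actual livesync.js: the real script would tail `local.oplog.rs` with a tailable cursor, then for each entry re-fetch the whole document from the old database and upsert it by _id into the new one. The oplog field names (`op`, `ns`, `o`, `o2`) match the real oplog entry format.

```javascript
// Hypothetical sketch: map one oplog entry to the sync action the
// script would perform against the new database.
function syncActionFor(entry) {
  switch (entry.op) {
    case 'i': // insert: o is the full new document
      return { type: 'copy', ns: entry.ns, _id: entry.o._id };
    case 'u': // update: o2 identifies the document; re-fetch and copy whole
      return { type: 'copy', ns: entry.ns, _id: entry.o2._id };
    case 'd': // delete: o identifies the document to remove
      return { type: 'delete', ns: entry.ns, _id: entry.o._id };
    default:  // 'c' (commands) and 'n' (no-ops) carry no document changes
      return null;
  }
}
```

Copying the whole document (rather than replaying the update operators) keeps the script simple and idempotent: applying the same entry twice just rewrites the same document.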
This solution is very much a hack and not meant for all situations. Data can still get lost. For example, while switching over the environments in a rolling fashion, two servers may update the same document in two different databases – one copy will be lost. So use it carefully!
Join us if you like working on interesting problems like these!