One way to create a snapshot of the data in an existing master database is to use the mysqldump tool. Once the data dump has been completed, you then import this data into the slave before starting the replication process.
To obtain a snapshot of the data using mysqldump:
If you have not already locked the tables on the server to prevent statements that update data from executing:
Start a session on the server by connecting to it with the command-line client, then flush all tables and block write statements by executing the FLUSH TABLES WITH READ LOCK statement:
mysql> FLUSH TABLES WITH READ LOCK;
Remember to use SHOW MASTER STATUS and record the binary log details for use when starting up the slave. The point in time of your snapshot and the binary log position must match. See Section 16.1.1.4, “Obtaining the Replication Master Binary Log Coordinates”.
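While the read lock is held, the coordinates can be read with SHOW MASTER STATUS; the file name and position shown here are only illustrative and will differ on your server:

```sql
mysql> SHOW MASTER STATUS;
+------------------+----------+--------------+------------------+
| File             | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+------------------+----------+--------------+------------------+
| mysql-bin.000003 |       73 |              |                  |
+------------------+----------+--------------+------------------+
```

Record the values in the File and Position columns; they are the coordinates you will supply to the slave.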
In another session, use mysqldump to create a dump either of all the databases you want to replicate, or of selected individual databases. For example:
shell> mysqldump --all-databases --lock-all-tables >dbdump.db
An alternative to using a bare dump is to use the --master-data option, which automatically appends the CHANGE MASTER TO statement required on the slave to start the replication process:
shell> mysqldump --all-databases --master-data >dbdump.db
In the client where you acquired the read lock, release the lock:
mysql> UNLOCK TABLES;
Alternatively, exit the first session to release the read lock.
When choosing databases to include in the dump, remember that you will need to filter out databases on each slave that you do not want to include in the replication process.
You will need either to copy the dump file to the slave, or to use the file from the master when connecting remotely to the slave to import the data.
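For example, after copying dbdump.db to the slave host, you might load it with the mysql client; the user name here is only a placeholder:

```shell
shell> mysql --user=root --password < dbdump.db
```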
User Comments
When your master is overloaded or is short on disk space, execute mysqldump on the slave, connecting remotely to the master: this saves the master from extra disk writes when creating the dump file.
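Run from the slave host, such a remote dump might look as follows; the host and user names are placeholders for your own values:

```shell
shell> mysqldump --host=master_host --user=repl_user --password \
           --all-databases --master-data >dbdump.db
```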
You may also pipe the output to gzip:
mysqldump --user=abc --password=abc | gzip > backup.sql.gz
To execute on the other servers, without uncompressing, use the following command:
zcat backup.sql.gz | mysql --user=abc --password=abc
Note that the --master-data option only includes MASTER_LOG_FILE and MASTER_LOG_POS arguments of the "CHANGE MASTER TO" statement; you'll likely want to edit the dump to include MASTER_HOST, MASTER_USER and MASTER_PASSWORD values when importing to a fresh slave server (or realize that you'll need to re-issue a CHANGE MASTER TO statement after the import).
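A complete statement issued on the slave might look like the following; the host, user, password, and log coordinates shown are placeholders to be replaced with your own values (the coordinates come from the dump or from SHOW MASTER STATUS):

```sql
mysql> CHANGE MASTER TO
    ->     MASTER_HOST='master_host',
    ->     MASTER_USER='repl_user',
    ->     MASTER_PASSWORD='repl_password',
    ->     MASTER_LOG_FILE='mysql-bin.000003',
    ->     MASTER_LOG_POS=73;
```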
The mysqldump process locked up for me if I first did "FLUSH TABLES WITH READ LOCK;" as described above. I later discovered this would only happen for databases that contained an InnoDB table.
If you are having this issue, you can fix it by adding the --single-transaction argument to your mysqldump call.
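With that option, mysqldump takes a consistent snapshot inside a single transaction, which avoids the need for a global read lock, assuming all dumped tables use InnoDB; for example:

```shell
shell> mysqldump --all-databases --single-transaction --master-data >dbdump.db
```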