This section discusses making backups and restoring from them using MySQL Cluster replication. We assume that the replication servers have already been configured as covered previously (see Section 6.5, “Preparing the MySQL Cluster for Replication”, and the sections immediately following). This having been done, the procedure for making a backup and then restoring from it is as follows:
There are two different methods by which the backup may be started.
Method A. This method requires that the cluster backup process was previously enabled on the master server, prior to starting the replication process. This can be done by including the following line in a [mysql_cluster] section in the my.cnf file, where management_host is the IP address or host name of the NDB management server for the master cluster, and port is the management server's port number:
ndb-connectstring=management_host[:port]
The port number needs to be specified only if the default port (1186) is not being used. See Section 2.2, “MySQL Cluster Multi-Computer Configuration”, for more information about ports and port allocation in MySQL Cluster.
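For example, a complete my.cnf entry for this purpose, using the rep-master management host from the scenario referenced later in this section and the default port, might look like this:
[mysql_cluster]
ndb-connectstring=rep-master:1186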
In this case, the backup can be started by executing this command on the replication master:
shellM> ndb_mgm -e "START BACKUP"
Method B. If the my.cnf file does not specify where to find the management host, you can start the backup process by passing this information to the NDB management client as part of the START BACKUP command. This can be done as shown here, where management_host and port are the host name and port number of the management server:
shellM> ndb_mgm management_host:port -e "START BACKUP"
In our scenario as outlined earlier (see Section 6.5, “Preparing the MySQL Cluster for Replication”), this would be executed as follows:
shellM> ndb_mgm rep-master:1186 -e "START BACKUP"
Copy the cluster backup files to the slave that is being brought on line. Each system running an ndbd process for the master cluster will have cluster backup files located on it, and all of these files must be copied to the slave to ensure a successful restore. The backup files can be copied into any directory on the computer where the slave management host resides, so long as the MySQL and NDB binaries have read permissions in that directory. In this case, we will assume that these files have been copied into the directory /var/BACKUPS/BACKUP-1.
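For example, the files might be copied with scp. The following sketch assumes a hypothetical data node host named ndb1 and a hypothetical source path for that node's backup directory; a similar command would be repeated for each data node host in the master cluster:
shellS> scp -r ndb1:/var/lib/mysql-cluster/BACKUP/BACKUP-1/* /var/BACKUPS/BACKUP-1/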
It is not necessary that the slave cluster have the same number of ndbd processes (data nodes) as the master; however, it is highly recommended this number be the same. It is necessary that the slave be started with the --skip-slave-start option, to prevent premature startup of the replication process.
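One way to ensure this (a sketch) is to include the option in the [mysqld] section of the slave SQL node's my.cnf file before starting the server:
[mysqld]
skip-slave-start
The option can also be given on the command line as --skip-slave-start when starting mysqld.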
Create any databases on the slave cluster that are present on the master cluster and that are to be replicated to the slave. A CREATE DATABASE (or CREATE SCHEMA) statement corresponding to each database to be replicated must be executed on each SQL node in the slave cluster.
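For example, if a database named db1 (a hypothetical name) on the master is to be replicated, the following would be issued on each SQL node in the slave cluster:
mysqlS> CREATE DATABASE db1;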
Reset the slave cluster using this statement in the MySQL Monitor:
mysqlS> RESET SLAVE;
It is important to make sure that the slave's ndb_apply_status table does not contain any records prior to running the restore process. You can accomplish this by running this SQL statement on the slave:
mysqlS> DELETE FROM mysql.ndb_apply_status;
You can now start the cluster restoration process on the replication slave using the ndb_restore command for each backup file in turn. For the first of these, it is necessary to include the -m option to restore the cluster metadata:
shellS> ndb_restore -c slave_host:port -n node-id -b backup-id -m -r dir
Here, dir is the path to the directory where the backup files have been placed on the replication slave. For the ndb_restore commands corresponding to the remaining backup files, the -m option should not be used.
For restoring from a master cluster with four data nodes (as shown in the figure in Chapter 6, MySQL Cluster Replication) where the backup files have been copied to the directory /var/BACKUPS/BACKUP-1, the proper sequence of commands to be executed on the slave might look like this:
shellS> ndb_restore -c rep-slave:1186 -n 2 -b 1 -m \
    -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 3 -b 1 \
    -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 4 -b 1 \
    -r /var/BACKUPS/BACKUP-1
shellS> ndb_restore -c rep-slave:1186 -n 5 -b 1 -e \
    -r /var/BACKUPS/BACKUP-1
The -e (or --restore-epoch) option in the final invocation of ndb_restore in this example is required in order that the epoch is written to the slave's mysql.ndb_apply_status table. Without this information, the slave will not be able to synchronize properly with the master. (See Section 4.17, “ndb_restore — Restore a MySQL Cluster Backup”.)
Now you need to obtain the most recent epoch from the ndb_apply_status table on the slave (as discussed in Section 6.8, “Implementing Failover with MySQL Cluster Replication”):
mysqlS> SELECT @latest:=MAX(epoch)
    ->     FROM mysql.ndb_apply_status;
Using @latest as the epoch value obtained in the previous step, you can obtain the correct starting position @pos in the correct binary log file @file from the master's mysql.ndb_binlog_index table using the query shown here:
mysqlM> SELECT
    ->     @file:=SUBSTRING_INDEX(File, '/', -1),
    ->     @pos:=Position
    -> FROM mysql.ndb_binlog_index
    -> WHERE epoch > @latest
    -> ORDER BY epoch ASC LIMIT 1;
In the event that there is currently no replication traffic, you can get this information by running SHOW MASTER STATUS on the master and using the value in the Position column for the file whose name has the suffix with the greatest value for all files shown in the File column. However, in this case, you must determine this and supply it in the next step manually or by parsing the output with a script.
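For illustration, the output of SHOW MASTER STATUS might resemble the following, where the file name and position are hypothetical values; here the file with the greatest suffix is master-bin.000004, and the position to use is 73450:
mysqlM> SHOW MASTER STATUS;
+-------------------+----------+--------------+------------------+
| File              | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-------------------+----------+--------------+------------------+
| master-bin.000004 |    73450 |              |                  |
+-------------------+----------+--------------+------------------+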
Using the values obtained in the previous step, you can now issue the appropriate CHANGE MASTER TO statement in the slave's mysql client:
mysqlS> CHANGE MASTER TO
    ->     MASTER_LOG_FILE='@file',
    ->     MASTER_LOG_POS=@pos;
Here, @file and @pos stand for the values obtained in the previous step; because CHANGE MASTER TO does not expand user variables, the actual file name (quoted, as a string) and position (unquoted, as a number) must be substituted when the statement is issued.
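For instance, using the hypothetical file name and position from the illustration above, the statement as actually issued would be:
mysqlS> CHANGE MASTER TO
    ->     MASTER_LOG_FILE='master-bin.000004',
    ->     MASTER_LOG_POS=73450;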
Now that the slave “knows” from what point in which binlog file to start reading data from the master, you can cause the slave to begin replicating with this standard MySQL statement:
mysqlS> START SLAVE;
To perform a backup and restore on a second replication channel, it is necessary only to repeat these steps, substituting the host names and IDs of the secondary master and slave for those of the primary master and slave replication servers where appropriate, and running the preceding statements on them.
For additional information on performing Cluster backups and restoring Cluster from backups, see Section 5.3, “Online Backup of MySQL Cluster”.