This section provides a simplified outline of the steps involved when MySQL Cluster data nodes are started. More complete information can be found in MySQL Cluster Start Phases.
These phases are the same as those reported in the output from the node_id STATUS command in the management client. (See Section 17.5.2, “Commands in the MySQL Cluster Management Client”, for more information about this command.)
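For example, to check the status of the data node having node ID 2, you can issue the following in the ndb_mgm management client (the output shown here is illustrative only; the exact format varies with the MySQL Cluster version):

ndb_mgm> 2 STATUS
Node 2: started (Version 5.1.23)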
Start types. There are several different startup types and modes, as shown here:
Initial Start. The cluster starts with a clean file system on all data nodes. This occurs either when the cluster is started for the very first time, or when all data nodes are restarted using the --initial option (see the invocation sketch following this list). Note that Disk Data files are not removed when restarting a node using --initial.
System Restart. The cluster starts and reads data stored in the data nodes. This occurs when the cluster has been shut down after having been in use and is to resume operations from the point where it left off.
Node Restart. This is the online restart of a cluster node while the cluster itself is running.
Initial Node Restart. This is the same as a node restart, except that the node is reinitialized and started with a clean file system.
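As an illustration of these start types, this sketch shows how the ndbd process might be invoked in each case (connection options are omitted here, and depend on your configuration):

shell> ndbd --initial    # initial start or initial node restart: begins with a clean file system
shell> ndbd              # system restart or node restart: reuses existing data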
Setup and initialization (Phase -1). Prior to startup, each data node (ndbd process) must be initialized. Initialization consists of the following steps:
Obtain a node ID
Fetch configuration data
Allocate ports to be used for inter-node communications
Allocate memory according to settings obtained from the configuration file
When a data node or SQL node first connects to the management node, it reserves a cluster node ID. To make sure that no other node allocates the same node ID, this ID is retained until the node has successfully connected to the cluster and at least one ndbd process reports that the node is connected. This retention of the node ID is protected by the connection between the node in question and ndb_mgmd.
Normally, in the event of a problem with the node, the node disconnects from the management server, the socket used for the connection is closed, and the reserved node ID is freed. However, if a node is disconnected abruptly (for example, due to a hardware failure in one of the cluster hosts, or because of network issues), the normal closing of the socket by the operating system may not take place. In this case, the node ID remains reserved and is not released until a TCP timeout occurs, 10 or so minutes later.
To take care of this problem, you can use PURGE STALE SESSIONS. Running this statement forces all reserved node IDs to be checked; any that are not being used by nodes actually connected to the cluster are then freed.
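For example, in the ndb_mgm management client:

ndb_mgm> PURGE STALE SESSIONS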
Beginning with MySQL 5.1.11, timeout handling of node ID assignments is implemented. This performs the ID usage checks automatically after approximately 20 seconds, so that PURGE STALE SESSIONS should no longer be necessary in a normal Cluster start.
After each data node has been initialized, the cluster startup process can proceed. The stages which the cluster goes through during this process are listed here:
Phase 0. The NDBFS and NDBCNTR blocks start (see NDB Kernel Blocks). The cluster file system is cleared if the cluster was started with the --initial option.
Phase 1. In this stage, all remaining NDB kernel blocks are started. Cluster connections are set up, inter-block communications are established, and Cluster heartbeats are started. In the case of a node restart, API node connections are also checked.
When one or more nodes hang in Phase 1 while the remaining node or nodes hang in Phase 2, this often indicates network problems. One possible cause of such issues is one or more cluster hosts having multiple network interfaces. Another common source of problems causing this condition is the blocking of TCP/IP ports needed for communications between cluster nodes. In the latter case, this is often due to a misconfigured firewall.
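A simple first check in such cases is to verify that the necessary TCP ports are reachable between hosts. This is a sketch only: the host names are placeholders, 1186 is the default management server port, and 2202 is merely an example value for ServerPort, since data node transporter ports are otherwise allocated dynamically:

shell> telnet mgmd_host 1186         # default management server port
shell> telnet data_node_host 2202    # example fixed ServerPort for a data node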
Phase 2. The NDBCNTR kernel block checks the states of all existing nodes. The master node is chosen, and the cluster schema file is initialized.
Phase 3. The DBLQH and DBTC kernel blocks set up communications between them. The startup type is determined; if this is a restart, the DBDIH block obtains permission to perform the restart.
Phase 4. For an initial start or initial node restart, the redo log files are created. The number of these files is equal to NoOfFragmentLogFiles (see the configuration example following this item).
For a system restart:
Read schema or schemas.
Read data from the local checkpoint.
Apply all redo information until the latest restorable global checkpoint has been reached.
For a node restart, find the tail of the redo log.
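The following minimal config.ini fragment shows where NoOfFragmentLogFiles is set; the value used here is illustrative only:

[ndbd default]
NoOfFragmentLogFiles=16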
Phase 5. Most of the database-related portion of a data node start is performed during this phase. For an initial start or system restart, a local checkpoint is executed, followed by a global checkpoint. Periodic checks of memory usage begin during this phase, and any required node takeovers are performed.
Phase 6. In this phase, node groups are defined and set up.
Phase 7. The arbitrator node is selected and begins to function. The next backup ID is set, as is the backup disk write speed. Nodes reaching this start phase are marked as Started. It is now possible for API nodes (including SQL nodes) to connect to the cluster.
Phase 8. If this is a system restart, all indexes are rebuilt (by DBDIH).
Phase 9. The node internal startup variables are reset.
Phase 100 (OBSOLETE). Formerly, it was at this point during a node restart or initial node restart that API nodes could connect to the node and begin to receive events. Currently, this phase is empty.
Phase 101. At this point in a node restart or initial node restart, event delivery is handed over to the node joining the cluster. The newly joined node takes over responsibility for delivering its primary data to subscribers. This phase is also referred to as the SUMA handover phase.
After this process is completed for an initial start or system restart, transaction handling is enabled. For a node restart or initial node restart, completion of the startup process means that the node may now act as a transaction coordinator.
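You can follow a node's progress through these start phases from the management client. The output in this sketch is illustrative; the exact format depends on the MySQL Cluster version:

ndb_mgm> ALL STATUS
Node 2: starting (Last completed phase 4) (Version 5.1.23)
Node 3: starting (Last completed phase 4) (Version 5.1.23)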