Multiple SQL nodes.
The following are issues relating to the use of multiple MySQL servers as MySQL Cluster SQL nodes, and are specific to the NDBCLUSTER storage engine:
No distributed table locks.
A LOCK TABLES statement works only for the SQL node on which the lock is issued; no other SQL node in the cluster “sees” this lock. This is also true for a lock issued by any statement that locks tables as part of its operations. (See the next item for an example of such a statement; a brief illustration of local-only locking follows.)
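For instance, assuming a hypothetical NDBCLUSTER table t1 with an integer column c1 (these names are placeholders, not taken from this manual), a lock taken on one SQL node does not block another:

-- On SQL node A:
mysql> LOCK TABLES t1 WRITE;

-- On SQL node B, while node A still holds its lock; node B does not see
-- that lock, so this statement is not blocked:
mysql> UPDATE t1 SET c1 = c1 + 1;

-- On SQL node A, when finished:
mysql> UNLOCK TABLES;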
ALTER TABLE operations.
An ALTER TABLE statement is not fully locking when multiple MySQL servers (SQL nodes) are running; the table lock it takes is visible only on the SQL node where the statement is issued. (As discussed in the previous item, MySQL Cluster does not support distributed table locks.)
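As a rough sketch of the resulting risk (again using the hypothetical table t1 and column c1), a statement on a second SQL node is not blocked by an ALTER TABLE in progress on the first, so the application must serialize such access itself:

-- On SQL node A:
mysql> ALTER TABLE t1 ADD COLUMN c2 INT;

-- On SQL node B, while node A's ALTER TABLE is still in progress; node B
-- is not blocked by node A's local table lock, so concurrent access of
-- this kind must be prevented by the application:
mysql> INSERT INTO t1 (c1) VALUES (10);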
Replication. MySQL replication will not work correctly if updates are done on multiple MySQL servers. However, if the database partitioning scheme is done at the application level and no transactions take place across these partitions, replication can be made to work.
Database autodiscovery.
Autodiscovery of databases is not supported for multiple MySQL servers accessing the same MySQL Cluster. However, autodiscovery of tables is supported in such cases. What this means is that after a database named db_name is created or imported using one MySQL server, you should issue a CREATE DATABASE db_name statement on each additional MySQL server that accesses the same MySQL Cluster. (As of MySQL 5.0.2, you may also use CREATE SCHEMA db_name.) Once this has been done for a given MySQL server, that server should be able to detect the database tables without error.
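A minimal sketch of this procedure, with db_name standing in for the actual database name and t1 for one of its NDB tables (both placeholders):

-- On each additional SQL node accessing the same MySQL Cluster:
mysql> CREATE DATABASE db_name;   -- or, as of MySQL 5.0.2, CREATE SCHEMA db_name
mysql> USE db_name;
mysql> SELECT * FROM t1;          -- the server should now detect the NDB tables without error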
DDL operations.
DDL operations are not safe with respect to node failure. If a node fails while performing such an operation (for example, CREATE TABLE or ALTER TABLE), the data dictionary is locked, and no further DDL statements can be executed without restarting the cluster.
Multiple management nodes. When using multiple management servers:
You must give nodes explicit IDs in connectstrings, because automatic allocation of node IDs does not work across multiple management servers. (An example connectstring is shown following this item.)
In addition, all API nodes (including MySQL servers acting as SQL nodes) should list all management servers in the same order in their connectstrings.
You must take extreme care to use the same configuration for all management servers; the cluster performs no special checks for this.
Prior to MySQL 4.1.15, all data nodes had to be restarted after bringing up the cluster in order for the management nodes to be able to see one another.
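For example, each SQL or API node's my.cnf might contain an entry such as the following (the node ID, hostnames, and port shown are placeholders); note the explicit nodeid and the ordering of the management servers, which must be identical on every API node:

[mysql_cluster]
# Explicit node ID, because automatic node ID allocation does not work
# with multiple management servers; mgmhost1 and mgmhost2 must be listed
# in this same order on every API node.
ndb-connectstring=nodeid=4,mgmhost1:1186,mgmhost2:1186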
Multiple data node processes. While it is possible to run multiple cluster processes concurrently on a single host, it is not always advisable to do so for reasons of performance and high availability, as well as other considerations. In particular, in MySQL 4.1, we do not support for production use any MySQL Cluster deployment in which more than one ndbd process is run on a single physical machine.
We may support multiple data nodes per host in a future MySQL release, following additional testing. However, in MySQL 4.1, such configurations can be considered experimental only.
Multiple network addresses. Multiple network addresses per data node are not supported. Use of these is liable to cause problems: In the event of a data node failure, an SQL node waits for confirmation that the data node went down but never receives it because another route to that data node remains open. This can effectively make the cluster inoperable.
It is possible to use multiple network hardware interfaces (such as Ethernet cards) for a single data node, but these must be bound to the same address. This also means that it is not possible to use more than one [tcp] section per connection in the config.ini file. See Section 15.3.2.7, “MySQL Cluster TCP/IP Connections”, for more information.
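As an illustration of this restriction, an explicit TCP connection in config.ini may be described by at most one [tcp] section (the node IDs shown are placeholders); each node is reached only at the single address given in its own node section:

[tcp]
# The only [tcp] section allowed for the connection between node 2 and node 4.
NodeId1=2
NodeId2=4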