This is a new Beta development release, incorporating new features in the NDBCLUSTER storage engine and fixing recently discovered bugs in MySQL Cluster NDB 6.4.0.
Obtaining MySQL Cluster NDB 6.4.1. MySQL Cluster NDB 6.4.1 is a source-only release. You can obtain the source code from ftp://ftp.mysql.com/pub/mysql/download/cluster_telco/mysql-5.1.31-ndb-6.4.1/.
This Beta release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 6.4 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.31 (see Section C.1.21, “Changes in MySQL 5.1.31 (19 January 2009)”).
This Beta release, as with any other pre-production release, should not be installed on production-level systems or on systems with critical data. Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Important Change: Formerly, when the management server failed to create a transporter for a data node connection, net_write_timeout seconds elapsed before the data node was actually allowed to disconnect. Now in such cases the disconnection occurs immediately. (Bug#41965)
See also Bug#41713.
Formerly, when using MySQL Cluster Replication, records for “empty” epochs (that is, epochs in which no changes to NDBCLUSTER data or tables took place) were inserted into the ndb_apply_status and ndb_binlog_index tables on the slave even when --log-slave-updates was disabled. Beginning with MySQL Cluster NDB 6.2.16 and MySQL Cluster NDB 6.3.13 this was changed so that these “empty” epochs were no longer logged. However, it is now possible to re-enable the older behavior (and cause “empty” epochs to be logged) by using the --ndb-log-empty-epochs option. For more information, see Section 16.1.3.3, “Replication Slave Options and Variables”.
See also Bug#37472.
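As a sketch of how the older behavior might be re-enabled, the option could be set in the configuration file of the slave SQL node (an illustrative fragment only; adjust the section and file location for your installation):

```ini
# my.cnf on the replication slave's SQL node (illustrative sketch)
[mysqld]
# Log rows in ndb_binlog_index and ndb_apply_status even for
# epochs in which no NDBCLUSTER changes took place
ndb-log-empty-epochs=1
```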
Cluster Replication: IPv6 networking is now supported between MySQL Cluster SQL nodes. This means that it is now possible to replicate between instances of MySQL Cluster using IPv6 addresses.
Currently, other MySQL Cluster processes (ndbd, ndbmtd, ndb_mgmd, and ndb_mgm) do not support IPv6 connections. This means that all MySQL Cluster data nodes, management servers, and management clients must connect to and be accessible from one another using IPv4. In addition, SQL nodes must use IPv4 to communicate with the cluster. There is also not yet any support in the NDB and MGM APIs for IPv6, which means that applications written using the MySQL Cluster APIs must make connections using IPv4. For more information, see Section 17.6.3, “Known Issues in MySQL Cluster Replication”.
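For example, a slave SQL node could now be pointed at a master SQL node by its IPv6 address (an illustrative sketch; the address, user, and password shown are placeholders, not values from this release):

```sql
-- On the slave SQL node; 2001:db8::10 is a documentation-only
-- placeholder for the master SQL node's IPv6 address
CHANGE MASTER TO
    MASTER_HOST='2001:db8::10',
    MASTER_PORT=3306,
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password';
START SLAVE;
```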
Bugs fixed:
A maximum of 11 TUP scans were allowed in parallel. (Bug#42084)
The management server could hang after attempting to halt it with the STOP command in the management client. (Bug#42056)
See also Bug#40922.
When using ndbmtd, one thread could flood another thread, which would cause the system to stop with a job buffer full condition (currently implemented as an abort). This could be caused by committing or aborting a large transaction (50000 rows or more) on a single data node running ndbmtd. To prevent this from happening, the number of signals that can be accepted by the system threads is now calculated before they are executed, and they are executed only if sufficient space is found. (Bug#42052)
MySQL Cluster would not compile when using libwrap. This issue was known to occur only in MySQL Cluster NDB 6.4.0. (Bug#41918)
Trying to execute an ALTER ONLINE TABLE ... ADD COLUMN statement while inserting rows into the table caused mysqld to crash. (Bug#41905)
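A statement of the affected form, when run concurrently with inserts into the same table, could trigger the crash (an illustrative sketch; the table and column names here are hypothetical):

```sql
-- Executed while another session is inserting rows into t1
ALTER ONLINE TABLE t1 ADD COLUMN c2 INT;
```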
When a data node connects to the management server, the node sends its node ID and transporter type; the management server then verifies that there is a transporter set up for that node and that it is in the correct state, and then sends back an acknowledgement to the connecting node. If the transporter was not in the correct state, no reply was sent back to the connecting node, which would then hang until a read timeout occurred (60 seconds). Now, if the transporter is not in the correct state, the management server acknowledges this promptly, and the node immediately disconnects. (Bug#41713)
See also Bug#41965.
Issuing EXIT in the management client sometimes caused the client to hang. (Bug#40922)
In the event that a MySQL Cluster backup failed due to file permissions issues, conflicting reports were issued in the management client. (Bug#34526)
If all data nodes were shut down, MySQL clients were unable to access NDBCLUSTER tables and data even after the data nodes were restarted, unless the MySQL clients themselves were restarted. (Bug#33626)