This release incorporates new features in the NDBCLUSTER storage engine and fixes recently discovered bugs in MySQL Cluster NDB 7.0.12.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 7.0 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.41 (see Section C.1.7, “Changes in MySQL 5.1.41 (05 November 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
A new configuration parameter, HeartbeatThreadPriority, makes it possible to select between a first-in, first-out (FIFO) and a round-robin scheduling policy for management node and API node heartbeat threads, as well as to set the priority of these threads. See Section 17.3.2.5, “Defining a MySQL Cluster Management Server”, or Section 17.3.2.7, “Defining SQL and Other API Nodes in a MySQL Cluster”, for more information. (Bug#49617)
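As a sketch of how this parameter might be set (the priority value shown is hypothetical; see the sections referenced above for the exact syntax supported by your release), the policy and an optional priority go in config.ini for the management and API node sections:

```ini
# Illustrative config.ini fragment. FIFO and RR are the two scheduling
# policies described above; 50 is a made-up priority value.
[MGM DEFAULT]
HeartbeatThreadPriority=FIFO,50

[API DEFAULT]
HeartbeatThreadPriority=RR
```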
Start phases are now written to the data node logs. (Bug#49158)
Disk Data:
The ndb_desc utility can now show the extent space and free extent space for subordinate BLOB and TEXT columns (stored in hidden BLOB tables by NDB). A --blob-info option has been added for this program that causes ndb_desc to generate a report for each subordinate BLOB table. For more information, see Section 17.4.9, “ndb_desc — Describe NDB Tables”. (Bug#50599)
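A usage sketch, assuming a table named t1 in database test with at least one BLOB or TEXT column (the host and table names here are hypothetical):

```shell
shell> ndb_desc -c mgm_host -d test t1 --blob-info
```

This should print the usual ndb_desc output for t1, followed by a per-subordinate-BLOB-table report including extent space and free extent space.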
Bugs fixed:
When performing a system restart of a MySQL Cluster where multi-threaded data nodes were in use, there was a slight risk that the restart would hang due to incorrect serialization of signals passed between LQH instances and proxies; some signals were sent using a proxy, and others directly, which meant that the order in which they were sent and received could not be guaranteed. If signals arrived in the wrong order, this could cause one or more data nodes to hang. Now all signals that need to be sent and received in the same order are sent using the same path. (Bug#51645)
When one or more data nodes read their LCPs and applied undo logs significantly faster than others, this could lead to a race condition causing system restarts of data nodes to hang. This could most often occur when using both ndbd and ndbmtd processes for the data nodes. (Bug#51644)
When deciding how to divide the REDO log, the DBDIH kernel block saved more than was needed to restore the previous local checkpoint, which could cause REDO log space to be exhausted prematurely (NDB error 410). (Bug#51547)
DML operations could fail with NDB error 1220 (REDO log files overloaded...) if the opening and closing of REDO log files took too much time. If this occurred as a GCI marker was being written in the REDO log while REDO log file 0 was being opened or closed, the error could persist until a GCP stop was encountered. This issue could be triggered when there was insufficient REDO log space (for example, with the configuration parameter settings NoOfFragmentLogFiles = 6 and FragmentLogFileSize = 6M) combined with a load including a very high number of updates. (Bug#51512)
See also Bug#20904.
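The redo log space implied by those example settings can be checked with a little arithmetic: the NDB redo log consists of NoOfFragmentLogFiles sets of 4 files, each of size FragmentLogFileSize. A minimal sketch (the helper function name is ours):

```python
def redo_log_capacity_mb(no_of_fragment_log_files, fragment_log_file_size_mb):
    """Total redo log space per data node, in megabytes.

    The redo log is NoOfFragmentLogFiles sets of 4 files,
    each FragmentLogFileSize in size.
    """
    return no_of_fragment_log_files * 4 * fragment_log_file_size_mb

# The settings from the bug report above: 6 sets of 4 x 6M files.
print(redo_log_capacity_mb(6, 6))    # 144 MB -- easily overloaded by update-heavy traffic

# The defaults in this release (NoOfFragmentLogFiles=16, FragmentLogFileSize=16M):
print(redo_log_capacity_mb(16, 16))  # 1024 MB
```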
A side effect of the ndb_restore --disable-indexes and --rebuild-indexes options is to change the schema versions of indexes. When a mysqld later tried to drop a table that had been restored from backup using one or both of these options, the server failed to detect these changed indexes. This caused the table to be dropped, but the indexes to be left behind, leading to problems with subsequent backup and restore operations. (Bug#51374)
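For context, these two options are typically combined in a restore sequence along these lines (node IDs, backup ID, and paths are placeholders):

```shell
# Restore metadata with index creation disabled (run once).
ndb_restore -c mgm_host -n 1 -b 1 --restore-meta --disable-indexes /path/to/backup
# Restore data for each data node.
ndb_restore -c mgm_host -n 1 -b 1 --restore-data /path/to/backup
ndb_restore -c mgm_host -n 2 -b 1 --restore-data /path/to/backup
# Rebuild all indexes once the data has been restored.
ndb_restore -c mgm_host -n 1 -b 1 --rebuild-indexes /path/to/backup
```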
ndb_restore crashed while trying to restore a corrupted backup, due to missing error handling. (Bug#51223)
The ndb_restore message Successfully created index `PRIMARY`... was directed to stderr instead of stdout. (Bug#51037)
When using NoOfReplicas equal to 1 or 2, if data nodes from one node group were restarted 256 times and applications were running traffic such that it would encounter NDB error 1204 (Temporary failure, distribution changed), the live node in the node group would crash, causing the cluster to crash as well. The crash occurred only when the error was encountered on the 256th restart; encountering the error on any previous or subsequent restart did not cause any problems. (Bug#50930)
Replication of a MySQL Cluster using multi-threaded data nodes could fail with a forced shutdown of some data nodes because ndbmtd exhausted LongMessageBuffer much more quickly than ndbd. After this fix, passing of replication data between the DBTUP and SUMA NDB kernel blocks is done using DataMemory rather than LongMessageBuffer.
Until you can upgrade, you may be able to work around this issue by increasing the LongMessageBuffer setting; doubling the default should be sufficient in most cases. (Bug#46914)
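The workaround would be applied in the data node section of config.ini along these lines (the value shown is illustrative; check the effective default for your release and double it):

```ini
[NDBD DEFAULT]
# Workaround sketch: set LongMessageBuffer to twice its default.
# 8M here is an example value, not a recommendation for all setups.
LongMessageBuffer=8M
```

A rolling restart of the data nodes is needed for this change to take effect.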
A SELECT requiring a sort could fail with the error Can't find record in 'table' when run concurrently with a DELETE from the same table. (Bug#45687)
Cluster API: An issue internal to ndb_mgm could cause problems when trying to start a large number of data nodes at the same time. (Bug#51273)