This is a bugfix release, fixing recently discovered bugs in the previous MySQL Cluster NDB 6.2 release.
This release incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.2 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.44 (see Section C.1.3, “Changes in MySQL 5.1.44 (04 February 2010)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Cluster API:
It is now possible to determine, using the
ndb_desc utility or the NDB API, which data
nodes contain replicas of which partitions. For
ndb_desc, a new
--extra-node-info
option is
added to cause this information to be included in its output. A
new method
NdbDictionary::Object::Table::getFragmentNodes()
is added to the NDB API for obtaining this information
programmatically.
(Bug#51184)
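For example, the partition-to-node mapping for a table might be obtained as shown here (the connection string, database, and table names are placeholders):
shell> ndb_desc -c mgmhost -d test t1 --extra-node-info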
On Solaris platforms, the MySQL Cluster management server and
NDB API applications now use CLOCK_REALTIME
as the default clock.
(Bug#46183)
Bugs fixed:
Important Change:
The --with-ndb-port-base
option for
configure did not function correctly, and has
been deprecated. Attempting to use this option produces the
warning Ignoring deprecated option
--with-ndb-port-base.
Beginning with MySQL Cluster NDB 7.1.0, the deprecation warning
itself is removed, and the --with-ndb-port-base
option is simply handled as an unknown and invalid option if you
try to use it.
(Bug#47941)
See also Bug#38502.
Cluster Replication: Important Change:
In a MySQL Cluster acting as a replication slave and having
multiple SQL nodes, DDL statements were recorded in the binary
log only by the SQL node receiving events directly from the
master, and then only if this SQL node had binary logging
enabled; other SQL nodes in the slave cluster failed to log DDL
statements, regardless of their individual
--log-bin
settings.
The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:
DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.
DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.
Replicated DDL and DML statements on the slave are logged by
any attached mysqld that has both
--log-bin
and
--log-slave-updates
enabled.
Replicated DDL and DML statements are logged with the server
ID of the original (master) MySQL server by any attached
mysqld that has both
--log-bin
and
--log-slave-updates
enabled.
Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should do one of the following:
Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on “old” SQL nodes until all SQL nodes have been upgraded.
Make sure that
--log-slave-updates
is
enabled on all SQL nodes performing binary logging prior
to the upgrade, so that all DDL is captured.
Logging of DML statements was not affected by this issue.
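As an illustrative sketch only (not a complete configuration), a slave cluster SQL node that should log replicated DDL and DML would have both options enabled in its my.cnf file:
[mysqld]
log-bin
log-slave-updates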
Packaging:
The pkg
installer for MySQL Cluster on
Solaris did not perform a complete installation due to an
invalid directory reference in the post-install script.
(Bug#41998)
When using NoOfReplicas equal to 1 or 2, if data nodes from one
node group were restarted 256 times and applications were running
traffic such that they would encounter NDB error 1204
(Temporary failure, distribution changed), the live node in the
node group would crash, causing the cluster to crash as well. The
crash occurred only when the error was encountered on the 256th
restart; encountering the error on any previous or subsequent
restart did not cause any problems.
(Bug#50930)
If a query on an NDB
table compared
a constant string value to a column, and the length of the
string was greater than that of the column, condition pushdown
did not work correctly. (The string was truncated to fit the
column length before being pushed down.) Now in such cases, the
condition is no longer pushed down.
(Bug#49459)
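A minimal sketch of the affected pattern, using hypothetical table and column names:
CREATE TABLE t1 (c1 VARCHAR(10)) ENGINE=NDB;
# The string constant is longer than the column's 10 characters;
# such a condition is now evaluated by mysqld rather than pushed down:
SELECT * FROM t1 WHERE c1 = 'abcdefghijklmnop';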
When performing tasks that generated large amounts of I/O (such as when using ndb_restore), an internal memory buffer could overflow, causing data nodes to fail with signal 6.
Subsequent analysis showed that this buffer was not actually required, so this fix removes it. (Bug#48861)
Performing intensive inserts and deletes in parallel with a high
scan load could cause data node crashes due to a failure in the
DBACC kernel block. This was because the check for when to
perform bucket splits or merges considered only the first 4
scans.
(Bug#48700)
The creation of an ordered index on a table undergoing DDL operations could cause a data node crash under certain timing-dependent conditions. (Bug#48604)
In certain cases, performing very large inserts on
NDB
tables when using
ndbmtd caused the memory allocations for
ordered or unique indexes (or both) to be exceeded. This could
cause aborted transactions and possibly lead to data node
failures.
(Bug#48037)
See also Bug#48113.
When employing NDB
native backup to
back up and restore an empty NDB
table that used a non-sequential
AUTO_INCREMENT
value, the
AUTO_INCREMENT
value was not restored
correctly.
(Bug#48005)
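For reference, one way an empty table can come to hold a non-sequential AUTO_INCREMENT value (names are hypothetical):
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY) ENGINE=NDB;
INSERT INTO t1 VALUES (100);  # explicit value advances the counter
DELETE FROM t1;               # table is empty, but the next value is 101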
SHOW CREATE TABLE
did not display
the AUTO_INCREMENT
value for
NDB
tables having
AUTO_INCREMENT
columns.
(Bug#47865)
Under some circumstances, when a scan encountered an error early
in processing by the DBTC
kernel block (see
The DBTC
Block), a node
could crash as a result. Such errors could be caused by
applications sending incorrect data, or, more rarely, by a
DROP TABLE
operation executed in
parallel with a scan.
(Bug#47831)
When starting a node and synchronizing tables, memory pages were allocated even for empty fragments. In certain situations, this could lead to insufficient memory. (Bug#47782)
mysqld allocated an excessively large buffer
for handling BLOB
values due to
overestimating their size. (For each row, enough space was
allocated to accommodate every
BLOB
or
TEXT
column value in the result
set.) This could adversely affect performance when using tables
containing BLOB
or
TEXT
columns; in a few extreme
cases, this issue could also cause the host system to run out of
memory unexpectedly.
(Bug#47574)
NDBCLUSTER uses a dynamically-allocated buffer to store BLOB or
TEXT column data that is read from rows in MySQL Cluster tables.
When an instance of the NDBCLUSTER table handler was recycled
(this can happen due to table definition cache pressure or to
operations such as FLUSH TABLES or ALTER TABLE), if the last row
read contained blobs of zero length, the buffer was not freed,
even though the reference to it was lost. This resulted in a
memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');
Now repeatedly execute a SELECT on this table such that the
zero-length LONGTEXT row is last, followed by a FLUSH TABLES
statement (which forces the handler object to be re-used), as
shown here:
SELECT a, LENGTH(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to
the size of the stored
LONGTEXT
value
each time these two statements were executed.
(Bug#47573)
Large transactions involving joins between tables containing
BLOB
columns used excessive
memory.
(Bug#47572)
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug#47505)
NDB
stores blob column data in a
separate, hidden table that is not accessible from MySQL. If
this table was missing for some reason (such as accidental
deletion of the file corresponding to the hidden table) when
making a MySQL Cluster native backup, ndb_restore crashed when
attempting to restore the backup. Now in such cases, ndb_restore
fails with the error message Table
table_name
has blob column
(column_name
) with missing parts
table in backup instead.
(Bug#47289)
For very large values of MaxNoOfTables + MaxNoOfAttributes, the
calculation for StringMemory could overflow when creating large
numbers of tables, leading to NDB error 773 (Out of string
memory, please modify StringMemory config parameter), even when
StringMemory was set to 100 (100 percent).
(Bug#47170)
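As a hedged sketch, these are data node parameters set in config.ini; the values shown here are arbitrary:
[ndbd default]
MaxNoOfTables=4096
MaxNoOfAttributes=100000
StringMemory=100   # values up to 100 are interpreted as a percentage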
The default value for the StringMemory configuration parameter,
unlike other MySQL Cluster configuration parameters, was not set
in ndb/src/mgmsrv/ConfigInfo.cpp.
(Bug#47166)
Signals from a failed API node could be received after an
API_FAILREQ signal (see Operations and Signals) had been received
from that node, which could result in invalid states for
processing subsequent signals. Now, all pending signals from a
failing API node are processed before any API_FAILREQ signal is
received.
(Bug#47039)
See also Bug#44607.
Using triggers on NDB
tables caused
ndb_autoincrement_prefetch_sz
to be treated as having the NDB kernel's internal default
value (32) and the value for this variable as set on the
cluster's SQL nodes to be ignored.
(Bug#46712)
Full table scans failed to execute when the cluster contained more than 21 table fragments.
The number of table fragments in the cluster can be calculated as
the number of data nodes, times 8 (that is, times the value of
the internal constant MAX_FRAG_PER_NODE), divided by the number
of replicas. Thus, when NoOfReplicas = 1, at least 3 data nodes
were required to trigger this issue, and when NoOfReplicas = 2,
at least 4 data nodes were required to do so.
Ending a line in the config.ini
file with
an extra semicolon character (;
) caused
reading the file to fail with a parsing error.
(Bug#46242)
When combining an index scan and a delete with a primary key delete, the index scan and delete failed to initialize a flag properly. This could in rare circumstances cause a data node to crash. (Bug#46069)
Problems could arise when using
VARCHAR
columns
whose size was greater than 341 characters and which used the
utf8_unicode_ci
collation. In some cases,
this combination of conditions could cause certain queries and
OPTIMIZE TABLE
statements to
crash mysqld.
(Bug#45053)
Running an ALTER TABLE
statement
while an NDB backup was in progress caused
mysqld to crash.
(Bug#44695)
If a node failed while sending a fragmented long signal, the receiving node did not free long signal assembly resources that it had allocated for the fragments of the long signal that had already been received. (Bug#44607)
When performing auto-discovery of tables on individual SQL
nodes, NDBCLUSTER
attempted to overwrite
existing MyISAM
.frm
files and corrupted them.
Workaround.
In the mysql client, create a new table (t2) with the same
definition as the corrupted table (t1). Use your system shell or
file manager to rename the old .MYD file to the new file name
(for example, mv t1.MYD t2.MYD). In the mysql client, repair the
new table, drop the old one, and rename the new table using the
old file name (for example, RENAME TABLE t2 TO t1).
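Put together, the workaround might look like this (a sketch only; the column list and file paths depend on your table and data directory):
mysql> CREATE TABLE t2 (...);   # same definition as t1
shell> mv t1.MYD t2.MYD         # run in the database directory
mysql> REPAIR TABLE t2;
mysql> DROP TABLE t1;
mysql> RENAME TABLE t2 TO t1;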
When starting a cluster with a great many tables, it was possible for MySQL client connections as well as the slave SQL thread to issue DML statements against MySQL Cluster tables before mysqld had finished connecting to the cluster and making all tables writeable. This resulted in Table ... is read only errors for clients and the Slave SQL thread.
This issue is fixed by introducing the
--ndb-wait-setup
option for the
MySQL server. This provides a configurable maximum amount of
time that mysqld waits for all
NDB
tables to become writeable,
before allowing MySQL clients or the slave SQL thread to
connect.
(Bug#40679)
See also Bug#46955.
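A minimal sketch, assuming the option takes a timeout value in seconds (consult the server option documentation for the exact unit and default):
[mysqld]
ndb-wait-setup=30   # wait up to 30 seconds for all NDB tables to become writeable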
Running ndb_restore with the
--print
or --print_log
option
could cause it to crash.
(Bug#40428, Bug#33040)
When building MySQL Cluster, it was possible to configure the
build using --with-ndb-port
without supplying a
port number. Now in such cases, configure
fails with an error.
(Bug#38502)
See also Bug#47941.
An insert on an NDB
table was not
always flushed properly before performing a scan. One way in
which this issue could manifest was that
LAST_INSERT_ID()
sometimes failed
to return correct values when using a trigger on an
NDB
table.
(Bug#38034)
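One pattern in which the problem could appear, sketched with hypothetical names:
CREATE TABLE t1 (id INT AUTO_INCREMENT PRIMARY KEY, v INT) ENGINE=NDB;
CREATE TABLE t1_log (id INT) ENGINE=NDB;
CREATE TRIGGER t1_ai AFTER INSERT ON t1
  FOR EACH ROW INSERT INTO t1_log VALUES (NEW.id);
INSERT INTO t1 (v) VALUES (1);
SELECT LAST_INSERT_ID();   # could return an incorrect value before this fix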
If the cluster crashed during the execution of a
CREATE LOGFILE GROUP
statement,
the cluster could not be restarted afterwards.
(Bug#36702)
See also Bug#34102.
Some joins on large NDB
tables
having TEXT
or
BLOB
columns could cause
mysqld processes to leak memory. The joins
did not need to reference the
TEXT
or
BLOB
columns directly for this
issue to occur.
(Bug#36701)
When the MySQL server SQL mode included
STRICT_TRANS_TABLES
, storage
engine warnings and error codes specific to
NDB
were returned when errors occurred,
instead of the MySQL server errors and error codes expected by
some programming APIs (such as Connector/J) and applications.
(Bug#35990)
On Mac OS X 10.5, commands entered in the management client
failed and sometimes caused the client to hang, although
management client commands invoked using the
--execute
(or
-e
) option from the system shell worked
normally.
For example, the following command failed with an error and hung until killed manually, as shown here:
ndb_mgm> SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36
^C
However, the same management client command, invoked from the system shell as shown here, worked correctly:
shell> ndb_mgm -e "SHOW"
See also Bug#34438.
When a copying operation exhausted the available space on a data
node while copying large BLOB
columns, this could lead to failure of the data node and a
Table is full error on the SQL node which
was executing the operation. Examples of such operations could
include an ALTER TABLE
that
changed an INT
column to a
BLOB
column, or a bulk insert of
BLOB
data that failed due to
running out of space or to a duplicate key error.
(Bug#34583, Bug#48040)
Trying to insert more rows than would fit into an
NDB
table caused data nodes to crash. Now in
such situations, the insert fails gracefully with error 633
Table fragment hash index has reached maximum
possible size.
(Bug#34348)
The error message text for NDB
error code 410 (REDO log files
overloaded...) was truncated.
(Bug#23662)
Replication:
When mysqlbinlog
--verbose
was used to read a
binary log that had been recorded using the row-based format,
the output for events that updated some but not all columns of
tables was not correct.
(Bug#47323)
Replication:
In some cases, a STOP SLAVE
statement could cause the replication slave to crash. This issue
was specific to MySQL on Windows or Macintosh platforms.
(Bug#45238, Bug#45242, Bug#45243, Bug#46013, Bug#46014, Bug#46030)
See also Bug#40796.
Disk Data: Inserts of blob column values into a Disk Data table that exhausted the tablespace resulted in misleading error messages about rows not being found in the table rather than the expected error Out of extents, tablespace full. (Bug#48113)
Disk Data: A local checkpoint of an empty fragment could cause a crash during a system restart which was based on that LCP. (Bug#47832)
See also Bug#41915.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
This miscalculation was not reflected in the contents of the
INFORMATION_SCHEMA.FILES
table,
as it applied to extents allocated to a fragment, and not to a
file.
Disk Data:
If the value set in the config.ini file for FileSystemPathDD,
FileSystemPathDataFiles, or FileSystemPathUndoFiles was identical
to the value set for FileSystemPath, that parameter was ignored
when starting the data node with the --initial option. As a
result, the Disk Data files in the corresponding directory were
not removed when performing an initial start of the affected data
node or data nodes.
(Bug#46243)
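A hedged illustration of the configuration that triggered the issue (paths are placeholders):
[ndbd default]
FileSystemPath=/var/lib/mysql-cluster
# Identical to FileSystemPath, so it was ignored with --initial:
FileSystemPathDD=/var/lib/mysql-cluster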
Disk Data: Repeatedly creating and then dropping Disk Data tables could eventually lead to data node failures. (Bug#45794, Bug#48910)
Disk Data:
When a crash occurs due to a problem in Disk Data code, the
currently active page list is printed to stdout (that is, in one
or more ndb_nodeid_out.log files). One of these lists could
contain an endless loop; this caused a printout that was
effectively never-ending. Now in such cases, a maximum of 512
entries is printed from each list.
(Bug#42431)
Cluster Replication:
When expire_logs_days
was set,
the thread performing the purge of the log files could deadlock,
causing all binary log operations to stop.
(Bug#49536)
Cluster Replication: When using multiple active replication channels, it was sometimes possible that a node group would fail on the slave cluster, causing the slave cluster to shut down. (Bug#47935)
Cluster Replication:
When recording a binary log using the
--ndb-log-update-as-write
and
--ndb-log-updated-only
options
(both enabled by default) and later attempting to apply that
binary log with mysqlbinlog, any operations
that were played back from the log but which updated only some
(but not all) columns caused any columns that were not updated
to be reset to their default values.
(Bug#47674)
Cluster Replication:
mysqlbinlog failed to correctly apply a binary log that had been
recorded using --ndb-log-update-as-write=1.
(Bug#46662)
Cluster API:
When reading blob data with lock mode
LM_SimpleRead
, the lock was not upgraded as
expected.
(Bug#51034)
Cluster API:
When a DML operation failed due to a uniqueness violation on an
NDB table having more than one unique index, it was difficult to
determine which constraint caused the failure; it was necessary
to obtain an NdbError object, then decode its details property,
which in turn could lead to memory management issues in
application code.
To help solve this problem, a new API method
Ndb::getNdbErrorDetail() is added, providing a well-formatted
string containing more precise information about the index that
caused the unique constraint violation. The following additional
changes are also made in the NDB API:
Use of NdbError.details is now deprecated in favor of the new method.
The NdbDictionary::listObjects() method has been modified to provide more information.
For more information, see Ndb::getNdbErrorDetail(), The NdbError
Structure, and Dictionary::listObjects().
(Bug#48851)
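The ambiguity addressed here can be seen with a table such as the following (hypothetical names); before this change, the duplicate-key error raised by the second INSERT did not readily identify which of the two unique indexes was violated:
CREATE TABLE t1 (
  a INT NOT NULL,
  b INT NOT NULL,
  UNIQUE KEY ua (a),
  UNIQUE KEY ub (b)
) ENGINE=NDB;
INSERT INTO t1 VALUES (1, 1);
INSERT INTO t1 VALUES (1, 2);   # fails on ua; Ndb::getNdbErrorDetail() now identifies the index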
Cluster API:
The NDB API methods Dictionary::listEvents()
,
Dictionary::listIndexes()
,
Dictionary::listObjects()
, and
NdbOperation::getErrorLine()
formerly had
both const
and non-const
variants. The non-const
versions of these
methods have been removed. In addition, the
NdbOperation::getBlobHandle()
method has been
re-implemented in order to provide consistent internal
semantics.
(Bug#47798)
Cluster API:
In some circumstances, if an API node encountered a data node
failure between the creation of a transaction and the start of a
scan using that transaction, then any subsequent calls to
startTransaction()
and
closeTransaction()
could cause the same
transaction to be started and closed repeatedly.
(Bug#47329)
Cluster API: A duplicate read of a column caused NDB API applications to crash. (Bug#45282)
Cluster API:
Performing multiple operations using the same primary key within
the same
NdbTransaction::execute()
call could lead to a data node crash.
This fix does not change the fact that performing multiple
operations using the same primary key within the same execute()
call is not supported; because there is no way to determine the
order of such operations, the result of such combined operations
remains undefined.
See also Bug#44015.
Cluster API:
The error handling shown in the example file
ndbapi_scan.cpp
included with the MySQL
Cluster distribution was incorrect.
(Bug#39573)
Cluster API:
When using blobs, calling getBlobHandle()
requires the full key to have been set using
equal()
, because
getBlobHandle()
must access the key for
adding blob table operations. However, if
getBlobHandle()
was called without first
setting all parts of the primary key, the application using it
crashed. Now, an appropriate error code is returned instead.
(Bug#28116, Bug#48973)
API: The fix for Bug#24507 could in some cases lead to client application failures due to a race condition. Now the server waits for the “dummy” thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug#42850)
On some Unix/Linux platforms, building from source could fail
with an error referring to a missing LT_INIT program. This was
caused by libtool versions 2.1 and earlier.
(Bug#51009)
On Mac OS X or Windows, sending a SIGHUP
signal to the server or an asynchronous flush (triggered by
flush_time
) caused the server
to crash.
(Bug#47525)
In rare cases, if a thread was interrupted during a FLUSH
PRIVILEGES operation, a debug assertion occurred later due to
improper diagnostic area setup. In addition, a KILL operation
could cause a console error message referring to a diagnostic
area state without first ensuring that the state existed.
(Bug#33982)
When using the ARCHIVE
storage
engine, SHOW TABLE STATUS
displayed incorrect
information for Max_data_length
,
Data_length
and
Avg_row_length
.
(Bug#29203)
Installation of MySQL on Windows would fail to set the correct location for the character set files, which could lead to mysqld and mysql failing to initialize properly. (Bug#17270)