MySQL Cluster NDB 7.0.8 was pulled shortly after release due to Bug#47844. Users seeking to upgrade from a previous MySQL Cluster NDB 7.0 release should instead use MySQL Cluster NDB 7.0.8a, which contains a fix for this bug, in addition to all bugfixes and improvements made in MySQL Cluster NDB 7.0.8.
This release incorporates new features in the NDBCLUSTER storage engine and fixes recently discovered bugs in MySQL Cluster NDB 7.0.7.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 7.0 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.37 (see Section C.1.13, “Changes in MySQL 5.1.37 (13 July 2009)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
A new option --log-name has been added for ndb_mgmd. This option can be used to provide a name for the current node, which is then used to identify it in messages written to the cluster log. For more information, see Section 17.4.4, “ndb_mgmd — The MySQL Cluster Management Server Daemon”.
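For example, a management server might be started with a node name as shown here (the configuration file path and the name value are illustrative only):
shell> ndb_mgmd -f /usr/local/mysql/config.ini --log-name=mgmd_alpha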
(Bug#47643)
--config-dir is now accepted by ndb_mgmd as an alias for the --configdir option.
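For example, the following two invocations are now equivalent (the directory shown is illustrative only):
shell> ndb_mgmd -f config.ini --configdir=/var/lib/mysql-cluster
shell> ndb_mgmd -f config.ini --config-dir=/var/lib/mysql-cluster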
(Bug#42013)
Disk Data:
Two new columns have been added to the output of ndb_desc to make it possible to determine how much of the disk space allocated to a given table or fragment remains free. (This information is not available from the INFORMATION_SCHEMA.FILES table, since the FILES table applies only to Disk Data files.) For more information, see Section 17.4.9, “ndb_desc — Describe NDB Tables”.
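For example, a Disk Data table might be examined as shown here; the connection string, database name, and table name are illustrative only, and the exact option required to display the per-fragment information may differ, so check the ndb_desc documentation:
shell> ndb_desc -c mgmhost:1186 -d test dd_table -p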
(Bug#47131)
Bugs fixed:
Important Change:
Previously, the MySQL Cluster management node and data node programs, when run on Windows platforms, required the --nodaemon option in order to produce console output. Now, these programs run in the foreground when invoked from the command line on Windows, matching the behavior of mysqld.exe.
(Bug#45588)
Cluster Replication: Important Change:
In a MySQL Cluster acting as a replication slave and having multiple SQL nodes, only the SQL node receiving events directly from the master recorded DDL statements in its binary log, and then only if this SQL node had binary logging enabled; other SQL nodes in the slave cluster failed to log DDL statements, regardless of their individual --log-bin settings.
The fix for this issue aligns binary logging of DDL statements with that of DML statements. In particular, you should take note of the following:
DDL and DML statements on the master cluster are logged with the server ID of the server that actually writes the log.
DDL and DML statements on the master cluster are logged by any attached mysqld that has binary logging enabled.
Replicated DDL and DML statements on the slave are logged by any attached mysqld that has both --log-bin and --log-slave-updates enabled.
Replicated DDL and DML statements are logged with the server ID of the original (master) MySQL server by any attached mysqld that has both --log-bin and --log-slave-updates enabled.
Effect on upgrades. When upgrading from a previous MySQL Cluster release, you should do one of the following:
Upgrade servers that are performing binary logging before those that are not; do not perform any DDL on “old” SQL nodes until all SQL nodes have been upgraded.
Make sure that --log-slave-updates is enabled on all SQL nodes performing binary logging prior to the upgrade, so that all DDL is captured.
Logging of DML statements was not affected by this issue.
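For example, an SQL node attached to the slave cluster that should log replicated DDL and DML statements might be started with both options enabled, as sketched here (other options that a production setup requires, such as the cluster connection string, are omitted):
shell> mysqld --ndbcluster --log-bin --log-slave-updates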
The following issues with error logs generated by ndbmtd were addressed:
The version string was sometimes truncated, or even not shown, depending on the number of threads in use (the more threads, the worse the problem). Now the version string is shown in full, as are the file names of all trace files (where available).
In the event of a crash, the thread number of the thread that crashed was not printed. Now this information is supplied, if available.
mysqld allocated an excessively large buffer for handling BLOB values due to overestimating their size. (For each row, enough space was allocated to accommodate every BLOB or TEXT column value in the result set.) This could adversely affect performance when using tables containing BLOB or TEXT columns; in a few extreme cases, this issue could also cause the host system to run out of memory unexpectedly.
(Bug#47574)
NDBCLUSTER uses a dynamically allocated buffer to store BLOB or TEXT column data that is read from rows in MySQL Cluster tables.
When an instance of the NDBCLUSTER table handler was recycled (this can happen due to table definition cache pressure or to operations such as FLUSH TABLES or ALTER TABLE), if the last row read contained blobs of zero length, the buffer was not freed, even though the reference to it was lost. This resulted in a memory leak.
For example, consider the table defined and populated as shown here:
CREATE TABLE t (a INT PRIMARY KEY, b LONGTEXT) ENGINE=NDB;
INSERT INTO t VALUES (1, REPEAT('F', 20000));
INSERT INTO t VALUES (2, '');
Now repeatedly execute a SELECT on this table, such that the zero-length LONGTEXT row is last, followed by a FLUSH TABLES statement (which forces the handler object to be re-used), as shown here:
SELECT a, length(b) FROM t ORDER BY a;
FLUSH TABLES;
Prior to the fix, this resulted in a memory leak proportional to the size of the stored LONGTEXT value each time these two statements were executed.
(Bug#47573)
Large transactions involving joins between tables containing BLOB columns used excessive memory.
(Bug#47572)
After an NDB table had an ALTER ONLINE TABLE operation performed on it in a MySQL Cluster running a MySQL Cluster NDB 6.3.x release, it could not be upgraded online to a MySQL Cluster NDB 7.0.x release. This issue was detected using MySQL Cluster NDB 6.3.20, but is likely to affect any MySQL Cluster NDB 6.3.x release supporting online DDL operations.
(Bug#47542)
When using multi-threaded data nodes (ndbmtd) with NoOfReplicas set to a value greater than 2, attempting to restart any of the data nodes caused a forced shutdown of the entire cluster.
(Bug#47530)
A variable was left uninitialized while a data node copied data from its peers as part of its startup routine; if the starting node died during this phase, this could lead to a crash of the cluster when the node was later restarted. (Bug#47505)
Handling of LQH_TRANS_REQ signals was done incorrectly in DBLQH when the transaction coordinator failed during a LQH_TRANS_REQ session. This led to incorrect handling of multiple node failures, particularly when using ndbmtd.
(Bug#47476)
The NDB kernel's parser (in ndb/src/common/util/Parser.cpp) did not interpret the backslash (“\”) character correctly.
(Bug#47426)
During an online alter table operation, the new table definition was made visible to users during the prepare phase, when it should be exposed only during and after a commit. This issue could affect NDB API applications, mysqld processes, or data node processes. (Bug#47375)
Aborting an online add column operation (for example, due to resource problems on a single data node, but not others) could lead to a forced node shutdown. (Bug#47364)
Clients attempting to connect to the cluster during shutdown could sometimes cause the management server to crash. (Bug#47325)
The size of the table descriptor pool used in the DBTUP kernel block was incorrect. This could lead to a data node crash when an LQH sent a CREATE_TAB_REF signal.
See also Bug#44908.
When a data node restarts, it first runs the redo log until
reaching the latest restorable global checkpoint; after this it
scans the remainder of the redo log file, searching for entries
that should be invalidated so they are not used in any
subsequent restarts. (It is possible, for example, if restoring
GCI number 25, that there might be entries belonging to GCI 26
in the redo log.) However, under certain rare conditions, during
the invalidation process, the redo log files themselves were not
always closed while scanning ahead in the redo log. In rare cases, this could lead to MaxNoOfOpenFiles being exceeded, causing the data node to crash.
(Bug#47171)
For very large values of MaxNoOfTables + MaxNoOfAttributes, the calculation for StringMemory could overflow when creating large numbers of tables, leading to NDB error 773 (Out of string memory, please modify StringMemory config parameter), even when StringMemory was set to 100 (100 percent).
(Bug#47170)
The default value for the StringMemory configuration parameter, unlike other MySQL Cluster configuration parameters, was not set in ndb/src/mgmsrv/ConfigInfo.cpp.
(Bug#47166)
Signals from a failed API node could be received after an API_FAILREQ signal (see Operations and Signals) had been received from that node, which could result in invalid states for processing subsequent signals. Now, all pending signals from a failing API node are processed before any API_FAILREQ signal is received.
(Bug#47039)
See also Bug#44607.
When reloading the management server configuration, only the last changed parameter was logged. (Bug#47036)
When using ndbmtd, a parallel DROP TABLE operation could cause data nodes to have different views of which tables should be included in local checkpoints; this discrepancy could lead to a node failure during an LCP.
(Bug#46873)
Using triggers on NDB tables caused ndb_autoincrement_prefetch_sz to be treated as having the NDB kernel's internal default value (32); the value set for this variable on the cluster's SQL nodes was ignored.
(Bug#46712)
Now, when started with --initial --reload, ndb_mgmd tries to copy the configuration of an existing ndb_mgmd process with a confirmed configuration. This works only if the configuration files used by both management nodes are exactly the same.
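For example (the configuration file name is illustrative only):
shell> ndb_mgmd -f config.ini --initial --reload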
(Bug#45495, Bug#46488)
See also Bug#42015.
On Windows, ndbd --initial could hang in an endless loop while attempting to remove directories.
(Bug#45402)
For multi-threaded data nodes, insufficient fragment records were allocated in the DBDIH NDB kernel block, which could lead to error 306 when creating many tables; the number of fragment records allocated did not take into account the number of LQH instances.
(Bug#44908)
Running an ALTER TABLE statement while an NDB backup was in progress caused mysqld to crash.
(Bug#44695)
When performing auto-discovery of tables on individual SQL nodes, NDBCLUSTER attempted to overwrite existing MyISAM .frm files and corrupted them.
Workaround. In the mysql client, create a new table (t2) with the same definition as the corrupted table (t1). Use your system shell or file manager to rename the old .MYD file to the new file name (for example, mv t1.MYD t2.MYD). In the mysql client, repair the new table, drop the old one, and rename the new table using the old file name (for example, RENAME TABLE t2 TO t1).
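The following is a minimal sketch of this workaround; the database name (db1), data directory (/var/lib/mysql), and column definitions are placeholders that you must replace to match your own setup and the corrupted table's actual definition:
shell> mysql db1 -e "CREATE TABLE t2 (a INT, b VARCHAR(20)) ENGINE=MYISAM"   # placeholder definition
shell> mv /var/lib/mysql/db1/t1.MYD /var/lib/mysql/db1/t2.MYD
shell> mysql db1 -e "REPAIR TABLE t2"
shell> mysql db1 -e "DROP TABLE t1; RENAME TABLE t2 TO t1"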
When started with the --initial and --reload options, if ndb_mgmd could not find a configuration file or connect to another management server, it appeared to hang. Now, when trying to fetch its configuration from another management node, ndb_mgmd signals every 30 seconds (Trying to get configuration from other mgmd(s)) that it has not yet done so.
(Bug#42015)
See also Bug#45495.
Running ndb_restore with the --print or --print_log option could cause it to crash.
(Bug#40428, Bug#33040)
An insert on an NDB table was not always flushed properly before performing a scan. One way in which this issue could manifest was that LAST_INSERT_ID() sometimes failed to return correct values when using a trigger on an NDB table.
(Bug#38034)
When a data node received a TAKE_OVERTCCONF signal from the master before that node had received a NODE_FAILREP, a race condition could in theory result.
(Bug#37688)
Some joins on large NDB tables having TEXT or BLOB columns could cause mysqld processes to leak memory. The joins did not need to reference the TEXT or BLOB columns directly for this issue to occur.
(Bug#36701)
On Mac OS X 10.5, commands entered in the management client failed and sometimes caused the client to hang, although management client commands invoked using the --execute (or -e) option from the system shell worked normally.
For example, the following command failed with an error and hung until killed manually, as shown here:
ndb_mgm> SHOW
Warning, event thread startup failed, degraded printouts as result, errno=36^C
However, the same management client command, invoked from the system shell as shown here, worked correctly:
shell> ndb_mgm -e "SHOW"
See also Bug#34438.
Replication:
In some cases, a STOP SLAVE statement could cause the replication slave to crash. This issue was specific to MySQL on Windows or Macintosh platforms.
(Bug#45238, Bug#45242, Bug#45243, Bug#46013, Bug#46014, Bug#46030)
See also Bug#40796.
Disk Data: Calculation of free space for Disk Data table fragments was sometimes done incorrectly. This could lead to unnecessary allocation of new extents even when sufficient space was available in existing ones for inserted data. In some cases, this might also lead to crashes when restarting data nodes.
This miscalculation was not reflected in the contents of the INFORMATION_SCHEMA.FILES table, as it applied to extents allocated to a fragment, and not to a file.
Cluster API:
In some circumstances, if an API node encountered a data node failure between the creation of a transaction and the start of a scan using that transaction, then any subsequent calls to startTransaction() and closeTransaction() could cause the same transaction to be started and closed repeatedly.
(Bug#47329)
Cluster API:
Performing multiple operations using the same primary key within the same NdbTransaction::execute() call could lead to a data node crash.
This fix does not change the fact that performing multiple operations using the same primary key within the same execute() call is not supported; because there is no way to determine the order of such operations, the result of such combined operations remains undefined.
See also Bug#44015.
API: The fix for Bug#24507 could lead in some cases to client application failures due to a race condition. Now the server waits for the “dummy” thread to return before exiting, thus making sure that only one thread can initialize the POSIX threads library. (Bug#42850)