This release incorporates new features in the NDBCLUSTER storage engine and fixes recently discovered bugs in MySQL Cluster NDB 7.0.13.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster NDB 6.1, 6.2, 6.3, and 7.0 releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.1 through MySQL 5.1.44 (see Section C.1.3, “Changes in MySQL 5.1.44 (04 February 2010)”).
Please refer to our bug database at http://bugs.mysql.com/ for more details about the individual bugs fixed in this version.
Functionality added or changed:
Cluster API: It is now possible to determine, using the ndb_desc utility or the NDB API, which data nodes contain replicas of which partitions. For ndb_desc, a new --extra-node-info option has been added that causes this information to be included in its output. A new method, NdbDictionary::Object::Table::getFragmentNodes(), has been added to the NDB API for obtaining this information programmatically. (Bug#51184)
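As a sketch, the new option can be invoked from the command line as follows (the test database and table t1 are illustrative, and the exact output columns may vary by version; --extra-node-info is used together with -p, the short form of --extra-partition-info):

```
shell> ndb_desc -d test t1 -p --extra-node-info
```

With -p, ndb_desc prints per-partition information; adding --extra-node-info also lists the data nodes holding replicas of each partition.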
Formerly, the REPORT and DUMP commands returned output to all ndb_mgm clients connected to the same MySQL Cluster. Now, these commands return their output only to the ndb_mgm client that actually issued the command. (Bug#40865)
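For example, in an interactive ndb_mgm session (an illustrative transcript; with this change, only the client that issues these commands sees their output):

```
ndb_mgm> ALL REPORT MEMORYUSAGE
ndb_mgm> 2 REPORT BACKUPSTATUS
```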
Cluster Replication: MySQL Cluster Replication now supports attribute promotion and demotion for row-based replication between columns of different but similar types on the master and the slave. For example, it is possible to promote an INT column on the master to a BIGINT column on the slave, and to demote a TEXT column to a VARCHAR column.
The implementation of type demotion distinguishes between lossy and non-lossy type conversions; their use on the slave can be controlled by setting the slave_type_conversions global server system variable.
For more information about attribute promotion and demotion for row-based replication in MySQL Cluster, see Attribute promotion and demotion (MySQL Cluster). (Bug#47163, Bug#46584)
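A minimal sketch of enabling these conversions on the slave via the slave_type_conversions variable (the combinations shown are illustrative):

```sql
-- On the slave, allow non-lossy promotions only (e.g. INT -> BIGINT):
SET GLOBAL slave_type_conversions = 'ALL_NON_LOSSY';

-- Or additionally allow lossy demotions (e.g. TEXT -> VARCHAR), accepting
-- possible truncation of values that do not fit the narrower column:
SET GLOBAL slave_type_conversions = 'ALL_LOSSY,ALL_NON_LOSSY';
```

Leaving the variable empty (its default) disallows attribute promotion and demotion entirely.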
Bugs fixed:
If a node or cluster failure occurred while mysqld was scanning the ndb.ndb_schema table (which it does when attempting to connect to the cluster), insufficient error handling could lead to a crash by mysqld in certain cases. This could happen in a MySQL Cluster with a great many tables, when trying to restart data nodes while one or more mysqld processes were restarting. (Bug#52325)
In MySQL Cluster NDB 7.0 and later, DDL operations are performed within schema transactions; the NDB kernel code for starting a schema transaction checks that all data nodes are at the same version before allowing a schema transaction to start. However, when a version mismatch was detected, the client was not actually informed of this problem, which caused the client to hang. (Bug#52228)
After running a mixed series of node and system restarts, a system restart could hang or fail altogether. This was caused by setting the value of the newest completed global checkpoint too low for a data node performing a node restart, which led to the node reporting incorrect GCI intervals for its first local checkpoint. (Bug#52217)
When performing a complex mix of node restarts and system restarts, the node that was elected as master sometimes required optimized node recovery due to missing REDO information. When this happened, the node crashed with Failure to recreate object ... during restart, error 721 (because the DBDICT restart code was run twice). Now when this occurs, node takeover is executed immediately, rather than being made to wait until the remaining data nodes have started. (Bug#52135)
See also Bug#48436.
The internal variable ndb_new_handler, which is no longer used, has been removed. (Bug#51858)
ha_ndbcluster.cc was not compiled with the same SAFEMALLOC and SAFE_MUTEX flags as the MySQL Server. (Bug#51857)
When debug compiling MySQL Cluster on Windows, the mysys library was not compiled with -DSAFEMALLOC and -DSAFE_MUTEX, due to the fact that my_socket.c was misnamed as my_socket.cc. (Bug#51856)
The redo log protects itself from being filled up by periodically checking how much space remains free. If insufficient redo log space is available, it sets the state TAIL_PROBLEM, which results in transactions being aborted with error code 410 (out of redo log). However, this state was not set following a node restart, which meant that if a data node had insufficient redo log space following a node restart, it could crash a short time later with Fatal error due to end of REDO log. Now, this space is checked during node restarts. (Bug#51723)
Restoring a MySQL Cluster backup between platforms having different endianness failed when also restoring metadata and the backup contained a hashmap not already present in the database being restored to. This issue was discovered when trying to restore a backup made on Solaris/SPARC to a MySQL Cluster running on Solaris/x86, but could conceivably occur in other cases where the endianness of the platform on which the backup was taken differed from that of the platform being restored to. (Bug#51432)
The output of the ndb_mgm client REPORT BACKUPSTATUS command could sometimes contain errors due to uninitialized data. (Bug#51316)
A GROUP BY query against NDB tables sometimes did not use any indexes unless the query included a FORCE INDEX option. With this fix, indexes are used by such queries (where otherwise possible) even when FORCE INDEX is not specified. (Bug#50736)
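An illustrative sketch of the behavior (the table and index names here are hypothetical):

```sql
-- Before the fix, a query such as the following might ignore available
-- indexes unless one was forced:
SELECT city, COUNT(*) FROM t1 FORCE INDEX (idx_city) GROUP BY city;

-- With the fix, the hint is no longer needed for the index to be considered:
SELECT city, COUNT(*) FROM t1 GROUP BY city;
```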
The following issues were fixed in the ndb_mgm client REPORT MEMORYUSAGE command:
- The client sometimes inserted extra ndb_mgm> prompts within the output.
- For data nodes running ndbmtd, IndexMemory was reported before DataMemory.
- Also for data nodes running ndbmtd, there were multiple IndexMemory entries listed in the output.
Issuing a command in the ndb_mgm client after it had lost its connection to the management server could cause the client to crash. (Bug#49219)
The mysql client system command did not work properly. This issue was only known to affect the version of the mysql client that was included with MySQL Cluster NDB 7.0 and MySQL Cluster NDB 7.1 releases. (Bug#48574)
The internal ErrorReporter::formatMessage() method could in some cases cause a buffer overflow. (Bug#47120)
Information about several management client commands was missing from (that is, truncated in) the output of the HELP command. (Bug#46114)
The ndb_print_backup_file utility failed to function, due to a previous internal change in the NDB code. (Bug#41512, Bug#48673)
When the MemReportFrequency configuration parameter was set in config.ini, the ndb_mgm client REPORT MEMORYUSAGE command printed its output multiple times. (Bug#37632)
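For reference, MemReportFrequency is set in the [ndbd default] section (or a node's own [ndbd] section) of config.ini; a sketch, with an illustrative value (the parameter is given in seconds):

```ini
[ndbd default]
# Write a memory usage report to the cluster log every 600 seconds
MemReportFrequency=600
```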
ndb_mgm -e "... REPORT ..." did not write any output to stdout.
The fix for this issue also prevents the cluster log from being flooded with INFO messages when DataMemory usage reaches 100%, and ensures that when the usage is decreased, an appropriate message is written to the cluster log. (Bug#31542, Bug#44183, Bug#49782)
Replication: Metadata for GEOMETRY fields was not properly stored by the slave in its definitions of tables. (Bug#49836)
See also Bug#48776.
Replication: Column length information generated by InnoDB did not match that generated by MyISAM, which caused invalid metadata to be written to the binary log when trying to replicate BIT columns. (Bug#49618)
Disk Data: Inserts of blob column values into a MySQL Cluster Disk Data table that exhausted the tablespace resulted in misleading no such tuple error messages rather than the expected error tablespace full.
This issue appeared similar to Bug#48113, but had a different underlying cause. (Bug#52201)
Disk Data: The error message returned after attempting to execute ALTER LOGFILE GROUP on a nonexistent logfile group did not indicate the reason for the failure. (Bug#51111)
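The statement in question has this general form (the group and file names here are illustrative; the named logfile group must already have been created with CREATE LOGFILE GROUP, otherwise the statement fails, and the error message now indicates why):

```sql
ALTER LOGFILE GROUP lg_1
    ADD UNDOFILE 'undo_2.log'
    INITIAL_SIZE 64M
    ENGINE NDBCLUSTER;
```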
Disk Data: DDL operations on Disk Data tables having a relatively small UNDO_BUFFER_SIZE could fail unexpectedly.
Cluster Replication: The --ndb-log-empty-epochs option did not work correctly. (Bug#49559)
Cluster API: When reading blob data with lock mode LM_SimpleRead, the lock was not upgraded as expected. (Bug#51034)
On some Unix/Linux platforms, an error referring to a missing LT_INIT program could occur when building from source. This was due to the use of libtool versions 2.1 and earlier. (Bug#51009)
1) In rare cases, if a thread was interrupted during a FLUSH PRIVILEGES operation, a debug assertion occurred later due to improper diagnostic area setup. 2) A KILL operation could cause a console error message referring to a diagnostic area state without first ensuring that the state existed. (Bug#33982)