The [ndbd] and [ndbd default] sections are used to configure the behavior of the cluster's data nodes.
There are many parameters which control buffer sizes, pool sizes, timeouts, and so forth. The only mandatory parameters are:
Either ExecuteOnComputer or HostName, which must be defined in the local [ndbd] section.
NoOfReplicas, which must be defined in the [ndbd default] section, as it is common to all Cluster data nodes.
Most data node parameters are set in the [ndbd default] section. Only those parameters explicitly stated as being able to set local values may be changed in the [ndbd] section. Where present, HostName, Id, and ExecuteOnComputer must be defined in the local [ndbd] section, and not in any other section of config.ini. In other words, settings for these parameters are specific to one data node.
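As an illustrative sketch (the host names and the number of data nodes are examples only, not taken from this section), these rules yield a config.ini layout such as this:

[ndbd default]
# Common to all data nodes; NoOfReplicas may appear only here.
NoOfReplicas=2

[ndbd]
# Local to one data node; HostName (or ExecuteOnComputer) is mandatory.
HostName=192.168.0.10

[ndbd]
HostName=192.168.0.20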
For those parameters affecting memory usage or buffer sizes, it is possible to use K, M, or G as a suffix to indicate units of 1024, 1024×1024, or 1024×1024×1024. (For example, 100K means 100 × 1024 = 102400.)
Parameter names and values are currently case-sensitive.
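For example (the parameter values here are illustrative only), the following settings express sizes using these suffixes:

[ndbd default]
DataMemory=512M      # 512 × 1024 × 1024 bytes
IndexMemory=102400K  # 102400 × 1024 bytes, the same as 100M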
Identifying data nodes. The Id value (that is, the data node identifier) can be allocated on the command line when the node is started or in the configuration file.
Restart Type: node | Type: numeric | Default: [none] | Range: 1-48
This is the node ID used as the address of the node for all cluster internal messages. For data nodes, this is an integer in the range 1 to 48 inclusive. Each node in the cluster must have a unique identifier.
This parameter can also be written as NodeId, although the short form is sufficient (and preferred for this reason).
Restart Type: system | Type: string | Default: [none] | Range: -
This refers to the Id set for one of the computers defined in a [computer] section.
Restart Type: system | Type: string | Default: localhost | Range: -
Specifying this parameter defines the hostname of the computer on which the data node is to reside. To specify a hostname other than localhost, either this parameter or ExecuteOnComputer is required.
Restart Type: node | Type: numeric | Default: [none] | Range: 1-64K
Each node in the cluster uses a port to connect to other nodes. By default, this port is allocated dynamically in such a way as to ensure that no two nodes on the same host computer receive the same port number, so it should normally not be necessary to specify a value for this parameter.
However, if you need to be able to open specific ports in a firewall to permit communication between data nodes and API nodes (including SQL nodes), you can set this parameter to the number of the desired port in an [ndbd] section or (if you need to do this for multiple data nodes) the [ndbd default] section of the config.ini file, and then open the port having that number for incoming connections from SQL nodes, API nodes, or both.
Connections from data nodes to management nodes are made via the ndb_mgmd management port (the management server's PortNumber; see Section 17.3.2.4, “Defining a MySQL Cluster Management Server”), so outgoing connections to that port from any data node should always be permitted.
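A sketch of such a setup (the host and port number are arbitrary examples): assign a fixed port in the node's local [ndbd] section, then open that port for incoming connections in the firewall:

[ndbd]
HostName=192.168.0.10
ServerPort=50501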
Restart Type: initial, system | Type: numeric | Default: [none] (2 in some releases) | Range: 1-4
This global parameter can be set only in the [ndbd default] section, and defines the number of replicas for each table stored in the cluster. This parameter also specifies the size of node groups. A node group is a set of nodes all storing the same information. Node groups are formed implicitly. The first node group is formed by the set of data nodes with the lowest node IDs, the next node group by the set of the next lowest node identities, and so on. By way of example, assume that we have 4 data nodes and that NoOfReplicas is set to 2. The four data nodes have node IDs 2, 3, 4, and 5. Then the first node group is formed from nodes 2 and 3, and the second node group by nodes 4 and 5. It is important to configure the cluster in such a manner that nodes in the same node group are not placed on the same computer, because a single hardware failure would then cause the entire cluster to fail.
If no node IDs are provided, the order of the data nodes is the determining factor for the node group. Whether or not explicit assignments are made, they can be viewed in the output of the management client's SHOW command.
There is no default value for NoOfReplicas; the recommended value is 2 for most common usage scenarios.
The maximum possible value is 4; currently, only the values 1 and 2 are actually supported (see Bug#18621).
Setting NoOfReplicas to 1 means that there is only a single copy of all Cluster data; in this case, the loss of a single data node causes the cluster to fail because there are no additional copies of the data stored by that node.
The value for this parameter must divide evenly into the number of data nodes in the cluster. For example, if there are two data nodes, then NoOfReplicas must be equal to either 1 or 2, since 2/3 and 2/4 both yield fractional values; if there are four data nodes, then NoOfReplicas must be equal to 1, 2, or 4.
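Continuing the four-data-node example above (host names are illustrative), a configuration placing the nodes of each node group on distinct machines might look like this:

[ndbd default]
NoOfReplicas=2    # 4 data nodes / 2 replicas = 2 node groups

[ndbd]
Id=2
HostName=host_a   # node group 0
[ndbd]
Id=3
HostName=host_b   # node group 0
[ndbd]
Id=4
HostName=host_c   # node group 1
[ndbd]
Id=5
HostName=host_d   # node group 1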
Restart Type: initial, node | Type: string | Default: . | Range: -
This parameter specifies the directory where trace files, log files, pid files and error logs are placed.
The default is the data node process working directory.
Restart Type: initial, node | Type: string | Default: DataDir | Range: -
This parameter specifies the directory where all files created for metadata, REDO logs, UNDO logs, and data files are placed. The default is the directory specified by DataDir.
This directory must exist before the ndbd process is initiated.
The recommended directory hierarchy for MySQL Cluster includes /var/lib/mysql-cluster, under which a directory for the node's file system is created. The name of this subdirectory contains the node ID. For example, if the node ID is 2, this subdirectory is named ndb_2_fs.
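A sketch following the recommended hierarchy (paths are examples):

[ndbd]
Id=2
HostName=192.168.0.10
DataDir=/var/lib/mysql-cluster
FileSystemPath=/var/lib/mysql-cluster
# The node creates /var/lib/mysql-cluster/ndb_2_fs for its file system.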
Restart Type: initial, node | Type: string | Default: FileSystemPath/BACKUP | Range: -
This parameter specifies the directory in which backups are placed. If omitted, the default backup location is the directory named BACKUP under the location specified by the FileSystemPath parameter. (See above.)
Data Memory, Index Memory, and String Memory
DataMemory and IndexMemory are [ndbd] parameters specifying the size of memory segments used to store the actual records and their indexes. In setting values for these, it is important to understand how DataMemory and IndexMemory are used, as they usually need to be updated to reflect actual usage by the cluster:
Restart Type: node | Type: numeric | Default: 80M | Range: 1M-1024G
This parameter defines the amount of space (in bytes) available for storing database records. The entire amount specified by this value is allocated in memory, so it is extremely important that the machine has sufficient physical memory to accommodate it.
The memory allocated by DataMemory is used to store both the actual records and indexes. Each record is currently of fixed size. (Even VARCHAR columns are stored as fixed-width columns.) There is a 16-byte overhead on each record; an additional amount for each record is incurred because it is stored in a 32KB page with 128 bytes of page overhead (see below). There is also a small amount wasted per page because each record is stored in only one page.
The maximum record size is currently 8052 bytes.
The memory space defined by DataMemory is also used to store ordered indexes, which use about 10 bytes per record. Each table row is represented in the ordered index. A common error among users is to assume that all indexes are stored in the memory allocated by IndexMemory, but this is not the case: only primary key and unique hash indexes use this memory; ordered indexes use the memory allocated by DataMemory. However, creating a primary key or unique hash index also creates an ordered index on the same keys, unless you specify USING HASH in the index creation statement. This can be verified by running ndb_desc -d db_name table_name.
The memory space allocated by DataMemory consists of 32KB pages, which are allocated to table fragments. Each table is normally partitioned into the same number of fragments as there are data nodes in the cluster. Thus, for each node, there are the same number of fragments as are set in NoOfReplicas.
In addition, due to the way in which new pages are allocated when the capacity of the current page is exhausted, there is an additional overhead of approximately 18.75%. When more DataMemory is required, more than one new page is allocated, according to the following formula:
number of new pages = FLOOR(number of current pages × 0.1875) + 1
For example, if 15 pages are currently allocated to a given table and an insert to this table requires additional storage space, the number of new pages allocated to the table is FLOOR(15 × 0.1875) + 1 = FLOOR(2.8125) + 1 = 2 + 1 = 3. Now 15 + 3 = 18 memory pages are allocated to the table. When the last of these 18 pages becomes full, FLOOR(18 × 0.1875) + 1 = FLOOR(3.3750) + 1 = 3 + 1 = 4 new pages are allocated, so the total number of pages allocated to the table is now 22.
Once a page has been allocated, it is currently not possible to return it to the pool of free pages, except by deleting the table. (This also means that DataMemory pages, once allocated to a given table, cannot be used by other tables.) Performing a node recovery also compresses the partition, because all records are inserted into empty partitions from other live nodes.
The DataMemory memory space also contains UNDO information: for each update, a copy of the unaltered record is allocated in the DataMemory. There is also a reference to each copy in the ordered table indexes. Unique hash indexes are updated only when the unique index columns are updated, in which case a new entry in the index table is inserted and the old entry is deleted upon commit. For this reason, it is also necessary to allocate enough memory to handle the largest transactions performed by applications using the cluster. In any case, performing a few large transactions holds no advantage over using many smaller ones, for the following reasons:
Large transactions are not any faster than smaller ones
Large transactions increase the number of operations that are lost and must be repeated in event of transaction failure
Large transactions use more memory
The default value for DataMemory is 80MB; the minimum is 1MB. There is no maximum size, but in reality the maximum size has to be adapted so that the process does not start swapping when the limit is reached. This limit is determined by the amount of physical RAM available on the machine and by the amount of memory that the operating system may commit to any one process. 32-bit operating systems are generally limited to 2–4GB per process; 64-bit operating systems can use more. For large databases, it may be preferable to use a 64-bit operating system for this reason.
Restart Type: node | Type: numeric | Default: 18M | Range: 1M-1T
This parameter controls the amount of storage used for hash indexes in MySQL Cluster. Hash indexes are always used for primary key indexes, unique indexes, and unique constraints. Note that when defining a primary key and a unique index, two indexes will be created, one of which is a hash index used for all tuple accesses as well as lock handling. It is also used to enforce unique constraints.
The size of the hash index is 25 bytes per record, plus the size of the primary key. For primary keys larger than 32 bytes another 8 bytes is added.
The default value for IndexMemory is 18MB. The minimum is 1MB.
Restart Type: system | Type: numeric | Default: 0 | Range: 0-4G
This parameter determines how much memory is allocated for strings such as table names, and is specified in an [ndbd] or [ndbd default] section of the config.ini file. A value between 0 and 100 inclusive is interpreted as a percentage of the maximum default value, which is calculated based on a number of factors including the number of tables, maximum table name size, maximum size of .FRM files, MaxNoOfTriggers, maximum column name size, and maximum default column value. In general, it is safe to assume that the maximum default value is approximately 5 MB for a MySQL Cluster having 1000 tables. A value greater than 100 is interpreted as a number of bytes.
In MySQL 5.0, the default value is 100, that is, 100 percent of the default maximum, or roughly 5 MB. It is possible to reduce this value safely, but it should never be less than 5 percent. If you encounter Error 773 (Out of string memory, please modify StringMemory config parameter: Permanent error: Schema error), this means that you have set the StringMemory value too low. 25 (25 percent) is not excessive, and should prevent this error from recurring in all but the most extreme conditions, as when there are hundreds or thousands of NDB tables whose name lengths and column counts approach their permitted maximums.
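For example, to set StringMemory to the 25 percent suggested above (recall that values from 0 to 100 are percentages of the calculated maximum, while larger values are bytes):

[ndbd default]
StringMemory=25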
The following example illustrates how memory is used for a table. Consider this table definition:
CREATE TABLE example (
  a INT NOT NULL,
  b INT NOT NULL,
  c INT NOT NULL,
  PRIMARY KEY(a),
  UNIQUE(b)
) ENGINE=NDBCLUSTER;
For each record, there are 12 bytes of data plus 12 bytes of overhead. Having no nullable columns saves 4 bytes of overhead. In addition, we have two ordered indexes on columns a and b consuming roughly 10 bytes each per record. There is a primary key hash index on the base table using roughly 29 bytes per record. The unique constraint is implemented by a separate table with b as primary key and a as a column. This other table consumes an additional 29 bytes of index memory per record in the example table, as well as 8 bytes of record data plus 12 bytes of overhead.
Thus, for one million records, we need 58MB for index memory to handle the hash indexes for the primary key and the unique constraint. We also need 64MB for the records of the base table and the unique index table, plus the two ordered index tables.
You can see that hash indexes take up a fair amount of memory space; however, they provide very fast access to the data in return. They are also used in MySQL Cluster to handle uniqueness constraints.
Currently, the only partitioning algorithm is hashing, and ordered indexes are local to each node. Thus, ordered indexes cannot be used to handle uniqueness constraints in the general case.
An important point for both IndexMemory and DataMemory is that the total database size is the sum of all data memory and all index memory for each node group. Each node group is used to store replicated information, so if there are four nodes with two replicas, there will be two node groups. Thus, the total data memory available is 2 × DataMemory for each data node.
It is highly recommended that DataMemory and IndexMemory be set to the same values for all nodes. Data distribution is even over all nodes in the cluster, so the maximum amount of space available for any node can be no greater than that of the smallest node in the cluster.
DataMemory and IndexMemory can be changed, but decreasing either of these can be risky; doing so can easily lead to a node or even an entire MySQL Cluster that is unable to restart due to there being insufficient memory space. Increasing these values should be acceptable, but it is recommended that such upgrades are performed in the same manner as a software upgrade, beginning with an update of the configuration file, and then restarting the management server followed by restarting each data node in turn.
Updates do not increase the amount of index memory used. Inserts take effect immediately; however, rows are not actually deleted until the transaction is committed.
Transaction parameters. The next three [ndbd] parameters that we discuss are important because they affect the number of parallel transactions and the sizes of transactions that can be handled by the system. MaxNoOfConcurrentTransactions sets the number of parallel transactions possible in a node. MaxNoOfConcurrentOperations sets the number of records that can be in the update phase or locked simultaneously.
Both of these parameters (especially MaxNoOfConcurrentOperations) are likely targets for users setting specific values and not using the default value. The default value is set for systems using small transactions, to ensure that these do not use excessive memory.
Restart Type: system | Type: numeric | Default: 4096 | Range: 32-4G
Each cluster data node requires a transaction record for each active transaction in the cluster. The task of coordinating transactions is distributed among all of the data nodes. The total number of transaction records in the cluster is the number of transactions in any given node times the number of nodes in the cluster.
Transaction records are allocated to individual MySQL servers. Each connection to a MySQL server requires at least one transaction record, plus an additional transaction object per table accessed by that connection. This means that a reasonable minimum for this parameter is
MaxNoOfConcurrentTransactions = (maximum number of tables accessed in any single transaction + 1) * number of cluster SQL nodes
Suppose that there are 4 SQL nodes using the cluster. A single join involving 5 tables requires 6 transaction records; if there are 5 such joins in a transaction, then 5 * 6 = 30 transaction records are required for this transaction, per MySQL server, or 30 * 4 = 120 transaction records total.
This parameter must be set to the same value for all cluster data nodes. This is due to the fact that, when a data node fails, the oldest surviving node re-creates the transaction state of all transactions that were ongoing in the failed node.
Changing the value of MaxNoOfConcurrentTransactions requires a complete shutdown and restart of the cluster.
The default value is 4096.
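As a hedged illustration of the sizing formula above: the worked example (5 joins of 6 tables each across 4 SQL nodes) needs only 120 transaction records, so the default is ample there; a much busier deployment might choose a value such as the following (the number is an arbitrary example, not a recommendation):

[ndbd default]
MaxNoOfConcurrentTransactions=16384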
Restart Type: node | Type: numeric | Default: 32K | Range: 32-4G
It is a good idea to adjust the value of this parameter according to the size and number of transactions. When performing transactions of only a few operations each and not involving a great many records, there is no need to set this parameter very high. When performing large transactions involving many records, you need to set this parameter higher.
Records are kept for each transaction updating cluster data, both in the transaction coordinator and in the nodes where the actual updates are performed. These records contain state information needed to find UNDO records for rollback, lock queues, and other purposes.
This parameter should be set to the number of records to be updated simultaneously in transactions, divided by the number of cluster data nodes. For example, in a cluster which has four data nodes and which is expected to handle 1,000,000 concurrent updates using transactions, you should set this value to 1000000 / 4 = 250000.
Read queries which set locks also cause operation records to be created. Some extra space is allocated within individual nodes to accommodate cases where the distribution is not perfect over the nodes.
When queries make use of the unique hash index, there are actually two operation records used per record in the transaction. The first record represents the read in the index table and the second handles the operation on the base table.
The default value is 32768.
This parameter actually handles two values that can be configured separately. The first of these specifies how many operation records are to be placed with the transaction coordinator. The second part specifies how many operation records are to be local to the database.
A very large transaction performed on an eight-node cluster requires as many operation records in the transaction coordinator as there are reads, updates, and deletes involved in the transaction. However, the operation records of the transaction are spread over all eight nodes. Thus, if it is necessary to configure the system for one very large transaction, it is a good idea to configure the two parts separately. MaxNoOfConcurrentOperations will always be used to calculate the number of operation records in the transaction coordinator portion of the node.
It is also important to have an idea of the memory requirements for operation records. These consume about 1KB per record.
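Continuing the worked example above (four data nodes, one million concurrent updates) and applying the 1KB-per-record estimate:

[ndbd default]
MaxNoOfConcurrentOperations=250000   # 1000000 updates / 4 data nodes
# Approximate memory cost: 250000 × 1KB, or about 244MB per data node.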
Restart Type: node | Type: numeric | Default: UNDEFINED | Range: 32-4G
By default, this parameter is calculated as 1.1 × MaxNoOfConcurrentOperations. This fits systems with many simultaneous transactions, none of them being very large. If there is a need to handle one very large transaction at a time and there are many nodes, it is a good idea to override the default value by explicitly specifying this parameter.
Transaction temporary storage. The next set of [ndbd] parameters is used to determine temporary storage when executing a statement that is part of a Cluster transaction. All records are released when the statement is completed and the cluster is waiting for the commit or rollback.
The default values for these parameters are adequate for most situations. However, users with a need to support transactions involving large numbers of rows or operations may need to increase these values to enable better parallelism in the system, whereas users whose applications require relatively small transactions can decrease the values to save memory.
MaxNoOfConcurrentIndexOperations
Restart Type: node | Type: numeric | Default: 8K | Range: 0-4G
For queries using a unique hash index, another temporary set of operation records is used during a query's execution phase. This parameter sets the size of that pool of records. Thus, this record is allocated only while executing a part of a query. As soon as this part has been executed, the record is released. The state needed to handle aborts and commits is handled by the normal operation records, where the pool size is set by the parameter MaxNoOfConcurrentOperations.
The default value of this parameter is 8192. Only in rare cases of extremely high parallelism using unique hash indexes should it be necessary to increase this value. Using a smaller value is possible and can save memory if the DBA is certain that a high degree of parallelism is not required for the cluster.
Restart Type: node | Type: numeric | Default: 4000 | Range: 0-4G
The default value of MaxNoOfFiredTriggers is 4000, which is sufficient for most situations. In some cases it can even be decreased if the DBA feels certain the need for parallelism in the cluster is not high.
A record is created when an operation is performed that affects a unique hash index. Inserting or deleting a record in a table with unique hash indexes or updating a column that is part of a unique hash index fires an insert or a delete in the index table. The resulting record is used to represent this index table operation while waiting for the original operation that fired it to complete. This operation is short-lived but can still require a large number of records in its pool for situations with many parallel write operations on a base table containing a set of unique hash indexes.
Restart Type: node | Type: numeric | Default: 1M | Range: 1K-4G
The memory affected by this parameter is used for tracking operations fired when updating index tables and reading unique indexes. This memory is used to store the key and column information for these operations. It is only very rarely that the value for this parameter needs to be altered from the default.
The default value for TransactionBufferMemory is 1MB.
Normal read and write operations use a similar buffer, whose usage is even more short-lived. The compile-time parameter ZATTRBUF_FILESIZE (found in ndb/src/kernel/blocks/Dbtc/Dbtc.hpp) is set to 4000 × 128 bytes (500KB). A similar buffer for key information, ZDATABUF_FILESIZE (also in Dbtc.hpp), contains 4000 × 16 bytes (62.5KB) of buffer space. Dbtc is the module that handles transaction coordination.
Scans and buffering. There are additional [ndbd] parameters in the Dblqh module (in ndb/src/kernel/blocks/Dblqh/Dblqh.hpp) that affect reads and updates. These include ZATTRINBUF_FILESIZE, set by default to 10000 × 128 bytes (1250KB), and ZDATABUF_FILE_SIZE, set by default to 10000 × 16 bytes (roughly 156KB) of buffer space. To date, there have been neither any reports from users nor any results from our own extensive tests suggesting that either of these compile-time limits should be increased.
Restart Type: node | Type: numeric | Default: 256 | Range: 2-500
This parameter is used to control the number of parallel scans that can be performed in the cluster. Each transaction coordinator can handle the number of parallel scans defined for this parameter. Each scan query is performed by scanning all partitions in parallel. Each partition scan uses a scan record in the node where the partition is located, the number of records being the value of this parameter times the number of nodes. The cluster should be able to sustain MaxNoOfConcurrentScans scans concurrently from all nodes in the cluster.
Scans are actually performed in two cases. The first of these cases occurs when no hash or ordered indexes exist to handle the query, in which case the query is executed by performing a full table scan. The second case is encountered when there is no hash index to support the query but there is an ordered index. Using the ordered index means executing a parallel range scan. The order is kept on the local partitions only, so it is necessary to perform the index scan on all partitions.
The default value of MaxNoOfConcurrentScans is 256. The maximum value is 500.
Restart Type: node | Type: numeric | Default: UNDEFINED | Range: 32-4G
Specifies the number of local scan records if many scans are not fully parallelized. If the number of local scan records is not provided, it is calculated as the product of MaxNoOfConcurrentScans and the number of data nodes in the system. The minimum value is 32.
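As an illustration of the default calculation: with 4 data nodes and MaxNoOfConcurrentScans left at its default of 256, the implied value is 256 × 4 = 1024. To pin it explicitly:

[ndbd default]
MaxNoOfLocalScanRecords=1024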
Restart Type: node | Type: numeric | Default: 64 | Range: 1-992
This parameter is used to calculate the number of lock records used to handle concurrent scan operations.
The default value is 64; this value has a strong connection to the ScanBatchSize defined in the SQL nodes.
Restart Type: node | Type: numeric | Default: 1M (4M in some releases) | Range: 512K-4G
This is an internal buffer used for passing messages within individual nodes and between nodes. Although it is highly unlikely that this would need to be changed, it is configurable. By default, it is set to 1MB.
The following [ndbd] parameters control log and checkpoint behavior.
Restart Type: initial, node | Type: numeric | Default: 8 | Range: 3-4G
This parameter sets the number of REDO log files for the node, and thus the amount of space allocated to REDO logging. Because the REDO log files are organized in a ring, it is extremely important that the first and last log files in the set (sometimes referred to as the “head” and “tail” log files, respectively) do not meet. When these approach one another too closely, the node begins aborting all transactions encompassing updates due to a lack of room for new log records.
A REDO log record is not removed until three local checkpoints have been completed since that log record was inserted. Checkpointing frequency is determined by its own set of configuration parameters discussed elsewhere in this chapter.
How these parameters interact and proposals for how to configure them are discussed in Section 17.3.2.11, “Configuring MySQL Cluster Parameters for Local Checkpoints”.
The default parameter value is 8, which means 8 sets of 4 16MB files for a total of 512MB. In other words, REDO log space is always allocated in blocks of 64MB. In scenarios requiring a great many updates, the value for NoOfFragmentLogFiles may need to be set as high as 300 or even higher to provide sufficient space for REDO logs.
If the checkpointing is slow and there are so many writes to the database that the log files are full and the log tail cannot be cut without jeopardizing recovery, all updating transactions are aborted with internal error code 410 (Out of log file space temporarily). This condition prevails until a checkpoint has completed and the log tail can be moved forward.
This parameter cannot be changed “on the fly”; you must restart the node using --initial. If you wish to change this value for all data nodes in a running cluster, you can do so via a rolling node restart (using --initial when starting each data node).
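For example, for an update-heavy workload (the value is illustrative, not a recommendation for any particular system):

[ndbd default]
NoOfFragmentLogFiles=300   # 300 × 64MB = 18.75GB of REDO log space per node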
Restart Type: node
This parameter sets a ceiling on how many internal threads to allocate for open files. Any situation requiring a change in this parameter should be reported as a bug.
The default value is 40.
Restart Type: node | Type: numeric | Default: 25 | Range: 0-4G
This parameter sets the maximum number of trace files that are kept before overwriting old ones. Trace files are generated when, for whatever reason, the node crashes.
The default is 25 trace files.
Metadata objects. The next set of [ndbd] parameters defines pool sizes for metadata objects, used to define the maximum number of attributes, tables, indexes, and trigger objects used by indexes, events, and replication between clusters. Note that these act merely as “suggestions” to the cluster, and any that are not specified revert to the default values shown.
Restart Type: node | Type: numeric | Default: 1000 | Range: 32-4G
Defines the number of attributes that can be defined in the cluster.
The default value is 1000, with the minimum possible value being 32. The maximum is 4294967039. Each attribute consumes around 200 bytes of storage per node due to the fact that all metadata is fully replicated on the servers.
When setting MaxNoOfAttributes, it is important to prepare in advance for any ALTER TABLE statements that you might want to perform in the future. This is due to the fact that, during the execution of ALTER TABLE on a Cluster table, 3 times the number of attributes as in the original table are used, and a good practice is to allow double this amount. For example, if the MySQL Cluster table having the greatest number of attributes (greatest_number_of_attributes) has 100 attributes, a good starting point for the value of MaxNoOfAttributes would be 6 × greatest_number_of_attributes = 600.
You should also estimate the average number of attributes per table and multiply this by the total number of MySQL Cluster tables. If this value is larger than the value obtained in the previous paragraph, you should use the larger value instead.
Assuming that you can create all desired tables without any problems, you should also verify that this number is sufficient by trying an actual ALTER TABLE after configuring the parameter. If this is not successful, increase MaxNoOfAttributes by another multiple of MaxNoOfTables and test it again.
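Continuing the example above, where the widest table has 100 attributes (the value chosen is illustrative):

[ndbd default]
MaxNoOfAttributes=600   # 6 × 100; at ~200 bytes each, about 120KB per node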
Restart Type: node | Type: numeric | Default: 128 | Range: 8-20320
A table object is allocated for each table and for each unique hash index in the cluster. This parameter sets the maximum number of table objects for the cluster as a whole.
For each attribute that has a BLOB data type, an extra table is used to store most of the BLOB data. These tables also must be taken into account when defining the total number of tables.
The default value of this parameter is 128. The minimum is 8 and the maximum is 20320. (This is a change from MySQL 4.1.) Each table object consumes approximately 20KB per node.
The sum of MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes must not exceed 2^32 - 2 (4294967294).
Restart Type: node | Type: numeric | Default: 128 | Range: 0-4G
For each ordered index in the cluster, an object is allocated describing what is being indexed and its storage segments. By default, each index so defined also defines an ordered index. Each unique index and primary key has both an ordered index and a hash index. MaxNoOfOrderedIndexes sets the total number of ordered indexes that can be in use in the system at any one time.
The default value of this parameter is 128. Each index object consumes approximately 10KB of data per node.
The sum of MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes must not exceed 2^32 - 2 (4294967294).
Restart Type: node | Type: numeric | Default: 64 | Range: 0-4G
For each unique index that is not a primary key, a special table is allocated that maps the unique key to the primary key of the indexed table. By default, an ordered index is also defined for each unique index. To prevent this, you must specify the USING HASH option when defining the unique index.
The default value is 64. Each index consumes approximately 15KB per node.
The sum of MaxNoOfTables, MaxNoOfOrderedIndexes, and MaxNoOfUniqueHashIndexes must not exceed 2^32 - 2 (4294967294).
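A combined sketch (the values are arbitrary examples) that comfortably satisfies the sum constraint above:

[ndbd default]
MaxNoOfTables=256
MaxNoOfOrderedIndexes=512
MaxNoOfUniqueHashIndexes=128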
Restart Type: node | Type: numeric | Default: 768 | Range: 0-4G
Internal update, insert, and delete triggers are allocated for each unique hash index. (This means that three triggers are created for each unique hash index.) However, an ordered index requires only a single trigger object. Backups also use three trigger objects for each normal table in the cluster.
This parameter sets the maximum number of trigger objects in the cluster.
The default value is 768.
This parameter is deprecated in MySQL 5.0; you should use MaxNoOfOrderedIndexes and MaxNoOfUniqueHashIndexes instead.
This parameter is used only by unique hash indexes. There needs to be one record in this pool for each unique hash index defined in the cluster.
The default value of this parameter is 128.
Boolean parameters. The behavior of data nodes is also affected by a set of [ndbd] parameters taking on boolean values. These parameters can each be specified as TRUE by setting them equal to 1 or Y, and as FALSE by setting them equal to 0 or N.
Restart Type: node | Permitted values (>= 5.0.0, <= 5.0.35): boolean, default 0, range 0-1 | Permitted values (>= 5.0.36): numeric, default 0, range 0-2
For a number of operating systems, including Solaris and Linux, it is possible to lock a process into memory and so avoid any swapping to disk. This can be used to help guarantee the cluster's real-time characteristics.
Beginning with MySQL 5.0.36, this parameter takes one of the integer values 0, 1, or 2, which act as follows:
0: Disables locking. This is the default value.
1: Performs the lock after allocating memory for the process.
2: Performs the lock before memory for the process is allocated.
Previously, this parameter was a boolean. 0 or false was the default setting, and disabled locking. 1 or true enabled locking of the process after its memory was allocated.
Beginning with MySQL 5.0.36, it is no longer possible to use true or false for the value of this parameter; when upgrading from a previous version, you must change the value to 0, 1, or 2.
To make use of this parameter, the data node process must be run as system root.
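For example, to lock the process in memory after allocation, using the MySQL 5.0.36 and later syntax (and remembering that ndbd must run as system root):

[ndbd default]
LockPagesInMainMemory=1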
Restart Type: node | Type: boolean | Default: true | Range: -
This parameter specifies whether an ndbd process should exit or perform an automatic restart when an error condition is encountered.
This feature is enabled by default.
Restart Type: initial, system | Type: boolean | Default: 0 | Range: 0-1
It is possible to specify MySQL Cluster tables as diskless, meaning that tables are not checkpointed to disk and that no logging occurs. Such tables exist only in main memory. A consequence of using diskless tables is that neither the tables nor the records in those tables survive a crash. However, when operating in diskless mode, it is possible to run ndbd on a diskless computer.
This feature causes the entire cluster to operate in diskless mode.
When this feature is enabled, Cluster online backup is disabled. In addition, a partial start of the cluster is not possible.
Diskless is disabled by default.
Restart Type: node | Type: numeric | Default: 2 | Range: 0-4
This feature is accessible only when building the debug version where it is possible to insert errors in the execution of individual blocks of code as part of testing.
This feature is disabled by default.
Controlling Timeouts, Intervals, and Disk Paging
There are a number of [ndbd] parameters specifying timeouts and intervals between various actions in Cluster data nodes. Most of the timeout values are specified in milliseconds. Any exceptions to this are mentioned where applicable.
Restart Type: node | Type: numeric | Default: 6000 | Range: 70-4G
To prevent the main thread from getting stuck in an endless loop at some point, a “watchdog” thread checks the main thread. This parameter specifies the number of milliseconds between checks. If the process remains in the same state after three checks, the watchdog thread terminates it.
This parameter can easily be changed for purposes of experimentation or to adapt to local conditions. It can be specified on a per-node basis although there seems to be little reason for doing so.
The default timeout is 6000 milliseconds (6 seconds).
Restart Type: node | Type: numeric | Default: 30000 | Range: 0-4G
This parameter specifies how long the Cluster waits for all data nodes to come up before the cluster initialization routine is invoked. This timeout is used to avoid a partial Cluster startup whenever possible.
This parameter is overridden when performing an initial start or initial restart of the cluster.
The default value is 30000 milliseconds (30 seconds). 0 disables the timeout, in which case the cluster may start only if all nodes are available.
Restart Type: node | Type: numeric | Default: 60000 | Range: 0-4G
If the cluster is ready to start after waiting for StartPartialTimeout milliseconds but is still possibly in a partitioned state, the cluster waits until this timeout has also passed. If StartPartitionedTimeout is set to 0, the cluster waits indefinitely.
This parameter is overridden when performing an initial start or initial restart of the cluster.
The default timeout is 60000 milliseconds (60 seconds).
Restart Type: node | Type: numeric | Default: 0 | Range: 0-4G
If a data node has not completed its startup sequence within the time specified by this parameter, the node startup fails. Setting this parameter to 0 (the default value) means that no data node timeout is applied.
For nonzero values, this parameter is measured in milliseconds. For data nodes containing extremely large amounts of data, this parameter should be increased. For example, in the case of a data node containing several gigabytes of data, a period as long as 10–15 minutes (that is, 600000 to 1000000 milliseconds) might be required to perform a node restart.
Restart Type: node | Type: numeric | Default: 1500 | Range: 10-4G
One of the primary methods of discovering failed nodes is by the use of heartbeats. This parameter states how often heartbeat signals are sent and how often to expect to receive them. After missing three heartbeat intervals in a row, the node is declared dead. Thus, the maximum time for discovering a failure through the heartbeat mechanism is four times the heartbeat interval.
The default heartbeat interval is 1500 milliseconds (1.5 seconds). This parameter must not be changed drastically and should not vary widely between nodes. If one node uses 5000 milliseconds and the node watching it uses 1000 milliseconds, obviously the node will be declared dead very quickly. This parameter can be changed during an online software upgrade, but only in small increments.
Restart Type: node | Type: numeric | Default: 1500 | Range: 100-4G
Each data node sends heartbeat signals to each MySQL server (SQL node) to ensure that it remains in contact. If a MySQL server fails to send a heartbeat in time, it is declared “dead,” in which case all ongoing transactions are completed and all resources released. The SQL node cannot reconnect until all activities initiated by the previous MySQL instance have been completed. The three-heartbeat criteria for this determination are the same as described for HeartbeatIntervalDbDb.
The default interval is 1500 milliseconds (1.5 seconds). This interval can vary between individual data nodes because each data node watches the MySQL servers connected to it, independently of all other data nodes.
Restart Type: node | Type: numeric | Default: 20 | Range: 0-31
This parameter is an exception in that it does not specify a time to wait before starting a new local checkpoint; rather, it is used to ensure that local checkpoints are not performed in a cluster where relatively few updates are taking place. In most clusters with high update rates, it is likely that a new local checkpoint is started immediately after the previous one has been completed.
The size of all write operations executed since the start of the previous local checkpoints is added. This parameter is also exceptional in that it is specified as the base-2 logarithm of the number of 4-byte words, so that the default value 20 means 4MB (4 × 2^20 bytes) of write operations, 21 would mean 8MB, and so on up to a maximum value of 31, which equates to 8GB of write operations.
All the write operations in the cluster are added together.
Setting TimeBetweenLocalCheckpoints to 6 or less means that local checkpoints will be executed continuously without pause, independent of the cluster's workload.
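As a worked illustration of the base-2 scale (the chosen level is an example):

[ndbd default]
TimeBetweenLocalCheckpoints=23   # 2^23 four-byte words = 32MB of writes between checkpoints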
Restart Type: node | Type: numeric | Default: 2000 | Range: 10-32000
When a transaction is committed, it is committed in main memory in all nodes on which the data is mirrored. However, transaction log records are not flushed to disk as part of the commit. The reasoning behind this behavior is that having the transaction safely committed on at least two autonomous host machines should meet reasonable standards for durability.
It is also important to ensure that even the worst of cases — a complete crash of the cluster — is handled properly. To guarantee that this happens, all transactions taking place within a given interval are put into a global checkpoint, which can be thought of as a set of committed transactions that has been flushed to disk. In other words, as part of the commit process, a transaction is placed in a global checkpoint group. Later, this group's log records are flushed to disk, and then the entire group of transactions is safely committed to disk on all computers in the cluster.
This parameter defines the interval between global checkpoints. The default is 2000 milliseconds.
TimeBetweenInactiveTransactionAbortCheck
Restart Type: node | Type: numeric | Default: 1000 | Range: 1000-4G
Timeout handling is performed by checking a timer on each transaction once for every interval specified by this parameter. Thus, if this parameter is set to 1000 milliseconds, every transaction will be checked for timing out once per second.
The default value is 1000 milliseconds (1 second).
Restart Type: node | Type: numeric | Default: 4G | Range: 0-4G
This parameter states the maximum time that is permitted to elapse between operations in the same transaction before the transaction is aborted.
The default for this parameter is 4G (in effect, no timeout). For a real-time database that needs to ensure that no transaction keeps locks for too long, this parameter should be set to a relatively small value. The unit is milliseconds.
TransactionDeadlockDetectionTimeout
Restart Type: node | Type: numeric | Default: 1200 | Range: 50-4G
When a node executes a query involving a transaction, the node waits for the other nodes in the cluster to respond before continuing. A failure to respond can occur for any of the following reasons:
The node is “dead”
The operation has entered a lock queue
The node requested to perform the action could be heavily overloaded.
This timeout parameter states how long the transaction coordinator waits for query execution by another node before aborting the transaction, and is important for both node failure handling and deadlock detection. In MySQL 5.0.20 and earlier versions, setting it too high could cause undesirable behavior in situations involving deadlocks and node failure. Beginning with MySQL 5.0.21, active transactions occurring during node failures are actively aborted by the MySQL Cluster Transaction Coordinator, and so high settings are no longer an issue with this parameter.
The default timeout value is 1200 milliseconds (1.2 seconds). The effective minimum value is 100 milliseconds; it is possible to set it as low as 50 milliseconds, but any such value is treated as 100 ms. (Bug#44099)
NoOfDiskPagesToDiskAfterRestartTUP
Restart Type: node | Type: numeric | Default: 40 | Range: 1-4G
When executing a local checkpoint, the algorithm flushes all data pages to disk. Merely doing so as quickly as possible without any moderation is likely to impose excessive loads on processors, networks, and disks. To control the write speed, this parameter specifies how many pages per 100 milliseconds are to be written. In this context, a “page” is defined as 8KB. This parameter is specified in units of 80KB per second, so setting NoOfDiskPagesToDiskAfterRestartTUP to a value of 20 entails writing 1.6MB in data pages to disk each second during a local checkpoint. This value includes the writing of UNDO log records for data pages. That is, this parameter handles the limitation of writes from data memory. UNDO log records for index pages are handled by the parameter NoOfDiskPagesToDiskAfterRestartACC. (See the entry for IndexMemory for information about index pages.)
In short, this parameter specifies how quickly to execute local checkpoints. It operates in conjunction with NoOfFragmentLogFiles, DataMemory, and IndexMemory.
For more information about the interaction between these parameters and possible strategies for choosing appropriate values for them, see Section 17.3.2.11, “Configuring MySQL Cluster Parameters for Local Checkpoints”.
The default value is 40 (3.2MB of data pages per second).
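For example, to double the default checkpoint write rate for data pages (the value is illustrative):

[ndbd default]
NoOfDiskPagesToDiskAfterRestartTUP=80   # 80 × 80KB/second = 6.4MB per second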
NoOfDiskPagesToDiskAfterRestartACC
Restart Type: node | Type: numeric | Default: 20 | Range: 1-4G
This parameter uses the same units as NoOfDiskPagesToDiskAfterRestartTUP and acts in a similar fashion, but limits the speed of writing index pages from index memory.
The default value of this parameter is 20 (1.6MB of index memory pages per second).
NoOfDiskPagesToDiskDuringRestartTUP
Restart Type: node | Type: numeric | Default: 40 | Range: 1-4G
This parameter is used in a fashion similar to NoOfDiskPagesToDiskAfterRestartTUP and NoOfDiskPagesToDiskAfterRestartACC, only it does so with regard to local checkpoints executed in the node when a node is restarting. A local checkpoint is always performed as part of all node restarts. During a node restart it is possible to write to disk at a higher speed than at other times, because fewer activities are being performed in the node.
This parameter covers pages written from data memory.
The default value is 40 (3.2MB per second).
NoOfDiskPagesToDiskDuringRestartACC
Restart Type: node | Type: numeric | Default: 20 | Range: 1-4G
Controls the number of index memory pages that can be written to disk during the local checkpoint phase of a node restart.
As with NoOfDiskPagesToDiskAfterRestartTUP and NoOfDiskPagesToDiskAfterRestartACC, values for this parameter are expressed in terms of 8KB pages written per 100 milliseconds (80KB/second).
The default value is 20 (1.6MB per second).
Restart Type: node | Type: numeric | Default: 1000 | Range: 10-4G
This parameter specifies how long data nodes wait for a response from the arbitrator to an arbitration message. If this is exceeded, the network is assumed to have split.
The default value is 1000 milliseconds (1 second).
Buffering and logging. Several [ndbd] configuration parameters corresponding to former compile-time parameters were introduced in MySQL 4.1.5. These enable the advanced user to have more control over the resources used by node processes and to adjust various buffer sizes at need.
These buffers are used as front ends to the file system when writing log records to disk. If the node is running in diskless mode, these parameters can be set to their minimum values without penalty, because disk writes are “faked” by the NDB storage engine's file system abstraction layer.
Restart Type: node | Type: numeric | Default: 2M | Range: 1M-4G
The UNDO index buffer, whose size is set by this parameter, is used during local checkpoints. The NDB storage engine uses a recovery scheme based on checkpoint consistency in conjunction with an operational REDO log. To produce a consistent checkpoint without blocking the entire system for writes, UNDO logging is done while performing the local checkpoint. UNDO logging is activated on a single table fragment at a time. This optimization is possible because tables are stored entirely in main memory.
The UNDO index buffer is used for the updates on the primary key hash index. Inserts and deletes rearrange the hash index; the NDB storage engine writes UNDO log records that map all physical changes to an index page so that they can be undone at system restart. It also logs all active insert operations for each fragment at the start of a local checkpoint.
Reads and updates set lock bits and update a header in the hash index entry. These changes are handled by the page-writing algorithm to ensure that these operations need no UNDO logging.
This buffer is 2MB by default. The minimum value is 1MB, which is sufficient for most applications. For applications doing extremely large or numerous inserts and deletes together with large transactions and large primary keys, it may be necessary to increase the size of this buffer. If this buffer is too small, the NDB storage engine issues internal error code 677 (Index UNDO buffers overloaded).
It is not safe to decrease the value of this parameter during a rolling restart.
Restart Type: node | Type: numeric | Default: 16M | Range: 1M-4G
This parameter sets the size of the UNDO data buffer, which performs a function similar to that of the UNDO index buffer, except the UNDO data buffer is used with regard to data memory rather than index memory. This buffer is used during the local checkpoint phase of a fragment for inserts, deletes, and updates.
Because UNDO log entries tend to grow larger as more operations are logged, this buffer is also larger than its index memory counterpart, with a default value of 16MB.
This amount of memory may be unnecessarily large for some applications. In such cases, it is possible to decrease this size to a minimum of 1MB.
It is rarely necessary to increase the size of this buffer. If there is such a need, it is a good idea to check whether the disks can actually handle the load caused by database update activity. A lack of sufficient disk space cannot be overcome by increasing the size of this buffer.
If this buffer is too small and gets congested, the NDB storage engine issues internal error code 891 (Data UNDO buffers overloaded).
It is not safe to decrease the value of this parameter during a rolling restart.
Restart Type: node | Type: numeric | Default: 8M | Range: 1M-4G
All update activities also need to be logged. The REDO log makes it possible to replay these updates whenever the system is restarted. The NDB recovery algorithm uses a “fuzzy” checkpoint of the data together with the UNDO log, and then applies the REDO log to play back all changes up to the restoration point.
RedoBuffer sets the size of the buffer in which the REDO log is written, and is 8MB by default. The minimum value is 1MB.
If this buffer is too small, the NDB storage engine issues error code 1221 (REDO log buffers overloaded).
It is not safe to decrease the value of this parameter during a rolling restart.
Controlling log messages. In managing the cluster, it is very important to be able to control the number of log messages sent for various event types to stdout. For each event category, there are 16 possible event levels (numbered 0 through 15). Setting event reporting for a given event category to level 15 means all event reports in that category are sent to stdout; setting it to 0 means that there will be no event reports made in that category.
By default, only the startup message is sent to stdout, with the remaining event reporting level defaults being set to 0. The reason for this is that these messages are also sent to the management server's cluster log.
An analogous set of levels can be set for the management client to determine which event levels to record in the cluster log.
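For example, to report startup events in full detail and checkpoint events at a moderate level while leaving other categories silent (the levels chosen are illustrative):

[ndbd default]
LogLevelStartup=15
LogLevelCheckpoint=8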
Restart Type: node | Type: numeric | Default: 1 | Range: 0-15
The reporting level for events generated during startup of the process.
The default level is 1.
Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated as part of graceful shutdown of a node.
The default level is 0.
Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for statistical events such as number of primary key reads, number of updates, number of inserts, information relating to buffer usage, and so on.
The default level is 0.
Restart Type: initial, node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated by local and global checkpoints.
The default level is 0.
Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated during node restart.
The default level is 0.
Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated by connections between cluster nodes.
The default level is 0.
Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated by errors and warnings by the cluster as a whole. These errors do not cause any node failure but are still considered worth reporting.
The default level is 0.
Version introduced: 5.0.0 | Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated by congestion. These errors do not cause node failure but are still considered worth reporting.
The default level is 0.
Restart Type: node | Type: numeric | Default: 0 | Range: 0-15
The reporting level for events generated for information about the general state of the cluster.
The default level is 0.
Backup parameters. The [ndbd] parameters discussed in this section define memory buffers set aside for execution of online backups.
Restart Type: node | Type: numeric | Default: 2M | Range: 0-4G
In creating a backup, there are two buffers used for sending data to the disk. The backup data buffer is used to fill in data recorded by scanning a node's tables. Once this buffer has been filled to the level specified as BackupWriteSize (see below), the pages are sent to disk. While flushing data to disk, the backup process can continue filling this buffer until it runs out of space. When this happens, the backup process pauses the scan and waits until some disk writes have completed and freed up memory, so that scanning may continue.
The default value is 2MB.
Restart Type: node | Type: numeric | Default: 2M | Range: 0-4G
The backup log buffer fulfills a role similar to that played by the backup data buffer, except that it is used for generating a log of all table writes made during execution of the backup. The same principles apply for writing these pages as with the backup data buffer, except that when there is no more space in the backup log buffer, the backup fails. For that reason, the size of the backup log buffer must be large enough to handle the load caused by write activities while the backup is being made. See Section 17.5.3.3, “Configuration for MySQL Cluster Backups”.
The default value for this parameter should be sufficient for most applications. In fact, it is more likely for a backup failure to be caused by insufficient disk write speed than it is for the backup log buffer to become full. If the disk subsystem is not configured for the write load caused by applications, the cluster is unlikely to be able to perform the desired operations.
It is preferable to configure cluster nodes in such a manner that the processor becomes the bottleneck rather than the disks or the network connections.
The default value is 2MB.
Restart Type: node | Type: numeric | Default: 4M | Range: 0-4G
This parameter is simply the sum of BackupDataBufferSize and BackupLogBufferSize.
The default value is 2MB + 2MB = 4MB.
If BackupDataBufferSize and BackupLogBufferSize taken together exceed 4MB, then this parameter must be set explicitly in the config.ini file to their sum.
Restart Type: node | Type: numeric | Default: 32K | Range: 2K-4G
This parameter specifies the default size of messages written to disk by the backup log and backup data buffers.
The default value is 32KB.
Restart Type: node | Type: numeric | Default: 256K | Range: 2K-4G
This parameter specifies the maximum size of messages written to disk by the backup log and backup data buffers.
The default value is 256KB.
When specifying these parameters, the following relationships must hold true. Otherwise, the data node will be unable to start.
BackupDataBufferSize >= BackupWriteSize + 188KB
BackupLogBufferSize >= BackupWriteSize + 16KB
BackupMaxWriteSize >= BackupWriteSize
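A sketch of backup buffer settings (the values are illustrative) that satisfies all three relationships:

[ndbd default]
BackupWriteSize=32K
BackupMaxWriteSize=256K    # >= BackupWriteSize
BackupDataBufferSize=2M    # >= 32K + 188K
BackupLogBufferSize=2M     # >= 32K + 16K
BackupMemory=4M            # sum of the two buffer sizes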
To add new data nodes to a MySQL Cluster, it is necessary to shut down the cluster completely, update the config.ini file, and then restart the cluster (that is, you must perform a system restart). All data node processes must be started with the --initial option.
Beginning with MySQL Cluster NDB 7.0, it is possible to add new data node groups to a running cluster online; however, we do not plan to implement this change in MySQL 5.0.