The next few sections contain information about mysqld options and server variables that are used in replication and for controlling the binary log. Options and variables for use on replication masters and replication slaves are covered separately, as are options and variables relating to binary logging. A set of quick-reference tables providing basic information about these options and variables is also included in the following section.
Of particular importance is the --server-id option.
Command-Line Format   | --server-id=#
Config-File Format    | server-id
Option Sets Variable  | Yes, server_id
Variable Name         | server_id
Variable Scope        | Global
Dynamic Variable      | Yes
Permitted Values      | Type: numeric
                      | Default: 0
                      | Range: 0-4294967295
This option is common to both master and slave replication servers, and is used in replication to enable master and slave servers to identify themselves uniquely. For additional information, see Section 16.1.2.2, “Replication Master Options and Variables”, and Section 16.1.2.3, “Replication Slave Options and Variables”.
On the master and each slave, you must use the --server-id option to establish a unique replication ID in the range from 1 to 2^32 – 1. “Unique” means that each ID must be different from every other ID in use by any other replication master or slave. Example: server-id=3.
If you omit --server-id, the default ID is 0, in which case a master refuses connections from all slaves, and a slave refuses to connect to a master. For more information, see Section 16.1.1.2, “Setting the Replication Slave Configuration”.
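As an illustration only (the ID values here are arbitrary, not a recommendation), a minimal pair of my.cnf fragments for a master and one slave might look like this:

# on the master
[mysqld]
log-bin
server-id = 1

# on the slave
[mysqld]
server-id = 2

Any distinct nonzero values will do; nothing about the numbers themselves marks a server as master or slave.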
User Comments
If you're attempting to use both replicate-do-db=from_name and replicate-rewrite-db=from_name->to_name, be aware that you need to actually say replicate-do-db=to_name, because the rewrite rule apparently happens before the do-db rule.
Thanks to Therion on opn/freenode for troubleshooting this with me.
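As a sketch of the combination described above, using made-up database names source_db and target_db (not from the comment):

# hypothetical fragment from the slave's my.cnf
replicate-rewrite-db = source_db->target_db
# filter on the rewritten (local) name, not the original one
replicate-do-db      = target_db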
I was about to post the same comment, but as it applies to replicate-wild-do-table:
replicate-wild-do-table = LocalTableName.%
replicate-rewrite-db = RemoteTableName->LocalTableName
Be really careful with the use of the replicate-wild-do-table=db_name.% configuration option. In 4.0.4, this option caused updates to any specified tables to not work for me. I had read in the documentation that this was needed for cross-database updates, but it was causing my same-database updates to fail.
I had the following options set in my slave my.cnf:
server-id = 16
master-host = 64.xx.xx.xx
master-user = replicator
master-password = *****
replicate-wild-do-table = banner.%
replicate-do-db = banner
report-host = 64.xx.xx.xx
Also worth mentioning: there seems to be some limit on server-id values. I initially set my server-id to 15001, and this caused replication to silently fail to even start. I changed it to 16 and it works perfectly, all this despite the alleged limit of 2^32-1.
"daisy-chain" means to connect one to another, then that one to yet another, and so on. For example, 1 connects to 2, 2 connects to 3, 3 connects to 4...
Paul
I have this setup working:
A -> B -> A
I got it running with MySQL 4.0.13-max, using MyISAM and InnoDB tables.
Here's how I do it on A:
- enable bin-log (just add log-bin in /etc/my.cnf. Restart mysqld if necessary.)
- create a replication user on A (I give it all privileges. You probably shouldn't do that).
- execute query
FLUSH TABLES WITH READ LOCK;
- do
tar -cvf /tmp/mysql-snapshot.tar /path/to/data-dir
- execute query
SHOW MASTER STATUS;
write down the File and Position values from the result (a sample of this output appears after these steps)
- modify /etc/my.cnf to include
server-id=<number-of-your-choice>
- shutdown mysqld on A (my root is password-protected, and I do it from another terminal)
mysqladmin -uroot -p shutdown
- start it back up
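For reference (this sample is not from the original comment; the file name and position are made up), the output of SHOW MASTER STATUS looks roughly like this:

mysql> SHOW MASTER STATUS;
+-----------+----------+--------------+------------------+
| File      | Position | Binlog_Do_DB | Binlog_Ignore_DB |
+-----------+----------+--------------+------------------+
| a-bin.003 |     1042 |              |                  |
+-----------+----------+--------------+------------------+

Whatever your server reports for File and Position goes into MASTER_LOG_FILE and MASTER_LOG_POS in the CHANGE MASTER TO statements below.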
on B (make sure there are NO update queries on B at this point):
- make sure mysqld is dead
- copy and untar mysql-snapshot.tar created earlier
- copy my.cnf from A, and put a DIFFERENT number in server-id.
- start mysqld (make sure binary log is enabled)
- execute queries (this is where you put the values you got earlier from SHOW MASTER STATUS on A):
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='<A host name>',
MASTER_USER='<replication user name>',
MASTER_PASSWORD='<replication password>',
MASTER_LOG_FILE='<recorded log file name>',
MASTER_LOG_POS=<recorded log offset>;
START SLAVE;
- execute query
SHOW MASTER STATUS;
write down the values
At this point you have A->B replication
on A again:
- copy B's *.bin.* files (binary logs) and put them in A's data dir
- execute queries (this is where you put the values you got earlier from SHOW MASTER STATUS on B):
STOP SLAVE;
CHANGE MASTER TO MASTER_HOST='<B host name>',
MASTER_USER='<replication user name>',
MASTER_PASSWORD='<replication password>',
MASTER_LOG_FILE='<recorded log file name>',
MASTER_LOG_POS=<recorded log offset>;
START SLAVE;
And you're done! If you do what I do, you will have the same user on both A and B, and this replication setup:
A -> B -> A
You can now execute any query on any of them, and it will appear on both. You can even call it a mysql cluster.
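Not part of the original recipe, but a quick way to sanity-check each direction once both sides are started (the \G terminator just makes the mysql client print the row vertically):

SHOW SLAVE STATUS\G
-- On each server, Slave_IO_Running and Slave_SQL_Running should both say Yes;
-- anything else means that direction of the A->B->A loop is not replicating.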
Very nice, but remember that in my experience, setting up a working replication is the EASY part. The hard part is always what to do after one machine fails, to reset both and restart the replication properly.
With A->B replication this is easy -- either switch masters as described in the Replication FAQ, or copy the slave back to the master, reset all the logs, and start again.
With A->B->A replication I would never be certain that I had reset correctly, or even that all my last transactions before the failure were all on the same machine! So I wouldn't do it. It's a low-reliability system, which kind of defeats the purpose (for me) of replication.
Fajar Nugraha has a great tip a few comments above me; however, he is missing one important step. On B, you need to do another GRANT to create a user so that A can access B as a slave.
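A minimal sketch of that missing step, with a made-up user name, host, and password (run this on B so that A can connect to it as a slave):

GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'a.example.com' IDENTIFIED BY 'secret';
-- REPLICATION SLAVE is the only privilege the replication account strictly needs;
-- granting all privileges, as in the earlier comment, works but is broader than necessary.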
slave-skip-errors is _not_ a good idea on dual-masters. If you have a dual-master setup, you must ensure that writes go to only one master, or that you run version 5+ and use the auto_increment_increment and auto_increment_offset options.
If you use the slave-skip-errors option suggested by a previous commenter you will end up with hopelessly inconsistent data. With the slave-skip-errors set as suggested there will be records on one machine with the same primary key id, but different column values.
It is also difficult to ascertain the proper log positions when trying to restore a failed master when both masters are written to.
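For the record, a sketch of the options mentioned above for a two-master setup (the specific numbers are just an example):

# my.cnf on master A
auto_increment_increment = 2
auto_increment_offset    = 1

# my.cnf on master B
auto_increment_increment = 2
auto_increment_offset    = 2

Both servers step AUTO_INCREMENT values by 2 from different offsets, so A hands out odd keys and B even ones, and the two masters cannot generate the same primary key.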
The above ABA setup has a couple of shortcomings everyone has to be aware of before using it. (I've been using it for years...)
Never use both sides for writes, as statements need time to get to the slave. For example:
- Sending two updates of the same value makes it unpredictable which one will be the final one (you may even end up with different values on the two sides).
- Queries that use auto-incremented fields may give different results depending on which node you are on when you have just incremented the field.
- If sync breaks, you lose the executed but not-yet-replicated queries on one side. If the sync breaks because of a connection error between A and B but both are still reachable from clients, you may end up with a completely screwed-up db! (example: clients->frontend, replication->backend)
This setup is more a kind of HA/switchover setup than clustering...
If you want HA and clustering and use only 'basic' mysql features do:
- ABA setup used for failover
- add a VRRP'ed IP, seen by the slave farm as the master (and the same users on A and B; you may as well fire up the ABA setup for the 'mysql' db)
- separate rw and ro operations in clients: use A (or B) for writes and the slave farm for ro (a sample slave-farm my.cnf follows this list)
- load-balance between the slaves (choose your flavour of lb)
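A rough sketch of what a slave-farm member's my.cnf might contain under this layout; the virtual IP, user name, and ID below are invented placeholders:

[mysqld]
# unique per slave
server-id       = 101
# the VRRP'ed virtual IP, not A or B directly
master-host     = 10.0.0.100
master-user     = replicator
master-password = *****
# serve read traffic only; writes go through the virtual IP to A (or B)
read-only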