The mysqldump client is a backup program originally written by Igor Romanenko. It can be used to dump a database or a collection of databases for backup or transfer to another SQL server (not necessarily a MySQL server). The dump typically contains SQL statements to create the table, populate it, or both. However, mysqldump can also be used to generate files in CSV, other delimited text, or XML format.
If you are doing a backup on the server and your tables all are MyISAM tables, consider using mysqlhotcopy instead because it can accomplish faster backups and faster restores. See Section 4.6.8, “mysqlhotcopy — A Database Backup Program”.
There are three general ways to invoke mysqldump:
shell> mysqldump [options] db_name [tbl_name ...]
shell> mysqldump [options] --databases db_name ...
shell> mysqldump [options] --all-databases
If you do not name any tables following db_name or if you use the --databases or --all-databases option, entire databases are dumped.
To see a list of the options your version of mysqldump supports, execute mysqldump --help.
Some mysqldump options are shorthand for groups of other options:
Use of --opt is the same as specifying --add-drop-table, --add-locks, --create-options, --disable-keys, --extended-insert, --lock-tables, --quick, and --set-charset. As of MySQL 4.1, all of the options that --opt stands for also are on by default because --opt is on by default.
Use of --compact is the same as specifying the --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset options.
To reverse the effect of a group option, use its --skip-xxx form (--skip-opt or --skip-compact).
It is also possible to select only part of the effect of a group
option by following it with options that enable or disable
specific features. Here are some examples:
To select the effect of --opt except for some features, use the --skip option for each feature. To disable extended inserts and memory buffering, use --opt --skip-extended-insert --skip-quick. (As of MySQL 4.1, --skip-extended-insert --skip-quick is sufficient because --opt is on by default.)
To reverse --opt for all features except index disabling and table locking, use --skip-opt --disable-keys --lock-tables.
When you selectively enable or disable the effect of a group option, order is important because options are processed first to last. For example, --disable-keys --lock-tables --skip-opt would not have the intended effect; it is the same as --skip-opt by itself.
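The first-to-last processing rule can be sketched as a toy model (this is not mysqldump itself, just a shell simulation of how a left-to-right pass over the flags would resolve the two features discussed above):

```shell
# Toy model: walk the flags left to right, as mysqldump does.
# --opt turns the whole group on; --skip-opt turns it off again.
resolve() {
  disable_keys=0
  lock_tables=0
  for flag in "$@"; do
    case $flag in
      --opt)          disable_keys=1; lock_tables=1 ;;
      --skip-opt)     disable_keys=0; lock_tables=0 ;;
      --disable-keys) disable_keys=1 ;;
      --lock-tables)  lock_tables=1 ;;
    esac
  done
  echo "disable_keys=$disable_keys lock_tables=$lock_tables"
}

# Correct order: --skip-opt first, then re-enable the two features.
resolve --skip-opt --disable-keys --lock-tables   # disable_keys=1 lock_tables=1

# Wrong order: the trailing --skip-opt wipes out the earlier flags.
resolve --disable-keys --lock-tables --skip-opt   # disable_keys=0 lock_tables=0
```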
mysqldump can retrieve and dump table contents row by row, or it can retrieve the entire content from a table and buffer it in memory before dumping it. Buffering in memory can be a problem if you are dumping large tables. To dump tables row by row, use the --quick option (or --opt, which enables --quick). The --opt option (and hence --quick) is enabled by default as of MySQL 4.1, so to enable memory buffering, use --skip-quick.
If you are using a recent version of mysqldump to generate a dump to be reloaded into a very old MySQL server, you should not use the --opt or --extended-insert option. Use --skip-opt instead.
Before MySQL 4.1.2, out-of-range numeric values such as -inf and inf, as well as NaN (not-a-number) values, are dumped by mysqldump as NULL. You can see this using the following sample table:
mysql> CREATE TABLE t (f DOUBLE);
mysql> INSERT INTO t VALUES(1e+111111111111111111111);
mysql> INSERT INTO t VALUES(-1e111111111111111111111);
mysql> SELECT f FROM t;
+------+
| f    |
+------+
| inf  |
| -inf |
+------+
For this table, mysqldump produces the following data output:
--
-- Dumping data for table `t`
--

INSERT INTO t VALUES (NULL);
INSERT INTO t VALUES (NULL);
The significance of this behavior is that if you dump and restore the table, the new table has contents that differ from the original contents. This problem is fixed as of MySQL 4.1.2; you cannot insert inf in the table, so this mysqldump behavior is only relevant when you deal with old servers.
For additional information about mysqldump, see Section 6.4, “Using mysqldump for Backups”.
mysqldump supports the following options, which can be specified on the command line or in the [mysqldump] and [client] option file groups. mysqldump also supports the options for processing option files described at Section 4.2.3.3.1, “Command-Line Options that Affect Option-File Handling”.
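As a sketch of what such an option file might look like (the user name, password, and values here are hypothetical, not defaults from the manual):

```ini
# ~/.my.cnf -- options in [mysqldump] apply only to mysqldump;
# options in [client] apply to all client programs.
[mysqldump]
quick
max_allowed_packet=16M

[client]
user=backupuser
password=example_password
```

With such a file in place, running mysqldump picks these options up automatically, which also avoids putting the password on the command line.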
Table 4.5. mysqldump Options
Format | Config File | Description | Introduction | Deprecated | Removed |
---|---|---|---|---|---|
--add-drop-database | add-drop-database | Add a DROP DATABASE statement before each CREATE DATABASE statement | 4.1.13 | ||
--add-drop-table | add-drop-table | Add a DROP TABLE statement before each CREATE TABLE statement | |||
--add-locks | add-locks | Surround each table dump with LOCK TABLES and UNLOCK TABLES statements | |||
--all-databases | all-databases | Dump all tables in all databases | |||
--allow-keywords | allow-keywords | Allow creation of column names that are keywords | |||
--comments | comments | Add comments to the dump file | |||
--compact | compact | Produce more compact output | |||
--compatible=name[,name,...] | compatible | Produce output that is more compatible with other database systems or with older MySQL servers | |||
--complete-insert | complete-insert | Use complete INSERT statements that include column names | |||
--create-options | create-options | Include all MySQL-specific table options in CREATE TABLE statements | |||
--databases | databases | Dump several databases | |||
--debug[=debug_options] | debug | Write a debugging log | |||
--default-character-set=charset_name | default-character-set | Use charset_name as the default character set | |||
--delayed-insert | delayed-insert | Write INSERT DELAYED statements rather than INSERT statements | |||
--delete-master-logs | delete-master-logs | On a master replication server, delete the binary logs after performing the dump operation | |||
--disable-keys | disable-keys | For each table, surround the INSERT statements with statements to disable and enable keys | |||
--extended-insert | extended-insert | Use multiple-row INSERT syntax that includes several VALUES lists | |||
--fields-enclosed-by=string | fields-enclosed-by | This option is used with the --tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE | |||
--fields-escaped-by | fields-escaped-by | This option is used with the --tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE | |||
--fields-optionally-enclosed-by=string | fields-optionally-enclosed-by | This option is used with the --tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE | |||
--fields-terminated-by=string | fields-terminated-by | This option is used with the --tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE | |||
--first-slave | first-slave | Deprecated; use --lock-all-tables instead | |||
--flush-logs | flush-logs | Flush the MySQL server log files before starting the dump | |||
--help | Display help message and exit | ||||
--hex-blob | hex-blob | Dump binary columns using hexadecimal notation (for example, 'abc' becomes 0x616263) | |||
--host | host | Host to connect to (IP address or hostname) | |||
--ignore-table=db_name.tbl_name | ignore-table | Do not dump the given table | |||
--insert-ignore | insert-ignore | Write INSERT IGNORE statements rather than INSERT statements | 4.1.12 | ||
--lines-terminated-by=string | lines-terminated-by | This option is used with the --tab option and has the same meaning as the corresponding clause for LOAD DATA INFILE | |||
--lock-all-tables | lock-all-tables | Lock all tables across all databases | |||
--lock-tables | lock-tables | Lock all tables before dumping them | |||
--master-data[=value] | master-data | Write the binary log file name and position to the output | |||
--max_allowed_packet=value | max_allowed_packet | The maximum packet length to send to or receive from the server | |||
--net_buffer_length=value | net_buffer_length | The buffer size for TCP/IP and socket communication | |||
--no-autocommit | no-autocommit | Enclose the INSERT statements for each dumped table within SET autocommit = 0 and COMMIT statements | |||
--no-create-db | no-create-db | This option suppresses the CREATE DATABASE statements | |||
--no-create-info | no-create-info | Do not write CREATE TABLE statements that re-create each dumped table | |||
--no-data | no-data | Do not dump table contents | |||
--no-set-names | no-set-names | Same as --skip-set-charset | |||
--opt | opt | Shorthand for --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. | |||
--order-by-primary | order-by-primary | Dump each table's rows sorted by its primary key, or by its first unique index | |||
--password[=password] | password | The password to use when connecting to the server | |||
--pipe | On Windows, connect to server via a named pipe | ||||
--port=port_num | port | The TCP/IP port number to use for the connection | |||
--quick | quick | Retrieve rows for a table from the server a row at a time | |||
--quote-names | quote-names | Quote identifiers within backtick characters | |||
--result-file=file | result-file | Direct output to a given file | |||
--set-charset | set-charset | Add SET NAMES default_character_set to the output | |||
--single-transaction | single-transaction | This option issues a BEGIN SQL statement before dumping data from the server | |||
--skip-add-drop-table | skip-add-drop-table | Do not add a DROP TABLE statement before each CREATE TABLE statement | |||
--skip-add-locks | skip-add-locks | Do not add locks | |||
--skip-comments | skip-comments | Do not add comments to the dump file | |||
--skip-compact | skip-compact | Do not produce more compact output | |||
--skip-disable-keys | skip-disable-keys | Do not disable keys | |||
--skip-extended-insert | skip-extended-insert | Turn off extended-insert | |||
--skip-opt | skip-opt | Turn off the options set by --opt | |||
--skip-quick | skip-quick | Do not retrieve rows for a table from the server a row at a time | |||
--skip-quote-names | skip-quote-names | Do not quote identifiers | |||
--skip-set-charset | skip-set-charset | Suppress the SET NAMES statement | |||
--ssl-ca=file_name | ssl-ca | The path to a file that contains a list of trusted SSL CAs | |||
--ssl-capath=directory_name | ssl-capath | The path to a directory that contains trusted SSL CA certificates in PEM format | |||
--ssl-cert=file_name | ssl-cert | The name of the SSL certificate file to use for establishing a secure connection | |||
--ssl-cipher=cipher_list | ssl-cipher | A list of allowable ciphers to use for SSL encryption | |||
--ssl-key=file_name | ssl-key | The name of the SSL key file to use for establishing a secure connection | |||
--ssl-verify-server-cert | ssl-verify-server-cert | The server's Common Name value in its certificate is verified against the host name used when connecting to the server | |||
--tab=path | tab | Produce tab-separated data files | |||
--tables | tables | Override the --databases or -B option | |||
--verbose | Verbose mode | ||||
--version | Display version information and exit | ||||
--where='where_condition' | where | Dump only rows selected by the given WHERE condition | |||
--xml | xml | Produce XML output |
--help, -?

Display a help message and exit.
--add-drop-database

Add a DROP DATABASE statement before each CREATE DATABASE statement. Added in MySQL 4.1.13.
--add-drop-table

Add a DROP TABLE statement before each CREATE TABLE statement. This option is typically used in conjunction with the --all-databases or --databases option because no CREATE DATABASE statements are written unless one of those options is specified.
--add-locks

Surround each table dump with LOCK TABLES and UNLOCK TABLES statements. This results in faster inserts when the dump file is reloaded. See Section 7.2.14, “Speed of INSERT Statements”.
--all-databases, -A

Dump all tables in all databases. This is the same as using the --databases option and naming all the databases on the command line.
--allow-keywords

Allow creation of column names that are keywords. This works by prefixing each column name with the table name.
--character-sets-dir=path

The directory where character sets are installed. See Section 9.6, “Character Set Configuration”.
--comments, -i

Write additional information in the dump file such as program version, server version, and host. This option is enabled by default. To suppress this additional information, use --skip-comments. This option was added in MySQL 4.0.17.
--compact

Produce more compact output. This option enables the --skip-add-drop-table, --skip-add-locks, --skip-comments, --skip-disable-keys, and --skip-set-charset options. Added in MySQL 4.1.2.
--compatible=name

Produce output that is more compatible with other database systems or with older MySQL servers. The value of name can be ansi, mysql323, mysql40, postgresql, oracle, mssql, db2, maxdb, no_key_options, no_table_options, or no_field_options. To use several values, separate them by commas. These values have the same meaning as the corresponding options for setting the server SQL mode. See Section 5.1.7, “Server SQL Modes”.
This option does not guarantee compatibility with other servers. It only enables those SQL mode values that are currently available for making dump output more compatible. For example, --compatible=oracle does not map data types to Oracle types or use Oracle comment syntax.
This option requires a server version of 4.1.0 or higher. With older servers, it does nothing.
--complete-insert, -c

Use complete INSERT statements that include column names.
--compress, -C

Compress all information sent between the client and the server if both support compression.
--create-options

Include all MySQL-specific table options in the CREATE TABLE statements. Before MySQL 4.1.2, use --all instead.
--databases, -B

Dump several databases. Normally, mysqldump treats the first name argument on the command line as a database name and following names as table names. With this option, it treats all name arguments as database names. CREATE DATABASE and USE statements are included in the output before each new database.
--debug[=debug_options], -# [debug_options]

Write a debugging log. A typical debug_options string is 'd:t:o,file_name'. The default value is 'd:t:o,/tmp/mysqldump.trace'.
--default-character-set=charset_name

Use charset_name as the default character set. See Section 9.6, “Character Set Configuration”. If no character set is specified, mysqldump from MySQL 4.1.2 or later uses utf8, and earlier versions use latin1.
--delayed-insert

Write INSERT DELAYED statements rather than INSERT statements.
--delete-master-logs

On a master replication server, delete the binary logs by sending a RESET MASTER statement to the server after performing the dump operation. This option automatically enables --first-slave before MySQL 4.1.8 and enables --master-data thereafter. It was added in MySQL 3.23.57 (for MySQL 3.23) and MySQL 4.0.13 (for MySQL 4.0).
--disable-keys, -K

For each table, surround the INSERT statements with /*!40000 ALTER TABLE tbl_name DISABLE KEYS */; and /*!40000 ALTER TABLE tbl_name ENABLE KEYS */; statements. This makes loading the dump file into a MySQL 4.0 or newer server faster because the indexes are created after all rows are inserted. This option is effective only for nonunique indexes of MyISAM tables.
--extended-insert, -e

Use multiple-row INSERT syntax that includes several VALUES lists. This results in a smaller dump file and speeds up inserts when the file is reloaded.
--fields-terminated-by=..., --fields-enclosed-by=..., --fields-optionally-enclosed-by=..., --fields-escaped-by=...

These options are used with the --tab option and have the same meaning as the corresponding FIELDS clauses for LOAD DATA INFILE. See Section 12.2.5, “LOAD DATA INFILE Syntax”.
--first-slave

Deprecated. Use --lock-all-tables instead as of MySQL 4.1.8.
--flush-logs, -F

Flush the MySQL server log files before starting the dump. This option requires the RELOAD privilege. If you use this option in combination with the --all-databases option, the logs are flushed for each database dumped. The exception is when using --lock-all-tables or --master-data: In this case, the logs are flushed only once, corresponding to the moment that all tables are locked. If you want your dump and the log flush to happen at exactly the same moment, you should use --flush-logs together with either --lock-all-tables or --master-data.
--force, -f

Continue even if an SQL error occurs during a table dump.
--host=host_name, -h host_name

Dump data from the MySQL server on the given host. The default host is localhost.
--hex-blob

Dump binary columns using hexadecimal notation (for example, 'abc' becomes 0x616263). The affected data types are BINARY, VARBINARY, and the BLOB types in MySQL 4.1 and up, and CHAR BINARY, VARCHAR BINARY, and BLOB in MySQL 4.0. This option was added in MySQL 4.0.23 and 4.1.8.
--ignore-table=db_name.tbl_name

Do not dump the given table, which must be specified using both the database and table names. To ignore multiple tables, use this option multiple times. This option was added in MySQL 4.1.9.
--insert-ignore

Write INSERT IGNORE statements rather than INSERT statements. This option was added in MySQL 4.1.12.
--lines-terminated-by=...

This option is used with the --tab option and has the same meaning as the corresponding LINES clause for LOAD DATA INFILE. See Section 12.2.5, “LOAD DATA INFILE Syntax”.
--lock-all-tables, -x

Lock all tables across all databases. This is achieved by acquiring a global read lock for the duration of the whole dump. This option automatically turns off --single-transaction and --lock-tables. Added in MySQL 4.1.8.
--lock-tables, -l

For each dumped database, lock all tables to be dumped before dumping them. The tables are locked with READ LOCAL to allow concurrent inserts in the case of MyISAM tables. For transactional tables such as InnoDB and BDB, --single-transaction is a much better option than --lock-tables because it does not need to lock the tables at all.

Because --lock-tables locks tables for each database separately, this option does not guarantee that the tables in the dump file are logically consistent between databases. Tables in different databases may be dumped in completely different states.

This option has no effect for output data files produced by using the --tab option. See the description for that option.
--master-data[=value]

Use this option to dump a master replication server to produce a dump file that can be used to set up another server as a slave of the master. It causes the dump output to include a CHANGE MASTER TO statement that indicates the binary log coordinates (file name and position) of the dumped server. These are the master server coordinates from which the slave should start replicating after you load the dump file into the slave.

If the option value is 2, the CHANGE MASTER TO statement is written as an SQL comment, and thus is informative only; it has no effect when the dump file is reloaded. If the option value is 1, the statement is not written as a comment and takes effect when the dump file is reloaded. If no option value is specified, the default value is 1. The value may be given as of MySQL 4.1.8; before that, do not specify an option value.

This option requires the RELOAD privilege and the binary log must be enabled.

The --master-data option automatically turns off --lock-tables. It also turns on --lock-all-tables, unless --single-transaction also is specified, in which case, a global read lock is acquired only for a short time at the beginning of the dump (see the description for --single-transaction). In all cases, any action on logs happens at the exact moment of the dump.
It is also possible to set up a slave by dumping an existing slave of the master. To do this, use the following procedure on the existing slave:
Stop the slave's SQL thread and get its current status:

mysql> STOP SLAVE SQL_THREAD;
mysql> SHOW SLAVE STATUS;
From the output of the SHOW SLAVE STATUS statement, the binary log coordinates of the master server from which the new slave should start replicating are the values of the Relay_Master_Log_File and Exec_Master_Log_Pos fields. Denote those values as file_name and file_pos.
Dump the slave server:
shell> mysqldump --master-data=2 --all-databases > dumpfile
Restart the slave:
mysql> START SLAVE;
On the new slave, load the dump file:
shell> mysql < dumpfile
On the new slave, set the replication coordinates to those of the master server obtained earlier:
mysql> CHANGE MASTER TO
    ->   MASTER_LOG_FILE = 'file_name',
    ->   MASTER_LOG_POS = file_pos;
The CHANGE MASTER TO statement might also need other parameters, such as MASTER_HOST to point the slave to the correct master server host. Add any such parameters as necessary.
--no-autocommit

Enclose the INSERT statements for each dumped table within SET autocommit = 0 and COMMIT statements.
--no-create-db, -n

This option suppresses the CREATE DATABASE statements that are otherwise included in the output if the --databases or --all-databases option is given.
--no-create-info, -t

Do not write CREATE TABLE statements that re-create each dumped table.
--no-data, -d

Do not write any table row information (that is, do not dump table contents). This is useful if you want to dump only the CREATE TABLE statement for the table (for example, to create an empty copy of the table by loading the dump file).
--no-set-names, -N

This has the same effect as --skip-set-charset.
--opt

This option is shorthand. It is the same as specifying --add-drop-table --add-locks --create-options --disable-keys --extended-insert --lock-tables --quick --set-charset. It should give you a fast dump operation and produce a dump file that can be reloaded into a MySQL server quickly.

As of MySQL 4.1, --opt is enabled by default. Use --skip-opt to disable it. See the discussion at the beginning of this section for information about selectively enabling or disabling a subset of the options affected by --opt.
--order-by-primary

Dump each table's rows sorted by its primary key, or by its first unique index, if such an index exists. This is useful when dumping a MyISAM table to be loaded into an InnoDB table, but will make the dump operation take considerably longer. This option was added in MySQL 4.1.8.
--password[=password], -p[password]

The password to use when connecting to the server. If you use the short option form (-p), you cannot have a space between the option and the password. If you omit the password value following the --password or -p option on the command line, mysqldump prompts for one.
Specifying a password on the command line should be considered insecure. See Section 5.4.2.2, “End-User Guidelines for Password Security”. You can use an option file to avoid giving the password on the command line.
--pipe, -W

On Windows, connect to the server via a named pipe. This option applies only if the server supports named-pipe connections.
--port=port_num, -P port_num

The TCP/IP port number to use for the connection.
--protocol={TCP|SOCKET|PIPE|MEMORY}
The connection protocol to use for connecting to the server. It is useful when the other connection parameters normally would cause a protocol to be used other than the one you want. For details on the allowable values, see Section 4.2.2, “Connecting to the MySQL Server”. This option was added in MySQL 4.1.
--quick, -q

This option is useful for dumping large tables. It forces mysqldump to retrieve rows for a table from the server a row at a time rather than retrieving the entire row set and buffering it in memory before writing it out.
--quote-names, -Q

Quote identifiers (such as database, table, and column names) within “`” characters. If the ANSI_QUOTES SQL mode is enabled, identifiers are quoted within “"” characters. As of MySQL 4.1.1, --quote-names is enabled by default. It can be disabled with --skip-quote-names, but this option should be given after any option such as --compatible that may enable --quote-names.
--result-file=file_name, -r file_name

Direct output to a given file. This option should be used on Windows to prevent newline “\n” characters from being converted to “\r\n” carriage return/newline sequences. The result file is created and its previous contents overwritten, even if an error occurs while generating the dump.
--set-charset

Add SET NAMES default_character_set to the output. This option is enabled by default. To suppress the SET NAMES statement, use --skip-set-charset. This option was added in MySQL 4.1.2.
--single-transaction

This option sends a START TRANSACTION SQL statement to the server before dumping data. It is useful only with transactional tables such as InnoDB and BDB, because then it dumps the consistent state of the database at the time when BEGIN was issued without blocking any applications.

When using this option, you should keep in mind that only InnoDB tables are dumped in a consistent state. For example, any MyISAM or MEMORY tables dumped while using this option may still change state.

While a --single-transaction dump is in process, to ensure a valid dump file (correct table contents and binary log coordinates), no other connection should use the following statements: ALTER TABLE, CREATE TABLE, DROP TABLE, RENAME TABLE, TRUNCATE TABLE. A consistent read is not isolated from those statements, so use of them on a table to be dumped can cause the SELECT that is performed by mysqldump to retrieve the table contents to obtain incorrect contents or fail.

The --single-transaction option was added in MySQL 4.0.2. This option is mutually exclusive with the --lock-tables option because LOCK TABLES causes any pending transactions to be committed implicitly.

This option is not supported for MySQL Cluster tables; the results cannot be guaranteed to be consistent due to the fact that the NDBCLUSTER storage engine supports only the READ_COMMITTED transaction isolation level. You should always use NDB backup and restore instead.

To dump large tables, you should combine the --single-transaction option with --quick.
--skip-comments

See the description for the --comments option.
--skip-opt

See the description for the --opt option.
--socket=path, -S path

For connections to localhost, the Unix socket file to use, or, on Windows, the name of the named pipe to use.
Options that begin with --ssl specify whether to connect to the server via SSL and indicate where to find SSL keys and certificates. See Section 5.6.6.3, “SSL Command Options”.
--tab=path, -T path

Produce tab-separated text-format data files. For each dumped table, mysqldump creates a tbl_name.sql file that contains the CREATE TABLE statement that creates the table, and the server writes a tbl_name.txt file that contains its data. The option value is the directory in which to write the files.
This option should be used only when mysqldump is run on the same machine as the mysqld server. You must have the FILE privilege, and the server must have permission to write files in the directory that you specify.
By default, the .txt data files are formatted using tab characters between column values and a newline at the end of each line. The format can be specified explicitly using the --fields-xxx and --lines-terminated-by options.
Column values are dumped using the binary character set and the --default-character-set option is ignored. In effect, there is no character set conversion. If a table contains columns in several character sets, the output data file will as well and you may not be able to reload the file correctly.
--tables

Override the --databases or -B option. mysqldump regards all name arguments following the option as table names.
--user=user_name, -u user_name

The MySQL user name to use when connecting to the server.
--verbose, -v

Verbose mode. Print more information about what the program does.
--version, -V

Display version information and exit.
--where='where_condition', -w 'where_condition'

Dump only rows selected by the given WHERE condition. Quotes around the condition are mandatory if it contains spaces or other characters that are special to your command interpreter. Examples:

--where="user='jimf'" -w"userid>1" -w"userid<1"
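To see why the double quotes are needed, you can print the argument exactly as the shell would pass it to mysqldump (the outer double quotes are consumed by the shell; the inner single quotes survive as part of the condition):

```shell
# The shell strips the outer double quotes before the argument reaches the
# program; the embedded single quotes are passed through literally.
printf '%s\n' --where="user='jimf'"
```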
--xml, -X

Write dump output as well-formed XML.
You can also set the following variables by using --var_name=value syntax:
max_allowed_packet

The maximum size of the buffer for client/server communication. The value of the variable can be up to 16MB before MySQL 4.0, and up to 1GB from MySQL 4.0 on.
net_buffer_length

The initial size of the buffer for client/server communication. When creating multiple-row INSERT statements (as with the --extended-insert or --opt option), mysqldump creates rows up to net_buffer_length length. If you increase this variable, you should also ensure that the net_buffer_length variable in the MySQL server is at least this large.
It is also possible to set variables by using --set-variable=var_name=value or -O var_name=value syntax. However, this syntax is deprecated as of MySQL 4.0.
A common use of mysqldump is for making a backup of an entire database:
shell> mysqldump db_name > backup-file.sql
You can load the dump file back into the server like this:
shell> mysql db_name < backup-file.sql
Or like this:
shell> mysql -e "source /path-to-backup/backup-file.sql" db_name
mysqldump is also very useful for populating databases by copying data from one MySQL server to another:
shell> mysqldump --opt db_name | mysql --host=remote_host -C db_name
It is possible to dump several databases with one command:
shell> mysqldump --databases db_name1 [db_name2 ...] > my_databases.sql
To dump all databases, use the --all-databases option:
shell> mysqldump --all-databases > all_databases.sql
For InnoDB tables, mysqldump provides a way of making an online backup:
shell> mysqldump --all-databases --single-transaction > all_databases.sql
This backup acquires a global read lock on all tables (using FLUSH TABLES WITH READ LOCK) at the beginning of the dump. As soon as this lock has been acquired, the binary log coordinates are read and the lock is released. If long updating statements are running when the FLUSH statement is issued, the MySQL server may get stalled until those statements finish. After that, the dump becomes lock free and does not disturb reads and writes on the tables. If the update statements that the MySQL server receives are short (in terms of execution time), the initial lock period should not be noticeable, even with many updates.
For point-in-time recovery (also known as “roll-forward,” when you need to restore an old backup and replay the changes that happened since that backup), it is often useful to rotate the binary log (see Section 5.3.4, “The Binary Log”) or at least know the binary log coordinates to which the dump corresponds:
shell> mysqldump --all-databases --master-data=2 > all_databases.sql
Or:
shell> mysqldump --all-databases --flush-logs --master-data=2 > all_databases.sql
The --master-data and --single-transaction options can be used simultaneously as of MySQL 4.1.8, which provides a convenient way to make an online backup suitable for use prior to point-in-time recovery if tables are stored using the InnoDB storage engine.
For more information on making backups, see Section 6.2, “Database Backup Methods”, and Section 6.3, “Example Backup and Recovery Strategy”.
MySQL Enterprise. MySQL Enterprise subscribers will find more information about mysqldump in the Knowledge Base article, How Can I Avoid Inserting Duplicate Rows From a Dump File?. Access to the MySQL Knowledge Base collection of articles is one of the advantages of subscribing to MySQL Enterprise. For more information, see http://www.mysql.com/products/enterprise/advisors.html.
User Comments
For a faster mysqldump of InnoDB tables:
1. mysqldump --opt --user=username --password database > filetosaveto.sql
2. open the dump file put this statement at the beginning of the sql dump text file:
SET FOREIGN_KEY_CHECKS=0;
3. mysql --user=username --password database < dumpfile.sql
Very fast.
After adding "SET FOREIGN_KEY_CHECKS=0;", remember to append "SET FOREIGN_KEY_CHECKS=1;" at the end of the import file. The potential problem is that any data inconsistency that would have made a foreign key check fail during import will have made it into the database even after the foreign keys are turned back on. This is especially likely if the foreign keys are not turned back on for a long period of time, which can happen if "SET FOREIGN_KEY_CHECKS=1;" was not appended to the import file in the first place.
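Rather than editing the dump by hand, the wrapping step can be automated so the closing SET cannot be forgotten. A minimal sketch (dump.sql here is a placeholder standing in for real mysqldump output):

```shell
# Create a stand-in dump file; in practice this would come from mysqldump.
printf 'INSERT INTO t VALUES (1);\n' > dump.sql

# Wrap the dump with the FOREIGN_KEY_CHECKS toggles in one pass, so the
# re-enabling statement is always present at the end of the import file.
{
  echo 'SET FOREIGN_KEY_CHECKS=0;'
  cat dump.sql
  echo 'SET FOREIGN_KEY_CHECKS=1;'
} > dump-wrapped.sql
```

dump-wrapped.sql can then be fed to mysql exactly as in step 3 above.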
You can even do your mysqldump backups with logrotate.
Simply put something like this into /etc/logrotate.conf:
/var/backups/mysql/dump.sql {
daily
rotate 14
missingok
compress
postrotate
/usr/bin/mysqldump --defaults-extra-file=/.../backup-credentials.cnf --opt --flush-logs --all-databases > /var/backups/mysql/dump.sql
endscript
}
The following mysqldump import example for InnoDB tables is at least 100x faster than the previous examples.
1. mysqldump --opt --user=username --password database > dumpfile.sql
2. Edit the dump file and put these lines at the beginning:
SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;
3. Put these lines at the end:
SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;
4. mysql --user=username --password database < dumpfile.sql
If you want to schedule a task on Windows to back up and move your data somewhere, the lack of documentation and command-line tools in Windows can make it a real beast. I hope this helps you keep your data safe.
First off, you will need a command-line file compressor (or you should use one, anyway). I like GNU gzip. You can get it for Windows here http://gnuwin32.sourceforge.net/packages/gzip.htm
Secondly, you will need to use Windows FTP via the command line. It took me all day to find documentation on this guy, so I hope this saves some time for somebody.
Anyway, you need two files -- the batch file and a script for your ftp client. The Batch file should look like this guy (it uses random numbers in the file name so that multiple backups are not overwritten):
@ECHO OFF
@REM Set dir variables. Use ~1 format in win2k
SET basedir=C:\BACKUP~1
SET workdir=c:\TEMP
SET mysqldir=c:\mysql\bin
SET gzipdir=c:\PROGRA~1\GnuWin32\bin
SET mysqlpassword=mygoodpassword
SET mysqluser=myrootuser
@REM Change to mysqldir
CD %mysqldir%
@REM dump database. This is all one line
mysqldump -u %mysqluser% -p%mysqlpassword% --all-databases >%workdir%\backup.sql
@REM Change to workdir
CD %workdir%
@REM Zip up database
%gzipdir%\gzip.exe backup.sql
@REM Move to random file name
MOVE backup.sql.gz backup.%random%.gz
@REM FTP file to repository
FTP -n -s:%basedir%\ftp-commands.txt
@REM Remove old backup files
del backup.sql
del backup.*.gz
@REM Change back to base dir
CD %basedir%
And your ftp script should look like this guy (and be named ftp-commands.txt so the above script can find it)
open
ftp.mybackuplocation.com
user
myusername
mypassword
bin
put backup.*.gz
quit
Make sure both of the above files are in whatever directory you set up as %basedir% and test it out and make sure everything works for you. Then schedule it to run every day to protect your data!
Corey's example is helpful, but I don't care for the random file name. Here is the manual script I use on Windows for kicking off a MYSQL backup.
You could easily add all the other bells and whistles of ZIP, FTP, and scheduling should you need it. Note that I didn't use a password or many of the other args for mysqldump, you can add those if ya need 'em.
@ECHO OFF
for /f "tokens=1-4 delims=/ " %%a in ('date/t') do (
set dw=%%a
set mm=%%b
set dd=%%c
set yy=%%d
)
SET bkupdir=C:\path\to\where\you\want\backups
SET mysqldir=D:\path\to\mysql
SET dbname=this_is_the_name_of_my_database
SET dbuser=this_is_my_user_name
@ECHO Beginning backup of %dbname%...
%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql
@ECHO Done! New File: dbBkup_%dbname%_%yy%%mm%%dd%.sql
pause
A little reformulation of the actions that occur during an online dump with log-point registration, i.e. a dump that does not unduly disturb clients using the database during the dump (N.B.: only from 4.1.8 on!) and that can be used to start a slave server from the correct point in the logs.
Use these options:
--single-transaction
--flush-logs
--master-data=1
--delete-master-logs
If you have several databases that are binary-logged and you want to keep a consistent binary log you may have to include all the databases instead of just some (is that really so?):
--all-databases
Now, these are the actions performed by the master server:
1) Acquire a global read lock using FLUSH TABLES WITH READ LOCK. This also flushes the query cache. Caused by option --master-data (the lock is only held briefly when --single-transaction is also given).
2) All running and outstanding transactions terminate. MySQL server stalls for further updates.
3) Read lock on all tables acquired.
4) All the logs are flushed, in particular the binary log is closed and a new generation binary log is opened. Caused by option --flush-logs
5) Binary log coordinates are read and written out so that the slave can position itself correctly in the binary log. Caused by --master-data=1
6) Read lock is released, and the MySQL server can proceed with updates. These updates also go to the binary log and can thus be replayed by the slave. Meanwhile, the InnoDB tables are dumped in a consistent state, namely the state they were in at step 5. (Not guaranteed for MyISAM tables.)
7) Dump terminates after a possibly long time.
8) Any old binary log files are deleted. Caused by --delete-master-logs.
Additionally, there are performance-influencing options:
--extended-insert: use multiple-row insert statements
--quick: do not do buffering of row data, good if tables are large
And there are format-influencing options:
--hex-blob: dump binary columns in hex
--complete-insert: use complete insert statements that include column names; works nicely with --extended-insert
--add-drop-table: add a DROP TABLE statement before each CREATE TABLE statement.
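Putting the lock, performance, and format options above together gives a command line like the following; it is only printed here rather than executed, since actually running it needs a live server and real credentials:

```shell
# Option groups as discussed above; --all-databases keeps the binary log
# consistent across all databases.
lockopts="--single-transaction --flush-logs --master-data=1 --delete-master-logs"
perfopts="--extended-insert --quick"
fmtopts="--hex-blob --complete-insert --add-drop-table"
echo "mysqldump $lockopts $perfopts $fmtopts --all-databases > all_databases.sql"
```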
Following Lon B's helpful post:
You can pipe it to gzip to compress on Windows. I didn't think it would work on Windows, but apparently it does.
@ECHO Beginning backup of %dbname%...
%mysqldir%\bin\mysqldump -B %dbname% -u %dbuser% | gzip > %bkupdir%\dbBkup_%dbname%_%yy%%mm%%dd%.sql.gz
Of course, you need GNU gzip in your path or directory.
When using mysqldump on a replication master, if you want the slave(s) to follow, you may want to avoid the --delete-master-logs option, because it can delete binary logs before the "CHANGE MASTER" is read by the slaves, therefore breaking the replication (then you have to issue manually the "CHANGE MASTER" on the slave(s)). If you want to get rid of old and useless binary logs, it is better to issue a "PURGE MASTER" SQL command on the master after the mysqldump.
I moved my MySQL installation from Linux to Windows 2003 and had to create a new backup script. I was using hotcopy, but on Windows it's not available.
So, inspired by Lon B and Corey Tisdale (above), I created a batch file that will create a mysqldump GZipped file for each database and put them into separate folders. It also creates a log file. You will have to set the vars at the top to match your system.
You will also need GZip to do the compression...
It could still use some work (like no error trapping etc...) but it's in production for me now.
I used a utility "commail.exe" to send the log file to me after the backup is complete.
//--- Begin Batch File ---//
@echo off
:: Set some variables
set bkupdir=E:\MySQL\backup
set mysqldir=E:\MySQL
set datadir=E:\MySQL\data
set logdir=E:\MySQL\logs
set dbuser=username
set dbpass=password
set zip=C:\GZip\bin\gzip.exe
set endtime=0
:GETTIME
:: get the date and then parse it into variables
for /F "tokens=2-4 delims=/ " %%i in ('date /t') do (
set mm=%%i
set dd=%%j
set yy=%%k
)
:: get the time and then parse it into variables
for /F "tokens=5-8 delims=:. " %%i in ('echo.^| time ^| find "current" ') do (
set hh=%%i
set ii=%%j
set ss=%%k
)
:: If this is the second time through then go to the end of the file
if "%endtime%"=="1" goto END
:: Create the filename suffix
set fn=_%yy%%mm%%dd%_%hh%%ii%%ss%
:: Switch to the data directory to enumerate the folders
pushd %datadir%
:: Write to the log file
echo Beginning MySQLDump Process > %logdir%\LOG%fn%.txt
echo Start Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo --------------------------- >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt
:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (
REM Create the backup sub-directory if it does not exist
if not exist %bkupdir%\%%f\ (
echo Making Directory %%f
echo Making Directory %%f >> %logdir%\LOG%fn%.txt
mkdir %bkupdir%\%%f
) else (
echo Directory %%f Exists
echo Directory %%f Exists >> %logdir%\LOG%fn%.txt
)
REM Run mysqldump on each database and compress the data by piping through gzip
echo Backing up database %%f%fn%.sql.gz
echo Backing up database %%f%fn%.sql.gz >> %logdir%\LOG%fn%.txt
%mysqldir%\bin\mysqldump --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert | %zip% > %bkupdir%\%%f\%%f%fn%.sql.gz
echo Done...
echo Done... >> %logdir%\LOG%fn%.txt
)
:: Go back and get the end time for the script
set endtime=1
goto :GETTIME
:END
:: Write to the log file
echo. >> %logdir%\LOG%fn%.txt
echo --------------------------- >> %logdir%\LOG%fn%.txt
echo MySQLDump Process Finished >> %logdir%\LOG%fn%.txt
echo End Time = %yy%-%mm%-%dd% %hh%:%ii%:%ss% >> %logdir%\LOG%fn%.txt
echo. >> %logdir%\LOG%fn%.txt
:: Return to the scripts dir
popd
:: Send the log file in an e-mail
c:\commail\commail -host=smtp.yourcompany.com -from="server <server@yourcompany.com>" -to=serveradmins@yourcompany.com -subject="MySQL Backup" -msg=%logdir%\LOG%fn%.txt
//--- End Batch File ---//
Here's a bash wrapper for mysqldump I cron'd to run at night. It's not the sexiest thing but it's reliable.
It creates a folder for each day, a folder for each db & single bzip2'd files for each table. There are provisions for exclusions. See below where it skips the entire tmp & test db's and in all db's, tables tbl_session & tbl_parameter. It also cleans up files older than 5 days (by that time they've gone to tape).
Be sure to update <user> & <pwd>. Ideally these would be in constants but I couldn't get the bash escaping to work.
# setup
suffix=`date +%Y%m%d`
dest=/mirror/mysqldumps
cmd='/usr/bin/mysqldump'
databases=(`echo 'show databases;' | mysql -u <user> --password='<pwd>' | grep -v ^Database$`)
for d in "${databases[@]}"; do
if [[ $d != 'tmp' && $d != 'test' ]]
then
echo "DATABASE ${d}"
s="use ${d}; show tables;"
tables=(`echo ${s} | mysql -u <user> --password='<pwd>' | grep -v '^Tables_in_'`)
for t in "${tables[@]}"; do
if [[ $t != 'tbl_parameter' && $t != 'tbl_session' ]]
then
echo " TABLE ${t}"
path="${dest}/${suffix}/${d}"
mkdir -p ${path}
${cmd} --user=<user> --password='<pwd>' --quick --add-drop-table --all ${d} ${t} | bzip2 -c > ${path}/${t}.sql.bz2
fi
done
fi
done
# delete old dumps (retain 5 days); -type f avoids trying to rm directories
find ${dest} -type f -mtime +5 -exec rm {} \;
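One nice property of this per-table layout is selective restores. A hedged sketch (the mkdir/printf lines fabricate a stand-in backup so the snippet is self-contained; the final mysql step is shown as a comment because it needs a live server):

```shell
# Fabricate one compressed per-table dump in the layout used above
# (/dest/YYYYMMDD/db/table.sql.bz2). Paths and names are illustrative.
mkdir -p /tmp/mysqldumps_demo/20070101/mydb
printf 'DROP TABLE IF EXISTS tbl_user;\n' | bzip2 -c \
    > /tmp/mysqldumps_demo/20070101/mydb/tbl_user.sql.bz2
# Decompress just the one table you need; pipe into mysql to actually restore:
bzcat /tmp/mysqldumps_demo/20070101/mydb/tbl_user.sql.bz2
#   bzcat .../mydb/tbl_user.sql.bz2 | mysql -u <user> --password='<pwd>' mydb
```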
You always wanted to back up your most important database somewhere on your Linux system, as well as send the dump by email, so that you can recover the entire content if the system crashes.
You can use these 2 scripts.
First Step:
-Install the mutt client that will transfer emails on the command-line : "apt-get install mutt" or "yum install mutt"
-Create the backup directory : "mkdir /home/backups"
Second Step:
- Copy these 2 scripts on your root directory or your user directory :
#!/bin/sh
# Script name : auto_mysql_dump.sh
# Backup the dbname database
dir=`date +%Y-%m-%d`
dbname="mybase"
if [ -d /home/backups ]; then
mkdir /home/backups/$dir
mysqldump -B --user=user_of_my_base --password=pwd_of_my_base --host=host_of_my_base $dbname > /home/backups/$dir/$dbname.sql
if [ $? -eq 0 ]; then
#Bzip2 the dump.sql
bzip2 -z9v /home/backups/$dir/$dbname.sql
#Remove the dump.sql from disk
rm -f /home/backups/$dir/$dbname.sql
fi
fi
# End of script auto_mysql_dump.sh
#!/bin/sh
# Script Name : auto_mail_dump.sh
# Sends an email with the dump realized before
dir=`date +%Y-%m-%d`
dbname="mybase"
mutt -s "Today backup" -a /home/backups/$dir/$dbname.sql.bz2 user@tosend.com < /dev/null
# End of script auto_mail_dump.sh
-Don't forget to change the access to make them executable:
"chmod 700 auto_mysql_dump.sh"
"chmod 700 auto_mail_dump.sh"
Third step:
-Edit the CronTab to schedule the execution of the two scripts.
"crontab -e" (you will use the vi editor)
We consider that the 2 scripts are in the /root directory
-I want the dump to be executed at 8.30 everyday
-I want the mail to be sent at 9.00 everyday
Thus I add these 2 rows after the existing lines :
Hit the "i" to insert new characters...
30 8 * * * /root/auto_mysql_dump.sh > /dev/null
00 9 * * * /root/auto_mail_dump.sh > /dev/null
Save the crontab by hitting : "Esc" + ":wq" (means Write and Quit)
What you should do now :
Once you've written the scripts, test them!
Enjoy the automatic backup from now on :-)
When you need to import the data from a mysqldump, instead of using "shell> mysql < dump.sql", using "mysql> source dump.sql" is much better.
This way of importing the data avoids problems with language-specific characters being turned into garbage.
Here's a DOS script that will back up all your databases to a separate file in a new folder, zip the folder, encrypt the zip and email the encrypted zip to one or many addresses. If the backup is larger than a specified limit, only the logfile is emailed. The unencrypted zipfile is left on your local machine.
The script is also available at http://www.jijenik.com/projects/mysqlbackup/
Many thanks to Wade Hedgren whose script formed the basis for this version.
//--- Begin Batch File ---//
::
:: Creates a backup of all databases in MySQL.
:: Zip, encrypts and emails the backup file.
::
:: Each database is saved to a separate file in a new folder.
:: The folder is zipped and then deleted.
:: the zipped backup is encrypted and then emailed, unless the file exceeds the maximum filesize
:: In all cases the logfile is emailed.
:: The encrypted backup is deleted, leaving the unencrypted zipfile on your local machine.
::
:: Version 1.1
::
:: Changes in version 1.1 (released June 29th, 2006)
:: - backups are now sent to the address specified by the mailto variable
::
:: The initial version 1.0 was released on May 27th, 2006
::
::
:: This version of the script was written by Mathieu van Loon (mathieu-public@jijenik.com)
:: It is based heavily on the script by Wade Hedgren (see comments at http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html)
::
:: This script requires several freeware libraries:
:: - zipgenius (a compression tool), www.zipgenius.it
:: - blat (an emailer tool), www.blat.net
:: - doff (extracts datetime, ignores regional formatting), www.jfitz.com/dos/index.html
::
:: Some areas where this script could be improved:
:: - include error trapping and handling
:: - make steps such as encryption and email optional
:: - allow the user to specify a single database on the command line
::
@echo off
::
:: Configuration options
::
:: The threshold for emailing the backup file. If the backup is larger
:: it will not be emailed (the logfile is always sent).
set maxmailsize=10000000
:: The passphrase used to encrypt the zipfile. Longer is more secure.
set passphrase=secret
:: Name of the database user
set dbuser=root
:: Password for the database user
set dbpass=password
:: Recipients of database backup, comma separated, enclosed in quotes
set mailto="backups@example.com,backups2@example.com"
:: From address for email
set mailfrom="MySQL Backup Service <noreply@example.com>"
:: Email server
set mailsmtp=localhost
:: Email subject
set mailsubject="MySQL Backup"
:: directory where logfiles are stored
set logdir=C:\DatabaseBackups\logs
:: directory where backup files are stored
set bkupdir=C:\DatabaseBackups
:: Install folder of MySQL
set mysqldir=C:\Program Files (x86)\MySQL\MySQL Server 4.1
:: Data directory of MySQL (only used to enumerate databases, we use mysqldump for backup)
set datadir=C:\Program Files (x86)\MySQL\MySQL Server 4.1\data
:: Path of zipgenius compression tool
set zip=C:\Program Files (x86)\ZipGenius 6\zg.exe
:: Path of blat mail tool
set mail=C:\DatabaseBackups\Backupscript\libraries\Blat250\full\blat.exe
:: Path of doff date tool (specify only the folder not the exe)
set doff=C:\DatabaseBackups\Backupscript\libraries\doff10
::
::
:: NO NEED TO CHANGE ANYTHING BELOW
::
::
:: get the date and then parse it into variables
pushd %doff%
for /f %%i in ('doff.exe yyyymmdd_hhmiss') do set fn=%%i
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd
set logfile="%logdir%\%fn%_Backuplog.txt"
:: Switch to the data directory to enumerate the folders
pushd "%datadir%"
:: Write to the log file
echo Beginning MySQLDump Process > %logfile%
echo Start Time = %nicedate% >> %logfile%
echo --------------------------- >> %logfile%
echo. >> %logfile%
:: Create the backup folder
if not exist "%bkupdir%\%fn%\" (
echo Making Directory %fn%
echo Making Directory %fn% >> %logfile%
mkdir "%bkupdir%\%fn%"
)
:: Loop through the data structure in the data dir to get the database names
for /d %%f in (*) do (
REM Run mysqldump on each database (the folder is zipped afterwards)
echo Backing up database %fn%_%%f.sql
echo Backing up database %fn%_%%f.sql >> %logfile%
"%mysqldir%\bin\mysqldump" --user=%dbuser% --password=%dbpass% --databases %%f --opt --quote-names --allow-keywords --complete-insert > "%bkupdir%\%fn%\%fn%_%%f.sql"
echo Done... >> %logfile%
)
:: return from data dir
popd
pushd %bkupdir%
echo Zipping databases
echo Zipping databases >> %logfile%
REM C9 : maximum compression
REM AM : Delete source files
REM F1 : Store relative path
REM R1 : include subfolders
REM K0 : Do not display progress
"%zip%" -add "%fn%_MySQLBackup.zip" C9 AM F1 R1 K0 +"%bkupdir%\%fn%"
echo Crypting zipfile
echo Crypting zipfile >> %logfile%
REM C : Create non-executable zip
REM S : Do not delete after x tries
REM 3 : Use AES encryption
"%zip%" -encrypt "%fn%_MySQLBackup.zip" C S 3 "%passphrase%" %mailfrom%
echo Deleting directory %fn%
echo Deleting directory %fn% >> %logfile%
rmdir /s /q "%bkupdir%\%fn%"
:: Go back and get the end time for the script
set endtime=1
:: return from backup dir
popd
:: update the nicedate for the log
pushd %doff%
for /f %%i in ('doff.exe dd-mm-yyyy hh:mi:ss') do set nicedate=%%i
popd
:: Write to the log file
echo. >> %logfile%
echo --------------------------- >> %logfile%
echo MySQLDump Process Finished >> %logfile%
echo End Time = %nicedate% >> %logfile%
echo. >> %logfile%
:: Send the log file in an e-mail, include the backup file if it is not too large
:: We use the CALL Trick to enable determination of the filesize (type CALL /? at prompt for info)
:: note that you _must_ specify the full filename as the argument
pushd %bkupdir%
Call :MAILFILE "%bkupdir%\%fn%_MySQLBackup.czip"
echo Backup completed
goto :EOF
:MAILFILE
if /i %~z1 LSS %maxmailsize% (
echo Emailing backup file
"%mail%" %logfile% -q -attach %1 -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
) ELSE (
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B.
echo Emailing logfile only
echo Size of backup file %~z1 B exceeds configured email size %maxmailsize% B. only emailing logfile. >> %logfile%
"%mail%" %logfile% -q -serverSMTP %mailsmtp% -f %mailfrom% -to %mailto% -subject %mailsubject%
)
echo Deleting encrypted backup file
del %1
popd
//--- End Batch File ---//
RE: Mathieu van Loon
Excellent, I had this installed and configured in about 10 minutes. I do have one minor fix however.
You aren't getting the time portion of the DOFF command captured into your variable. It appears that the output formatting string MUST NOT CONTAIN ANY BLANKS so I changed mine to:
for /f %%i in ('doff.exe dd-mm-yyyy_at_hh:mi:ss') do set nicedate=%%i
This is terrific; wish I found it 10 hrs ago (darn MySQL Administrator backup - such a waste!!!)
***
Now the problem is that my backups won't restore... I am backing up multiple instances of MediaWiki, Mantis, and Joomla. I'm playing around with
--max_allowed_packet=nnn and that should fix it, based on manual backups working. Now is that nnn bytes or an abbreviation? Hmmm.
I often get errors [MySQL 4.* and 5.*] on reloading a dump of databases having big blobs. I found the solution: disabling --extended-insert (which comes inside the group option --opt, enabled by default) with --skip-extended-insert. I think this way is safer, but it is also much slower.
Here's a python script that does rolling WinRAR'd backups on Windows. It should be trivial to change to Linux, or another compression program.
Please note:
1) this was a quick hack, so please test thoroughly before using in production. Still, I hope it will be a useful basis for your own script.
2) the --single-transaction switch is used as I am backing up InnoDB tables.
3) mysqldump is run with the root user. It would be A Good Thing to make this more secure - eg. create a backup user with read-only permissions to the tables.
4) <tab> is the tab character. Indentation is significant in Python.
import glob
import os
import time
# configuration
baseBackupFileName = "backupName"
maxBackups = 3
mySqlDumpCommand = "d:\\programs\\mysql\\bin\\mysqldump --user=root --password=rootpass --single-transaction DBName Table1Name Table2Name Table3Name"
winRarPath = "\"c:\\Program Files\\WinRAR\\WinRAR.exe\"" # path is quoted as it contains spaces
print "--- START ---"
# create new backup
newBackupFileName = baseBackupFileName + time.strftime("_%Y%m%d_%H%M%S", time.localtime())
os.system(mySqlDumpCommand+" > "+newBackupFileName+".sql")
# compress new backup
os.system(winRarPath+" a "+newBackupFileName+" "+newBackupFileName+".sql")
os.remove(newBackupFileName+".sql")
print "Created new backup \""+newBackupFileName+".rar\""
# delete old backups
oldBackupFileNames = glob.glob(baseBackupFileName+"_*_*.rar")
oldBackupFileNames.sort()
if len(oldBackupFileNames) > maxBackups:
<tab>for fileName in oldBackupFileNames[0:len(oldBackupFileNames)-maxBackups]:
<tab><tab>os.remove(fileName)
<tab><tab>print "Deleted old backup \""+fileName+"\""
print "--- END ---"
It seems one needs to be careful when using --skip-opt with databases containing non-ascii latin1 characters, especially if one has not been paying much attention to character sets.
I am just using default character sets - normally latin1. However, the dump produced by mysqldump is, perhaps surprisingly, in utf8. This seems fine, but leads to trouble with the --skip-opt option to mysqldump, which turns off --set-charset but leaves the dump in utf8.
This seems to lead to a dump that will be silently incorrectly reloaded if strings in the database contain non-ascii latin1 characters.
(Is this a documentation flaw, a design flaw or a bug??)
Perhaps the fact that mysqldump uses utf8 by default, and the importance of the --set-charset option, should be more prominently documented (see the documentation for the --default-character-set option for the current mention of the use of utf8).
I am fairly new to bash scripting; however, I encountered the problem of all the databases going into one *.sql file. If you have a large number of databases to back up, it can take forever to restore your backup. This is a script I wrote to accomplish what I felt was needed. It grabs the names of the databases and puts them in separate *.sql.bz2 files with a corresponding timestamp. Please let me know if this helps and perhaps if I can make it more elegant.
#!/bin/sh
TIME_1=`date +%s`
cd /backup/mysql
DBS="$(mysql --user=youruser --password=yourpass -Bse 'show databases')"
for db in ${DBS[@]}
do
echo ${db}-$(date +%m-%d-%y).sql.bz2 is being saved in /backup/mysql
# remember to add the options you need with your backups here.
mysqldump --user=youruser --password=yourpass $db --single-transaction -R | bzip2 -c > ${db}-$(date +%m-%d-%y).sql.bz2
done
TIME_2=`date +%s`
elapsed_time=$(( ( $TIME_2 - $TIME_1 ) / 60 ))
## just a sanity check to make sure i am not running a dump for 4 hours
echo "This mysql dump ran for a total of $elapsed_time minutes." > mysql_dump_runtime.txt
# delete old databases. I have it setup on a daily cron so
# anything older than 60 minutes is fine
for del in $(find /backup/mysql -name '*.sql.bz2' -mmin +60)
do
echo This backup is more than 60 minutes old and is being removed: $del
rm $del
done
#!/bin/bash
# SEE : http://dev.mysql.com/doc/refman/5.0/en/mysqldump.html
# SEE : http://safari.oreilly.com/0596526784/date_and_time_string_formatting_with_strftime
#
# Improved by Bill Hernandez (Plano, Texas) on Tuesday, August 21, 2007 (12:55 AM)
# ( 1 ) Backs up all info to time stamped individual directories, which makes it easier to track
# ( 2 ) Now maintains a single log that contains additional information
# ( 3 ) Includes a file comment header inside each compressed file
# ( 4 ) Used more variables instead of hard-code to make routine easier to use for something else
# ( 5 ) Where I have mysql5, you may have to replace it with mysql
#
# Posted by Ryan Haynes on July 11 2007 6:29pm
# DO NOT DELETE AUTOMATICALLY FOR NOW, MAYBE LATER
DELETE_EXPIRED_AUTOMATICALLY="TRUE"
# DELETE EXPIRED BACKUPS THAT ARE MORE THAN
# expire_minutes=$(( 1 * 30 )) # 30 minutes old
# expire_minutes=$(( 60 * 24 )) # 1 day old
# expire_minutes=$(( 60 * 24 * 7 )) # 7 days old
# expire_minutes=$(( 60 * 24 * 30 )) # 30 days old
expire_minutes=$(( 60 * 24 * 7 )) # 7 days old
if [ $expire_minutes -gt 1440 ]; then
expire_days=$(( $expire_minutes /1440 ))
else
expire_days=0
fi
function pause(){
read -p "$*"
}
# pause "HIT RETURN, and then enter your sudo password..."
echo "Please enter your sudo password..."
sudo echo
mysql_username="your_mysql_username"
mysql_password="your_mysql_password"
current_dir=`pwd`
echo -n "Current working directory is : "
echo $current_dir
echo "------------------------------------------------------------------------"
TIME_1=`date +%s`
TS=$(date +%Y.%m.%d\-%I.%M.%p)
BASE_DIR=/backups/mysql
BACKUP_DIR=${BASE_DIR}/$TS
BACKUP_LOG_NAME=mysql_dump_runtime.log
BACKUP_LOG=${BASE_DIR}/${BACKUP_LOG_NAME}
sudo mkdir -p $BACKUP_DIR
sudo chown mysql:admin $BACKUP_DIR
sudo chmod 775 $BASE_DIR
sudo chmod -R 777 $BACKUP_DIR
cd $BACKUP_DIR
echo -n "Changed working directory to : "
pwd
echo "Saving the following backups..."
echo "------------------------------------------------------------------------"
DBS="$(mysql5 --user=${mysql_username} --password=${mysql_password} -Bse 'show databases')"
for db in ${DBS[@]}
do
normal_output_filename=${db}.sql
compressed_output_filename=${normal_output_filename}.bz2
echo $compressed_output_filename
# remember to add the options you need with your backups here.
echo "-- $compressed_output_filename - $TS" > $normal_output_filename
echo "-- Logname : `logname`" >> $normal_output_filename
# mysqldump5 --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R | bzip2 -c > $compressed_output_filename
mysqldump5 --user=${mysql_username} --password=${mysql_password} $db --single-transaction -R >> $normal_output_filename
bzip2 -c $normal_output_filename > $compressed_output_filename
rm $normal_output_filename
done
echo "------------------------------------------------------------------------"
TIME_2=`date +%s`
elapsed_seconds=$(( ( $TIME_2 - $TIME_1 ) ))
elapsed_minutes=$(( ( $TIME_2 - $TIME_1 ) / 60 ))
# just a sanity check to make sure i am not running a dump for 4 hours
cd $BASE_DIR
echo -n "Changed working directory to : "
pwd
echo "Making log entries..."
if [ ! -f $BACKUP_LOG ]; then
echo "------------------------------------------------------------------------" > ${BACKUP_LOG_NAME}
echo "THIS IS A LOG OF THE MYSQL DUMPS..." >> ${BACKUP_LOG_NAME}
echo "DATE STARTED : [${TS}]" >> ${BACKUP_LOG_NAME}
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
echo "[BACKUP DIRECTORY ] [ELAPSED TIME]" >> ${BACKUP_LOG_NAME}
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
fi
echo "[${TS}] This mysql dump ran for a total of $elapsed_seconds seconds." >> ${BACKUP_LOG_NAME}
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
# delete old databases. I have it setup on a daily cron so anything older than 60 minutes is fine
if [ $DELETE_EXPIRED_AUTOMATICALLY == "TRUE" ]; then
counter=0
for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
do
counter=$(( counter + 1 ))
echo "[${TS}] [Expired Backup - Deleted] $del" >> ${BACKUP_LOG_NAME}
done
echo "------------------------------------------------------------------------"
if [ $counter -lt 1 ]; then
if [ $expire_days -gt 0 ]; then
echo There were no backup directories that were more than ${expire_days} days old:
else
echo There were no backup directories that were more than ${expire_minutes} minutes old:
fi
else
echo "------------------------------------------------------------------------" >> ${BACKUP_LOG_NAME}
if [ $expire_days -gt 0 ]; then
echo These directories are more than ${expire_days} days old and they are being removed:
else
echo These directories are more than ${expire_minutes} minutes old and they are being removed:
fi
echo "------------------------------------------------------------------------"
echo "\${expire_minutes} = ${expire_minutes} minutes"
counter=0
for del in $(find $BASE_DIR -name '*-[0-9][0-9].[0-9][0-9].[AP]M' -mmin +${expire_minutes})
do
counter=$(( counter + 1 ))
echo $del
rm -R $del
done
fi
fi
echo "------------------------------------------------------------------------"
cd `echo $current_dir`
echo -n "Restored working directory to : "
pwd
If you want to check the result of mysqldump in a shell, pay attention: this will not work:
mysqldump --all-databases | gzip -c > dumpfile.sql.gz
if [ $? -eq 0 ]
...
fi
since $? after "| gzip" gives gzip's exit status, which will almost always be 0.
instead you should use a pipe:
### create a pipe named "pipe"
mkfifo pipe
### compress the pipe in background
gzip < pipe > dumpfile.sql.gz &
### write directly to the pipe
mysqldump --all-databases > pipe
### get the real return code of mysqldump
result=$?
### wait until the gzip completes
wait
### now it is safe to remove the pipe
rm pipe
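In bash specifically (not plain POSIX sh), the same check is possible without a named pipe via the PIPESTATUS array; a minimal sketch, with `false` standing in for a failing mysqldump:

```shell
#!/bin/bash
# PIPESTATUS records the exit status of every command in the last pipeline.
false | gzip -c > /dev/null      # "false" plays the part of a failing mysqldump
result=${PIPESTATUS[0]}          # status of the left-hand command, not gzip's
echo "dump-side exit status: $result"   # prints: dump-side exit status: 1
```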
kind regards....
--
Ludovico
Hi all,
And thanks for the great script examples - I've taken a bit of the batch files and made a perl script for backing up mysql databases - it's pretty crude, but it's what I'm using right now to back up the servers nightly.
It's done on a win2003 server box that has perl and gzip (http://gnuwin32.sourceforge.net/packages.html) installed.
I chose perl 'cause if I need to port the script to 'nix it's trivial. (well, so's the script, really) And it's easy to bang out small scripts in!
Hope this helps someone!
=============
#!/usr/bin/perl
## DB info struct
##{ host, db, username, password}
@DBsToBackup =(
##host 1
['192.168.0.1', 'database1', 'username', 'password'],
['192.168.0.1', 'database2', 'username', 'password'],
['192.168.0.1', 'database3', 'username', 'password'],
['192.168.0.1', 'database4', 'username', 'password'],
##host 2
['192.168.0.2', 'database5', 'username', 'password']
);
$backupdir = 'D:\\Database\\DBdumps';
print "Starting dump of databases...\n\n";
foreach $dbinfo (@DBsToBackup){
($Second, $Minute, $Hour, $Day, $Month, $Year, $WeekDay, $DayOfYear, $IsDST) = localtime(time) ;
$Year += 1900 ; $Month += 1;
$outputFilename = sprintf("%04d%02d%02d[%02d%02d%02d]", $Year, $Month, $Day, $Hour, $Minute, $Second ) . "_$$dbinfo[1]_.sql.gz";
print "dumping: $$dbinfo[0] / $$dbinfo[1]...\n";
$ex = "\"C:\\Program Files\\MySQL\\MySQL\ Server\ 5.0\\bin\\mysqldump\" --user=$$dbinfo[2] --password=$$dbinfo[3] --host=$$dbinfo[0] --add-drop-database --add-drop-table $$dbinfo[1] | gzip > ${backupdir}\\$outputFilename";
print $ex ."\n\n";
print `$ex`;
}
print "DONE dumping\n";
print "deleting dumps older than 14 days...\n";
print `forfiles /P $backupdir /D -14 /C "cmd /c del @path\@file" /S`;
print "script finished\n";
==============
You can find this script and more at http://www.lancetek.com
Thanks,
Lance
Hosted on a shared server?
Then you will most probably not be able to create or retrieve a MySQL dump.
For a solution try this:
http://www.dwalker.co.uk/phpmysqlautobackup/
It will export your database and email it to you. It's GPL open source.
This is a variation from the example above to deal with huge dump files that are too big to edit:
1. mysqldump --opt --user=username --password database > dumpfile.sql
2. Create file to execute prior to the dump file:
echo 'SET AUTOCOMMIT = 0;
SET FOREIGN_KEY_CHECKS=0;
' > pre.sql
3. Create a file to execute after the dump file is imported:
echo 'SET FOREIGN_KEY_CHECKS = 1;
COMMIT;
SET AUTOCOMMIT = 1;
' > post.sql
4. cat pre.sql dumpfile.sql post.sql | mysql --user=username --password database
In case this helps anyone: this backs up all databases and tables and keeps all files for a week, placing them in a directory structure of the format:
/backup_dir/db_name/day/table.sql.gz
Useful if you want to restore to a particular day's data.
It checks that new backups differ from the last before overwriting files. This helps if you are rsyncing your filesystems, as mysql normally writes a date into the dump, so the files always appear to differ even when the data is the same.
It also saves a directory with just your schema in it, checks and repairs tables where necessary, and defragments tables on a Sunday.
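The duplicate-detection trick relies on the CRC-32 that gzip stores in every archive: compress with gzip -n (so no filename or timestamp lands in the header) and two dumps of identical data yield identical stored CRCs. A minimal standalone sketch of the same check, assuming GNU gzip (the helper names crc_of and keep_if_changed are mine):

```shell
#!/bin/sh
# gzip records a CRC-32 of the *uncompressed* data, so comparing stored CRCs
# tells us whether two .gz dumps contain the same data, regardless of when
# they were compressed.

# Print the stored CRC-32 of a .gz file (second line, second column of -lv).
crc_of() {
    gzip --list --verbose "$1" | awk 'NR==2 {print $2}'
}

# Keep the old backup when the new one has identical contents.
keep_if_changed() {          # keep_if_changed NEW.gz OLD.gz
    if [ -f "$2" ] && [ "$(crc_of "$1")" = "$(crc_of "$2")" ]; then
        rm -f "$1"           # same data - discard the new file
    else
        mv -f "$1" "$2"      # data changed - replace the old file
    fi
}
```

Pipe the dump through `gzip -n` (as the script above does) or the header timestamp alone will make every file look different.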
<?php
// Change Me
$username = "root";
$password = "password";
$backup_dir = "/mnt/backup";
$dump = "/usr/bin/mysqldump";
$grep = "/bin/grep";
$gzip = "/bin/gzip";
// This should not need changing from here
function sql_dict($sql){
$x = mysql_query($sql);
if ($x) return mysql_fetch_assoc($x);
}
function cleanup($dir){
$d = dir($dir);
echo "Path: " . $d->path . "\n";
while (false !== ($entry = $d->read())) {
if ($entry=="." or $entry=="..") continue;
$e = str_replace(".sql.gz","",$entry);
$x = sql_dict("describe $e");
if (!$x) {
print "Removing old backup file [$entry]\n";
unlink("$dir/$entry");
}
}
$d->close();
}
function crc32_file($filename)
{
global $gzip;
$x = exec("$gzip --list --verbose $filename");
$x = explode(" ",$x);
return $x[1];
}
if (mysql_connect("localhost",$username,$password)) print "Connected.\n";
else die("Failed to connect to database.");
$dbs = mysql_query("show databases");
if ($dbs) while ($db = mysql_fetch_array($dbs, MYSQL_ASSOC)) {
$db = $db['Database'];
if ($db=="information_schema") continue;
if (mysql_select_db($db)) print "Selected [$db]\n";
else die("Failed to select db [$db]");
foreach (array("schema","data") as $pass){
$sql = mysql_query("show tables");
$day = date("l");
if ($pass=="schema") $dir = "/$backup_dir/$db/schema";
else $dir = "/$backup_dir/$db/$day";
if (!file_exists($dir)) system("mkdir -p $dir");
if (!file_exists($dir)) die("Couldn't Create $dir");
if ($pass=="data"){
$latest = "/$backup_dir/$db/latest";
if (file_exists($latest)) unlink($latest);
system("/bin/ln -s \"$dir\" \"$latest\"");
}
cleanup($dir);
if ($sql) while ($s = mysql_fetch_assoc($sql)) {
if (!isset($s["Tables_in_{$db}"])) {
print "no result";
print_r($sql);
die();
}
$t = $s["Tables_in_{$db}"];
if (
$pass=="schema" ) $data = "--no-data";
else $data = "--lock-tables";
$tab = $t;
$lim = 30;
if (strlen($tab)>$lim) $tab = substr($tab,0,$lim-3)."...";
while (strlen($tab)<30) $tab .= " ";
print "BACKUP: $pass : $day : $db : $tab : ";
if ($pass=="data"){
print "Check : ";
$check = sql_dict("check table $t");
$check = $check['Msg_text'];
print "$check : ";
if ($check != "OK") {
print "Repair";
$repair = sql_dict("repair table $t");
$repair = $repair['Msg_text'];
print " : $repair : ";
}
if ($day=="Sunday"){
// optimize
print "Optimize : ";
$type = sql_dict("show table status like '$t'");
$type = $type['Engine'];
if ($type=="MyISAM") mysql_query("optimize table $t");
if ($type=="InnoDB") mysql_query("alter table $t engine='InnoDB'");
}
}
if (isset($argv[1])){
print "Skipping dump\n";
} else {
$temp = "/tmp/backup.$t.sql.gz";
$out = "$dir/$t.sql.gz";
print "Dump : ";
$cmd = "$dump -u$username -p$password $data --quick --add-drop-table $db $t | $grep -v 'Dump completed' | $gzip -n > $temp";
system($cmd);
print "CRC32 : ";
if (!file_exists($out)){
print "Saving : ";
$cmd = "/bin/mv $temp $out";
system($cmd);
} else {
$md5 = crc32_file($temp);
$nmd5 = crc32_file($out);
if ($md5!=$nmd5) {
print "Saving : ";
$cmd = "/bin/mv $temp $out";
system($cmd);
} else {
print "Skipped : ";
unlink($temp);
}
}
$size = filesize($out);
print "[$size]\n";
}
}
}
}
?>
Here is an example of using mysqldump with rdiff-backup on one server, with synchronization to a remote server using rsync:
#!/bin/sh
# Incremental backup script using rdiff
# Author: Driantsov Alexander
# Requirements:
# * rdiff-backup - http://www.nongnu.org/rdiff-backup/
# * rsync
# * ssh ;)
####
BACKUP_ADMIN_EMAIL="yourmail@yourserver.com"
BACKUP_MAIL_SUBJECT="`hostname`: SQL Backup Synchronization Result"
BACKUP_LOG="/var/log/backup-sql"
BACKUP_PRIO="20" # Priority for the MySQL dump and rdiff-backup Min: 20 Max: -20
BACKUP_TMP_DIR="/var/backup/mysql_tmp" # New dumps will be stored here
BACKUP_DIFF_DIR="/var/backup/hosting/mysql" # Diffs of dumps will be stored there
SYNC_SRV="BAC.KUP.SER.VER" # Remote server for backup storage
SYNC_USER="backup_user" # User at remote storage
SYNC_SPEED="200" # Limit Synchronization Bandwidth to this number of KB/s
SYNC_DIR="/backup/hosting/mysql" #Directory on Remote server to synchronize backups in
MYSQL_USER="admin" # MySQL user
MYSQL_PASSWD=`cat /etc/psa/.psa.shadow` # Password for MySQL. You may obtain password from /etc/psa/.psa.shadow if you are using Plesk on your server.
#Implementation
RSCONSTR="$SYNC_USER@$SYNC_SRV"
# Dump
echo "Backup Started at `date`" > $BACKUP_LOG
load_average=`uptime|awk '{print $10" "$11" "$12}'`
echo "Load average at start: $load_average" >> $BACKUP_LOG
echo "\nBacking up MySQL:" >> $BACKUP_LOG
for i in `mysql -u$MYSQL_USER -p$MYSQL_PASSWD -Bse "show databases"`; do echo "Backing up database $i..." >> $BACKUP_LOG ; nice -n $BACKUP_PRIO mysqldump --single-transaction --quick --skip-extended-insert -u$MYSQL_USER -p$MYSQL_PASSWD $i > $BACKUP_TMP_DIR/$i.dump; done
echo "MySQL dump completed at `date`" >> $BACKUP_LOG
load_average=`uptime|awk '{print $10" "$11" "$12}'`
echo "\nLoad average after MySQL dumps: $load_average\n" >> $BACKUP_LOG
# Diff
echo "Looking for difference in database" >> $BACKUP_LOG
nice -n $BACKUP_PRIO rdiff-backup $BACKUP_TMP_DIR $BACKUP_DIFF_DIR/$HOSTNAME >> $BACKUP_LOG 2>&1
echo "Done" >> $BACKUP_LOG
echo "Rdiff-backup completed at `date`" >> $BACKUP_LOG
load_average=`uptime|awk '{print $10" "$11" "$12}'`
echo "\nLoad average after rdiff-backup: $load_average\n" >> $BACKUP_LOG
# Synchronize
sql_result=`rsync -avz --bwlimit $SYNC_SPEED $BACKUP_DIFF_DIR $RSCONSTR:$SYNC_DIR|tail -n 2`
free_space_info=`ssh $RSCONSTR df -h --sync -t ext3`
echo -en " MySQL backup synchronization:\n $sql_result \nSynchronization completed at `date` \n\nInformation on free space on remote backup server: \n $free_space_info \n\n Backup Log: \n\n`cat $BACKUP_LOG`"| mail -s "$BACKUP_MAIL_SUBJECT" $BACKUP_ADMIN_EMAIL
This is an example of a Windows batch script that implements a rotating archive of backups on a daily, weekly, and monthly basis. It also provides an installation option for the creation of the backup directories and an option to add a scheduled task to the system to run the batch file.
I started with what Lon B posted, and many edits and revisions later this was produced. I hope you find it as useful as we have.
~~~ BEGIN FILE ~~~
@ECHO OFF
SET VERSIONMAJOR=10
SET VERSIONMINOR=6
FOR /f "tokens=1-4 delims=/ " %%a IN ('date/t') DO (
SET dw=%%a
SET mm=%%b
SET dd=%%c
SET yy=%%d
)
REM *** VERIFY AND UPDATE THESE SETTINGS BEFORE INITIAL RUN ***
REM *** mysqldir must point to the \bin directory! ***
SET bkupdir=C:\MySQL-Backups
SET mysqldir=C:\wamp\bin\mysql\mysql5.0.51b\bin
SET dbhost=localhost
SET dbuser=
SET dbpass=
REM *** END USER CONFIGURABLE SETTINGS ***
IF /i "%1" == "--INSTALL" GOTO INSTALLER
IF /i "%1" == "--CREATEDIRS" GOTO CREATEDIRS
IF /i "%1" == "--ADDSCHEDULEDTASK" GOTO TASKSCHED
IF ""%1"" == """" GOTO ALLDB
IF /i "%1" == "--ALL" GOTO ALLDB
REM substring expansion does not work on %1 directly, so copy it to a variable first
SET param=%1
IF /i "%param:~0,2%" == "--" GOTO PARAMERROR
SET ALLDBS=0
SET dbnames=%1
SET dbnamesf=%1
SHIFT
:setArgs
IF ""%1""=="""" GOTO BKUP
SET dbnames=%dbnames% %1
SET dbnamesf=%dbnamesf%_%1
SHIFT
GOTO setArgs
:ALLDB
SET ALLDBS=1
SET dbnames=ALL DATABASES
SET dbnamesf=ALL_DATABASES
:BKUP
@ECHO MySQLdump script for Windows v%VERSIONMAJOR%.%VERSIONMINOR% > %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO MySQLdump script for Windows v%VERSIONMAJOR%.%VERSIONMINOR%
@ECHO.
IF NOT EXIST %bkupdir%\INSTALLED.OK (
@ECHO DIRECTORY STRUCTURE NOT IN PLACE. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO PLEASE RUN %0 --INSTALL OR %0 --CREATEDIRS >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO FAILED TO BACKUP DATABASES. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO DIRECTORY STRUCTURE NOT IN PLACE.
@ECHO PLEASE RUN %0 --INSTALL OR %0 --CREATEDIRS
@ECHO FAILED TO BACKUP DATABASES.
GOTO BOTTOM
)
@ECHO Beginning backup of %dbnames%... >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Beginning backup of %dbnames%...
IF %ALLDBS% == 1 (
SET dumpparams=--host=%dbhost% -u %dbuser% -p%dbpass% -A -f -x -q --create-options --flush-privileges -r %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
) ELSE (
SET dumpparams=--host=%dbhost% -u %dbuser% -p%dbpass% -f -x -q --create-options --flush-privileges -r %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql --databases %dbnames%
)
%mysqldir%\mysqldump %dumpparams% >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Done! New File: dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Done! New File: dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Daily\dbBkup_%dbnamesf%_%dw%.sql /A > NUL
@ECHO Created Daily Backup: Daily\dbBkup_%dbnamesf%_%dw%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created Daily Backup: Daily\dbBkup_%dbnamesf%_%dw%.sql
REM Check to see if it's time for the Weekend backup
IF /i "%dw%" NEQ "Sat" GOTO SKIPWKBK
IF EXIST %bkupdir%\Weekly\safety_%dbnamesf%_%yy%%mm%%dd%.txt GOTO WKCUR
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql GOTO WKCUR
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql GOTO WKPRE
IF NOT EXIST %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql GOTO WKPR2
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_3.sql > NUL
:WKPR2
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous_2.sql > NUL
:WKPRE
MOVE /Y %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql %bkupdir%\Weekly\dbBkup_%dbnamesf%_Previous.sql > NUL
:WKCUR
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Weekly\dbBkup_%dbnamesf%_Current.sql /A > NUL
@ECHO. > %bkupdir%\Weekly\safety_%dbnamesf%_%yy%%mm%%dd%.txt
@ECHO Created Weekly Backup: Weekly\dbBkup_%dbnamesf%_Current.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created Weekly Backup: Weekly\dbBkup_%dbnamesf%_Current.sql
:SKIPWKBK
REM if (day >= 28) write EoM backup
IF %dd% GEQ 28 (
COPY /Y %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql /A %bkupdir%\Monthly\dbBkup_%dbnamesf%_%mm%.sql /A > NUL
@ECHO Created End of Month Backup: Monthly\dbBkup_%dbnamesf%_%mm%.sql >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Created End of Month Backup: Monthly\dbBkup_%dbnamesf%_%mm%.sql
)
DEL /q /f %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.sql
@ECHO Backup stored in rotating archives. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO. >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO End MySQLdump Script >> %bkupdir%\dbBkup_%dbnamesf%_%yy%%mm%%dd%.log
@ECHO Backup stored in rotating archives.
@ECHO.
@ECHO End MySQLdump Script
GOTO BOTTOM
:INSTALLER
@ECHO VERIFY: Path to mysqldump: %mysqldir%
@ECHO VERIFY: Path to backups: %bkupdir%
@ECHO VERIFY: MySQL User: %dbuser%
@ECHO VERIFY: MySQL Pass: %dbpass%
@ECHO VERIFY: MySQL Host: %dbhost%
IF EXIST %bkupdir%\INSTALLED.OK GOTO CREATEDIRS
@ECHO ALERT: Backup directory does not exist. Create base directory and subdirectories?
REM Prompt outside any parenthesized block so %domkdir% expands after SET /p runs
SET /p domkdir=[Y/N]:
IF /i "%domkdir%" == "N" (
@ECHO ALERT: CANNOT CONTINUE WITHOUT DIRECTORIES IN PLACE.
GOTO BOTTOM
)
:CREATEDIRS
IF NOT EXIST %bkupdir%\INSTALLED.OK (
MD "%bkupdir%" > NUL
MD "%bkupdir%\Daily" > NUL
MD "%bkupdir%\Weekly" > NUL
MD "%bkupdir%\Monthly" > NUL
@ECHO INSTALLED CORRECTLY > %bkupdir%\INSTALLED.OK
)
GOTO BOTTOM
:TASKSCHED
@ECHO Preparing to add Scheduled Task...
:STUPIDUSER1
SET /p taskuser=Domain\User to run task:
IF /i ""%taskuser%"" == """" GOTO STUPIDUSER1
:STUPIDUSER2
SET /p taskpwd1=Password:
SET /p taskpwd2=Confirm Password:
IF "%taskpwd1%" NEQ "%taskpwd2%" GOTO STUPIDUSER2
:STUPIDUSER3
SET /p taskname=Task name:
IF /i ""%taskname%"" == """" GOTO STUPIDUSER3
SET /p taskparam=Parameters to pass to batch file:
SCHTASKS /Create /SC DAILY /ST 04:00:00 /TN "%taskname%" /TR "%~f0 %taskparam%" /RU "%taskuser%" /RP %taskpwd1%
GOTO BOTTOM
:PARAMERROR
@ECHO ERROR: Unknown Parameter Passed.
@ECHO Current supported parameters:
@ECHO --ALL - Backup all databases, same as passing nothing to batch file
@ECHO --ADDSCHEDULEDTASK - Adds a scheduled task for this process
@ECHO --CREATEDIRS - Creates Directory Structure
@ECHO --INSTALL - Creates directory structure and outputs configuration settings that need verification
:BOTTOM
To export a limited amount of records, you may use the following:
mysqldump -u [username] -p [dbname] {--opt} --where="true limit 100" > dump.sql
This adds the clause "where true limit 100" to the generated SELECT statement. (Adding "true" is a workaround; I don't know whether there is a way to add "limit 100" without the WHERE keyword.)
Original post: http://www.geeknewz.com/board/index.php?automodule=blog&blogid=76&showentry=434
I wrote a perl script that uses mysqldump to back up and compress an arbitrary list of databases. You can find the script and a rough explanation here:
http://kc5vzm.com/cms/wiki/view/MySQL+-+Simple+Backup+Script
I have recently written a German article on how to back up the MySQL databases of a server using a simple script and a cronjob. The whole article (in German) can be found here:
http://www.lunar.lu/mysql-datenbank-backup-script/
Regards,
Claude
To drop ALL tables in your database (fill out the first line as appropriate):
MYSQL="mysql -h HOST -u USERNAME -pPASSWORD -D DB_NAME"
$MYSQL -BNe "show tables" | awk '{print "set foreign_key_checks=0; drop table `" $1 "`;"}' | $MYSQL
unset MYSQL
This can be useful if you need to empty a database in order to restore a backup made by mysqldump, but you couldn't use --add-drop-database because you don't have CREATE DATABASE privileges on the command line (e.g. you're on shared hosting). mysqldump adds DROP TABLE by default, but if tables may have been added or renamed since the time of your backup (e.g. by some sort of update process that you're trying to revert from), failing to drop those tables will likely cause serious headaches later on.
Of course this raises the question of why MySQL doesn't support "DROP TABLE *;" (in which case mysqldump could just insert that)?
We can also write SQL that generates the SQL to drop the tables:
select concat('drop table ',TABLE_SCHEMA,'.',TABLE_NAME,';')
from information_schema.tables
where TABLE_SCHEMA = 'your_db_name';  -- or whatever filter you want
If you need to split a dumpfile back into its component parts, for example to retrieve a particular table, you might find this little PHP script handy:
http://www.edmondscommerce.co.uk/blog/mysql/updated-mysql-dump-splitter-and-cleaner/
Here is a shell script for extracting tables from a mysqldump file: "mydumpsplitter".
This shell script grabs the tables you want and writes each one to tablename.sql.
It can understand regular expressions, as I've added the sed -r option.
MyDumpSplitter can also split the dump into individual table dumps.
http://kedar.nitty-witty.com/blog/mydumpsplitter-extract-tables-from-mysql-dump-shell-script/
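The core of such a splitter fits in a few lines: mysqldump (with its default comments enabled) precedes each table with a "-- Table structure for table `name`" marker, so one table's section runs from its marker to the next marker or end of file. A rough sketch, assuming that default comment format (the function name extract_table is mine):

```shell
#!/bin/sh
# extract_table DUMPFILE TABLE
# Print the section of a mysqldump file belonging to one table: start printing
# at that table's "-- Table structure for table `name`" marker, stop at the
# next table's marker.
extract_table() {
    awk -v t="$2" '
        /^-- Table structure for table/ { grab = (index($0, "`" t "`") > 0) }
        grab { print }
    ' "$1"
}
```

Redirecting the output per table (extract_table dump.sql foo > foo.sql) splits the dump into individual files.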
There is an irony of SQL injection here but this is handy when I am building small files to represent tables for unit tests:
mysqldump --skip-opt exp --tables foo --where "foo_parent_id = 72 limit 100" > ~/foo.dmp
(note the injection of the limit clause in the where)
I now have 100 insert statements. Foo itself has several million rows, so this is quite handy.