The effective maximum table size for MySQL databases is usually determined by operating system constraints on file sizes, not by MySQL internal limits. The following table lists some examples of operating system file-size limits. This is only a rough guide and is not intended to be definitive. For the most up-to-date information, be sure to check the documentation specific to your operating system.
| Operating System | File-size Limit |
| --- | --- |
| Win32 w/ FAT/FAT32 | 2GB/4GB |
| Win32 w/ NTFS | 2TB (possibly larger) |
| Linux 2.2-Intel 32-bit | 2GB (LFS: 4GB) |
| Linux 2.4+ (ext3 file system) | 4TB |
| Solaris 9/10 | 16TB |
| MacOS X w/ HFS+ | 2TB |
| NetWare w/ NSS file system | 8TB |
Windows users, please note that FAT and VFAT (FAT32) are not considered suitable for production use with MySQL. Use NTFS instead.
On Linux 2.2, you can get MyISAM tables larger than 2GB in size by using the Large File Support (LFS) patch for the ext2 file system. Most current Linux distributions are based on kernel 2.4 or higher and include all the required LFS patches. On Linux 2.4, patches also exist for ReiserFS to get support for big files (up to 2TB). With JFS and XFS, petabyte and larger files are possible on Linux. For a detailed overview of LFS in Linux, see Andreas Jaeger's Large File Support in Linux page at http://www.suse.de/~aj/linux_lfs.html.
If you do encounter a full-table error, there are several reasons why it might have occurred:
The InnoDB storage engine maintains InnoDB tables within a tablespace that can be created from several files. This allows a table to exceed the maximum individual file size. The tablespace can include raw disk partitions, which allows extremely large tables. The maximum tablespace size is 64TB. If you are using InnoDB tables and run out of room in the InnoDB tablespace, the solution is to extend the tablespace. See Section 13.6.5, “Adding, Removing, or Resizing InnoDB Data and Log Files”.
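As a sketch of what extending the tablespace might look like (the file names and sizes here are illustrative, not from this manual; the change requires a server restart, and the size given for the existing file must match its current size):

```ini
# my.cnf: keep the existing data file at its current size and
# append a new autoextending file so the tablespace can grow.
innodb_data_file_path = ibdata1:1G;ibdata2:100M:autoextend
```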
You are using MyISAM tables on an operating system that supports files only up to 2GB in size and you have hit this limit for the data file or index file.
You are using a MyISAM table and the space required for the table exceeds what is permitted by the internal pointer size. MyISAM allows data and index files to grow up to 256TB by default, but this limit can be changed up to the maximum allowable size of 65,536TB (256^7 − 1 bytes).
If you need a MyISAM table that is larger than the default limit and your operating system supports large files, the CREATE TABLE statement supports AVG_ROW_LENGTH and MAX_ROWS options. See Section 12.1.14, “CREATE TABLE Syntax”. The server uses these options to determine how large a table to allow.
If the pointer size is too small for an existing table, you can change the options with ALTER TABLE to increase a table's maximum allowable size. See Section 12.1.6, “ALTER TABLE Syntax”.
ALTER TABLE tbl_name MAX_ROWS=1000000000 AVG_ROW_LENGTH=nnn;
You have to specify AVG_ROW_LENGTH only for tables with BLOB or TEXT columns; in this case, MySQL can't optimize the space required based only on the number of rows.
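As an illustration (the table and column names here are hypothetical, not from this manual), the same options can be given up front when creating a large MyISAM table:

```sql
-- Hypothetical log table; MAX_ROWS and AVG_ROW_LENGTH let the
-- server choose a pointer size large enough for the expected data.
-- AVG_ROW_LENGTH is needed here because of the TEXT column.
CREATE TABLE big_log (
    id BIGINT NOT NULL AUTO_INCREMENT PRIMARY KEY,
    message TEXT
) ENGINE=MyISAM
  MAX_ROWS=1000000000
  AVG_ROW_LENGTH=500;
```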
To change the default size limit for MyISAM tables, set the myisam_data_pointer_size system variable, which sets the number of bytes used for internal row pointers. The value is used to set the pointer size for new tables if you do not specify the MAX_ROWS option. The value of myisam_data_pointer_size can be from 2 to 7. A value of 4 allows tables up to 4GB; a value of 6 allows tables up to 256TB.
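For example (a sketch; setting the global value requires a privileged account and affects only MyISAM tables created afterward):

```sql
-- Use 6-byte pointers for new MyISAM tables (up to 256TB).
SET GLOBAL myisam_data_pointer_size = 6;

-- Verify the current setting.
SHOW VARIABLES LIKE 'myisam_data_pointer_size';
```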
You can check the maximum data and index sizes by using this statement:
SHOW TABLE STATUS FROM db_name LIKE 'tbl_name';
You also can use myisamchk -dv /path/to/table-index-file. See Section 12.4.5, “SHOW Syntax”, or Section 4.6.3, “myisamchk — MyISAM Table-Maintenance Utility”.
Other ways to work around file-size limits for MyISAM tables are as follows:
If your large table is read only, you can use myisampack to compress it. myisampack usually compresses a table by at least 50%, so you can have, in effect, much bigger tables. myisampack also can merge multiple tables into a single table. See Section 4.6.5, “myisampack — Generate Compressed, Read-Only MyISAM Tables”.
MySQL includes a MERGE library that allows you to handle a collection of MyISAM tables that have identical structure as a single MERGE table. See Section 13.8, “The MERGE Storage Engine”.
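A minimal sketch (the table names are illustrative) of splitting data across two identically structured MyISAM tables fronted by one MERGE table:

```sql
CREATE TABLE log_2009 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;
CREATE TABLE log_2010 (id INT NOT NULL, msg VARCHAR(255)) ENGINE=MyISAM;

-- The MERGE table presents both underlying tables as one;
-- INSERT_METHOD=LAST routes new rows to the last table in the UNION list.
CREATE TABLE log_all (id INT NOT NULL, msg VARCHAR(255))
    ENGINE=MERGE UNION=(log_2009, log_2010) INSERT_METHOD=LAST;
```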
You are using the MEMORY (HEAP) storage engine; in this case, you need to increase the value of the max_heap_table_size system variable. See Section 5.1.4, “Server System Variables”.
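For example (a sketch; the 512MB value is illustrative, and a session-level setting applies only to MEMORY tables created in that session):

```sql
-- Allow MEMORY tables up to 512MB in the current session.
SET SESSION max_heap_table_size = 512 * 1024 * 1024;
```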
User Comments
If you reach the limit of 2GB of data, use the DELETE statement to remove information, and if you are using the MyISAM engine, don't forget to run OPTIMIZE TABLE to reclaim unused space and reduce file sizes.
Good information here: http://jeremy.zawodny.com/blog/archives/000796.html
If you get an error like "The table '#sql-4e0b_462' is full" when attempting to run an ALTER or OPTIMIZE statement on an InnoDB table, check your innodb_data_file_path setting. It is possible to set a max on that file, in which case the InnoDB tablespace can fill up even though you have plenty of disk space, and you will get the above error.
Since the error message is reported on a temporary table, I found this difficult to track down.
In my.cnf I had (this is by default on gentoo linux):
innodb_data_file_path = ibdata1:10M:autoextend:max:128M
This meant that the InnoDB file starts at 10M and is automatically extended, but only until it reaches 128M, and then we have a problem. I was trying to import a huge database dump when the problem occurred ("The table 'foo' is full"), and after that even my old databases stopped working, giving the same error on updates. After some searching I found this page; now my max is 2G and I'm happy ;). Good luck to you too!
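A sketch of the kind of change described in this comment (the 2G figure matches the comment; raising or removing the cap requires a server restart, and the existing file cannot be shrunk this way):

```ini
# my.cnf: raise the autoextend cap so the shared tablespace
# can grow to 2G instead of stopping at 128M.
innodb_data_file_path = ibdata1:10M:autoextend:max:2G
```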
I didn't see mentioned what I would expect to be the most common type of reason for this error : the disk is full.
Note that in 5.0 (pre-5.0.85) and in 5.1 (pre-5.1.38), you could receive the "table is full" error for an InnoDB table if InnoDB runs out of undo slots (bug #18828).
You can also get this error while importing a very large amount of data.