Linked Questions
20 questions linked to/from "How can the InnoDB ibdata1 file grow by 5X even with innodb_file_per_table set?"
0 votes · 1 answer · 946 views
ibdata1 grows big again after using innodb_file_per_table and having shrunk it [duplicate]
After having set innodb_file_per_table=1, dumping and dropping all DBs, deleting ibdata1, and restarting MySQL ... my ibdata1 file is small at last. However, directly after reading in my dump, and ...
2 votes · 1 answer · 494 views
ibdata1 increases when firing update queries [duplicate]
I have read a lot of posts which explain that the only solution to stop the growth of an ever increasing ibdata1 file is to: Take a dump of all databases Set innodb_file_per_table in the mysqld ...
10 votes · 4 answers · 15k views
Bulk Delete for Large Table in MySQL
I have a Notification table containing about 100 million rows, hosted in Amazon RDS with 1000 IOPS, and I want to delete the rows older than one month. If I do DELETE FROM NOTIFICATION WHERE CreatedAt < ...
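Deleting tens of millions of rows in one statement runs as a single huge transaction with a matching undo log, which is why a one-shot DELETE tends to overwhelm a 1000-IOPS instance. The usual workaround is to purge in small batches. A minimal sketch, using the table and column names from the question (the batch size is an assumption):

```sql
-- Hedged sketch: purge old rows in bounded chunks so each transaction
-- stays small and undo/lock time stays short. Batch size of 10000 is an
-- assumption; tune it against your IOPS budget.
DELETE FROM NOTIFICATION
 WHERE CreatedAt < NOW() - INTERVAL 1 MONTH
 LIMIT 10000;
-- Re-run (from a loop, cron job, or MySQL EVENT) until ROW_COUNT() = 0.
```

Note that even after the purge, the freed pages are only reused internally; the tablespace file itself does not shrink without a rebuild.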
11 votes · 1 answer · 22k views
MySQL Index creation failing on table is full
UPDATE: tl;dr: The problem was that MySQL uses TMPDIR when creating indexes, and my TMPDIR was the one running out of disk space. Original Q: I'm trying to add an index to an InnoDB table, and ...
8 votes · 3 answers · 10k views
What can cause a rapid drop in RDS MySQL Database free storage space?
How could my MySQL database on Amazon RDS have recently gone from 10.5 GB free to "storage-full" status within about 1.5 hours? It's a 15GB MySQL 5.6.27 database running on a db.t2.micro instance. ...
3 votes · 3 answers · 10k views
MariaDB: how to reduce ibdata file size
I found that my MariaDB's ibdata file keeps increasing. So I searched for this and found that innodb_file_per_table should be set to 1. But my DBMS's configuration already has it set to 1; why ...
4 votes · 2 answers · 5k views
Can I move the undo log outside of ibdata1 in MySQL 5.6 on an existing server?
I've been growing concerned about the large size of ibdata1, which can never shrink even when using file-per-table in InnoDB. Moving the undo log files outside seemed logical, but this procedure seems ...
1 vote · 2 answers · 4k views
Large ibdata file in MySQL
I have a MySQL database with all my tables as InnoDB and the file_per_table config on, but I am still seeing a huge ibdata file (~50GB). All tables have .ibd files as well. This is a machine I have ...
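When ibdata1 stays huge despite file-per-table, a first diagnostic step is checking whether any tables still physically live in the shared system tablespace, or whether the space is held by undo log and data-dictionary overhead instead. A sketch for MySQL/Percona 5.6–5.7 (the view is renamed INNODB_TABLES in 8.0):

```sql
-- List tables whose data still resides in the shared system tablespace
-- (space id 0). With innodb_file_per_table=1 in effect at creation time
-- this list should be empty; anything listed can be moved into its own
-- .ibd file by rebuilding it:
--   ALTER TABLE <name> ENGINE=InnoDB;
SELECT name
  FROM information_schema.INNODB_SYS_TABLES
 WHERE space = 0;
```

If nothing is listed, the space is typically undo history or old internal data, which only a dump-and-reload of the instance reclaims.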
2 votes · 1 answer · 5k views
I have deleted a database from MySQL but storage is not freed
I have deleted multiple databases (schemas) from MySQL on Amazon RDS, but the storage is not freed. Is there anything else I should do to free up the storage? I have a MySQL database that the total ...
3 votes · 3 answers · 2k views
Why does OPTIMIZE TABLE not shrink the table size?
I implemented this solution for a big database, only keeping data for 14 days (removing data daily based on date). When I run OPTIMIZE TABLE table1; the size is supposed to decrease, but in my case it increases....
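For InnoDB, OPTIMIZE TABLE is implemented as a full table rebuild, so it only returns space to the OS when the table has its own .ibd file; space freed inside the shared ibdata1 is merely marked reusable, never given back (and the file can even grow during the rebuild, since a complete copy is written first). Roughly:

```sql
-- For InnoDB, OPTIMIZE TABLE is mapped to a rebuild:
OPTIMIZE TABLE table1;
-- which is roughly equivalent to:
ALTER TABLE table1 ENGINE=InnoDB;
-- With innodb_file_per_table=1 the rebuild writes a fresh .ibd and frees
-- disk space. Inside the shared ibdata1, the reclaimed space is only
-- reused internally and the file itself never shrinks.
```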
1 vote · 2 answers · 2k views
InnoDB's undo log start growing after simple SELECT with READ-UNCOMMITTED level
I have a Percona MySQL (5.7) server with 20K QPS (lots of inserts/updates/deletes). My question is: why does issuing a simple but long SELECT query (to any table) with trx isolation=READ-UNCOMMITTED ...
1 vote · 2 answers · 3k views
Troubleshoot high value for MySQL InnoDB pages written per second
Our database is writing about 600 pages a second to its InnoDB buffer pools. Usually this value is about 10 pages/sec, so the server is experiencing very high IO utilization, around 50%. Is there ...
3 votes · 1 answer · 3k views
mysqldump freezing on a specific table
I dumped a database (sys_data) which is very big (800GB, all data in one ibdata file) from a remote server, but the dump stalled on one table (tb_trade_376). My dump command: mysqldump -uxx -pxx -...
1 vote · 1 answer · 4k views
MySQL dynamically optimize innodb tables without "file per table" setting
We are getting a “too many connections” error once a week, at the same time a MySQL procedure runs. The procedure runs “optimize table” on hundreds of tables and takes nearly ten hours to finish, taking ...
2 votes · 1 answer · 2k views
mysql directory grew to 246G after one query, which failed with “table is full”
I was trying to run the following statement in the hope of creating a join of two existing tables: create table CRS_PAIR select concat_ws(',', a.TESTING_ID, b.TRAINING_ID, a.TESTING_C) as k, ...