
Converting a large schema to file-per-table, and I will be performing a mysqldump/reload with --all-databases. I have edited my.cnf and changed "innodb_flush_log_at_trx_commit=2" to speed up the load, and I am planning to run "SET GLOBAL innodb_max_dirty_pages_pct=0;" at some point before the dump. I am curious to know which combination of settings will get me the fastest dump and reload times?
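For clarity, this is all I have changed so far (just a sketch of the two settings, nothing else in my.cnf is being touched):

# [mysqld] section of my.cnf, to speed up the reload:
#   innodb_flush_log_at_trx_commit = 2

# and some time before the dump starts:
mysql -e "SET GLOBAL innodb_max_dirty_pages_pct = 0;"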

SCHEMA stats:

26 MyISAM tables, 413 InnoDB tables, ~240 GB of data

[--opt (i.e. --disable-keys, --extended-insert, --quick, etc.)] plus --no-autocommit?

vs. prepending session variables like "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;" to the dump before reloading it.
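If I go the manual route, I would do something along these lines at reload time (a sketch only; dump.sql is a placeholder file name):

( echo "SET autocommit=0; SET unique_checks=0; SET foreign_key_checks=0;"
  cat dump.sql
  echo "COMMIT;" ) | mysql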

Are the mysqldump options equivalent or not really?

Thanks for your advice!

1 Answer

ASPECT #1

While setting innodb_max_dirty_pages_pct to 0 is good to do prior to a dump, you will have to wait until the dirty page count falls below 1% of the InnoDB Buffer Pool size. Here is how you can measure it:

SELECT ibp_dirty * 100 / ibp_blocks PercentageDirty
FROM
  (SELECT variable_value ibp_blocks FROM information_schema.global_status
   WHERE variable_name = 'Innodb_buffer_pool_pages_total') A,
  (SELECT variable_value ibp_dirty FROM information_schema.global_status
   WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') B;

Keep running this report until PercentageDirty drops close to 1.00. Alternatively, you could simply set innodb_max_dirty_pages_pct to 0 about an hour before the dump.
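If you prefer to let a script watch it for you, a minimal sketch along these lines should work (it assumes credentials live in ~/.my.cnf; the 60-second interval and the 1.00 cutoff are arbitrary choices):

#!/bin/bash
# Poll the dirty-page percentage until it drops to ~1% or less
while true; do
  PCT=$(mysql -NBe "SELECT ibp_dirty * 100 / ibp_blocks FROM
    (SELECT variable_value ibp_blocks FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_total') A,
    (SELECT variable_value ibp_dirty FROM information_schema.global_status
     WHERE variable_name = 'Innodb_buffer_pool_pages_dirty') B")
  echo "PercentageDirty: ${PCT}"
  # awk exits 0 (success) once the percentage is at or below 1.00
  awk -v p="${PCT}" 'BEGIN { exit (p <= 1.00) ? 0 : 1 }' && break
  sleep 60
done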

If you do not change innodb_max_dirty_pages_pct, a mysqldump will force a flush of dirty blocks involving the table you are dumping.

ASPECT #2

You should not have to prepend "SET unique_checks=0; SET foreign_key_checks=0;" because mysqldump already writes those at the beginning of the dump. Note that autocommit=0 is not part of that header; it only appears around each table's data if you use --no-autocommit. Here is a sample mysqldump header (please note the two lines after TIME_ZONE):

-- MySQL dump 10.11
--
-- Host: localhost    Database: dbAccessData
-- ------------------------------------------------------
-- Server version       5.0.51a-community-log

/*!40101 SET @OLD_CHARACTER_SET_CLIENT=@@CHARACTER_SET_CLIENT */;
/*!40101 SET @OLD_CHARACTER_SET_RESULTS=@@CHARACTER_SET_RESULTS */;
/*!40101 SET @OLD_COLLATION_CONNECTION=@@COLLATION_CONNECTION */;
/*!40101 SET NAMES utf8 */;
/*!40103 SET @OLD_TIME_ZONE=@@TIME_ZONE */;
/*!40103 SET TIME_ZONE='+00:00' */;
/*!40014 SET @OLD_UNIQUE_CHECKS=@@UNIQUE_CHECKS, UNIQUE_CHECKS=0 */;
/*!40014 SET @OLD_FOREIGN_KEY_CHECKS=@@FOREIGN_KEY_CHECKS, FOREIGN_KEY_CHECKS=0 */;
/*!40101 SET @OLD_SQL_MODE=@@SQL_MODE, SQL_MODE='NO_AUTO_VALUE_ON_ZERO' */;
/*!40111 SET @OLD_SQL_NOTES=@@SQL_NOTES, SQL_NOTES=0 */;

--
-- Current Database: `dbAccessData`
--

CREATE DATABASE /*!32312 IF NOT EXISTS*/ `dbAccessData` /*!40100 DEFAULT CHARACTER SET latin1 */;

USE `dbAccessData`;
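If you want to verify what your own mysqldump version emits, a quick look at the output is enough (sketch; mydb is a placeholder database name):

# Peek at the header your mysqldump writes
mysqldump mydb | head -n 25

# Count the per-table "set autocommit=0" wrappers that --no-autocommit adds
mysqldump --no-autocommit mydb | grep -ci 'set autocommit=0'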

ASPECT #3

Please run this query

SELECT engine, COUNT(1) table_count FROM information_schema.tables WHERE table_schema='mysql' GROUP BY engine;

I ran this and got 25 for MySQL 5.5.23. Since you have 26, only 1 of your MyISAM tables is outside the mysql schema. To find it, run this:

SELECT table_schema,count(1) table_count FROM information_schema.tables WHERE engine='MyISAM' GROUP BY table_schema; 

If you stop writing to the one lone table, you should be able to mysqldump all databases just fine.
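If you would rather see that table by name directly, something like this should do it (a sketch that simply excludes the system schemas):

mysql -e "SELECT table_schema, table_name
          FROM information_schema.tables
          WHERE engine='MyISAM'
            AND table_schema NOT IN ('mysql','information_schema')"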

ASPECT #4

The options bundled into --opt (which mysqldump enables by default) are adequate. No need to alter it.
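For reference, as far as I recall from the mysqldump manual, --opt is just shorthand for the following combination, so spelling the options out individually gains you nothing (double-check against your version):

# --opt (the default) is equivalent to passing all of these together
mysqldump --add-drop-table --add-locks --create-options --disable-keys \
          --extended-insert --lock-tables --quick --set-charset \
          --all-databases > full_dump.sql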

ASPECT #5

You may want to dump the databases into separate files. Please see my Apr 17, 2011 post How can I optimize a mysqldump of a large database? on how to script parallel mysqldumps.
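The basic idea looks roughly like this (a bare-bones sketch, not the full script from that post; credentials are assumed to live in ~/.my.cnf and the dump options are illustrative):

#!/bin/bash
# Dump each database to its own file, running the dumps in parallel
DBLIST=$(mysql -NBe "SELECT schema_name FROM information_schema.schemata
                     WHERE schema_name NOT IN ('information_schema','performance_schema')")
for DB in ${DBLIST}; do
  mysqldump --no-autocommit --routines --triggers "${DB}" > "${DB}.sql" &
done
wait
echo "All dumps complete"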

  • mysql> select count(*) from tables where engine='myisam'; returned 59. Commented Jun 11, 2013 at 15:42
  • So I actually have 26 MyISAMs in my schema-- Thanks for your advice re: just using "--opt" and I will definitely flush the innodb pages! Commented Jun 11, 2013 at 15:45
  • ASPECT #2 - you mention autocommit=0 and 2 other things, but only those 2 things are handled by mysqldump while autocommit - the most important one of them that would provide the biggest speedup - isn't, yet you lump it in. Commented Dec 24, 2019 at 5:18
  • your command said "aria" engine for me, which is weird, then i checked and it seems to be the information_schema stuff, should i change that to innodb as well? Commented Feb 7, 2022 at 23:49
