I have a MySQL server running on Ubuntu. If I restart MySQL, everything is fine. If, however, I restart the OS, my queries take anywhere from 10 to 100 times as long. The only "solution" to the problem I have found is to run optimize on every table. Afterwards everything performs normally again. However, rebuilding the whole database after every OS reboot is obviously extremely painful and not a viable long-term solution.

I reboot the OS by

  1. stopping all programs querying the database
  2. waiting for queries to finish executing
  3. running stop mysql
  4. making sure that the server stopped with ps -C mysql and ps -C mysqld
  5. running reboot

If I leave out the OS reboot and just start up MySQL again, everything is fine.

Additional information:

  • The error log doesn't contain anything pointing to the issue, nor do I get any errors. Everything is just an order of magnitude or two slower.
  • All tables are affected. I am using InnoDB, but the very same problem existed when I was using MyISAM.
  • Usually MySQL uses 80% of the 8GB RAM, but after the OS reboot only about 1GB is used. CPU usage goes down from 30-80% of one core to about 1%.
  • The optimize query makes normal use of the full 80% RAM and CPU goes up to normal values, too.

Specs:

  • MySQL 5.5.35-0ubuntu0.12.04.2
  • Ubuntu 12.04.4 LTS
  • 8GB RAM, 4 CPU cores

my.cnf file:

[client]
port = 3306
socket = /var/run/mysqld/mysqld.sock

[mysqld_safe]
socket = /var/run/mysqld/mysqld.sock
nice = 0

[mysqld]
local-infile=0
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
lc-messages-dir = /usr/share/mysql
skip-external-locking
innodb_file_per_table
innodb_autoinc_lock_mode = 0
innodb_thread_concurrency=8
innodb_buffer_pool_size=6G
innodb_log_file_size=1600M
innodb_additional_mem_pool_size=1M
innodb_log_buffer_size=4M
innodb_flush_log_at_trx_commit=0
innodb_read_io_threads=3000
innodb_write_io_threads=7000
innodb_io_capacity=10000
key_buffer_size = 1024M
max_allowed_packet = 16M
thread_stack = 192K
thread_cache_size = 8
myisam-recover = BACKUP
query_cache_limit = 1M
query_cache_size = 16M
log_error = /var/log/mysql/error.log
expire_logs_days = 10
max_binlog_size = 100M

[mysqldump]
quick
quote-names
max_allowed_packet = 16M

[mysql]

[isamchk]
key_buffer = 16M

!includedir /etc/mysql/conf.d/

key_buffer_size is 1GB, which is about as much RAM as is in use after the reboot. Could that be connected?

Edit: When logging the query execution times per table, I have noticed that after repairing three of the four tables, the query times of the three repaired tables sometimes still spike. So I am not completely sure that all tables break. Maybe it's just one table that is pulling down the performance of the whole server? However, that doesn't really match the low percentage of used resources.

2 Answers
When you reboot the OS you lose all of the disk reads that had previously been put into the operating system disk cache (RAM). Once you've rebooted, the operating system has to read the MySQL data from disk, which is several orders of magnitude slower than reading from cache (RAM).

Optimise "fixes" this because it causes MySQL to read all of the table data from disk, allowing the OS to cache it again.

Linux uses most free RAM as disk cache. You can see this using the free command:

[oracle@ora12c1 ~]$ free
             total       used       free     shared    buffers     cached
Mem:       4050844     840072    3210772          0      97268     349716
-/+ buffers/cache:      393088    3657756
Swap:      4063228          0    4063228
[oracle@ora12c1 ~]$

Restarting MySQL has no effect on this cache, as it is the operating system that has cached the data, not MySQL.
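You can confirm the page cache is the culprit without a full reboot: dropping the kernel's caches should make your queries slow again immediately. A diagnostic sketch (requires root; the non-root branch is just a guard):

```shell
# Reproduce the "fresh reboot" state by evicting MySQL's data files from
# the kernel page cache. Queries should slow down again right away.
if [ "$(id -u)" -eq 0 ]; then
    sync                                # flush dirty pages to disk first
    echo 3 > /proc/sys/vm/drop_caches   # drop page cache, dentries and inodes
else
    echo "re-run as root to drop the caches" >&2
fi
```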

To "fix" your problem you'll have to force the OS to read the data from disk so that it will cache it. You can use the Unix dd tool to do this. For example:

dd if=/var/mysql/data/mytable.ibd of=/dev/null
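To warm every table rather than one at a time, a small loop over the datadir works. A sketch assuming the innodb_file_per_table layout from the my.cnf above (warm_tablespaces is a hypothetical helper name, not a standard tool):

```shell
# warm_tablespaces DIR: read every InnoDB tablespace file under DIR once,
# which pulls its pages back into the OS page cache.
warm_tablespaces() {
    find "$1" -name '*.ibd' -print0 |
    while IFS= read -r -d '' f; do
        dd if="$f" of=/dev/null bs=1M 2>/dev/null
    done
}

# Example, using the datadir from the my.cnf above:
# warm_tablespaces /var/lib/mysql
```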

Tip: you can verify how much of a given file is in the disk cache with the vmtouch command. That command can also purge the file out of the filesystem cache, or force loading the file into the filesystem cache. See http://hoytech.com/vmtouch/

As Phil already suggested, the slowness is caused by the caches no longer being in physical memory, so you either have to warm up your caches or tell Linux to keep them in memory.

There is a tool, vmtouch, which Bill suggested, that can show you whether the files or directories you give it are already in memory; if not, it can map them into virtual memory.

To check if the table file is in memory:

$ vmtouch table.ibd
           Files: 1
     Directories: 0
  Resident Pages: 0/28  0/112K  0%

To place it in memory:

$ vmtouch -vt table.ibd 

If you have enough memory, you can do the same for a whole directory:

$ vmtouch -dl /var/mysql/db-name 
