An instance of MySQL 5.6.20 running (mostly just) a database with InnoDB tables is exhibiting occasional stalls of 1-4 minutes for all update operations, with all INSERT, UPDATE and DELETE queries remaining in "Query end" state. This obviously is most unfortunate. The MySQL slow query log is logging even the most trivial queries with insane query times, hundreds of them sharing the same timestamp, which corresponds to the point in time where the stall was resolved:

```
# Query_time: 101.743589  Lock_time: 0.000437  Rows_sent: 0  Rows_examined: 0
SET timestamp=1409573952;
INSERT INTO sessions (redirect_login2, data, hostname, fk_users_primary, fk_users, id_sessions, timestamp) VALUES (NULL, NULL, '192.168.10.151', NULL, 'anonymous', '64ef367018099de4d4183ffa3bc0848a', '1409573850');
```

And the device statistics are showing increased, although not excessive, I/O load in this time frame (in this case updates were stalling 14:17:30 - 14:19:12 according to the timestamps from the statement above):

```
# sar -d
[...]
02:15:01 PM       DEV       tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz     await     svctm     %util
02:16:01 PM    dev8-0     41.53    207.43   1227.51     34.55      0.34      8.28      3.89     16.15
02:17:01 PM    dev8-0     59.41    137.71   2240.32     40.02      0.39      6.53      4.04     24.00
02:18:01 PM    dev8-0    122.08   2816.99   1633.44     36.45      3.84     31.46      1.21      2.88
02:19:01 PM    dev8-0    253.29   5559.84   3888.03     37.30      6.61     26.08      1.85      6.73
02:20:01 PM    dev8-0    101.74   1391.92   2786.41     41.07      1.69     16.57      3.55     36.17
```

As the nature of the stalls made me suspect log flushing activity as the culprit, and [this Percona article on log flushing performance issues with MySQL 5.5][1] describes very similar symptoms, I decided to track the values of `Log sequence number` and `Pages flushed up to` from the *"LOG"* section of the `SHOW ENGINE INNODB STATUS` output every 10 seconds.
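Something along these lines can be used to collect the two values (a minimal sketch, not the exact script; it assumes login credentials in `~/.my.cnf` and just pulls the two lines out of the LOG section):

```bash
#!/bin/bash
# Minimal polling sketch; assumes credentials in ~/.my.cnf.
# Extracts "Log sequence number" and "Pages flushed up to" from the LOG section
# of SHOW ENGINE INNODB STATUS and prints their difference in KiB every 10 seconds.
while true; do
    read -r lsn flushed < <(mysql -e 'SHOW ENGINE INNODB STATUS\G' |
        awk '/Log sequence number/ { lsn = $4 }
             /Pages flushed up to/ { fl  = $5 }
             END { print lsn, fl }')
    echo "$(date) LSN: $lsn, Pages flushed: $flushed, Difference: $(( (lsn - flushed) / 1024 )) K"
    sleep 10
done
```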
It does indeed look like flushing activity is ongoing during the stall, as the spread between the two values decreases:

```
Mon Sep 1 14:17:08 CEST 2014  LSN: 263992263703, Pages flushed: 263973405075, Difference: 18416 K
Mon Sep 1 14:17:19 CEST 2014  LSN: 263992826715, Pages flushed: 263973811282, Difference: 18569 K
Mon Sep 1 14:17:29 CEST 2014  LSN: 263993160647, Pages flushed: 263974544320, Difference: 18180 K
Mon Sep 1 14:17:39 CEST 2014  LSN: 263993539171, Pages flushed: 263974784191, Difference: 18315 K
Mon Sep 1 14:17:49 CEST 2014  LSN: 263993785507, Pages flushed: 263975990474, Difference: 17377 K
Mon Sep 1 14:17:59 CEST 2014  LSN: 263994298172, Pages flushed: 263976855227, Difference: 17034 K
Mon Sep 1 14:18:09 CEST 2014  LSN: 263994670794, Pages flushed: 263978062309, Difference: 16219 K
Mon Sep 1 14:18:19 CEST 2014  LSN: 263995014722, Pages flushed: 263983319652, Difference: 11420 K
Mon Sep 1 14:18:30 CEST 2014  LSN: 263995404674, Pages flushed: 263986138726, Difference: 9048 K
Mon Sep 1 14:18:40 CEST 2014  LSN: 263995718244, Pages flushed: 263988558036, Difference: 6992 K
Mon Sep 1 14:18:50 CEST 2014  LSN: 263996129424, Pages flushed: 263988808179, Difference: 7149 K
Mon Sep 1 14:19:00 CEST 2014  LSN: 263996517064, Pages flushed: 263992009344, Difference: 4402 K
Mon Sep 1 14:19:11 CEST 2014  LSN: 263996979188, Pages flushed: 263993364509, Difference: 3529 K
Mon Sep 1 14:19:21 CEST 2014  LSN: 263998880477, Pages flushed: 263993558842, Difference: 5196 K
Mon Sep 1 14:19:31 CEST 2014  LSN: 264001013381, Pages flushed: 263993568285, Difference: 7270 K
Mon Sep 1 14:19:41 CEST 2014  LSN: 264001933489, Pages flushed: 263993578961, Difference: 8158 K
Mon Sep 1 14:19:51 CEST 2014  LSN: 264004225438, Pages flushed: 263993585459, Difference: 10390 K
```

At 14:19:11 the spread reaches its minimum, so flushing activity seems to have ceased right there, coinciding with the end of the stall. But as I understand it, for the flushing operation to block all updates to the database it would have to be "synchronous" flushing, which only kicks in once 7/8 of the log space is occupied. And that would be preceded by an "asynchronous" flushing phase starting at the `innodb_max_dirty_pages_pct` fill level, which I do not seem to be seeing. Also, the LSN keeps increasing even during the stall, so log activity has not ceased completely. Additionally, the page_cleaner thread for adaptive flushing seems to be doing what it should, and most of the time, even with a somewhat higher backlog to flush, updates are not blocked:

![LSN - PagesFlushed][2]

<sup>(numbers are `([Log Sequence Number] - [Pages flushed up to]) / 1024` from `SHOW ENGINE INNODB STATUS`)</sup>

The issue seems somewhat alleviated by setting `innodb_adaptive_flushing_lwm=1`, forcing the page cleaner to do more work than before. So, if this is not the classical "sharp checkpoint" issue, what is it, and, more importantly, how do I turn it off?
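For a sense of scale, a quick back-of-the-envelope check, using the 7/8 figure from above as a rule of thumb and the redo log sizing from the configuration dump below (plain shell arithmetic):

```bash
# redo log capacity = innodb_log_file_size * innodb_log_files_in_group
echo "$(( 268435456 * 2 / 1024 / 1024 )) MiB redo log capacity"            # 512 MiB
echo "$(( 268435456 * 2 * 7 / 8 / 1024 / 1024 )) MiB ~ sync flush point"   # ~448 MiB
echo "$(( 18569 / 1024 )) MiB largest LSN/flush spread seen above"         # ~18 MiB
```

At least by that metric, the backlog during the stall stays far below the point where synchronous flushing should kick in.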
Some configuration variables (I've tinkered with most of them without definite success):

```
mysql> show global variables where variable_name like 'innodb_adaptive_flush%';
+------------------------------+-------+
| Variable_name                | Value |
+------------------------------+-------+
| innodb_adaptive_flushing     | ON    |
| innodb_adaptive_flushing_lwm | 1     |
+------------------------------+-------+

mysql> show global variables where variable_name like 'innodb_max_dirty_pages_pct%';
+--------------------------------+-------+
| Variable_name                  | Value |
+--------------------------------+-------+
| innodb_max_dirty_pages_pct     | 50    |
| innodb_max_dirty_pages_pct_lwm | 10    |
+--------------------------------+-------+

mysql> show global variables where variable_name like 'innodb_log_%';
+-----------------------------+-----------+
| Variable_name               | Value     |
+-----------------------------+-----------+
| innodb_log_buffer_size      | 8388608   |
| innodb_log_compressed_pages | ON        |
| innodb_log_file_size        | 268435456 |
| innodb_log_files_in_group   | 2         |
| innodb_log_group_home_dir   | ./        |
+-----------------------------+-----------+

mysql> show global variables where variable_name like 'innodb_double%';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| innodb_doublewrite | ON    |
+--------------------+-------+

mysql> show global variables where variable_name like 'innodb_buffer_pool%';
+-------------------------------------+----------------+
| Variable_name                       | Value          |
+-------------------------------------+----------------+
| innodb_buffer_pool_dump_at_shutdown | OFF            |
| innodb_buffer_pool_dump_now         | OFF            |
| innodb_buffer_pool_filename         | ib_buffer_pool |
| innodb_buffer_pool_instances        | 8              |
| innodb_buffer_pool_load_abort       | OFF            |
| innodb_buffer_pool_load_at_startup  | OFF            |
| innodb_buffer_pool_load_now         | OFF            |
| innodb_buffer_pool_size             | 29360128000    |
+-------------------------------------+----------------+
```

  [1]: http://www.percona.com/blog/2011/09/18/disaster-mysql-5-5-flushing/
  [2]: https://i.sstatic.net/ruaQZ.png