474

I'm running the following MySQL UPDATE statement:

mysql> update customer set account_import_id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

I'm not using a transaction, so why would I be getting this error? I even tried restarting my MySQL server and it didn't help.

The table has 406,733 rows.

2 Comments

  • So far there are no answers for how to increase the timeout; this may be the only solution in some cases. Commented Oct 18, 2023 at 10:17
  • Getting similar errors on GCP Cloud SQL (MySQL)! I've got regular updating queries and events both referencing the very same table, and each of them can take quite long to finish due to the table size; clearly they can overlap, and when they do, I get this error. Yes, ideally I should make my queries run faster; however, that isn't an option for now, so I increased innodb_lock_wait_timeout to 100 sec (default: 50 sec), which helped stop the errors. Commented Sep 15, 2024 at 19:50

30 Answers

550

HOW TO FORCE UNLOCK for locked tables in MySQL:

Breaking locks like this may cause atomicity not to be enforced on the SQL statements that caused the lock.

This is hackish, and the proper solution is to fix your application that caused the locks. However, when dollars are on the line, a swift kick will get things moving again.

1) Enter MySQL

mysql -u your_user -p 

2) Let's see the list of locked tables

mysql> show open tables where in_use>0; 

3) Let's see the list of the current processes, one of them is locking your table(s)

mysql> show processlist; 

4) Kill one of these processes

mysql> kill <put_process_id_here>; 
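
If the process list is long, it can help to filter it first. A minimal sketch, assuming MySQL 5.1+ where the information_schema.PROCESSLIST table is available:

-- Show only active (non-sleeping) sessions, longest-running first;
-- TIME is the number of seconds the session has spent in its current state.
SELECT id, user, host, db, command, time, state, info
FROM information_schema.PROCESSLIST
WHERE command <> 'Sleep'
ORDER BY time DESC;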

13 Comments

This is dangerous and hackish. A proper solution is to fix your application.
Nonsense, this lets you undo a mess-up and then fix the application. If I could give this guy 100 upvotes for this issue, which I had to fix NOW, I would.
I agree with Lizardx. This was a very useful solution in the situation that I didn't have the privilege to call SHOW ENGINE INNODB STATUS
How is killing a long-running query this way dangerous? The client calling will just get an error.
Guys, just don't forget to index relevant columns; a lot of the time this is what causes the lock.
282

You are using a transaction; autocommit does not disable transactions, it just makes them automatically commit at the end of the statement.

What could be happening is that some other thread is holding a record lock on some record (you're updating every record in the table!) for too long, and your thread is being timed out. Another possibility is running multiple (2+) UPDATE queries on the same row within a single transaction.

You can see more details of the event by issuing a

SHOW ENGINE INNODB STATUS 

after the event (in SQL editor). Ideally do this on a quiet test-machine.
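
The INFORMATION_SCHEMA tables can narrow this down as well. A minimal sketch, assuming InnoDB on MySQL 5.5 or later:

-- Lists open InnoDB transactions, oldest first; long-lived entries here
-- are the usual suspects for holding the locks that time you out.
SELECT trx_id, trx_state, trx_started, trx_mysql_thread_id, trx_query
FROM information_schema.INNODB_TRX
ORDER BY trx_started;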

5 Comments

Is there a way to save the output to a file? I tried SHOW ENGINE INNODB STATUS\G > innodb_stat.txt but it doesn't work.
From command line: mysql [insert credentials] -e "SHOW ENGINE INNODB STATUS\G" > innodb_stat.txt
If many MySQL threads (or processes) are busy, e.g. some queries need a very long time, you have to wait for some process to become idle. If so, you could get this error. Am I right?
Running multiple (2+) UPDATE queries on the same row during a single transaction will also cause this error.
For those using the Python MySQL Connector, use connection.commit() to commit the INSERT or UPDATE you've just sent through.
157
mysql> set innodb_lock_wait_timeout=100;
Query OK, 0 rows affected (0.02 sec)

mysql> show variables like 'innodb_lock_wait_timeout';
+--------------------------+-------+
| Variable_name            | Value |
+--------------------------+-------+
| innodb_lock_wait_timeout | 100   |
+--------------------------+-------+

Now trigger the lock again. You now have 100 seconds to issue a SHOW ENGINE INNODB STATUS\G against the database and see which other transaction is locking yours.
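
Note that a plain SET changes the variable only for the current session. A quick sketch of the difference, in case the longer timeout should apply beyond this one connection:

-- Session scope (the default): affects only the current connection.
SET SESSION innodb_lock_wait_timeout = 100;

-- Global scope: affects connections opened after the change.
SET GLOBAL innodb_lock_wait_timeout = 100;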

3 Comments

This answer does not explain why the asker is getting their error. Could you elaborate on why besides just giving the answer?
+1; although this does not answer the question directly, for me it's a good reference to work around this issue.
@ArtB dev.mysql.com/doc/refman/8.0/en/… In essence the OP is receiving the error because a lock was called on the table and the time elapsed before ending the transaction exceeded the lock_wait_timeout value
113

Take a look to see whether your database is fine-tuned, especially the transaction isolation. It isn't a good idea to increase the innodb_lock_wait_timeout variable.

Check your database transaction isolation level in MySQL:

mysql> SELECT @@GLOBAL.transaction_isolation, @@transaction_isolation, @@session.transaction_isolation;
+--------------------------------+-------------------------+---------------------------------+
| @@GLOBAL.transaction_isolation | @@transaction_isolation | @@session.transaction_isolation |
+--------------------------------+-------------------------+---------------------------------+
| REPEATABLE-READ                | REPEATABLE-READ         | REPEATABLE-READ                 |
+--------------------------------+-------------------------+---------------------------------+
1 row in set (0.00 sec)

You could see improvements by changing the isolation level. Use the Oracle-like READ COMMITTED instead of REPEATABLE READ. REPEATABLE READ is the InnoDB default.

mysql> SET transaction_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)

mysql> SET GLOBAL transaction_isolation = 'READ-COMMITTED';
Query OK, 0 rows affected (0.00 sec)

Also, try to use SELECT FOR UPDATE only if necessary.
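
On MySQL 8.0 you can also persist the setting across restarts without editing my.cnf (a sketch; on older versions, use the config file instead, as a comment below shows):

-- MySQL 8.0+: writes the value to mysqld-auto.cnf so it survives restarts.
SET PERSIST transaction_isolation = 'READ-COMMITTED';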

6 Comments

This is a great solution for locking issues.
Works great for me, the my.cnf version is [mysqld] transaction-isolation = READ-COMMITTED
Just a note: MySQL 8 has renamed tx_isolation variable to transaction_isolation.
Care has to be taken to know what you're getting into when you change from REPEATABLE READ to READ COMMITTED, though. You may end up with dirty data which you may want to avoid. See Wikipedia: Isolation (database systems).
This worked for me. Using Python, SQLAlchemy was giving me a warning, which I always ignored, but on reading it, just maybe it was related: Warning: '@@tx_isolation' is deprecated and will be removed in a future release. Please use '@@transaction_isolation' instead cursor.execute('SELECT @@tx_isolation') - the isolation was set to REPEATABLE-READ, but after setting it to READ-COMMITTED, the locking issue was resolved. The process I had running was using about 8 threads writing to the db.
54

Something is blocking the execution of the query. Most likely another query updating, inserting or deleting from one of the tables in your query. You have to find out what that is:

SHOW PROCESSLIST; 

Once you locate the blocking process, find its id and run:

KILL {id}; 

Re-run your initial query.
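
If you would rather not drop the whole connection, KILL can also target just the running statement. The id below is a hypothetical one taken from SHOW PROCESSLIST:

-- Aborts only the statement the connection is currently executing:
KILL QUERY 1234;

-- Terminates the whole connection (same as a plain KILL):
KILL CONNECTION 1234;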

2 Comments

I accidentally KILLED all processes listed with SHOW PROCESSLIST; now I am getting a 500 error in phpMyAdmin. Is that 500 error related to killing these processes? If yes, how can I restart it?
I can see what you describe, however killing the process doesn't work. The process' command is "killed", but it remains in the process list.
21
mysql> SHOW PROCESSLIST;
mysql> KILL xxxx;

Then kill whichever one is sleeping. In my case it was 2156.


Comments

12

100% with what MarkR said. autocommit makes each statement a one-statement transaction.

SHOW ENGINE INNODB STATUS should give you some clues about the deadlock reason. Have a good look at your slow query log too, to see what else is querying the table, and try to remove anything that's doing a full table scan. Row-level locking works well, but not when you're trying to lock all of the rows!
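
EXPLAIN is the quickest check for a full table scan. A sketch against the OP's table (the query itself is illustrative):

-- type: ALL in the output means a full table scan; with an index on the
-- filtered column, InnoDB can lock only the rows it actually touches.
EXPLAIN SELECT * FROM customer WHERE account_import_id = 1;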

Comments

7

Try updating the two parameters below, as they are likely still at their default values.

innodb_lock_wait_timeout = 50

innodb_rollback_on_timeout = ON

To check a parameter's value, you can use the SQL below.

SHOW GLOBAL VARIABLES LIKE 'innodb_rollback_on_timeout';
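
For what it's worth, innodb_lock_wait_timeout can be raised at runtime, while innodb_rollback_on_timeout is read-only while the server is running and has to go in the server config. A sketch:

-- Takes effect for sessions opened after the change:
SET GLOBAL innodb_lock_wait_timeout = 120;

-- innodb_rollback_on_timeout cannot be SET at runtime; put it in my.cnf:
-- [mysqld]
-- innodb_rollback_on_timeout = ON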

1 Comment

This answer does not explain why this is the solution and what is the meaning of the change. Could you elaborate on why besides just giving the answer?
7

In our case the problem did not have much to do with the locks themselves.

The issue was that one of our application endpoints needed to open 2 connections in parallel to process a single request.

Example:

  1. Open 1st connection
  2. Start transaction 1
  3. Lock 1 row in table1
  4. Open 2nd connection
  5. Start transaction 2
  6. Lock 1 row in table2
  7. Commit transaction 2
  8. Release 2nd connection
  9. Commit transaction 1
  10. Release 1st connection

Our application had a connection pool limited to 10 connections.

Unfortunately, under load, as soon as all connections were in use, the application stopped working and we started seeing this problem. Several requests needed to open a second connection to complete, but could not due to the connection pool limit. As a consequence, those requests held a lock on the table1 row for a long time, causing subsequent requests that needed to lock the same row to throw this error.

Solution:

  • In the short term, we patched the problem by increasing the connection pool limit.
  • In the long term, we removed all nested connections, to fully solve the issue.

Tips:

You can easily check whether you have nested connections by lowering your connection pool limit to 1 and testing your application.

Comments

6

This is an edge case, but this exact error can occur if the volume the database resides on runs out of space. I found this page searching for the solution, but nothing was working. I then noticed that other things on the server were acting strangely. A df -h showed 100% in use. I shut the instance down, increased the size of the volume, restarted the instance, and the problem resolved itself. This was AWS EC2. YMMV.

Comments

5

Can you update any other record within this table, or is the table heavily used? My thinking is that while your statement is attempting to acquire the lock it needs to update this record, the configured timeout expires. You may be able to increase the timeout, which may help.

2 Comments

maybe innodb_lock_wait_timeout in my.cnf
I set innodb_lock_wait_timeout=120 in my.cnf. The default is 50 for MySQL 5.5. After this change I was not able to see this issue in my unit tests! This happened after switching from Proxool to the Tomcat JDBC pool. Probably due to longer transaction times with the Tomcat pool?!
4

If you've just killed a big query, it will take time to roll back. If you issue another query before the killed query is done rolling back, you might get a lock timeout error. That's what happened to me. The solution was just to wait a bit.

Details:

I had issued a DELETE query to remove about 900,000 out of about 1 million rows.

I ran this by mistake (removes only 10% of the rows): DELETE FROM table WHERE MOD(id,10) = 0

Instead of this (removes 90% of the rows): DELETE FROM table WHERE MOD(id,10) != 0

I wanted to remove 90% of the rows, not 10%. So I killed the process in the MySQL command line, knowing that it would roll back all the rows it had deleted so far.

Then I ran the correct command immediately, and got a lock timeout exceeded error soon after. I realized that the lock might actually be the rollback of the killed query still happening in the background. So I waited a few seconds and re-ran the query.
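
If you want to verify that a killed statement is still rolling back, InnoDB exposes this. A minimal sketch:

-- trx_state reads 'ROLLING BACK' while the undo is being applied, and
-- trx_rows_modified counts down as the rollback progresses.
SELECT trx_id, trx_state, trx_rows_modified
FROM information_schema.INNODB_TRX;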

1 Comment

I just had the same thing when the server rebooted during a large update.
3

The number of rows is not huge... Create an index on account_import_id if it's not the primary key.

CREATE INDEX idx_customer_account_import_id ON customer (account_import_id); 
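
To check whether the column is already covered by an index before creating one (a quick sketch):

-- Lists all indexes on the table, including which columns they cover.
SHOW INDEX FROM customer;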

1 Comment

OMG... this just saved me. I royally screwed a production DB by dropping an index and this fixed it. Thank you.
2

I came from Google and I just wanted to add the solution that worked for me. My problem was that I was trying to delete records from a huge table that had a lot of cascading FKs, so I got the same error as the OP.

I disabled autocommit, and then it worked just by adding COMMIT at the end of the SQL statement. As far as I understood, this releases the buffer bit by bit instead of waiting until the end of the command.

To keep with the example of the OP, this should have worked:

mysql> set autocommit=0;

mysql> update customer set account_import_id = 1; commit;

Do not forget to re-enable autocommit if you want to leave the MySQL configuration as it was before.

mysql> set autocommit=1;

Comments

1

Late to the party (as usual), however my issue was the fact that I wrote some bad SQL (being a novice) and several processes had a lock on the record(s). I ended up having to just run SHOW PROCESSLIST and then kill the IDs using KILL <id>.

Comments

1

We ran into this issue yesterday, and after slogging through just about every suggested solution here, plus several others from other answers/forums, we ended up resolving it once we realized the actual issue.

Due to some poor planning, our database was stored on a mounted volume that was also receiving our regular automated backups. That volume had reached max capacity.

Once we cleared up some space and restarted, this error was resolved.

Note that we did also manually kill several of the processes: kill <process_id>; so that may still be necessary.

Overall, our takeaway was that it was incredibly frustrating that none of our logs or warnings directly mentioned a lack of disk space, but that did seem to be the root cause.

1 Comment

In our case, a restart of the DB was not needed; just free up the space. Before I found that, I killed all stuck DB processes via show processlist and kill <db process id> in the SQL console. Not sure if that is necessary.
1

Make sure the database tables are using the InnoDB storage engine and the READ-COMMITTED transaction isolation level.

You can check it with SELECT @@GLOBAL.tx_isolation, @@tx_isolation; on the mysql console.

If it is not set to READ-COMMITTED, then you must set it. Before setting it, make sure you have SUPER privileges in MySQL.

You can take help from http://dev.mysql.com/doc/refman/5.0/en/set-transaction.html.

By setting this, I think your problem will be solved.


You might also want to check that you aren't attempting to update this in two processes at once. Users (@tala) have encountered similar error messages in this context; maybe double-check that...

1 Comment

I think you mean transaction_isolation
0

This kind of thing happened to me when I was using the PHP language construct exit; in the middle of a transaction. The transaction then "hangs", and you need to kill the MySQL process (described above with SHOW PROCESSLIST).

Comments

0

In my instance, I was running an abnormal query to fix data. If you lock the tables in your query, then you won't have to deal with the Lock timeout:

LOCK TABLES `customer` WRITE;
update customer set account_import_id = 1;
UNLOCK TABLES;

This is probably not a good idea for normal use.

For more info see: MySQL 8.0 Reference Manual

Comments

0

I ran into this having two Doctrine DBAL connections, one of them non-transactional (for important logs); they are intended to run in parallel, not depending on each other.

CodeExecution(
    TransactionConnectionQuery()
    TransactionlessConnectionQuery()
)

My integration tests were wrapped in transactions for data rollback after every test.

beginTransaction()
CodeExecution(
    TransactionConnectionQuery()
    TransactionlessConnectionQuery() // CONFLICT
)
rollBack()

My solution was to disable the wrapping transaction in those tests and reset the db data in another way.

Comments

0

I had a similar error when using Python to access a MySQL database. The Python program was using while and for loops. Closing the cursor and connection at the appropriate line solved the problem; see line 230 of https://github.com/nishishailesh/sensa_host_com/blob/master/sensa_write.py. It appears that requesting repeated connections without closing the previous one produced this error.

Comments

0

I've faced a similar issue when doing some testing.

Reason - in my case, the transaction was not committed by my Spring Boot application because I killed the @Transactional function during execution (while the function was updating some rows). Because of this, the transaction was never committed to the database (MySQL).

Result - not able to update those rows from anywhere, but able to update the other rows of the table.

mysql> update some_table set some_value = "Hello World" where id = 1;
ERROR 1205 (HY000): Lock wait timeout exceeded; try restarting transaction

Solution - killed all the MySQL processes using

  • sudo killall -9 mysqld


  • sudo killall -9 mysqld_safe (mysqld_safe restarts the server when an error occurs and logs runtime information to an error log; this step was not required in my case)

Comments

0

Well, in MySQL 8 we can do the following to prevent lock wait timeouts:

START TRANSACTION;
SELECT * FROM customer FOR UPDATE SKIP LOCKED;
UPDATE customer SET account_import_id = 1;
COMMIT;

The FOR UPDATE keyword will lock the selected rows; SKIP LOCKED, on the other hand, will skip rows that are already locked. If this is executed simultaneously, no lock wait timeouts will happen, and the first one will do the job without blocking the other.

This approach avoids lock wait timeouts, but it needs to be applied carefully and with an understanding of how it works: rows that are already locked are silently skipped, so the statement may not update every row.

https://dev.mysql.com/doc/refman/8.0/en/innodb-locking-reads.html

Comments

0

I solved my problem temporarily by setting innodb_buffer_pool_size=64MB in my my.cnf file.

It's not recommended, but it can help when you need a quick fix.

1 Comment

How is that not recommended? I thought it helps to set that to higher values for large databases. I use a 90G buffer pool size now for a 70GB database.
0

My scenario was failing to commit after an update or delete.

show processlist 

showed a long list of sleeping connections.
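
A way to count those sleeping connections, assuming the information_schema.PROCESSLIST table is available:

-- Groups idle connections by the user and host that opened them.
SELECT user, host, COUNT(*) AS sleeping
FROM information_schema.PROCESSLIST
WHERE command = 'Sleep'
GROUP BY user, host;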

Comments

0
show variables like 'innodb_lock_wait_timeout';
SET global innodb_lock_wait_timeout = 1200;

1 Comment

A code-only answer is not high quality😯. While this code may be useful, you can improve it by saying why it works, how it works, when it should be used, and what its limitations are 👍. Please edit✏️ your answer to include explanation and link to relevant documentation. Also, there are quite a few other answers including once mentioning innodb_lock_wait_timeout. Make sure your answer is unique/provides something new.
0

I assume that MySQL and MariaDB have equal, or at least compatible, schemas here (as of 2025-03-10). Here's what I use to find the blocking PID:

select tr.trx_id as waiting_trx_id,
       tr.trx_mysql_thread_id as waiting_pid,
       tr.trx_query as waiting_query,
       tb.trx_id as blocking_trx_id,
       tb.trx_mysql_thread_id as blocking_pid,
       tb.trx_query as blocking_query
from information_schema.INNODB_LOCK_WAITS lw
join information_schema.INNODB_TRX tb on lw.blocking_trx_id = tb.trx_id
join information_schema.INNODB_TRX tr on lw.requesting_trx_id = tr.trx_id;

This query shows which process ID is causing the transaction blocking. After that, you can kill that process ID, or do whatever you want with it.
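
Note that MySQL 8.0 removed information_schema.INNODB_LOCK_WAITS. Assuming the sys schema is installed (it is by default on MySQL 5.7+), a ready-made view provides the same information there:

-- Shows waiting and blocking transactions, including the blocking thread id
-- that can be passed to KILL.
SELECT * FROM sys.innodb_lock_waits;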

Comments

-3

Running this command worked for me

sudo service mysql restart 

1 Comment

Someone might run it in a production environment. Please add a disclaimer.
-4

Had this same error, even though I was only updating one table with one entry, but after restarting MySQL it was resolved.

Comments

-4

Simply restart the MySQL server:

sudo systemctl restart mysql.service 

1 Comment

Your answer could be improved with additional supporting information. Please edit to add further details, such as citations or documentation, so that others can confirm that your answer is correct. You can find more information on how to write good answers in the help center.
