
In MySQL, how do I copy data from one table to another within the same database?

I know about INSERT INTO ... SELECT, but it is taking forever, and on a live database we can't take that risk.

There are some conditions:
1. table1 is the source table and table1_archives is the destination table.
2. table1_archives already has data, so we can only append.

My attempt:

time mysqldump --log-error=$logfile --complete-insert --insert-ignore --no-create-info --skip-triggers --user=$dbuser --host=$host $dbname table1 --where="created < now()-interval 10 month" > $filename 

But the dump references table1 by name, so I can't load it into table1_archives.

Any guidance will be appreciated.

Thanks in advance.

  • How long is forever? How often do you archive records? How many records do you archive on average? Commented Oct 11, 2019 at 6:36
  • Archival happens every weekend, about 10 lakh (1 million) records each week. Commented Oct 11, 2019 at 6:41
  • Can you try Workbench migration? mysql.com/products/workbench/migrate Commented Oct 11, 2019 at 6:51
  • If you look at it closely, it has a condition (--where). I need to verify it works on one server first, then do the same on multiple servers. Commented Oct 11, 2019 at 6:58

2 Answers


In the output file, you need to change the table name table1 to table1_archives. Unfortunately, mysqldump does not have an option to do this, so you have to rename it on the fly using sed, which will replace every occurrence of table1 in the output with table1_archives.

Since your columns can also contain content like table1, it's safer to search and replace the name enclosed in backticks.

You can also use gzip to compress the output file.

Here is the command that worked for me:

mysqldump -u USER -h HOST -p --skip-add-drop-table --no-create-info --skip-triggers --compact DB table1 \
  | sed -e 's/`table1`/`table1_archives`/' \
  | gzip > filename.sql.gz
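
To load the rewritten dump into the archive table, you can pipe it back through the mysql client (a sketch; USER, HOST, DB, and filename.sql.gz are the placeholders from the command above). If table1_archives may already contain some of these rows, also add --insert-ignore to the mysqldump command, as in the question's original attempt, so duplicate-key rows are skipped:

gunzip < filename.sql.gz | mysql -u USER -h HOST -p DB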



"but it is taking forever to do this"

There is a small trick to avoid this, after which the INSERT INTO ... SELECT will run much faster:

INSERT INTO table1 SELECT * FROM table2 

Trick:

Step 1: Drop all indexes from the destination table (table1 in the example above).
Step 2: Execute the query.
Step 3: Recreate the indexes.
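
A minimal sketch of the three steps applied to the question's tables; idx_created is a hypothetical secondary index name (list the real ones with SHOW INDEX FROM table1_archives), and INSERT IGNORE mirrors the question's append-only requirement:

-- Step 1: drop secondary indexes on the destination (idx_created is a hypothetical name)
ALTER TABLE table1_archives DROP INDEX idx_created;

-- Step 2: run the bulk insert; IGNORE skips rows that hit duplicate-key errors
INSERT IGNORE INTO table1_archives
SELECT * FROM table1
WHERE created < NOW() - INTERVAL 10 MONTH;

-- Step 3: recreate the index
ALTER TABLE table1_archives ADD INDEX idx_created (created);

This only pays off when the inserted batch is large relative to the index maintenance cost, and on a live server dropping indexes also affects concurrent reads, so weigh that risk.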

4 Comments

Dear downvoter, I can bet on this. I have migrated many huge databases using this trick. When I started, copying 1 million records ran for a full day before I had to cancel it. After applying the steps above, it took just 7 seconds.
"drop all indices from table2" : can you explain this : what is indices ?, drop = delete table in database , so you will delete all tables indices in table2 database ?
@Eric Correct. I delete each index before the insert and recreate them afterwards. Basically, if there is any index (other than the PK, obviously), insertion is slower, and it becomes unworkable when the dataset is huge.
OK: drop all indexes from myTable (I did not understand the term "indices"), insert the billions of rows, then recreate the indexes :-) OK :-)
