
So I already completed a script that inserts data into a MySQL table and moves those files into a directory until no files are left. There are around 51 files and it takes around 9 seconds to complete the execution. So my question is: is there a better way to speed up the execution process?

The code is:

use strict;
use warnings;
use DBI;
use Data::Dumper;

our $DIR = "/home/aimanhalim/LOG";
our $FILENAME_REGEX = "server_performance_";

# mariaDB config hash
our %db_config = (
    "username" => "root",
    "password" => "",
    "db"       => "Top_Data",
    "ip"       => "127.0.0.1",
    "port"     => "3306",
);

main();
exit;

sub main {
    my $start = time();
    print "Searching file $FILENAME_REGEX in $DIR...\n";

    opendir(my $dr, $DIR) or die "<ERROR> Cannot open dir: $DIR\n";

    while (my $file = readdir $dr) {
        print "file in $DIR: [$file]\n";
        next if (($file eq ".") || ($file eq "..") || ($file eq "DONE"));

        # Opening the file in the directory
        open(my $file_hndlr, "<", "$DIR/$file")
            or die "<ERROR> Cannot open file: $DIR/$file\n";

        # Making variables
        my $line_count = 0;
        my %data = ();
        my $dataRef = \%data;
        my $move = "$DIR/$file";    # the actual move into DONE is snipped
        print "$file\n";

        while (<$file_hndlr>) {
            my $line = $_;
            chomp($line);
            print "line[$line_count] - [$line]\n";

            if ($line_count == 0) {
                # get load average from line 0 (get_load_average() is snipped)
                ($dataRef) = get_load_average($line, $dataRef);
                print Dumper($dataRef);
            }
            elsif ($line_count == 2) {
                # get CPU stats from line 2 (get_Cpu() is snipped)
                ($dataRef) = get_Cpu($line, $dataRef);
                print Dumper($dataRef);
            }
            $line_count++;
        }

        # insert db
        my ($result) = insert_record($dataRef, \%db_config, $file);

        my $Done_File = "/home/aimanhalim/LOG/DONE";
    }    # (this closing brace and the next one were missing in the original post)
}

# (in the original post this sub was nested inside main's while loop)
sub insert_record {
    my ($data, $db_config, $file) = @_;
    my $result = -1;    # -1 - fail; 0 - succ

    # connect to MySQL database
    my $dsn = "DBI:mysql:database=" . $db_config->{'db'}
            . ";host=" . $db_config->{'ip'}
            . ";port=" . $db_config->{'port'};
    my $username = $db_config->{'username'};
    my $password = $db_config->{'password'};
    my %attr = (PrintError => 0, RaiseError => 1);

    my $dbh = DBI->connect($dsn, $username, $password, \%attr)
        or die $DBI::errstr;
    print "We Have Successfully Connected To The Database\n";

    # *** the prepare() of the INSERT statement is snipped here ***
    $stmt->execute(@param_bind);    # this line is the insert data statement
    $stmt->finish();
    print "The Data Has Been Inserted Successfully\n";

    # commit / rollback would go here if AutoCommit were disabled; in the
    # original post, commit() and disconnect() sat after the return and
    # were never reached
    $dbh->disconnect();

    $result = 0;
    return ($result);
}

Edited:

So pretty much this is my code, with some snipping here and there.

I tried to put the insert_record sub below the #insert db comment, but I don't think that did anything :U

  • Yes, there is a way. Commented Aug 26, 2019 at 6:03
  • @ssr1012 Sorry, my bad, I edited the post. Commented Aug 26, 2019 at 6:13
  • @Сухой27 If so, how? Commented Aug 26, 2019 at 6:13
  • In fact you're doing it the slowest way possible, opening a new database connection for every single row you insert. You might like my presentation Load Data Fast! (see the sketch below). Commented Aug 26, 2019 at 6:32
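
For reference, the bulk-load route that presentation points at would look roughly like this with DBI, assuming the parsed values were first staged into a single tab-separated file. The file name parsed.tsv, the table server_stats, and its columns are all hypothetical, and local_infile must be enabled on the server:

use strict;
use warnings;
use DBI;

# mysql_local_infile=1 in the DSN lets DBD::mysql issue LOAD DATA LOCAL INFILE
my $dbh = DBI->connect(
    "DBI:mysql:database=Top_Data;host=127.0.0.1;port=3306;mysql_local_infile=1",
    "root", "", { PrintError => 0, RaiseError => 1 }
) or die $DBI::errstr;

# one server-side bulk load instead of 51 separate INSERT round trips
$dbh->do(q{
    LOAD DATA LOCAL INFILE '/home/aimanhalim/LOG/parsed.tsv'
    INTO TABLE server_stats
    FIELDS TERMINATED BY '\t'
    (filename, load_avg, cpu)
});

$dbh->disconnect();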

1 Answer


You are connecting to the database for every file that you want to insert (and, if I read your code correctly, a closing curly brace was missing in the original post, so it wouldn't actually compile). Opening new database connections is (comparatively) slow.

Open the connection once, before inserting the first file, and re-use it for subsequent inserts into the database. Close the connection after your last file has been inserted into the database. This should give you a noticeable speed-up.

(Depending on the amount of data, 9 seconds might actually not be too bad; but since there is no information on that, it's hard to say.)
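
A minimal sketch of that restructuring, assuming the question's directory layout and credentials; parse_file() is a hypothetical stand-in for the per-file parsing in main(), and the server_stats table and its columns are made up, since the actual INSERT was snipped from the question:

use strict;
use warnings;
use DBI;

my $DIR = "/home/aimanhalim/LOG";

# connect once, before the first file is processed
my $dbh = DBI->connect("DBI:mysql:database=Top_Data;host=127.0.0.1;port=3306",
                       "root", "", { PrintError => 0, RaiseError => 1 })
    or die $DBI::errstr;

# preparing the statement once also avoids re-parsing the SQL on every insert
my $stmt = $dbh->prepare(
    "INSERT INTO server_stats (filename, load_avg, cpu) VALUES (?, ?, ?)");

opendir(my $dr, $DIR) or die "Cannot open dir: $DIR\n";
while (my $file = readdir $dr) {
    next if $file eq "." || $file eq ".." || $file eq "DONE";
    my $dataRef = parse_file("$DIR/$file");    # hypothetical parsing helper
    $stmt->execute($file, $dataRef->{load_avg}, $dataRef->{cpu});
}
closedir $dr;

$stmt->finish();
$dbh->disconnect();    # close once, after the last file

This way the connect/disconnect cost is paid once instead of once per file.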


Comments

Is it weird to not have disconnect and commit in the script? Because when I have those things it gives me a segmentation fault (core dump), but when I remove them... it works fine.
You should still be able to handle multiple transactions within a single connection. Depending on your requirements, it might be enough to commit only once after all the data has been inserted.
But the problem is that even without the commit and disconnect statements it works fine.
Once your program terminates, any open resources (such as database connections and file handles) will be closed automatically. I assume that, without any errors, any open transaction will auto-commit.
@MiteMiteKyle It might also be that the connection automatically creates a single transaction which can only be committed once. Consecutive commits probably create errors (or segfaults? Seems a bit weird, but well). If you want multiple transactions, you have to create them manually (reusing the existing connection).
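
A minimal sketch of such manual transaction handling, reusing the connect-once structure from the answer (the server_stats table and the row data are again hypothetical): disable AutoCommit, commit exactly once after all inserts, and roll back on error.

use strict;
use warnings;
use DBI;

# AutoCommit => 0 opens an explicit transaction on the connection
my $dbh = DBI->connect("DBI:mysql:database=Top_Data;host=127.0.0.1;port=3306",
                       "root", "",
                       { PrintError => 0, RaiseError => 1, AutoCommit => 0 })
    or die $DBI::errstr;

my $stmt = $dbh->prepare(
    "INSERT INTO server_stats (filename, load_avg, cpu) VALUES (?, ?, ?)");

my @rows;    # one hashref per parsed file, filled by the parsing loop

eval {
    for my $row (@rows) {
        $stmt->execute($row->{filename}, $row->{load_avg}, $row->{cpu});
    }
    $dbh->commit();    # a single commit after all inserts
};
if ($@) {
    warn "Insert failed, rolling back: $@";
    $dbh->rollback();
}

$stmt->finish();
$dbh->disconnect();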
