
I am using PreparedStatement batch update/insert to store data in a SQL Server database, but it is taking 3 minutes to persist 1000 entries. I am wondering if there is a way to improve the performance. Please check my code below:

SQL UPDATE STATEMENT

String sqlUpdate = "UPDATE details SET a=?, b=?, c=?, d=?, e=? WHERE f=?";
updatePstmt = conn.prepareStatement(sqlUpdate);

public static void updateCustomerDetailByBatch(HashMap<String, String[]> updateCustDetails) {
    final int BATCH_SIZE = 1000;
    int batchCtr = 0;
    try {
        conn.setAutoCommit(false);
        MAFLogger.info("Number of Customer details to be updated: " + updateCustDetails.size());
        for (Map.Entry<String, String[]> custEntry : updateCustDetails.entrySet()) {
            String x = custEntry.getValue()[0];
            String y = custEntry.getValue()[1];
            String z = custEntry.getKey();
            String a = custEntry.getValue()[2];
            String b = custEntry.getValue()[3];
            String c = custEntry.getValue()[4];
            updatePstmt.setString(1, x);
            updatePstmt.setString(2, y);
            updatePstmt.setString(3, z);
            updatePstmt.setString(4, a);
            updatePstmt.setString(5, b);
            updatePstmt.setString(6, c);
            updatePstmt.addBatch();
            // Increment BEFORE the modulo test; with the original post-increment,
            // a final partial batch of BATCH_SIZE - 1 rows was silently skipped.
            batchCtr++;
            if (batchCtr % BATCH_SIZE == 0) {
                MAFLogger.debug("Batch Ctr is : " + batchCtr + " Updated Batch ");
                updatePstmt.executeBatch();
            }
        }
        if (batchCtr % BATCH_SIZE != 0) {
            MAFLogger.debug("Execute remaining batch update statement contents: " + batchCtr);
            updatePstmt.executeBatch();
        }
        conn.commit();            // commit explicitly before restoring auto-commit
        conn.setAutoCommit(true);
    } catch (SQLException sqlE) {
        MAFLogger.error("Batch update statement problem : " + sqlE);
    }
}
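The chunked-flush bookkeeping above can be checked without a database. This is a hypothetical, DB-free simulation (the class and helper names are mine, not from the question) that records the size of each simulated executeBatch call; incrementing the counter before the modulo test guarantees the partial final batch is never skipped:

```java
import java.util.ArrayList;
import java.util.List;

public class BatchFlushSim {
    // Simulates the addBatch/executeBatch bookkeeping: returns the size of
    // each executeBatch call for totalRows rows with the given batch size.
    static List<Integer> flushSizes(int totalRows, int batchSize) {
        List<Integer> flushes = new ArrayList<>();
        int pending = 0;   // rows accumulated since the last flush
        int batchCtr = 0;
        for (int i = 0; i < totalRows; i++) {
            pending++;     // stands in for addBatch()
            batchCtr++;    // increment BEFORE the modulo test
            if (batchCtr % batchSize == 0) {
                flushes.add(pending);   // stands in for executeBatch()
                pending = 0;
            }
        }
        if (pending > 0) {              // flush the partial final batch
            flushes.add(pending);
        }
        return flushes;
    }

    public static void main(String[] args) {
        System.out.println(flushSizes(2500, 1000)); // [1000, 1000, 500]
        System.out.println(flushSizes(999, 1000));  // [999] - nothing lost
    }
}
```

With the original post-increment ordering, 999 rows would have produced no flush at all, which is exactly the kind of silent data loss this sketch makes visible.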

I have read different articles and answers here on SO, such as link1, link2 and link3, but there is no change. Appreciate if you can help out.

I am using the Microsoft JDBC Driver downloadable from their website: "sqljdbc_4.1.5605.100_enu.tar.gz"

Table Index

index_name          index_description                                  index_keys
PK_CC_Details_TEMP  clustered, unique, primary key located on PRIMARY  f
  • As far as I know, executeBatch divides the execution automatically, so you don't need to divide it programmatically. At least jdbcTemplate divides it automatically. Commented Dec 16, 2016 at 18:04
  • I was following this example stackoverflow.com/questions/6892105/… Commented Dec 16, 2016 at 18:08
  • Maybe you can change the batch size, but that would be purely experimental. Commented Dec 16, 2016 at 18:15
  • Have you checked the indexes on the table to ensure that you're not doing a full table scan for each UPDATE? Commented Dec 16, 2016 at 19:27
  • Specifically, column f should be indexed because that's the predicate used for selecting the correct row. Also, if you have triggers on the table, these might slow down each update. Commented Dec 16, 2016 at 19:50

1 Answer


Switch from the prepared statement to Spring's JdbcOperations batchUpdate(String sql, List<Object[]> batchArgs, int[] argTypes) method. For some reason, batching against SQL Server is really slow if argTypes are not provided ahead of time: the driver appears to call the database to look up the data type for each value passed. An example of batchUpdate:

import java.sql.Types;

int[] argTypes = new int[5];
argTypes[0] = Types.INTEGER;
argTypes[1] = Types.INTEGER;
argTypes[2] = Types.VARCHAR;
argTypes[3] = Types.DECIMAL;
argTypes[4] = Types.DATE;

List<Object[]> batchArgs = new ArrayList<>();
Object[] line = new Object[5];
line[0] = ID;
line[1] = lineNo;
line[2] = location;
line[3] = cost;
line[4] = date;
batchArgs.add(line);

jdbcOperations.batchUpdate(INSERT_QUERY, batchArgs, argTypes);
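To adapt this to the question's HashMap<String, String[]> structure, the map first has to be flattened into a List<Object[]> in the parameter order of the UPDATE statement (the map key is parameter 3, per the original loop). A DB-free sketch of that conversion (the class and helper names are mine, not part of any API):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class BatchArgsBuilder {
    // Flattens the question's map into batchUpdate arguments, matching the
    // parameter order of "UPDATE details SET a=?, b=?, c=?, d=?, e=? WHERE f=?":
    // value[0], value[1], key, value[2], value[3], value[4]
    static List<Object[]> toBatchArgs(Map<String, String[]> updateCustDetails) {
        List<Object[]> batchArgs = new ArrayList<>();
        for (Map.Entry<String, String[]> e : updateCustDetails.entrySet()) {
            String[] v = e.getValue();
            batchArgs.add(new Object[] { v[0], v[1], e.getKey(), v[2], v[3], v[4] });
        }
        return batchArgs;
    }

    public static void main(String[] args) {
        Map<String, String[]> m = new HashMap<>();
        m.put("F1", new String[] { "a1", "b1", "d1", "e1", "f1" });
        Object[] row = toBatchArgs(m).get(0);
        System.out.println(java.util.Arrays.toString(row));
        // [a1, b1, F1, d1, e1, f1]
    }
}
```

Since all six parameters in the question are bound with setString, the matching argTypes array here would simply be six Types.VARCHAR entries.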