
I have a database table (called Master) with about 40 columns. 11 of them contain constant values that are the same across all of its roughly 100,000 rows.

The downside of this structure is that when I need to update the values in those 11 columns, I have to update all 100,000 rows.

I could move all the constant data into a separate table and update it only once, in one place, instead of in 100,000 places.
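For illustration, here is roughly what I have in mind; all table and column names below are invented, and in reality there would be 11 constant columns and about 29 row-specific ones:

    -- Lookup table holding the 11 shared values exactly once
    CREATE TABLE MasterConstants (
        ConstantsId INT PRIMARY KEY,
        SettingA    VARCHAR(50),   -- stand-ins for the 11 constant columns
        SettingB    VARCHAR(50)
        -- ... the remaining constant columns
    );

    -- Main table keeps only a small foreign key to the shared values
    CREATE TABLE Master (
        MasterId    INT PRIMARY KEY,
        ConstantsId INT NOT NULL
                    REFERENCES MasterConstants (ConstantsId),
        SomeValue   VARCHAR(50)
        -- ... the other per-row columns
    );

    -- Updating the shared values then touches 1 row instead of 100,000
    UPDATE MasterConstants
    SET    SettingA = 'new value'
    WHERE  ConstantsId = 1;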

However, if I do it like this, then when I display the fields I need an INNER JOIN between the two tables, which I believe makes the SELECT statement slower.
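The reads would then look something like this (again with the invented names); this join on every read is what I am worried about:

    SELECT m.MasterId,
           m.SomeValue,
           c.SettingA,
           c.SettingB
    FROM   Master AS m
           INNER JOIN MasterConstants AS c
                   ON c.ConstantsId = m.ConstantsId;

I could hide the join behind a view so the display code stays unchanged, but the join would still run on every read.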

I should say that updating these columns happens much more rarely than reading (displaying) the data.

How do you suggest I store the data in the database to get the best performance?

Thanks

  • I know this might not be useful, but I think it would be prudent to run some form of performance analysis on both mechanisms. The performance impact might be negligible next to the gains in maintenance. See stackoverflow.com/questions/2726657/… for a discussion regarding joins. Commented Apr 22, 2013 at 15:42

1 Answer


I would put the 11 columns in a separate table.

You are bloating the main table with 11 extra values that are static: this reduces density (rows per 8 KB page) and increases the memory footprint of queries.

That is, if you split out the static data, you simply read and process a lot less data for the main table.
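If you want to quantify this, and assuming SQL Server (the 8 KB page mentioned above is SQL Server's page size), you could compare average row size and page counts before and after the split, for example:

    -- Average row size, page fullness and page count for dbo.Master;
    -- run before and after the split to see the density difference
    SELECT avg_record_size_in_bytes,
           avg_page_space_used_in_percent,
           page_count
    FROM   sys.dm_db_index_physical_stats(
               DB_ID(), OBJECT_ID('dbo.Master'),
               NULL, NULL, 'DETAILED');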

