
Why would you need to create a table with even 20 columns, let alone 2000???

Granted, denormalized data can spare you the JOINs needed to retrieve many columns of data. However, once you have more than 10 columns, you should stop and think about what happens under the hood during data retrieval.

If a 2000-column table undergoes SELECT * FROM ... WHERE, you would generate large temp tables during processing, fetch columns that are unnecessary, and create many scenarios where the communication packets (max_allowed_packet) would be pushed to the brink on every query.
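
To see what is at stake, here is a minimal sketch; wide_table and its column names are hypothetical, used only for illustration:

    -- Check the packet ceiling that a very wide SELECT * keeps bumping into.
    SHOW VARIABLES LIKE 'max_allowed_packet';

    -- Anti-pattern: drags all 2000 columns through temp tables and the wire.
    SELECT * FROM wide_table WHERE customer_id = 42;

    -- Better: fetch only the columns the application actually needs.
    SELECT customer_id, status, updated_at
    FROM wide_table
    WHERE customer_id = 42;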

In my earlier days as a developer, I worked at a company back in 1995 where DB2 was the main RDBMS. The company had a single table with 270 columns and dozens of indexes, and it had performance issues retrieving data. They contacted IBM and had consultants look over the architecture of their system, including this one monolithic table. The company was told: "If you do not normalize this table in the next 2 years, DB2 will fail on queries doing Stage2 Processing (any queries requiring sorting on non-indexed columns)." That was said to a multi-trillion-dollar company about a 270-column table. How much more does it apply to a 2000-column table?

In terms of MySQL, you would have to compensate for such bad design by setting options comparable to what DB2 needs for Stage2 Processing. In this case, those options would be the session buffers MySQL uses for sorting, joining, and temporary tables, as sketched below.
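
The original list of option names did not survive here, so the following is a reconstruction rather than the author's exact list: these are the standard MySQL session buffers that govern sorting on non-indexed columns and in-memory temp tables, with purely illustrative sizes:

    -- Buffers used when sorting and joining on non-indexed columns (filesort).
    SET SESSION sort_buffer_size     = 4 * 1024 * 1024;
    SET SESSION read_rnd_buffer_size = 4 * 1024 * 1024;
    SET SESSION join_buffer_size     = 4 * 1024 * 1024;

    -- Ceilings for internal in-memory temporary tables built during a query.
    SET SESSION tmp_table_size       = 64 * 1024 * 1024;
    SET SESSION max_heap_table_size  = 64 * 1024 * 1024;

Keep in mind that most of these buffers are allocated per session (and join_buffer_size per join), so raising them for hundreds of connections is precisely how you arrive at the "TBs of RAM" mentioned next.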

Tweaking these settings to make up for the presence of dozens, let alone hundreds, of columns works well only if you have TBs of RAM.

This problem multiplies geometrically if you use InnoDB, as you will have to deal with MVCC (Multiversion Concurrency Control) trying to protect tons of columns on each SELECT, UPDATE, and DELETE through transaction isolation.
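
A hedged two-session sketch of why (the table and column names are hypothetical; REPEATABLE READ is InnoDB's default isolation level):

    -- Session 1: open a consistent snapshot.
    START TRANSACTION;
    SELECT status FROM wide_table WHERE id = 1;

    -- Session 2: change a single column and commit.
    UPDATE wide_table SET status = 'archived' WHERE id = 1;
    COMMIT;

    -- Session 1: still sees the old value. To serve it, InnoDB reconstructs
    -- the pre-update image of the entire 2000-column row from the undo log,
    -- and must keep that undo data around as long as this snapshot is open.
    SELECT status FROM wide_table WHERE id = 1;
    COMMIT;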

CONCLUSION

There is no substitute or band-aid that can make up for bad design. Please, for the sake of your future sanity, normalize that table today!!!
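
For a concrete sense of what that normalization looks like, here is a minimal sketch; every table and column name below is hypothetical:

    -- A narrow core table holding the identity and hot columns...
    CREATE TABLE customer (
        id   INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY,
        name VARCHAR(100) NOT NULL
    ) ENGINE=InnoDB;

    -- ...plus one table per logical group of attributes, joined only on demand.
    CREATE TABLE customer_billing (
        customer_id INT UNSIGNED NOT NULL PRIMARY KEY,
        card_last4  CHAR(4),
        billing_zip VARCHAR(10),
        FOREIGN KEY (customer_id) REFERENCES customer (id)
    ) ENGINE=InnoDB;

    -- Queries now touch only the tables and columns they actually need.
    SELECT c.name, b.billing_zip
    FROM customer c
    JOIN customer_billing b ON b.customer_id = c.id
    WHERE c.id = 42;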
