I have a stored procedure which, given a table name, creates a complementary trigger and audit table. The SP filters out all the computed columns and other things that stop you doing a SELECT * for auditing, then writes a trigger that inserts those specific columns from table into table_audit on DELETE/INSERT/UPDATE, along with some audit data such as HOST_NAME() and COLUMNS_UPDATED(). It has seen little use, but has generally worked for the clients who've asked for it.
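For context, the generated trigger looks roughly like this (the table, trigger, and column names here are made up for illustration; the real SP builds the column list dynamically):

    CREATE TRIGGER trg_MyTable_Audit
    ON dbo.MyTable
    AFTER INSERT, UPDATE, DELETE
    AS
    BEGIN
        SET NOCOUNT ON;
        -- New and updated rows come from "inserted"...
        INSERT INTO dbo.MyTable_Audit (Col1, Col2, AuditHost, UpdateColumns)
        SELECT Col1, Col2, HOST_NAME(), COLUMNS_UPDATED()
        FROM inserted;
        -- ...and deleted rows from "deleted" (this simplistic handling
        -- is just for the sketch).
        INSERT INTO dbo.MyTable_Audit (Col1, Col2, AuditHost, UpdateColumns)
        SELECT Col1, Col2, HOST_NAME(), COLUMNS_UPDATED()
        FROM deleted
        WHERE NOT EXISTS (SELECT 1 FROM inserted);
    END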
In a recent round of testing I was asked to set up the auditing on a test database. This caused inserts into our main table to fail with "String or binary data would be truncated". After investigation I found that the culprit was the column where the result of COLUMNS_UPDATED() was being stored.
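To confirm this I used a throwaway diagnostic trigger along these lines (names illustrative), which reports how many bytes COLUMNS_UPDATED() actually returns for the table:

    CREATE TRIGGER trg_MyTable_Diag
    ON dbo.MyTable
    AFTER INSERT
    AS
    BEGIN
        DECLARE @len int = DATALENGTH(COLUMNS_UPDATED());
        -- Severity 0 plus NOWAIT prints straight to the Messages tab.
        RAISERROR('COLUMNS_UPDATED() is %d bytes', 0, 1, @len) WITH NOWAIT;
    END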
The definition of the column was:
    [UpdateColumns] [varbinary](16) NULL

Changing the definition to this has made everything work again:
    [UpdateColumns] [varbinary](24) NULL

However, what this highlights is that I don't understand the relationship between the number of columns in the table (95 in this case, 7 of which are computed) and the size of the output of COLUMNS_UPDATED(). I thought [varbinary](16) = 128 bits, which should be more than enough flags for 95 columns.
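To spell out my arithmetic: at one bit per column, 95 columns should need only 12 bytes, well inside 16:

    SELECT CEILING(95 / 8.0) AS ExpectedBytes;  -- returns 12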
So my question is: what is the relationship?
Secondary question: can I easily derive a value for x in [UpdateColumns] [varbinary](x) from the number of columns that will be audited when building the trigger, or would I be better off just setting x to some suitably large number?
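For illustration, this is the sort of derivation I had in mind when building the trigger, though given the above it evidently under-estimates, which is the crux of my question (table name is a placeholder):

    DECLARE @TableName sysname = N'dbo.MyTable';
    -- Size the column at one bit per audited (non-computed) column.
    DECLARE @x int =
    (
        SELECT CEILING(COUNT(*) / 8.0)
        FROM sys.columns
        WHERE object_id = OBJECT_ID(@TableName)
          AND is_computed = 0
    );
    SELECT @x AS SuggestedVarbinarySize;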