48

I don't know if these requirements are standard or not, but I'm wondering whether there is a solution out there which can do the following:

  • For a specified set of tables, keep a copy of each record prior to changing it in an audit version of the relevant table.

I'd rather not have to code this for every table. I'm wondering if there is a solution you can install on top of SQL Server which will do this for you?

3
  • This has been asked and answered here: stackoverflow.com/questions/8873335/… Commented Jul 9, 2013 at 11:09
  • 1
    Use the trigger option for your requirement... Commented Jul 9, 2013 at 11:11
  • 1
    If anyone is looking into trigger-based auditing, have a look at this project: AutoAudit. Commented Jan 10, 2021 at 14:13

8 Answers

38

There are many ways to do that; it depends on which version of SQL Server you are using.

Here are a few:


4 Comments

You can also use CLR triggers, which may or may not be better suited to your purpose than T-SQL ones.
I would credit the second link to this post instead (since it was written 4 years before and they look almost the same): simple-talk.com/sql/database-administration/… Anyway, it is quite an old script that should be reviewed and updated to support schemas and Unicode data types (nvarchar, nchar...)
Please visit following link for number of approaches as well as their pros and cons: stackoverflow.com/questions/10060408/…
Some of the suggested scripts on the net assume that you're auditing tables with single-word column names (i.e., field names without spaces, like FirstName and not [First Name]). If you have tables like the latter, most of the scripts won't work. I had to rename my columns to single-word versions.
16

I created a trigger which captures the changes as XML; this way we can log all tables to the same audit table, making it more flexible:

CREATE TABLE [dbo].[AuditAll] (
    AuditId int NOT NULL IDENTITY(1,1),
    [DateTime] datetime NOT NULL,
    TableName nvarchar(255) NOT NULL,
    AuditEntry xml NULL,
    CONSTRAINT [PK_AuditAll] PRIMARY KEY CLUSTERED (AuditId ASC)
)

I needed only the 'old' values, so I store only the deleted table; the inserted values can be seen in the base table anyhow.

CREATE TRIGGER AuditSimple
ON Simple
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
    -- SET NOCOUNT ON added to prevent extra result sets from
    -- interfering with SELECT statements.
    SET NOCOUNT ON;

    IF (SELECT COUNT(*) FROM deleted) > 0
    BEGIN
        DECLARE @AuditMessage XML
        -- capture all rows of the deleted table as XML
        SET @AuditMessage = (SELECT * FROM deleted FOR XML AUTO)

        INSERT INTO AuditAll (
            DateTime,
            TableName,
            AuditEntry
        )
        VALUES (
            GETDATE(),
            'Simple',
            @AuditMessage
        )
    END
END
GO

I guess this could easily be wrapped in something like sp_MSforeachtable to create the trigger for each table in the database, but we did not need that at the moment; just remember to change your table names. A generation sketch follows.
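An untested sketch of that generation step: loop over sys.tables with a cursor and create a per-table copy of the AuditSimple trigger via dynamic SQL. It assumes every audited table lives in the dbo schema and that AuditAll exists as defined above; the Audit_ trigger-name prefix is just an invented convention.

DECLARE @table sysname, @sql nvarchar(max);

-- Every user table except the audit table itself
DECLARE table_cursor CURSOR FOR
    SELECT name FROM sys.tables WHERE name <> 'AuditAll';

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @table;

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build a per-table copy of the AuditSimple trigger shown above
    SET @sql = N'CREATE TRIGGER ' + QUOTENAME(N'Audit_' + @table) + N'
ON ' + QUOTENAME(@table) + N'
AFTER INSERT, DELETE, UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    IF EXISTS (SELECT * FROM deleted)
    BEGIN
        DECLARE @AuditMessage xml = (SELECT * FROM deleted FOR XML AUTO);
        INSERT INTO AuditAll ([DateTime], TableName, AuditEntry)
        VALUES (GETDATE(), N''' + @table + N''', @AuditMessage);
    END
END';
    EXEC sp_executesql @sql;  -- CREATE TRIGGER must run as its own batch
    FETCH NEXT FROM table_cursor INTO @table;
END

CLOSE table_cursor;
DEALLOCATE table_cursor;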

cheers

1 Comment

This approach is great. I am stumped at how I can convert audited XML rows back into a table by just providing the name of the table as a parameter. I want to avoid manually writing column names with datatypes in the OPENXML...WITH clause, as I already have the source table DDL. This would make it easy to query the audited rows.
16

Database tables

Let's say we have a Book table whose audit log information has to be stored in a BookAuditLog table, as illustrated by the following class diagram:

(Class diagram: SQL Server audit logging using triggers)

The BookAuditLog table is created like this:

CREATE TABLE BookAuditLog (
    BookId bigint NOT NULL,
    OldRowData nvarchar(1000) CHECK(ISJSON(OldRowData) = 1),
    NewRowData nvarchar(1000) CHECK(ISJSON(NewRowData) = 1),
    DmlType varchar(10) NOT NULL CHECK (DmlType IN ('INSERT', 'UPDATE', 'DELETE')),
    DmlTimestamp datetime NOT NULL,
    DmlCreatedBy varchar(255) NOT NULL,
    TrxTimestamp datetime NOT NULL,
    PRIMARY KEY (BookId, DmlType, DmlTimestamp)
)

The BookAuditLog table columns store the following data:

  • The BookId column stores the identifier of the associated Book row that this log event was created for.
  • The OldRowData stores the JSON representation of the Book record state prior to the change; it is NULL for INSERT statements.
  • The NewRowData stores the JSON representation of the Book record state after the change; it is NULL for DELETE statements.
  • The DmlType is an enumeration column that stores the DML statement type that created, updated, or deleted a given Book row.
  • The DmlTimestamp stores the DML statement execution timestamp.
  • The DmlCreatedBy stores the user who issued the INSERT, UPDATE, or DELETE DML statement.
  • The TrxTimestamp stores the timestamp of the transaction that changed the Book record.

The INSERT, UPDATE and DELETE triggers

To capture the INSERT, UPDATE, and DELETE DML statements, we need to create three database triggers that are going to insert records in the BookAuditLog table.

To intercept the INSERT statements on the Book table, we will create the TR_Book_Insert_AuditLog trigger:

CREATE TRIGGER TR_Book_Insert_AuditLog ON Book
FOR INSERT
AS
BEGIN
    DECLARE @loggedUser varchar(255)
    SELECT @loggedUser = CAST(SESSION_CONTEXT(N'loggedUser') AS varchar(255))

    DECLARE @transactionTimestamp datetime = SYSUTCDATETIME()

    -- Note: like the other triggers here, this assumes single-row DML statements
    INSERT INTO BookAuditLog (
        BookId,
        OldRowData,
        NewRowData,
        DmlType,
        DmlTimestamp,
        DmlCreatedBy,
        TrxTimestamp
    )
    VALUES(
        (SELECT id FROM Inserted),
        null,
        (SELECT * FROM Inserted FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
        'INSERT',
        CURRENT_TIMESTAMP,
        @loggedUser,
        @transactionTimestamp
    );
END

To capture the UPDATE statements on the Book records, we will create the following TR_Book_Update_AuditLog trigger:

CREATE TRIGGER TR_Book_Update_AuditLog ON Book
FOR UPDATE
AS
BEGIN
    DECLARE @loggedUser varchar(255)
    SELECT @loggedUser = CAST(SESSION_CONTEXT(N'loggedUser') AS varchar(255))

    DECLARE @transactionTimestamp datetime = SYSUTCDATETIME()

    INSERT INTO BookAuditLog (
        BookId,
        OldRowData,
        NewRowData,
        DmlType,
        DmlTimestamp,
        DmlCreatedBy,
        TrxTimestamp
    )
    VALUES(
        (SELECT id FROM Inserted),
        (SELECT * FROM Deleted FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
        (SELECT * FROM Inserted FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
        'UPDATE',
        CURRENT_TIMESTAMP,
        @loggedUser,
        @transactionTimestamp
    );
END

To intercept the DELETE statements on the Book table rows, we will create the following TR_Book_Delete_AuditLog trigger:

CREATE TRIGGER TR_Book_Delete_AuditLog ON Book
FOR DELETE
AS
BEGIN
    DECLARE @loggedUser varchar(255)
    SELECT @loggedUser = CAST(SESSION_CONTEXT(N'loggedUser') AS varchar(255))

    DECLARE @transactionTimestamp datetime = SYSUTCDATETIME()

    INSERT INTO BookAuditLog (
        BookId,
        OldRowData,
        NewRowData,
        DmlType,
        DmlTimestamp,
        DmlCreatedBy,
        TrxTimestamp
    )
    VALUES(
        (SELECT id FROM Deleted),
        (SELECT * FROM Deleted FOR JSON PATH, WITHOUT_ARRAY_WRAPPER),
        null,
        'DELETE',
        CURRENT_TIMESTAMP,
        @loggedUser,
        @transactionTimestamp
    );
END
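Note that these triggers read the current user from SESSION_CONTEXT, so the application must set the loggedUser key when it obtains a connection. A minimal example (sp_set_session_context requires SQL Server 2016 or newer; the user name here is just a sample value):

EXEC sp_set_session_context @key = N'loggedUser', @value = N'Vlad Mihalcea';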

Demo time

When executing an INSERT statement on the Book table:

INSERT INTO Book (
    Author,
    PriceInCents,
    Publisher,
    Title,
    Id
)
VALUES (
    'Vlad Mihalcea',
    3990,
    'Amazon',
    'High-Performance Java Persistence 1st edition',
    1
)

We can see that a record is inserted in the BookAuditLog that captures the INSERT statement that was just executed on the Book table:

| BookId | OldRowData | NewRowData | DmlType | DmlTimestamp | DmlCreatedBy | TrxTimestamp |
|--------|------------|------------|---------|--------------|--------------|--------------|
| 1 | | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":3990,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | INSERT | 2020-11-08 08:40:28.343 | Vlad Mihalcea | 2020-11-08 06:40:28.347 |

When updating the Book table row:

UPDATE Book SET PriceInCents = 4499 WHERE Id = 1 

We can see that a new record is going to be added to the BookAuditLog by the AFTER UPDATE trigger on the Book table:

| BookId | OldRowData | NewRowData | DmlType | DmlTimestamp | DmlCreatedBy | TrxTimestamp |
|--------|------------|------------|---------|--------------|--------------|--------------|
| 1 | | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":3990,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | INSERT | 2020-11-08 08:40:28.343 | Vlad Mihalcea | 2020-11-08 06:40:28.347 |
| 1 | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":3990,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":4499,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | UPDATE | 2020-11-08 08:43:22.803 | Vlad Mihalcea | 2020-11-08 06:43:22.807 |

When deleting the Book table row:

DELETE FROM Book WHERE Id = 1 

A new record is added to the BookAuditLog by the AFTER DELETE trigger on the Book table:

| BookId | OldRowData | NewRowData | DmlType | DmlTimestamp | DmlCreatedBy | TrxTimestamp |
|--------|------------|------------|---------|--------------|--------------|--------------|
| 1 | | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":3990,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | INSERT | 2020-11-08 08:40:28.343 | Vlad Mihalcea | 2020-11-08 06:40:28.347 |
| 1 | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":3990,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":4499,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | UPDATE | 2020-11-08 08:43:22.803 | Vlad Mihalcea | 2020-11-08 06:43:22.807 |
| 1 | {"Id":1,"Author":"Vlad Mihalcea","PriceInCents":4499,"Publisher":"Amazon","Title":"High-Performance Java Persistence 1st edition"} | | DELETE | 2020-11-08 08:44:25.630 | Vlad Mihalcea | 2020-11-08 06:44:25.633 |
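Since the row states are stored as JSON, the audit log can be queried with the built-in JSON functions. A sketch that extracts the price history of a book with JSON_VALUE (SQL Server 2016 or newer; the paths follow the Book columns shown above):

SELECT
    BookId,
    DmlType,
    DmlTimestamp,
    JSON_VALUE(OldRowData, '$.PriceInCents') AS OldPriceInCents,
    JSON_VALUE(NewRowData, '$.PriceInCents') AS NewPriceInCents
FROM BookAuditLog
WHERE BookId = 1
ORDER BY DmlTimestamp;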

3 Comments

Any reason to save the complete record as JSON in a single column over saving the data column-wise? Like having columns in the audit table such as OldAuthor, NewAuthor, OldPublisher, NewPublisher, etc.
Yes, of course. My design doesn't require changing the audit table every time you add a new column to the audited table or alter a column's name.
This is close to the solution I came up with a few years back for our auditing, though it requires a trigger per table. That was ideal for us, as I can also control whether the trigger should be deactivated/reactivated for any one table.
5

You can try out a 3rd-party point-and-click trigger-based solution such as ApexSQL Audit, an auditing tool for SQL Server databases. It captures data changes that have occurred on a database, including who made the change, which objects were affected, and when it was made, as well as the SQL login, application, and host used to make the change. It stores all captured information in a central repository and exports it in print-friendly formats.

Disclaimer: I work as a Product Support Engineer at ApexSQL

1 Comment

I used ApexSQL in one of my apps. It does a good job of auditing but a poor job if you ever have to figure out what went wrong using its audit table / Querying App. I will not use it again.
0

Have a look at triggers. These can be used to implement something to fit your requirements.
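As a minimal sketch of the idea, a trigger like the one below copies the old version of every changed row into a mirror audit table. The names dbo.Customer and dbo.Customer_Audit are hypothetical, and Customer_Audit is assumed to have the same columns as Customer plus a trailing AuditDate column:

CREATE TRIGGER TR_Customer_Audit
ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;
    -- "deleted" holds the pre-change version of every affected row
    INSERT INTO dbo.Customer_Audit
    SELECT d.*, GETDATE() AS AuditDate
    FROM deleted AS d;
END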

1 Comment

Thanks Chris! I would rather avoid triggers - at least having my team implement triggers - as this ends up pretty much as code. Kinda surprised there isn't a third-party "plug-in" which would do this.
0

Take a look at this article - Auditing in SQL Server 2008 which beautifully takes advantage of the auditing features already present in SQL Server 2008.
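For reference, the feature described there is configured roughly as follows; this is a sketch with hypothetical names, and note that SQL Server Audit records who changed what and when, not the old column values:

USE master;

-- Define where the audit records are written
CREATE SERVER AUDIT Audit_DataChanges
TO FILE (FILEPATH = 'C:\Audits\');

ALTER SERVER AUDIT Audit_DataChanges WITH (STATE = ON);

USE MyDb;

-- Track DML against a specific table
CREATE DATABASE AUDIT SPECIFICATION Audit_Book_DML
FOR SERVER AUDIT Audit_DataChanges
ADD (INSERT, UPDATE, DELETE ON dbo.Book BY public)
WITH (STATE = ON);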

I must also mention that @Microtechie's answer points to some great articles. Read them and decide which one is easier to adapt.


0

I see 3 ways to achieve this:

  1. Triggers are the best solution of all.
  2. You can implement replication/log shipping for the table or the database, which will always give you a copy of that table/database that is a few milliseconds/seconds old.
  3. Schedule differential backups at a frequency that matches how old a copy you need.

With option 2, in case of any problem you can immediately turn off replication/log shipping and get a copy of the exact data from a few seconds earlier. With option 3, if you have a differential backup frequency of, for example, every 5 minutes, then in case of any problem you can recover a copy of the correct data that is at most 5 minutes old (a sketch of such a backup follows).
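For option 3, the differential backup itself is a one-liner that you would schedule in a SQL Server Agent job; the database and path names here are examples:

BACKUP DATABASE MyDb
TO DISK = 'C:\Backups\MyDb_diff.bak'
WITH DIFFERENTIAL;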


-1

With Database Snapshots, you can keep a read-only copy of your data as of a given instant. Also, with your log backups you can restore your information to a specific point in time if necessary.
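Creating a snapshot is a single statement; in this sketch, MyDb, its logical data file name MyDb_Data, and the target path are all hypothetical:

CREATE DATABASE MyDb_Snapshot
ON (NAME = MyDb_Data, FILENAME = 'C:\Snapshots\MyDb_Data.ss')
AS SNAPSHOT OF MyDb;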

You can also read the transaction log to retrieve the information that was changed.

The other solution, which is not your preference, is to trace the changes using triggers, but this may require work on each table. You can also enable the Change Data Capture feature to detect changes; this feature also needs to be enabled for each table, but it requires less code than the triggers (a sketch follows).
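Enabling Change Data Capture takes two stored procedure calls, one for the database and one per table; a sketch assuming a dbo.Book table (CDC also needs SQL Server Agent to be running):

-- Enable CDC at the database level (run once per database)
EXEC sys.sp_cdc_enable_db;

-- Enable CDC for one table; the change table and capture jobs are created automatically
EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Book',
    @role_name     = NULL;  -- NULL means no gating role is required to read the changes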

Finally, there are third-party tools like Apex SQL Trigger that do this job automatically with a few clicks and some configuration.

