A new feature in SQL Server 2014 is called Delayed Durability. It allows transactions to be committed asynchronously, so the client receives its acknowledgement before the log records have been written to disk. This is a very exciting change, but also one that will no doubt be abused and used in the wrong situations.

First, let's look at what normally happens when a transaction is committed to the database. When a commit is issued (or a statement that modifies data is executed outside of a transaction, i.e. an implicit transaction) the change has to be written to the log file before the client receives an acknowledgement that the transaction has completed successfully. This is a core part of the ACID properties (the D stands for Durability), and it is why log performance is so key to the overall performance of the database. The actual data page does not get written to the data file on disk until the lazy writer (or a checkpoint) picks up the dirty page from memory and writes it out. This is also why databases can sometimes take a while to recover during startup: the recovery process has to read through the log (since the last checkpoint) and ensure that the data on disk is correct, for example if the server crashed before the dirty pages were written to the data file.
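
One way to get a feel for how much of your commit latency is down to log writes is the WRITELOG wait type. A minimal check, just as a sketch:

SELECT wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type = 'WRITELOG';  -- time sessions have spent waiting for log flushes at commit

If WRITELOG dominates your waits, commits are being held up by the log flush, which is exactly the cost delayed durability is designed to hide.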

With this new delayed durability option, the write to the log file can happen after the client has been told that everything completed successfully. So whilst this is wonderful for performance, it breaks the key ACID property of Durability. Should the server crash before the change has been written to disk, the change will only be held in memory and will be lost. Truly lost, as in non-recoverable, yet you have told the client it completed successfully.
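
If you do take that risk, it is worth knowing about sys.sp_flush_log, which SQL 2014 adds alongside this feature: it hardens any outstanding delayed durable log records for the current database to disk. A minimal sketch:

-- force any pending delayed durable log records for the current database to disk
EXEC sys.sp_flush_log;

Calling this at a sensible point (end of a batch, before a planned failover, and so on) shrinks the window in which a crash would lose transactions you have already acknowledged.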

So the important point to take from this is that this functionality is for writing data where some data loss is acceptable. For example, an application that logs the activity of its users: the data may be considered nice to have but non essential, and you would prefer the write ran as quickly as possible and relinquished control back to the application immediately. This is a perfect example of where delayed durability is useful.

My hope is that no one ever turns this on for transactions simply because it “makes things faster”, without understanding the consequences. Only time will tell on that one.

BOL has been updated to show the syntax changes. There are two key components. First, the functionality must be enabled at the database level:

ALTER DATABASE … SET DELAYED_DURABILITY = { DISABLED | ALLOWED | FORCED }

DISABLED is the default and gives the pre-SQL 2014 behaviour. ALLOWED means that transactions can be delayed durable if specified on commit, and FORCED means that all transactions will be delayed durable (scary option).
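
As an example of the database level setting (MyDatabase is just a placeholder name):

ALTER DATABASE MyDatabase
SET DELAYED_DURABILITY = ALLOWED;  -- transactions can now opt in at commit time

-- and to revert to the default, fully durable behaviour
ALTER DATABASE MyDatabase
SET DELAYED_DURABILITY = DISABLED;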

When the database is set to ALLOWED you need to commit with the option COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON) to get a delayed durable commit; omitting the option gives you a normal, fully durable commit. When the database is set to DISABLED or FORCED the database level setting takes precedence and the commit level option is effectively ignored.
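
Putting the two together, a delayed durable commit against a database set to ALLOWED looks something like this (dbo.ActivityLog is just a made-up table standing in for the non essential logging discussed earlier):

BEGIN TRANSACTION;

INSERT INTO dbo.ActivityLog (UserName, Action, LoggedAt)
VALUES (SUSER_SNAME(), N'PageView', SYSDATETIME());

-- the client gets its acknowledgement without waiting for the log flush
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);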

Databases that see a high volume of writes may see noticeable performance improvements with this option enabled, even if it is only used for non essential writes. I'm not particularly keen on the FORCED option; the only usage I can think of is a staging database in a warehouse environment, or something similar where you can reprocess the data on request. Generally I would recommend sticking with ALLOWED and making sure you are comfortable with the behaviour first.
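
If you do start using it, it is also worth keeping an eye on which databases have it switched on, which is a straightforward check against sys.databases:

SELECT name,
       delayed_durability_desc  -- DISABLED, ALLOWED or FORCED
FROM sys.databases
WHERE delayed_durability_desc <> N'DISABLED';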


One Response to Delayed Durability

  1. rule30 says:

    Hey there

    I recently did a small article on performance testing with Delayed Durability switched on. Any feedback welcome.

    http://rule30.wordpress.com/2014/03/09/delayed-durability-in-sql-server-2014-part-1-introduction/

    Best regards
