Why? Because most shops hand crank their T-SQL release scripts. The devs work on a development server making changes, then prepare a release script by hand. This sucks: it is error prone, because databases are all about state.

State? If a web developer wants to release a new version of a web page, they simply copy the dev version onto the production server and the job is done. If you want to add a column to a table, you can’t just drop the table and replace it with a new version, because of the existing data. Therefore you care about the state of the database at the start of a deployment: in some cases you want to create an object from scratch, and in others you want to alter it.
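As a concrete sketch (the table and column names here are invented for illustration): a fresh environment gets the CREATE, but an environment that already holds data needs the ALTER.

```sql
-- Hypothetical table. On a brand new environment you deploy the CREATE...
CREATE TABLE dbo.Customer
(
    CustomerId int           NOT NULL,
    Name       nvarchar(200) NOT NULL
);

-- ...but on production, where dbo.Customer is full of data, dropping and
-- re-creating the table would destroy that data, so the same change has
-- to ship as an ALTER instead.
ALTER TABLE dbo.Customer ADD Email nvarchar(320) NULL;
```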

This becomes even trickier with dependent objects, where altering a datatype, for example, may involve dropping foreign keys or indexes first and ensuring they are re-created correctly afterwards.
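Here is a rough sketch of that dance (the names are made up again): widening a primary key column that a foreign key points at means taking both constraints off around the change and putting them back in the right order.

```sql
-- Hypothetical names. Widening dbo.Customer.CustomerId from int to bigint:
-- the foreign key and the primary key both have to come off first.
ALTER TABLE dbo.[Order]  DROP CONSTRAINT FK_Order_Customer;
ALTER TABLE dbo.Customer DROP CONSTRAINT PK_Customer;

ALTER TABLE dbo.Customer ALTER COLUMN CustomerId bigint NOT NULL;
ALTER TABLE dbo.[Order]  ALTER COLUMN CustomerId bigint NOT NULL;

-- ...and then everything has to go back on, in the right order.
ALTER TABLE dbo.Customer ADD CONSTRAINT PK_Customer PRIMARY KEY (CustomerId);
ALTER TABLE dbo.[Order]  ADD CONSTRAINT FK_Order_Customer
    FOREIGN KEY (CustomerId) REFERENCES dbo.Customer (CustomerId);
```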

So you manually write scripts full of IF EXISTS checks and spend hours making sure things run in the right order. Then, if you are in a smart shop, you script your objects into some form of source control. I’ve worked on countless projects where this is how it is done, only for things to go horribly wrong on deployment day because Bob thought releasing a new table the day before the main release, so he could start populating it early, wouldn’t do any harm. It always does harm.
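To give a flavour of the hand-cranked style (dbo.GetCustomer is a hypothetical procedure): create a stub if the object is missing, then ALTER it, so the same script runs cleanly whatever state the target is in.

```sql
-- Create a stub if the procedure doesn't exist yet, so the ALTER below
-- always succeeds, whatever state the target database is in.
IF OBJECT_ID(N'dbo.GetCustomer', N'P') IS NULL
    EXEC (N'CREATE PROCEDURE dbo.GetCustomer AS RETURN 0;');
GO

ALTER PROCEDURE dbo.GetCustomer
    @CustomerId int
AS
    SELECT CustomerId, Name, Email
    FROM   dbo.Customer
    WHERE  CustomerId = @CustomerId;
GO
```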

Manual deployments are often tricky to script in a transaction. We are database people; we like transactions because they mean something is either done or it isn’t. We don’t like halfway houses. Resolving issues with a deployment script that failed halfway through and kept on running is not fun.
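A minimal pattern for making a hand-written script all-or-nothing looks like this; SET XACT_ABORT ON is the important bit, because it rolls the whole transaction back on any run-time error instead of ploughing on.

```sql
-- Any run-time error now aborts the batch and rolls everything back,
-- instead of the script carrying on half-deployed.
SET XACT_ABORT ON;
BEGIN TRANSACTION;

-- ...deployment steps go here...

COMMIT TRANSACTION;
```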

So what’s the answer? Well, back in Visual Studio 2005 Microsoft created something called a database project for SQL developers to build their databases in. It wasn’t very good. Visual Studio 2008 came along; it was better, but still not really that great. Then Visual Studio 2010 arrived, and things were starting to look really good. Database projects allowed you to compile a build of all the objects in your database and check that the syntax was valid, so things like invalid columns and references to objects that no longer exist became easy to deal with. All code was written as a CREATE, describing how you wanted the database to look when complete. The project was then turned into a model when built. When you were ready to deploy, the model was compared against the target database and the tooling worked out what changes were needed and in what order, creating missing objects and altering existing ones. So Bob’s pre-deployed table would no longer be a problem. VS2010 had loads of features I loved, but it still wasn’t quite perfect: it was slow, it hogged memory when building a large database with thousands of objects, and references to other databases could be painful.
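The declarative style is the key idea: a table in the project is nothing more than its CREATE statement, and the tooling works out the diff. A sketch, using the same hypothetical table as above:

```sql
-- In the project you only ever describe the end state:
CREATE TABLE dbo.Customer
(
    CustomerId int           NOT NULL CONSTRAINT PK_Customer PRIMARY KEY,
    Name       nvarchar(200) NOT NULL,
    Email      nvarchar(320) NULL
);
-- Change int to bigint here and the build/deploy compares this model
-- against the target and generates the drop/alter/re-create steps itself.
```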

Then came SQL Server 2012, and along with it something called SSDT, or SQL Server Data Tools. For a long time it was known under the code name Juneau. It is basically an add-in to Visual Studio 2010, and I believe it will be built into the next version.

The way Microsoft’s marketing team have spun it, you would think this was a brand new idea and product coming to the market for the first time. But it’s not; it’s simply an evolution of the VS 2010 database projects. And it has been improved greatly. Builds are much faster, and the models have moved to what looks like becoming a standard for MS: the dacpac file. The interface is much improved; IntelliSense is still not at the level of RedGate SQL Prompt, but it is better than it was. You can work on an in-memory database, which is actually quite cool, or you can work against a development SQL Server (never connect it to a production database!). Pressing F5 will not execute your script (you will find this annoying at first), but it will deploy the change to your dev server (or in-memory database). So if you want to change the datatype of that primary key (which has lots of foreign keys referencing it), you simply change the datatype in the object’s schema, press F5 and let VS figure out what to do.

OK, I’ll be honest, it is STILL not perfect even in its fourth generation. It struggles with replication and CDC, and referencing external databases is not perfect. It sometimes tries to rebuild a large table unexpectedly. But these are things we as users learn to work with and around. I’ve yet to see an environment where SSDT will not work with some creative thinking. SSDT will save you hours of wasted time, I promise.

Developers should spend time developing, not working out everything that has to be done in order to release something. I’ll be blogging a lot on SSDT in the coming months; I have barely touched on what it can do here. But I encourage you to take the time to look at SSDT seriously. It can be a painful switchover, and it does take a while before you trust the scripts it generates enough to run them freely on your production servers. But that day will come.

I did my first big SQL release today using SSDT. The release was an incremental update to a database, with 8 weeks’ worth of changes by 5 developers going live. The release script contained nearly 42,000 lines of code, modified hundreds of objects due to tables moving between schemas, and created many new objects. The release went perfectly. Try hand cranking that release script – no thanks!
