Aug 22

Who did what to my database and when…

One of the most popular questions on forums, SO etc. is, “How can I find out who dropped a table, truncated a table, dropped a procedure…?” I’m sure we have all been there: something changes (maybe schema or data) and we have no way of telling who did it and when.

SQL auditing can get you there, and I’m not suggesting you shouldn’t use it, but what if it is not set up to monitor the objects you are interested in?

If you look back through my posts, I have been quite interested in ScriptDom and TSQL parsing, so if you have a trace of all the actions that have taken place over a time period, you can parse the statements to find the activity.
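As a rough sketch of that parse step (the batch text below is made up, and you would feed in whatever your trace captured), ScriptDom usage looks something like this:

using System;
using System.Collections.Generic;
using System.IO;
using Microsoft.SqlServer.TransactSql.ScriptDom;

class ParseTrace
{
    static void Main()
    {
        // A captured batch, e.g. pulled from a trace's TextData column
        string batch = "DROP TABLE dbo.Orders;";

        var parser = new TSql110Parser(initialQuotedIdentifiers: true);
        IList<ParseError> errors;
        TSqlFragment fragment = parser.Parse(new StringReader(batch), out errors);

        foreach (ParseError e in errors)
            Console.WriteLine("Parse error at line {0}: {1}", e.Line, e.Message);

        // fragment is now a full DOM that can be walked with a visitor
    }
}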

Drawing inspiration (cough, stealing) from Ed Elliot’s dacpac explorer, I have created a git repo for a simple parser that is created using T4 templates. T4 templates are an ideal fit here, as the full language domain can be exploded out automatically and then you, as a developer, can add in your code to cherry-pick the parts you are interested in.

At this time the project has hooks to report on SchemaObjectName objects, so any time any object is referenced, be that in a CREATE, SELECT, DROP, MERGE etc., it will fall into the function OnSchemaObjectName; there is also an OnDropTableStatement hook that will be hit when DROP TABLE is used.
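The repo generates its hooks from the T4 template, but you can get much the same effect with ScriptDom’s stock TSqlFragmentVisitor; here is a hedged equivalent (the Visit overloads below are the standard ScriptDom ones, standing in for the repo’s On… functions):

using System;
using System.Linq;
using Microsoft.SqlServer.TransactSql.ScriptDom;

// Roughly equivalent to OnSchemaObjectName / OnDropTableStatement
class ActivityVisitor : TSqlFragmentVisitor
{
    // Fires for every object reference, whatever statement it sits in
    public override void Visit(SchemaObjectName node)
    {
        Console.WriteLine("Referenced: " +
            string.Join(".", node.Identifiers.Select(i => i.Value)));
    }

    // Fires only when a DROP TABLE statement is parsed
    public override void Visit(DropTableStatement node)
    {
        foreach (SchemaObjectName name in node.Objects)
            Console.WriteLine("Dropped: " +
                string.Join(".", name.Identifiers.Select(i => i.Value)));
    }
}

// Usage: fragment.Accept(new ActivityVisitor());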

This is not intended to be a full end-to-end solution, not least as the SQL to be parsed could be coming from any number of sources, but if you have the skills to run it as is, you probably have enough to tailor it to your requirements.

As ever, let me know what you think; any comments gratefully received.

The GitHub repo is: https://github.com/davebally/TSQLParse


Jan 05

A first look at Redgate Reference Data Manager

Over the past few years, one of the big drives and innovations in the SQL Server world has been to bring database schema changes into a full ALM (Application Lifecycle Management) methodology. There are many competing products in this space, but all of them (well, all that I know of) only solve database schema change problems. Databases are all about the data, and without data applications are pretty useless. Most systems have some form of standing reference data: countries, currencies, name prefixes etc. Reference Data Manager is a (private) beta product from Redgate whose aim is to treat data as a first-class citizen in the ALM cycle.

Mostly, even where this data is ‘managed’ at all, it is managed by large MERGE (or MERGE-esque) scripts that validate the whole of the dataset at deploy time. The problem here is that these scripts are large by nature, their intent is unclear, and it is all or nothing; and since human eyes generally do like to look over a deployment script before it happens, they make the process longer and more error prone.

Essentially, Reference Data Manager is Redgate bringing the already mature technology of Data Compare into Microsoft’s SSDT environment using Deployment Contributors. This, Redgate working within SSDT, is interesting in itself, and I do hope that Redgate will bring more of their products into SSDT in the future. Also, and more pertinently, it means that the data is now maintainable within SSDT right next to the schema and is an integral part of the deployment, not an extra ‘cog’ somewhere.
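For context, a deployment contributor is just a class that DacFX loads and calls while it builds the deployment plan. A bare-bones sketch of the extension point (the class name and the commented-in script are mine for illustration, not Redgate’s actual implementation):

using Microsoft.SqlServer.Dac.Deployment;

// Hypothetical skeleton; DacFX discovers it via the export attribute
[ExportDeploymentPlanModifier("Example.ReferenceDataContributor", "1.0.0.0")]
public class ReferenceDataContributor : DeploymentPlanModifier
{
    protected override void OnExecute(DeploymentPlanContributorContext context)
    {
        // Walk the generated plan and splice extra steps in where needed
        DeploymentStep step = context.PlanHandle.Head;
        while (step != null)
        {
            DeploymentStep next = step.Next;
            // e.g. after the step that creates a table, add a data script:
            // AddAfter(context.PlanHandle, step, new DeploymentScriptStep("INSERT ..."));
            step = next;
        }
    }
}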

So, let’s take a look; as with all Redgate tools, it’s pretty simple. I’ve defined a table, “Country”, which is nothing special:

CREATE TABLE [dbo].[Country]
(
	CountryId INT NOT NULL PRIMARY KEY,
	CountryName VARCHAR(255) NOT NULL,
	Currency    CHAR(3) NOT NULL
)

Now, to add data I simply R-Click on “Country.Sql” in Solution Explorer and I see a new “Add Reference Data” option:

That adds a child file to Country.Sql called Country.refdata.sql; this is the file in which you define your data. Presently this is a simple SQL script with inserts:

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (1,'UK','GBP');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (2,'USA','USA');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (3,'France','EUR');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (4,'Germany','EUR');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (5,'Australia','AUD');

In the fullness of time I would expect a grid UI here, but, as I say, it’s a beta ATM.

So now when I deploy my solution, Reference Data Manager kicks in and adds the relevant data changes to the deployment script. As this is a new table, these are obviously just inserts.

[screenshot: PostDeploy]

What about adding a new country, modifying ‘USA’ to be ‘U.S.A.’ and correcting the error in the currency from ‘USA’ to ‘USD’? I modify the refdata.sql file to be:

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (1,'UK','GBP');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (2,'U.S.A.','USD');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (3,'France','EUR');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (4,'Germany','EUR');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (5,'Australia','AUD');

insert into [dbo].[Country](CountryId,CountryName,Currency)
Values (6,'New Zealand','NZD');

When I now deploy the solution there is no schema change, but the script will contain only the data changes.

[screenshot: Postusd]

As you can see, it publishes only the required changes, rather than re-testing the whole dataset with a MERGE (or whatever) at deployment time. Obviously this is a really trivial example, but how many reference data tables does your system have? How many rows are in each of those? Is the intent a lot clearer?

Personally, I think that by treating data as a first-class citizen, Reference Data Manager will not only enable data to be source controlled but also simplify the deployment pipeline by being very explicit about the changes that are to be made.

If you would like to get involved, you can sign up for the beta here: http://www.red-gate.com/ssdt

Mar 29

T4 Support in SSDT

Sometimes a piece of technology passes you by completely; T4 templating happens to have done that for me. On Tuesday night at SQL Supper, Geoff Clark (a UK MCM) (t) gave a fantastic presentation on end-to-end data warehouse loading, a portion of which was on T4 templating. The question arose about T4 templating in SSDT, and by coincidence, literally the next day the SSDT team put out the March 2014 release. With my interest already piqued, I thought I’d have a play 🙂

So… let’s add a TSQL Template

[screenshot: sqltt]


That will now add a base TSQL template into your project. However, there is a slight snag: it’s broken 🙁

[screenshot: broken]

This is confirmed and will be fixed in the next release, as stated in this MSDN forum thread. Bugger!!!

However, until then, all is not lost: although the DacFX portion is broken, T4 templating still works, so a more ‘generic’ solution is available. The basis of this is remarkably similar to TSQL Smells: iterate through a project, adding the .sql files into a model, and examine the DOM for interesting ‘stuff’.

After a bit of playing around (I’m very much a T4 noob), this is the script I came up with:

<#@ template language="C#" debug="true" hostspecific="true" #>
<#@ assembly name="Microsoft.VisualStudio.Shell.Interop.8.0" #>
<#@ assembly name="EnvDTE" #>
<#@ assembly name="EnvDTE80" #>
<#@ assembly name="VSLangProj" #>
<#@ assembly name="C:\Program Files (x86)\Microsoft SQL Server\120\SDK\Assemblies\Microsoft.SqlServer.TransactSql.ScriptDom.dll" #>
<#@ assembly name="C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\Microsoft.SqlServer.Dac.dll" #>
<#@ assembly name="C:\Program Files (x86)\Microsoft Visual Studio 11.0\Common7\IDE\Extensions\Microsoft\SQLDB\DAC\120\Microsoft.SqlServer.Dac.Extensions.dll" #>
<#@ import namespace="Microsoft.VisualStudio.Shell.Interop" #>
<#@ import namespace="EnvDTE" #>
<#@ import namespace="EnvDTE80" #>
<#@ import namespace="Microsoft.VisualStudio.TextTemplating" #>
<#@ import namespace="Microsoft.SqlServer.Dac" #>
<#@ import namespace="Microsoft.SqlServer.Dac.Model" #>
<#@ import namespace="System.IO" #>
<#@ import namespace="System.Collections.Generic" #>
<#@ output extension=".sql" #>
 
-- Dynamic File generated by db
<#
    var hostServiceProvider = (IServiceProvider)this.Host;
    var dte = (DTE)hostServiceProvider.GetService(typeof(DTE));
 
    using (TSqlModel model = new TSqlModel(SqlServerVersion.Sql110, new TSqlModelOptions { }))
    {
        foreach(Project project in dte.Solution)
        {
            IterateThroughProject(project.ProjectItems,model);
        }
 
        List<TSqlObject> allTables = GetAllTables(model);
        foreach (var table in allTables)
        {
#>
--				Table <#= table.Name.Parts[0] #>.<#= table.Name.Parts[1] #>
<#
 
        }
    }
 
#>
-- File Done
<#+
 
    public List<TSqlObject> GetAllTables(TSqlModel model)
    {
        List<TSqlObject> allTables = new List<TSqlObject>();
 
        var tables = model.GetObjects(DacQueryScopes.All, ModelSchema.Table);
        if (tables != null)
        {
            allTables.AddRange(tables);
        }
        return allTables;
    }
 
 
 
    private void IterateThroughProject(ProjectItems PrjItems,TSqlModel model)
    {
        foreach(ProjectItem PrjItem in  PrjItems)
        {
 
            if(PrjItem.Name.EndsWith(".tt", StringComparison.OrdinalIgnoreCase)){ // Don't load the template files themselves
                continue;
 
            }
            if(PrjItem.ProjectItems!=null)
            {
                IterateThroughProject(PrjItem.ProjectItems,model);
            }
            if(//PrjItem.Object.GetType().ToString() == "Microsoft.VisualStudio.Data.Tools.Package.Project.DatabaseFileNode" && 
                PrjItem.Name.EndsWith(".sql", StringComparison.OrdinalIgnoreCase))
            {
#>
--				This is a sql file and will be processed
--				<#= PrjItem.FileNames[0] #>
<#+
                if (!PrjItem.Saved)
                {
                    PrjItem.Save();
                }
                StreamReader Reader = new StreamReader(PrjItem.FileNames[0]);
 
                string Script = Reader.ReadToEnd();
                model.AddObjects(Script);
            }
        }
    }
 
#>

This should all be fairly self-explanatory: we iterate through the project in the member function IterateThroughProject, adding any found .sql files to the model, then use the model.GetObjects member to find all the tables in the model and iterate over that list, printing each schema and table name.

I have seen a few clunky TSQL generation routines written in TSQL, but I think porting those into T4 templates and having the generated code built directly into a dacpac will be a big boon.

Have fun

Mar 04

SSDT – Cut n Paste substituting CMDVARs

Another thing that slows me down with SSDT is testing stored procedures that use CMDVars.

Consider this proc:

[screenshot: Prc1]

If I wish to test that in SSMS, I can cut-n-paste it in, then use SQLCMD mode and add :setvar statements, or simply replace the $(CmdSwitch) and $(Otherdb) tokens with the appropriate values. That’s OK for one or two values, but what if there are lots? What if there were a simpler way? Well, now there is:

Introducing “CmdVar Cut”

[screenshot: cmdvarcut]

Selecting this option will place the text, with the values substituted, directly into your clipboard, ready for pasting into SSMS. As with Fast Deploy, the values are defined in Tools->Options->TSQL Smells->Deploy CMDVars.
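Under the hood this is presumably just token substitution over the script text; a minimal sketch (the variable names and values here are examples, where the real add-in reads them from the options page):

using System.Collections.Generic;
using System.Text.RegularExpressions;

static class CmdVarCutSketch
{
    // Example values; the add-in keeps these in Tools->Options
    static readonly Dictionary<string, string> CmdVars =
        new Dictionary<string, string>
        {
            { "CmdSwitch", "1" },
            { "Otherdb",   "MyOtherDb" }
        };

    // Replace every $(Name) token that has a configured value
    public static string Substitute(string script)
    {
        return Regex.Replace(script, @"\$\((\w+)\)",
            m => CmdVars.TryGetValue(m.Groups[1].Value, out string v) ? v : m.Value);
    }
}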

The download for this is available here: http://dataidol.com/davebally/ssdt-tsql-smells-add-in/

Have fun


Mar 01

SSDT – Fast deploy

Let me state first of all that I really do like SSDT; integration with a source control system of your choice and all its extra features make it a real boon.

However……..

On my past few assignments SSDT has been heavily used, and deployment is done via build/deploy PowerShell scripts. This is all very well and good and makes for predictable, repeatable deployments, but it does slow you down rather a lot: you make a change to a SQL artefact, then build, which takes a minute or two, then deploy, which once again takes a minute or two. Over the course of a day this adds up to rather a lot of time. It also means that most development is done in SSMS and then cut-n-pasted into SSDT on completion, which really negates a lot of the advantages of SSDT and makes it no more useful than a historic system using SourceSafe as source control.


This got me thinking that there should be a faster way to deploy during development, so I have now extended TSQL Smells to do exactly that. Right-clicking on a file (or folder) will now show you the “Fast Deploy” option.

[screenshot: FastDeploy]


This will deploy the selected scripts directly to your selected SQL host without the kerfuffle of a full build-deploy cycle. User feedback is given in the Output->TSQL Smells window; if nothing appears to be happening, this should tell you why.

CMDVars are supported via Tools->Options->TSQL Smells->Deploy CMD Vars; just enter them as key-value pairs. These are kept in the registry, so they persist across SSDT sessions. You also set the database to connect to here.
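For the curious, the core of such a fast deploy is pretty small; a hedged sketch of the idea (the connection string handling and the GO-splitting regex are my assumptions, not necessarily what the add-in does):

using System;
using System.Data.SqlClient;
using System.IO;
using System.Text.RegularExpressions;

static class FastDeploySketch
{
    static void Deploy(string scriptPath, string connectionString)
    {
        string script = File.ReadAllText(scriptPath);
        // CMDVar substitution would happen here, as with "CmdVar Cut"

        using (var conn = new SqlConnection(connectionString))
        {
            conn.Open();
            // SqlCommand cannot execute GO, so split the script into batches
            foreach (string batch in Regex.Split(script, @"^\s*GO\s*$",
                         RegexOptions.Multiline | RegexOptions.IgnoreCase))
            {
                if (string.IsNullOrWhiteSpace(batch)) continue;
                using (var cmd = new SqlCommand(batch, conn))
                    cmd.ExecuteNonQuery();
            }
        }
    }
}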

You can download the latest version here: http://1drv.ms/MEOp6H

Install instructions etc. are available here: http://dataidol.com/davebally/ssdt-tsql-smells-add-in/

Let me know how you get on: parser@clearskysql.co.uk


Jan 28

TSQL Smells in SSDT – The monster lives

What seems like many, many moons ago, I released a PowerShell script that could detect many different “TSQL Smells“. Whilst it worked, I was never very happy with the interface; it was clunky and didn’t feel quite ‘right’. One of my longer-term aims was to do something about that and make TSQL Smells more accessible.
Static code analysis would have been my prime choice, and that would make perfect sense; however, after much blood, sweat and tears (and time wasted), I found that extending code analysis in SSDT is not supported, so I gave up disheartened.

However, poking around the other day, I stumbled over a solution to my problems that rekindled my interest: how about creating a Visual Studio add-in and dropping it straight into SSDT?

So, without any further ado, here it is: “TSQL Smells in SSDT”. To run it, drop the two files from here into your Visual Studio add-in directory, something like Documents\Visual Studio 2010\Addins, and run SSDT. You may see an error saying it has failed to load (error number 80131515), in which case use the “Unblock” option in the file properties of the dll.

[screenshot: unblock]

Now, load up your project and in the R-Click menu you should see “SQL Smells”.

[screenshot: SQLSmells]

Click that and your error list will be populated with the Smells that you know and love.

[screenshot: smells]

If that is too noisy for you, you can reduce the noise by deselecting smells in Tools->Options->TSQL Smells.

Have fun and let me know how you get on 🙂

Massive Hat Tip to Jamie Thomson for being a guinea pig for this process

If you are running VS2013, then the .addin needs a small tweak.
Please see http://bit.ly/1bRB1SN for details.