Welcome to the wonderful world of data audits. As anyone who has ever dealt with annoying managers and clueless users knows, the word "database" is thrown around as a crutch to skirt responsibility. We all know managers who don't believe a report because they "know the numbers aren't right", and users who "don't know how the client's email got deleted". For us developers, audit data is a lifesaver. Why? Because data doesn't lie. It may not always tell us what we want to hear - it may tell us an application is broken or a user is doing something they shouldn't be - but that's not the data's fault.
So, how do we make data auditing dynamic, efficient and ruthlessly accurate? That's the subject of today's post. To accomplish this, we will use Tables, Triggers, Functions and Procedures.
As usual, all bold keywords, references and pertinent articles are linked to at the end of the post.
Let's start our scenario with a simple table called Orders, as represented by the following script:
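Since the script matters less than the auditing around it, here's a minimal sketch of what the Orders table might look like - the columns, apart from Changed_At and Changed_By (which we will lean on later), are purely illustrative:

```sql
CREATE TABLE dbo.Orders
(
    Order_Id INT IDENTITY(1,1) PRIMARY KEY,
    Customer_Name NVARCHAR(100) NOT NULL,
    Customer_Email NVARCHAR(255) NULL,
    Is_Subscribed BIT NOT NULL DEFAULT (0),
    -- Inline audit columns: who last touched the row, and when.
    Changed_At DATETIME NOT NULL DEFAULT (GETDATE()),
    Changed_By NVARCHAR(128) NOT NULL DEFAULT (SUSER_SNAME())
);
```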
In order to get audit data on each INSERT and UPDATE, we need to utilize triggers. However, to do this effectively we would need to handle each column and write data to an audit table. This may seem cumbersome, probably because it is. So, to make this scenario dynamic we are going to start with an additional table, as modeled below:
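A sketch of what that audit table might look like - the exact names are illustrative, but the Old_Value/New_Value pair and the timestamp are the parts we will rely on later:

```sql
CREATE TABLE dbo.Audit_Data
(
    Audit_Id INT IDENTITY(1,1) PRIMARY KEY,
    Table_Name SYSNAME NOT NULL,                -- which table changed
    Column_Name SYSNAME NOT NULL,               -- which column changed
    Primary_Key_Value NVARCHAR(50) NOT NULL,    -- which row changed
    Old_Value NVARCHAR(MAX) NULL,               -- NULL on inserts
    New_Value NVARCHAR(MAX) NULL,
    Changed_At DATETIME NOT NULL DEFAULT (GETDATE()),
    Changed_By NVARCHAR(128) NOT NULL DEFAULT (SUSER_SNAME())
);
```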
Each time we INSERT or UPDATE a record, the data will be added to this table. A trigger similar to the snippet below would be required:
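For instance, a hand-written insert trigger might look something like this - note the one-INSERT-per-column pattern (only two columns shown) that makes the approach so tedious:

```sql
CREATE TRIGGER dbo.Orders_Audit_Insert ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- One statement per audited column...
    INSERT INTO dbo.Audit_Data (Table_Name, Column_Name, Primary_Key_Value, New_Value)
    SELECT 'Orders', 'Customer_Name', CONVERT(NVARCHAR(50), i.Order_Id), i.Customer_Name
    FROM inserted i;

    -- ...and another, and another, for every other column.
    INSERT INTO dbo.Audit_Data (Table_Name, Column_Name, Primary_Key_Value, New_Value)
    SELECT 'Orders', 'Customer_Email', CONVERT(NVARCHAR(50), i.Order_Id), i.Customer_Email
    FROM inserted i;
END;
```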
As we can imagine, writing a trigger similar to this for each table, including each column would be a task and a half. So... why not have a procedure do it for us? This is taking the dynamic piece of the data audit model to the next level. But, before we write the procedure to do all of this for us, let's stop and think about the triggers.
Sure, writing them in the first place is a chore. But what if we don't care about a column or two in our audits? Say, for example, we had 2 columns in our Orders table called Changed_At and Changed_By. These simply record the user and time stamp of the last change. We wouldn't need to audit these, since they are, in a way, an inline audit method. So, to make the column auditing dynamic, let's create 2 additional tables and a function.
The 2 tables are very simple and between them contain only 6 columns. They are modeled below:
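Something like the following captures the idea - the names here are illustrative, but between them the two tables carry just 6 columns: one registers the tables to audit, the other the columns:

```sql
CREATE TABLE dbo.Audit_Table
(
    Table_Id INT IDENTITY(1,1) PRIMARY KEY,
    Table_Name SYSNAME NOT NULL
);

CREATE TABLE dbo.Audit_Column
(
    Column_Id INT IDENTITY(1,1) PRIMARY KEY,
    Table_Id INT NOT NULL REFERENCES dbo.Audit_Table (Table_Id),
    Column_Name SYSNAME NOT NULL,
    Is_Audited BIT NOT NULL DEFAULT (1)   -- flip to 0 to ignore a column
);
```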
The idea here is very simple: we can tell the trigger to audit or ignore any given column in any table we have. The creation and/or updating of the trigger will be done inside the procedure we will write in a moment. Before that, we will need a function. All this function does is return a table of all the columns we have decided to audit on a given table. The function could be written as follows:
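A sketch of such a function, built on the tables above (the function name is illustrative):

```sql
CREATE FUNCTION dbo.Get_Audit_Columns (@Table_Name SYSNAME)
RETURNS TABLE
AS
RETURN
(
    SELECT c.COLUMN_NAME AS Column_Name
    FROM INFORMATION_SCHEMA.COLUMNS c
        INNER JOIN dbo.Audit_Table t
            ON t.Table_Name = c.TABLE_NAME
        INNER JOIN dbo.Audit_Column ac
            ON ac.Table_Id = t.Table_Id
           AND ac.Column_Name = c.COLUMN_NAME
    WHERE c.TABLE_NAME = @Table_Name
      AND ac.Is_Audited = 1
);
```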
This function is very simple, joining the tables we just created to the INFORMATION_SCHEMA views. The result: a table of the columns that a) actually exist in the table and b) are set to be audited for the passed-in table name.
Now, it's time to move on to the actual procedure that does the manual labor for us. Broken into logical pieces, the first bit is as follows:
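Here is a sketch of that first piece - the procedure name is illustrative, but the parameters match the description below:

```sql
CREATE PROCEDURE dbo.Create_Audit_Triggers
(
    @Table_Name SYSNAME,
    @ForInsert BIT = 1,
    @ForUpdate BIT = 1,
    @Success BIT OUTPUT,
    @Message NVARCHAR(MAX) OUTPUT
)
AS
BEGIN
    SET NOCOUNT ON;

    -- Default the outputs, then do some basic validation.
    SET @Success = 0;
    SET @Message = N'';

    IF @ForInsert = 0 AND @ForUpdate = 0
    BEGIN
        SET @Message = N'Nothing to do: neither insert nor update auditing was requested.';
        RETURN;
    END

    IF OBJECT_ID(@Table_Name) IS NULL
    BEGIN
        SET @Message = N'Table ' + @Table_Name + N' does not exist.';
        RETURN;
    END
    -- (procedure continues in the following snippets)
```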
The procedure takes 3 input parameters at most (only 1 is required) and outputs 2. The table name is self-explanatory, and the @ForInsert and @ForUpdate bits will allow us further dynamic-ness by letting us choose whether we want to audit one type, the other or both.
The first few lines set our output values to their defaults and do some basic validation. After all, if we want neither insert nor update triggers, what is there to do? And if the table doesn't exist - if there is no valid OBJECT_ID - then we can't rightly add triggers to it.
After that, some basic variables are declared that we will utilize in the following snippets.
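Continuing the sketch:

```sql
    -- Working variables for the dynamic SQL we are about to build.
    DECLARE @PK_Name SYSNAME;
    DECLARE @Column_Name SYSNAME;
    DECLARE @ColumnTriggerSection NVARCHAR(MAX);
    DECLARE @TriggerSQL NVARCHAR(MAX);
```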
This is the declaration of the try block that will allow us to catch fatal errors and report them to the user via the @Success and @Message outputs. The first order of business is to get the primary key for the passed table. This example will only work if there is a single PK column on the table - no more, no less. For other scenarios, the code would need to be modified slightly.
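In the sketch, the opening of the try block and the primary key lookup might read:

```sql
    BEGIN TRY
        -- Find the (single) primary key column of the passed-in table.
        SELECT @PK_Name = kcu.COLUMN_NAME
        FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
            INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
                ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
        WHERE tc.TABLE_NAME = @Table_Name
          AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';
```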
If we are going to use insert triggers, this block constructs dynamic SQL and places it into the @ColumnTriggerSection variable using a WHILE loop. The loop is populated from the INFORMATION_SCHEMA - by way of the audit column function we just created - and writes a piece of the trigger for each column. You may choose to exclude the PK field, and doing so would be as easy as adding an extra condition to the WHERE clause.
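One way that block might be sketched:

```sql
        IF @ForInsert = 1
        BEGIN
            -- Gather the audited columns, then loop over them, appending one
            -- INSERT INTO Audit_Data statement per column.
            DECLARE @Columns TABLE (Column_Name SYSNAME);
            INSERT INTO @Columns (Column_Name)
            SELECT Column_Name FROM dbo.Get_Audit_Columns(@Table_Name);

            SET @ColumnTriggerSection = N'';

            WHILE EXISTS (SELECT 1 FROM @Columns)
            BEGIN
                SELECT TOP (1) @Column_Name = Column_Name FROM @Columns;

                SET @ColumnTriggerSection = @ColumnTriggerSection + N'
    INSERT INTO dbo.Audit_Data (Table_Name, Column_Name, Primary_Key_Value, New_Value)
    SELECT ''' + @Table_Name + N''', ''' + @Column_Name + N''',
        CONVERT(NVARCHAR(50), i.' + @PK_Name + N'), i.' + @Column_Name + N'
    FROM inserted i;';

                DELETE FROM @Columns WHERE Column_Name = @Column_Name;
            END

            -- DROP the trigger if it already exists, then CREATE it fresh.
            IF OBJECT_ID(N'dbo.' + @Table_Name + N'_Audit_Insert', N'TR') IS NOT NULL
                EXEC (N'DROP TRIGGER dbo.' + @Table_Name + N'_Audit_Insert;');

            SET @TriggerSQL = N'CREATE TRIGGER dbo.' + @Table_Name + N'_Audit_Insert
ON dbo.' + @Table_Name + N' AFTER INSERT AS
BEGIN
    SET NOCOUNT ON;' + @ColumnTriggerSection + N'
END;';

            EXEC (@TriggerSQL);
        END
```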
The setting of the trigger section using dynamic SQL is quite easy, since we are essentially using the code from the trigger example towards the beginning and just adding it in a loop. The final section for the insert just adds the trigger declaration and body and, most importantly, determines if we need to DROP/CREATE or just a CREATE.
The code that follows, for the update trigger portion, is virtually identical. The differences are in the dynamic SQL that we generate in the loop for each column and in the creation of the trigger itself. These need to be slightly different, as the trigger name cannot be duplicated, and on update, we are going to record both the old value and the new value.
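To round the sketch out, the procedure would then close the try block and report back through the outputs:

```sql
        -- (the update trigger section, mirroring the insert section, goes here)

        SET @Success = 1;
        SET @Message = N'Audit trigger(s) created for ' + @Table_Name + N'.';
    END TRY
    BEGIN CATCH
        SET @Success = 0;
        SET @Message = ERROR_MESSAGE();
    END CATCH
END;
```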
Now that we have all of our objects created: audit data table, audit customization tables, audit column function and our dynamic trigger creation procedure - let's test her out.
Let's first execute our procedure for the Orders table like so:
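For example - note that we first register Orders and its columns in our audit tables (skipping Changed_At and Changed_By), since the audit column function depends on those rows:

```sql
-- Register the table and the columns we want audited.
INSERT INTO dbo.Audit_Table (Table_Name) VALUES ('Orders');

INSERT INTO dbo.Audit_Column (Table_Id, Column_Name)
SELECT t.Table_Id, c.COLUMN_NAME
FROM dbo.Audit_Table t
    INNER JOIN INFORMATION_SCHEMA.COLUMNS c ON c.TABLE_NAME = t.Table_Name
WHERE t.Table_Name = 'Orders'
  AND c.COLUMN_NAME NOT IN ('Changed_At', 'Changed_By');

-- Now build the triggers.
DECLARE @Success BIT, @Message NVARCHAR(MAX);

EXEC dbo.Create_Audit_Triggers
    @Table_Name = 'Orders',
    @ForInsert = 1,
    @ForUpdate = 1,
    @Success = @Success OUTPUT,
    @Message = @Message OUTPUT;

SELECT @Success AS Success, @Message AS Message;
```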
Next, let's add some data into the Orders table via an insert, then change some of it via an update. For example, say we forgot to add the customer's email address and to sign them up for a subscription.
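Something like this (sample values, of course):

```sql
INSERT INTO dbo.Orders (Customer_Name)
VALUES (N'Jane Doe');

-- Whoops - no email address and no subscription. Fix it with an update.
UPDATE dbo.Orders
SET Customer_Email = N'jane.doe@example.com',
    Is_Subscribed = 1
WHERE Order_Id = 1;
```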
The results, when we query our Audit_Data table would be similar to this:
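The query itself is as simple as:

```sql
SELECT Column_Name, Old_Value, New_Value, Changed_At, Changed_By
FROM dbo.Audit_Data
WHERE Table_Name = 'Orders'
ORDER BY Changed_At, Column_Name;
```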
As we can see, our original insert created values in the New_Value column only, since there technically was no old data. However, when we updated, we can see new rows with a different timestamp for the 2 columns we updated.
Summary: we can now track every change on every row - transparently, dynamically and mercilessly.
Tables
Triggers
Functions
Procedures
INSERT Statement
UPDATE Statement
INFORMATION_SCHEMA
OBJECT_ID
Primary Key Constraints
Dynamic SQL
WHILE Loop
WHERE Clause
Drop Trigger
Create Trigger
Wednesday, July 28, 2010
Tuesday, July 27, 2010
Converting Delimited Data Into a Table
In this post we will examine the powerful potential of Table-valued Functions!
As per usual, all bold keywords and pertinent MSDN articles are linked at the bottom of the post.
To set up the scenario for this example, you can probably think of several instances where you pass a comma or semi-colon delimited list of values into a Stored Procedure and need to parse each value separately. This function will do just that, but in a re-usable, dynamic way.
To begin, let's open Microsoft SQL Server Management Studio. For this example, I am using SQL Server 2008, but the principles transfer to most any RDBMS. Connect to the appropriate server and database, then expand the Programmability tree and the Functions tree.
We can right-click the Table-valued Functions tree and select "New Multi-statement Table-valued Function..." from the dropdown menu. This will open a new query tab, with some templated content, as seen below.
Obviously, this is mostly placeholder code, and we need to replace it all with our real content. Let's begin by removing the first 10 lines of comments, and optionally the Author comment lines. Now we need to give our new function a name, such as ParseSingleDelimitedString. I placed mine in the dbo schema, but you may need to specify another if your circumstance warrants it. And, of course, leaving the schema name out of the function name will place it automatically in the dbo schema.
For a bare-essentials function, you could specify just one parameter, @String, that contains the delimited data to be parsed. For my function, I also included @Delimiter, @IgnoreBlankValues and @TreatBlankAsNull. These can be hard-coded, but I wanted to make the function as dynamic as possible.
In our function template, we also need to specify the table structure we will be returning. In my example, I included a Primary Key with an Identity value, but this is completely optional. So, up to now, my function looks like below.
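Here's a sketch of that shell - the parameter names come straight from the description above, while the return table's column names are illustrative:

```sql
CREATE FUNCTION dbo.ParseSingleDelimitedString
(
    @String NVARCHAR(MAX),
    @Delimiter NCHAR(1),
    @IgnoreBlankValues BIT,
    @TreatBlankAsNull BIT
)
RETURNS @Values TABLE
(
    Value_Id INT IDENTITY(1,1) PRIMARY KEY,
    Value NVARCHAR(MAX)
)
AS
BEGIN
    -- Body to follow in the next snippet.
    RETURN;
END
```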
Once the signature of the function has been completed, we can now move on to the body, where the real magic is done. The work that the body does is not complicated, but does require some thought. The logic of it is this: loop through the string, selecting the left-most characters until we find the delimiter character; add the value to our table, remove the current iteration from the string data, and repeat until the string is of 0 length. Naturally, there is a little more to it than that, but it gives us a base for what our function body should look like.
For my function, I put the @String parameter into a local variable and do all my manipulation on that one. I also created an additional variable, @CurrentValue, that will be used to contain the current iteration. Below is the entire function body, which we will step through.
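Here is a sketch of that body (it drops into the shell above, replacing the placeholder comment), which the next few paragraphs step through:

```sql
    DECLARE @Data NVARCHAR(MAX) = @String;
    DECLARE @CurrentValue NVARCHAR(MAX);
    DECLARE @Pos INT;

    WHILE LEN(@Data) > 0
    BEGIN
        SET @Pos = CHARINDEX(@Delimiter, @Data);

        IF @Pos > 0
        BEGIN
            -- Take everything left of the delimiter as the current value.
            SET @CurrentValue = LEFT(@Data, @Pos - 1);
            SET @Data = SUBSTRING(@Data, @Pos + 1, LEN(@Data));
        END
        ELSE
        BEGIN
            -- No delimiter remains; the rest of the string is the last value.
            SET @CurrentValue = @Data;
            SET @Data = N'';
        END

        IF LEN(@CurrentValue) = 0
        BEGIN
            -- Blank value: skip it or convert it to NULL, per the parameters.
            IF @IgnoreBlankValues = 1
                CONTINUE;
            IF @TreatBlankAsNull = 1
                SET @CurrentValue = NULL;
        END

        INSERT INTO @Values (Value) VALUES (@CurrentValue);
    END

    RETURN;
```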
The loop, a WHILE statement, simply makes sure that we are working with a valid string, not with empty data.
The next step in my function is to determine if a blank value was passed - signified by a delimiter character being the left-most character. If so, we can handle it as desired. In my function, how it is handled is based on a parameter.
The next step is to determine if our delimiter exists in the string data. If so, we can parse the left-most string data into our current value, or if it doesn't exist, we can assume that the string contains the last of the delimited values. Either way, we can set our current variable and proceed.
Our last step is to add the value to the table we are returning. My function does some blank value logic, but that is technically optional.
So, there you go. Execute the code, and then test it by calling it, as per the code below. The results are sweet, especially because you can use the function as part of a SELECT statement or in a WHERE clause!
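A test call might look like this (sample values are arbitrary):

```sql
-- Parse a semi-colon delimited list, turning the blank entry into a NULL.
SELECT Value_Id, Value
FROM dbo.ParseSingleDelimitedString(N'alpha;beta;;gamma', N';', 0, 1);

-- Or use it to drive a WHERE clause against any table with an INT key:
SELECT *
FROM dbo.Orders
WHERE Order_Id IN (
    SELECT CONVERT(INT, Value)
    FROM dbo.ParseSingleDelimitedString(N'1,2,3', N',', 1, 0)
);
```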
If you liked this post, please feel free to forward it on or comment.
Table-valued Functions
Stored Procedure
Microsoft SQL Server Management Studio
SQL Server
RDBMS
dbo Schema
Primary Key
Identity Property
Variable
WHILE Loop
SELECT Statement
FROM Clause
Tags:
Function,
Sql Server,
T-SQL
Monday, July 26, 2010
How-To: Data Binding & Behaviors
Welcome to the Data Binding & Behaviors How-To!
All bold keywords and other pertinent MSDN articles and references appear at the end of the post.
For this example, we will utilize the power of Data Binding and the InvokeCommandAction behavior in a UserControl.
Let's imagine this scenario: we are creating a web application and need to perform the same task when multiple controls receive the Click event. In my real-life solution, I am building a shell using Silverlight 4 and MEF. This shell acts something like an OS in that multiple applications can be open at once, using "windows" to hold different and unique data. The "windows" are a UserControl that contains, among other things, a close button. One of my goals in developing this application was to use the Xaml architecture to the fullest - to include as little code-behind as possible. Not that code-behind is bad, or even unwanted - it is a requirement - I just felt that doing as much as possible in the Xaml would keep the code as clean and easy to debug as possible. That conviction grew even stronger when I thought about how many different types of events I would need to handle in the code-behind for something as simple as our "windows".
The resulting solution was to use data binding and behaviors. So, without further ado, let's jump into Blend and sort this out. What we start with is a UserControl with a bunch of added content, but let's focus on the buttonClose element below.
From the Assets menu, on the toolbar or the tab, select Behaviors and drag the InvokeCommandAction behavior to the control and drop it.
The result of this operation is that we have some new content added to our control and new properties become available on the Properties tab, as seen below.
Once we have this behavior attached to the control, we can now use data binding to perform the magic. However, there is some code to be created before we can entirely wire things up. Switching over to Visual Studio, we will need to create the command that will be executed. I created a folder called Objects and, under that, another named Actions. This is where I created a new class called CloseModule. The actual class is quite simple and straight-forward, as can be seen from the snippet below.
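Here's a sketch of the class - the namespace is illustrative, and the Close call on ModuleContainer is an assumed method, included just to make the example hang together:

```csharp
using System;
using System.Windows.Input;

namespace Shell.Objects.Actions
{
    // Command invoked by the InvokeCommandAction behavior to close a module "window".
    public class CloseModule : ICommand
    {
        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            // Always executable for this command; add logic here if circumstances warrant.
            return true;
        }

        public void Execute(object parameter)
        {
            // The CommandParameter binding hands us the "window" to close.
            var container = parameter as ModuleContainer;
            if (container != null)
            {
                container.Close(); // assumed close method on ModuleContainer
            }
        }
    }
}
```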
As you can see, the class implements the ICommand interface. This is important because, while we could create a plain method instead, that would make passing in parameters much less convenient. Implementing ICommand is as easy as Ctrl + . and selecting the appropriate option. Once the 2 methods are added, we can modify them to fit our functionality. For purposes of this command, the required method CanExecute always returns true, but depending on the circumstances you may want to add additional functionality or logic.
The Execute method in my class will require the parameter value to be of type ModuleContainer, the "window" used to store data using MEF. The magic of this parameter is in the data binding, and for that let's go back to Blend. Our first order of business is to add a reference to our new ICommand class in the App.xaml file. Optionally, you can add it as a resource in the Xaml page you will be calling it in, but I chose to make it a little more accessible. You can add it to any Xaml page like so:
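Something like this, where the xmlns prefix and resource key are illustrative:

```xml
<UserControl.Resources>
    <!-- assumes xmlns:actions="clr-namespace:Shell.Objects.Actions" on the root element -->
    <actions:CloseModule x:Key="CloseModuleCommand" />
</UserControl.Resources>
```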
Of course, you may need to add additional namespaces. After that, in the Properties tab of the Xaml page where our control resides, we can set our properties accordingly, using data binding.
Under the Trigger section, you can select the event, in this case Click. If you need to perform extended validation, the Conditions tab can also be used to bind to values of other controls, and based on the value, will enable the command or not. However, turning to the Common Properties section, we can use the Advanced options of the Command property to select the resource, as seen below.
Since, in this case, I want to close our "window" of content, I need some way to tell the close command which UIElement is actually being clicked on and closed. We can accomplish this by selecting Advanced options of the CommandParameter property and selecting "Element Property Binding..." This will change our cursor to be a mouse with a little target, and from there we can use the "Objects and Timeline" tab to select the root element, in my case "userControl".
The final step is to add the CommandName, in this case Execute. And there you have it. Your data binding and behavior control is ready to run.
If you liked this post, please forward it on or comment.
Tags:
Behaviors,
DataBinding,
How-To