Xaml Workshop Pages

Tuesday, September 21, 2010

WCF Error: The underlying connection was closed: The connection was closed unexpectedly.

CommunicationException was unhandled
The underlying connection was closed: The connection was closed unexpectedly.

Once in a while I come across this error while creating and testing a new WCF service.  And, after hours of searching for remedies, I finally found the fix.

First let's start off with some "solutions" that didn't work.  While it was necessary for me to increase the limits on the Service and Client side configs, that was not the issue.  Nevertheless, for my uses, these were good additions:

In my Web.config file I added the following settings:


<bindings>
  <basicHttpBinding>
    <binding name="Carbon.CarbonServiceBinding"
             allowCookies="true"
             maxBufferPoolSize="2147483647"
             maxBufferSize="2147483647"
             maxReceivedMessageSize="2147483647"
             closeTimeout="00:10:00"
             openTimeout="00:10:00"
             receiveTimeout="00:10:00"
             sendTimeout="00:10:00">
      <readerQuotas maxDepth="64"
                    maxStringContentLength="2147483647"
                    maxArrayLength="2147483647"
                    maxBytesPerRead="4096"
                    maxNameTableCharCount="16384"/>
    </binding>
  </basicHttpBinding>
</bindings>


<behaviors>
  <serviceBehaviors>
    <behavior name="Carbon.CarbonServiceBehavior">
      <dataContractSerializer maxItemsInObjectGraph="2147483647"/>
      <serviceTimeouts transactionTimeout="00:10:00"/>
      <serviceMetadata httpGetEnabled="true"/>
      <serviceDebug includeExceptionDetailInFaults="true"/>
    </behavior>
  </serviceBehaviors>
</behaviors>

This did not fix the issue and I kept getting the closed connection exception.  The issue was discovered when I began looking into the class that was being returned by my web method and its Data Contract and Members.

My class, named ReturnContainer, had a GUID, a bool and some strings, all marked as Data Members.  However, there was one member of type object.  During some of the processing that gets done, an XElement is assigned to this object member.  And, apparently, WCF does not know how to serialize an XElement.  The solution, for good measure, was twofold:

  1. Change the object data type to string
  2. Change all methods that assign to the member to include a .ToString() where necessary (see the sketch below)
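
For illustration, here is a minimal sketch of the corrected contract.  The member names are hypothetical; only the object-to-string change reflects the actual fix.

using System;
using System.Runtime.Serialization;

[DataContract]
public class ReturnContainer
{
    [DataMember]
    public Guid Id { get; set; }        // hypothetical member

    [DataMember]
    public bool Success { get; set; }   // hypothetical member

    // Previously "public object Payload"; WCF could not serialize
    // the XElement that the processing assigned to it.
    [DataMember]
    public string Payload { get; set; } // hypothetical member
}

// Wherever an XElement was assigned to the member, convert it first:
// container.Payload = someXElement.ToString();
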
In a nutshell, this error can be caused by large data sets, but in my case it was caused by a failure to serialize certain data types.


Wednesday, August 25, 2010

Same-Old Apple Business Philosophy: iCrap

The other day, while watching television, I was treated to a newer Apple commercial promoting the iPad.  There was one glaring annoyance that I have not been able to stop from creeping into my anti-Apple thoughts.

Should we judge the quality of a product by the quality of add-on products?

Many of my friends, family and other acquaintances have been asking me "When are you getting an iPhone?"  Everyone seems to think that Apple has some sort of superior, amazing technology that someone like me, a software engineer, shouldn't be able to live without.  The reality of the matter is that Apple, principally via the iPhone, is dumbing things down so much that people think that it is because of Apple that they can now do all sorts of new and amazing things on their phone.

Wrong.

It has nothing to do whatsoever with Apple.  In fact, if people would get off the gimpy, lopsided Apple fad bandwagon they would see that it may have been the first, but nowhere near the best.  Sure, we could go around in circles on the Apple debate, but we wouldn't get anywhere because 99% of all iPhone consumers are not fans of even mildly technical subjects - much less the kind of technical knowledge that would be required to have an intelligent debate on the matter.

The fact is that people confuse the quality of an app with the quality of the device.  Sure, turn on your iPad or iPhone and you can do all sorts of cool and amazing things.  Thank software developers, not Apple.  You can get the same exact app for your Android device, with better performance and multi-tasking.  Of course the downside of all of this is that the general public is now having the ability to do new and complex things that before took more effort, patience and skill.

Now, as a software engineer, I shouldn't be against making user-friendly applications for end-users, but it is a paradox without any easy or apparent answers, much less a balanced compromise.  Take, for example, MySpace.  Some would argue that MySpace gave the end-user the ability to totally customize their home page.  While that sort of customization may be handy as a feature, think about the end result: putting creative powers in incompetent hands.  How many people really know how to create a pleasing, aesthetic web page?  Not many; it's a skill that is learned.  But, if we give the general public absolute graphical power, we get a horrible mash of colors and animations and blinking text.  Why?  Because they can.  Just like Dr. Malcolm said in Jurassic Park: "They were so preoccupied with whether they could, they never bothered to think about if they should."

Unfortunately, Apple has blinded their consumers to the mere existence of other smart phones.  Apple consumers will blindly choose an Apple device just because it has the Apple logo on it.  They see a commercial that portrays some massive touch screen and some neighborly mid-20s person doing something amazing and they just have to have it.  Would you like a better product?  No!  No!  Why not?

Because... it's not an iPhone.

We software engineers and other good consumers of conscience have only one thing to look forward to: the day when Apple commits technological suicide - again.  Apple wasn't doing so badly a few decades ago, but they went so far proprietary that they almost ended up as a chapter in history.  The mindset of the Apple company is not to embrace application or open development standards.  They want everyone to adopt their ways.  Take Microsoft, for example.  While you can argue for hours over their actions and corporate ethics, look at what they have done for the development community.  You can get everything you need to build powerful, rich applications - free.  Want to build a website using traditional scripting and languages?  Want to build rich web applications using the latest (and greatest) software?  Want to design traditional applications?  Want to design web services?  Want to create your own database?  It doesn't matter, because all of this is free.  And not just that, there is so much available to learn from: tutorials, videos, whitepapers, blogs, etc.

If Apple were smart, they would capitalize and learn to make friends of the development community, not ostracize them in exchange for a few minutes in the limelight.  But, they won't.  History will run its course, and Apple will once again go the way of the bell-bottom: almost gone for a few decades until some demented soul tries to resurrect it.

Wednesday, July 28, 2010

Building a Dynamic, Efficient Data Auditing Model

Welcome to the wonderful world of data audits.  As anyone who has ever dealt with annoying managers and clueless users knows, the word "database" is thrown around to skirt responsibility and as a crutch.  We all know managers who don't believe a report because they "know the numbers aren't right", and users who "don't know how the client's email got deleted".  For us developers, audit data is a lifesaver.  Why?  Because data doesn't lie.  It may not always tell us what we want to hear - it may tell us an application is broken or a user is doing something they shouldn't be - but that's not the data's fault.

So, how do we make data auditing dynamic, efficient and ruthlessly accurate?  That's the subject for today's post.  To accomplish this, we will use Tables, Triggers, Functions and Procedures.

As usual, all bold keywords, references and pertinent articles are linked to at the end of the post.

Let's start our scenario with a simple table called Orders, as represented by the following script:
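
Something along these lines will do; the exact columns are a sketch, though Customer_Email, Subscription and the Changed_At/Changed_By pair are referenced later in this post:

CREATE TABLE dbo.Orders
(
    Order_ID       int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Orders PRIMARY KEY,
    Customer_Email varchar(255) NULL,
    Subscription   bit NOT NULL CONSTRAINT DF_Orders_Subscription DEFAULT (0),
    Order_Total    money NOT NULL,
    Changed_At     datetime NOT NULL CONSTRAINT DF_Orders_Changed_At DEFAULT (GETDATE()),
    Changed_By     varchar(128) NOT NULL CONSTRAINT DF_Orders_Changed_By DEFAULT (SUSER_SNAME())
)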

In order to get audit data on each INSERT and UPDATE, we need to utilize triggers.  However, to do this effectively we would need to handle each column and write data to an audit table.  This may seem cumbersome, probably because it is.  So, to make this scenario dynamic we are going to start with an additional table, as modeled below:
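
A sketch of that table; the name Audit_Data and the Old_Value/New_Value columns come from later in the post, while the remaining columns are assumptions:

CREATE TABLE dbo.Audit_Data
(
    Audit_ID    int IDENTITY(1,1) NOT NULL CONSTRAINT PK_Audit_Data PRIMARY KEY,
    Table_Name  sysname NOT NULL,
    Column_Name sysname NOT NULL,
    PK_Value    varchar(255) NOT NULL,  -- primary key value of the audited row
    Old_Value   varchar(max) NULL,      -- NULL on inserts
    New_Value   varchar(max) NULL,
    Changed_At  datetime NOT NULL CONSTRAINT DF_Audit_Data_Changed_At DEFAULT (GETDATE()),
    Changed_By  varchar(128) NOT NULL CONSTRAINT DF_Audit_Data_Changed_By DEFAULT (SUSER_SNAME())
)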

Each time we INSERT or UPDATE a record, the data will be added to this table.  A trigger similar to the snippet below would be required:
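
A hand-written insert trigger against the assumed schema might look like this, with one INSERT into Audit_Data per audited column:

CREATE TRIGGER dbo.TR_Orders_Audit_Insert
ON dbo.Orders
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;

    -- One block like this for each column we want to audit:
    INSERT INTO dbo.Audit_Data (Table_Name, Column_Name, PK_Value, New_Value)
    SELECT 'Orders', 'Customer_Email',
           CONVERT(varchar(255), i.Order_ID),
           CONVERT(varchar(max), i.Customer_Email)
    FROM inserted i;

    INSERT INTO dbo.Audit_Data (Table_Name, Column_Name, PK_Value, New_Value)
    SELECT 'Orders', 'Subscription',
           CONVERT(varchar(255), i.Order_ID),
           CONVERT(varchar(max), i.Subscription)
    FROM inserted i;

    -- ...and so on for every remaining column.
END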

As we can imagine, writing a trigger similar to this for each table, including each column would be a task and a half.  So... why not have a procedure do it for us?  This is taking the dynamic piece of the data audit model to the next level. But, before we write the procedure to do all of this for us, let's stop and think about the triggers.

Sure, writing them in the first place is a chore.  But, what if we don't care about a column or two in our audits?  Say, for example, we had 2 columns in our Orders table called Changed_At and Changed_By.  These simply record the user and time stamp of the last change.  We wouldn't need to audit these, since they are, in a way, an inline audit method.  So, to make the column auditing dynamic, let's create 2 additional tables and a function.

The 2 tables are very simple and between them contain only 6 columns.  They are modeled below:
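
Their shape might be as follows; the names are assumptions, but between them they carry the six columns mentioned above:

CREATE TABLE dbo.Audit_Tables
(
    Table_Name    sysname NOT NULL CONSTRAINT PK_Audit_Tables PRIMARY KEY,
    Audit_Inserts bit NOT NULL,
    Audit_Updates bit NOT NULL
)

CREATE TABLE dbo.Audit_Columns
(
    Table_Name  sysname NOT NULL,
    Column_Name sysname NOT NULL,
    Audit       bit NOT NULL,
    CONSTRAINT PK_Audit_Columns PRIMARY KEY (Table_Name, Column_Name)
)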

The idea here is very simple: we can tell the trigger to audit or ignore any given column in any table we have.  The creation and/or updating of the trigger will be done inside the procedure we will write in a moment.  Before that, we will need a function.  All this function does is return a table of all the columns we have decided to audit on a given table.  The function could be written as follows:
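
A sketch of such a function, joining the assumed Audit_Columns table against INFORMATION_SCHEMA.COLUMNS:

CREATE FUNCTION dbo.Get_Audit_Columns (@Table_Name sysname)
RETURNS TABLE
AS
RETURN
(
    SELECT c.COLUMN_NAME
    FROM INFORMATION_SCHEMA.COLUMNS c
    INNER JOIN dbo.Audit_Columns ac
        ON  ac.Table_Name  = c.TABLE_NAME
        AND ac.Column_Name = c.COLUMN_NAME
    WHERE c.TABLE_NAME = @Table_Name
      AND ac.Audit = 1
)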

This function is very simple: it joins the tables we just created against the INFORMATION_SCHEMA.  The result: a table of the columns that a) actually exist in the table and b) are set to be audited for the passed-in table name.

Now, it's time to move on to the actual procedure that does the manual labor for us.  Broken into logical pieces, the first bit is as follows:
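
Reconstructed from the description that follows, the opening might look like this (the procedure name is an assumption):

CREATE PROCEDURE dbo.Create_Audit_Triggers
    @Table_Name sysname,
    @ForInsert  bit = 1,
    @ForUpdate  bit = 1,
    @Success    bit          OUTPUT,
    @Message    varchar(max) OUTPUT
AS
BEGIN
    SET NOCOUNT ON;

    -- Default the output values.
    SET @Success = 0;
    SET @Message = '';

    -- If we want neither insert nor update triggers, there is nothing to do.
    IF ISNULL(@ForInsert, 0) = 0 AND ISNULL(@ForUpdate, 0) = 0
    BEGIN
        SET @Message = 'No trigger types were selected.';
        RETURN;
    END

    -- If there is no valid OBJECT_ID, we cannot rightly add triggers.
    IF OBJECT_ID(@Table_Name, 'U') IS NULL
    BEGIN
        SET @Message = 'Table ' + @Table_Name + ' does not exist.';
        RETURN;
    END

    -- Basic variables used by the snippets that follow.
    DECLARE @PK_Column            sysname,
            @Column_Name          sysname,
            @ColumnTriggerSection varchar(max),
            @Sql                  varchar(max);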

The procedure takes 3 input parameters (only the table name is required) and outputs 2.  The table name is self-explanatory, and the @ForInsert and @ForUpdate bits allow us further flexibility by letting us choose whether we want to audit one statement type, the other, or both.

The first few lines set up our output values to their defaults and do some basic validation.  After all, if we do not want insert or update triggers, what is there to do?  And if the table doesn't exist - if there is no valid OBJECT_ID - then we can't rightly add triggers to it.

After that, some basic variables are declared that we will utilize in the following snippets.

Next comes the TRY block that will allow us to catch fatal errors and report them to the user via the @Success and @Message outputs.  The first order of business is to get the primary key for the passed table.  This example will only work if there is a single PK column on the table; no more, no less.  For other scenarios, the code would need to be modified slightly.
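
Continuing the sketch:

    BEGIN TRY
        -- Find the table's one and only primary key column.
        SELECT @PK_Column = kcu.COLUMN_NAME
        FROM INFORMATION_SCHEMA.TABLE_CONSTRAINTS tc
        INNER JOIN INFORMATION_SCHEMA.KEY_COLUMN_USAGE kcu
            ON kcu.CONSTRAINT_NAME = tc.CONSTRAINT_NAME
        WHERE tc.TABLE_NAME = @Table_Name
          AND tc.CONSTRAINT_TYPE = 'PRIMARY KEY';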

If we are going to use insert triggers, the next block constructs dynamic SQL and places it into the @ColumnTriggerSection variable using a WHILE loop.  The loop is populated from the INFORMATION_SCHEMA and writes trigger code for every column in the table.  You may choose to exclude the PK field, and to do so would be as easy as adding an additional WHERE clause to the loop's query.

The setting of the trigger section using dynamic SQL is quite easy, since we are essentially taking the code from the trigger example towards the beginning and adding it in a loop.  The final section for the insert just adds the trigger declaration and body and, most importantly, determines whether we need a DROP/CREATE or just a CREATE.
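
Under the same assumptions, the insert-trigger section might be sketched like this:

    IF ISNULL(@ForInsert, 0) = 1
    BEGIN
        SET @ColumnTriggerSection = '';
        SET @Column_Name = '';

        -- WHILE loop over the INFORMATION_SCHEMA: one audit INSERT per column.
        -- (A WHERE clause here - or a join to dbo.Get_Audit_Columns - would
        -- exclude the PK field or honor the settings tables.)
        WHILE 1 = 1
        BEGIN
            SELECT TOP 1 @Column_Name = c.COLUMN_NAME
            FROM INFORMATION_SCHEMA.COLUMNS c
            WHERE c.TABLE_NAME = @Table_Name
              AND c.COLUMN_NAME > @Column_Name
            ORDER BY c.COLUMN_NAME;

            IF @@ROWCOUNT = 0 BREAK;

            SET @ColumnTriggerSection = @ColumnTriggerSection + '
    INSERT INTO dbo.Audit_Data (Table_Name, Column_Name, PK_Value, New_Value)
    SELECT ''' + @Table_Name + ''', ''' + @Column_Name + ''',
           CONVERT(varchar(255), i.' + @PK_Column + '),
           CONVERT(varchar(max), i.' + @Column_Name + ')
    FROM inserted i;';
        END

        -- DROP/CREATE if the trigger already exists; otherwise just CREATE.
        IF OBJECT_ID('dbo.TR_' + @Table_Name + '_Audit_Insert', 'TR') IS NOT NULL
            EXEC('DROP TRIGGER dbo.TR_' + @Table_Name + '_Audit_Insert;');

        SET @Sql = 'CREATE TRIGGER dbo.TR_' + @Table_Name + '_Audit_Insert
ON dbo.' + @Table_Name + '
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;' + @ColumnTriggerSection + '
END';
        EXEC(@Sql);
    END

    -- The update-trigger section, the END TRY/BEGIN CATCH error handling and
    -- a final SET @Success = 1 would follow.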

The code that follows, for the update trigger portion, is virtually identical.  The differences are in the dynamic SQL that we generate in the loop for each column and in the creation of the trigger itself.  These need to be slightly different, as the trigger name cannot be duplicated, and on update we are going to record both the old value and the new value.

Now that we have all of our objects created: audit data table, audit customization tables, audit column function and our dynamic trigger creation procedure - let's test her out.

Let's first execute our procedure for the Orders table like so:
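
Using the assumed names from above:

DECLARE @Success bit, @Message varchar(max);

EXEC dbo.Create_Audit_Triggers
     @Table_Name = 'Orders',
     @ForInsert  = 1,
     @ForUpdate  = 1,
     @Success    = @Success OUTPUT,
     @Message    = @Message OUTPUT;

SELECT @Success AS Success, @Message AS Message;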

Next, let's add some data into the Orders table via an insert, then change some of it via an update.  For example, say we forgot to add the customer's email address and to sign them up for a subscription.
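
Against the assumed Orders schema, that could look like:

-- The initial insert: no email address, no subscription yet.
INSERT INTO dbo.Orders (Customer_Email, Subscription, Order_Total)
VALUES (NULL, 0, 49.99);

-- Correct both values with an update.
UPDATE dbo.Orders
SET Customer_Email = 'customer@example.com',
    Subscription   = 1
WHERE Order_ID = 1;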

The results, when we query our Audit_Data table, would be similar to this:
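
With the assumed schema, the rows would look roughly like this (values illustrative):

Column_Name     Old_Value  New_Value             Changed_At
Customer_Email  NULL       NULL                  2010-07-28 10:02:13
Subscription    NULL       0                     2010-07-28 10:02:13
Order_Total     NULL       49.99                 2010-07-28 10:02:13
Customer_Email  NULL       customer@example.com  2010-07-28 10:05:47
Subscription    0          1                     2010-07-28 10:05:47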

As we can see, our original insert created values in the New_Value column only, since there technically was no old data.  However, when we updated, we can see new rows with a different timestamp for the 2 columns we updated.

Summary: we now can track every change on every row - transparently, dynamically and mercilessly.

Tables
Triggers
Functions
Procedures
INSERT Statement
UPDATE Statement
INFORMATION_SCHEMA
OBJECT_ID
Primary Key Constraints
Dynamic SQL
WHILE Loop
WHERE Clause
Drop Trigger
Create Trigger

Tuesday, July 27, 2010

Converting Delimited Data Into a Table

In this post we will examine the powerful potential of Table-valued Functions!


As per usual, all bold keywords and pertinent MSDN articles are linked at the bottom of the post.


To set up the scenario for this example, you can probably think of several instances where you pass a comma or semi-colon delimited list of values into a Stored Procedure and need to parse each value separately.  This function will do just that, but in a re-usable, dynamic way.


To begin, let's open Microsoft SQL Server Management Studio.  For this example, I am using SQL Server 2008, but the principles can be transferred to most any RDBMS.  Let's connect to the appropriate server and database, then expand the Programmability tree and the Functions tree.


We can right-click the Table-valued Functions tree and select "New Multi-statement Table-valued Function..." from the context menu.  This will open a new query tab with some templated content.

Obviously, the template is full of placeholder statements that we need to replace with our real content.  Let's begin by removing the first 10 lines of comments, and optionally the Author comment lines.  Now we need to give our new function a name, such as ParseSingleDelimitedString.  I placed mine in the dbo schema, but you may need to specify another if your circumstance warrants it.  And, of course, leaving the schema name blank in the function name will place it automatically in the dbo schema.


For a base essentials function, you could just specify one parameter, @String, that contains the delimited data to be parsed.  For my function, I also included @Delimiter, @IgnoreBlankValues and @TreatBlankAsNull.  These can be hard-coded, but I wanted to make the function as dynamic as possible.


In our function template, we also need to specify the table structure we will be returning.  In my example, I included a Primary Key with an Identity value, but this is completely optional.  So, up to now, my function looks like the snippet below.
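
Reconstructed from that description, the signature might be (data types are assumptions):

CREATE FUNCTION dbo.ParseSingleDelimitedString
(
    @String            varchar(max),
    @Delimiter         char(1),
    @IgnoreBlankValues bit,
    @TreatBlankAsNull  bit
)
RETURNS @Values TABLE
(
    Value_ID int IDENTITY(1,1) PRIMARY KEY,
    Value    varchar(max) NULL
)
AS
BEGIN
    -- Body below.
    RETURN;
END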

Once the signature of the function has been completed, we can move on to the body, where the real magic is done.  The work that the body does is not complicated, but does require some thought.  The logic of it is this: loop through the string and select the left-most characters until we find the delimiter character, add them to our table, remove the current iteration from the string data, and repeat until the string is of 0 length.  Naturally, there is a little more to it than that, but it gives us a base for what our function body should look like.


For my function, I put the @String parameter into a local variable and do all my manipulation on that one.  I also created an additional variable, @CurrentValue, that will be used to contain the current iteration.  Below is the entire function body, which we will step through.
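
A sketch of that body, using @Work as the local copy (the variable names other than @CurrentValue are assumptions):

    DECLARE @Work         varchar(max) = @String;
    DECLARE @CurrentValue varchar(max);
    DECLARE @Position     int;

    WHILE LEN(ISNULL(@Work, '')) > 0
    BEGIN
        SET @Position = CHARINDEX(@Delimiter, @Work);

        IF @Position = 1
        BEGIN
            -- Blank value: the delimiter is the left-most character.
            SET @CurrentValue = '';
            SET @Work = SUBSTRING(@Work, 2, LEN(@Work));
        END
        ELSE IF @Position > 1
        BEGIN
            -- Take everything left of the delimiter, then remove this
            -- iteration (value plus delimiter) from the working string.
            SET @CurrentValue = LEFT(@Work, @Position - 1);
            SET @Work = SUBSTRING(@Work, @Position + 1, LEN(@Work));
        END
        ELSE
        BEGIN
            -- No delimiter left: the string holds the last value.
            SET @CurrentValue = @Work;
            SET @Work = '';
        END

        -- Blank-value handling, driven by the parameters.
        IF LEN(@CurrentValue) > 0
            INSERT INTO @Values (Value) VALUES (@CurrentValue);
        ELSE IF @IgnoreBlankValues = 0
            INSERT INTO @Values (Value)
            VALUES (CASE WHEN @TreatBlankAsNull = 1 THEN NULL ELSE '' END);
    END

    RETURN;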

The loop, a WHILE statement, simply makes sure that we are working with a valid string, not with empty data.


The next step in my function is to determine if a blank value was passed - signified by a delimiter character being the left-most character.  If so, we can handle it as desired.  In my function, how it is handled is based on a parameter.


The next step is to determine if our delimiter exists in the string data.  If so, we can parse the left-most string data into our current value, or if it doesn't exist, we can assume that the string contains the last of the delimited values.  Either way, we can set our current variable and proceed.


Our last step is to add the value to the table we are returning.  My function does some blank value logic, but that is technically optional.


So, there you go.  Execute the code, and then test it by calling it, as per the code below.  The results are sweet, especially because you can use the function as part of a SELECT statement or in a WHERE clause!
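
For example:

-- As part of a SELECT:
SELECT Value_ID, Value
FROM dbo.ParseSingleDelimitedString('alpha;beta;;gamma', ';', 0, 1);

-- Or feeding a WHERE clause:
SELECT *
FROM dbo.ParseSingleDelimitedString('alpha;beta;;gamma', ';', 1, 0)
WHERE Value LIKE 'a%';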

If you liked this post, please feel free to forward it on or comment.


Table-valued Functions
Stored Procedure
Microsoft SQL Server Management Studio
SQL Server
RDBMS
dbo Schema
Primary Key
Identity Property
Variable
WHILE Loop
SELECT Statement
FROM Clause

Monday, July 26, 2010

How-To: Data Binding & Behaviors

Welcome to the Data Binding & Behaviors How-To!

All bold keywords and other pertinent MSDN articles and references appear at the end of the post.

For this example, we will utilize the power of Data Binding and the InvokeCommandAction behavior in a UserControl.  

Let's imagine this scenario: we are creating a web application, and need to perform the same task when multiple controls receive the Click event.  In my real-life solution, I am building a shell using Silverlight 4 and MEF.  This shell will act something like an OS in that multiple applications can be open at once, using "windows" to hold different and unique data.  The "windows" are a UserControl that contains, among other things, a close button.  One of my goals in developing this application was to use the Xaml architecture to the fullest and include as little code-behind as possible.  Not that code-behind is bad, or even unwanted - it is a requirement - I just felt that doing everything I could in the Xaml would keep the code as clean and easy to debug as possible.  That conviction was only reinforced when I thought of how many different types of events I would need to handle in the code-behind for something as simple as our "windows".

The resulting solution was to use data binding and behaviors.  So, without further ado, let's jump into Blend and sort this out.  What we start with is a UserControl with a bunch of added content, but let's focus on the buttonClose control.

From the Assets menu, on the toolbar or the tab, select Behaviors and drag the InvokeCommandAction behavior to the control and drop it.

The result of this operation is that we have some new content added to our control, and new properties become available on the Properties tab.

Once we have this behavior attached to the control, we can now use data binding to perform the magic.  However, there is some code to be created before we can entirely wire things up.  Switching over to Visual Studio, we will need to create our command that will be executed.  I created a folder called Objects and under that, another named Actions.  This is where I created a new class called CloseModule.  The actual class is quite simple and straight-forward, as can be seen from the snippet below.
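
The class is along these lines; the namespace and the close logic are assumptions, while CloseModule, ModuleContainer and the always-true CanExecute come from the description that follows:

using System;
using System.Windows.Input;

namespace Shell.Objects.Actions   // hypothetical namespace
{
    public class CloseModule : ICommand
    {
        public event EventHandler CanExecuteChanged;

        public bool CanExecute(object parameter)
        {
            // Always executable for the purposes of this command.
            return true;
        }

        public void Execute(object parameter)
        {
            var module = parameter as ModuleContainer;
            if (module != null)
            {
                // Hypothetical close logic: remove the "window" from its host.
                module.Close();
            }
        }
    }
}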

As you can see, the class implements the ICommand interface.  This is important because we could create a plain method, but that would prevent us from passing in parameters with as much ease.  Implementing ICommand is as easy as Ctrl + . and selecting the appropriate option.  Once the 2 methods are added, we can modify them to fit our functionality.  For purposes of this command, the required method CanExecute always returns a true value, but depending on the circumstances you may want to add additional functionality or logic.

The execute method in my class will require the parameter value to be of type ModuleContainer, the "window" used to store data using MEF.  The magic of this parameter is in the data binding, and for that let's go back to Blend.  Our first order of business is to add a reference to our new ICommand class in the App.xaml file.  Optionally, you can add it as a resource in the Xaml page you will be calling it in, but I chose to make it a little more accessible.  You can add it to any Xaml page like so:
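
A sketch of the resource declaration; the xmlns prefix and key are illustrative:

<!-- On the root element: xmlns:actions="clr-namespace:Shell.Objects.Actions" -->
<UserControl.Resources>
    <actions:CloseModule x:Key="CloseModuleCommand" />
</UserControl.Resources>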

Of course, you may need to add additional namespaces.  After that, in the Properties tab of the Xaml page where our control resides, we can set our properties accordingly, using data binding.

Under the Trigger section, you can select the event, in this case Click.  If you need to perform extended validation, the Conditions tab can also be used to bind to values of other controls and, based on the value, will enable the command or not.  However, turning to the Common Properties section, we can use the Advanced options of the Command property to select the resource.

Since, in this case, I want to close our "window" of content, I need some way to tell the close command which UIElement is actually being clicked on and closed.  We can accomplish this by selecting Advanced options of the CommandParameter property and selecting "Element Property Binding..."  This will change our cursor to be a mouse with a little target, and from there we can use the "Objects and Timeline" tab to select the root element, in my case "userControl".

The final step is to add the CommandName, in this case Execute.  And there you have it.  Your data binding and behavior control is ready to run.
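
The net effect in the Xaml would be along these lines (a sketch; prefixes, key and element names are illustrative):

<Button x:Name="buttonClose">
    <i:Interaction.Triggers>
        <i:EventTrigger EventName="Click">
            <i:InvokeCommandAction CommandName="Execute"
                                   Command="{StaticResource CloseModuleCommand}"
                                   CommandParameter="{Binding ElementName=userControl}" />
        </i:EventTrigger>
    </i:Interaction.Triggers>
</Button>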

If you liked this post, please forward it on or comment.

What tutorial should be next?