My diary of software development

Archive for December, 2009

So what exactly *is* the difference between the logical architecture and the physical?

In IT, we’ve all read a technical document and come across a diagram labeled ‘Logical Architecture’:

Some random logical diagram I pulled off of a Google search
And then later on in the same document, we have come across a diagram labeled ‘Physical Architecture’:

Some random physical diagram I pulled off of a Google search

What exactly is the difference between the logical and the physical design? There is a plethora of logical and physical architecture diagrams throughout the IT world, so many in fact that we expect to find them in our literature. Just as we expect to see a Wal-Mart, a Cracker Barrel, and a Baptist Church at every exit on Interstate 40, we expect to find logical and physical designs in most IT documents. I don’t know when the idea of logical vs. physical first started; maybe it started with a team of peasants in the 12th century who were tasked with building a moat around some castle. But as I sat reading yet another Systems Design Document for yet another project here at work, the answer to this question occurred to me.
The logical design is what we’re supposed to do and the physical design is what we’re doing. I plan to look at every logical and physical design I see from now on in light of this revelation.

Does Kerberos authentication affect SQL Server connection pooling?

The Question Submitted To Me

A manager at my company asked me this question. I told him that I didn’t know for sure but I didn’t *think* Kerberos authentication would affect connection pooling. I put on my thinking cap at this point and told him that the connection pool in an ADO.Net application (he was concerned with only .Net apps) was keyed on the connection string so, I told him, if the connection string didn’t change between users then a single connection pool would be used for all users.   

I went on to tell this manager that with Kerberos authenticated users, the connection string would be a trusted connection string without specific credentials for each user. Since the connection string would be the same trusted connection string for all users, there would be one connection pool for all of the Kerberos authenticated users.   
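To make that concrete, here is a sketch of the two flavors of connection string (the server and database names are made up for illustration):

```
Trusted (identical for every Kerberos-authenticated user):
  Data Source=SQL1;Initial Catalog=MyDb;Integrated Security=SSPI;

Per-user credentials (a different string for each user):
  Data Source=SQL1;Initial Catalog=MyDb;User ID=user1;Password=...
```

Since the pool is keyed on the string, my reasoning went, the trusted form should map every user onto one pool.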

After telling him the above information, he asked me to put together a test to document my findings, so I set about creating a test which could prove this one way or the other. However, at this point I was pretty sure I was right because it *seemed* right given everything I knew.

Boy, was I in for an eye opener.   

Overview Of My Test

The test I came up with was to write a test client which connected to a web service on another machine. This web service would then connect to another web service on a second machine. This second web service would then connect to a SQL Server database on a third machine:   


I set up these machines in the trusty hyper-v environment I used for the Sharepoint 2010 farm.

To validate my theory, I decided to create 5 domain users, user1 – user5, and run each of them through my test one by one. I figured that at the end of the test, the SQL Server would have either 5 separate connections, one for each user, or a single session shared by every user through the connection pool.
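As a quick way to count those from the server side, a query along these lines against SQL Server 2008’s sysprocesses view shows one row per connection along with the login that owns it (KERBTEST is the name of my test database):

```sql
-- One row per connection into the test database, with the owning login
SELECT spid, loginame, program_name
FROM master..sysprocesses
WHERE dbid = DB_ID('KERBTEST');
```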

At this point, I had been doing a lot of reading on Kerberos and I ran across a document from Microsoft which flat out told me that Kerberos authentication would defeat the connection pool. However, I still thought I was right and decided that I’d ignore that document; maybe they were talking about an issue that didn’t apply to this environment. Besides, I was in the middle of setting up this test and I wanted to finish it.

Running The Test

My test client presented a menu and gave the user an option of calling one of two operations:

WhoAmI() – Returns the identity of the caller: both the thread identity and the current Windows identity.

Here is a screen shot of the WhoAmI() test:

WhoAmI() Test Results

ExecuteSQLServerDBCommand() – Accepts 2 parameters: a database connection string and a SQL string. This method connects using the database connection string, executes the SQL, and returns the SPID of the current DB connection and the number of rows affected by the executed SQL.
Here is a screen shot of the ExecuteSQLServerDBCommand() test:

ExecuteSQLServerDBCommand() Test Results


SQL Server Configuration

I’ll start with the SQL Server endpoint of my test because it’s going to be easier to explain each endpoint by starting at the end and working my way back to the test client.

My SQL Server was a SQL Server 2008 instance running on Windows Server 2008 R2 Standard. I created a simple database named KERBTEST and gave all 5 domain users access to it:

KERBTEST SQL Server Database

WCF Web Service 2

I wrote this web service as a WCF web service that was configured for Kerberos authentication. This web service was the final endpoint before the SQL Server and it contains the two operations WhoAmI() and ExecuteSQLServerDBCommand(). A partial block of the code in the WCF Web Service 2 is shown below:

        #region IService2 Members

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService2.WhoAmI()
        {
            string result = String.Format("{3}System.Security.Principal.WindowsIdentity.GetCurrent().Name = {1}{0}System.Threading.Thread.CurrentPrincipal.Identity.Name = {2}{0}"
                , Environment.NewLine
                , System.Security.Principal.WindowsIdentity.GetCurrent().Name
                , System.Threading.Thread.CurrentPrincipal.Identity.Name
                , GetDecoratedFunctionName("WhoAmI"));

            return result;
        }

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService2.ExecuteSQLServerDBCommand(string dbcs, string commandSQL)
        {
            StringBuilder result = new StringBuilder();
            try
            {
                using (SqlConnection conn = new SqlConnection(dbcs))
                {
                    conn.Open();

                    // Capture the SPID of the connection the pool handed us
                    SqlCommand cmd = new SqlCommand("select @@SPID", conn);
                    string spid = cmd.ExecuteScalar().ToString();

                    cmd = new SqlCommand(commandSQL, conn);
                    int rowsAffected = cmd.ExecuteNonQuery();
                    result.Append(String.Format("SQL Server connection established:{0}\tSPID = {1}{0}\tCommand executed. Rows affected = {2}{0}"
                        , Environment.NewLine
                        , spid
                        , rowsAffected));
                }
            }
            catch (Exception ex)
            {
                result.Append(String.Format("{0}{1}", Environment.NewLine, ex.ToString()));
            }

            return result.ToString();
        }

        #endregion

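The Kerberos side of this service lived in configuration rather than code. A sketch of the sort of wsHttpBinding security element involved (the binding name and the exact attribute values here are assumptions for illustration, not my actual config):

```xml
<wsHttpBinding>
  <binding name="kerbBinding">
    <security mode="Message">
      <!-- Windows credentials; with negotiateServiceCredential="false"
           the client must authenticate via Kerberos (no NTLM fallback) -->
      <message clientCredentialType="Windows"
               negotiateServiceCredential="false"
               establishSecurityContext="false" />
    </security>
  </binding>
</wsHttpBinding>
```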
WCF Web Service 1

I also wrote this web service as a WCF web service configured for Kerberos authentication. This web service had the same operations as Web Service 2 (WhoAmI() and ExecuteSQLServerDBCommand()).

The WhoAmI() operation did the same as the WhoAmI() operation in the second web service and then called WhoAmI() on the second web service.

The ExecuteSQLServerDBCommand() operation did nothing in this web service except call the ExecuteSQLServerDBCommand() operation in the second web service.

A partial block of the code in the WCF Web Service 1 is shown below: 


        #region IService1 Members

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService1.WhoAmI()
        {
            ServiceReference2.Service2Client svc = new WcfService1.ServiceReference2.Service2Client();
            svc.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

            string result = String.Format("{3}System.Security.Principal.WindowsIdentity.GetCurrent().Name = {1}{0}System.Threading.Thread.CurrentPrincipal.Identity.Name = {2}{0}{4}"
                , Environment.NewLine
                , System.Security.Principal.WindowsIdentity.GetCurrent().Name
                , System.Threading.Thread.CurrentPrincipal.Identity.Name
                , GetDecoratedFunctionName("WhoAmI")
                , svc.WhoAmI());

            return result;
        }

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService1.ExecuteSQLServerDBCommand(string dbcs, string commandSQL)
        {
            StringBuilder result = new StringBuilder();
            try
            {
                ServiceReference2.Service2Client svc = new WcfService1.ServiceReference2.Service2Client();
                svc.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
                result.Append(svc.ExecuteSQLServerDBCommand(dbcs, commandSQL));
            }
            catch (Exception ex)
            {
                result.Append(String.Format("{0}{1}", Environment.NewLine, ex.ToString()));
            }

            return result.ToString();
        }

        #endregion


I ran my test and executed 2 ExecuteSQLServerDBCommand() operations: one for user1 and one for user2. Remember that ExecuteSQLServerDBCommand() would return the SPID of the current connection, so if I was right and Kerberos authentication did not affect the connection pool, then the same SPID would be returned for each user in my test.

Here is a screen shot of that test:

Full Test Results

The test proved that Kerberos authentication affected the ADO.Net connection pool. I have a friend who often uses the single-word phrase “sigh…” to appropriately sum up most situations. So that’s what I said at this point. Sigh…

The reason Kerberos authentication affects the connection pool is that ADO.Net keys the connection pool not just on the connection string, but also on the user’s Windows identity. So if 100 users connect to the database with a trusted connection string, then 100 pools will get created, each holding a single connection.
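A minimal sketch of the effect (the server and database names are hypothetical): both blocks below use the identical connection string, but if each Open() executes while impersonating a different Windows user, ADO.Net draws from a different pool, so SQL Server hands out a different SPID:

```csharp
string cs = "Data Source=SQL1;Initial Catalog=KERBTEST;Integrated Security=SSPI;";

// Opened while impersonating DOMAIN\user1 -> pool #1, physical connection #1
using (SqlConnection conn = new SqlConnection(cs))
{
    conn.Open();
}

// Same string, but opened while impersonating DOMAIN\user2 -> pool #2,
// a brand new physical connection with its own SPID
using (SqlConnection conn = new SqlConnection(cs))
{
    conn.Open();
}
```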
I told the manager who tasked me with settling this issue my findings. He and I then presented them to a VP, who promptly told us that in Oracle, this would not happen. I didn’t believe the VP but didn’t tell him so. After all, I was zero and one in my competitions to prove my theories against someone more knowledgeable.
I won’t go into my research on the Oracle issue, but I did quite a bit of it and found an answer. Unfortunately, after finding the answer, I moved from zero and one to zero and two. Oracle has the ability to allow User2 to connect through User1’s connection and perform operations on it, so there is no need for 2 separate connections.
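If my memory of the Oracle research is right, the feature is called proxy authentication, and the grant that enables it looks something like this (the user names are hypothetical):

```sql
-- Allow user2 to authenticate through user1's existing connection
ALTER USER user2 GRANT CONNECT THROUGH user1;
```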

Setting up a Sharepoint 2010 Hyper-V Lab

Recently Microsoft released Sharepoint 2010 beta and quite a few other pieces of software: Visual Studio 2010 beta, Windows Server 2008 R2 RTM, and Windows 7 RTM, which made it possible to deploy an SP2010 farm and do some development against it with the latest server and application technology. So the time was right to set up Sharepoint 2010 and see what it’s got. My plan was to set up a Sharepoint 2010 farm so I could get down into the details of a farm deployment and see what was new there, and to do some development against the farm with the new Visual Studio and Win7 to see what had changed or been introduced there.

In order to set up this farm and development workstation, I figured I’d set up a hyper-v lab. For that, I knew I’d need two machines: a hyper-v server and a Vista workstation which I would use to continue the project work I am tasked with at my company. I had a Dell Precision workstation with a decent amount of memory, disk space, and CPU horsepower, so I decided to use it as my hyper-v server, and I had a Vista laptop for my daily project work.

2008 R2 Core Or Full?

My first question was: should I install the core or the full installation of 2008 R2 for my hyper-v host? At first I chose core because I didn’t want to waste any of the resources on my Precision workstation which could be used for the farm or for development, so I went ahead and installed core and set up my Vista laptop to manage the core hyper-v installation. This was all fairly straightforward until I got the bright idea to use System Center Virtual Machine Manager (SCVMM) to help me manage my virtual lab. I wanted to give SCVMM a spin because in the past I had used Virtual Server 2005 and SCVMM’s predecessor, Virtual Machine Manager, and I wanted to see what was new in virtual machine management in hyper-v.

I figured I could run the SCVMM management console on my Vista workstation and use it from there, but after downloading SCVMM and attempting to install it on my Vista workstation, I found that I could not: the management console has to be installed on Server 2008. That meant I could either deploy a guest VM for the SCVMM or re-install my hyper-v server as a full installation and use it for the SCVMM. I chose to re-install my hyper-v server as a full installation since that seemed like less of a waste of my Precision workstation’s physical resources.

How To Back It All Up?

The next item I wanted to address was how to back up my guest VMs. I knew that since I would be installing beta software and working through some hotfixes and whatnot, I’d need a good backup strategy. After all, the only way I knew of to learn something really well was to break it and then have to fix it, and restoring from a backup made fixing things that much easier. There are 2 basic ways in which a hyper-v guest VM can be backed up:

  1. Windows Server Backup (WSB)
  2. Data Protection Manager (DPM)


Checkpoints are kind of a lightweight backup solution, so I didn’t see checkpoints as a long-term backup solution; I saw them as a way to make a quick save of the VM before attempting something which might break it. In addition, my experience with Virtual Server 2005 checkpoints left me with two bad memories: you couldn’t restore a checkpoint without losing all checkpoints after the restored one, and constantly checkpointing/restoring a domain member server would inevitably result in that member server’s domain association getting corrupted.

However, when using the new SCVMM checkpoints, I realized that I could not only restore to a checkpoint without losing all checkpoints later in time but that I could also fork the checkpoints. I could create child checkpoints and sibling checkpoints as well:

SCVMM Checkpoints

So checkpoints for me became a good and quick backup tool but still didn’t satisfy my need for a full backup.

Windows Server Backup (WSB)

WSB is the successor to NTBackup and does what you’d expect of it: it backs up and restores folders and files. But it can also operate at a higher level and use an application’s Volume Shadow Copy Service (VSS) writer to back up and restore at the application level. Hyper-v has a VSS writer which, through a registry edit, can be registered for use by WSB.

I had an external 500GB drive lying around that I was using as the SCVMM library for my ISO and other installation images, so I figured I would also use it for my backup media. But after I got WSB configured with the hyper-v VSS writer and reached the step in the WSB backup wizard to select the backup destination, it didn’t show my external drive:

WSB Wizard

I went hunting online to see what I could do and found some articles about backing up to an external drive with WSB, but it seemed I had to reformat the disk first. Since this external drive also hosted the VM library of ISOs and other binaries I needed for my hyper-v lab, formatting it was out of the question, and so this ended my backup effort with WSB.
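For the record, the registry edit that exposes the hyper-v VSS writer to WSB is, as best I recall from the Microsoft KB article, a single key; verify the writer GUID against the KB before using it:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}]
"Application Identifier"="Hyper-V"
```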

Data Protection Manager (DPM)

The next backup method was DPM, which is a more enterprise-level backup solution from Microsoft. I downloaded and installed DPM 2007 Service Pack 1, which meant that I also had to install SQL 2005 along with its necessary service packs to get it running on 2008 R2.

I didn’t have too much trouble getting DPM installed and configured; it was all pretty much what I would expect of a backup solution. However, when I got to the point where I was adding disks to the storage pool to be used for the backup destination, I got a pleasant surprise: DPM did not show my external drive so that I could add it to the storage pool:

DPM Disk Selection

At this point, I went back to the Internet to see what was going on and found out that DPM does not support backing up to external USB drives.

So at that point, I had no backup solution for my hyper-v installation. I do have plans to find another external drive and use WSB to back up to it.

Clones, Templates, and Virtual Machines In the SCVMM

My virtual Sharepoint lab will eventually consist of 7 guest VMs:

  1. Domain Controller
  2. SQL Server
  3. Index Server
  4. Web Server
  5. Web Server
  6. Development workstation
  7. Development workstation

Guest VMs


When I began to create my guest VMs, I wanted the ability to store a base 2008 R2 server in my SCVMM library that was fully patched and ready to deploy. I envisioned being able to deploy multiple guest VMs from this one copy in the library. After installing and patching my first server guest VM, I didn’t store it directly to my library. Instead I used the ‘clone’ option on the context menu to create 4 additional servers: IDX1, SQL1, WEB1, and WEB2. I knew that this probably wasn’t right, that it couldn’t be that easy, but I did it anyway, and I was correct: it wasn’t right. Each of my 5 guest VM servers had the same SID, which I verified using the PsGetSid utility. I think clones are useful only for backing up a machine and storing it in the SCVMM library.

Storing a VM in the SCVMM library

The next thing I tried to do was to take that first fully installed and patched server and run sysprep on it and store it into the SCVMM library. I figured that having a sysprepped machine in my library would allow me to deploy it into a running guest VM multiple times and each of those guest VMs would have a different SID. But when I went to pull the VM which I stored in the library and deploy it as a guest VM, I found 2 problems:
  1. I couldn’t rename the machine. I had it named something like ‘Base 2008 R2 x64’, so I couldn’t rename it SQL1, WEB1, etc.
  2. When deploying the VM from the library, it was not copied out of the library; it was moved. So this was a one-time operation, and it pretty much defeated the point of storing a single copy in my library so I didn’t have to install and patch 5 separate servers.

Creating a template in the SCVMM library

My 3rd and last attempt at accomplishing my vision turned out to be the charm. I took that first server, which was fully patched and would serve as my base server, and told SCVMM to create a template from it. Before creating the template, I figured I needed to run sysprep on it since that seemed correct, but while watching the SCVMM jobs that create the template, I noticed that one of the steps was to sysprep the machine:

Sysprepping The Template

So creating the template also sysprepped the machine. Very nice. I now had the ability to store a 2008 R2 server template in my library which I could pull and deploy multiple times as needed.