My diary of software development

Archive for the ‘Sys Admin’ Category

Cisco and Microsoft NPS

My church uses a Cisco ISA570 as our firewall and runs Windows Server 2012 R2 servers on the LAN. By default, the ISA uses its local database for authenticating management users and VPN users, but I wanted to use Network Policy Server (NPS) and Active Directory on our 2012 R2 servers for authentication and authorization. It turns out that getting this set up and working was much easier than I’d thought.

NPS Configuration

First I added the ISA570 to the NPS as a standard RADIUS client:

[screenshot]

I wanted to use NPS to authenticate 3 groups of users:

  1. Those that have admin management access to the ISA.
  2. Those that have read-only management access to the ISA.
  3. Those who can VPN into the ISA.

I wanted each user authenticated against AD and to use their AD Security Group to match the group above. This meant I needed to create 3 Network Policies on the NPS and set each policy’s condition to match an AD Security Group I had previously created:

Firewall Admin Access Policy

[screenshot]


Firewall Read-Only Access Policy

[screenshot]


VPN Access Policy

[screenshot]


There are 2 things to note about all 3 network policies:

  1. The Filter-Id value must match the name of the group (not yet created) on the ISA570.
  2. The Authentication Method is PAP, SPAP which, as the screen shots say, is an unencrypted authentication method. And this bothered me.

Both of these requirements are clearly stated on page 394 of the ISA500 Admin Guide. I plan to open a support request with Cisco to ask whether there is another Authentication Method I can use besides PAP, SPAP. I did try a few other methods, but none worked except PAP, SPAP.

ISA570 Configuration

First I needed to point the ISA at the NPS (RADIUS):

[screenshot]


Next I created 3 new groups which matched the Filter-Id setting on my NPS Network Policies:

[screenshot]


And that was it. Everything is working as I’d wanted.


Moving to the Clouds Part III–Getting IP Phones Working With TMG


Part I of this series is here

Part II of this series is here


By monitoring the TMG log while trying to make a call, I figured out what additional rules and network entities I needed:


An address range for PHONE.COM

PhoneCom

A domain name set for SIP.PHONE.COM

SIP Phone Com

A new protocol I named RTP Pope

RTP Pope

A new protocol I named UDP 6060

UDP 6060

Two new firewall rules (Phone Init and NTP)

FirewallRules


The VOIP phones have been working over Microsoft TMG for about 3 months now without problems.

Moving to the Clouds Part II–Getting IP Phones Working With TMG

Part I

In Part I, I talked about moving my church’s office and network resources to the cloud. They had decided to sell their building in order to build at a new site, and in the meantime the staff had dispersed from the old office to a temporary office and to working from their homes.

My church decided to move their phone numbers to Phone.com and to use IP phones and softphones for their communications after the move out of their old office. In this post, I’m going to talk about my struggle to get their IP phones working through Microsoft TMG.

Why Microsoft TMG?

I’ve been asking myself that question a lot lately. At the office I had them set up with a Cisco PIX for their firewall, but I chose to move them to TMG since it’s a lot easier for me to manage. In addition, I believe it will help me with the Office 365 identity integration with their Active Directory. But right now I’m just trying to get their IP phones working.

The First Experience

I didn’t know how IP phones worked when I started this, since at the church office a phone provider brought the lines into a PBX and then out to the offices. There was nothing for me to manage.

At the temporary office, everything worked for the IP phones as long as I used the Verizon FiOS router, but stopped as soon as I switched over to TMG. Nice. Time for me to go and study how IP phones are supposed to work.

After poking around the Internet reading about the SIP and RTP protocols, I discovered a really slick wizard in TMG which allowed me to create rules for VOIP. I went through the wizard and ended up with three new firewall policy rules:

Rules

This made sense so I grabbed an IP phone and dialed someone. The call went through (yippee!!!) but I couldn’t hear them and they couldn’t hear me (sigh).

So that would mean the SIP signaling went through fine, but the RTP traffic, which carries the voice, didn’t.

What The TMG Log Showed

I started a log query on TMG restricted to just the IP phone’s IP address and made another prank call. This is what the log showed me:

Log 1

There are two things I see wrong here. The first and most obvious is that the connection was denied. The second is that the protocol name is ‘Phones 6060’, an old protocol I had defined earlier while working through my ignorance of how IP phones worked, and which I have since deleted.

I see that the destination port is an even port number. According to the RTP specification, the media is sent over an even port number and the corresponding control traffic (RTCP) is carried over the next higher port number, which is odd. So this log is telling me that the phone was trying to perform RTP communications over the even port 32746, but the attempt was denied, and that would be why I don’t see any communications over the odd port 32747.
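
As a side note, the even/odd pairing rule I was checking the log against can be put into a tiny bit of code. This is purely an illustrative sketch of the RFC 3550 convention, not anything TMG itself exposes:

    static class RtpPorts
    {
        // RTP convention (RFC 3550): media flows on an even port and the matching
        // RTCP control traffic uses the next higher, odd port.
        public static (int rtp, int rtcp) PairFor(int mediaPort)
        {
            int rtp = (mediaPort % 2 == 0) ? mediaPort : mediaPort - 1; // force even
            return (rtp, rtp + 1);
        }
    }

    // RtpPorts.PairFor(32746) yields (32746, 32747) -- the pair implied by the log above.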

But why is the protocol ‘Phones 6060’ showing up in the logs? I know I deleted it, and I have even rebooted the TMG server since then, so there’s no reason for it to appear. The fact that this deleted protocol is showing up, and the fact that TMG is not allowing the IP phones out even after I created the proper firewall rules, makes me suspect that what I see in the TMG UI is not actually what TMG is doing under the covers. Thanks Microsoft.

Well, it’s a rainy and stormy day today so this is as good a day as any to work through this…

Moving to the Clouds Part I


The decision

My church has decided to sell their land and buildings in order to build and move into a new location about 20 miles away. The new site will not be completed for about a year, so in the meantime they’ll be working out of temporary offices and their homes in a much more loosely coupled fashion.

Since I handle their email, file shares, networking, and various other IT stuff on the side, I decided it was time to look at moving their data to the cloud.


Where to put all those files?

They easily have over 400GB of files on their network shares. Google Docs was the first place I looked, but I found that I’d have to convert all those files to Google’s format and that the docs could not be cached locally, so users would have to be online to get to their documents. That was a non-starter for me, so I nixed Google Docs.

Next I took a look at Amazon S3. Amazon makes it very easy to get this many documents up to S3: I could just send them a USB drive with all of the docs on it. And the pricing was perfect: at $0.14/GB, storage for 400GB comes to about $56/month, and with the estimated number of requests and the data transfer added in, they would be looking at about $100/month.


What about the email?

I began to prepare the files for shipping to Amazon by locking their file shares down to read-only access and copying the files to a removable drive over a period of several days. As I was in the middle of doing this, I began to hear and read about Microsoft’s Office 365 and realized that would be perfect for their email. I wouldn’t have to manage a mail server any longer or be a liaison to their current mail provider.

For far less than they were paying their current provider, they could have their email in Exchange and their working office files in SharePoint in the cloud. I researched the O365 plans and set them up with the E4 plan. With this plan, they’ll be able to use and upgrade to the latest version of Office with each release, have multiple GB of mail storage, and have tens of GB of SharePoint storage.


To be continued…

I’ll continue this series with information on moving their email, setting up their contacts, and moving their files to Office 365 and Amazon.

Does Kerberos authentication affect SQL Server connection pooling?

The Question Submitted To Me

A manager at my company asked me this question. I told him that I didn’t know for sure, but I didn’t *think* Kerberos authentication would affect connection pooling. Putting on my thinking cap, I explained that the connection pool in an ADO.Net application (he was concerned only with .Net apps) is keyed on the connection string, so if the connection string didn’t change between users, a single connection pool would be used for all users.

I went on to tell this manager that with Kerberos-authenticated users, the connection string would be a trusted connection string without specific credentials for each user. Since the connection string would be the same trusted connection string for all users, there would be one connection pool for all of the Kerberos-authenticated users.
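
To make that reasoning concrete, here’s a minimal sketch of what I had in mind (the server and database names are placeholders):

    using System.Data.SqlClient;

    class TrustedConnectionSketch
    {
        static void Main()
        {
            // Integrated security: no per-user credentials in the string, so the
            // string is byte-for-byte identical for every Kerberos-authenticated user.
            string cs = "Data Source=SQL1;Initial Catalog=KERBTEST;Integrated Security=SSPI;";

            using (SqlConnection conn = new SqlConnection(cs))
            {
                conn.Open(); // my theory at this point: one pool, keyed on cs alone
            }
        }
    }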

After I told him all of this, he asked me to put together a test to document my findings, so I set about creating a test which could prove this one way or the other. At this point, though, I was pretty sure I was right because it *seemed* right given everything I knew.

Boy, was I in for an eye opener.   

Overview Of My Test

The test I came up with was to write a test client which connected to a web service on another machine. This web service would then connect to a second web service on a second machine, which would in turn connect to a SQL Server database on a third machine:

[diagram: test client → WCF web service 1 → WCF web service 2 → SQL Server]

I set up these machines in the trusty Hyper-V environment I used for the SharePoint 2010 farm.

To validate my theory, I decided to create 5 domain users, user1 – user5, and run each of them through my test one by one. I figured that at the end of the test, the SQL Server would have either 5 separate connections, one for each user, or a single session shared by all of the users through the connection pool.

At this point, I had been doing a lot of reading on Kerberos, and I ran across a document from Microsoft which flat out told me that Kerberos authentication would defeat the connection pool. However, I still thought I was right and decided I’d ignore that document; maybe they were talking about an issue that didn’t apply to this environment. Besides, I was in the middle of setting up this test and I wanted to finish it.

Running The Test

My test client presented a menu and gave the user an option of calling one of two operations:

WhoAmI() – Returns the identity of the caller: both the thread identity and the current Windows identity.

Here is a screen shot of the WhoAmI() test:

WhoAmI() Test Results

ExecuteSQLServerDBCommand() – Accepts 2 parameters: a database connection string and a SQL string. This method connects using the database connection string, executes the SQL, and returns the SPID of the current DB connection and the number of rows affected by the executed SQL.

Here is a screen shot of the ExecuteSQLServerDBCommand() test:

ExecuteSQLServerDBCommand() Test Results


SQL Server Configuration

I’ll start with the SQL Server endpoint of my test because it’s going to be easier to explain each endpoint by starting at the end and working my way back to the test client.

My SQL Server was a SQL Server 2008 instance running on Windows Server 2008 R2 Standard. I created a simple database named KERBTEST and gave all 5 domain users access to it:

KERBTEST SQL Server Database

WCF Web Service 2

I wrote this web service as a WCF web service configured for Kerberos authentication. It was the final endpoint before the SQL Server, and it contains the two operations WhoAmI() and ExecuteSQLServerDBCommand(). A partial block of the code in WCF Web Service 2 is shown below:


        #region IService2 Members

        // Impersonation is required so the operation runs under the Kerberos-
        // authenticated caller's Windows identity, not the service account.
        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService2.WhoAmI()
        {
            string result = String.Format("{3}System.Security.Principal.WindowsIdentity.GetCurrent().Name = {1}{0}System.Threading.Thread.CurrentPrincipal.Identity.Name = {2}{0}"
                , Environment.NewLine
                , System.Security.Principal.WindowsIdentity.GetCurrent().Name
                , System.Threading.Thread.CurrentPrincipal.Identity.Name
                , GetDecoratedFunctionName("WhoAmI"));

            return result;
        }

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService2.ExecuteSQLServerDBCommand(string dbcs, string commandSQL)
        {
            StringBuilder result = new StringBuilder();

            result.Append(GetDecoratedFunctionName("ExecuteSQLServerDBCommand"));
            try
            {
                using (SqlConnection conn = new SqlConnection(dbcs))
                {
                    conn.Open();

                    // Capture the SPID so the test can tell whether two users
                    // shared one pooled connection or received separate ones.
                    string spid;
                    using (SqlCommand spidCmd = new SqlCommand("select @@SPID", conn))
                        spid = spidCmd.ExecuteScalar().ToString();

                    using (SqlCommand cmd = new SqlCommand(commandSQL, conn))
                    {
                        int rowsAffected = cmd.ExecuteNonQuery();
                        result.Append(String.Format("SQL Server connection established:{0}\tSPID = {1}{0}\tCommand executed. Rows affected = {2}{0}"
                            , Environment.NewLine
                            , spid
                            , rowsAffected));
                    }
                }
            }
            catch (Exception ex)
            {
                result.Append(String.Format("{0}{1}", Environment.NewLine, ex.ToString()));
            }

            return result.ToString();
        }

        #endregion
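
One thing the partial block above doesn’t show is how the service was configured for Kerberos. The essential setting is Windows client credentials on the binding; here is a rough sketch of that setting expressed in code (illustrative only, not the project’s exact configuration):

    using System.ServiceModel;

    static class KerberosBinding
    {
        public static WSHttpBinding Create()
        {
            // Message security with Windows credentials lets WCF authenticate
            // callers via Kerberos and makes service-side impersonation possible.
            WSHttpBinding binding = new WSHttpBinding(SecurityMode.Message);
            binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;
            return binding;
        }
    }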

WCF Web Service 1

I also wrote this web service as a WCF web service configured for Kerberos authentication. It had the same operations as Web Service 2 (WhoAmI() and ExecuteSQLServerDBCommand()).

The WhoAmI() operation did the same as the WhoAmI() operation in the second web service and then called WhoAmI() on the second web service.

The ExecuteSQLServerDBCommand() operation did nothing in this web service except call the ExecuteSQLServerDBCommand() operation in the second web service.

A partial block of the code in the WCF Web Service 1 is shown below: 


        #region IService1 Members

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService1.WhoAmI()
        {
            ServiceReference2.Service2Client svc = new WcfService1.ServiceReference2.Service2Client();
            // Allow the caller's impersonated identity to flow on to the second service.
            svc.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;

            string result = String.Format("{3}System.Security.Principal.WindowsIdentity.GetCurrent().Name = {1}{0}System.Threading.Thread.CurrentPrincipal.Identity.Name = {2}{0}{4}"
                , Environment.NewLine
                , System.Security.Principal.WindowsIdentity.GetCurrent().Name
                , System.Threading.Thread.CurrentPrincipal.Identity.Name
                , GetDecoratedFunctionName("WhoAmI")
                , svc.WhoAmI()
                );

            return result;
        }

        [OperationBehavior(Impersonation = ImpersonationOption.Required)]
        string IService1.ExecuteSQLServerDBCommand(string dbcs, string commandSQL)
        {
            StringBuilder result = new StringBuilder();
            result.Append(GetDecoratedFunctionName("ExecuteSQLServerDBCommand"));
            try
            {
                ServiceReference2.Service2Client svc = new WcfService1.ServiceReference2.Service2Client();
                svc.ClientCredentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Impersonation;
                // Simply forward the call; the database work happens in service 2.
                result.Append(svc.ExecuteSQLServerDBCommand(dbcs, commandSQL));
            }
            catch (Exception ex)
            {
                result.Append(String.Format("{0}{1}", Environment.NewLine, ex.ToString()));
            }

            return result.ToString();
        }

        #endregion

Results

I ran my test and executed two ExecuteSQLServerDBCommand() operations: one for user1 and one for user2. Remember that ExecuteSQLServerDBCommand() returns the SPID of the current connection, so if I was right and Kerberos authentication did not affect the connection pool, the same SPID would be returned for both users.

Here is a screen shot of that test:

Full Test Results

The test proved that Kerberos authentication does affect the ADO.Net connection pool. I have a friend who often uses the single-word phrase “sigh…” to appropriately sum up most situations. So that’s what I said at this point. Sigh…

The reason Kerberos authentication affects the connection pool is that ADO.Net keys the connection pool not just on the connection string, but also on the user’s Windows identity. So if 100 users connect to the database with the same trusted connection string, 100 pools get created, one per user, each holding that user’s connections.
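
Boiled down to code, the probe at the heart of the test looks like this (a minimal sketch; the impersonation context comes from the WCF operation attributes shown earlier, and the connection string is a placeholder):

    using System.Data.SqlClient;

    static class SpidProbe
    {
        // Opens a pooled connection under the caller's current Windows identity
        // and returns the server-side SPID. Run while impersonating user1 and
        // then user2: two different SPIDs come back, because the pool is keyed
        // on (connection string, Windows identity), not the string alone.
        public static string GetSpid(string cs)
        {
            using (SqlConnection conn = new SqlConnection(cs))
            using (SqlCommand cmd = new SqlCommand("select @@SPID", conn))
            {
                conn.Open();
                return cmd.ExecuteScalar().ToString();
            }
        }
    }
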
I told the manager who had tasked me with settling this issue about my findings. He and I then presented them to a VP, who promptly told us that in Oracle this would not happen. I didn’t believe the VP but didn’t tell him so. After all, I was zero and one in my competitions to prove my theories against someone more knowledgeable.

I won’t go into my research on the Oracle issue, but I did quite a bit of it and found an answer. Unfortunately, after finding the answer, I moved from zero and one to zero and two. Oracle has the ability to allow User2 to connect through User1’s connection and perform operations on it, so there is no need for two separate connections.
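
For the curious, the Oracle feature the VP turned out to be right about is proxy authentication. Here is a hedged sketch of what it looks like from ODP.NET, based on my reading of the documentation; the connection string attribute names and all server/user names below are illustrative and should be verified before use:

    using Oracle.DataAccess.Client; // ODP.NET

    class OracleProxySketch
    {
        static void Main()
        {
            // Proxy authentication: the pool is built on the proxy account's
            // session and "User Id" rides on top of it, so there is no need
            // for a separate per-user pool.
            string cs = "Data Source=ORA1;User Id=user2;Proxy User Id=appproxy;Proxy Password=proxypwd;";

            using (OracleConnection conn = new OracleConnection(cs))
            {
                conn.Open();
            }
        }
    }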

Setting up a Sharepoint 2010 Hyper-V Lab

Recently Microsoft released SharePoint 2010 beta and quite a few other pieces of software: Visual Studio 2010 beta, Windows Server 2008 R2 RTM, and Windows 7 RTM. Together these made it possible to deploy a SharePoint 2010 farm and do some development against it with the latest server and application technology. So the time was right to set up SharePoint 2010 and see what it’s got. My plan was to set up a SharePoint 2010 farm so I could get down into the details of a farm deployment and see what was new there, and to do some development against the farm with the new Visual Studio and Win7 to see what had changed or been introduced.

In order to set up this farm and development workstation, I figured I’d set up a Hyper-V lab. For that I knew I’d need two machines: a Hyper-V server and a Vista workstation which I would use to continue the project work I’m tasked with at my company. I had a Dell Precision workstation with a decent amount of memory, disk space, and CPU horsepower, so I decided to use it as my Hyper-V server, and I had a Vista laptop for my daily project work.

2008 R2 Core Or Full?

My first question was: should I install the Core or the Full installation of 2008 R2 for my Hyper-V host? At first I chose Core because I didn’t want to waste any of the Precision workstation’s resources that could be used for the farm or for development, so I went ahead and installed Core and set up my Vista laptop to manage the Core Hyper-V installation. This all went fairly smoothly until I got the bright idea to use System Center Virtual Machine Manager (SCVMM) to help me manage my virtual lab. I wanted to give SCVMM a spin because in the past I had used Virtual Server 2005 and SCVMM’s predecessor, Virtual Machine Manager, and I wanted to see what was new in virtual machine management with Hyper-V.

I figured I could run the SCVMM management console on my Vista workstation and use it from there, but after downloading SCVMM and attempting to install it, I found that I could not: the management console has to be installed on Server 2008. That meant I could either deploy a guest VM for SCVMM or re-install my Hyper-V server as a Full installation and run SCVMM on it. I chose to re-install my Hyper-V server as a Full installation, since that seemed like less of a waste of the Precision workstation’s physical resources.

How To Back It All Up?

The next item I wanted to address was how to back up my guest VMs. I knew that since I would be installing beta software and working through hotfixes and whatnot, I’d need a good backup strategy. After all, the only way I knew to learn something really well was to break it and then have to fix it, and restoring from a backup made fixing things that much easier. There are two basic ways in which a Hyper-V guest VM can be backed up:

  1. Windows Server Backup (WSB)
  2. Data Protection Manager (DPM)

Checkpoints

Checkpoints are a kind of lightweight backup, so I didn’t see them as a long-term backup solution; I saw them as a way to make a quick save of a VM before attempting something which might break it. In addition, my experience with Virtual Server 2005 checkpoints left me with two bad memories: you couldn’t restore a checkpoint without losing all checkpoints after the restored one, and constantly checkpointing and restoring a domain member server would inevitably corrupt that member server’s domain association.

However, when using the new SCVMM checkpoints, I realized that I could not only restore to a checkpoint without losing the checkpoints later in time, but I could also fork the checkpoints, creating both child and sibling checkpoints:

SCVMM Checkpoints

So checkpoints for me became a good and quick backup tool but still didn’t satisfy my need for a full backup.

Windows Server Backup (WSB)

WSB is the successor to NTBackup and does what you’d expect of it: it backs up and restores folders and files. But it can also operate at a higher level and use an application’s Volume Shadow Copy Service (VSS) writer to back up and restore at the application level. Hyper-V has a VSS writer which, through a registry edit (sketched in code at the end of this section), can be used by WSB.

I had an external 500GB drive lying around that I was using as the SCVMM library for my ISO and other installation images, so I figured I would also use it as my backup media. After I got WSB configured with the Hyper-V VSS writer and reached the step in the WSB backup wizard to select the backup destination, it didn’t show my external drive:

WSB Wizard

I went hunting online to see what I could do and found some articles about backing up to an external drive with WSB, but it seemed I had to reformat the disk first. Since this external drive also hosted the VM library of ISOs and other binaries I needed for my Hyper-V lab, formatting it was out of the question, and that ended my backup effort with WSB.
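
For reference, the registry edit I mentioned above, which registers the Hyper-V VSS writer with WSB, can be expressed in code roughly like this. This is a sketch from my notes on the Microsoft KB article; the GUID is the Hyper-V VSS writer’s ID, and both it and the key path should be verified against the KB before running this on a real host:

    using Microsoft.Win32;

    class RegisterHyperVVssWriter
    {
        static void Main()
        {
            // Tells Windows Server Backup about the Hyper-V VSS writer so guest
            // VMs can be backed up at the application level (per KB958662).
            const string keyPath =
                @"SOFTWARE\Microsoft\Windows NT\CurrentVersion\WindowsServerBackup\Application Support\{66841CD4-6DED-4F4B-8F17-FD23F8DDC3DE}";

            using (RegistryKey key = Registry.LocalMachine.CreateSubKey(keyPath))
            {
                key.SetValue("Application Identifier", "Hyper-V");
            }
        }
    }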

Data Protection Manager (DPM)

The next backup method was DPM, which is a more enterprise-level backup solution from Microsoft. I downloaded and installed DPM 2007 Service Pack 1, which meant that I also had to install SQL Server 2005, along with its necessary service packs, to get it running on 2008 R2.

I didn’t have too much trouble getting DPM installed and configured; it was all pretty much what I would expect of a backup solution. However, when I got to the point of adding disks to the storage pool to be used as the backup destination, I got another unpleasant surprise: DPM did not show my external drive, so I could not add it to the storage pool:

DPM Disk Selection

At this point, I went back to the Internet to see what was going on and found out that DPM does not support backing up to external USB drives.

So at that point, I had no backup solution for my Hyper-V installation. I do have plans to find another external drive and use WSB to back up to it.

Clones, Templates, and Virtual Machines In the SCVMM

My virtual SharePoint lab will eventually consist of 7 guest VMs:

  1. Domain Controller
  2. SQL Server
  3. Index Server
  4. Web Server
  5. Web Server
  6. Development workstation
  7. Development workstation

Guest VMs

Cloning

When I began to create my guest VMs, I wanted the ability to store a base 2008 R2 server in my SCVMM library that was fully patched and ready to deploy. I envisioned being able to deploy multiple guest VMs from this one copy in the library. After installing and patching my first server guest VM, I didn’t store it directly to my library. Instead, I used the ‘clone’ option on the context menu to create 4 additional servers: IDX1, SQL1, WEB1, and WEB2. I knew this probably wasn’t right, that it couldn’t be that easy, but I did it anyway, and I was correct: it wasn’t right. Each of my 5 guest VM servers had the same SID, which I verified using the psgetsid utility. I think clones are useful only for backing up a machine and storing it in the SCVMM library.

Storing a VM in the SCVMM library

The next thing I tried was to take that first fully installed and patched server, run sysprep on it, and store it in the SCVMM library. I figured that having a sysprepped machine in my library would allow me to deploy it as a running guest VM multiple times, and each of those guest VMs would get a different SID. But when I went to pull the VM from the library and deploy it as a guest VM, I found 2 problems:
  1. I couldn’t rename the machine. I had it named something like ‘Base 2008 R2  x64’ so I couldn’t rename it SQL1, WEB1, etc.
  2. When deploying the VM from the library, it was not copied out of the library, it was moved. So this was a one-time operation, which pretty much defeated the point of storing a single copy in my library so that I didn’t have to install and patch 5 separate servers.

Creating a template in the SCVMM library

My 3rd and last attempt at accomplishing my vision turned out to be the charm. I took that first fully patched server, which would serve as my base server, and told SCVMM to create a template from it. Before creating the template, I figured I needed to run sysprep on it since that seemed correct, but while watching the SCVMM jobs that create the template, I noticed that one of the steps was to sysprep the machine:

Sysprepping The Template

So creating the template also sysprepped the machine. Very nice. I now had the ability to store a 2008 R2 server template in my library which I could pull and deploy multiple times as needed.

Setting up Wireless 802.1x

I am in the process of setting up 802.1x wireless security at my church. I manage my church’s network as a personal charitable effort, and until recently the church had a single flat network segment with servers and wired/wireless workstations all on the same segment. The wireless was set up without requiring WPA or any other security so that visiting ministers could jump on and get out to the Internet.

I definitely didn’t like the fact that guest users could be on the same network as the servers and the church staff’s workstations, so I sat down to think about the best way to work this out. The church wanted to keep the guest wireless because it made things much easier when guests needed access to the Internet. However, they also listened when I told them what a bad idea it was to mix their private and guest networks.

The first thing I did was physically separate the networks. This phase is almost done; I still need to run some cabling from the sanctuary over to the offices, which are in a separate building, so that I can install guest and private wireless in the sanctuary. But the office and Sunday School areas are separated now, and I’ve placed WPA2 security on the private network to secure it.

My next phase was to implement wireless 802.1x, so I went through the purchase requisitions to get the Windows OS upgrades (they were still on Windows 2000) and implement a 2008 domain and PKI. I then read this article: http://technet.microsoft.com/en-us/library/cc771455(WS.10).aspx, followed the instructions, and after completing them thought I was ready to test and see if it worked. But of course it didn’t. I have never done anything this complicated that actually worked the first time, no matter how methodically I followed the instructions. You’d think I’d know that by now, but I really thought this would work the first time.

Since then, my workload has increased and this became too much to juggle. I knew that at least we had separate guest and private networks, so I chose to put this on hold for a bit.

In the last week, I’ve had some relief from the work projects, so I have extra time in the evenings, and my attention has turned back to this 802.1x project at my church. I figured this time I’d set up a Vista workstation as a sort of lab for testing this wireless auto-connect project.