My diary of software development

Archive for the ‘Software Development’ Category

Async.js with TypeScript

I became an instant fan of the Async JavaScript Library the first time I used it. I am currently working on an HTML5 project at work which contains a lot of asynchronous code to access IndexedDB as well as to make multiple web service calls. I was talking to another developer at my company about the ‘ugliness’ and maintenance nightmare of all this async code when he introduced me to the Async library.

The code below is an example of the asynchronous code in our application. As you can tell we’re using the async.waterfall() control flow:

   1:              var deferredResult = $.Deferred<String>();
   2:              var self = this;
   3:              async.waterfall([
   4:                  (callback: any) =>
   5:                  {
   6:                      self.dbContext.ReadWorkUnitSummary(groupId)
   7:                          .then((summary: Interfaces.Models.IWorkUnitSummary) =>
   8:                          {
   9:                              self.WrapUnits(summary);
  10:                              callback(null, summary);
  11:                          })
  12:                          .fail(error => callback(error));
  13:                  },
  14:                  (summary: Interfaces.Models.IWorkUnitSummary, callback: any) =>
  15:                  {
  16:                      self.dbContext.CollapseDataRecords(summary.DataRecords)
  17:                          .then((dataRecord: Interfaces.Models.IDataRecord) => callback(null, dataRecord))
  18:                          .fail((error) => callback(error));
  19:                  },
  20:                  (dataRecord: Interfaces.Models.IDataRecord, callback: any) =>
  21:                  {
  22:                      self.dataWebService.GetSiteRule(dataRecord.DataRecordId)
  23:                          .then((siteRule: Interfaces.Models.IDataSiteRule) =>
  24:                          {
  25:                              self.ProcessSiteRule(siteRule);
  26:                              callback(null, siteRule);
  27:                          })
  28:                          .fail(error => callback(error));
  29:                  },
  30:                  (siteRule: Interfaces.Models.IDataSiteRule, callback: any) =>
  31:                  {
  32:                      self.dataWebService.CreateDataSite(siteRule)
  33:                          .then((url: string) =>
  34:                          {
  35:                              deferredResult.resolve(url);
  36:                              callback();
  37:                          })
  38:                          .fail(error => callback(error));
  39:                  }
  40:              ],
  41:              (possibleError: any) =>
  42:              {
  43:                  if (possibleError)
  44:                  {
  45:                      deferredResult.reject(Error);
  46:                  }
  47:              });
  48:   
  49:              return deferredResult;

Readable and approachable

I believe the TypeScript language and the Async library make the above code both readable and approachable. It is readable because a developer can parse it in her head without needing outside documentation or the author standing there explaining it. The code is approachable because there is an obvious, simple recurring pattern in it, so the reader feels comfortable and doesn’t run away screaming “we’ve got to rewrite this! I have no idea how it works!”.

I feel that when I write readable and approachable code, it becomes much easier for another developer to pick up and modify than if I had written the code to be ‘amazing’ and understandable only by myself (and honestly, I’ll forget most of it within 6 months anyway). So kudos to TypeScript and Async for helping me deliver maintainable code to my customer.

Becoming less readable and approachable

As I continued working with asynchronous code I began to realize that, when debugging, it wasn’t easy to tell how I got to where I was in the debugger. Often a block such as the one above would call another block, and it would then be difficult at best to sift through the call stack to find the block of asynchronous code that made the call.

For example here is what the call stack looks like in the Chrome debugger when stopped inside the ProcessSiteRule() method called on line 20 above:

[Image: Chrome debugger call stack]

Another problem I was running into was what happened when an exception was thrown from one of the anonymous methods in the Async call chain. If an exception was thrown from within an anonymous method, the rest of the Async call chain would not be called, the promise would never be resolved, and the caller would be left hanging forever. This type of bug is really difficult to track down.

I could fix some of that by placing a try/catch into each anonymous task, but then what text would I give the rejection? For example, if I added lines 5 and 6 to the async step below, I would get a runtime exception. I could then catch the exception and reject the promise on line 17, but the information I use to reject the promise would still make it difficult to track down this block of code when reading the logs.

   1:  (dataRecord: Interfaces.Models.IDataRecord, callback: any) =>
   2:  {
   3:      try
   4:      {
   5:          var cow: any = {};
   6:          cow.bark();
   7:          self.dataWebService.GetSiteRule(dataRecord.DataRecordId)
   8:              .then((siteRule: Interfaces.Models.IDataSiteRule) =>
   9:              {
  10:                  self.ProcessSiteRule(siteRule);
  11:                  callback(null, siteRule);
  12:              })
  13:              .fail(error => callback(error));
  14:      }
  15:      catch (e)
  16:      {
  17:          callback("Exception when getting the site rule for DataRecord. -\n" + JSON.stringify(e, null, 4));
  18:      } 
  19:  },

 

In addition, placing these draconian try/catch blocks in each anonymous Async task would turn what was a simple stack of asynchronous tasks into something much heavier and less readable than before.

Asynchronous job

I wanted to find a way to do the following with our asynchronous code:

  1. Provide specific information about which task in the anonymous Async call chain threw an exception.
  2. Not leave the caller hanging on an unresolved promise when an exception is thrown in an asynchronous task.
  3. Be readable and approachable.

As I sat back and studied the code I realized that most of my code was using the async.waterfall() or async.series() control flows. I started thinking of our async code such as the above block as an Async Job. Those multiple asynchronous tasks handed to the Async control flow would be steps in an AsyncJob.

Here is the class I came up with; it does the same thing as the async.waterfall() at the beginning of this article. I call the general concept an AsyncJob, and my specific class is named GetDataSiteUrlJob. This is the basic structure of my class without the implementation:

   1:  export class GetDataSiteUrlJob
   2:  extends Helpers.AsyncJob.BaseAsyncJob
   3:  implements Interfaces.Helpers.IAsyncJob
   4:  {
   5:  constructor(groupId: number,
   6:      dataWebService: Interfaces.WebService.IDataWebService,
   7:      dbContext: Interfaces.Db.IDbContext)
   8:   
   9:   
  10:  public Run(): JQueryPromise<string>
  11:            
  12:  private Initialize()
  13:   
  14:  private ReadWorkUnitSummary(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  15:   
  16:  private WrapUnits(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  17:   
  18:  private CollapseDataRecords(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  19:        
  20:  private ProcessSummarySiteRule(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  21:   
  22:  private CreateDatasite(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  23:  }

I’ll explain each element of this class, its base class, its interface, and the delegates in more detail later, but first I want to show the information that can be provided when an exception is thrown in one of the Async tasks.

I added this breaking code to the GetDataSiteUrlJob.ProcessSummarySiteRule() method:

   1:  private ProcessSummarySiteRule(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
   2:  {
   3:      var cow: any = {};
   4:      cow.bark();
   5:   
   6:      callback();
   7:  }
 
When I ran the GetDataSiteUrlJob the information below is what it was able to provide when the cow could not bark in ProcessSummarySiteRule():
 

[Image: exception details reported by the AsyncJob]

Interface explanation

   1:  export interface IAsyncJob
   2:  {
   3:      Run(): JQueryPromise<any>;
   4:  }

 

The IAsyncJob interface is quite simple. There is one method to call: Run() which returns a promise.

 

Child class explanation

Here is the relevant code of the GetDataSiteUrlJob class:

   1:  export class GetDataSiteUrlJob
   2:  extends Helpers.AsyncJob.BaseAsyncJob
   3:  implements Interfaces.Helpers.IAsyncJob
   4:  {
   5:  private workUnitSummary: Interfaces.Models.IWorkUnitSummary;
   6:  private groupId: number;
   7:  private dataWebService: Interfaces.WebService.IDataWebService;
   8:  private dbContext: Interfaces.Db.IDbContext;
   9:  private siteRule: Interfaces.Models.IDataSiteRule;
  10:  private collapsedDataRecord: Interfaces.Models.IDataRecord;
  11:   
  12:  constructor(groupId: number,
  13:  dataWebService: Interfaces.WebService.IDataWebService,
  14:  dbContext: Interfaces.Db.IDbContext)
  15:  {
  16:      super($.Deferred<any>());
  17:   
  18:      this.groupId = groupId;
  19:      this.dataWebService = dataWebService;
  20:      this.dbContext = dbContext;
  21:   
  22:      this.Initialize();
  23:  }
  24:   
  25:  public Run(): JQueryPromise<string>
  26:  {
  27:      return this.PerformRun();
  28:  }
  29:            
  30:  private Initialize()
  31:  {
  32:      this.AppendJobStep(this.ReadWorkUnitSummary);
  33:      this.AppendJobStep(this.WrapUnits);
  34:      this.AppendJobStep(this.CollapseDataRecords);
  35:      this.AppendJobStep(this.ProcessSummarySiteRule);
  36:      this.AppendJobStep(this.CreateDatasite);
  37:  }
  38:   
  39:  private ReadWorkUnitSummary(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  40:   
  41:  private WrapUnits(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  42:   
  43:  private CollapseDataRecords(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  44:          
  45:  private ProcessSummarySiteRule(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  46:   
  47:  private CreateDatasite(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  48:  {
  49:      var self = this;
  50:      this.dataWebService.CreateDataSite(this.siteRule)
  51:          .then((url: string) =>
  52:          {
  53:              self.jobResult = url;
  54:              callback();
  55:          })
  56:          .fail((error) => callback(error));
  57:  }
  58:  }

 

On line 16 in the constructor, we pass in the deferred object which we want to use.

Line 25 is the implementation of IAsyncJob.Run(). It calls the base class method PerformRun() which executes each of our job steps.

Line 30 is the Initialize() method which was called from the constructor. This is where we append each job step to the job. These job steps are akin to the asynchronous methods passed to Async.

On line 53 the code is setting self.jobResult to a URL. The self.jobResult is defined in the base class and represents what will be passed into the deferred.resolve() method at the end of this job. Remember that the IAsyncJob.Run() returns a promise.

Each job step must be defined to take a CallbackDelegate parameter. This parameter is akin to the callback argument to each of the anonymous methods in the Async call:

   1:  (callback: any) =>
   2:  {
   3:      self.dbContext.ReadWorkUnitSummary(groupId)
   4:          .then((summary: Interfaces.Models.IWorkUnitSummary) =>
   5:          {
   6:              self.WrapUnits(summary);
   7:              callback(null, summary);
   8:          })
   9:          .fail(error => callback(error));
  10:  },

 

Base class

   1:  export class BaseAsyncJob
   2:  {
   3:  public jobResult: any;
   4:  public deferredResult: JQueryDeferred<any>;
   5:  public jobSteps = new Array<Delegates.JobStepDelegate>();
   6:  private currentStepName: string;
   7:  private jobCallStack = new Array<String>();
   8:   
   9:   
  10:  constructor(deferredResult: JQueryDeferred<any>)
  11:  {
  12:  this.deferredResult = deferredResult;
  13:  }
  14:   
  15:  public PerformRun(): JQueryPromise<any>
  16:  {
  17:  try
  18:  {
  19:      var proxySteps = this.CreateProxySteps();
  20:      async.series(proxySteps, this.CompleteRun.bind(this));
  21:  }
  22:  catch (e)
  23:  {
  24:      this.deferredResult.reject(e);
  25:  }
  26:   
  27:  return this.deferredResult.promise();
  28:  }
  29:   
  30:  public HandleExceptionDuringJobStep(exception: any, callback: any): void
  31:  {
  32:  try
  33:  {
  34:      var exceptionDescription =
  35:          {
  36:              ExceptionInMethod: this.currentStepName,
  37:              Exception: exception,
  38:              JobCallStack: this.jobCallStack,
  39:          };
  40:   
  41:      callback(exceptionDescription);
  42:  }
  43:  catch (ex)
  44:  {
  45:      callback(ex);
  46:  }
  47:  }
  48:   
  49:  public CompleteRun(possibleError: any): void
  50:  {
  51:  if (possibleError)
  52:  {
  53:      this.deferredResult.reject(possibleError);
  54:  }
  55:  else
  56:  {
  57:      this.deferredResult.resolve(this.jobResult);
  58:  }
  59:  }
  60:   
  61:  private GetClassNameFromConstructor(constructorText: string): string
  62:  {
  63:  var childClassName = "?";
  64:  try
  65:  {
  66:      var funcNameRegex = /function (.{1,})\(/;
  67:      var results = (funcNameRegex).exec(constructorText);
  68:      childClassName = (results && results.length > 1) ? results[1] : "?";
  69:  }
  70:  catch (ex)
  71:  {
  72:  }
  73:   
  74:  return childClassName;
  75:  }
  76:   
  77:  private GetClassAndMethodName(method: Function): string
  78:  {
  79:  var classAndMethodName = "";
  80:   
  81:  try
  82:  {
  83:      var calleeMethodText = method.toString();
  84:      var methodsOnThis = this['__proto__'];
  85:      var methodName: string;
  86:   
  87:      for (var methodOnThis in methodsOnThis)
  88:      {
  89:          if (calleeMethodText == methodsOnThis[methodOnThis])
  90:          {
  91:              methodName = methodOnThis;
  92:              break;
  93:          }
  94:      }
  95:      methodName = (methodName == "?" ? "anonymous" : methodName);
  96:   
  97:      var constructorText = methodsOnThis['constructor'];
  98:      var className = this.GetClassNameFromConstructor(constructorText);
  99:      classAndMethodName = $.format("{0}.{1}()", className, methodName);
 100:  }
 101:  catch (ex)
 102:  {
 103:  }
 104:   
 105:  return classAndMethodName;
 106:  }
 107:   
 108:  private ProxyStep(actualStep: Delegates.JobStepDelegate,
 109:  callback: AsyncJob.Delegates.CallbackDelegate)
 110:  : void
 111:  {
 112:  this.currentStepName = this.GetClassAndMethodName(actualStep);
 113:  try
 114:  {
 115:      this.jobCallStack.push(this.currentStepName);
 116:      if (this.deferredResult.state() != Enums.JQueryDeferredState.Pending)
 117:      {
 118:          callback();
 119:          return;
 120:      }
 121:   
 122:      actualStep.bind(this)(callback);
 123:  }
 124:  catch (ex)
 125:  {
 126:      this.HandleExceptionDuringJobStep(ex, callback);
 127:  }
 128:  }
 129:   
 130:  public AppendJobStep(jobStep: Delegates.JobStepDelegate): void
 131:  {
 132:  this.jobSteps.push(jobStep);
 133:  }
 134:   
 135:  private CreateProxySteps(): AsyncJob.Delegates.ProxyStepDelegate[]
 136:  {
 137:  var proxySteps = new Array<AsyncJob.Delegates.ProxyStepDelegate>();
 138:  this.jobSteps.forEach((jobStep: AsyncJob.Delegates.JobStepDelegate)=>
 139:  {
 140:      var proxyStep = this.ProxyStep.bind(this, jobStep);
 141:      proxySteps.push(proxyStep);
 142:  });
 143:   
 144:  return proxySteps;
 145:  }
 146:  }

 

Line 130 is the AppendJobStep() method. This was called from the Initialize() method in our child class and all it does is push the passed job step into an array.

Line 15 is the PerformRun() method which is called from our child class method Run(). This method first creates an array of proxy steps and then uses the Async.series() method to iterate these steps in an async fashion.

Line 20 executes the CompleteRun() method when the Async.series() is completed in PerformRun().

Line 137 is the CreateProxySteps() method called from PerformRun(). It takes the array of job steps created by the calls to AppendJobStep() and wraps each one inside a call to ProxyStep(). It then appends the wrapped calls to an array which is finally returned from the method.

Line 108 is the ProxyStep() method. The first thing it does is retrieve the Class.MethodName() for the actual job step.

On line 116 the method checks to see if our deferred is still pending which means no previous step has failed or resolved the deferred. If it is still pending the actual step will be executed on line 122.

Line 126 is within the catch block which wraps the call to the job step. This calls the method HandleExceptionDuringJobStep().

Line 30 is the method HandleExceptionDuringJobStep(). This method creates a POJO containing the name of the method in which the exception occurred, the actual exception object, and the job call stack array. Next it executes the callback and passes it the constructed POJO.

Line 49 is the method CompleteRun() which was called inside PerformRun() when the Async.series() was completed. This method will either reject or resolve the deferredResult object. Remember the deferredResult object was created in the constructor of our child class and passed to us in the BaseAsyncJob constructor. It is what our child class Run() method returns to its caller.

Delegates

There are 3 delegates:

  1. CallbackDelegate
  2. JobStepDelegate
  3. ProxyStepDelegate
   1:  export interface CallbackDelegate
   2:  {
   3:  (possibleError?: any): void;
   4:  }
   5:   
   6:   
   7:  export interface JobStepDelegate
   8:  {
   9:      (callback: CallbackDelegate): void;
  10:  }
  11:   
  12:   
  13:  export interface ProxyStepDelegate
  14:  {
  15:      (actualStep: JobStepDelegate, callback: CallbackDelegate): void
  16:  }

Performance Diagnostics for Windows Store Apps in Visual Studio 2013

Pillars of Performance

Microsoft has identified three pillars of performance which shape how users perceive Windows Store (WS) applications. These perceptions cover such things as responsiveness and battery usage, and they are what trigger bad reviews of a WS application.

Line of Business (LOB) applications will not be purchased from the app store and therefore do not depend on reviews for monetization; however, these pillars of performance are still vital to an LOB application.

Fast

A WS Application should be snappy when moving from one view to the next. In addition all elements must remain responsive during a heavy back end operation.

In addition to creating snappy single operations, we must also consider composite operations of back end work combined with UI animations and view changes. A lot of the cost of a composite operation is incurred by the UI framework on our behalf.

Fluid

This pillar describes how ‘buttery smooth’ the UI is during panning, scrolling, or animation. For example scrolling horizontally across data groups on the view or the animation which may show detailed information about a specific data group.

Efficient

This pillar signifies how well the app plays with other apps in the Win8.1 sandbox. For example, if a WS Application has a large number of memory leaks or unnecessary disk read/write activity, it will consume more battery energy than it needs to, forcing the user to stop their work and recharge the battery.

Managed Memory Analysis

Visual Studio 2013 allows us to perform analysis on managed memory in our WS Applications. If we go back and review the three Pillars of Performance, we see that managing memory and cleaning up leaks affects the Fast and Efficient pillars.

For example, we may see a page coming up slowly because it is loading a large object graph into memory. In this case we’ll use the Managed Memory Analysis results to detect this large object graph and may be able to work around the need to keep that much data in memory.

In order to perform the Managed Memory Analysis you will also need the SysInternals tool ProcDump.

Batch File Utility

In addition to Visual Studio 2013 and the ProcDump tool there is another tool (a batch file) which is not required but will make the process of dumping and opening the analyses easier. Prior to running the batch file you should start the application and get it to the state where you want to perform the analysis. You can start the target app through the Start Menu or through Visual Studio (without debugging – Ctrl+F5).

   1:  set ProcDump=SysInternals\ProcDump
   2:   
   3:  set ProcName=App2.exe
   4:   
   5:  %ProcDump% -ma -o %ProcName% BASELINE.DMP
   6:   
   7:   
   8:  @echo ============================================================
   9:  @echo ============================================================
  10:  @echo ============================================================
  11:  @echo The baseline dump file has been written.
  12:  @echo Exercise your app and when you are ready for the next dump,
  13:   
  14:  @pause
  15:   
  16:  %ProcDump% -ma -o %ProcName% FINAL.DMP
  17:   
  18:   
  19:  @echo ============================================================
  20:  @echo ============================================================
  21:  @echo ============================================================
  22:  @echo Your dump files have been completed:
  23:  @dir *.dmp
  24:   
  25:  @FINAL.DMP


 


Batch File Line Numbers

1. Sets the variable named ProcDump to the full path of the ProcDump utility

3. Sets the variable named ProcName to the name of the application you wish to analyze. This is the name which appears in Task Manager when the application is running.

5. Creates the baseline memory dump of the application. This dump file will be named BASELINE.DMP and be stored in the same folder as the batch file.

14. This line pauses the script which allows us to run the application through the states we wish to analyze before taking the final memory dump.

16. This line is executed after the pause and creates the final dump file named FINAL.DMP. It will also be stored in the same folder as the batch file.

25. This last line will cause Visual Studio to be launched and open the final dump file. You will then see a summary of the dump file as shown below:

[Image: dump file summary in Visual Studio]

Execute the Memory Analysis

  1. Click the Action link named ‘Debug Managed Memory’ on the right and you will be presented with this screen:

     [Image: Debug Managed Memory view]

  2. At the top right of the screen click the Select Baseline drop down, then browse to and choose the BASELINE.DMP file. The next screen will show the results of the analysis between the FINAL.DMP and the BASELINE.DMP memory dumps.

Tracking down a Memory Leak

In order to demonstrate the act of tracking down a memory leak we’ll use a simple application that allows the user to transfer back and forth from Page 1 to Page 2 with the click of a button:

[Images: Page 1 and Page 2 of the sample application]

When Page 2 comes up, it creates a large number of objects and never deletes them.
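The leaky code itself isn’t shown in the post, but a minimal sketch of the kind of thing Page 2 does might look like the following. The Order type and the count of 1,500 come from the analysis results discussed below; the static list and the method name are assumptions.

    using System.Collections.Generic;

    public class Order
    {
        public int OrderId { get; set; }
        public string Description { get; set; }
    }

    public sealed class Page2Model
    {
        // A static list lives for the lifetime of the process, so every visit to Page 2
        // adds Order instances that the garbage collector can never reclaim.
        private static readonly List<Order> ordersThatNeverDie = new List<Order>();

        // Hypothetical method invoked each time Page 2 is navigated to.
        public void OnNavigatedToPage2()
        {
            for (int i = 0; i < 1500; i++)
            {
                ordersThatNeverDie.Add(new Order { OrderId = i, Description = "Order " + i });
            }
        }
    }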

After starting the app we must run the batch file to create the BASELINE.DMP file. When the batch script pauses we will switch between the app pages a few times and then have the batch script create the FINAL.DMP file.

Once Visual Studio performs the analysis we will see the view showing the difference between the BASELINE.DMP and the FINAL.DMP memory dumps. This view has several columns in it and they are described next.

[Image: memory analysis diff view]

 

Object Type Column

The type of object for the line. Instances of this type may or may not have been leaked.

Count Column

The number of instances for this type that were left when we took the FINAL.DMP dump file. This number does not indicate a leak if it is greater than zero.

Total Size Column

The total size of the instances which were left when we took the FINAL.DMP dump file. Again, this number does not indicate a leak if it is greater than zero.

Count Diff Column

The difference between the number of instances in the BASELINE.DMP and the FINAL.DMP dump files. This number does indicate a leak if it is greater than zero.

Total Size Diff.

This number represents the size of all of the instances represented by the Count Diff Column. This number does represent a leak if it is greater than zero.

We see there are a lot of objects showing that are in the System namespace and we want to filter our results to just those objects in our application’s namespace. We can do this by typing App2 in the upper left Search box.

[Image: analysis results filtered to the App2 namespace]

We can then immediately see there are 1,500 instances of the Order class which have been leaked.

If we expand the App2.Order object type, we see the 10 largest instances. Then if we select one of those instances we’ll see it is referenced by an Order[] which is referenced by a List<Order> which is finally referenced by an Object[].

We should then look in our application’s codebase to find where Order instances are created and why they’re not being cleaned up.

Energy Consumption

Visual Studio 2013 has a new diagnostics tool which provides Energy Consumption metrics for a WS Application. To run the tool, open a WS application and press Ctrl+Alt+F9 or go to the Debug menu and choose Performance and Diagnostics.

You will be presented with the dialog shown below where you should check ‘Energy Consumption’ and press Start.

[Image: Performance and Diagnostics hub]

In our example, we will run a WS Application under the Energy Consumption tool and perform some operations within the application. After doing so, we will go back to VS where we will see this screen:

[Image: Energy Consumption tool collecting data]

Timeline Graph

Press the ‘Stop Collection’ link to stop the diagnostics run and build a report. After a few seconds you’ll see a timeline graph at the top representing energy consumption over time.

[Image: energy consumption timeline graph]

This report provides power usage metrics for four items:

1. CPU

2. Display

3. Network

4. The Total of the above 3.

Display Usage

The display metrics normally stay constant during the run; however, in the Timeline Chart at the top, the display usage drops to zero between 5s and 12s. This is the result of a problem in the WS Application being tested, so the display usage during this time slot can be disregarded.

Some things which affect display usage are:

· Display size

· Screen brightness

· Display technology

CPU Usage

In the timeline graph of the energy usage by the CPU, we see several spikes between 1s and 5s. This is when the application is starting.

We’ll see another spike between 7s and 10s but again, we can ignore this metric change as it takes place during this particular application’s misbehavior.

The third spike, between 20s and 21s, is where the application is opening and rendering its screen. The last spike, at 69s, happens when the application is shutting down and storing its data.

Network Usage

In this run, we will not have any network usage as the run was performed on a wired computer which has no cell or Wi-Fi data transmissions. However we should know that cell service transmissions can be a real drain on battery life if they are used to communicate a large block of data. This is because the radio is constantly moving down from and back up to a higher energy level as it separates the block of data into smaller blocks for the cell network and transmits/receives them.

Doughnut Graph

This graph is shown on the right side of the report.

[Image: energy consumption doughnut graph]

The paragraph at the bottom indicates how long a ‘Standard Battery’ would last if the user kept performing the same operations; here, about 11 hours. According to Microsoft, the algorithm to determine the battery life uses a software model trained on lower-end devices. So a ‘Standard Battery’ here is a battery typically found on lower-end devices.

A typical use for this information is to first baseline a set of operations in your application to see how much energy was used. If that level is considered too high, the application should be refactored and this report run again.

XAML UI Responsiveness

A key to meeting the Fast and Fluid pillars is to keep the app rendering at 60 FPS. Visual Studio 2013 has a diagnostics tool named ‘XAML UI Responsiveness’ which addresses how responsive your XAML UI is.

The tool is run just like the Energy Consumption tool described earlier. Open your project, press Ctrl+Alt+F9, and check the XAML UI Responsiveness tool:

[Image: Performance and Diagnostics hub with XAML UI Responsiveness checked]

After you press the Start button your application will start, and you should exercise the portion of your application for which you want diagnostics. Then go back to Visual Studio and click the ‘Stop Collection’ link.

XAML UI Responsiveness Report

After clicking the link you will see a report such as this one:

[Image: XAML UI Responsiveness report]

This report is divided into four swim lanes.

Swim Lane 1

Shows how long the application ran during this diagnostic. You can click and drag to limit the other swim lanes to a specific time slice of the run. Note that you can click and drag on swim lanes 1-3.

Swim Lane 2

Shows the CPU utilization on the UI Thread separated into 4 categories:

Parsing

Occurs during parsing of the XAML into a tree of XAML elements.

Layout

Occurs when the XAML element tree is positioned and transformed into the visual tree.

Per Frame Callback

Occurs in code defined in the CompositionTarget.Rendering callback.

XAML Other

Any other application code executed during UI processing is shown in this category; for example, modifying an element’s RenderTransform on a timer.
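To make the last two categories concrete, here is a small hypothetical snippet for a Windows Store XAML/C# app. Work in the CompositionTarget.Rendering handler would show up under ‘Per Frame Callback’, while the same transform change driven from a timer would land in ‘XAML Other’. The class and member names are mine, not taken from the profiled application.

    using Windows.UI.Xaml;
    using Windows.UI.Xaml.Media;

    public sealed class SpinAnimator
    {
        private readonly RotateTransform rotation = new RotateTransform();

        public SpinAnimator(UIElement element)
        {
            element.RenderTransform = rotation;

            // Runs once per frame on the UI thread: accounted for as 'Per Frame Callback'.
            CompositionTarget.Rendering += this.OnRendering;
        }

        private void OnRendering(object sender, object e)
        {
            // Modifying the element's RenderTransform every frame.
            rotation.Angle = (rotation.Angle + 1) % 360;
        }
    }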

Swim Lane 3

Shows our key metric (frames per second) during the run of the application. The FPS is measured on the UI thread and on the Composition thread.

UI Thread

This thread is responsible for running per-frame callbacks (CompositionTarget.Rendering) as well as parsing the XAML and building the XAML element tree. It lays out the element tree by measuring, arranging, and applying styles to each element. Last, it takes the element tree and builds a visual tree by rasterizing each element.

In the screen shot above you’ll notice that the UI thread FPS is much lower than 60 FPS. This is OK since the Composition thread is running at 60 FPS, which means the UI thread is feeding the Composition thread rasterized elements fast enough for it to reach 60 FPS.

Composition Thread

Performs any independent translations and sends the rasterized visual tree to the GPU.

Parsing JavaScript in C# Part I

I have a personal project I am working on in which I want to compile the method and class signatures (but not the implementation logic) of JavaScript files.

I found the solution I’m going to use but not before wandering through a buh-zillion pages on the web and trying a few approaches until I arrived at something which I think will work for what I need. This blog entry is about that wandering.

A JavaScript file for testing

I needed a script file with a class and method in it to use for my tests. There are different ways to create a ‘class’ in JavaScript, but for this test I decided to use the function prototype method generated from Script#.

The C# class:

public class Class1
{
    public void DoSomething(int parm1)
    {
    }
}

The resulting JavaScript:

//! SSGen.debug.js
//

(function() {

////////////////////////////////////////////////////////////////////////////////
// Class1

window.Class1 = function Class1() {
}
Class1.prototype = {

    doSomething: function Class1$doSomething(parm1) {
        ///
        ///
    }
}

Class1.registerClass('Class1');
})();

//! This script was generated using Script# v0.7.4.0

Attempt I – Use the Microsoft.JScript.Vsa.VsaEngine

I found an entry on Rick Strahl’s blog about evaluating JavaScript in C# and although it talks about evaluating JavaScript instead of parsing it, I figured it would be a good place to start.

I wrote this C# test code:

        static private void ParseJsWithVsa()
        {
            string jsPath = @"SSGen.debug.js";
            string javaScript = File.ReadAllText(jsPath);
            VsaEngine engine = VsaEngine.CreateEngine();
            object evalResult = Eval.JScriptEvaluate(javaScript, engine);
        }

And got these compilation warnings:

[Image: compilation warnings]

But it did compile! So I ran it and this was my result:

[Image: runtime error]

Okay, well that’ll be easy to fix, I thought. I decided to just remove the Window. prefix from line 10 of the JavaScript and try it again, with this result:

[Image: runtime error]

Arrgh. I guess the parsing worked without trouble but the actual evaluation failed. I couldn’t find any way to get the parsing results, so I decided not to work on this approach any longer. Instead I decided I’d try the ICodeCompiler noted in the compilation warnings above.

Attempt II – Use the ICodeProvider

Here’s the C#:

        static private void ParseJsWithICodeCompiler()
        {
            string jsPath = @"SSGen.debug.js";
            JScriptCodeProvider jsProvider = CodeDomProvider.CreateProvider("JScript") as JScriptCodeProvider;
            using (TextReader text = File.OpenText(jsPath))
            {
                CodeCompileUnit ccu = jsProvider.Parse(text);
            }
        }

Here’s the result:

[Image: error result from the Parse() call]

Oh well. So much for that idea.

Attempt III – Use JSLint

I know that JSLint is a code quality tool but I figured that somewhere down in the mess of JSLint code it’s got to parse the target script and maybe I could hook into it and get the results of the parsing action.

I created a test ASP.Net web site which would show me the results of running JSLint against my test script file.

Here is my ASP.Net site’s markup:


    JSLint Parse Tester
<script type="text/javascript" src="http://ajax.aspnetcdn.com/ajax/jQuery/jquery-1.7.2.min.js"></script>
<script type="text/javascript" src="Scripts/Test.js"></script>
<script type="text/javascript" src="Scripts/JSLint.js"></script>
<form id="form1">
<div>
<h3>Result of calling JSLINT(script);</h3>

<hr />

<h3 id="resultsTitle"></h3>
<div id="results"></div>
</div>
</form>

And here is the JavaScript I wrote to execute JSLint against my test script file:

///
///

function OnGetScriptToParseComplete(script)
{
    var result = JSLINT(script);
    $('#jsLintResult').text(result);

    if (result)
    {
        var tree = JSON.stringify(JSLINT.tree, [
         'string', 'arity', 'name', 'first',
         'second', 'third', 'block', 'else'
     ], 4);

        tree = tree.replace(/\n/g, "<br/>");   // HTML literals were stripped by the blog; <br/> and &nbsp; are the likely originals
        tree = tree.replace(/ /g, "&nbsp;");
        $('#results').html(tree);

        $('#resultsTitle').text("JSLINT.tree:");
    }
    else
    {
        var errs = JSON.stringify(JSLINT.errors, undefined, 4);
        errs = errs.replace(/\n/g, "<br/>");   // same assumption as above
        errs = errs.replace(/ /g, "&nbsp;");
        $('#results').html(errs).css('color', 'red');

        $('#resultsTitle').text("JSLINT.errors:");
    }
}

$(document).ready(function ()
{
    $.ajax({
        url: "scripts/ToParse.js",
        dataType: 'text',
        success: OnGetScriptToParseComplete
    });
});

And here are the results:

[Image: JSLint parse results]

Before I delved into this approach any further I continued looking online for solutions to parse JavaScript and found something named ANTLR which seemed to be exactly what I needed.

Attempt IV – Use ANTLR

ANTLR is a tool which, among many other things, allows me to generate lexers and parsers in a target language (i.e. C#) to use against a specific grammar.

I’ve been working with ANTLR for a couple of days now; it’s got several ‘pieces’ which must be downloaded, version-matched, and fitted together to do what I need, but it seems to be the best solution so far. I’ll write more about ANTLR and my project in the next part of this series.
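To give a rough idea of where this is heading, the skeleton below shows how an ANTLR-generated lexer and parser are typically driven from C# with the ANTLR 3 runtime. The class names JavaScriptLexer and JavaScriptParser and the root rule name program depend entirely on the grammar used, so treat them as placeholders until Part II.

    using Antlr.Runtime;

    internal static class JsSignatureExtractor
    {
        private static void Main()
        {
            // Token stream over the test script generated by Script#.
            var input = new ANTLRFileStream("SSGen.debug.js");
            var lexer = new JavaScriptLexer(input);        // generated by ANTLR from the grammar (name assumed)
            var tokens = new CommonTokenStream(lexer);
            var parser = new JavaScriptParser(tokens);     // generated by ANTLR from the grammar (name assumed)

            // 'program' is a common root rule name for JavaScript grammars; the resulting
            // tree would then be walked to collect class and method signatures.
            var result = parser.program();
        }
    }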

JavaScript Intellisense with Visual Studio 11 Beta

I’ve spent a lot of time in VS10 editing large JavaScript class frameworks, so I was very thankful for the Intellisense support in the IDE. It was a wonderful experience to write OO JavaScript with Intellisense, but it was hit or miss whether the Intellisense would actually show up in the IDE. It seemed that sometimes the IDE would just ‘forget’ about a class. And there were many other things which made the experience of writing in JS difficult at best, such as all those times when VS10 SP1 would crash intermittently while editing large bodies of JS files.

I decided to take a look at VS11 beta and see how it handled Intellisense when doing JavaScript object oriented development. I knew that Microsoft was targeting JavaScript as one of the languages to build Metro apps from so I figured they may have cleaned up the JS development experience in the IDE.

To write this blog entry, I’m comparing the VS11 Beta as of March 2012 with my experience in VS10 SP1.

Referencing Class Files

In VS10 I had to place a reference comment in the file to see a class in another file:

/// <reference path="Truck.js" />

This isn’t too bad unless you’re working with dozens of class files in which case you’ll have inevitable circular references somewhere. When VS10 ran into circular references, it would slowly leak memory and eventually crash.

In a VS11 Metro project, all you need to do is reference the JS class files from one of your .HTM pages. Once referenced, the class is available to you in every other JS file. So just adding this line to one of the HTM pages in your project will make the Robot class available everywhere:

<script type="text/javascript" src="js/Robot.js"></script>

But not in VS11 Web Projects

The above technique does not work in web projects; it only works in Metro projects, which was very strange to me. In web projects I still had to add reference comments to a JS file to pick up classes defined in other files.

Why? If the IDE can build Intellisense from the script tags in the web files of a Metro project, then why can’t it do the same for a web project? Maybe it’s something to do with the beta version; I don’t know, but I’m anxious to try it out in the RTM version.

How to play Audible.com content on your WP7 phone

No, Audible hasn’t yet released an app to play their content on the new Windows Phone. However there is a way you can play your audiobook content on your WP7 phone and you can do so without fumbling with multiple CDs.

First my story. In February 2011 I left my iPhone in my jeans and sent it through the washer, resulting in no more iPhone for Christopher. I decided to buy a WP7 phone instead of another iPhone and became excited about my new phone’s potential, but it never occurred to me that I wouldn’t be able to play my audiobooks on the new phone because Audible hadn’t released an Audible app. When I finally admitted to myself that there was no Audible app, I started to burn my books to CD and play them on my CD player. But that got old (and expensive) quick, so I waited for the Mango release with great anticipation, thinking that surely there would be an app in Mango that played my Audible content. Mango came and went but still no Audible app.

Sigh…

Since then I’ve figured out a way to play my audiobooks from Audible without having to burn CDs. To play your audiobooks you will need the following:

Things Needed

  1. Your Audible audiobook downloads
  2. iTunes
  3. Virtual CD – Available here
  4. My AudiobookTagger software – Available on CodePlex

Step 1 – Configure VirtualCD’s sound file mode



VirtualCD has the ability to host a virtual CD-RW drive which we’ll use for burning the audiobook from iTunes. In addition, VirtualCD has a sound file mode which causes it to convert each track burned to the virtual CD-RW to MP3.

Step 2 – Use iTunes to burn your Audiobook

Create a playlist in iTunes specifically for your audiobook and drag the audiobook into the playlist. Then right-click the playlist and choose ‘Burn Playlist to Disc’.
Once iTunes completes the burn and the VirtualCD sound file mode has completed converting the tracks, you will be left with a series of directories, one for each CD, containing the MP3 tracks of the CDs.
[Image: Sound File Mode output directories]

At this point you could just drop these MP3 files into your Zune software, but you won’t get any author or book information and the track numbers are all wrong.

The screen shot to the left is what my Zune software looked like after adding those MP3 files. Notice that Zune tagged one of those burned audiobook CDs as the album Max Killa Hertz dated 1995. Really? I don’t have a clue where Zune got that information from. It would be a difficult task to listen to the tracks in the proper order without the correct track information, and that is what I wrote AudiobookTagger to fix.

Step 3 – Export the iTunes playlist to XML

In order to tag the audiobook correctly, AudiobookTagger will get the track information from the iTunes playlist XML file.

Step 4 – Modify the playlist XML to suit your information

Inside the playlist XML is one or more dictionaries indicated by the <dict> element containing information about your audiobook.

The AudiobookTagger gets its information from the first dictionary. This dictionary can be found by searching for the phrase ‘Track ID’; when you find it, just make sure it is the first occurrence in the file. You can then modify the book name, author, and genre.

To add multiple genres for the book, just enter them separated by commas.
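AudiobookTagger’s actual parsing code isn’t shown here, but as a rough sketch, the first dictionary and its values can be pulled out of the exported playlist with LINQ to XML along these lines. The key names (Name, Artist, Genre) and the file path are assumptions about the iTunes playlist format.

    using System;
    using System.Linq;
    using System.Xml.Linq;

    internal static class PlaylistReaderSketch
    {
        // In an iTunes plist, each <key> element is immediately followed by its value element.
        private static string ValueAfterKey(XElement dict, string keyName)
        {
            return dict.Elements("key")
                       .First(k => (string)k == keyName)
                       .ElementsAfterSelf()
                       .First()
                       .Value;
        }

        private static void Main()
        {
            var playlist = XDocument.Load(@"C:\Temp\MyAudiobookPlaylist.xml"); // hypothetical path

            // The first <dict> containing a 'Track ID' key holds the book's information.
            var firstTrack = playlist.Descendants("dict")
                                     .First(d => d.Elements("key").Any(k => (string)k == "Track ID"));

            Console.WriteLine("Book:   " + ValueAfterKey(firstTrack, "Name"));
            Console.WriteLine("Author: " + ValueAfterKey(firstTrack, "Artist"));
            Console.WriteLine("Genres: " + ValueAfterKey(firstTrack, "Genre")); // comma-separated, per the step above
        }
    }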

Step 5 – Modify the AudiobookTagger settings

ZuneAudioBookDirectory

The path to your music files for the Zune software.

MediaFileExtension

If you configure VirtualCD’s Sound File Mode to produce files other than MP3, change the extension here.

VirtualCDOutputDirectory

This is the directory configured in VirtualCD where it converts and stores the MP3 files.

iTunesPlaylistXmlPath

Place the path to the exported iTunes playlist XML file here.

Step 6 – Run the AudiobookTagger

The first thing AudiobookTagger does is report what it is about to do. This includes the tag information read from the playlist XML file as well as from the MP3 files. If everything is correct, press any key and AudiobookTagger will correctly tag and sequence all the MP3 files and move them to your Zune directory.
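The post doesn’t include AudiobookTagger’s source, but conceptually the tagging step for a single track looks something like the sketch below. The use of the taglib-sharp library, and all of the names and paths, are my assumptions.

    using System.IO;

    internal static class TaggerSketch
    {
        // Requires the taglib-sharp library (an assumption; the real AudiobookTagger may do this differently).
        public static void TagAndMove(string mp3Path, string bookName, string author,
                                      string[] genres, uint trackNumber, string zuneAudioBookDirectory)
        {
            // Write the album, author, genre, and track-number tags onto the MP3.
            using (var file = TagLib.File.Create(mp3Path))
            {
                file.Tag.Album = bookName;
                file.Tag.Performers = new[] { author };
                file.Tag.Genres = genres;
                file.Tag.Track = trackNumber;
                file.Save();
            }

            // Move the newly tagged file into the Zune music directory so Zune picks it up.
            var destination = Path.Combine(zuneAudioBookDirectory, Path.GetFileName(mp3Path));
            File.Move(mp3Path, destination);
        }
    }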

Step 7 – The result in Zune

All of the audiobook tracks now show in the proper sequence with the proper media tags.
Note that the image you see for my book Full Black was not placed there by AudiobookTagger; I placed it there manually through the Zune software interface.

Verifying all the parts of a Sharepoint application

Trying to build a 5,000-piece Lego Death Star while simultaneously balancing on a highwire would not be the easiest act to perform, nor probably the smartest. However, I am doing just that: I am currently working on an SP 2007 application consisting of over 30 development projects while at the same time trying to migrate the project from TFS 2008 to 2010, perform the first Sprint of a newly conceived Agile development process, and deploy the new sprint to a test environment with no standard deployment policies in place. Although it wouldn’t be smart or easy to build the Lego Death Star on a highwire, small private companies like mine do what they need to do and nimbly turn on a dime for the customer, so this SP project is being done regardless of difficulty.

When I came onto this project, we began to move the single Visual Studio solution containing 30+ projects into 7 solutions, each containing a handful of projects. At the same time, we worked to migrate from TFS 2008 to TFS 2010. After spending long hours of daylight and evening time with the lead developer, working through all of these issues and becoming more knowledgeable about the system, I became the lead on one of the major SP sites named YMWCB (actual name withheld to protect the guilty). YMWCB is that 5,000-piece Lego Death Star.

We work in a standard virtualized SP development environment where each person has their own virtual machine complete with SP, SQL Server, and Visual Studio. So after migrating the codebase, separating the 32-project solution into smaller solutions, and working out all the kinks, I was tasked with setting up a few VMs, each containing an installation of YMWCB, for the testers and a developer. Installing YMWCB requires running some Powershell scripts, batch files, and a command line executable, and performing some manual steps in SP. After doing all of that, the site will usually still blow up, at which time Visual Studio is used to attach its debugger to the W3WP process to figure out what’s going on. After this debugging, another round of scripts is run and more manual SP steps are performed. This process of site explosions, debugger, and scripts goes on until finally everything is in place and YMWCB runs without trouble.

I’ve been through this process so many times that I’ve come up with an acronym for it: EXDES (EXplosion, DEbugger, Script). This acronym is said with the same pronunciation a deep southern Baptist would use for the word EXODUS. And BTW, I have a license to talk about southerners as I spent 15 years in Arkansas.

So, back to my story of putting together my Lego Death Star while trying not to fall to my death: after experiencing the 3rd EXDES process, it dawned on me that I should write a utility which validates the YMWCB installation and reports which pieces are missing or mis-configured. I wrote the utility as a C# command line application with a base class named BaseVerificationModule and multiple child verification module classes which each verify some portion of the YMWCB installation. A sketch of this structure follows the list below.

I wrote a verification module class for each of these:

  1. Verify each subsite of the YMWCB site has the proper groups.
  2. Verify each of the above groups are given the proper permissions.
  3. Verify each YMWCB subsite’s document libraries have the appropriate event handler registered for the Added, Updated, and CheckedIn actions.
  4. Ensure the event handler registered above is the current version available in the GAC.
  5. Verify the YMWCB top level site contains the proper custom permissions and that these custom permissions have the correct attributes (e.g. OpenItems, CreateAlerts, ViewPages, etc.)
  6. Verify the top level YMWCB site contains the SP lists used to hold lookup information (such as region or area) used by the main application/web parts and ensure these lists contain the proper data.
  7. Verify each YMWCB subsite has 2 lists named ‘Draft Work’ and ‘Archived Work’ (names changed, again to protect the guilty.)
  8. Verify the YMWCB top level site has the correct lists in it as well.
  9. Perform a GAC verification on each of the 18 application assemblies. This includes:
    1. The assembly is in the GAC.
    2. There is only one version in the GAC.
    3. The assembly can be loaded from the GAC.
  10. Verify certain lists contain certain custom-added columns.
  11. Verify the web.config in the SP virtual directory. This includes:
    1. Ensure the AjaxToolkit is set to 1.0 in the <assemblies> section.
    2. Ensure the necessary custom web parts are configured in the <SafeControls> section.
    3. Verify the database connection string in the <connectionStrings> section can be used to open a connection to the database.
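The utility’s code isn’t reproduced in this post, but the shape described above (a BaseVerificationModule base class plus one child class per check) looks roughly like this sketch. Everything except the BaseVerificationModule name is an assumption.

    using System;
    using System.Collections.Generic;

    public abstract class BaseVerificationModule
    {
        // Name shown in the console output for this check.
        public abstract string Name { get; }

        // Returns one message per missing or mis-configured piece; empty means the check passed.
        public abstract IEnumerable<string> Verify();
    }

    // Example child module corresponding to item 7 in the list above.
    public class SubsiteListsVerificationModule : BaseVerificationModule
    {
        public override string Name
        {
            get { return "Subsite 'Draft Work' and 'Archived Work' lists"; }
        }

        public override IEnumerable<string> Verify()
        {
            // The real module would open each YMWCB subsite via the SharePoint object model
            // and check that both lists exist; this placeholder only illustrates the contract.
            yield break;
        }
    }

    internal static class Program
    {
        private static void Main()
        {
            var modules = new BaseVerificationModule[] { new SubsiteListsVerificationModule() /*, ... */ };
            foreach (var module in modules)
            {
                Console.WriteLine("== {0} ==", module.Name);
                foreach (var finding in module.Verify())
                {
                    Console.WriteLine("  MISSING/MIS-CONFIGURED: {0}", finding);
                }
            }
        }
    }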

Does Kerberos authentication affect SQL Server connection pooling? Part 2

The Problem

In my first post on this subject, I wrote about a problem that occurs in the ADO.Net connection pool when a .Net application authenticates its callers to SQL Server using their Kerberos ticket. I am now going to present a solution to this problem and an actual implementation of the solution.

Most applications today are n-tiered applications with a data tier which accesses the database on behalf of the frontend user. The n-tiered application uses Kerberos so that the frontend user’s identity is passed through the application’s tiers to the data tier and used to authenticate the user to the database.

Since the data tier’s database connection is established under the credentials of each frontend user, the application receives two important benefits: data auditing accuracy and a tighter more focused security model. SQL Server audit trails are able to record data operations and tie them to each individual user. In addition, each user’s operations are performed under their specific database permission set.

 

The problem with this pattern is in the connection pool. The ADO.Net connection pools are keyed by the user and by the database, so in the above example there would be 3 connection pools: one for User1, one for User2, and one for User3. This drastically affects the application’s scalability, since a separate set of database connections has to be created for each user.

The Solution

One solution to this problem is to use Microsoft’s Trusted Subsystem pattern. In this pattern, the data tier authenticates to the database using its own service account and operates against the database on behalf of each frontend user. By using this pattern, the connection pooling problem is solved because the data tier’s service account is the only account which authenticates to the database. However, there are drawbacks to using the data tier’s service account to authenticate to the database: the SQL Server auditing module can no longer tie the actions it records to individual frontend users, and the data tier’s service account would need enough permissions in the database to accommodate every user, from the least privileged to the most privileged.

One way around the audit problem is to not use the SQL Server auditing module and replace it with one at the data tier level. Since the data tier gets the Kerberos ticket, it knows who is requesting the data and can therefore write the audit trail entries itself. But why re-invent the wheel when SQL Server has a tried and true enterprise level auditing system?

In order to solve both the audit trail and the permissions problem, the Trusted Subsystem approach should be used and the data tier’s service account should be granted impersonate privileges on each frontend user account. Each time the service account accesses the database for a user, it should issue an Execute As Login=xxx statement, perform the necessary operations, and then issue a Revert statement. This way, the SQL Server auditing module can accurately record operations per user and the database operations the service account performs are constrained by each user’s permissions.
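As a minimal sketch of that pattern in the data tier, assuming an ADO.Net implementation and a connection string under the data tier’s service account (none of which is shown in the post), the three statements can be sent around the query in a single batch:

    using System.Data;
    using System.Data.SqlClient;

    public static class TrustedSubsystemSketch
    {
        public static DataTable GetProducts(string frontendUserLogin)
        {
            // The connection opens under the service account, so a single connection pool is used.
            var connectionString = "Data Source=SQL1;Initial Catalog=kerbtest;Integrated Security=SSPI";

            // Impersonate the frontend user for the query, then revert before the connection
            // goes back into the pool, so auditing and permissions apply to that user.
            const string sql = "EXECUTE AS LOGIN = @frontendUser; SELECT * FROM Product; REVERT;";

            using (var connection = new SqlConnection(connectionString))
            using (var command = new SqlCommand(sql, connection))
            using (var adapter = new SqlDataAdapter(command))
            {
                command.Parameters.AddWithValue("@frontendUser", frontendUserLogin);
                var products = new DataTable();
                adapter.Fill(products);   // Fill opens and closes the connection itself.
                return products;
            }
        }
    }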

 

Solution Implementation

In order to prove this solution in a way as close to the real world as possible, I setup an environment with the following virtual machines in a development domain I named POPEDEV:

| Host  | Operating System | Role                                            |
|-------|------------------|-------------------------------------------------|
| DC1   | 2008 R2 x64      | Domain controller – POPEDEV domain              |
| WEB1  | 2008 R2 x64      | IIS                                             |
| WEB2  | 2008 R2 x64      | IIS                                             |
| WEB3  | 2008 R2 x64      | IIS                                             |
| SQL1  | 2008 R2 x64      | SQL Server 2008 (running under Network Service) |
| WORK1 | Win7 x86         | Development and user workstation                |

 

My test involved a web service to return a set of products from a SQL Server Products table. Each of the web services had a service operation named GetProducts(). I created three web services: Hop1, Hop2, and BDE. The client called Hop1.GetProducts() which called Hop2.GetProducts() which called BDE.GetProducts(). I created three web services and therefore three hops to exercise the Kerberos ticket delegation process. The Kerberos ticket which was created on the workstation would be passed to Hop1 and then to Hop2 and finally to BDE.
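A minimal sketch of the chained contract is shown below; the interface and class names are mine, since the post only gives the service names Hop1, Hop2, and BDE and the operation GetProducts().

    using System.Collections.Generic;
    using System.ServiceModel;

    [ServiceContract]
    public interface IProductService
    {
        [OperationContract]
        List<Product> GetProducts();
    }

    // Hop1 does nothing except forward the call to Hop2 under the caller's Kerberos identity;
    // Hop2 forwards to the BDE in exactly the same way.
    public class Hop1Service : IProductService
    {
        public List<Product> GetProducts()
        {
            using (var factory = new ChannelFactory<IProductService>("Hop2Endpoint")) // endpoint name assumed
            {
                return factory.CreateChannel().GetProducts();
            }
        }
    }

    // Placeholder for the row type returned from the Products table.
    public class Product
    {
        public int ProductId { get; set; }
        public string Name { get; set; }
    }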

I deployed the following web services to each web server:

| Host | Web Service | Application Pool Identity | Role                                          |
|------|-------------|---------------------------|-----------------------------------------------|
| WEB1 | Hop1        | Network Service           | Proxy web service to call Hop2.               |
| WEB2 | Hop2        | Network Service           | Proxy web service to call the BDE.            |
| WEB3 | BDE         | POPEDEV\BDE               | Business Data Engine. This is the data tier.  |

 

The client application on WORK1 would call the Hop1 web service on WEB1 which would do nothing except call the Hop2 web service on WEB2. The Hop2 web service would do nothing except call the BDE web service on WEB3. The BDE web service would be my data tier and would connect to the database as a trusted subsystem to return the requested data to Hop2 which would return the data to Hop1 which would return the data to the client on WORK1.

I wrote my web services in WCF and configured them to use the WsHttpBinding with message security and Kerberos authentication.
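For reference, here is a sketch of that binding when built in code (the actual configuration is not shown in the post, and the address and factory names are assumptions), reusing the IProductService contract sketched above. The client side also has to allow its Windows token to be delegated rather than merely impersonated:

    using System.Security.Principal;
    using System.ServiceModel;

    internal static class KerberosClientSketch
    {
        public static IProductService CreateHop1Channel()
        {
            // Message security with Windows credentials, i.e. Kerberos where the SPNs allow it.
            var binding = new WSHttpBinding(SecurityMode.Message);
            binding.Security.Message.ClientCredentialType = MessageCredentialType.Windows;

            // Channel to Hop1 (address assumed); allow the caller's identity to be delegated onward.
            var factory = new ChannelFactory<IProductService>(
                binding, new EndpointAddress("http://web1.popedev.com/Hop1.svc"));
            factory.Credentials.Windows.AllowedImpersonationLevel = TokenImpersonationLevel.Delegation;

            return factory.CreateChannel();
        }
    }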

 

For my testing, I created 5 domain users: USER1 – USER5 and wrote the client application to be able to impersonate one of those 5 domain users before calling Hop1. This way, I could simulate any of the 5 users being the frontend user. In addition, I created the domain user POPEDEV\BDE which would be the data tier’s service account user.

Constrained Delegation

When the Kerberos ticket is passed from the Hop1 web service to the Hop2 web service, Hop1 is essentially delegating the user’s request for data to Hop2. By the same token, when Hop2 calls the BDE and passes it the Kerberos service ticket, it is delegating to the BDE web service. However, the ability to delegate from one machine/service to another is not a default ability in Active Directory. Each machine/service must be given explicit delegation permissions.

This is called constrained delegation and it is a security feature of Active Directory. The opposite would be open delegation, where a process which received a user’s identity in a Kerberos ticket could call any other service, local or remote, on that user’s behalf. This could present a whole host of security problems if an attacker were able to launch a service within the organization and induce users to call it.

So I needed to set up the delegation settings on WEB1, WEB2, and WEB3 so that they could delegate to the WEB2, WEB3, and SQL Server services respectively, as shown below:

 

In the screen shot above, I gave WEB1 permission to delegate to the service HTTP/web2.popedev.com because the HOP2 web service on WEB2 was running under the Network Service account.


Delegating to the BDE Web Service

The last set of delegation permissions was for the Hop2 web service on WEB2 to be able to delegate to the BDE web service on WEB3. This wasn’t quite as straightforward as permitting WEB1 to delegate to WEB2, because the BDE web service was running under the service account POPEDEV\BDE.

First I added the service principal name HTTP/web3.popedev.com to the POPEDEV\BDE account:

Next I enabled WEB2 to delegate to that same service name:

 

Running the WhoAmI Test

The first test I wanted to perform was to verify that the Kerberos ticket was being passed between the web servers and would make it all the way to the BDE web service.

I created a method named WhoAmI() in each web service which would return the current thread, windows, and other identities. It returned 4 identities as follows:

  • ServiceSecurityContext.Current.PrimaryIdentity.Name
  • ServiceSecurityContext.Current.WindowsIdentity.Name
  • WindowsIdentity.GetCurrent().Name
  • Thread.CurrentPrincipal.Identity.Name
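A minimal sketch of such a WhoAmI() operation (the original implementation isn’t shown in the post) simply returns those four values in order:

    using System.Security.Principal;
    using System.ServiceModel;
    using System.Threading;

    public class IdentityService
    {
        // Returns the four identities listed above, in the same order.
        public string[] WhoAmI()
        {
            return new[]
            {
                ServiceSecurityContext.Current.PrimaryIdentity.Name,
                ServiceSecurityContext.Current.WindowsIdentity.Name,
                WindowsIdentity.GetCurrent().Name,
                Thread.CurrentPrincipal.Identity.Name
            };
        }
    }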

If the Kerberos service ticket was being passed from web service to web service, then these four properties should return the name of the frontend user (e.g. POPEDEV\User2). The exception is the BDE web service, which was not impersonating its users to SQL Server and would therefore return POPEDEV\BDE for the Windows identity, because POPEDEV\BDE was its service account identity.

In addition, I ran NetMon on WEB1 and WEB2 to watch for Kerberos packets as another verification step.

I ran my console tester and the opening menu asked which operation I wanted to run. I chose option 0, the WhoAmI operation.

After choosing the WhoAmI operation, the console tester asked which user to impersonate. I chose POPEDEV\User4.

After choosing which user to impersonate, the console tester called the WhoAmI method on Hop1. In the screen shot above, you can see that the Hop1 and Hop2 web services see the user as POPEDEV\User4. The BDE web service shows the windows identity as POPEDEV\BDE and the thread identity as POPEDEV\User4.

The next screen shot shows the NetMon captures with the packet filter set to watch for Kerberos frames. As you can see, WEB1 requested http/WEB2.POPEDEV.COM and WEB2 requested http/WEB3.POPEDEV.COM which is exactly what I would have expected.

WEB1 NetMon trace:

WEB2 NetMon trace:

After seeing the results of the WhoAmI test and the captured frames in NetMon, I was confident that Kerberos was being used to authenticate all the way to the BDE web service.

Running The Full Test

For the full test, I wanted to prove that any user could request data from the data tier and the audit trail would record the select operation under that user and I wanted to prove that the data tier was using a single connection pool.

The data which the BDE web service returned was a set of products in a table I named Products. Here is a sample of what the BDE needed to run in order to return the products correctly:

Execute as Login = 'POPEDEV\User3'

select * from Product

revert

The first line, ‘Execute as…’, enabled the BDE to run the select statement under the identity of the frontend user whose identity came across in the Kerberos ticket. Remember that the BDE connected as its service account so that a single connection pool would be used. In addition, the BDE web service needed to run the SQL under the frontend user’s identity so that the audit trail would be accurate and so that the SQL operations would be constrained by that user’s security.

The last line, ‘Revert’, reverted the connection back to the BDE web service’s service account identity.

Here is a shot of the current database connections before I ran the test. There are 2 connections to the master database and 1 connection to the tempdb database. After running the test, I should see a new connection to my test database which I named kerbtest.

I ran the console tester and told it I wanted operation 1 – GetProducts and to impersonate POPEDEV\User3.

The console tester next asked whether I wanted to use LINQ or ADO.Net to get the products. I chose LINQ and it returned 7 products.

The SQL Server activity monitor now shows process 54 which was my BDE web service’s connection. Notice that process 54 is under the BDE web service’s service account POPEDEV\BDE and not POPEDEV\User3 who is the frontend user in my test.

This next screen shot shows the audit trail for session 54. Notice the SESSION_SERVER_PRINCIPAL_NAME is POPEDEV\BDE because that is who opened connection 54. The DATABASE_PRINCIPAL_NAME is POPEDEV\User3 because that is who the console tester was running under.

I then ran my test again but this time I told the tester to impersonate POPEDEV\User5 and to use ADO.Net instead of LINQ:

Here is a screen shot of the SQL Server activity monitor. Notice that there is still one session – 54:

The next screen shot shows the audit trail. The 2 entries reflect session id 54 as expected, and the DATABASE_PRINCIPAL_NAME reflects POPEDEV\User5 and POPEDEV\User3. In addition, the statements are different because I used LINQ the first time and ADO.Net the second time.