My diary of software development

Async.js with TypeScript

I became an instant fan of the Async JavaScript library the first time I used it. I am currently working on an HTML5 project at work which contains a lot of asynchronous code to access IndexedDB as well as to make multiple web service calls. I was talking to another developer at my company about the ‘ugliness’ and maintenance nightmare of all this async code when he introduced me to the Async library.

The code below is an example of the asynchronous code in our application. As you can tell we’re using the async.waterfall() control flow:

   1:              var deferredResult = $.Deferred<string>();
   2:              var self = this;
   3:              async.waterfall([
   4:                  (callback: any) =>
   5:                  {
   6:                      self.dbContext.ReadWorkUnitSummary(groupId)
   7:                          .then((summary: Interfaces.Models.IWorkUnitSummary) =>
   8:                          {
   9:                              self.WrapUnits(summary);
  10:                              callback(null, summary);
  11:                          })
  12:                          .fail(error => callback(error));
  13:                  },
  14:                  (summary: Interfaces.Models.IWorkUnitSummary, callback: any) =>
  15:                  {
  16:                      self.dbContext.CollapseDataRecords(summary.DataRecords)
  17:                          .then((dataRecord: Interfaces.Models.IDataRecord) => callback(null, dataRecord))
  18:                          .fail((error) => callback(error));
  19:                  },
  20:                  (dataRecord: Interfaces.Models.IDataRecord, callback: any) =>
  21:                  {
  22:                      self.dataWebService.GetSiteRule(dataRecord.DataRecordId)
  23:                          .then((siteRule: Interfaces.Models.IDataSiteRule) =>
  24:                          {
  25:                              self.ProcessSiteRule(siteRule);
  26:                              callback(null, siteRule);
  27:                          })
  28:                          .fail(error => callback(error));
  29:                  },
  30:                  (siteRule: Interfaces.Models.IDataSiteRule, callback: any) =>
  31:                  {
  32:                      self.dataWebService.CreateDataSite(siteRule)
  33:                          .then((url: string) =>
  34:                          {
  35:                              deferredResult.resolve(url);
  36:                              callback();
  37:                          })
  38:                          .fail(error => callback(error));
  39:                  }
  40:              ],
  41:              (possibleError: any) =>
  42:              {
  43:                  if (possibleError)
  44:                  {
  45:                      deferredResult.reject(possibleError);
  46:                  }
  47:              });
  49:              return deferredResult;

Readable and approachable

I believe the TypeScript language and the Async library make the above code both readable and approachable. It is readable because a developer can parse it in her head without needing outside documentation or the author standing there explaining it. The code is approachable because there is an obvious, simple, recurring pattern in it, so the reader feels comfortable and doesn’t run away screaming “we’ve got to rewrite this! I have no idea how it works!”.

I feel that when I write readable and approachable code, it becomes easier for another developer to pick it up and modify it than if I had written the code to be ‘amazing’ and understandable only by myself (and I’ll forget most of it within 6 months anyway). So kudos to TypeScript and Async for helping me deliver maintainable code to my customer.

Becoming less readable and approachable

As I continued working with asynchronous code, I began to realize that when debugging it wasn’t easy to tell how I got to where I was in the debugger. Often blocks such as the one above would call another block, and then it would be difficult at best to sift through the call stack to find the block of calling asynchronous code.

For example, here is what the call stack looks like in the Chrome debugger when stopped inside the ProcessSiteRule() method called on line 25 above:


Another problem I ran into was what happened when an exception was thrown from one of the anonymous methods in the Async call chain: the rest of the chain would not be called, the promise would never be resolved, and the caller would be left hanging forever. This type of bug is really difficult to track down.
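To make the failure mode concrete, here is a minimal sketch. The `miniSeries` function below is a hand-rolled stand-in I wrote for this example, not the real Async library, but it has the same weakness: a synchronous throw inside a task unwinds straight out of the control flow, so the final callback (where you would resolve your promise) never runs.

```typescript
// Minimal stand-in for an Async-style control flow (NOT the real Async
// library), just enough to demonstrate the hanging-promise failure mode.
type Task = (callback: (err?: any) => void) => void;

function miniSeries(tasks: Task[], done: (err?: any) => void): void {
    let i = 0;
    const next = (err?: any) => {
        if (err || i >= tasks.length) { done(err); return; }
        tasks[i++](next);
    };
    next();
}

let finalCallbackRan = false;

try {
    miniSeries([
        (callback) => { callback(); },              // step 1: succeeds
        (callback) => { throw new Error("boom"); }  // step 2: throws, never calls callback
    ], (err?: any) => { finalCallbackRan = true; });
} catch (e) {
    // The exception escaped the entire control flow. The final callback
    // never ran, so a deferred resolved there would stay pending forever.
}

console.log(finalCallbackRan); // false
```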

I could fix some of that by placing a try/catch into each anonymous task, but then what text would I give the rejection? For example, if I added lines 5 and 6 to the async step below, I would get a runtime exception. I could then catch the exception and reject the promise on line 17. But the information I use to reject the promise would make it difficult to track down this block of code when reading the logs.

   1:  (dataRecord: Interfaces.Models.IDataRecord, callback: any) =>
   2:  {
   3:      try
   4:      {
   5:          var cow: any = {};
   6:          cow.bark();
   7:          self.dataWebService.GetSiteRule(dataRecord.DataRecordId)
   8:              .then((siteRule: Interfaces.Models.IDataSiteRule) =>
   9:              {
  10:                  self.ProcessSiteRule(siteRule);
  11:                  callback(null, siteRule);
  12:              })
  13:              .fail(error => callback(error));
  14:      }
  15:      catch (e)
  16:      {
  17:          callback("Exception when getting the site rule for DataRecord. -\n" + JSON.stringify(e, null, 4));
  18:      } 
  19:  },


In addition, placing these draconian try/catch blocks in each anonymous Async task would make what was a simple stack of asynchronous tasks much heavier and less readable than before.

Asynchronous job

I wanted to find a way to do the following with our asynchronous code:

  1. Provide specific information about which task in the anonymous Async call chain threw an exception.
  2. Never leave the caller hanging on an unresolved promise when an exception is thrown in an asynchronous task.
  3. Remain readable and approachable.

As I sat back and studied the code, I realized that most of it was using the async.waterfall() or async.series() control flows. I started thinking of async code such as the block above as an asynchronous job. The multiple asynchronous tasks handed to the Async control flow would be steps in an AsyncJob.

Here is the class I came up with, which does the same thing as the async.waterfall() at the beginning of this article. I called the job an AsyncJob and my specific class is named GetDataSiteUrlJob. Here is the basic structure of my class without the implementation:

   1:  export class GetDataSiteUrlJob
   2:  extends Helpers.AsyncJob.BaseAsyncJob
   3:  implements Interfaces.Helpers.IAsyncJob
   4:  {
   5:  constructor(groupId: number,
   6:      dataWebService: Interfaces.WebService.IDataWebService,
   7:      dbContext: Interfaces.Db.IDbContext)
  10:  public Run(): JQueryPromise<string>
  12:  private Initialize()
  14:  private ReadWorkUnitSummary(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  16:  private WrapUnits(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  18:  private CollapseDataRecords(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  20:  private ProcessSummarySiteRule(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  22:  private CreateDatasite(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  23:  }

I’ll explain each element of this class, its base class, its interface, and the delegates in more detail later, but first I want to show the information that can be provided when an exception is thrown in one of the Async tasks.

I added this breaking code to the GetDataSiteUrlJob.ProcessSummarySiteRule() method:

   1:  private ProcessSummarySiteRule(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
   2:  {
   3:      var cow: any = {};
   4:      cow.bark();
   6:      callback();
   7:  }

When I ran the GetDataSiteUrlJob, the information below is what it provided when the cow could not bark in ProcessSummarySiteRule():


Interface explanation

   1:  export interface IAsyncJob
   2:  {
   3:      Run(): JQueryPromise<any>;
   4:  }


The IAsyncJob interface is quite simple. There is one method to call, Run(), which returns a promise.


Child class explanation

Here is the relevant code of the GetDataSiteUrlJob class:

   1:  export class GetDataSiteUrlJob
   2:  extends Helpers.AsyncJob.BaseAsyncJob
   3:  implements Interfaces.Helpers.IAsyncJob
   4:  {
   5:  private workUnitSummary: Interfaces.Models.IWorkUnitSummary;
   6:  private groupId: number;
   7:  private dataWebService: Interfaces.WebService.IDataWebService;
   8:  private dbContext: Interfaces.Db.IDbContext;
   9:  private siteRule: Interfaces.Models.IDataSiteRule;
  10:  private collapsedDataRecord: Interfaces.Models.IDataRecord;
  12:  constructor(groupId: number,
  13:  dataWebService: Interfaces.WebService.IDataWebService,
  14:  dbContext: Interfaces.Db.IDbContext)
  15:  {
  16:      super($.Deferred<any>());
  18:      this.groupId = groupId;
  19:      this.dataWebService = dataWebService;
  20:      this.dbContext = dbContext;
  22:      this.Initialize();
  23:  }
  25:  public Run(): JQueryPromise<string>
  26:  {
  27:      return this.PerformRun();
  28:  }
  30:  private Initialize()
  31:  {
  32:      this.AppendJobStep(this.ReadWorkUnitSummary);
  33:      this.AppendJobStep(this.WrapUnits);
  34:      this.AppendJobStep(this.CollapseDataRecords);
  35:      this.AppendJobStep(this.ProcessSummarySiteRule);
  36:      this.AppendJobStep(this.CreateDatasite);
  37:  }
  39:  private ReadWorkUnitSummary(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  41:  private WrapUnits(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  43:  private CollapseDataRecords(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  45:  private ProcessSummarySiteRule(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  47:  private CreateDatasite(callback: Helpers.AsyncJob.Delegates.CallbackDelegate): void
  48:  {
  49:      var self = this;
  50:      this.dataWebService.CreateDataSite(this.siteRule)
  51:          .then((url: string) =>
  52:          {
  53:              self.jobResult = url;
  54:              callback();
  55:          })
  56:          .fail((error) => callback(error));
  57:  }
  58:  }


On line 16 in the constructor, we create the deferred object we want to use and pass it up to the base class.

Line 25 is the implementation of IAsyncJob.Run(). It calls the base class method PerformRun() which executes each of our job steps.

Line 30 is the Initialize() method which was called from the constructor. This is where we append each job step to the job. These job steps are akin to the asynchronous methods passed to Async.

On line 53 the code is setting self.jobResult to a URL. The self.jobResult is defined in the base class and represents what will be passed into the deferred.resolve() method at the end of this job. Remember that the IAsyncJob.Run() returns a promise.

Each job step must be defined to take a CallbackDelegate parameter. This parameter is akin to the callback argument to each of the anonymous methods in the Async call:

   1:  (callback: any) =>
   2:  {
   3:      self.dbContext.ReadWorkUnitSummary(groupId)
   4:          .then((summary: Interfaces.Models.IWorkUnitSummary) =>
   5:          {
   6:              self.WrapUnits(summary);
   7:              callback(null, summary);
   8:          })
   9:          .fail(error => callback(error));
  10:  },


Base class

   1:  export class BaseAsyncJob
   2:  {
   3:  public jobResult: any;
   4:  public deferredResult: JQueryDeferred<any>;
   5:  public jobSteps = new Array<Delegates.JobStepDelegate>();
   6:  private currentStepName: string;
   7:  private jobCallStack = new Array<string>();
  10:  constructor(deferredResult: JQueryDeferred<any>)
  11:  {
  12:  this.deferredResult = deferredResult;
  13:  }
  15:  public PerformRun(): JQueryPromise<any>
  16:  {
  17:  try
  18:  {
  19:      var proxySteps = this.CreateProxySteps();
  20:      async.series(proxySteps, this.CompleteRun.bind(this));
  21:  }
  22:  catch (e)
  23:  {
  24:      this.deferredResult.reject(e);
  25:  }
  27:  return this.deferredResult.promise();
  28:  }
  30:  public HandleExceptionDuringJobStep(exception: any, callback: any): void
  31:  {
  32:  try
  33:  {
  34:      var exceptionDescription =
  35:          {
  36:              ExceptionInMethod: this.currentStepName,
  37:              Exception: exception,
  38:              JobCallStack: this.jobCallStack,
  39:          };
  41:      callback(exceptionDescription);
  42:  }
  43:  catch (ex)
  44:  {
  45:      callback(ex);
  46:  }
  47:  }
  49:  public CompleteRun(possibleError: any): void
  50:  {
  51:  if (possibleError)
  52:  {
  53:      this.deferredResult.reject(possibleError);
  54:  }
  55:  else
  56:  {
  57:      this.deferredResult.resolve(this.jobResult);
  58:  }
  59:  }
  61:  private GetClassNameFromConstructor(constructorText: string): string
  62:  {
  63:  var childClassName = "?";
  64:  try
  65:  {
  66:      var funcNameRegex = /function (.{1,})\(/;
  67:      var results = (funcNameRegex).exec(constructorText);
  68:      childClassName = (results && results.length > 1) ? results[1] : "?";
  69:  }
  70:  catch (ex)
  71:  {
  72:  }
  74:  return childClassName;
  75:  }
  77:  private GetClassAndMethodName(method: Function): string
  78:  {
  79:  var classAndMethodName = "";
  81:  try
  82:  {
  83:      var calleeMethodText = method.toString();
  84:      var methodsOnThis = this['__proto__'];
  85:      var methodName: string;
  87:      for (var methodOnThis in methodsOnThis)
  88:      {
  89:          if (calleeMethodText == methodsOnThis[methodOnThis])
  90:          {
  91:              methodName = methodOnThis;
  92:              break;
  93:          }
  94:      }
  95:      methodName = methodName || "anonymous";
  97:      var constructorText = methodsOnThis['constructor'];
  98:      var className = this.GetClassNameFromConstructor(constructorText);
  99:      classAndMethodName = $.format("{0}.{1}()", className, methodName);
 100:  }
 101:  catch (ex)
 102:  {
 103:  }
 105:  return classAndMethodName;
 106:  }
 108:  private ProxyStep(actualStep: Delegates.JobStepDelegate,
 109:  callback: AsyncJob.Delegates.CallbackDelegate)
 110:  : void
 111:  {
 112:  this.currentStepName = this.GetClassAndMethodName(actualStep);
 113:  try
 114:  {
 115:      this.jobCallStack.push(this.currentStepName);
 116:      if (this.deferredResult.state() != Enums.JQueryDeferredState.Pending)
 117:      {
 118:          callback();
 119:          return;
 120:      }
 122:      actualStep.bind(this)(callback);
 123:  }
 124:  catch (ex)
 125:  {
 126:      this.HandleExceptionDuringJobStep(ex, callback);
 127:  }
 128:  }
 130:  public AppendJobStep(jobStep: Delegates.JobStepDelegate): void
 131:  {
 132:  this.jobSteps.push(jobStep);
 133:  }
 135:  private CreateProxySteps(): AsyncJob.Delegates.ProxyStepDelegate[]
 136:  {
 137:  var proxySteps = new Array<AsyncJob.Delegates.ProxyStepDelegate>();
 138:  this.jobSteps.forEach((jobStep: AsyncJob.Delegates.JobStepDelegate)=>
 139:  {
 140:      var proxyStep = this.ProxyStep.bind(this, jobStep);
 141:      proxySteps.push(proxyStep);
 142:  });
 144:  return proxySteps;
 145:  }
 146:  }


Line 130 is the AppendJobStep() method. This was called from the Initialize() method in our child class and all it does is push the passed job step into an array.

Line 15 is the PerformRun() method which is called from our child class method Run(). This method first creates an array of proxy steps and then uses the Async.series() method to iterate these steps in an async fashion.

Line 20 executes the CompleteRun() method when the Async.series() is completed in PerformRun().

Line 135 is the CreateProxySteps() method called from PerformRun(). It takes the array of job steps created by the calls to AppendJobStep() and wraps each inside a call to ProxyStep(). It then appends the wrapped calls to an array which is finally returned from the method.
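The wrapping on line 140 leans on Function.prototype.bind doing two jobs at once: fixing `this` and pre-filling the first argument. Here is a standalone sketch of that pattern; TinyJob and its members are illustrative names I made up, not the actual AsyncJob types:

```typescript
// Sketch of the bind-based wrapping used by CreateProxySteps().
// TinyJob is a hypothetical class for illustration only.
type Callback = (err?: any) => void;
type Step = (callback: Callback) => void;

class TinyJob {
    public log: string[] = [];

    private proxyStep(actualStep: Step, callback: Callback): void {
        this.log.push("proxy");           // bookkeeping happens here first
        actualStep.bind(this)(callback);  // then the real step runs
    }

    public wrap(steps: Step[]): ((callback: Callback) => void)[] {
        // bind(this, step) fixes `this` AND pre-fills `actualStep`,
        // leaving a function that takes only `callback` -- exactly the
        // shape async.series() expects its tasks to have.
        return steps.map(step => this.proxyStep.bind(this, step));
    }
}

const job = new TinyJob();
const wrapped = job.wrap([
    (cb) => { job.log.push("step1"); cb(); },
    (cb) => { job.log.push("step2"); cb(); }
]);
wrapped.forEach(w => w(() => { /* no-op final callback */ }));
console.log(job.log); // ["proxy", "step1", "proxy", "step2"]
```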

Line 108 is the ProxyStep() method. The first thing it does is retrieve the Class.MethodName() for the actual job step.

On line 116 the method checks to see if our deferred is still pending which means no previous step has failed or resolved the deferred. If it is still pending the actual step will be executed on line 122.

Line 126 is within the catch block which wraps the call to the job step. This calls the method HandleExceptionDuringJobStep().

Line 30 is the method HandleExceptionDuringJobStep(). This method creates a plain object containing the name of the method that threw, the actual exception object, and the job call stack array. It then executes the callback, passing it the constructed object.
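The field names below match the code above; the values are made up for illustration, showing what that object would look like for the barking-cow failure:

```typescript
// Illustrative instance of the exception-description object built by
// HandleExceptionDuringJobStep(). Field names match the code above;
// the values are hypothetical.
const exceptionDescription = {
    ExceptionInMethod: "GetDataSiteUrlJob.ProcessSummarySiteRule()",
    Exception: new TypeError("cow.bark is not a function"),
    JobCallStack: [
        "GetDataSiteUrlJob.ReadWorkUnitSummary()",
        "GetDataSiteUrlJob.WrapUnits()",
        "GetDataSiteUrlJob.CollapseDataRecords()",
        "GetDataSiteUrlJob.ProcessSummarySiteRule()"
    ]
};

// A log entry built from this pinpoints the failing method, carries the
// original error, and lists every step that ran before the failure.
console.log(exceptionDescription.ExceptionInMethod);
```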

Line 49 is the method CompleteRun(), which was called inside PerformRun() when the Async.series() completed. This method will either reject or resolve the deferredResult object. Remember, the deferredResult object was created in the constructor of our child class and passed to us in the BaseAsyncJob constructor; its promise is what our child class Run() method returns to its caller.


There are 3 delegates:

  1. CallbackDelegate
  2. JobStepDelegate
  3. ProxyStepDelegate
   1:  export interface CallbackDelegate
   2:  {
   3:  (possibleError?: any): void;
   4:  }
   7:  export interface JobStepDelegate
   8:  {
   9:      (callback: CallbackDelegate): void;
  10:  }
  13:  export interface ProxyStepDelegate
  14:  {
  15:      (actualStep: JobStepDelegate, callback: CallbackDelegate): void
  16:  }

Cisco and Microsoft NPS

My church uses a Cisco ISA570 as our firewall and runs Windows Server 2012 R2 servers on the LAN. The ISA by default uses its local database for authenticating management users and VPN users, but I wanted to use the Network Policy Server and Active Directory on our 2012 R2 servers for authentication and authorization. It turns out that getting this set up and working was much easier than I’d thought.

NPS Configuration

First I added the ISA570 to the NPS as a standard RADIUS client:


I wanted to use NPS to authenticate 3 groups of users:

  1. Those that have admin management access to the ISA.
  2. Those that have read-only management access to the ISA.
  3. Those who can VPN into the ISA.

I wanted each user authenticated against AD, using their AD Security Group to match one of the groups above. This meant I needed to create 3 Network Policies on the NPS and set each policy’s condition to match an AD Security Group I had previously created:

Firewall admin access policy









There are 2 things to note about all 3 network policies:

  1. The Filter-Id value must match the name of the group (not created yet) on the ISA570.
  2. The Authentication Method is PAP and SPAP which, as it says on the screenshots, is an unencrypted authentication method. And this bothered me.

Both of these are clearly stated on page 394 of the ISA500 Admin Guide. I plan to open a support request with Cisco and ask if there is another Authentication Method I can use besides PAP, SPAP. I did try a few different methods but none worked except PAP, SPAP.

ISA570 configuration

First I needed to point the ISA at the NPS (RADIUS):



Next I created 3 new groups which matched the Filter-Id setting on my NPS Network Policies:



And that was it. Everything is working as I’d wanted.

Pillars of Performance

Microsoft has identified three pillars of performance which ground the perceptions of Windows Store (WS) Applications. These perceptions include such effects as responsiveness, battery usage, and others which trigger bad reviews of a WS application.

Line of Business (LOB) applications will not be purchased from the app store and therefore do not depend on reviews for monetization; however, these Pillars of Performance are still vital to an LOB application.


Fast

A WS Application should be snappy when moving from one view to the next. In addition, all elements must remain responsive during a heavy back end operation.

In addition to creating snappy single operations, we must also consider a composite operation of back end work as well as UI animations and view changes. A lot of the costs incurred during a composite operation are incurred by the UI framework on our behalf.


Fluid

This pillar describes how ‘buttery smooth’ the UI is during panning, scrolling, or animation. For example, scrolling horizontally across data groups on the view, or the animation which may show detailed information about a specific data group.


Efficient

This pillar signifies how well the app plays with other apps in the Win8.1 sandbox. For example, if a WS Application has a large number of memory leaks or unnecessary disk read/write activity, then it will consume more battery energy than it needs. This will force the user to stop their work and recharge the battery.

Managed Memory Analysis

Visual Studio 2013 allows us to perform analysis on managed memory in our WS Applications. If we go back and review the 3 Pillars of Performance, we see that managing memory and cleaning up leaks affects two of them: Fast and Efficient.

For example, we may see a page coming up slowly because it is storing a large object graph in memory. In this case we’ll use the Managed Memory Analysis results to detect this large object graph and may be able to work around the need to keep that much data in memory.

In order to perform the Managed Memory Analysis you will also need the SysInternals tool ProcDump.

Batch File Utility

In addition to Visual Studio 2013 and the ProcDump tool, there is another tool (a batch file) which is not required but will make the process of dumping and opening the analyses easier. Prior to running the batch file, you should start the application and get it to the state where you want to perform the analysis. You can start the target app through the Start Menu or through Visual Studio (without debugging, Ctrl+F5).

   1:  set ProcDump=SysInternals\ProcDump
   3:  set ProcName=App2.exe
   5:  %ProcDump% -ma -o %ProcName% BASELINE.DMP
   8:  @echo ============================================================
   9:  @echo ============================================================
  10:  @echo ============================================================
  11:  @echo The baseline dump file has been written.
  12:  @echo Exercise your app and when you are ready for the next dump,
  14:  @pause
  16:  %ProcDump% -ma -o %ProcName% FINAL.DMP
  19:  @echo ============================================================
  20:  @echo ============================================================
  21:  @echo ============================================================
  22:  @echo Your dump files have been completed:
  23:  @dir *.dmp
  25:  @FINAL.DMP


Batch File Line Numbers

1. Sets the variable named ProcDump to the full path of the ProcDump utility

3. Sets the variable named ProcName to the name of the application you wish to analyze. This is the name which appears in Task Manager when the application is running.

5. Creates the baseline memory dump of the application. This dump file will be named BASELINE.DMP and be stored in the same folder as the batch file.

14. This line pauses the script which allows us to run the application through the states we wish to analyze before taking the final memory dump.

16. This line is executed after the pause and creates the final dump file named FINAL.DMP. It will also be stored in the same folder as the batch file.

25. This last line will cause Visual Studio to be launched and open the final dump file. You will then see a summary of the dump file as shown below:


Execute the Memory Analysis

  1. Click the Action link named ‘Debug Managed Memory’ on the right and you will be presented with the analysis screen.
  2. At the top right of the screen, click the Select Baseline drop down. Then browse and choose the BASELINE.DMP file. The next screen will show results for the analysis between the FINAL.DMP and the BASELINE.DMP memory dumps.

Tracking down a Memory Leak

In order to demonstrate the act of tracking down a memory leak we’ll use a simple application that allows the user to transfer back and forth from Page 1 to Page 2 with the click of a button:


When Page 2 comes up, it creates a large number of objects and never deletes them.
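The leak in the demo app boils down to a pattern like the following. I’ve sketched it in TypeScript to match the rest of this post (the real demo is a XAML page, and the names here are hypothetical): a rooted collection that every visit appends to and nothing ever clears.

```typescript
// Illustrative leak pattern: a module-level cache that every visit to
// Page 2 appends to, and that nothing ever clears. Hypothetical names.
class Order {
    constructor(public id: number) {}
}

const leakedOrders: Order[] = [];   // rooted forever, so never collected

function onNavigatedToPage2(): void {
    // Each navigation creates 1,500 Order instances and parks them in
    // the rooted array, mirroring the demo app's behavior.
    for (let i = 0; i < 1500; i++) {
        leakedOrders.push(new Order(i));
    }
}

// Two round trips to Page 2:
onNavigatedToPage2();
onNavigatedToPage2();
console.log(leakedOrders.length); // 3000 -- the count grows on every visit
```

This is exactly the kind of growth the Count Diff column in the analysis view makes visible.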

After starting the app we must run the batch file to create the BASELINE.DMP file. When the batch script pauses we will switch between the app pages a few times and then have the batch script create the FINAL.DMP file.

Once Visual Studio performs the analysis we will see the view showing the difference between the BASELINE.DMP and the FINAL.DMP memory dumps. This view has several columns in it and they are described next.



Object Type Column

The type of object for the line. Instances of this type may or may not have been leaked.

Count Column

The number of instances for this type that were left when we took the FINAL.DMP dump file. This number does not indicate a leak if it is greater than zero.

Total Size Column

The total size of the instances which were left when we took the FINAL.DMP dump file. Again, this number does not indicate a leak if it is greater than zero.

Count Diff Column

The difference between the number of instances in the BASELINE.DMP and the FINAL.DMP dump files. This number does indicate a leak if it is greater than zero.

Total Size Diff. Column

This number represents the size of all of the instances represented by the Count Diff Column. This number does represent a leak if it is greater than zero.
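The diff columns are simple per-type subtraction between the two dumps. A sketch of that arithmetic (a hypothetical helper of my own, not part of Visual Studio):

```typescript
// Hypothetical helper mirroring how the Count Diff column is derived:
// per-type instance counts from each dump, subtracted pairwise.
function countDiff(baseline: Map<string, number>,
                   finalDump: Map<string, number>): Map<string, number> {
    const diff = new Map<string, number>();
    finalDump.forEach((count, type) => {
        diff.set(type, count - (baseline.get(type) || 0));
    });
    return diff;
}

// Made-up numbers for illustration:
const baseline = new Map([["App2.Order", 0], ["System.String", 40]]);
const finalCounts = new Map([["App2.Order", 1500], ["System.String", 42]]);

const diff = countDiff(baseline, finalCounts);
console.log(diff.get("App2.Order"));   // 1500 -> a likely leak
console.log(diff.get("System.String")); // 2    -> probably just churn
```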

We see there are a lot of objects showing that are in the System namespace and we want to filter our results to just those objects in our application’s namespace. We can do this by typing App2 in the upper left Search box.


We can then immediately see there are 1,500 instances of the Order class which have been leaked.

If we expand the App2.Order object type, we see the 10 largest instances. Then if we select one of those instances, we’ll see it is referenced by an Order[] which is referenced by a List<Order> which is finally referenced by an Object[].

We should then look in our application’s codebase to find where Order instances are created and why they’re not being cleaned up.

Energy Consumption

Visual Studio 2013 has a new diagnostics tool which provides Energy Consumption metrics for a WS Application. To run the tool, open a WS application and press Ctrl+Alt+F9 or go to the Debug menu and choose Performance and Diagnostics.

You will be presented with the dialog shown below where you should check ‘Energy Consumption’ and press Start.


In our example, we will run a WS Application under the Energy Consumption tool and perform some operations within the application. After doing so, we will go back to VS where we will see this screen:


Timeline Graph

Press the ‘Stop Collection’ link to stop the diagnostics run and build a report. After a few seconds you’ll see a timeline graph at the top representing energy consumption over time.


This report provides power usage metrics for four items:

  1. CPU
  2. Display
  3. Network
  4. The total of the above 3.

Display Usage

The display metrics normally stay constant during the run; however, in the Timeline Chart at the top, the display usage drops to zero between 5s and 12s. This is the result of a problem in the WS Application which was tested, so the display usage during this time slot can be disregarded.

Some things which affect display usage are:

  1. Display size
  2. Screen brightness
  3. Display technology

CPU Usage

In the timeline graph of the energy usage by the CPU, we see several spikes between 1s and 5s. This is when the application is starting.

We’ll see another spike between 7s and 10s but again, we can ignore this metric change as it takes place during this particular application’s misbehavior.

The third spike is between 20s and 21s, which is where the application is opening and rendering its screen. The last spike, at 69s, happens when the application is shutting down and storing its data.

Network Usage

In this run we did not have any network usage, as the run was performed on a wired computer with no cell or Wi-Fi data transmissions. However, we should know that cell service transmissions can be a real drain on battery life if they are used to communicate a large block of data. This is because the radio constantly moves down from and back up to a higher energy level as it separates the block of data into smaller blocks for the cell network and transmits/receives them.

Doughnut Graph

This graph is shown on the right side of the report.


The paragraph at the bottom indicates how long a ‘Standard Battery’ would last if the user kept performing the same operations (about 11 hours in this case). According to Microsoft, the algorithm to determine the battery life uses a software model trained on lower-end devices. So a ‘Standard Battery’ here is a battery typically found on lower-end devices.

A typical use for this information is to first baseline a set of operations in your application to see how much energy was used. If that level is considered too high, the application should be refactored and this report run again.

XAML UI Responsiveness

A key to meeting the Fast and Fluid pillars is to keep the app rendering at 60 FPS. Visual Studio 2013 has a diagnostics tool named ‘XAML UI Responsiveness’ which addresses how responsive your XAML UI is.

The tool is run just like the Energy Consumption tool described earlier. Open your project, press Ctrl+Alt+F9, and check the XAML UI Responsiveness tool:

After you press the Start button your application will start, and you should use the target portion of your application for which you want the diagnostics. Then go back to Visual Studio and click the ‘Stop Collection’ link.

XAML UI Responsiveness Report

After clicking the link you will see a report such as this one:


This report is divided into four swim lanes.

Swim Lane 1

Shows how long the application ran during this diagnostic. You can click and drag to limit the other swim lanes to a specific time slice of the run. Note that you can click and drag on swim lanes 1-3.

Swim Lane 2

Shows the CPU utilization on the UI Thread separated into 4 categories:


Parsing

Occurs during parsing of the XAML into a tree of XAML elements.


Layout

Occurs when the XAML element tree is positioned and transformed into the visual tree.

Per Frame Callback

Occurs in code defined in the CompositionTarget.Rendering callback.

XAML Other

Any other application code executed during UI processing is shown in this category, for example modifying an element’s RenderTransform on a timer.

Swim Lane 3

Shows our key metric (frames per second) during the run of the application. The FPS is measured on the UI thread and on the Composition thread.

UI Thread

This thread is responsible for running per-frame callbacks (CompositionTarget.Rendering) as well as parsing the XAML and building the XAML element tree. It lays out the element tree by measuring, arranging, and applying styles to each element. Last, it takes the element tree and builds a visual tree by rasterizing each element.

In the screenshot above you’ll notice that the UI thread FPS is much lower than 60 FPS. This is okay because the Composition thread is running at 60 FPS, which means the UI thread is feeding the Composition thread rasterized elements fast enough for it to reach 60 FPS.

Composition Thread

Performs any independent translations and sends the rasterized visual tree to the GPU.

StandardStyles.xaml Visualizer


When I started working with WinRT apps in Visual Studio, one of the first things I noticed was all the AppBar Button styles in the StandardStyles.xaml dictionary. There are almost 200 in there, although they are all commented out by default when you create a new project.

Deciding which button to put on my AppBar was a common task, and it was frustrating because I couldn’t immediately see what these buttons looked like. There were places online that described how to examine the Segoe UI Symbol font in the Windows Character Map dialog, but that was a big pain and didn’t really show me what those AppBar Buttons would look like.


What I needed was some type of visualizer for the styles in the StandardStyles.xaml file.

My Resource Viewer Application

There are 4 main types of styles in StandardStyles.xaml:

  1. AppBarButtons
  2. Buttons
  3. TextBlocks
  4. RichTextBlocks

I needed an application that would read StandardStyles.xaml, collect the above 4 types of styles, and display each style’s target control on the screen. And it would be nice if I could click the styled control and get the XAML for creating a control with that style copied to my clipboard.

I wanted to be able to click this:


And get this on my clipboard:

Style="{StaticResource ZeroBarsAppBarButtonStyle}"

I’ve pasted screenshots of this app to provide a visual reference here for the 4 types of styles in StandardStyles.xaml. If you’re interested in the source code of my app, I’ve uploaded it to my SkyDrive here.
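The core of what the app does can be sketched in a few lines. This is not the app’s actual code (that’s in the SkyDrive download, and it’s a C#/XAML app); it’s a simplified JavaScript sketch that uses a made-up miniature of StandardStyles.xaml and a regex instead of a real XML parser:

```javascript
// Hypothetical sketch: scan the ResourceDictionary text for Style entries
// and group the style keys by TargetType. The XAML below is an invented
// miniature of StandardStyles.xaml; a real implementation should use an
// XML parser since attributes can appear in any order.
var xaml =
    '<ResourceDictionary>' +
    '  <Style x:Key="PlayAppBarButtonStyle" TargetType="Button" />' +
    '  <Style x:Key="TextButtonStyle" TargetType="Button" />' +
    '  <Style x:Key="HeaderTextStyle" TargetType="TextBlock" />' +
    '  <Style x:Key="ItemRichTextStyle" TargetType="RichTextBlock" />' +
    '</ResourceDictionary>';

function collectStyles(xamlText) {
    var styles = {};
    var re = /<Style\s+x:Key="([^"]+)"\s+TargetType="([^"]+)"/g;
    var match;
    while ((match = re.exec(xamlText)) !== null) {
        var keys = styles[match[2]] || (styles[match[2]] = []);
        keys.push(match[1]);
    }
    return styles;
}

// The string copied to the clipboard when a styled control is clicked:
function styleReference(key) {
    return 'Style="{StaticResource ' + key + '}"';
}

console.log(collectStyles(xaml).Button.length);           // 2
console.log(styleReference('ZeroBarsAppBarButtonStyle'));
```

Grouping by TargetType is what lets the app render the four sections shown below.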

AppBarButton Reference

This section shows all 192 AppBarButton styles.

Button Reference

This section shows the 6 Button styles.


TextBlock Reference

This section shows the 13 TextBlock styles.

RichTextBlock Reference

This section shows the 4 RichTextBlock styles.



Script# comes with a jQuery import library, but not a jQuery UI import library. I went out looking for one and found a project on CodePlex named Script# Contrib which hadn’t been completed yet but was a great start on the concept.

I took it and completed it for jQuery UI 1.8.21 and contacted one of the developers to ask about merging or forking my code. He told me they weren’t working on it anymore and to go ahead and fork it.

So I took it, cleaned it up, and wrote the jQuery UI Script# Import Library available here on CodePlex.


This library allows you to develop C# code against the jQueryUI API. For example, you can write this C#:



Which is then translated into this JavaScript by Script#:



I have a personal project I am working on in which I want to compile the method and class signatures (but not the implementation logic) of JavaScript files.

I found the solution I’m going to use but not before wandering through a buh-zillion pages on the web and trying a few approaches until I arrived at something which I think will work for what I need. This blog entry is about that wandering.

A JavaScript file for testing

I needed a script file with a class and method in it to use for my tests. There are different ways to create a ‘class’ in JavaScript, but for this test I decided to use the function prototype method generated from Script#.

The C# class:

public class Class1
{
    public void DoSomething(int parm1)
    {
    }
}

The resulting JavaScript:

//! SSGen.debug.js

(function() {

// Class1

window.Class1 = function Class1() {
}
Class1.prototype = {

    doSomething: function Class1$doSomething(parm1) {
    }
}

})();

//! This script was generated using Script# v0.7.4.0
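To show how a generated ‘class’ like this gets consumed, here’s a hand-written miniature of the same prototype pattern. The doSomething body is a stand-in of mine (the generated method is empty), and I’ve qualified the prototype assignment with window. — Script#’s output writes bare Class1.prototype, which works in a browser because window properties are globals, but the qualified form also runs outside a browser:

```javascript
// Hand-written miniature of the Script# prototype pattern, with an
// invented method body so there's something to call.
var window = typeof globalThis !== 'undefined' ? globalThis : this;

(function () {
    window.Class1 = function Class1() { };
    window.Class1.prototype = {
        doSomething: function Class1$doSomething(parm1) {
            return parm1 * 2; // stand-in body; the generated one is empty
        }
    };
})();

var instance = new window.Class1();
console.log(instance.doSomething(21)); // 42
```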

Attempt I – Use the Microsoft.JScript.Vsa.VsaEngine

I found an entry on Rick Strahl’s blog about evaluating JavaScript in C# and although it talks about evaluating JavaScript instead of parsing it, I figured it would be a good place to start.

I wrote this C# test code:

        static private void ParseJsWithVsa()
        {
            string jsPath = @"SSGen.debug.js";
            string javaScript = File.ReadAllText(jsPath);
            VsaEngine engine = VsaEngine.CreateEngine();
            object evalResult = Eval.JScriptEvaluate(javaScript, engine);
        }

And got these compilation warnings:


But it did compile! So I ran it and this was my result:


Okay, well that’ll be easy to fix, I thought. I decided to just remove the window. prefix from the window.Class1 line of the JavaScript and try it again, with this result:


Arrgh. It seems the parsing worked without trouble but the actual evaluation failed. I couldn’t find any way to get at the parsing results, so I decided not to work on this approach any longer. Instead I decided I’d try the ICodeCompiler noted in the compilation warnings above.

Attempt II – Use the JScriptCodeProvider

Here’s the C#:

        static private void ParseJsWithICodeCompiler()
        {
            string jsPath = @"SSGen.debug.js";
            JScriptCodeProvider jsProvider = CodeDomProvider.CreateProvider("JScript") as JScriptCodeProvider;

            using (TextReader text = File.OpenText(jsPath))
            {
                CodeCompileUnit ccu = jsProvider.Parse(text);
            }
        }

Here’s the result:

ICC Error

Oh well. So much for that idea.

Attempt III – Use JSLint

I know that JSLint is a code quality tool but I figured that somewhere down in the mess of JSLint code it’s got to parse the target script and maybe I could hook into it and get the results of the parsing action.

I created a test ASP.Net web site which would show me the results of running JSLint against my test script file.

Here is my ASP.Net site’s markup:

    JSLint Parse Tester

<script type="text/javascript" src=""></script>
<script type="text/javascript" src="Scripts/Test.js"></script>
<script type="text/javascript" src="Scripts/JSLint.js"></script>

<form id="form1">
<h3>Result of calling JSLINT(script);</h3>

<hr />

<h3 id="resultsTitle"></h3>
<div id="results"></div>
</form>

And here is the JavaScript I wrote to execute JSLint against my test script file:


function OnGetScriptToParseComplete(script)
{
    var result = JSLINT(script);

    if (result)
    {
        var tree = JSON.stringify(JSLINT.tree, [
            'string', 'arity', 'name', 'first',
            'second', 'third', 'block', 'else'
        ], 4);

        tree = tree.replace(/\n/g, "<br />");
        tree = tree.replace(/ /g, "&nbsp;");

        $('#results').html(tree);
    }
    else
    {
        var errs = JSON.stringify(JSLINT.errors, undefined, 4);

        errs = errs.replace(/\n/g, "<br />");
        errs = errs.replace(/ /g, "&nbsp;");

        $('#results').html(errs).css('color', 'red');
    }
}

$(document).ready(function ()
{
    $.ajax({
        url: "scripts/ToParse.js",
        dataType: 'text',
        success: OnGetScriptToParseComplete
    });
});
And here are the results:

JSLint results
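The keys passed to JSON.stringify above (‘string’, ‘arity’, ‘name’, ‘first’, and so on) hint at the shape of JSLINT.tree. As a rough illustration of how I might have hooked into it for my signature-collecting goal, here’s a sketch that walks a tree of that shape and collects function names. The mock tree and the arity values in it are my own invention, not actual JSLint output:

```javascript
// Walk a JSLINT.tree-shaped structure collecting the names of function
// nodes. The child keys come from the stringify call above; the mock
// tree below is invented for illustration.
function collectNames(node, names) {
    if (!node || typeof node !== 'object') { return names; }
    if (node.arity === 'function' && node.name) { names.push(node.name); }
    ['first', 'second', 'third', 'block', 'else'].forEach(function (key) {
        var child = node[key];
        (Array.isArray(child) ? child : [child]).forEach(function (c) {
            collectNames(c, names);
        });
    });
    return names;
}

var mockTree = {
    arity: 'function', name: 'Class1$doSomething',
    block: [{ arity: 'infix', string: '*', first: { string: 'parm1' } }]
};
console.log(collectNames(mockTree, [])); // [ 'Class1$doSomething' ]
```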

Before I delved into this approach any further I continued looking online for solutions to parse JavaScript and found something named ANTLR which seemed to be exactly what I needed.

Attempt IV – Use ANTLR

ANTLR is a tool which, among many other things, lets me generate lexers and parsers in a target language (e.g. C#) for a specific grammar.

I’ve been working with ANTLR for a couple of days now. It has several ‘pieces’ which must be downloaded, version-matched, and fitted together to do what I need, but it seems to be the best solution so far. I’ll write more about ANTLR and my project in the next part of this series.


Part I of this series is here

Part II of this series is here


By monitoring the TMG log while trying to make a call, I figured out what additional rules and network entities I needed:


An address range for PHONE.COM


A domain name set to SIP.PHONE.COM

SIP Phone Com

A new protocol I named RTP Pope

RTP Pope

A new protocol I named UDP 6060

UDP 6060

Two new firewall rules (Phone Init and NTP)



The VOIP phones have been working over Microsoft TMG for about 3 months now without problems.

Part I

In Part I I talked about moving my church’s office and network resources to the cloud after they decided to sell their building to construct a new site, dispersing from their old office to a temporary office and to working from their homes.

My church decided to move their phone numbers to Phone.Com to use IP phones and soft phones for their communications after their move from their old office. In this post, I’m going to talk about my struggle to get their IP phones working through Microsoft TMG.

Why Microsoft TMG?

I’ve been asking myself that question a lot lately. At the office I had them set up with a Cisco PIX for their firewall, but I chose to move them to TMG since it’s a lot easier for me to manage. In addition, I believe it will help with the Office 365 identity integration with their Active Directory. But right now I’m just trying to get their IP phones working.

The First Experience

I didn’t know how IP phones worked when I started this; at the church office, a phone provider brought the lines in to a PBX and then to the offices. There was nothing for me to manage.

At the temporary office, everything worked for the IP phones as long as I used the Verizon FiOS router, but stopped as soon as I switched over to TMG. Nice. Time for me to go and study how IP phones are supposed to work.

After poking around the Internet reading about the SIP and RTP protocols, I discovered a really slick wizard in TMG which let me create rules for VoIP. I went through the wizard and ended up with three new firewall policy rules:


This made sense so I grabbed an IP phone and dialed someone. The call went through (yippee!!!) but I couldn’t hear them and they couldn’t hear me (sigh).

So the SIP signaling went through fine, but the RTP traffic which carries the voice didn’t.

What The TMG Log Showed

I started a log query on TMG restricted to just the IP phone’s IP address and made another prank call. This is what the log showed me:

Log 1

There are two things I see wrong here. The first and obvious one is that the connection was denied. The second is that the protocol name is ‘Phones 6060’, an old protocol I had defined earlier while working through my ignorance of how IP phones worked, and which I have since deleted.

I see that the destination port is an even number. In the RTP protocol, the media is sent over an even port number and the corresponding control traffic (RTCP) is carried over the next higher port number, which is odd. So this log is telling me that the phone was trying to perform RTP communications over the even port 32746 but the attempt was denied, and that would be why I don’t see any communications over port 32747 coming back.
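That even/odd pairing convention can be expressed as a tiny helper. The port numbers are the ones from the log entry; the helper itself is just illustrative:

```javascript
// The RTP port-pairing convention: media flows on an even port, and the
// matching RTCP control channel uses the next (odd) port.
function rtpPortPair(mediaPort) {
    if (mediaPort % 2 !== 0) {
        throw new Error("RTP media is expected on an even port");
    }
    return { rtp: mediaPort, rtcp: mediaPort + 1 };
}

console.log(rtpPortPair(32746)); // { rtp: 32746, rtcp: 32747 }
```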

But why is the protocol ‘Phones 6060’ showing up in the logs? I know I deleted it and have even rebooted the TMG server since then, so there’s no reason for it to appear. The fact that this deleted protocol is showing up, and the fact that TMG is not allowing the IP phones out even after I created the proper firewall rules, makes me doubt that what I see in the TMG UI is actually what TMG is doing under the covers. Thanks, Microsoft.

Well, it’s a rainy and stormy day today so this is as good a day as any to work through this…


The decision

My church has decided to sell their land and buildings in order to build and move into a new location about 20 miles away. This new site will not be completed for about a year, so in the meantime they’ll be working out of temporary offices and their homes in a much more loosely coupled fashion.

Since I handle their email, file shares, networking, and various other IT stuff on the side, I decided it was time to look at moving their data to the cloud.


Where to put all those files?

They easily have over 400GB of files on their network shares. Google Docs was the first place I looked, but I found that I’d have to convert all those files to Google’s format and that the documents could not be cached locally, so users would have to be online to get to them. That was a non-starter for me, so I nixed Google Docs.

Next I took a look at Amazon S3. Amazon makes it very easy to get this many documents up to S3: I just send them a USB drive with all of the docs on it. And the pricing was perfect, because at $0.14/GB plus the estimated number of requests and data transfer size, they would be looking at about $100/month.
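As a sanity check on that estimate: the $0.14/GB-month storage rate and the 400GB figure are from above, while the request/transfer number below is a placeholder assumption of mine, only there to show how the ~$100/month total is composed:

```javascript
// Back-of-the-envelope S3 cost estimate.
var storageGb = 400;
var storageRatePerGb = 0.14;            // 2011-era S3 standard storage price
var storageCost = storageGb * storageRatePerGb;   // ~ $56/month for storage

var assumedRequestsAndTransfer = 44;    // placeholder for the rest of the estimate
var monthlyEstimate = storageCost + assumedRequestsAndTransfer;

console.log(Math.round(monthlyEstimate)); // 100
```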


What about the email?

I began preparing the files for shipping to Amazon by locking the file shares down to read-only access and copying the files to a removable drive over a period of several days. While I was in the middle of doing this, I began to hear and read about Microsoft’s Office 365 and realized it would be perfect for their email. I wouldn’t have to manage a mail server any longer or act as liaison to their current mail provider.

For far less than they were paying their current provider, they could have their email in Exchange and their working office files in SharePoint in the cloud. I researched the O365 plans and set them up on the E4 plan. With this plan, they’ll be able to use and upgrade to the latest version of Office with each release, have multiple gigabytes of mail storage, and have tens of gigabytes of SharePoint storage.


To be continued…

I’ll continue this series with information on moving their email, setting up their contacts, and moving their files to Office 365 and Amazon.