
2012 in review

The stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

600 people reached the top of Mt. Everest in 2012. This blog got about 8,000 views in 2012. If every person who reached the top of Mt. Everest viewed this blog, it would have taken 13 years to get that many views.

Click here to see the complete report.

Windows Phone Eight Got No Chance

According to the Windows Phone Developer Blog, Windows Phone has certified and published over 75,000 new apps and games over the last year.  The platform, in my mind, is superior as a development platform; it leverages a proven runtime and well-documented, easy-to-use libraries, and is portable across devices of various form factors.  Last I messed around with the Windows Phone SDK, I was able to write a basic app that let a user search for recipe keywords and displayed the results in about 20 minutes; this was without reading any documentation, tutorials or code samples.  Try that with iOS or Android development.  Regardless of what you think about Microsoft as a company, their development tools and platforms are arguably worlds beyond anything else out there.

Guess what.  None of this matters.

Microsoft’s attempt to muscle their way into the mobile OS market has been met with failure after failure because they’re constantly trying to play catch-up with the moving target of innovation.  I live and breathe in the start-up world.  I attend start-up meetings, groups and conventions.  I’m on countless mailing lists and online bulletin boards focused on entrepreneurs.  Almost no one I talk to is thinking about, or even interested in, building mobile applications on the Microsoft platform.  A start-up’s resources are extremely limited, and most have little or no outside funding.  No matter how fantastic the development platform is, there’s simply no reason to waste cycles building a completely separate code base for a platform that has about 3% of the market share.  Even huge companies that actually do have the resources to build Windows Phone versions of their products are either choosing not to, or just do so as an experiment to test the waters.  The result?  Innovation itself moves at a faster pace than Windows Phone can adapt.  In other words, all the cool new things coming out and making the headlines in the tech world are not coming out on the Windows Phone platform.  If your platform is not adopted by the geeks, the innovators and the early-adopters, it has no chance of jumping that chasm into the mainstream.

Microsoft is well aware of this problem and is working on genetically engineering chickens to lay those non-existent eggs.  A lot of this comes in the form of either developing apps themselves (for example, Facebook and Twitter sharing is simply built into the OS) or paying smaller companies with the top apps to create Windows Phone ports.  There are two problems with this approach, neither of which I have solutions for.

First, you’re still straggling behind the line of innovation.  Only after an app has proven itself, has 50 million users, and makes the front page of TechCrunch and Wired will Microsoft begin to back such a port.  Only now are we seeing things like Evernote or Rhapsody for Windows Phone, even though these apps are mainstream on other platforms.  If you’re a geek with a Windows Phone, you’ll be waiting on the sidelines while all your iPhone and Android friends enjoy all the cutting-edge innovation coming out every week.

Second, the apps that do make it out for Windows Phone are the red-headed stepchildren of the real thing.  Sure, there are a few Twitter apps for Windows Phone.  None of which were actually written by Twitter.  With Twitter’s reputation for screwing over devs, who knows if these apps will even continue to work in the future.  Who knows if they might suddenly break when Twitter rolls out some great new feature that changes the way all their APIs work.  Know what won’t break?  The iOS and Android Twitter apps, written by Twitter.

I’m somewhat of a Yelp addict myself.  Somehow, this company has managed to install some sort of Pavlovian response in my psyche forcing me to check in wherever I go.  I actually really dig this.  Sometimes, I’m trying to remember the name of a place I was at a few weeks ago, and can dig it up in my Yelp history.  I also sometimes notice other people I know have checked in as well, or get Facebook comments on my check-in.  Every now and then, there’s some sort of discount or coupon offered when checking in.

Is there a Yelp app for Windows Phone?  Well, if you want to call it that.  The Yelp app on Windows Phone is pretty much an insult to all things Yelp.  You can’t check in, you can’t rate or write reviews; it’s basically a read-only interface to Yelp, and provides absolutely nothing you couldn’t get by visiting the website itself.  There’s absolutely nothing mobile about it – just read the reviews!

It seems a lot of these so-called ports are simply an excuse to check off a column in a list.  “Yup, we have a Yelp app – check!”

Now if I’m shopping for phones, I not only have to make sure there’s a Yelp app, but I also have to research the feature parity of this app by reading reviews or trying it out on a friend’s phone first.  This leads to a total distrust of the porting efforts.  If an app is ported simply to slap a brand name in the app store, but provides absolutely nothing that made that app successful, the port is a lie.

Imagine if Yelp came out with a new feature that let you take a picture of the menu at a restaurant and added star ratings to each menu item using augmented reality.  We’d see this ported over to Windows Phone in about 3-7 years.  Once again, Microsoft cannot play catch-up as innovation marches forward.

It seems Microsoft’s attempting to build the world’s greatest mobile platform (which they might have succeeded at) and expects innovation to simply happen magically.  In reality, there’s absolutely nothing urging the innovators to develop the new “it” app of tomorrow on this platform.

Microsoft needs to find a way to fuel innovation itself on this platform, not bribe the current top innovators to write these less-than-stellar ports of their hit iPhone apps.  In fact, I’d say this is pretty much all they should be doing.

If I were Microsoft, here’s what I’d do:

  • Drop all development fees completely.  No more $100/yr fee to develop on Windows Phone.  Update: The Microsoft BizSpark program will actually waive this fee for the first year.  It’s a fantastic program which I encourage all start-ups to look into.
  • Invest in start-ups that will develop cutting edge technologies on their platform.  Start a $100MM VC fund for mobile innovators.
  • Invest in technologies that make cross-platform development easier.  Imagine if Microsoft invested in or acquired Xamarin, and turned Mono into a cross-platform phone development tool.  Using Visual Studio, developers could write native apps in C# that targeted Windows Phone, Android and iOS using technologies such as Silverlight and XAML.  Phone specific features could be abstracted with minimal platform specific code.  It’s possible to make developing an app for both Windows Phone and iPhone easier than developing the same app for iPhone only.
  • Push employees to build cool apps internally.  Start a program where Microsoft employees can build cool apps while at work, publish the apps to the store, and get them promoted by Microsoft.  Employees should be able to keep 100% of any revenue the app makes as a bonus.

The way things are going right now, I can’t see Microsoft even making a dent in the mobile and tablet market.  They seem to be lagging far behind and focusing way too much on catch-up and way too little on innovation.  I’d love to see that change, as the platform is awesome and the development tools are top notch.  I guess only time will tell.

Using ENUM types with PostgreSQL and Castle ActiveRecord

One thing I really love about PostgreSQL is ENUM types, which basically allow you to create enumerations as datatypes and use them just as you would any intrinsic type.  For example, I have an ENUM called UnitsEnum that allows me to represent a unit of measurement (cup, teaspoon, gallon, etc).  I then use this data type in various database columns, functions, views, whatever.

These data types are simple to create with the following command:

CREATE TYPE UnitsEnum AS ENUM ('Unit', 'Teaspoon', 'Tablespoon', 'FluidOunce', 'Cup', 'Pint', 'Quart', 'Gallon', 'Gram', 'Ounce', 'Pound');

From that point on, you can use UnitsEnum just as you would a varchar or an int or a Boolean datatype, using it to define column definitions, using it as a function parameter or return value, etc. For all practical purposes, it’s now built into Postgres:

CREATE TABLE RecipeIngredients (
  -- Stuff here
  Unit UnitsEnum NOT NULL -- Unit column of type UnitsEnum
);

select count(1) from RecipeIngredients where Unit = 'Teaspoon';

select count(1) from RecipeIngredients where Unit = 'Crapload';
--Error is thrown

As far as the user is concerned, the data type appears to be a string that simply errors out when the value is not within the allowed set.  In fact, strings can implicitly convert to an enum (though numeric types cannot, which I’ve found odd).
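A quick sketch of that asymmetry, using the RecipeIngredients table from above (the exact error messages depend on your Postgres version):

```sql
-- A string literal implicitly converts to the enum:
SELECT count(1) FROM RecipeIngredients WHERE Unit = 'Cup';

-- A numeric value does not, even though enums are stored numerically:
SELECT count(1) FROM RecipeIngredients WHERE Unit = 4;  -- operator error

-- An explicit cast from text works as well:
SELECT 'Cup'::UnitsEnum;
```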

Using an ENUM as a column type is somewhat analogous to creating a text column with a foreign key constraint against a table of measurements containing the strings in the enumeration above.  However, on the heap the ENUM is actually stored in its numeric representation, which could possibly make binary searches a bit faster.  One could argue you could do the same with a numeric data type to hold enumerations, plus a CHECK constraint to limit the values.  That would work equally well; however, I really like being able to see the string representations of these types when I’m performing ad-hoc queries on my database.  This is the best of both worlds.
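For comparison, the CHECK-constraint alternative mentioned above might look something like this (a sketch; the table name is invented for illustration):

```sql
CREATE TABLE RecipeIngredientsAlt (
   -- 0 = Unit ... 10 = Pound, mirroring the numeric values of the C# enum
   Unit smallint NOT NULL CHECK (Unit BETWEEN 0 AND 10)
);
```

Ad-hoc queries against this table would show only bare numbers, which is exactly the readability trade-off described above.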

While this is awesome from a database design perspective, what would be even more awesome is bridging that data type with the business logic code. In other words, I want the models in my ORM to use C# enums that map perfectly to the defined Postgres type.

Luckily, that’s possible to do with Castle ActiveRecord, and actually quite easy.

First, we’d need a C# enum. It would need to match the types in the Postgres ENUM (the names, that is, as the numeric values don’t actually matter here):

public enum Units
{
   //Individual items
   Unit = 0,

   Teaspoon = 1,
   Tablespoon = 2,
   FluidOunce = 3,
   Cup = 4,
   Pint = 5,
   Quart = 6,
   Gallon = 7,

   Gram = 8,
   Ounce = 9,
   Pound = 10
}

Now, let’s use this in a model called RecipeIngredient, which represents a usage of an ingredient within a certain recipe:

public class RecipeIngredient
   : ActiveRecordLinqBase<RecipeIngredient>
{
   // Various column mappings go here
}

First, we’d add a new column mapping called Unit:

private Units unit;

[Property(SqlType = "UnitsEnum", ColumnType = "KPCServer.DB.UnitsMapper, Core")]
public Units Unit
{
   get { return unit; }
   set { unit = value; }
}

We now have a Unit property of type Units, our C# enumeration of valid units of measurement.  So now the model is restricted to only valid units of measurement, with all the type safety of C#.

There are a few things to look at on the [Property] attribute for this mapping.  First is SqlType.  This property defines which SQL type will be used for this column, and ActiveRecord will use it when writing database creation scripts and whatnot.

Next, the ColumnType attribute specifies a managed type that’s responsible for type mapping information used by NHibernate. I’m using KPCServer.DB.UnitsMapper, in the Core.dll assembly. Unfortunately, a mapper class can only represent a single ENUM type. Thus, if you have a bunch of enums, you’ll need to repeat a lot of this logic. To DRY this out a bit, I’ve created a helper base class called PgEnumMapper:

public class PgEnumMapper<T> : NHibernate.Type.EnumStringType<T>
{
   public override NHibernate.SqlTypes.SqlType SqlType
   {
      get { return new NHibernate.SqlTypes.SqlType(DbType.Object); }
   }
}

The reason we need to do this is that, by default, NHibernate will convert C# enums to their numeric representations (which would be much more database agnostic), attempting to store their values as numbers in your database.  Postgres will not cast numeric values to an ENUM type, so you’d get an error trying to do this.  The mapper above says, “No, I need this to be a string in the SQL you generate.”

Now, we can use this PgEnumMapper for each of our enums:

public class UnitsMapper : PgEnumMapper<Units>
{
}

No need to add anything to UnitsMapper; it’s perfectly fine the way it is.  Of course, you’d want to create a PgEnumMapper-derived class for each C# enum you want to support in your ORM.

That’s all there is to it! You can now use ENUM types in Postgres and magically have them map to C# enums. Wow I love Postgres.
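To show the round trip end to end, here’s a hedged usage sketch (assuming the RecipeIngredient mapping above; Save() comes from the ActiveRecord base class):

```csharp
RecipeIngredient ri = new RecipeIngredient();
ri.Unit = Units.Teaspoon;  // type-safe on the C# side

// On Save, NHibernate emits the string 'Teaspoon', which Postgres casts
// to the UnitsEnum column type; numeric enum values never hit the wire.
ri.Save();
```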

WCF and ActiveRecord, let’s be friends!

For the coders out there that follow this blog, you’ll probably know the KitchenPC back-end is completely built on Castle ActiveRecord, an easy-to-use ORM for .NET built on NHibernate.  I’m a huge fan, even though the project is no longer active and most people are probably using Fluent NHibernate or Microsoft’s own ORM, the Entity Framework (blech).  I also happen to be a huge fan of the ActiveRecord pattern in general, so I’m sticking with it and you can’t make me change.

Getting WCF to play nicely with ActiveRecord turned out to be somewhat of a time vampire; not because the solution is hugely complicated, but because it required me to dig into the inner workings of both WCF and ActiveRecord.  I also found almost zero information out there covering integration between these two technologies, as most blog posts and articles focus on NHibernate itself and may or may not apply to ActiveRecord-specific implementation details.  Posting questions on StackOverflow also yielded nothing but cricket noises.

For that reason, I decided to write a technical blog post illustrating exactly how to get these guys to be friends.  Turns out, it’s not all that scary!


First, I will assume the reader is already familiar with Castle ActiveRecord, knows how to define mappings, can initialize the framework in Application_Start, and has the basics of using the framework down.  If not, I’d suggest reading up on Castle ActiveRecord, following a few tutorials, and trying it out first on a simple ASP.NET web site.

I’ll also assume you have some basic knowledge of WCF.  However, if you want to be really well versed in the extensibility story for WCF, I highly suggest reading this article.

ActiveRecord Scopes

NHibernate, for reasons of efficiency, works based on sessions.  To avoid running tons of SQL statements after every line of code you write, the underlying engine will batch things up and run a bunch of statements at once when it’s wise to do so.  It’s also smart enough to know when it needs to re-query data, commit pending transactions first, or invalidate cached data.  SessionScopes can be nested, which means internally they’re represented in a stack of SessionScope objects.

A SessionScope object tracks a collection of pending queries, and commits them to a database (provided everything is kosher) when the object is disposed.  A very simple example of this might be:

   using (new SessionScope())
   {
      Recipe r = Recipe.Find(123);
      r.PrepTime = 60;
      r.Ingredients.Add(new Ingredient(5));
   }


Here, we find a Recipe object in the database with the ID of 123, set a new prep time for that recipe, then add a new ingredient to the recipe’s ingredient collection. NHibernate, under the covers, is tracking what objects have changed and issuing UPDATE commands through the database provider when SessionScope is disposed.  If we throw an exception halfway through, things we already updated won’t actually get changed, as those UPDATE commands will never be sent.

One thing you’ll notice is that I don’t actually refer to my SessionScope object anywhere. You might be asking yourself how the framework knows which session we’re currently in. Well, this is done through an implementation of IThreadScopeInfo. When you call SessionScope.Current, the Castle framework calls upon the configured IThreadScopeInfo, which is responsible for finding the current session.

Castle ActiveRecord ships with a few of these. One is called WebThreadScopeInfo and stores the current session stack in the HttpContext.Current.Items[] collection. This means that the session is scoped to each individual HTTP request. Another implementation is HybridWebThreadScopeInfo, which allows you to run sessions in any random ol’ thread, where HttpContext.Current would be null. The HybridWebThreadScopeInfo class will first check HttpContext.Current, and if it’s null, return a session stack keyed to the current thread, creating a new stack if necessary.

This design allows you to call functions which call other functions which call other functions without having to worry about passing sessions and scopes all over the place.

Castle ActiveRecord actually makes this even easier. There exists an HTTP module called SessionScopeWebModule which runs before each HTTP request, creates a new SessionScope for that request, and disposes of it when the HTTP request ends. Using this module, you can write the above code as just:

   Recipe r = Recipe.Find(123);
   r.PrepTime = 60;
   r.Ingredients.Add(new Ingredient(5));


There’s no need to new up a SessionScope, as one has been created for you by SessionScopeWebModule. Pretty cool, eh?
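For reference, wiring up that module in web.config looks roughly like this (the exact type and assembly names may differ between Castle versions, so treat this as a sketch):

```xml
<httpModules>
   <add name="SessionScope"
        type="Castle.ActiveRecord.Framework.SessionScopeWebModule, Castle.ActiveRecord" />
</httpModules>
```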

So why doesn’t this solution work in WCF?

As I pointed out in my previous post, the WCF stack is completely independent of ASP.NET. HTTP modules won’t be run and there’s no HttpContext.Current.Items collection to store the active session stack. For this reason, you’ll need to re-create a few of these solutions in a way that’s compatible with WCF’s architecture.  You can also run in aspNetCompatibilityEnabled mode, but that has its own set of limitations.

Unfortunately, Castle ActiveRecord doesn’t have support for this out of the box, but it offers the extensibility to design a solution similar to the ASP.NET handlers. In fact, I based my code completely off the built in SessionScopeWebModule HTTP module and HybridWebThreadScopeInfo scope info class.

Step 1: Creating a scope for each WCF request

In ASP.NET, this would be done using an httpModule.  In WCF, this is done by implementing a message inspector.  A message inspector, which is an implementation of IDispatchMessageInspector, is a class that can look at a message traveling through the WCF pipeline and do things before and after the message is processed.  It’s dead simple, and has an AfterReceiveRequest method which is called after the request is received and deserialized, and a BeforeSendReply which is called before the reply is assembled.  We can use this to create a new session scope for our WCF operations.

public class MyInspector : IDispatchMessageInspector
{
   public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
   {
      return new SessionScope();
   }

   public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
   {
      SessionScope scope = correlationState as SessionScope;

      if (scope != null)
         scope.Dispose();
   }
}

Ok so what’s going on here? When we receive a request, we new up a SessionScope object. The constructor actually adds itself to the current session stack through the configured IThreadScopeInfo, so all that is taken care of for us. This is actually quite similar to what the web module does, only the web module adds that SessionScope object to the HttpContext.Current.Items collection so the scope can be disposed of later.  We don’t need to do that because we take advantage of correlation state.  Any object AfterReceiveRequest returns will be passed in to BeforeSendReply, which is useful for tracking objects in a message inspector through the lifespan of a WCF request.

You’ll notice that we use this concept in BeforeSendReply to grab the reference to the SessionScope created above and dispose of it, thus committing any pending transactions to the database.

This inspector now has to be attached to a WCF service. This can be done in a variety of ways (mostly mucking with your web.config), but perhaps the easiest is by implementing a custom IServiceBehavior. An IServiceBehavior defines a custom behavior for a service, such as adding custom message inspectors. A very cool feature of this is your IServiceBehavior can derive from the .NET Attribute class which allows you to tag a WCF service directly with a behavior, no need to mess around with configuration files. This class is quite simple to implement:

public class MyServiceBehaviorAttribute : Attribute, IServiceBehavior
{
   public void AddBindingParameters(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase, System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
   {
   }

   public void ApplyDispatchBehavior(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase)
   {
      foreach (ChannelDispatcher cDispatcher in serviceHostBase.ChannelDispatchers)
      {
         foreach (EndpointDispatcher eDispatcher in cDispatcher.Endpoints)
         {
            eDispatcher.DispatchRuntime.MessageInspectors.Add(new MyInspector());
         }
      }
   }

   public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
   {
   }
}

Nothing to really explain here. When the behavior is applied, it goes through all the endpoints that connect to that service and applies our message inspector. You can then tag your WCF service with that behavior:

[MyServiceBehavior]
public class Service1 : IService1
{
   // Your operation contracts here
}

When we have this hooked up, every request to your service is guaranteed to have its own session scope, so there’s no need to new up a SessionScope object yourself. It’ll work just like ASP.NET using the SessionScopeWebModule module.

Well crap, that won’t actually work…

If you’re smart and/or know a lot about IIS, you’ll see a major problem with this. WCF exhibits thread agility, which means the message inspector (where our SessionScope was created) and the service operation (where we’ll be looking for SessionScope.Current) could run on different threads!

As I mentioned before, the current SessionScope is resolved through an IThreadScopeInfo implementation. The WebThreadScopeInfo implementation uses HttpContext.Current to track the session stack, providing immunity against ASP.NET thread agility. The HybridWebThreadScopeInfo implementation attempts to use HttpContext.Current, but if it’s not found it basically creates a stack keyed by the local thread. This kinda works for simple cases, like loading some data into memory within an initialization thread, but probably isn’t very smart for us. If we need to lazy-load some property and a session scope had never been created on that thread, we’re going to crash with the exception: “failed to lazily initialize a collection of role: xxx, no session or session was closed”.  This would of course happen intermittently, and only in the middle of the night or on your vacation.  You’ll never be able to reproduce it on your dev box, and it would make you hate the world and yearn for a simpler time before all this technology.

The solution? Write a super-duper-hybrid session scope info class. One that does it all. This thing of beauty will check the HttpContext.Current if we’re running on an ASP.NET page, check the WCF OperationContext.Current if we’re in an WCF operation, and then fall back to a stack keyed by the thread if all else fails. It will slice, it will dice, you’ll be seeing infomercials for this IThreadScopeInfo. And I won’t charge you three payments of $19.95 to use it. In fact, I’ll give you the code for free.

This implementation is heavily based on the HybridWebThreadScopeInfo implementation, which I recommend skimming over first. I just added a bit of extra logic to handle the WCF case.

public class WcfThreadScopeInfo : AbstractThreadScopeInfo, IWebThreadScopeInfo
{
   const string ActiveRecordCurrentStack = "activerecord.currentstack";

   [ThreadStatic]
   static Stack stack;

   public override Stack CurrentStack
   {
      get
      {
         Stack contextstack;

         if (HttpContext.Current != null) //We're running in an ASP.NET context
         {
            contextstack = HttpContext.Current.Items[ActiveRecordCurrentStack] as Stack;
            if (contextstack == null)
            {
               contextstack = new Stack();
               HttpContext.Current.Items[ActiveRecordCurrentStack] = contextstack;
            }

            return contextstack;
         }

         if (OperationContext.Current != null) //We're running in a WCF context
         {
            NHibernateContextManager ctxMgr = OperationContext.Current.InstanceContext.Extensions.Find<NHibernateContextManager>();
            if (ctxMgr == null)
            {
               ctxMgr = new NHibernateContextManager();
               OperationContext.Current.InstanceContext.Extensions.Add(ctxMgr);
            }

            return ctxMgr.ContextStack;
         }

         //Working in some random thread
         if (stack == null)
            stack = new Stack();

         return stack;
      }
   }
}

At first glance, it looks similar to the HybridWebThreadScopeInfo code, because, well, it is. It has a ThreadStatic Stack field, it checks the HttpContext.Current first, blah blah. The difference is the part in the middle which checks OperationContext.Current. This property will have a value if we’re running within the WCF pipeline. An OperationContext is analogous to HttpContext, but only for the WCF world. Similar to HttpContext.Current.Items, we can even store random junk we want to have for the lifespan of the operation. However, rather than just being a simple key/value pair like .Items is, they had to make it a bit more complicated and use something called Extensions.

An extension is any class that derives from IExtension<T>. Our extension, of course, needs to store the current SessionScope stack. Implementing it is pretty straightforward.

public class NHibernateContextManager : IExtension<InstanceContext>
{
   public Stack ContextStack { get; private set; }

   public NHibernateContextManager()
   {
      this.ContextStack = new Stack();
   }

   public void Attach(InstanceContext owner)
   {
   }

   public void Detach(InstanceContext owner)
   {
   }
}

You’ll notice IExtension<T> has Attach and Detach methods that are called by WCF when an extension is added to or removed from the operation context, which we implement as empty methods just to satisfy the interface.

We can configure ActiveRecord to use this awesome new IThreadScopeInfo implementation within web.config:

   <activerecord threadinfotype="WcfThreadScopeInfo, Website">

Or if you’re using an InPlaceConfigurationSource and initializing ActiveRecord in your global.asax, you can use:

InPlaceConfigurationSource source = new InPlaceConfigurationSource();
// Stuff
source.ThreadScopeInfoImplementation = typeof(WcfThreadScopeInfo);

This will tell ActiveRecord to use WcfThreadScopeInfo to resolve the current session scope any time it needs to know.
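Putting it together, here’s a hedged Global.asax initialization sketch (connection settings elided; the model types passed to Initialize are examples from this post):

```csharp
protected void Application_Start(object sender, EventArgs e)
{
   InPlaceConfigurationSource source = new InPlaceConfigurationSource();
   // ... connection string, driver and dialect settings go here ...
   source.ThreadScopeInfoImplementation = typeof(WcfThreadScopeInfo);

   // Register each ActiveRecord model so mappings are built at startup
   ActiveRecordStarter.Initialize(source, typeof(Recipe), typeof(RecipeIngredient));
}
```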

In Summary…

Ok so what’s going on now? When a WCF request comes in, the IDispatchMessageInspector message inspector is run, which creates a new ActiveRecord scope for that request. The constructor for SessionScope will see if there’s already a SessionScope stack to add itself to, and it will do so by looking at the configured IThreadScopeInfo implementation. Our implementation will be able to use an ASP.NET context, a WCF context, or a thread context to store this stack in. This means that any time we need to look up the current session, which is done all over the place in the ActiveRecord framework, we’ll be able to find the current session scope or create one if necessary. When the request ends, the session scope is disposed of and pending updates are committed to the database. If you had new’ed up any child session scopes on the stack, you’d of course be responsible for disposing of those properly as well.

Well, there you have it. Making WCF and ActiveRecord best friends is fairly straightforward, and lets you learn about the inner workings of NHibernate, ActiveRecord, and the WCF pipeline all at the same time. Have fun!

One does not simply… switch to WCF

In my last blog post, I talked about the nature of new code, and how it sometimes spawned a spontaneous redesign of existing architecture to avoid future technical debt.  In my case, laying the initial foundation for mobile and tablet apps caused me to rethink my web service layer.  Since then, I’ve been spending a lot of time going through the KitchenPC API and making sure everything is solid, well designed, consistent and flexible.  After all, once mobile and tablet apps have been built, it will be difficult to change the interface contracts.

Unfortunately, I seem to have fallen down a rabbit hole and signed up for a pretty extensive re-design of the KitchenPC back-end.  I ran into conflicts between my ideal API design and the limitations of ASP.NET Web Services.  In the end, I decided to upgrade the KitchenPC web service layer to use a more modern framework, Windows Communication Foundation (WCF).

Turns out, one does not simply… switch to WCF.

On the outside, ASP.NET Web Services (using either SOAP or JSON) and a WCF service are similar in nature, especially when running under IIS.  An ASP.NET Web Service is an .ASMX file that points to an implementation of a class derived from System.Web.Services.WebService.  Publicly exposed methods of this class are tagged with [WebMethod] and accept and return serializable data types.  ASP.NET is able to generate a JavaScript proxy on the fly, allowing client-side browser code easy access.  In WCF, an .SVC file points to a Service contract, which is any class that’s tagged with the [ServiceContract] attribute.  Often, this is done by implementing an interface tagged with said attribute.

The fact that an ASP.NET Web Service extends a common base class, WebService, and a WCF Service does not, is an important consideration when porting existing code.  Any methods that depend on the ASP.NET request pipeline, such as references to Context, Session, or Application, will need to be reconsidered.  This means you cannot depend on things like cookies,  HTTP headers, IP addresses, identity management, and other things users of the ASP.NET framework take for granted.

To understand why, let’s take a step back and look at the design behind WCF.  WCF was implemented as a protocol agnostic communications framework.  Nowhere does WCF assume that HTTP is being used as a transport mechanism over the wire.  One could be using raw TCP/IP, or perhaps named pipes, shared memory, or a USB controlled carrier pigeon launcher.  Nowhere does WCF assume it’s even running on a web server like IIS.  A WCF endpoint could be running within a console application, or a Windows Service.  Many of these architectures have no concept of things like HTTP headers, session state management or other things associated with the web world.  Though a pigeon could probably carry a cookie if it were only a few bites.

For this reason, within the context of a WCF operation, you’re working with very generalized notions of communication.  I was definitely bitten by this “limitation” while porting my web services over, as I use session cookies to communicate authentication information used for user specific service calls.

Those who want to quickly port their web services over to WCF will, however, be pleased to know that the designers of WCF foresaw this radical paradigm shift and took pity.  There exists a compatibility mode that bridges the ASP.NET pipeline with WCF, and permits information sharing between those two very different worlds.  This compatibility mode can be enabled in your web.config file by using:

   <serviceHostingEnvironment aspNetCompatibilityEnabled="true" ... />
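On top of the config switch, the service class itself typically has to opt in with an attribute; a minimal sketch (the class name here is made up):

```csharp
using System.ServiceModel.Activation;

[AspNetCompatibilityRequirements(
   RequirementsMode = AspNetCompatibilityRequirementsMode.Required)]
public class KitchenService
{
   // With compatibility enabled, HttpContext.Current is available
   // inside service operations, just like in an .asmx service.
}
```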

When running in this mode, you’ll be able to access things such as HttpContext.Current and session state.  If you’re interested in how this works under the covers, there’s a great blog post here.  This is most definitely a viable approach for those building services that will never need to run outside the context of a web server or support protocols other than HTTP.  However, I decided to really dive into the WCF world and design a fully independent KitchenPC API rather than take the easy way out.  Either that, or I couldn’t get ASP.NET compatibility working under IIS in Integrated Pipeline mode and got frustrated.  But let’s go with the first one, it sounds more noble.

Luckily, getting rid of cookie dependencies was fairly straightforward.  My web services already took a ticket string which contained a serialized authentication ticket.  However, this parameter was optional if the HTTP header had a cookie value set containing the same information, as I didn’t want to pass in that potentially long ticket twice in each HTTP request.  The ticket was also never returned back from the Logon web method, since it just set the cookie in the HTTP response.  The new design requires a session ticket to be supplied on sensitive web service calls, and it’s up to the client to store that ticket for future calls.  In my mind, this is simpler and more straightforward.  Web, mobile and tablet will all call the API in the same way, and enforcing these parameters becomes less complicated in terms of code.
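A hypothetical sketch of the ticket-first pattern (the names are mine, not the actual KitchenPC contract, and Menu stands in for whatever the real return type is):

```csharp
[ServiceContract]
public interface IKitchenApi
{
   [OperationContract]
   string Logon(string username, string password); // returns the session ticket

   [OperationContract]
   Menu[] GetMenus(string ticket); // caller stores the ticket and resends it
}
```

Every sensitive call takes the ticket explicitly, so the contract works identically whether the caller is a browser, a phone, or a console app.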

You might be asking me, at this point, what the actual benefit of switching to WCF is.  After all, the web services worked just fine before, right?  Do I really need a protocol agnostic KitchenPC service, when in reality HTTP will be the primary, if not only, means of access?  Well, the answer is flexibility.  Massive amounts of flexibility.  Every single last component of WCF is fully customizable or replaceable, from message inspection, to fault handling, to serialization.  Don’t like something about how your service works?  Change it.  And that’s where WCF starts to get extremely complicated, as I’ve been learning over the past few days.  That’s also why I’ve decided to write a couple more WCF posts, summarizing how I’ve tweaked WCF to build the KitchenPC web service I’ve always wanted.
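As a taste of that extensibility, message inspection alone is a pluggable seam.  A minimal (and hypothetical) inspector that logs every incoming action looks something like this:

```csharp
using System;
using System.ServiceModel;
using System.ServiceModel.Channels;
using System.ServiceModel.Dispatcher;

// One of many WCF extensibility points: inspect every message
// entering and leaving the service.
public class LoggingInspector : IDispatchMessageInspector
{
   public object AfterReceiveRequest(ref Message request,
      IClientChannel channel, InstanceContext instanceContext)
   {
      Console.WriteLine("Incoming action: " + request.Headers.Action);
      return null; // correlation state handed to BeforeSendReply
   }

   public void BeforeSendReply(ref Message reply, object correlationState)
   {
      // Inspect or rewrite the outgoing message here
   }
}
```

Wiring it in takes an endpoint behavior that adds the inspector to the DispatchRuntime’s MessageInspectors collection; fault handling and serialization have analogous hooks.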

Stay tuned!

Without Exception

Part of being an entrepreneur is getting used to the fact that work is infinite in nature.  Actually accomplishing something doesn’t reduce the number of items in your to-do list, it simply creates more work.  The more productive you are, the more work will be created, causing an annoying feedback loop similar to putting a microphone too close to the speaker.

I was reminded of this over the long Thanksgiving weekend while hacking together the first few KLOCs of code that will ultimately become the common mobile library for KitchenPC.  This is a UI-agnostic API that will eventually power things like Windows Phone, Surface, Windows 8 apps, and other devices.

The main function of this library is to abstract client-to-server communications between a mobile or tablet device and the server.  It works similarly to a web service proxy, like one Visual Studio will generate for you from a WSDL contract.  However, I wanted to write one from scratch because, one, that’s the way I roll, and two, I can write a version that’s faster, more portable, and more lightweight.

It’s written on the CoreCLR (which has been ported to various platforms, including iOS and Android), which provides only a subset of the raw power of the full .NET Framework.  I think developing a core KitchenPC framework on this platform can ease development of KitchenPC apps on various devices in the future, and perhaps some day evolve into a full-fledged open-source KitchenPC SDK.  I wanted to make the code pure, without any outside dependencies, and as portable, easy to manage, and easy to distribute as possible.

While basic WCF libraries are available on the CoreCLR, they’re prone to a few problems.  One, they only implement SOAP transports.  While KitchenPC can speak SOAP, the JavaScript libraries use JSON to communicate with the server, which is less verbose and more efficient.  Two, my initial attempts to get that code running on the Mono framework were far from successful.  Even Xamarin has marked the WCF stack as alpha quality.  If I wanted something truly efficient, truly cross-platform, and truly low-level, I was just going to have to write it myself.  So that’s what I did.

So, now that you’re aware of the amount of work I signed up for, here comes the part where it spawned even more work, like crazed bunnies in the spring.

The KitchenPC web services are built with the idea in mind that return values should always be valid and indicate a successful result.  Errors should be handled as exceptions, which are serialized through SOAP faults or JSON exceptions.  This was the design decision I made, simply because it seemed cleaner to me.  I could just throw exceptions at any place in the stack, without having to litter try/catch blocks all over the place.  I’ve used a bunch of web-based APIs that return things like Result, which have a Success property indicating if the call was successful, an Error property with error information, etc.  The Exchange Server API is designed in this manner.

Well, turns out, when the ASP.NET Script Services throw an exception and that exception information is serialized out to the client, the resulting HTTP code is 500 (Internal Server Error).  This probably makes sense, as you’d want to be able to trap that code and handle that condition as an error.  In HTTP terms, the result looks like this:

HTTP/1.1 500 Internal Server Error
Cache-Control: private
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/7.5
jsonerror: true
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sat, 24 Nov 2012 22:42:33 GMT
Content-Length: 91

-- JSON serialized version of your Exception object

Now, with the CoreCLR WebClient libraries, to my knowledge (and after many hours of trying), there is simply no way to actually get access to that serialized JSON exception in the response.  Once WebClient sees a HTTP 500, it throws a generic WebException.  Basically, you’d do something like:

WebClient client = new WebClient();
client.Headers["Content-Type"] = "application/json; charset=UTF-8";

client.UploadStringCompleted += delegate(object sender, UploadStringCompletedEventArgs e)
{
   if (e.Error != null)
   {
      // e.Error will be a generic WebException, with nothing helpful of any sort
      return;
   }

   // On a successful call, e.Result will contain the HTTP response
};

client.UploadStringAsync(new Uri(url), json);

If the web service call failed, an HTTP 500 would be returned, which would cause e.Error to be set to a generic WebException object.  The Message property would say something like “NotFound” or something equally unhelpful.  If you tried to access e.Result, you’d get another exception.  Great.  They made it as hard as possible to do anything low-level with this API.

I tried every way I could think of to intercept the actual HTTP response before an exception is thrown (writing all sorts of really messy code), but the framework was just not having it.  It’s no wonder so many applications say things like “An error has occurred. Please try again later.”  It’s bloody impossible to ever get the information you need!
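For what it’s worth, this limitation seems specific to the trimmed-down client profile; on the full .NET Framework, the serialized exception is recoverable, because WebException carries the underlying response.  A sketch of that approach (url and json are placeholders):

```csharp
using System;
using System.IO;
using System.Net;

class Program
{
   static void Main()
   {
      string url = "http://example.com/service.asmx/Method"; // placeholder
      string json = "{}";                                    // placeholder

      try
      {
         using (WebClient client = new WebClient())
         {
            client.Headers["Content-Type"] = "application/json; charset=UTF-8";
            client.UploadString(url, json);
         }
      }
      catch (WebException ex)
      {
         // On the full framework, the 500 response body (the serialized
         // exception) is still attached to the WebException.
         if (ex.Response != null)
         {
            using (StreamReader reader =
               new StreamReader(ex.Response.GetResponseStream()))
            {
               string errorJson = reader.ReadToEnd();
               Console.WriteLine(errorJson);
            }
         }
      }
   }
}
```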

I suppose the only way around it would be to re-implement the HTTP protocol at the socket level, but who has that kind of time?

Without access to this exception information, I could not do things like tell the user their password was wrong, or explain why a menu couldn’t be created.  Sure, I could guess what the error most likely was, but that’s lame.

Even if there were a way around this, I ran into another issue.  Through the course of debugging this mess, I discovered a major bug on KitchenPC that only repros in production.  Even if I had been able to parse the exception data, the server wasn’t returning it at all.  The HTTP response looked like:

{"Message":"There was an error processing the request.","StackTrace":"","ExceptionType":""}

That’s right, the production KitchenPC server was eating exception information, which I rely on for various site UI functionality (such as logon errors, duplicate menu names, registration issues, etc)!  All this code was broken, and I hadn’t even noticed since I do very little testing in production.

Turns out, ASP.NET will only serialize exception information if you have <customErrors mode="Off" /> set in web.config, and debug mode enabled. This, of course, exposes all sorts of other debug information with site crashes as well, which you may or may not want your users to have access to. There doesn’t appear to be a way to allow ASP.NET web services to serialize certain exceptions, without running in debug mode, since the creators of ASP.NET saw this as sensitive data, and definitely not something you’d be depending on for your UI to function properly!  Uh oh.
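For reference, the two settings together look like this in web.config (exactly the configuration you do not want running in production):

```xml
<system.web>
  <!-- Both settings are needed before ASP.NET will serialize
       exception details out to script clients -->
  <customErrors mode="Off" />
  <compilation debug="true" />
</system.web>
```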

During the KitchenPC beta, I just ran the code in debug mode as I actually wanted detailed errors to be displayed, logged, etc.  It was a beta product, I needed to log things, trap things, and if something happened, I wanted users to be able to report as much information as they could on it.  Thus, I never noticed this design was potentially flawed in production use.  I don’t think it was until I switched to .NET 4.5 and started using web.config transforms that I really had a solid production configuration.  So, many of the error handling features have been broken on my site for the past few weeks.  Sigh.

I knew the only approach was to re-think how exceptions are handled.  First, exceptions should be exceptional.  An exception should be thrown if and only if that condition was completely unexpected.  Things like bad passwords and duplicate user names are perfectly reasonable, and thus should be communicated with a valid return object and an HTTP 200 result.

I then spent all of Sunday night revisiting every single API, ripping out the exception-throwing code and implementing a new base return type which has properties such as Error.  Ugh, I’ve now implemented the exact design I said at the beginning of this post that I hated so much, but I guess that’s how things must work.
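The new shape looks roughly like this (the property names are illustrative; the real KitchenPC types may differ):

```csharp
// Base return type for every web service call: the server always
// responds with HTTP 200, and errors travel in the payload.
public class Result
{
   public bool Success { get; set; }
   public string Error { get; set; }  // set when Success is false
}

// Specific calls return subclasses carrying their payload:
public class LogonResult : Result
{
   public string Ticket { get; set; } // only populated on success
}
```

The client-side proxy can then inspect Error and rethrow locally, keeping exceptions on the client where they’re actually usable.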

It’s of course very important to get all these API issues worked out now rather than later.  Once I release mobile and tablet apps, changing APIs can be quite a hassle.  With web code, I can simply change the JavaScript files and invalidate the CDN cache when I deploy changes.  With mobile applets, I really don’t want to have to force users to upgrade to a new version to continue running their app.

These new changes have been rolled out to production, so there should be no more missing error messages when you enter the wrong password.  Now, all web service calls return either a Result object, or an object that inherits from Result.  This makes the KitchenPC API much more consistent, and allows me to handle result errors internally within my proxy so that I can throw exceptions when the result contains an error.

I think overall, it was a good redesign.  Now, maybe I can actually get some work done.

Mobile Intentions

It is with great pleasure that I officially announce that I am now planning to begin the process of thinking about the initial development of KitchenPC mobile and tablet apps.  In other words, the phrase “I need a mobile story… but now what?” has popped into my head recently.

The very first launch of KitchenPC included an extremely basic, and poorly thought out, mobile solution.  Thinking I needed some sort of mobile solution prompted me to craft this web-based turd in the span of a single weekend, and it ended up being used by absolutely no one.  Seriously, I think it got about 0.000001 visitors per day.  Though this mobile site was optimized for smaller screens, and allowed basic scenarios like recipe search, shopping list management, and basic calendar functions, it was a flop.  It suffered from a very common problem: mobile sites are often just watered-down versions of a real website, since most developers fail to look past any differences between the desktop and mobile beyond mere screen size.

Surely, a better solution needs to be devised that takes into account what sorts of culinary tasks people are actually doing on their phones.  Surely, creating a mobile app that’s a straight port of the KitchenPC website would be an exercise in futility.  Surely, I should stop calling you Shirley.


Last night, I grabbed my dry erase markers and tapped them against my knuckles as I stared pensively at the massive 8×4′ whiteboard I have on my office wall.  I started thinking about the types of features this awesome new mobile app would need.  As I started writing down some ideas, it became clear after about ten minutes that I was repeating the exact same dumb mistake that I made before.  I was enumerating all the KitchenPC features in my head and writing down the ones that I thought could potentially be useful on a phone; a scaled back version of the website.  These included things like recipe search, adding recipes to menus, managing saved menus, adding recipes to the queue, creating a shopping list based on a menu, and more.  I was also trying to add extra bells and whistles, such as voice search by ingredient, and sharing recipes between devices using NFC.

It became increasingly clear that I was looking at months of development time, and all to develop a product that would be a mobile port of what I already have today on the web.  I really needed to take a step back and figure out what people are doing on phones.  What’s the minimum viable product to create on a phone that will target these key scenarios?  I was writing a version 1.0 product, not “KitchenPC 2.0 Now With Smart Phone Support”.

Take a Step Back

So, really now, what is the most basic mobile recipe app I can create?  One so basic that users will be insulted by the features it’s blatantly missing?  When you hear users complaining that your app is missing features, only then do you know you’re on the right track; it means people are using it, they want to keep using it, and reacting to these requests becomes consumer driven, not engineer driven.

Well, I need the ability to log on to KitchenPC, which means using an existing account, creating a new account, or logging on with Facebook credentials.  Yes, log on is good.  What else?

Shopping Lists?

One of the scenarios that miserably failed during the beta was a persistent shopping list.  Turns out, no one wanted to manage their weekly shopping list on a desktop computer.  Most people just write this on paper, or throw it together right before they leave for the store.  Plus, the people who were using this feature complained that adding specific ingredients and amounts was somewhat of a hassle, as the database schema was normalized just as recipe ingredient data is.  So, the current incarnation of KitchenPC has a “What You’ll Need To Buy” list whenever you’re looking at multiple recipes, however no real shopping list management feature.

Perhaps a mobile device is different, though.  People take their smart phones to the grocery store with them, and there are plenty of successful shopping list apps for every mobile platform.  Using the KitchenPC back-end, I could easily build a very powerful shopping list app.  This would be flexible, allowing NLP ingredient entry, storing arbitrary non-food items (such as “paper towels”), and letting the user cross off items after they were purchased.  It could also use push updates to sync shopping list changes instantly between devices logged on to the same account.  This is a great example of a feature designed specifically for mobile scenarios, rather than a straight port of a website feature.

My Menus?

One of the most powerful features of KitchenPC is the ability to organize recipes you find into various menus.  This allows for basic meal planning (since you can create a menu each week, or a menu for a certain gathering or event) and also just lets you save your favorite recipes, grouped any way you wish.  However, does this correspond to any mobile scenarios?

For meal planning scenarios, most definitely.  Users who plan out recipes to make during the week need to be able to scroll through them while at the store and easily see what they’ll need to buy.  I’ve also more than once found myself somewhat peckish while wandering the food aisles and wanted to look up the ingredients for a specific recipe.  Allowing users to access their saved menus on their smart phone is definitely a primary goal.

Recipe Search?

All of my initial specs for KitchenPC Mobile have included a basic recipe search.  KitchenPC is a recipe search engine; of course you’d need to be able to… like… search for recipes on the mobile version, right?


However, recipe search seems to infect any spec I write, like a virus, slowly spreading into more and more features.  Once you have recipe search, you need a mobile interface to define various search criteria.  Keywords, meal types, ingredients to include and exclude.  Then, users would start wanting some of the more advanced search options like time limits, ratings, pictures, dietary restrictions, etc.  Then you’d need a results list, the ability to add results to menus, which turns into full menu management features (creating menus, managing the recipes within those menus, and removing recipes from menus).  Suddenly, you’re back to creating a mobile port of the entire website.

So, do people really need recipe search in the mobile app?  I found myself leaning towards no.  However, then one thing struck me.  Cutting this feature creates a dependency between the KitchenPC website and the mobile app.  The app is all of a sudden useless for first-time users, as well as users who rarely use desktop computers.  Without the ability to find recipes in the database, users can’t create menus.  It would be useful as a shopping list app, if they wanted to type in everything manually, but there are already myriad apps for that.  Do I want KitchenPC Mobile to basically be an extension of the website?  Probably not.

Obviously, I need an extremely simplified search feature, even if it just allows keyword search and nothing else.


In the end, I decided to create a list of “must-haves” for the mobile site, which would enable the key scenarios I wanted to deliver.  Then, create a list of “bonus features” or “nice-to-haves” that would extend those scenarios, or perhaps be on the road map for future versions.


Must-Haves:

  • Ability to log on, create an account, and log off (Duh)
  • Powerful and flexible shopping list feature
  • At least a read-only view of saved menus, with perhaps a sample menu for new users
  • Ingredient aggregation within menus, allowing users to see what they’ll need to buy at the store to make the recipes within a menu
  • Basic recipe search (keywords only) and a results list that lets you save a recipe to a menu, or create a new menu


Nice-to-Haves:

  • More powerful recipe searches, closer to parity with the website
  • More control over saved menus; moving recipes between menus, removing recipes from menus, creating new menus and renaming menus
  • Queue feature (adding any recipe to the queue, viewing the queue and de-queueing recipes)
  • Selecting individual recipes from search results or menus and seeing a real time tally of ingredients you’d need to buy

Next Step, Wireframes

The next step in the process will be defining the overall flow of this app.  This includes mocking up wireframes, defining each screen, and figuring out how the features relate to each other.  Since I know the basic scenarios, I can focus on making those tasks easy to accomplish with the fewest number of button clicks.

Hopefully, I can resist the urge to overly complicate the design.  I’d really like to be able to get something together within a month or so, and get it out there for users to try out.  Getting a tablet version out (which of course targets different scenarios, like in-kitchen use) is also important, and can hopefully be available for all the new tablets sold during the holiday season.

Comments?  Wishlist features?  Just want to talk about how you use your smart phone or tablet?  Let me know in the comments below, or shoot me an email!