
Join me at the next Seattle Mystery Meet!

As a bit of a foodie, I’ve recently become familiar with an event called Mystery Meet.  The casual culinary enthusiast is given the opportunity to explore fantastic new restaurants and connect with other like-minded individuals.  There’s a set ticket price which includes a multi-course meal and dessert, and everything is taken care of ahead of time so there’s no messing with cash and splitting credit cards 18 different ways.  Mystery Meet, as the play on words cleverly suggests, comes with a bit of a twist.  Diners do not know where they’re heading or what they’ll be eating until the day before.  This adds a level of excitement, and encourages only the more adventurous to blindly purchase tickets.

Before each event, a series of clues is leaked through the website or social media which, with any luck, will shed some light on where the event will take place.  Each event also has a local host, usually another food blogger or – perhaps – an entrepreneur who runs a food-related website.

That’s right – the next Mystery Meet event, held on Tuesday, June 18th at 7:30pm, will be hosted by yours truly.  Tickets are now on sale for $50, and will sell out quickly.  Sound like fun?  Like food?  Like entrepreneur things involving food?  Bored on a Tuesday night and all caught up on Game of Thrones?  If you answered yes to one or more of these questions, I’d love to see you there!  It’s also the kind of event where it’s perfectly okay to show up by yourself.  I’ll make sure everyone gets a chance to meet everyone else, and maybe even ask people to change seats halfway through the meal.

In the meantime, a bit more on Mystery Meet can be seen here in this lovely video.  Hope to see some of my readers there!


Questions From a Reader

William is an undergrad at the University of Portland studying finance.  In his entrepreneurship class, they’re required to construct a business model for a potential start-up.  He and his partners are creating a business called RecipeKey, which offers personalized recipe searching similar to KitchenPC.  The other day, William sent me an email with a few questions about KitchenPC, and I’ve decided to answer them as a blog post.  Enjoy!

It seems like you put a lot of hard work into KitchenPC.  What reasons did you have for stopping?  Did you just lose interest or have something else come up?  Or did your users not see it as a “must have”?

First off, I wouldn’t say I’ve stopped working on KitchenPC.  I’ve been working on the project in one form or another for about ten years now, and more seriously for the last six.  During that time, I’ve taken a few breaks here and there, especially when I get to the point where the project starts to evolve into something else.  I haven’t been actively working on the code the last few months, and no new features have been launched, but the site is still alive and well, with many visitors every day.  Just last week I went to lunch with someone interested in the site and we bounced some ideas around.

Did you ever make money off of KitchenPC, or was it just a side project?  Would you ever consider going back and putting more work in and trying to develop it into a successful business?

I haven’t made a dime off KitchenPC, and it really hasn’t been my intention to do so.  At this point, I’m more interested in building a successful product that people are interested in using.  While it’s true that many types of businesses require revenue to stay afloat, a small consumer website is usually not one of them.  I have no employees, and I can afford the costs out of pocket.  I currently pay Rackspace about $70/month to host the two servers that KitchenPC runs on (a web server and a database server), as well as a couple bucks to Amazon for content storage.  The development costs were light, since I did all the coding myself.  I did outsource the design (HTML, CSS, graphics, etc.) to a Polish company, and the cost for that was about $4,000.  Overall, KitchenPC was a very cheap company to build.

Since revenue isn’t required to stay afloat, the primary goal becomes building a product that people want to use.  Once that is successfully achieved, perhaps a way to generate revenue could be considered, especially if costs go up considerably.  However, this has not been successfully achieved.

Let’s say I decided to offer a Freemium model for KitchenPC.  Let’s assume I had some great features and charged $5/month for them, which I’d say is about the most anyone would be willing to pay.  Most Freemium companies have a conversion rate of between 1% and 10%, with the average being around 2-4%.  With KitchenPC’s level of traction, I’d assume the bottom end, around 2%.  I’ve had about 2,000 users create accounts so far, so 40 of them paying would yield a monthly revenue of around $200.  That would yield a net profit (if you assume I work for free) of around $130 or so.  Not really a venture worth pursuing.

Another idea might be advertising.  Let’s assume I can get $1 per click, and 1 in 1,000 visitors will click on an ad.  KitchenPC averages about 622 visitors per month, or 0.622 ad clicks per month.  At 62 cents in ad revenue per month, making KitchenPC an ad-driven business seems even more futile.

Looking at these numbers, I think we can agree it seems silly to worry much about a revenue model before you have a successful product to sell.  Right now, the only thing that matters is building a great product, and my focus is 100% on that.

The second part of your question: would I consider putting more work in to develop KitchenPC into a successful business?  I’m not of the opinion that my lack of success is tied to the amount of work I put in.  I spent a year on the site full time, working 15-20 hours a day for months at a time.  I did as much research as I could, iterated, spent thousands on usability testing, ran ideas by real customers, sent out surveys and polls, and tried everything else I could think of.  I worked a lot harder on making KitchenPC a successful site than I did on any project at Microsoft, but was still met with a lukewarm reception.  A lot of people do love the site, but most of the major site features still go mostly unused.  The work I put into the site was definitely not worth it for the level of success I achieved with the product; however, it was definitely worth it for the experience and growth as an entrepreneur.  In other words, I don’t think I would quit my job again, use up all my life savings, and work harder on a better version of the same site.

I think at the moment, it’s moved more into the realm of being a side project.  I have a few ideas on iterations I can explore, so it’s possible I might stumble across a formula that demonstrates enough traction to make me consider quitting my job and working on it full time again.  I’d rather lessen my risk by working on the site during my spare time to test the waters with a few ideas, rather than diving in and going all out again.  Luckily, I have a fantastic code base and a lot of cool technology that can be used in various ways.

Thanks for reaching out to me, William, and I hope my answers have helped!  The experience you get from building a company, iterating on ideas and working with customers is incredibly valuable and worth the time spent.  Best of luck with RecipeKey, and let me know if you need any beta testers.


Why iPhone is a gigantic piece of crap

Yes, this post will be a rant.  Sorry.

It’s become clear that no one at Apple actually bothers to test the iPhone with realistic scenarios, nor are they in the least bit interested in developing a platform that solutions can be built on top of.  Their opinion that every app needs to live in its own little world, isolated completely from the operating system and other applications, has basically ruined any chance they have of maintaining their market share.  I’m not in the least bit surprised that Android is up 13% in the last year, and iOS is down 7%.

The final straw for me is their complete and utter lack of any sort of turn-by-turn driving directions that actually work.  I don’t mean an app that does this; I mean providing, as a phone, a solution that will get you to the place you want to go – a solution that integrates with the phone as a whole.

I first bought the iPhone 4S the day it came out.  I had never before owned an iPhone, and was excited about what it could do.  After all, it was the best-selling smartphone ever, and pioneered the whole application platform thing, right?  First, to my surprise, it came with absolutely no GPS software at all.  I was forced to spend $60 or so on the TomTom app, which, by itself, worked great.  However, the built-in map software was the Apple version, with no turn-by-turn guidance.  Since iOS is a completely closed platform, TomTom cannot say “Hey, I should be the default maps app, so load me whenever you need to get somewhere!”  This would have been nice.  A real platform would have exactly this.

Instead, if you clicked on an address in an email, or on a location in a Facebook event, it would load the crap Apple Maps app, which forced you to click “next” every time you completed a turn.  You couldn’t even copy and paste, since the TomTom app made you type in a street, then number, then city, then state.  It could not parse a whole pasted address.  I would usually have to write down the address on a Post-it note, then re-type it into the TomTom app.  What a complete joke.

Along comes the new Apple Maps!  Yay!  Enter all sorts of media buzz here.

This version of Maps finally had turn-by-turn directions.  Despite what you read in the news, in general it works fairly well.  I won’t blame Apple for not immediately having the flawless data it took Google years to build.  So, now we have a fairly good maps app that will launch by default from other apps and links.

Of course, this brings me to my next major gripe about the iPhone.  You can’t charge the phone at the same time you use GPS.  No, seriously, you can’t.  You don’t believe me, but it’s true.

My last car, a Jetta TDI, had a little iPhone connector built into the center armrest to charge iPhones and iPods.  You could also use it to listen to music on the car stereo.  However, when you plug it in, the phone’s built-in speaker is automatically muted.  So, if you’re using GPS, don’t expect to actually hear anything while your phone is being charged.  And, of course, if you don’t charge your phone, you’ll have about an hour of battery life while the GPS is running.  This appears to be a behavior of the phone that applies to every app, as none of TomTom, Google Maps, or Apple Maps will make a sound while the phone is charging.  Even if you turn the stereo to “Aux In”, you’ll be able to hear music from the phone, but you won’t get any GPS turn-by-turn directions.

Recently, I got a new car – a 2013 Subaru Outback.  This car doesn’t have an iPhone connector, but it does have Bluetooth.  The dealer even helped me pair my iPhone to the car, so I could talk on the phone while driving.  However, if you have Bluetooth turned on, you won’t hear a peep out of the phone while using GPS, even if you have the stereo turned to Aux-Input.  I’ve given up and decided to keep Bluetooth off, since I use GPS so often in my car.  I’ve tried looking at my phone while driving, but I miss turns every time.  Plus, it’s dangerous.

This maddening limitation irritated me to no end while in San Francisco last month.  I rented a Chevy Cruze, which had a USB connector for a phone.  I was staying with friends and family, most over an hour from the conference, which forced me to drive all over the Bay Area.  As I’m not familiar with the Bay Area, I was using GPS the entire time.  And, like in every other car, I of course could not charge the phone and hear the GPS at the same time.  This caused me to show up to the conference with a phone at about a 50% charge, which usually didn’t even last the day.  By dinner time it was totally dead, and I had no choice but to charge it on long stretches of freeway, then unplug it when I got near the restaurant.

Luckily, I was meeting up with a friend who works at Apple.  He claimed there was a solution for this: you can configure the audio output when using Bluetooth.

Turns out, on Apple Maps, when the phone is paired you’ll see a little “Bluetooth” logo in the bottom right corner of the map.  If you press it, you can select whether the audio will be sent over Bluetooth or to the phone’s built-in speaker.  This works okay in theory.  However, get this – it’s an app feature, not a system setting!  That’s right, this same setting does not exist in Google Maps or TomTom.  Since Apple insists that every app needs to be an island, they expect every app to implement this “Do you want to hear your phone?” feature themselves.  To make this even more ridiculous, the feature only applies to Bluetooth pairing.  If I want to plug the phone into a USB connection, which would allow me to charge the phone, there’s of course no setting for that.

Now I’m pretty much forced to use Apple Maps, because it’s the only program that seems to be written correctly.  So, the other day I was headed to an event, with the details stored as a Facebook event.  I went to the Facebook app, clicked the event, and clicked on the directions.  The Facebook app now actually opens the Google Maps website!  That’s right, the website – not even the app.  I guess Facebook got so annoyed with this, they decided to bypass the API completely and just launch the browser.  The website asked if I wanted to use my current location, to which I said yes.  It also has a button that says “Always Open in the Google Maps App”.  First, this “always” button seems to mean always for that exact URL, meaning the location I happen to be headed to right then.  Second, when it loads the Google Maps app, it only remembers the destination.  I then have to type in my current location, at which point it just prints out the entire list of directions; no turn-by-turn.  If I exit the app and go back in, I can see that destination in my history, select it, and use turn-by-turn.  None of this is remotely usable while you’re driving, so I have to get it all set up while I’m still parked.  This is a hassle if I just want to use GPS for the last few minutes of the drive, and don’t want GPS yapping at me for the first 30 minutes.  Lately, I’ve just gone back to writing down the address on a Post-it note, opening up Google Maps, and typing it in by hand.  That’s been the only thing that consistently works on this platform.

Absolutely nothing in the iOS world works together.  Apps cannot play off each other to deliver real solutions for realistic scenarios, and Apple doesn’t allow an app to be the “default app” for web browsing, mapping, making calls, texting, etc.  This means iOS will never deliver anything more powerful than its most powerful single app.

My next phone will be an Android.


Anyone wanna do Launch?

Thought I’d write up a quick post letting people know I’ll be at this year’s Launch festival, in San Francisco March 4th – 6th.  There will be a whole slew of new companies showcasing great new products (no, I won’t be presenting… this year) and a great time to be had by all.

I’m mostly looking forward to the chance to mingle with a room full of other entrepreneurs, which is actually the main reason I’m going (plus, they were nice enough to give me a free ticket!)

If you’re going, or would just like to grab drinks one evening, then feel free to send me a note as I’d love to meet up!

That’s it.

2012 in review

The WordPress.com stats helper monkeys prepared a 2012 annual report for this blog.

Here’s an excerpt:

600 people reached the top of Mt. Everest in 2012. This blog got about 8,000 views in 2012. If every person who reached the top of Mt. Everest viewed this blog, it would have taken 13 years to get that many views.

Click here to see the complete report.

Windows Phone Eight Got No Chance

According to the Windows Phone Developer Blog, Windows Phone has certified and published over 75,000 new apps and games over the last year.  The platform, in my mind, is superior as a development platform: it leverages a proven runtime and well-documented, easy-to-use libraries, and it’s portable across devices of various form factors.  Last I messed around with the Windows Phone SDK, I was able to write a basic app that let a user search for recipe keywords and displayed the results in about 20 minutes; this was without reading any documentation, tutorials or code samples.  Try that with iOS or Android development.  Regardless of what you think about Microsoft as a company, their development tools and platforms are arguably worlds beyond anything else out there.

Guess what.  None of this matters.

Microsoft’s attempt to muscle their way into the mobile OS market has been met with failure after failure because they’re constantly trying to play catch-up with the moving target of innovation.  I live and breathe in the start-up world.  I attend start-up meetings, groups and conventions.  I’m on countless mailing lists and online bulletin boards focused on entrepreneurs.  Almost no one I talk to is thinking about, or even interested in, building mobile applications on the Microsoft platform.  A start-up’s resources are extremely limited, and most have little or no outside funding.  No matter how fantastic the development platform is, there’s simply no reason to waste cycles building a completely separate code base for a platform that has about 3% of the market share.  Even huge companies that actually do have the resources to build Windows Phone versions of their products are either choosing not to, or do so only as an experiment to test the waters.  The result?  Innovation itself moves at a faster pace than Windows Phone can adapt.  In other words, all the cool new things coming out and making headlines in the tech world are not coming out on the Windows Phone platform.  If your platform is not adopted by the geeks, the innovators and the early adopters, it has no chance of jumping the chasm into the mainstream.

Microsoft is well aware of this problem and is working on genetically engineering chickens to lay those non-existent eggs.  A lot of this comes in the form of either developing apps themselves (for example, Facebook and Twitter sharing is simply built into the OS) or paying smaller companies with the top apps to create Windows Phone ports.  There are two problems with this approach, neither of which I have solutions for.

First, you’re still straggling behind the line of innovation.  Only after an app has proven itself, has 50 million users, and makes the front page of TechCrunch and Wired would Microsoft begin to back such a port.  Only now are we seeing things like Evernote or Rhapsody for Windows Phone, even though these apps are mainstream on other platforms.  If you’re a geek with a Windows Phone, you’ll be waiting on the sidelines while all your iPhone and Android friends enjoy all the cutting-edge innovation coming out every week.

Second, the apps that do make it out for Windows Phone are the red-headed stepchildren of the real thing.  Sure, there are a few Twitter apps for Windows Phone – none of which were actually written by Twitter.  With Twitter’s reputation for screwing over devs, who knows if these apps will even continue to work in the future.  Who knows if they might suddenly break when Twitter rolls out some great new feature that changes the way all their APIs work.  Know what won’t break?  The iOS and Android Twitter apps, written by Twitter.

I’m somewhat of a Yelp addict myself.  Somehow, this company has managed to instill some sort of Pavlovian response in my psyche, forcing me to check in wherever I go.  I actually really dig this.  Sometimes, I’m trying to remember the name of a place I was at a few weeks ago, and can dig it up in my Yelp history.  I also sometimes notice other people I know have checked in as well, or get Facebook comments on my check-in.  Every now and then, there’s some sort of discount or coupon offered when checking in.

Is there a Yelp app for Windows Phone?  Well, if you want to call it that.  The Yelp app on Windows Phone is pretty much an insult to all things Yelp.  You can’t check in, you can’t rate or write reviews; it’s basically a read-only interface to Yelp, and provides absolutely nothing you couldn’t get by visiting the website itself.  There’s absolutely nothing mobile about it – just read the reviews!

It seems a lot of these so-called ports are simply an excuse to check off a column in a list.  “Yup, we have a Yelp app – check!”

Now if I’m shopping for phones, I not only have to make sure there’s a Yelp app, but I also have to research the feature parity of this app by reading reviews or trying it out on a friend’s phone first.  This leads to a total distrust of the porting efforts.  If an app is ported simply to slap a brand name in the app store, but provides absolutely nothing that made that app successful, the port is a lie.

Imagine if Yelp came out with a new feature that let you take a picture of the menu at a restaurant and added star ratings to each menu item using augmented reality.  We’d see this ported over to Windows Phone in about 3-7 years.  Once again, Microsoft cannot play catchup as innovation marches forward.

It seems Microsoft is attempting to build the world’s greatest mobile platform (which they might have succeeded at) and expecting innovation to simply happen magically.  In reality, there’s absolutely nothing urging the innovators to develop the new “it” app of tomorrow on this platform.

Microsoft needs to find a way to fuel innovation itself on this platform, not bribe the current top innovators to write these less-than-stellar ports of their hit iPhone apps.  In fact, I’d say fueling innovation is pretty much all they should be doing.

If I were Microsoft, here’s what I’d do:

  • Drop all development fees completely.  No more $100/yr fee to develop on Windows Phone.  Update: The Microsoft BizSpark program will actually waive this fee for the first year.  It’s a fantastic program which I encourage all start-ups to look into.
  • Invest in start-ups that will develop cutting edge technologies on their platform.  Start a $100MM VC fund for mobile innovators.
  • Invest in technologies that make cross-platform development easier.  Imagine if Microsoft invested in or acquired Xamarin, and turned Mono into a cross-platform phone development tool.  Using Visual Studio, developers could write native apps in C# that targeted Windows Phone, Android and iOS using technologies such as Silverlight and XAML.  Phone specific features could be abstracted with minimal platform specific code.  It’s possible to make developing an app for both Windows Phone and iPhone easier than developing the same app for iPhone only.
  • Push employees to build cool apps internally.  Start a program where Microsoft employees can build cool apps while at work, publish the apps to the store, and get them promoted by Microsoft.  Employees should be able to keep 100% of any revenue the app makes as a bonus.

The way things are going right now, I can’t see Microsoft even making a dent in the mobile and tablet market.  They seem to be lagging far behind and focusing way too much on catch-up and way too little on innovation.  I’d love to see that change, as the platform is awesome and the development tools are top notch.  I guess only time will tell.

Using ENUM types with PostgreSQL and Castle ActiveRecord

One thing I really love about PostgreSQL is ENUM types, which basically allow you to create enumerations as datatypes and use them just as you would any intrinsic type.  For example, I have an ENUM called UnitsEnum that allows me to represent a unit of measurement (cup, teaspoon, gallon, etc).  I then use this data type in various database columns, functions, views, whatever.

These data types are simple to create with the following command:

CREATE TYPE UnitsEnum AS ENUM ('Unit', 'Teaspoon', 'Tablespoon', 'FluidOunce', 'Cup', 'Pint', 'Quart', 'Gallon', 'Gram', 'Ounce', 'Pound');

From that point on, you can use UnitsEnum just as you would a varchar or an int or a Boolean datatype, using it to define column definitions, using it as a function parameter or return value, etc. For all practical purposes, it’s now built into Postgres:

CREATE TABLE RecipeIngredients
(
  -- Stuff here
  Unit UnitsEnum NOT NULL --Unit column of type UnitsEnum
);

select count(1) from RecipeIngredients where Unit = 'Teaspoon';
--Yay!

select count(1) from RecipeIngredients where Unit = 'Crapload';
--Error is thrown

As far as the user is concerned, the data type appears to be a string that simply errors out when the value is not within a certain allowed set, and in fact strings can implicitly convert to an enum (but numeric types cannot, which I’ve found odd).

Using an ENUM as a column type is somewhat analogous to creating a text column with a foreign key constraint against a table of measurements containing the strings in the enumeration above. However, on the heap table the ENUM is actually stored in its numeric representation, which could possibly make binary searches a bit faster. One could argue that you could do the same by using a numeric data type to hold enumerations, and then setting CHECK constraints to limit the values. This would be equally nice; however, I really like being able to see the string representations of these types when I’m performing ad-hoc queries on my database. This is the best of both worlds.

While this is awesome from a database design perspective, what would be even more awesome is bridging that data type with the business logic code. In other words, I want the models in my ORM to use C# enums that map perfectly to the defined Postgres type.

Luckily, that’s possible to do with Castle ActiveRecord, and actually quite easy.

First, we’d need a C# enum. It would need to match the types in the Postgres ENUM (the names, that is, as the numeric values don’t actually matter here):

public enum Units
{
   //Individual items
   Unit = 0,

   //Volume
   Teaspoon = 1,
   Tablespoon = 2,
   FluidOunce = 3,
   Cup = 4,
   Pint = 5,
   Quart = 6,
   Gallon = 7,

   //Weight
   Gram = 8,
   Ounce = 9,
   Pound = 10,
}

Now, let’s use this in a model called RecipeIngredient, which represents a usage of an ingredient within a certain recipe:

[ActiveRecord("RecipeIngredients")]
public class RecipeIngredient
   : ActiveRecordLinqBase<RecipeIngredient>
{
   // Various column mappings go here
}

Next, we’d add a new column mapping called Unit:

private Units unit;

[Property(SqlType = "UnitsEnum", ColumnType = "KPCServer.DB.UnitsMapper, Core")]
public Units Unit
{
   get { return unit; }
   set { unit = value; }
}

We now have a Unit property of type Units, our C# enumeration of valid units of measurements. So, now the model is restricted to only valid units of measurements, using all the type safety of C#.

There are a few things to look at on the [Property] attribute for this mapping. First is SqlType. This property defines what type in SQL will be used for this column, and causes ActiveRecord to use this type when it’s writing database creation scripts and the like.

Next, the ColumnType property specifies a managed type that’s responsible for the type mapping information used by NHibernate. I’m using KPCServer.DB.UnitsMapper, in the Core.dll assembly. Unfortunately, a mapper class can only represent a single ENUM type. Thus, if you have a bunch of enums, you’ll need to repeat a lot of this logic. To DRY this out a bit, I’ve created a helper base class called PgEnumMapper:

public class PgEnumMapper<T> : NHibernate.Type.EnumStringType<T>
{
   public override NHibernate.SqlTypes.SqlType SqlType
   {
      get
      {
         return new NHibernate.SqlTypes.SqlType(DbType.Object);
      }
   }
}

The reason we need to do this is that, by default, NHibernate will convert C# enums to their numeric representations (which would be much more database agnostic), attempting to store their values as numbers in your database. Postgres will not cast numeric values to an ENUM type, so you’ll get an error trying to do this. The mapper above says, “No, I need this to be a string in the SQL you generate.”

Now, we can use this PgEnumMapper for each of our enums:

public class UnitsMapper : PgEnumMapper<Units>
{
}

No need to add anything to UnitsMapper; it’s perfectly fine the way it is. You’d of course want to create a PgEnumMapper-derived class for each C# enum you want to support in your ORM.

That’s all there is to it! You can now use ENUM types in Postgres and magically have them map to C# enums. Wow I love Postgres.
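To round things out, here’s a minimal sketch of what querying against that mapped property might look like, assuming the model above, a using System.Linq directive, and Castle ActiveRecord’s LINQ support (the static Queryable property on ActiveRecordLinqBase&lt;T&gt;):

using (new SessionScope())
{
   // NHibernate should emit SQL along the lines of: WHERE Unit = 'Teaspoon',
   // letting Postgres compare directly against the ENUM column
   var teaspoons = RecipeIngredient.Queryable
      .Where(ri => ri.Unit == Units.Teaspoon)
      .ToList();
}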

WCF and ActiveRecord, let’s be friends!

For the coders out there that follow this blog, you’ll probably know the KitchenPC back-end is built entirely on Castle ActiveRecord, an easy-to-use ORM for .NET built on NHibernate.  I’m a huge fan, even though the project is no longer active and most people are probably using Fluent NHibernate or Microsoft’s own ORM, the Entity Framework (blech).  I also happen to be a huge fan of the ActiveRecord pattern in general, so I’m sticking with it and you can’t make me change.

Getting WCF to play nicely with ActiveRecord turned out to be somewhat of a time vampire; not because the solution is hugely complicated, but because it required me to dig into the inner workings of both WCF and ActiveRecord.  I also found almost zero information out there covering integration between these two technologies, as most blog posts and articles focus on NHibernate itself and may or may not apply to ActiveRecord-specific implementation details.  Posting questions on StackOverflow also yielded nothing but cricket noises.

For that reason, I decided to write a technical blog post illustrating exactly how to get these guys to be friends.  Turns out, it’s not all that scary!

Assumptions:

First, I will assume the reader is already familiar with Castle ActiveRecord, knows how to define mappings and initialize the framework in Application_Start, and has the basics of using the framework down.  If not, I’d suggest reading up on Castle ActiveRecord, following a few tutorials, and trying it out first on a simple ASP.NET web site.

I’ll also assume you have some basic knowledge of WCF.  However, if you want to be really well versed in the extensibility story for WCF, I highly suggest reading this article.

ActiveRecord Scopes

NHibernate, for reasons of efficiency, works based on sessions.  To avoid running tons of SQL statements after every line of code you write, the underlying engine will batch things up and run a bunch of statements at once when it’s wise to do so.  It’s also smart enough to know when it needs to re-query data, commit pending transactions first, or invalidate cached data.  SessionScopes can be nested, which means internally they’re represented in a stack of SessionScope objects.

A SessionScope object tracks a collection of pending queries, and commits them to a database (provided everything is kosher) when the object is disposed.  A very simple example of this might be:

   using (new SessionScope())
   {
      Recipe r = Recipe.Find(123);
      r.PrepTime = 60;
      r.Ingredients.Add(new Ingredient(5));

      r.Update();
   }

Here, we find a Recipe object in the database with the ID of 123, set a new prep time for that recipe, then add a new ingredient to the recipe’s ingredient collection. NHibernate, under the covers, is tracking what objects have changed and issuing UPDATE commands through the database provider when SessionScope is disposed.  If we throw an exception halfway through, things we already updated won’t actually get changed, as those UPDATE commands will never be sent.

One thing you’ll notice is that I don’t actually refer to my SessionScope object anywhere. You might be asking yourself how r.Update() knows which session we’re currently in. Well, this is done through an implementation of IThreadScopeInfo. When you call SessionScope.Current, the Castle framework calls upon the configured IThreadScopeInfo which is responsible for finding the current session.

Castle ActiveRecord ships with a few of these. One is called WebThreadScopeInfo and stores the current session stack in the HttpContext.Current.Items[] collection. This means that the session is scoped to each individual HTTP request. Another implementation is HybridWebThreadScopeInfo, which allows you to run sessions in any random ol’ thread, where HttpContext.Current would be null. The HybridWebThreadScopeInfo class will first check HttpContext.Current, and if it’s null, return a session stack keyed to the current thread, creating a new stack if necessary.

This design allows you to call functions which call other functions which call other functions without having to worry about passing sessions and scopes all over the place.

Castle ActiveRecord actually makes this even easier. There exists an HTTP module called SessionScopeWebModule which runs before each HTTP request, creates a new SessionScope for that request, and disposes of it when the HTTP request ends. Using this module, you can write the above code as just:

   Recipe r = Recipe.Find(123);
   r.PrepTime = 60;
   r.Ingredients.Add(new Ingredient(5));

   r.Update();

There’s no need to new up a SessionScope, as one has been created for you by SessionScopeWebModule. Pretty cool, eh?

So why doesn’t this solution work in WCF?

As I pointed out in my previous post, the WCF stack is completely independent of ASP.NET. HTTP modules won’t be run and there’s no HttpContext.Current.Items collection to store the active session stack. For this reason, you’ll need to re-create a few of these solutions in a way that’s compatible with WCF’s architecture.  You can also run in aspNetCompatibilityEnabled mode, but that has its own set of limitations.

Unfortunately, Castle ActiveRecord doesn’t have support for this out of the box, but it offers the extensibility to design a solution similar to the ASP.NET handlers. In fact, I based my code completely off the built in SessionScopeWebModule HTTP module and HybridWebThreadScopeInfo scope info class.

Step 1: Creating a scope for each WCF request

In ASP.NET, this would be done using an httpModule.  In WCF, this is done by implementing a message inspector.  A message inspector, which is an implementation of IDispatchMessageInspector, is a class that can look at a message traveling through the WCF pipeline and do things before and after the message is processed.  It’s dead simple, and has an AfterReceiveRequest method which is called after the request is received and deserialized, and a BeforeSendReply which is called before the reply is assembled.  We can use this to create a new session scope for our WCF operations.

public class MyInspector : IDispatchMessageInspector
{
   public MyInspector()
   {
   }

   public object AfterReceiveRequest(ref System.ServiceModel.Channels.Message request, System.ServiceModel.IClientChannel channel, System.ServiceModel.InstanceContext instanceContext)
   {
      return new SessionScope();
   }

   public void BeforeSendReply(ref System.ServiceModel.Channels.Message reply, object correlationState)
   {
      SessionScope scope = correlationState as SessionScope;

      if (scope != null)
      {
         scope.Dispose();
      }
   }
}

Ok so what’s going on here? When we receive a request, we new up a SessionScope object. The constructor actually adds itself to the current session stack through the configured IThreadScopeInfo, so all that is taken care of for us. This is actually quite similar to what the web module does, only the web module adds that SessionScope object to the HttpContext.Current.Items collection so the scope can be disposed of later.  We don’t need to do that, because we take advantage of correlation state.  Any object AfterReceiveRequest returns will be passed in to BeforeSendReply, which is useful for tracking objects in a message inspector through the lifespan of a WCF request.

You’ll notice that we use this concept in BeforeSendReply to grab the reference to the SessionScope created above and dispose of it, thus committing any pending transactions to the database.

This inspector now has to be attached to a WCF service. This can be done in a variety of ways (mostly mucking with your web.config), but perhaps the easiest is by implementing a custom IServiceBehavior. An IServiceBehavior defines a custom behavior for a service, such as adding custom message inspectors. A very cool feature of this is your IServiceBehavior can derive from the .NET Attribute class which allows you to tag a WCF service directly with a behavior, no need to mess around with configuration files. This class is quite simple to implement:

public class MyServiceBehaviorAttribute : Attribute, IServiceBehavior
{
   public void AddBindingParameters(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase, System.Collections.ObjectModel.Collection<ServiceEndpoint> endpoints, System.ServiceModel.Channels.BindingParameterCollection bindingParameters)
   {
   }

   public void ApplyDispatchBehavior(ServiceDescription serviceDescription, System.ServiceModel.ServiceHostBase serviceHostBase)
   {
      foreach (ChannelDispatcher cDispatcher in serviceHostBase.ChannelDispatchers)
         foreach (EndpointDispatcher eDispatcher in cDispatcher.Endpoints)
            eDispatcher.DispatchRuntime.MessageInspectors.Add(new MyInspector());

   }

   public void Validate(ServiceDescription serviceDescription, ServiceHostBase serviceHostBase)
   {
   }
}

Nothing to really explain here. When the behavior is applied, it goes through all the endpoints that connect to that service and applies our message inspector. You can then tag your WCF service with that behavior:

[MyServiceBehavior]
public class Service1 : IService1
{
   // Your operation contracts here
}

When we have this hooked up, every request to your service will be guaranteed to have its own session scope, so there’s no need to new up a SessionScope object yourself.  It’ll work just like ASP.NET using the SessionScopeWebModule module.
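To make that concrete, here’s a hypothetical service operation (the names are purely illustrative) that leans on the inspector-created scope:

[MyServiceBehavior]
public class RecipeService : IRecipeService
{
   public void SetPrepTime(int recipeId, int prepTime)
   {
      // No explicit SessionScope here; AfterReceiveRequest already pushed one
      Recipe r = Recipe.Find(recipeId);
      r.PrepTime = prepTime;
      r.Update();
      // Pending changes are flushed when BeforeSendReply disposes the scope
   }
}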

Well crap, that won’t actually work…

If you’re smart and/or know a lot about IIS, you’ll see a major problem with this. WCF exhibits thread agility, which means the message inspector (where our SessionScope was created) and the service operation (where we’ll be looking for SessionScope.Current) could run on different threads!

As I mentioned before, the current SessionScope is resolved through an IThreadScopeInfo implementation. The WebThreadScopeInfo implementation uses HttpContext.Current to track the session stack, providing immunity against ASP.NET thread agility. The HybridWebThreadScopeInfo implementation attempts to use HttpContext.Current, but if it’s not found, it basically creates a stack keyed by the local thread. This kinda works for simple cases, like loading some data into memory within an initialization thread, but probably isn’t very smart for us. If we need to lazy load some property and a session scope had never been created on that thread, we’re going to crash with the exception: “failed to lazily initialize a collection of role: xxx, no session or session was closed”.  This would of course happen intermittently, and only in the middle of the night or on your vacation.  You’ll never be able to reproduce it on your dev box, and it will make you hate the world and yearn for a simpler time before all this technology.

The solution? Write a super-duper-hybrid session scope info class. One that does it all. This thing of beauty will check the HttpContext.Current if we’re running on an ASP.NET page, check the WCF OperationContext.Current if we’re in an WCF operation, and then fall back to a stack keyed by the thread if all else fails. It will slice, it will dice, you’ll be seeing infomercials for this IThreadScopeInfo. And I won’t charge you three payments of $19.95 to use it. In fact, I’ll give you the code for free.

This implementation is heavily based on the HybridWebThreadScopeInfo implementation, which I recommend skimming over first. I just added a bit of extra logic to handle the WCF case.

public class WcfThreadScopeInfo : AbstractThreadScopeInfo, IWebThreadScopeInfo
{
   const string ActiveRecordCurrentStack = "activerecord.currentstack";

   [ThreadStatic]
   static Stack stack;

   public override Stack CurrentStack
   {
      [MethodImpl(MethodImplOptions.Synchronized)]
      get
      {
         Stack contextstack;

         if (HttpContext.Current != null) //We're running in an ASP.NET context
         {
            contextstack = HttpContext.Current.Items[ActiveRecordCurrentStack] as Stack;
            if (contextstack == null)
            {
               contextstack = new Stack();
               HttpContext.Current.Items[ActiveRecordCurrentStack] = contextstack;
            }

            return contextstack;
         }

         if (OperationContext.Current != null) //We're running in a WCF context
         {
            NHibernateContextManager ctxMgr = OperationContext.Current.InstanceContext.Extensions.Find<NHibernateContextManager>();
            if (ctxMgr == null)
            {
               ctxMgr = new NHibernateContextManager();
               OperationContext.Current.InstanceContext.Extensions.Add(ctxMgr);
            }

            return ctxMgr.ContextStack;
         }

         //Working in some random thread
         if (stack == null)
         {
            stack = new Stack();
         }

         return stack;
      }
   }
}

At first glance, it looks similar to the HybridWebThreadScopeInfo code, because, well, it is. It has a ThreadStatic Stack field, it checks the HttpContext.Current first, blah blah. The difference is the part in the middle which checks OperationContext.Current. This property will have a value if we’re running within the WCF pipeline. An OperationContext is analogous to HttpContext, but only for the WCF world. Similar to HttpContext.Current.Items, we can even store random junk we want to have for the lifespan of the operation. However, rather than just being a simple key/value pair like .Items is, they had to make it a bit more complicated and use something called Extensions.

An extension is any class that implements IExtension&lt;T&gt;. Our extension, of course, needs to store the current SessionScope stack. Implementing it is pretty straightforward.

public class NHibernateContextManager : IExtension<InstanceContext>
{
   public Stack ContextStack { get; private set; }

   public NHibernateContextManager()
   {
      this.ContextStack = new Stack();
   }

   public void Attach(InstanceContext owner)
   {
   }

   public void Detach(InstanceContext owner)
   {
   }
}

You’ll notice IExtension&lt;InstanceContext&gt; has Attach and Detach methods that are called by WCF when an extension is added to or removed from the instance context; we implement these as empty methods just to satisfy the interface.

We can configure ActiveRecord to use this awesome new IThreadScopeInfo implementation within web.config:

<activerecord
   isWeb="true"
   threadinfotype="WcfThreadScopeInfo, Website">

Or if you’re using an InPlaceConfigurationSource and initializing ActiveRecord in your global.asax, you can use:

InPlaceConfigurationSource source = new InPlaceConfigurationSource();
// Stuff
source.ThreadScopeInfoImplementation = typeof(WcfThreadScopeInfo);

This will tell ActiveRecord to use WcfThreadScopeInfo to resolve the current session scope any time it needs to know.

In Summary…

Ok so what’s going on now? When a WCF request comes in, the IDispatchMessageInspector message inspector is run, which creates a new ActiveRecord scope for that request. The constructor for SessionScope will see if there’s already a SessionScope stack to add itself to, and it will do so by looking at the configured IThreadScopeInfo implementation. Our implementation will be able to use an ASP.NET context, a WCF context, or a thread context to store this stack in. This means that any time we need to look up the current session, which is done all over the place in the ActiveRecord framework, we’ll be able to find the current session scope or create one if necessary. When the request ends, the session scope is disposed of and pending updates are committed to the database. If you had new’ed up any child session scopes on the stack, you’d of course be responsible for disposing of those properly as well.
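As a quick illustration of that last point, a child scope you open inside an operation is yours to dispose; the inspector only disposes the request-level scope it created:

// Inside a WCF operation; the inspector’s request-level scope is already on the stack
using (new SessionScope()) // a child scope, pushed on top of the request-level one
{
   // work done here is flushed when this child scope is disposed,
   // independently of the scope BeforeSendReply will dispose later
}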

Well, there you have it. Making WCF and ActiveRecord best friends is fairly straightforward, and lets you learn about the inner workings of NHibernate, ActiveRecord, and the WCF pipeline all at the same time. Have fun!

One does not simply… switch to WCF

In my last blog post, I talked about the nature of new code, and how it sometimes spawned a spontaneous redesign of existing architecture to avoid future technical debt.  In my case, laying the initial foundation for mobile and tablet apps caused me to rethink my web service layer.  Since then, I’ve been spending a lot of time going through the KitchenPC API and making sure everything is solid, well designed, consistent and flexible.  After all, once mobile and tablet apps have been built, it will be difficult to change the interface contracts.

Unfortunately, I seem to have fallen down a rabbit hole and signed up for a pretty extensive re-design of the KitchenPC back-end.  I ran into conflicts between my ideal API design and the limitations of ASP.NET Web Services.  In the end, I decided to upgrade the KitchenPC web service layer to use a more modern framework, Windows Communication Foundation (WCF).

Turns out, one does not simply… switch to WCF.

On the outside, ASP.NET Web Services (using either SOAP or JSON) and a WCF service are similar in nature, especially when running under IIS.  An ASP.NET Web Service is an .ASMX file that points to an implementation of a class derived from System.Web.Services.WebService.  Publicly exposed methods of this class are tagged with [WebMethod] and accept and return serializable data types.  ASP.NET is able to generate a JavaScript proxy on the fly, allowing client-side browser code easy access.  In WCF, an .SVC file points to a service contract, which is any class that’s tagged with the [ServiceContract] attribute.  Often, this is done by implementing an interface tagged with said attribute.
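For a side-by-side feel, here’s a minimal sketch of both shapes (the service and method names are made up for illustration):

// ASP.NET Web Service (.ASMX): a class derived from WebService
public class RecipeService : System.Web.Services.WebService
{
   [System.Web.Services.WebMethod]
   public int GetPrepTime(int recipeId)
   {
      return 60; // placeholder implementation
   }
}

// WCF (.SVC): a contract interface tagged with [ServiceContract]
[System.ServiceModel.ServiceContract]
public interface IRecipeService
{
   [System.ServiceModel.OperationContract]
   int GetPrepTime(int recipeId);
}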

The fact that an ASP.NET Web Service extends a common base class, WebService, and a WCF Service does not, is an important consideration when porting existing code.  Any methods that depend on the ASP.NET request pipeline, such as references to Context, Session, or Application, will need to be reconsidered.  This means you cannot depend on things like cookies,  HTTP headers, IP addresses, identity management, and other things users of the ASP.NET framework take for granted.

To understand why, let’s take a step back and look at the design behind WCF.  WCF was implemented as a protocol-agnostic communications framework.  Nowhere does WCF assume that HTTP is being used as a transport mechanism over the wire.  One could be using raw TCP/IP, or perhaps named pipes, shared memory, or a USB-controlled carrier pigeon launcher.  Nowhere does WCF assume it’s even running on a web server like IIS.  A WCF endpoint could be running within a console application, or a Windows Service.  Many of these architectures have no concept of things like HTTP headers, session state management or other things associated with the web world.  Though a pigeon could probably carry a cookie if it were only a few bites.

For this reason, within the context of a WCF operation, you’re working with very generalized notions of communication.  I was definitely bitten by this “limitation” while porting my web services over, as I use session cookies to communicate authentication information used for user specific service calls.

Those who want to quickly port their web services over to WCF will, however, be pleased to know that the designers of WCF foresaw this radical paradigm shift and took pity.  There exists a compatibility mode that bridges the ASP.NET pipeline with WCF, and permits information sharing between those two very different worlds.  This compatibility mode can be enabled in your web.config file by using:

<system.serviceModel>
   <serviceHostingEnvironment aspNetCompatibilityEnabled="true" ... />
</system.serviceModel>

When running in this mode, you’ll be able to access things such as HttpContext.Current and session state.  If you’re interested in how this works under the covers, there’s a great blog post here.  This is definitely a workable approach for those building services that will never need to run outside the context of a web server or support protocols other than HTTP.  However, I decided to really dive into the WCF world and design a fully independent KitchenPC API rather than take the easy way out.  Either that, or it was the fact that I couldn’t get ASP.NET compatibility working under IIS in Integrated Pipeline mode and I got frustrated.  But let’s go with the first one; it sounds more noble.

Luckily, getting rid of cookie dependencies was fairly straightforward.  My web services already took a ticket string, which contained a serialized authentication ticket.  However, this parameter was optional if the HTTP request had a cookie set containing the same information, as I didn’t want to pass that potentially long ticket twice in each HTTP request.  The ticket was also never returned by the Logon web method, since it just set the cookie in the HTTP response.  The new design requires a session ticket to be supplied on sensitive web service calls, and it’s up to the client to store that ticket for future calls.  In my mind, this is simpler and more straightforward.  Web, mobile and tablet will all call the API in the same way, and enforcing these parameters becomes less complicated in terms of code.
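In sketch form, the new contract might look something like this (hypothetical names, not the actual KitchenPC API):

[System.ServiceModel.ServiceContract]
public interface IKitchenPCService
{
   [System.ServiceModel.OperationContract]
   string Logon(string userName, string password); // now returns the session ticket directly

   [System.ServiceModel.OperationContract]
   void CreateMenu(string ticket, string menuName); // ticket is required, not optional
}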

You might be asking me, at this point, what the actual benefit of switching to WCF is.  After all, the web services worked just fine before, right?  Do I really need a protocol agnostic KitchenPC service, when in reality HTTP will be the primary, if not only, means of access?  Well, the answer is flexibility.  Massive amounts of flexibility.  Every single last component of WCF is fully customizable or replaceable, from message inspection, to fault handling, to serialization.  Don’t like something about how your service works?  Change it.  And that’s where WCF starts to get extremely complicated, as I’ve been learning over the past few days.  That’s also why I’ve decided to write a couple more WCF posts, summarizing how I’ve tweaked WCF to build the KitchenPC web service I’ve always wanted.

Stay tuned!

Without Exception

Part of being an entrepreneur is getting used to the fact that work is infinite in nature.  Actually accomplishing something doesn’t reduce the number of items on your to-do list; it simply creates more work.  The more productive you are, the more work will be created, causing an annoying feedback loop similar to putting a microphone too close to the speaker.

I was reminded of this over the long Thanksgiving weekend while hacking together the first few KLOCs of code that will ultimately become the common mobile library for KitchenPC.  This is a UI-agnostic API that will eventually power things like Windows Phone, Surface, Windows 8 apps, and other devices.

The main function of this library is to abstract client-to-server communications between a mobile or tablet device and the server.  It works like a web service proxy, such as one Visual Studio will generate for you from a WSDL contract.  However, I wanted to write one from scratch because, one, that’s the way I roll, and two, I can write a version that’s faster, more portable, and more lightweight.

It’s written on the CoreCLR (which has been ported to various platforms, including iOS and Android), which provides only a subset of the raw power of the full .NET Framework.  I think developing a core KitchenPC framework on this platform can ease development of KitchenPC apps on various devices in the future, and maybe some day evolve into a full-fledged open-source KitchenPC SDK.  I wanted to make the code pure, without any outside dependencies, and as portable and easy to manage and distribute as possible.

While basic WCF libraries are available on the CoreCLR, they’re prone to a few problems.  One, they only implement SOAP transports.  While KitchenPC can speak SOAP, the JavaScript libraries use JSON to communicate with the server, which is far less verbose and more efficient.  Two, my initial attempts to get that code running on the Mono framework were far from successful.  Even Xamarin has marked the WCF stack as alpha quality.  If I wanted something truly efficient, truly cross-platform, and truly low-level, I was just going to have to write it myself.  So that’s what I did.

So, now that you’re aware of the amount of work I signed up for, here comes the part where it spawned even more work, like crazed bunnies in the spring.

The KitchenPC web services are built with the idea in mind that return values should always be valid and indicate a successful result.  Errors should be handled as exceptions, which are serialized through SOAP faults or JSON exceptions.  This was the design decision I made, simply because it seemed cleaner to me.  I could just throw exceptions at any place in the stack, without having to litter try/catch blocks all over the place.  I’ve used a bunch of web-based APIs that return things like Result, which have a Success property indicating if the call was successful, an Error property with error information, etc.  The Exchange Server API is designed in this manner.

Well, turns out, when the ASP.NET Script Services throw an exception and that exception information is serialized out to the client, the resulting HTTP code is 500 (Internal Server Error).  This probably makes sense, as you’d want to be able to trap that code and handle that condition as an error.  In HTTP terms, the result looks like this:

HTTP/1.1 500 Internal Server Error
Cache-Control: private
Content-Type: application/json; charset=utf-8
Server: Microsoft-IIS/7.5
jsonerror: true
X-AspNet-Version: 4.0.30319
X-Powered-By: ASP.NET
Date: Sat, 24 Nov 2012 22:42:33 GMT
Content-Length: 91

-- JSON serialized version of your Exception object

Now, with the CoreCLR WebClient libraries, to my knowledge (and after many hours of trying), there is simply no way to actually get access to that serialized JSON exception in the response.  Once WebClient sees a HTTP 500, it throws a generic WebException.  Basically, you’d do something like:

WebClient client = new WebClient();
client.Headers["Content-Type"] = "application/json; charset=UTF-8";

client.UploadStringCompleted += delegate(object sender, UploadStringCompletedEventArgs e)
{
   if (e.Error != null)
   {
      // e.Error will be a generic WebException, with nothing helpful of any sort
   }

   // On a successful call, e.Result will contain the HTTP response
};

client.UploadStringAsync(new Uri(url), json);

If the web service call failed, an HTTP 500 would be returned, which would cause e.Error to be set to a generic WebException object. The Message property would say something unhelpful like “NotFound”. If you tried to access e.Result, you’d get another exception. Great.  They made it as hard as possible to do anything low-level with this API.

I tried every way I could think of to intercept the actual HTTP response before an exception is thrown (writing all sorts of really messy code), but the framework was just not having it. It’s no wonder so many applications say things like “An error has occurred. Please try again later.” – it’s bloody impossible to ever get the information you need!

I suppose the only way around it would be to re-implement the HTTP protocol at the socket level, but who has that kind of time?

Without access to this exception information, I could not do things like tell the user their password was wrong, or if a menu could not be created and why.  Sure, I could guess what the error most likely was, but that’s lame.

Even if there were a way around this, I ran into another issue.  Through the course of debugging this mess, I discovered a major bug on KitchenPC that only repros in production.  I wasn’t getting back exception data at all, even if I had been able to parse it.  The HTTP response looked like:

{"Message":"There was an error processing the request.","StackTrace":"","ExceptionType":""}

That’s right, the production KitchenPC server was eating exception information, which I rely on for various site UI functionality (such as logon errors, duplicate menu names, registration issues, etc)!  All this code was broken, and I hadn’t even noticed since I do very little testing in production.

Turns out, ASP.NET will only serialize exception information if you have <customErrors mode="Off" /> set in web.config, and debug mode enabled. This, of course, exposes all sorts of other debug information with site crashes as well, which you may or may not want your users to have access to. There doesn’t appear to be a way to allow ASP.NET web services to serialize certain exceptions, without running in debug mode, since the creators of ASP.NET saw this as sensitive data, and definitely not something you’d be depending on for your UI to function properly!  Uh oh.

During the KitchenPC beta, I just ran the code in debug mode as I actually wanted detailed errors to be displayed, logged, etc.  It was a beta product, I needed to log things, trap things, and if something happened, I wanted users to be able to report as much information as they could on it.  Thus, I never noticed this design was potentially flawed in production use.  I don’t think it was until I switched to .NET 4.5 and started using web.config transforms that I really had a solid production configuration.  So, many of the error handling features have been broken on my site for the past few weeks.  Sigh.

I knew the only approach was to re-think how exceptions are handled.  First, exceptions should be exceptional.  An exception should be thrown if and only if that condition was completely unexpected.  Things like bad passwords and duplicate user names are perfectly reasonable, and thus should be communicated with a valid return object and an HTTP 200 result.

I then spent all of Sunday night revisiting every single API, ripping out the exception-throwing code and implementing a new base return type with properties such as Error.  Ugh, I’ve now implemented the exact design I said at the beginning of this post that I hated so much, but I guess that’s how things must work.
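A minimal sketch of what such a base type might look like (the names here are assumptions for illustration, not the actual KitchenPC types):

public class Result
{
   public bool Success { get; set; }
   public string Error { get; set; }  // set for expected failures instead of throwing
}

public class LogonResult : Result
{
   public string Ticket { get; set; } // only meaningful when Success is true
}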

It’s of course very important to get all these API issues worked out now rather than later.  Once I release mobile and tablet apps, changing APIs can be quite a hassle.  With web code, I can simply change the JavaScript files and invalidate the CDN cache when I deploy changes.  With mobile applets, I really don’t want to have to force users to upgrade to a new version to continue running their app.

These new changes have been rolled out to production, so there should be no more missing error messages when you enter the wrong password.  Now, all web service calls return either a Result object, or an object that inherits from Result.  This makes the KitchenPC API much more consistent, and allows me to handle result errors internally within my proxy so that I can throw exceptions when the result contains an error.
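On the client side, a proxy helper along these lines (again, a hypothetical sketch built on the Result type above) can turn error results back into exceptions:

static T Check<T>(T result) where T : Result
{
   if (!result.Success)
      throw new InvalidOperationException(result.Error); // or a custom exception type

   return result;
}

// Usage: LogonResult logon = Check(proxy.Logon(user, pass));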

I think overall, it was a good redesign.  Now, maybe I can actually get some work done.