.NET Fatigue

For those who missed it, Ali Kheyrollahi wrote a post describing his “commentary on the status of .NET” in which he concludes that .NET is a platform for small- to mid-size projects that represent a declining market that will soon die, and he argues for .NET to pivot. His tweet triggered a very long discussion and at least one blog post in agreement by Rob Ashton, which triggered another long Twitter dialogue. The subsequent discussions center largely on .NET Core, ASP.NET Core, and all the craziness around the meanings (if any) of beta, RC, and RTM, and whether breaking API changes are allowed in each of these various stages. I think it’s safe to say the general feeling outside Redmond is that .NET Core 1.0 RTM is really .NET Core 1.0 alpha 1, and developers are back to waiting for the third release before committing to new platforms. (That’s certainly not an absolute.)

The root of this battle, however, appears to stem from bitterness across several boundaries:

  • Microsoft vs OSS (which has arguably improved by leaps and bounds)
  • Enterprise vs ___? (really just the poorly-named “dark matter” vs not)
  • Web vs desktop
  • Windows vs cross-platform/Docker
  • Azure vs on-prem/other cloud
  • C# vs VB/F#/other
  • TFS + VS + NuGet vs alternative tool chain
  • fill-in-the-blank, e.g. SharePoint vs WordPress

Historically, Microsoft has set the tone, and most of their customers follow suit. Microsoft provided a lot of excellent tools to build applications quickly, and this was seen as a good thing. However, other platforms have stolen the leader role, leaving .NET to copy, borrow, and generally play catch-up. There are a few exceptions, but as can be seen above, most of those typically play second fiddle and are not promoted as well as they should be.

I have been quite disappointed over the last year and a half to see how Microsoft managed their transition to OSS. With the exception of F#, a lot of the discussions around the new world of .NET Core have appeared to be far less about finding out what customers wanted and more about continuing to tell customers what they want. (This really only applies to a vocal minority and not to the team as a whole.) With the recent announcements about .NET Core 1.0 RTM, that seems to have been rectified to a large degree, in that .NET Core will now have a significantly better chance of allowing migration from (what is now) netstandard to netcore. I think this is one of the better moves Microsoft has made in the last few years.

Will this ultimately right the ship and ensure everyone currently using .NET remains aboard? I don’t know. Much like the recent post on JavaScript Fatigue, I think a lot of .NET developers are feeling .NET Fatigue and looking for something else. .NET Core was the future hope, and like it or not, the continuous breaking changes have broken the hopes of many. I gave up on caring at least six months ago. There are enough missing APIs that I can’t move to .NET Core any time soon, if ever.

In the F# community, several are looking for ways to cross-compile to other platforms. Fable compiles F# to JavaScript via Babel, and Dave Thomas did a bit of work to show how one might compile to Elixir. This isn’t restricted to F#, of course. Several projects exist to transform C# to JavaScript, and Xamarin has been transforming .NET code to iOS and Android apps for years.

This leads me to wonder whether the question behind the creation of .NET Core, which I think was something like “Can we make a tiny .NET that can run everywhere?”, was the right question. Remember, .NET was created because Sun sued Microsoft for extending Java. A lot has changed since then. Java is now supported on Azure! While I think the CLR is arguably better than the JVM, I wonder whether a better solution might not have been to contribute toward the JVM and work toward merging the JVM and CLR. Perhaps Erlang/BEAM is the right target? WebAssembly all the things?

All of these are non-trivial. I’m not convinced any are good ideas. However, what if the right answer is somewhere along the lines of VS Code (or even VS with its Python, TypeScript, etc. tools): an editor for far more than Microsoft-owned platforms and languages?

Is .NET needed anymore?

What if the C#, F#, and VB compilers were extended to target additional platforms, e.g. the JVM or Erlang? Would you care?

What if you could use TypeScript to build Windows applications rather than C#? Would you care? (For what it’s worth, I would rather write TypeScript than C#, so this is not a made-up question.)

What if Visual Studio and VS Code focused on providing excellent tooling for traditionally CLI-driven tools in the JVM ecosystem and integrated seamlessly across the new Linux support coming in Windows 10? Would that impact you?

Is .NET Core a hope for the future, or yet another Silverlight- or Metro-like fad? Can it really succeed where Java failed? (NOTE: Erlang and node.js succeeded in cross-platform support, so it is possible.)

React: What WPF Should Have Been

I’ve been learning and using React, and I like its general approach to building UIs. I really like the unidirectional data flow, though I’m still on the fence about the virtual DOM. I get the benefits and like that it makes testing really easy. However, I am not sold on its being unquestionably better than directly manipulating the DOM or using Google’s Incremental DOM. Nevertheless, React, Om, Elm, and others have proven it a viable and fast approach.

One of the best advantages of the aforementioned frameworks is that they all allow you to compose custom, reusable elements or components building off of existing HTML elements. (I think Web Components allow something similar, though I have not personally liked their direction.) Another advantage is that you can declare all these things within JavaScript or your transpiler of choice.

I liked many similar attributes of WPF and XAML when I worked on desktop software. At that time, I had just taken a job doing .NET consulting and was still learning .NET. I had mostly tinkered with VB6, PHP, Ruby, Python, SVG, and XForms to build desktop and web applications. I really enjoyed adding SVG and XForms into web apps to make them richer, though I never quite mastered either and, at the time, both required plugins that didn’t often play well together. XAML offered a chance to do something similar with desktop applications, and I enjoyed it a lot.

However, I worried that Microsoft had opted to build their own markup language when so many similar languages already existed and offered a nice composition story. Also, I recall thinking that the previous strategy of walled gardens appeared ready to collapse. Fast forward a few years, and Web Standards were alive and well in the form of HTML5 (with SVG baked in) as Chrome and Firefox began picking up market share, and XAML was isolated to the Microsoft platform. A few other vendors had similar markup language efforts, all of which remained isolated to their own platforms.

Imagine, if you will, a different scenario. Had Microsoft been prescient enough to see the revival and adoption of HTML and SVG and built WPF on top of these, do you think things would have gone differently? XAML had features of XForms without the complexity. XForms didn’t survive on its own, but a XAML built on HTML and SVG with some of the XForms-like features could perhaps have been the baseline for HTML5. Microsoft developers would have had a single UI platform for writing web and desktop applications.

One can dream.

The one thing Microsoft didn’t solve was unidirectional data flow. I’m not sure they would have solved it. Data binding was and still is a popular approach. I dislike it because I’ve so often been bitten by refactorings that didn’t make it into the UI markup templates. However, XAML didn’t require the markup file; you could/can also create those elements in your programming language. I first encountered the unidirectional style in a sample F# WPF application. There is no JSX equivalent, but the unidirectional data flow, immutable values, and all the rest are certainly possible. Yet WPF is still known more for its data-binding abilities and the preferred MVVM pattern than for its flexibility in allowing different styles of development.

I can imagine an alternative reality in which React was just a JavaScript implementation of something realized first by Microsoft in .NET. Unfortunately, that didn’t happen. Nevertheless, one could write a set of elements for WPF to mimic HTML and SVG and then layer on a version of React, were one of a mind to do so. Even if not, you may still find it useful to give F# a try in order to mimic the same unidirectional data flow you can find in React. You can see an example in the recent Community for F# recording with Phil Trelford on F# on the Desktop.
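
To make that concrete, here is a minimal sketch of the unidirectional style in a plain F# WPF app, written for this post rather than taken from that recording: a model, a message type, a pure update function, and a view that only dispatches messages.

```fsharp
open System
open System.Windows
open System.Windows.Controls

type Model = { Count : int }
type Msg = Increment | Decrement

// All state changes funnel through one pure function.
let update msg model =
    match msg with
    | Increment -> { model with Count = model.Count + 1 }
    | Decrement -> { model with Count = model.Count - 1 }

// The view raises messages via dispatch; it never mutates the model itself.
let makeView (dispatch: Msg -> unit) =
    let label = TextBlock()
    let plus = Button(Content = "+")
    let minus = Button(Content = "-")
    plus.Click.Add(fun _ -> dispatch Increment)
    minus.Click.Add(fun _ -> dispatch Decrement)
    let panel = StackPanel()
    panel.Children.Add label |> ignore
    panel.Children.Add plus |> ignore
    panel.Children.Add minus |> ignore
    panel, (fun (model: Model) -> label.Text <- string model.Count)

[<STAThread; EntryPoint>]
let main (_: string[]) =
    let model = ref { Count = 0 }
    let render = ref (fun (_: Model) -> ())
    let dispatch msg =
        model := update msg !model
        (!render) !model
    let panel, renderFn = makeView dispatch
    render := renderFn
    (!render) !model
    Application().Run(Window(Content = panel))
```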

Server MVC and Solving the Wrong Problem

TL;DR MVC frameworks provide infrastructure to solve make-believe problems. They complicate what should be simple.

Update (Feb 4, 2016): While writing this post focused on server-side MVC, I came across a somewhat related post, Why I No Longer Use MVC Frameworks, discussing problems with MVC on the client-side. It follows a slightly different direction and is well worth your time.


In 2014, I presented a talk on F# on the Web at many user groups and conferences. One of my primary goals was to describe F# adoption at Tachyus and show off just how easy it is to use F# for web development. After I presented the talk at CodeMash in January 2015, I put it on the shelf.

By January 2015, I had started contributing to Freya with Andrew Cherry and wanted to push the webmachine-style approach, which I’ve found to be a far better solution. In the meantime, at Tachyus, we started using an in-house tool called Gluon, which generates a strongly-typed (and tightly coupled) TypeScript client for F# API definitions. These different approaches kept bringing me back to thinking about how poorly the MVC paradigm fits server-side applications.

Model – View – Controller

With MVC, you are supposed to separate Models, Views, and Controllers. In most web MVC frameworks, models relate to some materialized version of your data store plus additional DDD or other machinery. Views are represented as HTML and, perhaps, ViewModels, though some like to lump ViewModels in with the Model piece of the puzzle. Controllers are almost always a replica of the RESTful Rails Controller pattern: a class containing methods that represent endpoints and return Views. For the purposes of our discussion, let’s assume that a View may be any serialization format: HTML, XML, JSON, etc.

Aside: I’m aware I just lost some of you. HTML as a serialization format? Yes, it’s just declarative markup and data, the same as other XML, JSON, etc. formats. That it may be rendered by a browser into a user interface is not relevant, as many applications I’ve seen do use HTML as a data communications format because of its hypermedia support.

I think it’s worth noting that MVC started life as a UI pattern in Smalltalk. There was no server in sight. Stephen Walther wrote about The Evolution of MVC at the advent of ASP.NET MVC. If you are unfamiliar, please read it, and I’ll spare you my rendition here.

We’ve already uncovered that we have at least one rogue element floating around in our pattern: the ViewModel. “But wait; there’s more!” We can’t forget the real controller of our application: the Router. The Router is really like a Controller since it intercepts all incoming requests and dispatches to appropriate handlers. And this is really where it starts to break down. Note the terms I just used. I didn’t use “route to controllers, which then determine the appropriate methods to call.” I used entirely different terminology. If you break down most of these MVC frameworks, you will find something similar: a meta-programming mechanism for deconstructing all these pattern artifacts and turning them into handlers available for dispatch from the router. Don’t believe me? Look in your call stack the next time you debug one of these applications. You’ll see a glorious stack of meta-programming calls instead of the few, clean calls to your class-based implementation.


Router – Handler – Formatter

Let’s switch to the terms we actually use in describing what happens. I’ve listed them in the order they intercept a request, though we could switch them around and list them in the order in which they return a response. Isn’t that interesting? These pieces actually form a pipeline:

Request -> Router -> Handler -> Formatter -> Handler -> Router -> Response

This pipeline is listed in terms of handing off control. Most likely, your framework is tuned to let the Formatter write the formatted response directly back to the socket. However, many frameworks, such as ASP.NET Web API, let you return a value that the framework will later serialize and write to the socket. In other cases the framework may buffer the socket actions so it can intercept and manipulate headers, etc.
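
To make the shape concrete, here is a conceptual sketch of the pipeline as plain function types. This is my own illustration, not any particular framework’s API; the names are invented, and status code handling is deliberately simplified.

```fsharp
// Conceptual types only: the router picks a handler, the handler produces
// a value, and the formatter renders that value per the Accept header.
type Request  = { httpMethod: string; path: string; accept: string }
type Response = { status: int; body: byte [] }

type Handler   = Request -> obj
type Formatter = string -> obj -> byte []
type Router    = Request -> Handler

let run (route: Router) (format: Formatter) : Request -> Response =
    fun req ->
        let value = (route req) req
        // In reality the handler would also drive the status code.
        { status = 200; body = format req.accept value }
```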


As I was building the presentation I linked above, I started writing minimal implementations of web apps in F# for TodoBackend, a site showcasing server-side Todo application APIs. I wrote examples in ASP.NET Web API, a little DSL I wrote called Frank, and an implementation using only the Open Web Interface for .NET (OWIN). The OWIN implementation was intended to show the maximum LOC you would have to write. I was surprised that it was rather close to the other implementations, longer by only 30-40 LOC.

The OWIN implementation surprised and delighted me. I was amazed at how simple it was to write a for-purpose routing mechanism without a proper Router implementation. I had built a light wrapper around the ASP.NET Router for Frank, but the combination of F#’s Active Patterns and System.ServiceModel.UriTemplate provided a nice, explicit, and still comparatively light and powerful mechanism for composing a Router.
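
The code below is a hedged sketch in that spirit rather than the original implementation: it uses plain path-segment matching where the original leaned on UriTemplate, and it assumes only the standard OWIN environment keys.

```fsharp
open System
open System.Collections.Generic
open System.Text
open System.Threading.Tasks

type OwinEnv = IDictionary<string, obj>

// Write a status code and string body back through the OWIN environment.
let respond (status: int) (body: string) (env: OwinEnv) : Task =
    env.["owin.ResponseStatusCode"] <- box status
    let bytes = Encoding.UTF8.GetBytes body
    let stream = env.["owin.ResponseBody"] :?> IO.Stream
    stream.WriteAsync(bytes, 0, bytes.Length)

// Active patterns classify the request by method and path segments.
let (|Method|) (env: OwinEnv) = string env.["owin.RequestMethod"]
let (|Path|) (env: OwinEnv) =
    (string env.["owin.RequestPath"]).Split([| '/' |], StringSplitOptions.RemoveEmptyEntries)
    |> List.ofArray

// The "router" is one function and one match expression.
let app (env: OwinEnv) : Task =
    match env with
    | Method "GET" & Path ["todos"]     -> respond 200 "[]" env
    | Method "GET" & Path ["todos"; id] -> respond 200 (sprintf "{\"id\":\"%s\"}" id) env
    | _                                 -> respond 404 "Not Found" env
```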

Aside: Please note I’m not stating we should absolutely throw away all routing libraries. I merely want to point out that 1) what we are doing in web applications is primarily routing to handlers and 2) routers are not all that complicated, so you shouldn’t think you _need_ a routing library.

FWIW, were I to recommend a routing library to F# web developers, I would suggest [Suave](https://suave.io/). I’ve really come to enjoy its flexibility.
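
For a taste of why, here is a small example against Suave’s 1.x-era combinator API: routes are ordinary functions composed with >=>, so the router remains visible, explicit code.

```fsharp
open Suave
open Suave.Filters
open Suave.Operators
open Suave.Successful

let app : WebPart =
    choose [
        GET  >=> path "/todos" >=> OK "[]"
        GET  >=> pathScan "/todos/%s" (fun id -> OK (sprintf "{\"id\":\"%s\"}" id))
        POST >=> path "/todos" >=> OK "created"
        RequestErrors.NOT_FOUND "Not Found"
    ]

startWebServer defaultConfig app
```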


We’ve covered the non-MVC part of MVC frameworks. What about the others? I’m not entirely certain about the Model aspect, to be fair. I think the ASP.NET MVC team took a good approach and left it out of the core, which is interesting in that they really provided a VC framework. Already the “MVC” pattern has broken down.

What about Controllers then? Controllers are typically classes with handler methods, as we said above. There’s that Handler word again. It’s worth noting that the core of ASP.NET contains an IHttpHandler interface that ASP.NET MVC builds on, as well. So let’s take a deeper look at a Handler, shall we?

The simple OWIN router I showed above calls simple functions. Here they are:
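
(The original snippet isn’t reproduced here. The following hedged reconstruction continues the router sketch above, reusing its OwinEnv alias and respond helper, with a ConcurrentDictionary standing in for real data access.)

```fsharp
open System.Collections.Concurrent

let store = ConcurrentDictionary<string, string> ()   // id -> serialized todo

let getTodos (env: OwinEnv) : Task =
    respond 200 ("[" + String.Join(",", store.Values) + "]") env

let getTodo (id: string) (env: OwinEnv) : Task =
    match store.TryGetValue id with
    | true, todo -> respond 200 todo env
    | _ -> respond 404 "" env

let postTodo (env: OwinEnv) : Task =
    use reader = new IO.StreamReader(env.["owin.RequestBody"] :?> IO.Stream)
    let todo = reader.ReadToEnd ()
    store.[Guid.NewGuid().ToString "N"] <- todo
    respond 201 todo env
```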

If you were to ignore all the machinery built into Controller base classes to support the meta-programming-heavy MVC pattern, you would find something like this at the bottom of the call stack. Ultimately, you need to pass in some things from the Request, look up some data, transform it to a representation, and return it with headers, etc. It’s all relatively simple, really.

However, there’s a trade-off to using simple functions: they are now easily re-used in different parts of a router. I don’t particularly find this a problem, but if you were to shove hundreds of these functions next to each other in a single file, you might run into a maintenance problem since the function signatures are almost all identical.

I think the class-based approach has some nice benefits, though I do prefer the function-based approach. With a class, you can collect related things together and take advantage of compilers to prevent re-using the same method name with similar parameters. Unfortunately, almost all real MVC frameworks work around these potential safety measures with their meta-programming capabilities and use of attributes. Oh, well.

Aside: I’m sure some of you are wondering about the model binding support that turns query string and form parameters into typed arguments. I don’t really know where those fit in this, so I’m adding them to this aside. While model binding seems to work okay, I can’t think of an application where I didn’t run into problems with it. I found it easier and hardly more effort to manually deserialize query strings and payloads using simple functions. These are often quite easy to reuse across multiple handlers/controllers, and you again gain explicit control. YMMV.
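
As a hedged sketch of what that manual approach can look like (all names here are invented for illustration):

```fsharp
open System

// Parse "?page=2&size=10" into a Map by hand; no model binder required.
let parseQuery (query: string) : Map<string, string> =
    query.TrimStart('?').Split([| '&' |], StringSplitOptions.RemoveEmptyEntries)
    |> Array.choose (fun pair ->
        match pair.Split '=' with
        | [| k; v |] -> Some (Uri.UnescapeDataString k, Uri.UnescapeDataString v)
        | _ -> None)
    |> Map.ofArray

// Typed lookups become ordinary functions, reusable across handlers.
let tryInt (key: string) (q: Map<string, string>) : int option =
    q
    |> Map.tryFind key
    |> Option.bind (fun v ->
        match Int32.TryParse v with
        | true, n -> Some n
        | _ -> None)

// tryInt "page" (parseQuery "?page=2&size=10") evaluates to Some 2.
```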


Formatting, or rather Content Negotiation, replaces the view layer. HTTP stipulates that the server returns a Representation of a Resource, not the Resource itself. In the case of a file, the Representation will likely be a copy. In the case of a dynamically generated response, the Handler serves as the Resource and responds to a request by using a formatter to render the requested output. While the most common response used to be HTML — and thus a View — many responses now return XML, JSON, or any of a myriad of other formats. Picking only one may be a pragmatic choice, but it’s really limiting in a framework. Note we have not even addressed the various special cases of JSON and XML formats that provide more meaning, links, etc. and don’t work with any one formatter.

Then again, you’ll find times when a single format and a general serializer is all you need. The example app I’ve described above serves only JSON and has serialize and deserialize functions that just call Newtonsoft.Json.JsonConvert:
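
(The post’s snippet isn’t reproduced here; the following is a hedged reconstruction of those two functions.)

```fsharp
open Newtonsoft.Json

let serialize (value: 'T) : string =
    JsonConvert.SerializeObject value

let deserialize<'T> (json: string) : 'T =
    JsonConvert.DeserializeObject<'T> json
```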

What happened to all my talk of Content Negotiation, etc? It’s still there. Were I to realize I needed to support a client that wanted XML, I could add a pattern match on the Accept header and call different formatting functions. But why bother setting up all that infrastructure when I don’t need it? I’m not arguing for an ivory tower here. I just want to point out that HTTP supports this stuff, and MVC does not.
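
Sketched with invented names, and leaning on Json.NET’s XML conversion purely for illustration, that addition might look like this:

```fsharp
open Newtonsoft.Json

// Negotiation as a plain pattern match on the Accept header. The XML branch
// assumes an object-shaped JSON payload; a real app would format properly.
let formatFor (accept: string) (json: string) : string * string =
    match accept with
    | a when a.Contains "application/xml" ->
        let doc = JsonConvert.DeserializeXmlNode(json, "item")
        "application/xml", doc.OuterXml
    | _ -> "application/json", json
```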

In short, formatting is an area that often severely limits a framework’s flexibility by choosing to make easy a few blessed formats and mostly forsaking the rest.

I think it’s also worth noting that writing a general-purpose serializer is difficult and requires a lot of work. Writing a bit of code to generate string formats from known types involves some boilerplate but is rather trivial.


I want to make it clear I’m not trying to dump on all frameworks. While I do prefer composing applications from smaller libraries, I think frameworks can be very useful, especially when properly used to solve the problem for which the framework was designed. However, MVC-style frameworks fell off the rails (pun intended) long ago and suffer from a poor abstraction layer. A pattern or framework should provide abstractions that relate to their domain and solve common problems. Re-purposing a pattern name because it has some popularity is asking for trouble and will ultimately prove limiting. MVC is the reason so many people think of HTTP and REST as a form of CRUD over a network connection and have little to no understanding of the richness of the HTTP protocol.

MVC does not fit HTTP.

Image Credits: ASP.NET Web API Tracing (Preview) by Ron Cain on his blog

Demand Driven Architecture or REST or Linked Data?

I recently listened to David Nolen’s talk from QCon London back in July, called Demand Driven Architecture. Before continuing, you should have a listen.


I really like a lot of what Mr. Nolen has done and enjoy most of his talks and posts. I was less enthused with this one. I think my main hang-up was his misrepresentation of REST and resources. I get the feeling he equates resources with data stores. If you watched the video and then skimmed that Wikipedia page, you will quickly see that the notion of “joining” two resources is nonsensical. I think Mr. Nolen is really referring to the “pragmatic” definition that means POX + HTTP methods, which really would correlate well with data stores.

Real REST drives application state, so you would not need to join to anything else if using REST. His criticism of performance also misses the mark, for if you are using REST then you should also be carefully planning out and leveraging a caching strategy. REST isn’t appropriate for every application, but not for the reasons Mr. Nolen so casually dismisses it. Think about it this way: if REST is not suitable for performance or mobile devices, then you must also agree that all websites (not apps, just sites like Amazon.com) fail on mobile devices. That’s just absurd.

I can’t imagine anyone is surprised when Mr. Nolen mentions SQL. He’s just mentioned joins, and what else would a developer associate with that term? If you have hung around the web long enough, you may have heard of SPARQL, which is a query language for linked data. Linked Data and SPARQL never seem to have caught on, but they address at least part of the problem Mr. Nolen presents. A big part of Linked Data’s failing is lack of tooling and painful implementation (mostly related to RDF). Perhaps new tools like BrightstarDB will help turn things around.

Interestingly, Mr. Nolen’s solution, SQL, is strong at ad hoc querying but not so great at scalability. If the web, linked data, etc. were already addressed by SQL, then those technologies would not exist. I really don’t get the need for misrepresentation and indirection here. (On a related note, Fabian Pascal recently posted on the “Conflation & Logical-Physical Confusion” surrounding misrepresenting SQL as the relational model, which ties in well with this post.)

The only thing presented here is an alternative to REST in which the client specifies dependencies for the server to fulfill. This flips the REST style, where the server drives application state and provides contracts via media types, into a client-driven approach. This is a perfectly valid approach, and interestingly one where Linked Data could excel. Tools such as Datomic, GraphQL + Relay, and Falcor certainly look interesting and appear to work well for very large projects.

I have no doubt that any of these techniques, done well, provides excellent results. Tooling will likely determine the winner, for better or worse.

Community for F# Activity

My consistency with running the Community for F# over the last year or so has been lacking, to say the least. I’ve spread myself too thin working on various open source projects, mostly relating to pushing OWIN. Now that work is (mostly) done, I plan on re-focusing on driving the Community for F# early next year. I’ve already started trying to line up speakers for the first six months. Unfortunately, we’ll miss December and possibly even January, as Google changed their +Pages to My Business and dropped Hangouts support, so far as I can tell. I need to sort that out.

As exciting as that is to some of you, I am most excited about the new web application that is currently under development. It looks like a slightly broken version of the site you can currently find at http://c4fsharp.net/, but if you look through the issues, you will find a lot of exciting new features coming to you in 2016. Here’s a short list:

  1. Video feed – pull all F#-related videos from the YouTube and Vimeo channels into a searchable list
  2. User groups and Events – move the work Reed Copsey built to list F# user groups and events into a proper data store with an admin interface for people to add more; also, possibly pull events and groups from meetup.com and lanyrd.com
  3. Searchable dojo list – similar to the above for the list of dojos
  4. Community wishlist – allow users to submit and promote the things they would really like to see added to F# or its ecosystem; essentially an improved UserVoice
  5. Community ratings – add ratings or at least a “Like” button to every piece of content
  6. A fresh re-design of the site to better accommodate the content

Possibly the most exciting news is that this web app refresh will become a showcase for F# web development. I’ve started building the new web application with Suave and WebSharper, and I have plans to add Freya. I also started writing a GitBook that will explain how the site was developed and the decisions made in selecting each implementation, as well as why other technologies were not selected. I hope this book provides some guidance to those looking to build web apps with F# and struggling to understand how to proceed.

If you are interested in helping with any of these things, please reach out. Community for F# is driven by a handful of very busy volunteers, and we could definitely use more help. Not only do I have big dreams for improving the web app, but we need help managing the Twitter account, finding and scheduling speakers and hangouts, managing the much neglected LinkedIn page, and more.

I would like to especially thank Mathias Brandewinder and Reed Copsey for their help in expanding and pushing the Community for F# forward. I would also like to thank the presenters who have agreed to do Live Meetings and Hangouts with us.

Lastly, thank you to all the local user group organizers and presenters! We have been honored this year by those that allowed us to stream their content, especially the Portland F# Meetup Group, the San Francisco F# User Group, and the F# Seattle User Group. If you would like to partner with the Community for F# to stream your group’s meetings, please reach out.

Thanks for a terrific 2015, and here’s to an exciting 2016!

Web Application Composition

I find it tragic that we have only the ever-growing monolith of HTML5 on which to build web applications. Sure, it includes some good stuff like canvas and svg, but overall I find it a monstrosity. Whatever happened to XML namespaces? I know, “XML! Run for the hills!” However, the use of namespaces allowed simpler markup formats to be intermixed, and it worked rather well, though it was a bit tedious to write by hand. Ultimately, I think the need to write these things by hand, due to bad tooling, led to the death of XML as a beloved format. Unfortunately, XML namespaces were replaced with a hack the size of Facebook. And the world cheered.

The majority of programmers are completely fine with their hacks upon hacks. I applaud these Herculean efforts. We all want to build something, and the fastest way to do so is by using what’s available. Today, we have HTML5 + CSS + JavaScript. Why do we have those? We have them because the editors given to us in the past so completely failed to work as promised. Nested HTML table elements for layout, anyone? (I am so sorry for you if you are still dealing with this.) The promise of XML was that machines could read and manipulate it easily (and they can) and that developers could safely ignore it. Of course, as a text-based, readable format, developers could still work out any kinks, should any arise. Unfortunately, we had kinks arising everywhere, like we were living through the Walking Dead of XML documents.

In giving up the various, independent XML formats, such as MathML, SVG, XForms, XHTML, XSLT, etc., we lost a lot of potential in the form of composition. Remember, these are data interchange formats. Yes, even XHTML, contrary to the attempts to make it an application platform. Before, you could compose multiple formats together into one. (If you would like to see one, the W3C still has one lying around.) Today, you are at the mercy of the HTML5 committee and the browser makers. We also lost the option to use XSLT rather than CSS for styling, and CSS grew from a roughly layout-oriented spec to include content insertion and animation. These hacks work, but it’s like building a skyscraper with duct tape and baling wire.

We are not without hope today. Composition reappeared through various, mostly JavaScript-based technologies, such as Web Components and React. These aren’t a true replacement for what we lost, but they certainly help make up for it. What’s interesting is that these rely on browsers’ JavaScript engines, which are really fast now, and yet they keep building on HTML5. Why? Why not create something even more basic?

Think about how we might be able to right the wrongs of the past with XML by making the browser more like an app container. What if you could build up one or more formats, be they markup, JS, etc. and compose them together? What if the basic building blocks were a new DOM that defined only a few simple primitives and some rules for inter-connecting formats? (I would like to link to the experiment of re-writing HTML5 as web components, but I cannot find the link anymore.)

I’m sure at this point that many of you have flipped back into thinking of HTML as a view layer. I continue to disagree with this notion, despite the massive additions made by HTML5. In the end, HTML is a data format; the browser just knows how to parse that format and render it visually. And HTML is a fantastic data format. In fact, Mike Amundsen wrote a book, Building Hypermedia APIs with HTML5 and Node, describing HTML’s adaptability as a data interchange format. I used to be excited about the possibility of sending XHTML+MathML+SVG payloads as data.

Unfortunately, JSON became the standard interchange format. JSON has a lot to like. It’s small, easy to create, easy to evaluate in a JavaScript application, etc. However, it has no semantics for how to merge or include multiple, derived formats. This is mostly a problem in that everyone sends application/json as the media type, so no machine could ever understand what was included. Newer efforts, such as HAL, Siren, UBER, and JSON-LD, also fail to address format composition.

I hope the situation changes. I would like to help. After Darrel Miller’s talk at APIStrat 2015 in Austin, I downloaded the tools necessary to write an RFC to define media type composition. Unfortunately, I am still trying to figure out exactly how that might work. If you kept to a common base format, such as XML or JSON, things would probably work out alright. But what happens if you want to allow someone to mix a YAML document into their JSON? Things get tricky, especially considering JSON has no concept of links, so you can’t even add it as an external reference or import. Another likely use of this might be to provide a mechanism for code-on-demand for multiple languages. The possibilities really are limitless, especially if you wrap this back into my desire above to replace HTML in browsers with something much more primitive.

If this strikes a chord with you, please reach out in the comments. I would love to see this move forward, and I really need help. If you know of anyone else already attempting something similar, please let me know, as I would love to help.

Web APIs vs RPC

I keep finding a lot of developers who like to model and build web apps using HTTP as a transport for RPC. I don’t understand this, but I thought I would do my best to provide a framework for understanding why one might choose to use the HTTP protocol rather than ignoring it and layering RPC on top.

I’m not terribly surprised that many developers find it hard to want to follow HTTP. The current frameworks do little to help with the HTTP protocol. Most server-side frameworks give you some sort of MVC approach that only represents HTTP methods, URIs, and possibly content negotiation. I hesitate to include the concept of HTTP resources, as most of those frameworks prefer to ignore the strict concept and use something conceptually closer to a database table. Those brave few frameworks that do provide additional support for HTTP typically leave the details to the developer. While this may appear to be a good thing, freedom to define the protocol semantics ignores the fact that the HTTP RFCs provide a well-defined processing model. Why don’t more frameworks provide the tools developers need to leverage the HTTP processing model?

Is it any wonder that naysayers might easily argue that HTTP-based applications look very similar to RPC, perhaps adding additional complexity to hide RPC under a thin veneer of URI templates and HTTP methods?

I imagine it’s a surprise to many people to read the abstract from the 1992 definition of HTTP:

HTTP is a protocol with the lightness and speed necessary for a distributed collaborative hypermedia information system. It is a generic stateless object-oriented protocol, which may be used for many similar tasks such as name servers, and distributed object-oriented systems, by extending the commands, or “methods”, used.

If this were more obvious, don’t you think developers — the majority of whom are in love with OOP — would want to embrace this protocol? The history of the RPC-over-HTTP trend appears to stem from the fact that HTTP, running on port 80, was typically a reliable port on which to send network traffic because most environments opened port 80 by default and infrastructure teams were a bit picky about opening up random ports for custom applications.

How might one approach HTTP as a distributed-OO protocol? To start, I could imagine a client proxy to a server object. The client object looks and acts like the server object but makes all its calls over the network. Would you want to expose each of the object’s methods as HTTP method extensions? Well, you don’t really need to do so. The HTTP protocol defines POST for most purposes, and you can include any additional details such as the object’s method name, much as you might when using .send in Ruby. For those who favor a more type-safe approach and use a language like F#, you might find a sum type, or discriminated union, useful:
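
(The original sample isn’t reproduced here; the following is a hedged sketch of the idea.)

```fsharp
// The remote object's operations become cases of a discriminated union,
// serialized into the POST body, so the method name travels as data
// rather than as a new HTTP verb.
type CustomerOperation =
    | Rename of newName: string
    | Suspend
    | AddNote of note: string

// The client proxy serializes the case and its arguments for the POST body.
let payload (op: CustomerOperation) : string =
    Newtonsoft.Json.JsonConvert.SerializeObject op
```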

If this is starting to sound like SOAP and WS-*, you are not wrong. The thing is, SOAP was not terribly bad. SOAP and WS-* went wrong in layering additional protocol semantics on top of HTTP, which already has built-in support for many of those additional pieces. However, I’m not voting for a return to SOAP (though there are some aspects to tools like WCF that would be quite nice to have).

Wouldn’t it be nice to be able to declare method characteristics such as idempotency and safety and allow tooling to correctly negotiate that the appropriate method should use one of GET, PUT, or DELETE instead? What if you could define a graph of resources that know each other exist (by means of hypermedia) and that could correctly coordinate distributed workflows that would again be able to have correct client proxy representations driven through hypermedia?
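
As a purely speculative sketch of that first idea, with invented names:

```fsharp
// Declare each operation's characteristics and let tooling pick the verb.
type Characteristic =
    | Safe          // no side effects        -> GET
    | Idempotent    // repeatable side effect -> PUT or DELETE
    | Unsafe        // everything else        -> POST

let httpMethodFor = function
    | Safe -> "GET"
    | Idempotent -> "PUT"   // or DELETE, depending on the operation's intent
    | Unsafe -> "POST"
```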

This is still only scratching the surface of HTTP, of course. Do you really care about status codes? Headers? I bet you don’t. You may claim to care, but ultimately you probably only care because most frameworks you use either get them wrong or ignore them entirely. Consider a tool that would take this distributed-OO style and then correctly negotiate representations for your client app, take care of conflicts, deal with cache invalidation, etc. Wouldn’t that be nice? Such tools exist. Take a look at webmachine and liberator. I’m working on one now for F#/.NET called Freya with Andrew Cherry. Granted, I don’t think any of those tools tackle the concept of a distributed OO framework, but that should be something we can build on top.

Taking this a bit further, wouldn’t it be nice to be able to generate both client and server boilerplate off of a central design document or even a media type definition? Swagger offers something like this. I think we can do more in this direction, and I’m excited to see where we land.

What do you think? Is this a lot of nonsense, or should we keep digging back into HTTP and attempting to leverage it more? I’m also interested in your thoughts as to whether other protocols, such as Web Sockets and Service Bus layers will eventually replace HTTP as our dominant protocol (or at least network port).

Special thanks to Darrel Miller for his review and feedback on this post.