Web APIs vs RPC

I keep finding a lot of developers who like to model and build web apps using HTTP as a transport for RPC. I don’t understand this, but I thought I would do my best to provide a framework for understanding why one might choose to use the HTTP protocol rather than ignoring it and layering RPC on top.

I’m not terribly surprised that many developers find it hard to want to follow HTTP. The current frameworks do little to help with the HTTP protocol. Most server-side frameworks give you some sort of MVC approach that only represents HTTP methods, URIs, and possibly content negotiation. I hesitate to include the concept of HTTP resources, as most of those frameworks prefer to ignore the strict concept and use something conceptually closer to a database table. Those brave few frameworks that do provide additional support for HTTP typically leave the details to the developer. While this may appear to be a good thing, freedom to define the protocol semantics ignores the fact that the HTTP RFCs provide a well-defined processing model. Why don’t more frameworks provide the tools developers need to leverage the HTTP processing model?

Is it any wonder that naysayers might easily argue that HTTP-based applications look very similar to RPC, perhaps adding additional complexity to hide RPC under a thin veneer of URI templates and HTTP methods?

I imagine it’s a surprise to many people to read the abstract from the 1992 definition of HTTP:

HTTP is a protocol with the lightness and speed necessary for a distributed collaborative hypermedia information system. It is a generic stateless object-oriented protocol, which may be used for many similar tasks such as name servers, and distributed object-oriented systems, by extending the commands, or “methods”, used.

If this were more obvious, don’t you think developers, the majority of whom are in love with OOP, would want to embrace this protocol? The RPC-over-HTTP trend appears to stem from the fact that port 80, HTTP’s default, was typically open in most environments, while infrastructure teams were a bit picky about opening up random ports for custom applications; HTTP thus became a reliable channel for almost any network traffic.

How might one approach HTTP-as-distributed-OO protocol? To start, I could imagine a client proxy to a server object. The client object looks and acts like the server object but makes all its calls over the network. Would you want to expose each of the object’s methods as HTTP method extensions? Well, you don’t really need to do so. The HTTP protocol defines POST for most purposes, and you can include any additional details such as the object’s method name, much as you might when using .send in Ruby. For those who favor a more type-safe approach and use a language like F#, you might find a sum type, or discriminated union, useful:
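Something along these lines, where the AccountCall type is purely illustrative (my invention, not from any shipped library):

```fsharp
// Hypothetical remote "account" object: each union case names one callable
// method and carries its arguments. A client proxy would serialize the case
// name into a POST body, much as Ruby's .send carries a method name.
type AccountCall =
    | Deposit of amount: decimal
    | Withdraw of amount: decimal
    | GetBalance

// Extract the method name to place in the POST payload.
let methodName call =
    match call with
    | Deposit _  -> "Deposit"
    | Withdraw _ -> "Withdraw"
    | GetBalance -> "GetBalance"
```

The sum type gives the proxy a closed, type-safe catalog of callable methods while the wire protocol stays plain POST.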

If this is starting to sound like SOAP and WS-*, you are not wrong. The thing is, SOAP was not terribly bad. SOAP and WS-* went wrong in layering additional protocol semantics on top of HTTP, which already has built-in support for many of those additional pieces. However, I’m not voting for a return to SOAP (though there are some aspects to tools like WCF that would be quite nice to have).

Wouldn’t it be nice to be able to declare method characteristics such as idempotency and safety and allow tooling to correctly negotiate that the appropriate method should use one of GET, PUT, or DELETE instead? What if you could define a graph of resources that know of one another’s existence (by means of hypermedia) and that could correctly coordinate distributed workflows, with client proxy representations again driven through hypermedia?
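As a sketch of the first idea (the types and mapping below are my invention, not an existing tool), declared method characteristics could be plain data, and tooling could negotiate the HTTP method from them:

```fsharp
// Characteristics a developer might declare about an operation.
type MethodTraits = { Safe : bool; Idempotent : bool; Deletes : bool }

// Negotiate an HTTP method from the declared traits: safe calls map to GET,
// idempotent removals to DELETE, idempotent writes to PUT, the rest to POST.
let negotiate traits =
    match traits with
    | { Safe = true }                       -> "GET"
    | { Deletes = true; Idempotent = true } -> "DELETE"
    | { Idempotent = true }                 -> "PUT"
    | _                                     -> "POST"
```

The point is that the developer declares semantics, and the protocol details fall out of the HTTP processing model rather than being hand-coded per endpoint.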

This is still only scratching the surface of HTTP, of course. Do you really care about status codes? Headers? I bet you don’t. You may claim to care, but ultimately you probably care because most frameworks you use either get them wrong or ignore them entirely. Consider a tool that would take this distributed OO style and then correctly negotiate representations for your client app, take care of conflicts, deal with cache invalidation, etc. Wouldn’t that be nice? Such tools exist. Take a look at webmachine and liberator. I’m working on one now for F#/.NET called Freya with Andrew Cherry. Granted, I don’t think any of those tools tackle the concept of a distributed OO framework, but that should be something we can build on top.

Taking this a bit further, wouldn’t it be nice to be able to generate both client and server boilerplate off of a central design document or even a media type definition? Swagger offers something like this. I think we can do more in this direction, and I’m excited to see where we land.

What do you think? Is this a lot of nonsense, or should we keep digging back into HTTP and attempting to leverage it more? I’m also interested in your thoughts as to whether other protocols, such as Web Sockets and Service Bus layers, will eventually replace HTTP as our dominant protocol (or at least network port).

Special thanks to Darrel Miller for his review and feedback on this post.

Working with Non-Compliant OWIN Middleware

Microsoft.Owin, a.k.a. Katana, provides a number of very useful abstractions for working with OWIN. These types are optional but greatly ease the construction of applications and middleware, especially security middleware. The Katana team did a great job of ensuring their implementations work well with Microsoft.Owin as well as the standard OWIN MidFunc described below. However, not all third-party middleware implementations followed the conventions set forth in the Katana team’s implementations. To be fair, OWIN only standardized the middleware signature earlier this year, and the spec is still a work in progress.

At some point in the last year, I discovered that the HttpMessageHandlerAdapter was not, in fact, OWIN compliant. I learned from the Web API team that they would not be able to fix the problem without introducing breaking changes, so I came up with an adapter that should work with most implementations based on OwinMiddleware that didn’t expose the OWIN MidFunc.

The solution is rather simple, and the Katana source code provides one of the necessary pieces in its AppFuncTransition class. The adapter wraps the OwinMiddleware-based implementation and exposes the OWIN MidFunc signature. After creating the adapter specifically for HttpMessageHandlerAdapter, I created a generic version I think should work for any OwinMiddleware. (Examples in both C# and F#.)
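The gists above hold the real C# and F# code; as a rough, self-contained sketch of the shape, here is the idea in F#, with MiddlewareBase standing in for Microsoft.Owin’s OwinMiddleware (the real class works with an IOwinContext rather than the raw environment dictionary):

```fsharp
open System
open System.Collections.Generic
open System.Threading.Tasks

// The standard OWIN signatures: application function and middleware function.
type AppFunc = Func<IDictionary<string, obj>, Task>
type MidFunc = Func<AppFunc, AppFunc>

// Stub standing in for Microsoft.Owin's OwinMiddleware base class.
[<AbstractClass>]
type MiddlewareBase(next: AppFunc) =
    member _.Next = next
    abstract Invoke : IDictionary<string, obj> -> Task

// The adapter: given a constructor that wraps a "next" AppFunc,
// expose the middleware as a standard OWIN MidFunc.
let toMidFunc (create: AppFunc -> MiddlewareBase) : MidFunc =
    MidFunc(fun next -> AppFunc(fun env -> (create next).Invoke(env)))
```

The real adapter does the same dance, using Katana’s AppFuncTransition to bridge the “next” function back into an OwinMiddleware.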

I hope that helps someone else. If there’s interest, I can package this up and place it on NuGet, though it seems a lot of effort for something so small. The source code above is released under the Apache 2 license, in keeping with the use of AppFuncTransition used above from the Katana source.

Bundling and Minification with Web Essentials

A few months ago, the Tachyus web application used a C# + F# web application approach to separate the front-end HTML, CSS, and JavaScript from the back-end F# ASP.NET Web API application. With this configuration, we introduced Web Essentials to bundle and minify our CSS and JavaScript at build time within Visual Studio. To simplify deployments and better group related and shared code, we decided to merge the back end with another Web API application, which used the F# MVC 5 project template. We originally tried using CORS, which worked great in almost every environment. Unfortunately, the one environment in which we ran into trouble was our staging/production environment. Since we are building an internal-only API, we haven’t spent the extra effort to make our APIs evolvable; therefore, we decided to just merge all three projects together into the F# web project. This worked rather well, except for Web Essentials. We abandoned Web Essentials, as well as any form of bundling or minification, at that time.

Fast forward to last week: we again split the front-end application out into a separate, C# project. We did this for several reasons:

  • We ran into trouble trying to remote debug the F# web project
  • The project had grown to the point that it was quite large, and it made sense to separate the two for maintenance
  • It’s common even in other languages to separate the front-end into a separate folder or project as different teams are often responsible for the different apps, which is not quite our case but close
  • We wanted to clean up our Angular code so that we had less Angular spread throughout and more standard JavaScript; for this we wanted to use bundling again

I was able to add Web Essentials back into the solution since we were again using a C# project. However, this had its own challenges, specifically communicating to the rest of the team that they would need to install Web Essentials in order for their updates to take effect.

Fortunately, my colleague Anton Tayanovskyy recently found and pointed me toward the WebEssentialsBundleTask (Pta.Build.WebEssentialsBundleTask), an MSBuild task that runs the Web Essentials transformations at build time depending on the build configuration, i.e. Debug or Release. This tool provides explicit script references for Debug builds and a bundled (and minified, if desired) version for Release builds. It seems to require only the presence of a Web Essentials-style .bundle file to work, so I would expect it to work equally well with an F#-only solution, though I have yet to try that. The WebEssentialsBundleTask has its own issues, though. It will modify your index.html file whenever it runs, so you must make sure to revert changes you don’t want to keep. We rarely change our index.html file since nearly everything is built in the form of Angular directives or templates. Nevertheless, you should consider the cost to your own project.

You may wonder why we didn’t just build a simple FAKE task. After all, whitespace in JavaScript is relatively meaningless, so a very simple concat + remove could probably get the job done, especially since we use semicolons where applicable. I definitely considered this option, as well as creating new FAKE tasks built around node.exe using tools like grunt.js or gulp.js. In the end, these all seemed like overkill with the availability of Web Essentials, at least until we had evaluated whether Web Essentials would work for our purposes. We are still evaluating. What are you using? Did you find this helpful?
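For the curious, that hypothetical concat task really would be tiny. A naive sketch (my own, not a FAKE built-in; no minification, just joining files with a defensive semicolon between them):

```fsharp
open System.IO

// Naive "bundle" step: concatenate the given JavaScript files in order.
// Inserting a semicolon between files guards against a file that forgot
// to terminate its last statement.
let bundle (files: seq<string>) (output: string) =
    let combined =
        files
        |> Seq.map File.ReadAllText
        |> String.concat "\n;\n"
    File.WriteAllText(output, combined)
```

Whether that beats leaning on Web Essentials is exactly the trade-off described above.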

Web API and Dynamic Data Access

In .NET Rocks episode 855, Jeff Fritz commented on ASP.NET Web API being somewhat confusing in terms of its intended use. I don’t tend to agree, but I thought I would address one point he made in particular: that Web API is perhaps just another form of repository.

Web API is much more than a repository. And yes, it is indeed a protocol mapping layer. As Uncle Bob once noted, a web or API front end is just a mapping layer and is not really your application.

In many cases, however, one could argue that a web-api-as-repository is a fairly solid use case. OData is a great example. However, I was thinking of yet another argument I’ve heard for dynamic languages: when you are just going from web to database and back, you are not really working with types.

In that spirit, I set out to write a simple Web API using SQL and JSON with no explicit class definitions. You can see the results in this gist:

I used Dapper to simplify the data access, though I just as well could have used Massive, PetaPoco, or Simple.Data. Mostly I wanted to use SQL, so I went with Dapper.

I also model bind to a JObject, which I immediately cast to dynamic. I use an anonymous object to supply the values for the parameters in the SQL statements, casting the fields from the dynamic object to satisfy Dapper.
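The gist above has the real C# code; the binding step looks roughly like this in F#, with a plain dictionary standing in for the bound JObject (Dapper also accepts a dictionary as its parameter bag, so no class definitions are needed):

```fsharp
open System.Collections.Generic

// Stand-in for the model-bound JObject: field name -> untyped value.
let body : IDictionary<string, obj> =
    dict [ "name", box "Widget"; "price", box 9.99 ]

// Build the parameter bag for the SQL statement, casting each field to the
// type the column expects (the job the dynamic casts do in the C# version).
let parameters : IDictionary<string, obj> =
    dict [
        "Name",  box (string (body.["name"]))
        "Price", box (unbox<float> (body.["price"])) ]

// With Dapper this would feed straight into something like:
// conn.Execute("insert into Products (Name, Price) values (@Name, @Price)", parameters)
```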

All in all, I kinda like this. Everything is tiny, and I can work directly with SQL, which doesn’t bother me one bit. I have a single class to manage my data access and API translation, but the ultimate goal of each method is still small: retrieve data and present it over HTTP. That violates SRP, but I don’t mind in this case. The code above is not very testable, but with an API like this I’d be more inclined to do top level testing anyway. It’s just not deep enough to require a lot of very specific, low-level testing, IMHO.

Also, note again that this is just retrieving data and pushing it up through an API. This is not rocket science. An F# type provider over SQL would give a good enough sanity check. Why bother generating a bunch of types?

Which brings up another point for another post: what would I do if I needed to add some logic to process or transform the data I retrieved?

As a future exercise, I want to see what it would take to cap this with the Web API OData extensions. That could be fun.

Nested Controllers in the Latest Web API Source

I just happened to drop over to the ASP.NET Web Stack and noticed that nested controllers are now allowed in Web API! This is terrific news for those F# developers who, like me, think that grouping related functionality, including controllers, within modules is a convenient practice. Note that this is only in the source. You won’t be able to update your NuGet packages to get this functionality, though it is easy to replace the offending DefaultHttpControllerTypeResolver with this updated copy:

Replace DefaultHttpControllerTypeResolver:

config.Services.Replace(typeof(IHttpControllerTypeResolver), new MyHttpControllerTypeResolver())
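For context on why this matters to F# developers (the PingController below is just an illustration): a type defined inside an F# module compiles to a nested CLR type, which is exactly what the default resolver previously filtered out.

```fsharp
// A controller-like type grouped inside a module, as F# encourages.
// The CLR sees it as the nested type Controllers+PingController.
module Controllers =
    type PingController() =
        member _.Get() = "pong"

// Nested types like this were rejected by the old resolver.
let isNested = typeof<Controllers.PingController>.IsNested
```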

The Origin of RESTful URLs

For at least the past year, I have repeatedly found my appreciation for the literal offended by the term “RESTful URLs.” I recently spent a bit of time trying to explain how this term is oxymoronic on the Web API Forums. While URIs are important as a means of identifying unique resources, REST doesn’t specify any other requirements for URIs. I often note that a RESTful URI could be a GUID. While this is certainly not very meaningful to humans, it satisfies the REST constraints.

While pondering nested resources in Web API yet again, I realized where I think this term originates. Ruby on Rails used the term “RESTful routing” to describe its approach to building controllers along the lines of resources and using a convention to route controllers in a hierarchy based on the controller name. The goal, I think, was to correctly model and mount resources at unique URIs. However, you get what I would call “pretty URLs” for free.

If you use the term “RESTful URLs,” please stop. While RESTful Routing makes sense, RESTful URLs are just nonsense. Do use “RESTful Routing.” Do use “Pretty URLs.” Just don’t confuse your terms. Thanks!