Web APIs vs RPC

I keep finding a lot of developers who like to model and build web apps using HTTP merely as a transport for RPC. I don’t understand this, but I thought I would do my best to provide a framework for understanding why one might choose to use the HTTP protocol rather than ignoring it and layering RPC on top.

I’m not terribly surprised that many developers find it hard to follow HTTP. Current frameworks do little to help with the HTTP protocol. Most server-side frameworks give you some sort of MVC approach that represents only HTTP methods, URIs, and possibly content negotiation. I hesitate to include the concept of HTTP resources, as most of those frameworks prefer to ignore the strict concept and use something conceptually closer to a database table. Those brave few frameworks that do provide additional support for HTTP typically leave the details to the developer. While this may appear to be a good thing, the freedom to define the protocol semantics ignores the fact that the HTTP RFCs provide a well-defined processing model. Why don’t more frameworks provide the tools developers need to leverage the HTTP processing model?

Is it any wonder that naysayers might easily argue that HTTP-based applications look very similar to RPC, perhaps adding additional complexity to hide RPC under a thin veneer of URI templates and HTTP methods?

I imagine it’s a surprise to many people to read the abstract from the 1992 definition of HTTP:

HTTP is a protocol with the lightness and speed necessary for a distributed collaborative hypermedia information system. It is a generic stateless object-oriented protocol, which may be used for many similar tasks such as name servers, and distributed object-oriented systems, by extending the commands, or “methods”, used.

If this were more obvious, don’t you think developers — the majority of whom are in love with OOP — would want to embrace this protocol? The RPC-over-HTTP trend appears to stem from the fact that port 80, HTTP’s default, was typically open in most environments, making it a reliable channel for network traffic, while infrastructure teams were a bit picky about opening up random ports for custom applications.

How might one approach HTTP as a distributed OO protocol? To start, I could imagine a client proxy to a server object. The client object looks and acts like the server object but makes all its calls over the network. Would you want to expose each of the object’s methods as HTTP method extensions? You don’t really need to: the HTTP protocol defines POST as the general-purpose method, and you can include any additional details, such as the object’s method name, in the request, much as you might when using .send in Ruby. For those who favor a more type-safe approach and use a language like F#, you might find a sum type, or discriminated union, useful:

open System

type SampleResourceMethod =
    | Create of AsyncReplyChannel<unit>
    | SetTimestamp of DateTimeOffset * AsyncReplyChannel<unit>

type SampleResource() =
    // All messages funnel through a single agent, serializing state changes.
    let agent = MailboxProcessor<_>.Start(fun inbox ->
        let rec loop () = async {
            let! msg = inbox.Receive()
            match msg with
            | Create ch ->
                // … create the underlying resource …
                ch.Reply()
                return! loop ()
            | SetTimestamp (dt, ch) ->
                // … update the timestamp …
                ch.Reply()
                return! loop () }
        loop ())

    member x.Create() : Async<unit> =
        agent.PostAndAsyncReply Create

    member x.SetTimestamp(dt: DateTimeOffset) : Async<unit> =
        agent.PostAndAsyncReply (fun ch -> SetTimestamp(dt, ch))

    /// Generic Send, similar to Ruby's send method: the caller supplies a
    /// function that builds the message once handed a reply channel.
    member x.Send(build: AsyncReplyChannel<unit> -> SampleResourceMethod) =
        agent.PostAndAsyncReply build
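
To make the client half of this concrete, here is a minimal sketch of what a hand-written proxy might look like under the POST-everything approach. The endpoint URI and the JSON encoding of the method name and arguments are illustrative assumptions, not an established convention; a real tool would generate this for you:

open System
open System.Net.Http
open System.Text

// Hypothetical client proxy: it mirrors SampleResource's members, but every
// call becomes an HTTP POST carrying the method name (and any arguments) in
// the request body, much like dispatching via .send in Ruby.
type SampleResourceProxy(http: HttpClient, uri: Uri) =
    let post (body: string) = async {
        use content = new StringContent(body, Encoding.UTF8, "application/json")
        let! response = http.PostAsync(uri, content) |> Async.AwaitTask
        response.EnsureSuccessStatusCode() |> ignore }

    member x.Create() : Async<unit> =
        post """{"method":"Create"}"""

    member x.SetTimestamp(dt: DateTimeOffset) : Async<unit> =
        post (sprintf """{"method":"SetTimestamp","args":["%s"]}""" (dt.ToString "o"))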

If this is starting to sound like SOAP and WS-*, you are not wrong. The thing is, SOAP was not terribly bad. SOAP and WS-* went wrong in layering additional protocol semantics on top of HTTP, which already has built-in support for many of those additional pieces. However, I’m not voting for a return to SOAP (though there are some aspects to tools like WCF that would be quite nice to have).

Wouldn’t it be nice to be able to declare method characteristics such as idempotency and safety, and to let tooling select the appropriate HTTP method, GET, PUT, or DELETE, instead of defaulting everything to POST? What if you could define a graph of resources that know of one another’s existence (by means of hypermedia) and that could correctly coordinate distributed workflows, with client proxy representations once again driven through hypermedia?
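
As a sketch of how such declarations might drive method selection, assuming invented trait names and mapping rules, tooling could do something like this:

// Hypothetical method characteristics, declared once per operation.
type MethodTraits = { Safe: bool; Idempotent: bool; Deletes: bool }

// Map declared characteristics onto the HTTP method, falling back to POST.
let chooseHttpMethod traits =
    match traits with
    | { Safe = true } -> "GET"         // read-only: safe and cacheable
    | { Deletes = true } -> "DELETE"   // idempotent removal
    | { Idempotent = true } -> "PUT"   // idempotent write, e.g. a full replace
    | _ -> "POST"                      // everything else

// SetTimestamp replaces a value, so repeating it is harmless:
let setTimestampMethod =
    chooseHttpMethod { Safe = false; Idempotent = true; Deletes = false }
// val setTimestampMethod : string = "PUT"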

This is still only scratching the surface of HTTP, of course. Do you really care about status codes? Headers? I bet you don’t. You may claim to care, but you probably only care because most frameworks either get them wrong or ignore them entirely. Consider a tool that would take this distributed OO style and correctly negotiate representations for your client app, take care of conflicts, deal with cache invalidation, and so on. Wouldn’t that be nice? Such tools exist: take a look at webmachine and liberator. I’m working on one now for F#/.NET called Freya with Andrew Cherry. Granted, I don’t think any of those tools tackle the concept of a distributed OO framework, but that should be something we can build on top.
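
For a rough idea of what those tools do, here is a toy decision graph in the spirit of webmachine and liberator. The real libraries walk a far larger graph covering negotiation, conditional requests, and caching; the Resource shape below is invented purely for illustration:

// A resource is a bag of decision functions; a generic engine walks them in
// a fixed order, so application code never hand-picks 404 or 405.
type Resource =
    { AllowedMethods: string list
      Exists: string -> bool            // does the requested path exist?
      Handle: string -> string }        // produce a representation

let run (resource: Resource) (httpMethod: string) (path: string) =
    if not (List.contains httpMethod resource.AllowedMethods) then
        405, "Method Not Allowed"
    elif not (resource.Exists path) then
        404, "Not Found"
    else
        200, resource.Handle path

let sample =
    { AllowedMethods = [ "GET"; "PUT" ]
      Exists = (fun path -> path = "/sample/1")
      Handle = (fun _ -> "representation of /sample/1") }

let ok = run sample "GET" "/sample/1"          // (200, "representation of /sample/1")
let rejected = run sample "DELETE" "/sample/1" // (405, "Method Not Allowed")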

Taking this a bit further, wouldn’t it be nice to be able to generate both client and server boilerplate from a central design document or even a media type definition? Swagger offers something like this. I think we can do more in this direction, and I’m excited to see where we land.
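
As a hypothetical illustration, such a design document could start as nothing more than a list of operation descriptions from which both proxy members and server routes get emitted; every name below is invented for the example:

// A minimal, machine-readable description of the API surface.
type OperationSpec = { Name: string; HttpMethod: string; Path: string }

let sampleResourceSpec =
    [ { Name = "Create"; HttpMethod = "POST"; Path = "/sample" }
      { Name = "SetTimestamp"; HttpMethod = "PUT"; Path = "/sample/{id}/timestamp" } ]

// Emit a (very naive) client stub per operation; real tooling would emit
// typed members like those on the proxy sketched earlier.
let emitClientStub spec =
    sprintf "member x.%s() = http %s \"%s\"" spec.Name spec.HttpMethod spec.Path

sampleResourceSpec |> List.iter (emitClientStub >> printfn "%s")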

What do you think? Is this a lot of nonsense, or should we keep digging back into HTTP and attempting to leverage it more? I’m also interested in your thoughts as to whether other protocols, such as WebSockets and service bus layers, will eventually replace HTTP as our dominant protocol (or at least take over its network port).

Special thanks to Darrel Miller for his review and feedback on this post.

3 thoughts on “Web APIs vs RPC”

  1. With regards to hypermedia, it feels like semantic technologies really realise the benefits of “hypermedia integration” of systems. One big hurdle I see is exactly the one you touch upon: current frameworks really don’t make hypermedia easy, and consequently building the semantic web on top of them isn’t easy either, even though that would be the killer feature driving hypermedia adoption.

    OData, for better or worse, promotes this automagically, in a sort of way, while also offering SOAP-like tooling. That is, it provides a way to add semantic information to data, respects hypermedia principles (mostly), and provides means and automation to link pieces of data together transparently so that developers need not think about it (well, provided one has generated the OData client from the metadata description). It also allows something like an “API explorer” to be generated automatically from the out-of-band API description (which goes somewhat against the idea of hypermedia, but the necessary data could be provided inline too): http://pragmatiqa.com/xodata/.

    That being written, OData, especially prior to v4, isn’t perfect, as evidenced by the high-profile moves away from it by NuGet and Stack Overflow, but it looks like it’s stumbling in the right direction. There has been at least one meetup about the convergence of JSON-LD and OData: http://lists.w3.org/Archives/Public/public-linked-json/2013Oct/0028.html. Further inspiration for working with OData could be had from http://blogs.msdn.com/b/odatateam/archive/2015/01/09/restier-a-turn-key-solution-to-build-odata-services.aspx.
