Web Application Composition

I find it tragic that we have only the ever-growing monolith of HTML5 on which to build web applications. Sure, it includes some good stuff like canvas and SVG, but overall I find it a monstrosity. Whatever happened to XML namespaces? I know, “XML! Run for the hills!” However, namespaces allowed simpler markup formats to be intermixed, and that worked rather well, though it was tedious to write by hand. Ultimately, I think the need to write these things by hand, due to bad tooling, led to the death of XML as a beloved format. Unfortunately, XML namespaces were replaced with a hack the size of Facebook. And the world cheered.

The majority of programmers are completely fine with their hacks upon hacks. I applaud these Herculean efforts. We all want to build something, and the fastest way to do so is by using what’s available. Today, we have HTML5 + CSS + JavaScript. Why do we have those? We have them because the WYSIWYG editors given to us in the past so completely failed to work as promised. Nested HTML table elements for layout, anyone? (I am so sorry for you if you are still dealing with this.) The promise of XML was that machines could read and manipulate it easily (and they can) and that developers could safely ignore it. Of course, as a text-based, readable format, developers could still work out any kinks, should any arise. Unfortunately, we had kinks arising everywhere, like we were living through the Walking Dead of XML documents.

In giving up the various, independent XML formats, such as MathML, SVG, XForms, XHTML, XSLT, etc., we lost a lot of potential in the form of composition. Remember, these are data interchange formats. Yes, even XHTML, contrary to the attempts to make it an application platform. Before, you could compose multiple formats together into one document. (If you would like to see an example, the W3C still has one lying around.) Today, you are at the mercy of the HTML5 committee and the browser makers. We also lost the option to use XSLT rather than CSS for styling, and CSS grew from a roughly layout-oriented spec to include content insertion and animation. These hacks work, but it’s like building a skyscraper with duct tape and baling wire.
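To make the loss concrete, here is a minimal sketch of the kind of compound document XML namespaces made possible. The namespace URIs are the real ones for XHTML, MathML, and SVG; the document itself is illustrative rather than a validated example:

```xml
<html xmlns="http://www.w3.org/1999/xhtml">
  <body>
    <p>
      The area of a circle is
      <math xmlns="http://www.w3.org/1998/Math/MathML">
        <mrow>
          <mi>&#960;</mi>
          <msup><mi>r</mi><mn>2</mn></msup>
        </mrow>
      </math>
    </p>
    <svg xmlns="http://www.w3.org/2000/svg" width="100" height="100">
      <circle cx="50" cy="50" r="40"/>
    </svg>
  </body>
</html>
```

Each vocabulary keeps its own namespace, so a processor that understands only one of them can still do something sensible with the parts it recognizes.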

We are not without hope today. Composition reappeared through various, mostly JavaScript-based technologies, such as Web Components and React. These aren’t a true replacement, but they certainly help make up for what was lost. What’s interesting is that these rely on browsers’ JavaScript engines, which are really fast now, and yet they keep building on HTML5. Why? Why not create something even more basic?

Think about how we might right the wrongs of the past with XML by making the browser more like an app container. What if you could build up one or more formats, be they markup, JavaScript, or something else, and compose them together? What if the basic building blocks were a new DOM that defined only a few simple primitives and some rules for interconnecting formats? (I would like to link to the experiment of re-writing HTML5 as web components, but I cannot find the link anymore.)

I’m sure at this point that many of you have flipped back into thinking of HTML as a view layer. I continue to disagree with this notion, despite the massive additions made by HTML5. In the end, HTML is a data format, and the browser knows how to parse that format and render it visually. Indeed, HTML remains a fantastic data format. Mike Amundsen wrote a book, Building Hypermedia APIs with HTML5 and Node, describing HTML’s adaptability as a data interchange format. I used to be excited about the possibility of sending XHTML+MathML+SVG payloads as data.

Unfortunately, JSON became the standard interchange format. JSON has a lot to like: it’s small, easy to create, easy to evaluate in a JavaScript application, etc. However, it has no semantics for how to merge or include multiple, derived formats. This is compounded by the fact that everyone sends application/json as the media type, so no machine could ever understand what was included. Newer efforts, such as HAL, Siren, UBER, and JSON-LD, also fail to address format composition.

I hope the situation changes. I would like to help. After Darrel Miller’s talk at APIStrat 2015 in Austin, I downloaded the tools necessary to write an RFC to define media type composition. Unfortunately, I am still trying to figure out exactly how that might work. If you kept to a common base format, such as XML or JSON, things would probably work out alright. But what happens if you want to allow someone to mix a YAML document into their JSON? Things get tricky, especially considering JSON has no concept of links, so you can’t even pull the YAML in as an external reference or import. Another likely use of this might be to provide a mechanism for code-on-demand for multiple languages. The possibilities really are limitless, especially if you wrap this back into my desire above to replace HTML in browsers with something much more primitive.

If this strikes a chord with you, please reach out in the comments. I would love to see this move forward, and I really need help. If you know of anyone else already attempting something similar, please let me know, as I would love to help.


Web APIs vs RPC

I keep finding a lot of developers who like to model and build web apps using HTTP as a transport for RPC. I don’t understand this, but I thought I would do my best to provide a framework for understanding why one might choose to use the HTTP protocol rather than ignoring it and layering RPC on top.

I’m not terribly surprised that many developers find it hard to want to follow HTTP. The current frameworks do little to help with the HTTP protocol. Most server-side frameworks give you some sort of MVC approach that only represents HTTP methods, URIs, and possibly content negotiation. I hesitate to include the concept of HTTP resources, as most of those frameworks prefer to ignore the strict concept and use something conceptually closer to a database table. Those brave few frameworks that do provide additional support for HTTP typically leave the details to the developer. While this may appear to be a good thing, freedom to define the protocol semantics ignores the fact that the HTTP RFCs provide a well-defined processing model. Why don’t more frameworks provide the tools developers need to leverage the HTTP processing model?

Is it any wonder that naysayers might easily argue that HTTP-based applications look very similar to RPC, perhaps adding additional complexity to hide RPC under a thin veneer of URI templates and HTTP methods?

I imagine it’s a surprise to many people to read the abstract from the 1992 definition of HTTP:

HTTP is a protocol with the lightness and speed necessary for a distributed collaborative hypermedia information system. It is a generic stateless object-oriented protocol, which may be used for many similar tasks such as name servers, and distributed object-oriented systems, by extending the commands, or “methods”, used.

If this were more obvious, don’t you think developers (the majority of whom are in love with OOP) would want to embrace this protocol? The RPC-over-HTTP trend appears to stem from the fact that HTTP ran on port 80, a port most environments opened by default, while infrastructure teams were a bit picky about opening random ports for custom applications.

How might one approach HTTP as a distributed object-oriented protocol? To start, I could imagine a client proxy to a server object. The client object looks and acts like the server object but makes all its calls over the network. Would you want to expose each of the object’s methods as HTTP method extensions? You don’t really need to do so. The HTTP protocol defines POST for most purposes, and you can include any additional details, such as the object’s method name, in the request body, much as you might when using .send in Ruby. For those who favor a more type-safe approach and use a language like F#, you might find a sum type, or discriminated union, useful.
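Sketched here in TypeScript rather than F# (TypeScript’s discriminated unions are close cousins), with operation names invented for illustration:

```typescript
// Each case names one operation on the remote object; the payload carries
// its arguments. A POST request then delivers the serialized message to
// the server object. The operation names here are made up for illustration.
type ObjectMessage =
  | { kind: "rename"; newName: string }
  | { kind: "deposit"; amount: number }
  | { kind: "close" };

// Serialize a message for the body of a POST request.
function toBody(msg: ObjectMessage): string {
  return JSON.stringify(msg);
}

console.log(toBody({ kind: "deposit", amount: 25 }));
```

The compiler then guarantees a client proxy can only send messages the server object actually understands.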

If this is starting to sound like SOAP and WS-*, you are not wrong. The thing is, SOAP was not terribly bad. Where SOAP and WS-* went wrong was in layering additional protocol semantics on top of HTTP, which already has built-in support for many of those pieces. However, I’m not voting for a return to SOAP (though there are some aspects of tools like WCF that would be quite nice to have).

Wouldn’t it be nice to declare method characteristics such as idempotency and safety and let tooling correctly negotiate that the method should use GET, PUT, or DELETE instead? What if you could define a graph of resources that know of each other’s existence (by means of hypermedia) and could correctly coordinate distributed workflows, with client proxy representations again driven through hypermedia?
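As a sketch of what such tooling might do (in TypeScript, with metadata names invented for illustration): given declared characteristics, a tiny negotiator can pick the standard HTTP method.

```typescript
// Hypothetical per-operation metadata a tool could collect from declarations;
// the field names here are invented for illustration.
interface MethodTraits {
  safe: boolean;        // no observable side effects on the server
  idempotent: boolean;  // repeating the call yields the same result
  deletes?: boolean;    // the call removes the resource
}

// Map declared traits onto a standard HTTP method, falling back to POST
// for anything unsafe and non-idempotent.
function negotiateHttpMethod(t: MethodTraits): string {
  if (t.safe) return "GET";
  if (t.deletes) return "DELETE";
  if (t.idempotent) return "PUT";
  return "POST";
}

console.log(negotiateHttpMethod({ safe: true, idempotent: true }));  // GET
console.log(negotiateHttpMethod({ safe: false, idempotent: true })); // PUT
```

The point is not the trivial mapping but that the developer declares semantics once and the protocol plumbing falls out mechanically.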

This is still only scratching the surface of HTTP, of course. Do you really care about status codes? Headers? I bet you don’t. You may claim to care, but ultimately you probably care because most frameworks you use either get them wrong or ignore them entirely. Consider a tool that would take this distributed OO style and then correctly negotiate representations for your client app, take care of conflicts, deal with cache invalidation, etc. Wouldn’t that be nice? Such tools exist. Take a look at webmachine and liberator. I’m working on one now for F#/.NET called Freya with Andrew Cherry. Granted, I don’t think any of those tools tackle the concept of a distributed OO framework, but that should be something we can build on top.

Taking this a bit further, wouldn’t it be nice to be able to generate both client and server boilerplate off of a central design document or even a media type definition? Swagger offers something like this. I think we can do more in this direction, and I’m excited to see where we land.

What do you think? Is this a lot of nonsense, or should we keep digging back into HTTP and attempting to leverage it more? I’m also interested in your thoughts as to whether other protocols, such as WebSockets and service bus layers, will eventually replace HTTP as our dominant protocol (or at least network port).

Special thanks to Darrel Miller for his review and feedback on this post.

The Real MVC in Web Apps

I recently stumbled across Peter Michaux’s article MVC Architecture for JavaScript Applications. I’ve followed Peter’s writing in the past and was surprised I’d missed that and several others you can find on his site, including the more recent Smalltalk MVC Translated to JavaScript. (While you don’t have to go read them now, I recommend you do so.) Peter focuses his attention on JavaScript applications. I’m going to extend this a bit and tie in a bunch of other things I’ve been reading to show 1) that the current trend appears to be heading back in this direction (albeit under different names) and 2) that MVC need not focus entirely on the client side.

First, though, I should establish that I think the use of “MVC” to describe server-side frameworks is a complete misnomer. (I’ve written about this previously.) The “MVC” claimed by Rails, ASP.NET MVC, and others is based on Model 2. While superficially similar, the reality is that they have very little in common, since “Views” are mere templates, not the component objects originally used in Smalltalk’s MVC (see Peter’s articles above).


About a year ago, I was building SPAs with AngularJS when I came across this post from David Nolen: The Future of JavaScript MVCs. I’d heard and enjoyed David talking about Clojure at LambdaJam earlier in the year, so I paid attention. At that time, I’d not yet heard of React, but I quickly started looking into it. React claimed to offer a paradigm that more closely matched functional programming, an attractive feature for an F# developer such as myself.

At that time, I still hadn’t really thought through exactly why I was building apps the way I was building them. Clients liked the perceived responsiveness in SPAs, and that was sufficient reason, despite some of the complexity involved. Also, AngularJS had, to that point, been a perfect fit for many of the forms-over-data style apps I was writing, and though I was uncertain about mixing binding logic into my markup, I’d started accepting it as the way forward.

When Facebook announced Flux, I was a bit more excited, as I liked the single-direction flow, though I thought some of the pieces in their pattern were a bit much. Around that time, I also found mithril, which looked like a nice mix of AngularJS and React with potentially better performance and a smaller footprint. Nevertheless, I stayed the course with AngularJS, as I needed a really good reason to switch frameworks.

A few months later, we picked up Kendo UI in order to use some of its components in our app. Combined with some limitations I ran into with Angular, as well as a learning curve that kept other team members from quickly contributing to our front-end app, that prompted a decision to start moving away from Angular. We have not made a furious effort to do so, and we have not selected a new framework. We may not.

As I was investigating potential target frameworks, I stumbled across the Futurice article Reactive MVC and the Virtual DOM and shortly thereafter was told about Reflux. While almost entirely the same thing, the former espoused that it was roughly equivalent to MVC, while the latter claimed to “eschew” MVC. Which was true?

I then stumbled upon Peter’s articles, as noted above. Indeed, both Reflux and the Futurice MVI approach are rough equivalents of the classic Smalltalk MVC, with the additional constraint that each component communicates in only one direction. In the Futurice case, MVI also makes all pieces observable, not just the Model, and simplifies the Controller to, more or less, a mapper from UI events to domain events.

Model-View-Controller and OOP

Hold on a second. Isn’t MVC an OOP pattern? Wasn’t React + Flux supposed to offer a functional paradigm? Isn’t Om, David Nolen’s ClojureScript wrapper of React, supposed to show that even more?

Well, here we have to define OOP. Smash Company recently posted an article about OOP that tries to deconstruct the definition and shows that OOP is a term that can mean many different things. I’m not trying to be pedantic, but I also know we will not be able to move forward without a clear definition. For the purposes of this post, we will adopt Alan Kay’s definition:

OOP to me means only messaging, local retention and protection and hiding of state-process, and extreme late-binding of all things. It can be done in Smalltalk and in LISP.

If we understand OOP as a pattern centered on components communicating via message passing and accept that MVC is an OOP pattern, then indeed, Reflux and MVI are both really just slight variations of MVC. In particular, the use of RxJS in the MVI approach makes this most clear.
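Stripped of any particular library, the unidirectional shape is easy to see in plain code. Here is a minimal sketch (in TypeScript, with a toy Channel standing in for an RxJS observable and invented event names), where each piece only pushes messages forward to the next:

```typescript
// A tiny observable: subscribers receive every published value.
class Channel<T> {
  private subs: Array<(v: T) => void> = [];
  subscribe(fn: (v: T) => void): void { this.subs.push(fn); }
  publish(v: T): void { this.subs.forEach(fn => fn(v)); }
}

const intents = new Channel<{ type: string }>(); // Controller -> Model
const state = new Channel<{ count: number }>();  // Model -> View

// Model: owns its state, reacts to intents, publishes revised state.
let count = 0;
intents.subscribe(i => {
  if (i.type === "increment") state.publish({ count: ++count });
});

// View: renders whatever state it is handed (here, into an array).
const rendered: string[] = [];
state.subscribe(s => rendered.push(`count: ${s.count}`));

// Controller/Intent: translate a UI event into a domain event.
intents.publish({ type: "increment" });
console.log(rendered); // ["count: 1"]
```

No piece ever reaches backward: the View cannot touch the Model directly, and the Model never knows who renders it.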

Cool. So roughly 30 years after MVC was a good idea for building UIs, we find we agree. This reminds me of Bret Victor’s “Future of Programming” talk.

Is that it, then? Pick a “real” MVC framework or set of libraries and rewrite everything? If you want a “real” MVC framework, do you need to pick one of the above? What about Peter’s Maria, which resulted from his work on trying to uncover the “real” MVC in JavaScript?

First of all, you are asking two different things. AngularJS and Ember both offer models that can fit this approach okay, and both can work with React (last I checked, though I’ve never used Ember and am not certain as to this claim). Alas, it’s just not that simple.


I neglected to mention that while looking through all these options, I ran into a little problem: most approaches assert that some specific component library is the “right” one to use. In particular, the virtual DOM approaches appear to be popular. However, where does that leave Web Components, component toolkits like Kendo UI, or DOM manipulation libraries like D3.js? How can you mix these into your virtual DOM library? For that matter, is a virtual DOM really necessary? I discovered Twitter’s Flight after beginning to prototype a similar idea and then running across a link to it in an article about CSP in JavaScript.

Right. Well, that certainly complicates things. Must you choose? What if you find some components in various libraries that would really work well for you? Must you really give up the ones not in your selected library? While you might appreciate using a consistent library or approach, I would argue you do not have to give up this flexibility.

OOP Revisited

So many people think OOP is directly at odds with FP. This is simply false. F#’s MailboxProcessor is an example of an agent, or actor-like, construct that maintains its own internal state and responds to messages. It may send its own messages or emit events to allow others to respond to its own processing. Doesn’t that sound familiar? If not, scroll back up to Alan Kay’s description of OOP.

In short, OOP is a strategy for designing in the large. This works exceedingly well with FP in the middle / small. Further, this sort of encapsulation means that you can create View abstractions around any and all of the above component libraries without worrying about them working directly with one another. Want Kendo UI, React, and Flight components to render side by side? No problem. You merely need a common abstraction or protocol through which they can communicate back to a common dispatcher and model. RxJS, as used in the MVI example above, is one library that may prove useful. You can also use Maria or a handful of other libraries.
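To make that concrete, here is a minimal sketch of such a protocol in TypeScript. The adapter and dispatcher names are my own invention, and the “widgets” are stand-ins for real library components:

```typescript
// The one protocol every wrapped component must speak.
interface ViewAdapter {
  render(model: unknown): void;                      // state flows in
  onAction(handler: (action: string) => void): void; // actions flow out
}

// A dispatcher that fans model updates out to every registered view and
// funnels their actions back into one place.
class Dispatcher {
  private views: ViewAdapter[] = [];
  readonly actions: string[] = [];
  register(view: ViewAdapter): void {
    this.views.push(view);
    view.onAction(a => this.actions.push(a));
  }
  update(model: unknown): void {
    this.views.forEach(v => v.render(model));
  }
}

// A stand-in "widget" for brevity; a real adapter would wrap a Kendo UI,
// React, or Flight component and call its handler from the library's own
// DOM events.
function widgetAdapter(label: string, out: string[]): ViewAdapter {
  return {
    render: m => out.push(`${label}: ${JSON.stringify(m)}`),
    onAction: () => { /* a real adapter would store and invoke the handler */ },
  };
}

const out: string[] = [];
const dispatcher = new Dispatcher();
dispatcher.register(widgetAdapter("kendo-ish", out));
dispatcher.register(widgetAdapter("react-ish", out));
dispatcher.update({ count: 1 });
console.log(out);
```

The libraries never see each other; each adapter’s private job is translating between its library and the shared protocol.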

Fortunately, I’m not the only one thinking about things this way, at least in the sense of framework lock-in versus libraries. Andy Walpole just posted his thoughts in 2015: The End of the Monolithic JavaScript Framework. He does a great job showing the rise and fall in appreciation for monolithic frameworks. Chris Love has long been advocating for micro-libraries over frameworks. These don’t necessarily progress in the direction of MVC, and I don’t think I’ve yet covered why MVC could really be beneficial….

Distributed, Reactive MVC

If I recall correctly, Peter notes that the model is not the thing retrieved from a server request. I agree with the literal meaning of this statement, though I think he probably meant a bit more. (Apologies if I interpreted your meaning incorrectly, Peter.) Until I started writing this article, I was in full agreement with Peter’s assertion. You see, in classic, Smalltalk MVC, you had only a client. Actually, that’s not quite right. You had a world, an environment. The whole thing is available to you interactively. In such a programming environment, anything can be a Model, and if you read Peter’s article linked above, you would have caught that this was baked into Smalltalk intentionally. It’s reasonable to conclude then that the Model was the encapsulation of the domain.

We’re not talking about Smalltalk anymore, but the world of browser-based applications. Very little data is available within a browser-based application unless all the data is passed along with the markup. I’ve built apps that way in the past, and I bet you have done so, as well. However, we have learned new tricks that allow us to lazily retrieve data when and as necessary using tools like XMLHttpRequest. We have introduced distribution.


If you watched Bret Victor’s video, you may remember seeing mention of the Actor model. When you think of the Actor model, what’s the first programming language you think of? I hope you said Erlang, mostly because of its history. I cannot remember where I first heard the story, but Joe Armstrong said that they created Erlang based on the ideas of Smalltalk while waiting on their first Smalltalk machine to arrive. In the end, they liked what they had built in the interim and never used the Smalltalk machine. (If anyone can find a reference to this, I’ll add it here.)

Here we can make a connection between Erlang and Smalltalk, between the Actor model and OOP. We can therefore confidently assert that OOP can involve distribution in its model (again following the definition from Kay above). Since MVC is an OOP pattern, we can further assert that MVC allows for distribution.

This raises a new question: even if all the players in MVC can be distributed, which should be distributed? This is a tough one. Let’s think together about how we build UIs. Using the desktop metaphor I raised in my previous post, we see that we could place multiple Views side-by-side within a single window. Following the Windows Store example, we could make the case for each View running in its own window. Also, depending on your definition of View, you may define something as small as a TextBox as a View, as it certainly qualifies as a component. You may therefore have any number of combinations. In most scenarios, though, I think we can safely assume that these will always exist on the client.

Let’s move on to Controllers. We could probably put these anywhere, but if we want to stick with a minimal definition like that used in the MVI pattern described in the Futurice article, I would argue that Controllers would best live close to their observed Views. You can certainly publish raw UI events across a network connection to be translated elsewhere, but this will mean you are primarily using RPC and may have to pass a lot of UI related context along with the event. I do like the idea of Controller as Intent, and I don’t like RPC much (more on this in just a bit), so I would argue the Controller / Intent also fits best in the client side. (Note: if you are building networked, client-side apps over a transport protocol, my argument falls flat, and I think you have more options as to where you might run this piece of the pattern.)

So far, I’ve stuck with my statement above and followed along quite nicely (I think) with Peter’s argument about MVC being a client-side pattern. However, we now need to discuss Models. Following the MVI terminology, Models receive Intents, do some work, and publish their revised state. I think we need some examples to understand distribution of Models.

Most single-player games can keep Model state quite close to its respective View and Controller. The state is most likely coordinated with a more global app state, as well (similar to the $scope hierarchy in Angular, actually). However, once you start getting into distributed, multi-player games, you begin to see patterns like Flux (compared with Doom 3 in the linked video).

[Figure: the React Flux architecture]

What I find interesting is that in the case of Flux, the Intent (or Action in Flux / Reflux terms) communicates with the distributed part of the program. The Model (or Store in Flux / Reflux) remains a purely client-side concern. I won’t say this is wrong; I just wonder if this is really the right idea.

A few months ago, I was trying to visually conceptualize a similar idea. My original drawing had different terms related to my company’s product, but the general idea reduces nicely to a distributed, reactive form of the MVI variant of MVC.

I think either of the above is a relatively good approach. I’m partial to my drawing, but I certainly don’t think either is wrong. Nevertheless, I would like to dive a bit deeper into the Model and see where that leads.

In Search of the Model

What is the Model, really? I don’t know that I’ve ever seen a really clear explanation. The early MVC Models were simple, UI related concepts, but we seem to have grown the complexity of our apps far beyond that level of simplicity. The DDD and CQRS architectures provide some interesting ideas for defining a Model. Do these correlate with the Model in MVC? I don’t know for sure. In any case, you won’t typically represent either of these directly over HTTP.

Let’s constrain this back to the topic of web-based applications. Web applications run over HTTP by definition. Is anyone familiar with the HTTP 0.9 definition? What magic words do you happen to see there?

HTTP is a protocol with the lightness and speed necessary for a distributed collaborative hypermedia information system. It is a generic stateless object-oriented protocol, which may be used for many similar tasks such as name servers, and distributed object-oriented systems, by extending the commands, or “methods”, used.

Well, how about that? Maybe we are onto something! The only problem is that, in spite of the numerous clones of simple web API libraries, HTTP is quite a complex beast. Have you seen the state machine model?

[Figure: for-GET’s HTTP state machine diagram]

Sure, you can avoid this diagram and roll your own way, but you lose a lot of the benefits of a standard protocol along the way, including a lot of the options available to understand what to do if something changes unexpectedly.

Most importantly, though, the diagram emphasizes the importance of leveraging HTTP resources as the Model for your application. Fortunately, tools like webmachine and, for .NET, Freya exist to ease the challenges of trying to build the model yourself for each application. (I’ll be posting more on Freya later, but if you are interested in a sample, you can find the TodoBackend implementation on GitHub.) Even better, webmachine correctly models REST, putting to … rest (excuse the pun) all the arguments and letting you focus on following the protocol. If you want to learn more, I recommend Sean Cribbs’ keynote from RestFest.

As I noted above, I plan to delve deeper into this specific concept further in the near future, so I’ll leave it for now.


I set out to show that the “real” MVC is in fact a pattern to which a lot of libraries and people are converging, whether they realize it or not. I think I’ve shown that, as well as that by adopting a more general pattern approach, you can leverage many of these component libraries’ styles in the same application, side by side. I’m curious to know your thoughts and whether anyone else has tried or is trying something similar. What have you found? Does such an approach work well?

I also set out to show that MVC can include an aspect of distribution. I think I highlighted this, as well, primarily by showing some of the original design goals of HTTP and the aspects of OOP, and therefore MVC, that enable distribution. I don’t think I have a firm handle on exactly how to make this work, but I’m actively pursuing several aspects of distribution, both in terms of Model proxies in the client and in building a webmachine port for F# in Freya. What do you think? Is the Model proxy a good idea? Again, has anyone tried this approach and found it successful? Do you prefer the approach taken by Flux instead?

Please join the conversation! I’m very curious to know others’ opinions, including yours!

To SPA, or not to SPA?

The hot topic in JavaScript right now typically involves which SPA framework one should use. These discussions tend to focus on one of the following:


Size

Do you really need everything a framework provides? Frameworks typically attempt to provide everything someone may want, which means you are probably loading more JS than you need or want to use.

Maintenance

You have to maintain your own code, and frameworks can often reduce the code you maintain. However, in some cases you may find you are maintaining the framework code instead, either while chasing a bug or once the maintainers of the framework have gone on to greener pastures.

Performance

This can go either way. Frameworks often have developers who love to optimize, so you can gain in terms of runtime performance. However, you are also sacrificing modularity and giving in to a potentially large, initial file download when the app is refreshed or started.

Security

In many cases, your application’s security depends on the solidity of the framework on which you build. If you don’t use a framework, you must deal with security aspects yourself. Of course, many security issues may be addressed by small libraries that focus on that specific problem. This is never an easy problem, so no matter which direction you take, you need to plan for and build security mechanisms.

When you need a SPA

The best use of a SPA is for heavy interaction within a single concern. A good example is the Windows Store model, where you may find multiple applications that work well together, but each app focuses on a narrow set of tasks. Good examples are GMail, Asana, Outlook.com, etc. These apps each do one thing, so a SPA makes sense there.

One point in particular stands out as odd to me. Why do we want to leverage browser history as a model for marking favorites or driving UI navigation? If you are doing that, you are merely moving things like rendering to the client side and away from the server. I don’t suppose there’s anything wrong with that, but is it really worth it? (Note: I think it’s fine to provide a unique app state link, but that should likely look like a SHA1 hash or GUID, not a Cool URI.)

Recall that desktop applications display multiple screens within a single window. I think that should be the goal of a SPA. In that case, bookmarking, layouts, etc. have little to do with the browser URL. If you want those, you should probably build in an app-specific mechanism for bookmarking or layout state; the browser provides poor substitutes.

When you don’t need a SPA

Am I suggesting that most apps should return to a state of no JavaScript? Certainly not! If you look at tools like Atlassian’s JIRA, you will find that most navigation uses a page refresh. The only static page element is the top nav bar, so you hardly notice any flicker, yet you still get a rich, interactive experience. The complexity of SPA routing, however, is removed entirely. Were you to take such an approach, you could also more easily adopt progressive enhancement, as the app does not depend on JavaScript for its very core.

When you don’t want the complexity but don’t want to refresh static elements

You may also go a long way using a tool like pjax, which provides a means of asynchronously retrieving and replacing content using server-generated markup. This is actually quite a good use of HTTP’s hypermedia mechanism.
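The essence of that technique is small enough to sketch. In this TypeScript sketch the loader and container are passed in as parameters so the core stays portable; in a browser you would supply a fetch-based loader, a real element, and a history.pushState wrapper:

```typescript
// The container and loader are parameters so the core logic stays portable;
// in the browser you would pass a real element and a fetch-based loader.
interface ContentContainer { innerHTML: string; }

async function pjaxSwap(
  url: string,
  container: ContentContainer,
  load: (url: string) => Promise<string>, // returns server-generated markup
  pushState: (url: string) => void,       // e.g. a history.pushState wrapper
): Promise<void> {
  const html = await load(url); // only a fragment, not a full page
  container.innerHTML = html;   // replace the content region; chrome stays put
  pushState(url);               // keep the address bar honest
}

// Browser wiring would look roughly like:
//   pjaxSwap(href,
//            document.querySelector("#content")!,
//            u => fetch(u).then(r => r.text()),
//            u => history.pushState({}, "", u));
```

The server keeps rendering the markup; the client merely decides where to put it, which is why this plays so nicely with hypermedia.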


What do you think? Have we taken the use of SPAs too far? Should we scale back? Do you wish you had considered your options more on your current or previous applications? What will you use in future applications?

Working with Non-Compliant OWIN Middleware

Microsoft.Owin, a.k.a. Katana, provides a number of very useful abstractions for working with OWIN. These types are optional but greatly ease the construction of applications and middleware, especially security middleware. The Katana team did a great job of ensuring their implementations work well with Microsoft.Owin as well as the standard OWIN MidFunc described below. However, not all third-party middleware implementations followed the conventions set forth in the Katana team’s implementations. To be fair, OWIN only standardized the middleware signature earlier this year, and the spec is still a work in progress.

At some point in the last year, I discovered that Web API’s HttpMessageHandlerAdapter was not, in fact, OWIN compliant. I learned from the Web API team that they would not be able to fix the problem without introducing breaking changes, so I came up with an adapter that should work with most implementations based on OwinMiddleware that don’t expose the OWIN MidFunc.

The solution is rather simple, and the Katana source code provides one of the necessary pieces in its AppFuncTransition class. The adapter wraps the OwinMiddleware-based implementation and exposes the OWIN MidFunc signature. After creating the adapter specifically for HttpMessageHandlerAdapter, I created a generic version I think should work for any OwinMiddleware. (Examples in both C# and F#.)
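For readers who just want the shape of the trick: in OWIN terms, AppFunc is Func&lt;IDictionary&lt;string, object&gt;, Task&gt; and MidFunc is Func&lt;AppFunc, AppFunc&gt;, and the adapter turns a class exposing an Invoke method into that nested function shape. Here is an illustrative transliteration in TypeScript (the real adapter is, of course, C#/F# against Katana’s types, and the StampMiddleware example is invented):

```typescript
// OWIN-style signatures, transliterated for illustration.
type Env = Map<string, unknown>;
type AppFunc = (env: Env) => Promise<void>;
type MidFunc = (next: AppFunc) => AppFunc;

// Stand-in for an OwinMiddleware-style base class: it holds "next" as a
// field and exposes an Invoke method rather than the plain function shape.
abstract class ClassMiddleware {
  constructor(protected next: AppFunc) {}
  abstract invoke(env: Env): Promise<void>;
}

// The adapter: given a class-based middleware constructor, produce a MidFunc.
// In Katana, AppFuncTransition plays the role of bridging "next" back into
// the class-based world.
function adapt(ctor: new (next: AppFunc) => ClassMiddleware): MidFunc {
  return next => {
    const instance = new ctor(next);
    return env => instance.invoke(env);
  };
}

// Example class-based middleware: stamps the environment, then calls next.
class StampMiddleware extends ClassMiddleware {
  async invoke(env: Env) {
    env.set("stamped", true);
    await this.next(env);
  }
}

const mid: MidFunc = adapt(StampMiddleware);
```

Once wrapped this way, the class-based middleware composes with any other MidFunc in the pipeline.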

I hope that helps someone else. If there’s interest, I can package this up and publish it on NuGet, though that seems like a lot of effort for something so small. The source code above is released under the Apache 2 license, in keeping with the license of the AppFuncTransition class from the Katana source.

Texas UIL Computer Science Competition Should Use JavaScript

As a former competitor in Texas’ UIL Computer Science competition, I thought I would check in and see how it had evolved. I have a desire to, at some point, try to spend some volunteer time at a local high school working with Computer Science students, and I thought this might give me a view into what I might expect to find.

I admit I was quite disappointed to find Java the introductory language of choice. This is, after all, the state where Dijkstra presided, in Austin no less. How could Texas have gone so wrong? I knew Texas had adopted C++ (replacing Pascal) right after I graduated. I had no idea it had since switched to Java.

What’s wrong with Java? Well, let’s start with complexity. Java is supposedly an OOP language. Alan Kay’s definition of OOP, built on message passing between independent atoms, makes sense. It’s the same argument Joe Armstrong uses to describe the Actor model used in Erlang, itself based on physics. Java and its kin, C++ and C#, follow a very different form I tend to call “class-oriented.” Class-oriented programming uses things like inheritance and design patterns to solve problems that class-oriented programming itself creates. An outside observer can see this clearly in the sheer number of books published on the topic, especially once one takes the time to investigate the contents and discovers that many contradict one another.

For years, Rice and MIT taught their first-year computer science students LISP. They’ve now, sadly, departed from that tradition. However, I strongly believe teaching at such a raw, powerful level would provide both immediate job candidacy and a firmer grip on the actual mechanics of computing, rather than an understanding of OOP, which is much less useful than advertised. Those who understand LISP tend to understand algorithms and abstractions better and have a much deeper appreciation for what languages do once they’ve been parsed. Isn’t this what our education system should be encouraging?

Okay, let’s say LISP is out. What about Python? It’s quite popular and has supplanted LISP at both Rice and MIT (I think). I would actually suggest JavaScript myself. Sure, it has warts, but 1) it’s the most ubiquitous language in the world, 2) it can be written and run in any browser, 3) it provides immediate feedback, and 4) it would offer every student the possibility of a part- or even full-time job during the summer months or upon graduation.

If the selection of Java is related to industry use, which I can only suppose is the case, then JavaScript should be the top choice. It has many interesting facets and can be used to teach most, if not all, styles of programming. It even has roots in Scheme and Self, the former a LISP, the latter an OOP language.
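That multi-paradigm range fits in a few lines, which is part of the pedagogical appeal:

```javascript
// Three styles of programming, each in plain JavaScript.

// Functional style (its Scheme heritage): closures and higher-order functions.
const counter = start => () => start++;
const next = counter(1);

// Prototypal style (its Self heritage): objects delegating to other objects,
// with no classes required.
const animal = { speak() { return this.sound; } };
const dog = Object.create(animal);
dog.sound = 'woof';

// Imperative style works too, of course.
let total = 0;
for (const n of [1, 2, 3]) total += n;
```

A student can type any of these into a browser console and see the result immediately, with no compiler, IDE, or project scaffolding in the way.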

Please, at least consider it.

P.S. If a dynamic language, or JavaScript specifically, doesn’t satisfy, what about a typed, multi-paradigm language like F# or OCaml?

Bundling and Minification with Web Essentials

A few months ago, the Tachyus web application used a C# + F# approach to separate the front-end HTML, CSS, and JavaScript from the back-end F# ASP.NET Web API application. With this configuration, we introduced Web Essentials to bundle and minify our CSS and JavaScript at build time within Visual Studio. To simplify deployments and better group related and shared code, we decided to merge the back end with another Web API application, which used the F# MVC 5 project template. We originally tried using CORS, which worked great in almost every environment; unfortunately, the one environment that gave us trouble was staging/production. Since we are building an internal-only API, we haven’t spent the extra effort to make our APIs evolvable, so we decided to merge all three projects into the F# web project. This worked rather well, except for Web Essentials, which we abandoned at that time, along with any form of bundling or minification.

Fast forward to last week: we again split the front-end application out into a separate, C# project. We did this for several reasons:

  • We ran into trouble trying to remote debug the F# web project
  • The project had grown to the point that it was quite large, and it made sense to separate the two for maintenance
  • It’s common in other ecosystems to separate the front end into its own folder or project, since different teams are often responsible for the different apps; that’s not quite our case, but close
  • We wanted to clean up our Angular code so that we had less Angular spread throughout and more standard JavaScript; for this we wanted to use bundling again

I was able to add Web Essentials back into the solution since we were again using a C# project. However, this had its own challenges, chiefly the need to communicate to the rest of the team that they would have to install Web Essentials for their changes to take effect.

Fortunately, my colleague Anton Tayanovskyy recently found and pointed me toward the WebEssentialsBundleTask, an MSBuild task that runs the Web Essentials transformations at build time, varying its output by build configuration, i.e. Debug or Release. This tool emits explicit script references for Debug builds and a bundled (and minified, if desired) version for Release builds. It seems to require only the presence of a Web Essentials-style .bundle file, so I would expect it to work equally well in an F#-only solution, though I have yet to try that. The WebEssentialsBundleTask has its own issues, though: it modifies your index.html file whenever it runs, so you must take care to revert changes you don’t want to keep. We rarely change our index.html file, since nearly everything is built in the form of Angular directives or templates. Nevertheless, you should consider the cost to your own project.

You may wonder why we didn’t just build a simple FAKE task. After all, whitespace in JavaScript is relatively meaningless, so a very simple concat-and-strip could probably get the job done, especially since we use semicolons where applicable. I definitely considered that option, as well as creating new FAKE tasks built around node.exe and tools like grunt.js or gulp.js. In the end, these all seemed like overkill given the availability of Web Essentials, at least until we had evaluated whether it would work for our purposes. We are still evaluating. What are you using? Did you find this helpful?
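For a sense of scale, the concat-and-strip idea amounts to something like the toy below (a sketch, not what we shipped; a real minifier handles strings, regex literals, and ASI edge cases that this deliberately ignores):

```javascript
// Toy bundler: concatenate sources, stripping comments and blank lines.
// Real tools (UglifyJS et al.) parse the code; this just manipulates text,
// which is only safe because we end statements with semicolons.
function bundle(sources) {
  return sources
    .map(src => src
      .replace(/\/\*[\s\S]*?\*\//g, '')  // strip block comments
      .replace(/^\s*\/\/.*$/gm, ''))     // strip whole-line comments
    .map(src => src
      .split('\n')
      .map(line => line.trim())
      .filter(line => line.length > 0)
      .join('\n'))
    .join(';\n'); // semicolon guard between files
}
```

Ten lines gets you most of the size win on well-formed input, which is exactly why rolling our own kept tempting us, and exactly why the edge cases made an existing tool the safer bet.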