A New Web for .NET: FubuMVC and Chad’s Response to AR Considered Harmful

I caught a few interesting posts this week that reminded me of an unfinished series on .NET web frameworks I started in 2009. The series, as initially planned, ties in very well with a recent post from Chad Myers, who responded to another post declaring ActiveRecord (and Rails) Considered Harmful.

Before I go any further, I want to make very clear that I’m a fan of Rails and think that it is an incredibly useful framework for building certain applications. I think the same is true of ASP.NET MVC. I’ve used both and still do. However, I don’t think that makes either the best approach for tackling HTTP.

Chad’s post was very good, and I agree with almost all of it. He hit several very important points, the biggest of which was his emphatic quote:

INHERITANCE IS FAIL FOR MOST DESIGNS, BUT NEVER MORE SO THAN IN WEB/MVC FRAMEWORKS!

Then from his list of criteria for a “perfect” web framework:

  • Completely compositional: 100% agree
  • [Resource]-based: I take a slightly different direction here. We talk about “models”, but if you are really thinking about HTTP, you should be thinking in terms of resources. They are similar in many ways, notably in the sorts of “behaviors” (in FubuMVC terms) that should be composed with the “model.” Honestly, it’s the “model” term itself that seems totally overloaded or at least at the wrong layer when talking about a framework. If you have a model, you’re probably really dealing with something more like DDD than the HTTP application layer.
  • Not have any notion of controllers: Agree. In MVC-style frameworks, they are sort of necessary, but if you break away from MVC, you’ll find there are better alternatives.
  • Action-based: I’m all about using functions, or delegates, as 'Request -> 'Response handlers. I also agree that these should generally be kept simple, relying on compositional helpers, middlewares, or whatever you want to call them to perform model-binding and response generation. Granted, this is slightly different than what Chad is talking about, and I’ll cover my thoughts on this in more detail below.
  • Minimized code-in-view: templates are helpful, but the ability to add a lot of logic is just asking for trouble. I really like the tags-as-code approach taken by WebSharper and Wing Beats.
  • Independent of other frameworks: Definitely limit other dependencies and play nice with others.
  • Testing as a first-class citizen: At least make it easy to write real unit tests against whatever the consumer builds. Generally, the only reason you can’t do this is hosting, so make sure the hosting model is not required to test the application.

Okay, so I obviously skipped some items. I don’t think the other items are bad; I just don’t think they are necessary for a “perfect” web framework.

  • Total adherence to principles: This sounds really good. I don’t disagree, really, but exactly which principles? Principles sometimes change. Some make sense in one approach to software development and are non-issues in others. Look at the description of SOLID and its relevance to F#; many of those principles turn out to be non-issues.
  • Statically typed: I don’t really see this as a requirement. HTTP is inherently dynamic. You could build a useful, fairly complex HTTP application using only dictionaries. Reliance upon type serialization is misguided. How do you generate accurate Atom or HTML from a static type? You don’t. You need a template or some other custom output. At the same time, I have grown to like static type-checking, especially in F#, and F# 3.0’s Type Providers prove you can do a lot, even type-check regular expressions. I just don’t see this as a strict requirement.
  • Based on IoC: I love IoC in C# applications. I can’t imagine life without it. However, I’ve found that I don’t even think about IoC when writing F# applications. It’s just not a concern. Higher-order functions, computation expressions, and other functional-style approaches generally render IoC a non-issue. So again, I don’t really see this as a requirement but a case of “use the right tool for the job.”
  • Mostly model-based conventional routing: This is a totally fine solution. I take issue with having any single prescription for routing. There are so many ways to handle this, and in the end it doesn’t matter. If you stick to a resource-oriented approach, which perhaps is the goal Chad is getting at, then your routing should be pretty straightforward and simple. One thing many routers miss is the connection between the different methods assigned to a single URI. So far as I know, OpenRasta and Web API are the only frameworks that return the appropriate HTTP status codes for missing methods on a URI or an unmatched Accept header, among other things. FubuMVC may allow this through a convention.
  • One Model In, One Model Out: When using MVC, I generally like this approach. However, I again don’t see this as a requirement. I’m not convinced a model is always necessary. Again, HTTP is quite dynamic, and it’s sometimes easier to work directly with a request or pull out several different “objects” from a request to help generate a response. Though perhaps 'Request and 'Response are models and the point is moot. 🙂

Now for the biggest divergence: I generally prefer to be explicit rather than lean on conventions. In C#, I typically go for conventions because of the amount of noise required by the C# syntax. With F#, I’m able to be explicit without suffering syntax noise overload. The same is true for many dynamic languages like Ruby and Python. So I disagree with both the Conventional top-to-bottom and Zero touch points. Again, I’m not fully opposed, but I definitely don’t see these as requirements at the framework level. Sure, include it as an option, especially if, as in FubuMVC, you can redefine the conventions to suit your needs. This could just mean use the right tool for the job, or it could mean you should take a simpler approach and build your own conventions as wrapper functions.
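
To make the explicit style concrete, here is a minimal, hypothetical F# sketch; the types, the routes list, and the handler names are all illustrative rather than any particular framework’s API:

// A hypothetical sketch of explicit wiring: handlers are plain functions from
// a request to a response, and routes are declared directly as data rather
// than discovered by convention. All names here are illustrative.
type Request  = { Path : string }
type Response = { Status : int; Body : string }

let getCustomer (req: Request) : Response =
    { Status = 200; Body = sprintf "customer at %s" req.Path }

let notFound (_: Request) : Response =
    { Status = 404; Body = "Not Found" }

// An explicit route table: no reflection, no conventions, just data and functions.
let routes : (string * (Request -> Response)) list =
    [ "/customers", getCustomer ]

let app (req: Request) : Response =
    match routes |> List.tryFind (fun (path, _) -> path = req.Path) with
    | Some (_, handler) -> handler req
    | None -> notFound req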

If you think this sounds like a prescription to write a lot more code, I’ll show a comparison:

Two extra lines (in this instance), and with a more complex example you likely wouldn’t see much difference in line count. I doubt this is true with C#, so don’t take this as a statement that this is the “one, true way.” I like conventions when using C#, just as I like IoC in C#.

In any case, definitely check out FubuMVC as an alternative to MVC. There’s a lot to like, and the FubuMVC team has done an incredible job building a terrific platform. I’ll try to highlight a few others in the upcoming weeks, but while you’re looking at Fubu, also check out OpenRasta and ServiceStack.

Iteratee in F# – Part 1

I’ve been playing with Iteratees lately in my work with Dave on fracture-io.

The Iteratee module used in this post is part of the FSharpx library and provides a set of types and functions for building compositional, input processing components.

A fold, by any other name

A common scenario for I/O processing is parsing an HTTP request message. Granted, most will rely on ASP.NET, HttpListener, or WCF to do this for them. However, HTTP request parsing has a lot of interesting elements that make it useful for demonstrating where other techniques use resources inefficiently. For our running sample, we’ll focus on parsing the headers of the following HTTP request message:

let httpRequest : byte [] = @"GET /some/uri HTTP/1.1
Accept:text/html,application/xhtml+xml,application/xml
Accept-Charset:ISO-8859-1,utf-8;q=0.7,*;q=0.3
Accept-Encoding:gzip,deflate,sdch
Accept-Language:en-US,en;q=0.8
Cache-Control:max-age=0
Connection:keep-alive
Host:stackoverflow.com
If-Modified-Since:Sun, 25 Sep 2011 20:55:29 GMT
Referer:http://www.bing.com/search?setmkt=en-US&q=don't+use+IEnumerable%3Cbyte%3E
User-Agent:Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.4 (KHTML, like Gecko) Chrome/16.0.889.0 Safari/535.4

<!DOCTYPE HTML PUBLIC ""-//W3C//DTD HTML 4.01//EN"" ""http://www.w3.org/TR/html4/strict.dtd"">
<html>
<head>
...
</head>
<body>
...
</body>
</html>"B

When parsing an HTTP request message, we most likely want to return not just a list of strings but something more descriptive, such as an abstract syntax tree represented as an F# discriminated union. A perfect candidate for taking in our request message above and producing the desired result is Seq.fold.

val fold : ('State -> 'a -> 'State) -> 'State -> seq<'a> -> 'State

The left fold is a very useful function. It is equivalent to the Enumerable.Aggregate extension method in LINQ. This function takes a state transformation function, an initial seed value, and a sequence of data, and incrementally produces a final state value.
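
For instance, a minimal, illustrative use of Seq.fold over a sequence of byte chunks (the chunking and the accumulator here are made up for the example) might count the lines in the request:

// Illustrative only: count newline characters across a sequence of byte chunks.
// Seq.fold threads the running count through each chunk.
let countNewlines (chunks: seq<byte []>) =
    chunks
    |> Seq.fold (fun count chunk ->
        count + (chunk |> Array.filter (fun b -> b = byte '\n') |> Array.length)) 0

// Usage: treat the whole request as a single chunk.
// countNewlines (Seq.singleton httpRequest)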

Looks like we’re done here. However, there are still problems with stopping here:

Composition

You can certainly use function composition to generate an appropriate state-incrementing function, but you would still have no way to pause processing in order to delay selecting the appropriate message body processor.

Early termination

Suppose you really only ever want just the message headers. How would you stop processing to free your resources as quickly as possible? Forget it. Fold is going to keep going until it runs out of chunks. Even if you have your fold function stop updating the state after the headers are complete, you won’t get a result until the entire data stream has been processed.
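
A quick, illustrative sketch of the problem: even when the accumulator is decided after the first chunk, Seq.fold still visits every remaining chunk before it returns.

// Illustrative only: the result is fixed after the first chunk, yet Seq.fold
// keeps enumerating (and therefore keeps pulling data) until the sequence ends.
let firstChunk (chunks: seq<byte []>) =
    chunks
    |> Seq.fold (fun acc chunk ->
        match acc with
        | Some _ -> acc      // already done, but the fold cannot stop early
        | None   -> Some chunk) None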

Iteratees

The iteratee itself is a data consumer. It consumes data in chunks until it has either consumed enough data to produce a result or receives an EOF.

An iteratee is based on the fold operator with two differences:

  1. The iteratee may complete its processing early by returning a Done state. The iteratee may return the unused portion of any chunk it was passed in its Done state. This should not be confused with the rest of the stream not yet provided by the enumerator.

  2. The iteratee does not pull data but receives chunks of data from an “enumerator”. It returns a continuation to receive and process the next chunk. This means that an iteratee may be paused at any time, either by neglecting to pass it any additional data or by passing an Empty chunk.

Inspiration

For those interested in reading more, there are any number of articles and examples. Here are a few I’ve referenced quite frequently:

The Scalaz version arguably comes closest to the F# implementation, but I have generally based my implementations on the Haskell source. I’ve got two implementations currently, a version based on the enumerator package, and a continuation-passing style version based on the iteratee package.

The iteratee is composed of one of two states, either a done state with a result and the remainder of the data stream, or a continue state containing a continuation to continue processing.

type Iteratee<'el, 'a> =
    | Done of 'a * Stream<'el>
    | Continue of (Stream<'el> -> Iteratee<'el, 'a>)

The Stream type used here is not System.IO.Stream but yet another discriminated union to represent different stream chunk states:

type Stream<'el> =
    | Chunk of 'el  // A chunk of data
    | Empty         // An empty chunk represents a pause and can be used as a transition in compositions
    | EOF           // Denotes that no chunks remain

Some Iteratees

The following are some sample iteratees using the List<’a> type in F#. While not the most efficient, they are easy to express concisely and are thus better for illustrating the use of iteratees.

let length<'a> : Iteratee<'a list, int> =
    let rec step n = function
        | Empty | Chunk [] -> Continue (step n)
        | Chunk x -> Continue (step (n + x.Length))
        | EOF -> Done(n, EOF)
    in Continue (step 0)

let peek<'a> : Iteratee<'a list, 'a option> =
    let rec inner =
        let rec step = function
            | Empty | Chunk ([]:'a list) -> inner
            | Chunk(x::xs) as s -> Done(Some x, s)
            | EOF -> Done(None, EOF)
        Continue step
    in inner

let head<'a> : Iteratee<'a list, 'a option> =
    let rec inner =
        let rec step = function
            | Empty | Chunk ([]:'a list) -> inner
            | Chunk(x::xs) -> Done(Some x, Chunk xs)
            | EOF -> Done(None, EOF)
        Continue step
    in inner

let drop n =
    let rec step n = function
        | Empty | Chunk [] -> Continue <| step n
        | Chunk str ->
            if str.Length < n then
                Continue <| step (n - str.Length)
            else let extra = List.skip n str in Done((), Chunk extra)
        | EOF -> Done((), EOF)
    in if n <= 0 then empty<_> else Continue (step n)

let private dropWithPredicate pred listOp =
    let rec step = function
        | Empty | Chunk [] -> Continue step
        | Chunk x ->
            match listOp pred x with
            | [] -> Continue step
            | x' -> Done((), Chunk x')
        | EOF as s -> Done((), s)
    in Continue step

let dropWhile pred = dropWithPredicate pred List.skipWhile
let dropUntil pred = dropWithPredicate pred List.skipUntil

let take n =
    let rec step before n = function
        | Empty | Chunk [] -> Continue <| step before n
        | Chunk str ->
            if str.Length < n then
                Continue <| step (before @ str) (n - str.Length)
            else let str', extra = List.splitAt n str in Done(before @ str', Chunk extra)
        | EOF -> Done(before, EOF)
    in if n <= 0 then Done([], Empty) else Continue (step [] n)

let private takeWithPredicate (pred:'a -> bool) listOp =
    let rec step before = function
        | Empty | Chunk [] -> Continue (step before)
        | Chunk str ->
            match listOp pred str with
            | str', [] -> Continue (step (before @ str'))
            | str', extra -> Done(before @ str', Chunk extra)
        | EOF -> Done(before, EOF)
    in Continue (step [])

let takeWhile pred = takeWithPredicate pred List.span
let takeUntil pred = takeWithPredicate pred List.split

let heads str =
    let rec loop count str =
        match count, str with
        | (count, []) -> Done(count, EOF)
        | (count, str) -> Continue (step count str)
    and step count str s =
        match str, s with
        | str, Empty -> loop count str
        | str, (Chunk []) -> loop count str
        | c::t, (Chunk (c'::t')) ->
            if c = c' then step (count + 1) t (Chunk t')
            else Done(count, Chunk (c'::t'))
        | _, s -> Done(count, s)
    loop 0 str

// readLines clearly shows the composition allowed by iteratees.
// Note the use of the heads and takeUntil combinators.
let readLines =
    let toString chars = String(Array.ofList chars)
    let newlines = ['\r';'\n']
    let newline = ['\n']
    let isNewline c = c = '\r' || c = '\n'
    let terminators = heads newlines >>= fun n -> if n = 0 then heads newline else Done(n, Empty)
    let rec lines acc = takeUntil isNewline >>= fun l -> terminators >>= check acc l
    and check acc l count =
        match l, count with
        | _, 0 -> Done(Choice1Of2 (List.rev acc |> List.map toString), Chunk l)
        | [], _ -> Done(Choice2Of2 (List.rev acc |> List.map toString), EOF)
        | l, _ -> lines (l::acc)
    lines []

Enumerators

Enumerators feed data into iteratees, advancing their state until they complete.

// Feed the EOF chunk to the iteratee in order to force the result.
let rec enumEOF = function
    | Done(x,_) -> Done(x, EOF)
    | Continue k ->
        match k EOF with
        | Continue _ -> failwith "enumEOF: divergent iteratee"
        | i -> enumEOF i

// Run the iteratee and either return the result or throw an exception for a divergent iteratee.
let run i =
    match enumEOF i with
    | Done(x,_) -> x
    | Continue _ -> failwith "run: divergent iteratee"

// Feed the data stream to the iteratee as a single chunk.
// This is useful for testing but not conserving memory.
// val enumeratePure1Chunk :: 'a list -> Enumerator<'a list,'b>
let enumeratePure1Chunk str i =
    match str, i with
    | [], _ -> i
    | _, Done(_,_) -> i
    | _::_, Continue k -> k (Chunk str)

// Feed data to the iteratee in chunks of size n.
// val enumeratePureNChunk :: 'a list -> int -> Enumerator<'a list,'b>
let rec enumeratePureNChunk n str i =
    match str, i with
    | [], _ -> i
    | _, Done(_,_) -> i
    | _::_, Continue k ->
        let x, xs = List.splitAt n str in enumeratePureNChunk n xs (k (Chunk x))

// Feed data to the iteratee one value at a time.
//val enumerate :: 'a list -> Enumerator<'a list,'b>
let rec enumerate str i =
    match str, i with
    | [], _ -> i
    | _, Done(_,_) -> i
    | x::xs, Continue k -> enumerate xs (k (Chunk [x]))
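
As a quick usage sketch, assuming the length iteratee and the enumerators above are in scope, each enumerator drives the same iteratee to the same result:

// Drive the `length` iteratee with each enumerator and force the result with `run`.
let xs = [1; 2; 3; 4; 5]
let lenOneChunk  = enumeratePure1Chunk xs length |> run     // one big chunk
let lenTwoChunks = enumeratePureNChunk 2 xs length |> run   // chunks of two
let lenByElement = enumerate xs length |> run               // one element at a time
// All three evaluate to 5.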

Reading an HTTP Request by Lines

Back to our earlier example of reading HTTP requests, let’s see how we might read one by lines. A few caveats: this doesn’t encapsulate the Stream well, as we are using a MemoryStream directly. I could wrap this up in an enumerateMemoryStream function that takes all the bytes to read (which will likely happen soon). The example below uses the Iteratee.Binary module, not the Iteratee.List module.

let runIteratee() =
    let sw = Stopwatch.StartNew()
    let result =
        [ for _ in 1..10000 do
            use stream = new MemoryStream(httpRequest)
            yield! match enumStream 128 stream readLines |> run with
                   | Choice1Of2 x -> x
                   | Choice2Of2 y -> y ]
    sw.Stop()
    printfn "Iteratee read %d lines in %d ms" result.Length sw.ElapsedMilliseconds

So what?

System.IO.Stream-based processing

The System.IO.Stream type should be familiar to anyone who has ever worked with I/O in .NET. Streams are the primary abstraction available for working with streams of data, whether over the file system (FileStream) or network protocols (NetworkStream). Streams also have a nice support structure in the form of TextReaders and TextWriters, among other, similar types.

Using the standard Stream processing APIs, you might write something like the following:

let rec readConsecutiveLines (reader:System.IO.StreamReader) cont =
    if reader.EndOfStream then cont []
    else let line = reader.ReadLine()
         if System.String.IsNullOrEmpty(line) then cont []
         else readConsecutiveLines reader (fun tail -> cont (line::tail))

let readFromStream() =
    let sw = System.Diagnostics.Stopwatch.StartNew()
    let result =
        [ for _ in 1..10000 do
            use stream = new System.IO.MemoryStream(httpRequest)
            use reader = new System.IO.StreamReader(stream)
            yield readConsecutiveLines reader id ]
    sw.Stop()
    printfn "Stream read %d in %d ms" result.Length sw.ElapsedMilliseconds

readFromStream()

What might be the problems with this approach?

Blocking the current thread

This is a synchronous use of reading from a Stream. In fact, StreamReader can only be used in a synchronous fashion. There are no methods that even offer the Asynchronous Programming Model (APM). For that, you’ll need to read from the Stream in chunks and find the line breaks yourself. We’ll get to more on this in a few minutes.
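
As a rough sketch of what that looks like, assuming you drop down to the Stream itself and use F#’s async workflows (the line-splitting within and across chunk boundaries is still left to you):

// Illustrative sketch: read the stream asynchronously in fixed-size chunks.
// Finding the line breaks within and across chunks is still your problem.
let readChunksAsync (stream: System.IO.Stream) chunkSize =
    let buffer = Array.zeroCreate<byte> chunkSize
    let rec loop acc = async {
        let! bytesRead = stream.AsyncRead(buffer, 0, chunkSize)
        if bytesRead = 0 then return List.rev acc
        else return! loop (buffer.[0 .. bytesRead - 1] :: acc) }
    loop []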

Poor composition

First off, the sample above performs side effects throughout, and side effects don’t compose. You might suggest using standard function composition to create a complete message parser, but what if you don’t know ahead of time what type of request body you’ll receive? Can you pause processing to select the appropriate processor? You could certainly address this with some conditional branching, but where exactly do you tie that into your current logic? Also, the current parser breaks on lines using the StreamReader’s built-in logic. If you want to parse individual lines, where would you tie in that logic?

Memory consumption

In this example, we are not taking any care about our memory use. To that end, you might argue that we should just use StreamReader.ReadToEnd() and then break up the string using a compositional parser like FParsec. If we don’t care about careful use of memory, that’s actually quite a good idea. However, in most network I/O situations – such as writing high-performance servers – you really want to control your memory use very carefully. StreamReader does allow you to specify the chunk size, so it’s not a complete case against using StreamReader, which is why this is a last-place argument. And of course, you can certainly go after Stream.Read() directly.

IEnumerable-based processing

How can we add better compositionality and refrain from blocking the thread? One way others have solved this problem is through the use of iterators. A common example can be found on Stack Overflow. Iterators allow you to publish a lazy stream of data, either one element at a time or in chunks, and through LINQ or F#’s Seq module, chain together processors for the data.

// Read the stream byte by byte.
// Seq.unfold ends the sequence when ReadByte signals end of stream (-1),
// instead of looping forever once the stream is exhausted.
let readStreamByByte (stream: System.IO.Stream) =
    Seq.unfold (fun () ->
        let x = stream.ReadByte()
        if x < 0 then None else Some(byte x, ())) ()

// Read the stream by chunks.
// Copy out only the bytes actually read so consumers never alias the shared buffer.
let readStreamByChunk chunkSize (stream: System.IO.Stream) =
    let buffer = Array.zeroCreate<byte> chunkSize
    Seq.unfold (fun () ->
        let bytesRead = stream.Read(buffer, 0, chunkSize)
        if bytesRead = 0 then None else Some(buffer.[0 .. bytesRead - 1], ())) ()

// When you are sure you want text by lines
let readStreamByLines (reader: System.IO.StreamReader) =
    seq { while not reader.EndOfStream do
            yield reader.ReadLine() }

Three options are presented. In each, I’m using a Stream or StreamReader, but you could just as easily replace those with a byte[], ArraySegment<byte>, or SocketAsyncEventArgs. What could be wrong with these options?

Lazy, therefore resource contentious

Iterators are pull-based, so you’ll only retrieve a chunk when requested. This sounds pretty good for your processing code, but it’s not a very good situation for your sockets. If you have data coming in, you want to get it moved out as quickly as possible to free up your allocated threads and pinned memory buffers for more incoming or outgoing traffic.

Lazy, therefore non-deterministic

Each of these items is lazy; therefore, it’s impossible to know when you can free up resources. Who owns the Stream or StreamReader passed into each method? When is it safe to free the resource? If used immediately within a function, you may be fine, but you’d also be just as well off if you used a list or array comprehension and blocked until all bytes were read. (The one possible exception might be when using a co-routine style async pattern.)

Summary

Iteratees are able to resolve a number of issues related to general stream processing, as well as the non-determinism of lazy approaches, whether seq<_>- or Async<_>-based. I’m sure you are wondering why I didn’t dive deeper into explaining the iteratees above. In the next part, we’ll explore a more idiomatic .NET solution for creating iteratees. We’ll then decompose the iteratees above and cover each in more detail. In the meantime, please check out FSharpx for additional information.

Re: unREST

Tried commenting on the unREST blog post, but I kept getting an error that I was trying to post illegal content.

  1. You equate the HTTP version of REST with REST itself. That is inaccurate. I think what you don’t like is HTTP.

  2. I’ve never understood why SOAP didn’t drive directly off of TCP. It doesn’t really use HTTP except as a transport, which renders the HTTP layer fairly useless.

  3. I’m all for different application protocols. Build your own indeed. You are quite right that, in many cases, another person’s uniform interface is not the best. These can be REST or not, that’s really up to you.

  4. I find REST breathtakingly simple. I like it, so I apparently disagree with you, and I’m alright with that.

Here’s another question related to Frank: What should…

Here’s another question related to Frank: What should I do about converting existing platforms’ request and response types? I had originally planned on creating my own Request and Response types, but the cost to convert back and forth may or may not be worth it. I either have to create the conversions or a different set of operators per platform.

Functional programming really lends itself to either, but the latter is a bit more exciting. Why? Take a look at Mauricio’s latest post on Figment. He’s refactored to the Reader monad. Let’s say Frank uses the Reader and Writer monads as a public API for managing web applications. I could then allow developers to use Figment as their implementation on top of ASP.NET and build additional adapters for WebSharper, WCF, OWIN, etc. Frank would then provide a simple, functional DSL for routing and composing web applications, even using components built on different platforms… which has been the goal all along.

The best benefit, imho, is that developers already familiar with various request/response abstractions can continue using those abstractions. I don’t provide you with yet another HTTP abstraction. This is a significant departure from all other web frameworks of which I know. I suppose the benefit is balanced against the ease of building the adapter using the monad. Time to put functional programming to the test. I am quite certain that functional programming will win.
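
A minimal sketch of the idea, with every name hypothetical: if a Frank application is just a function from a request to a response, then a platform adapter is nothing more than a pair of conversion functions composed around it.

// Hypothetical sketch: an app is a function from request to response, and an
// adapter converts another platform's request/response pair around it.
type App<'req, 'res> = 'req -> 'res

let adapt (toInner   : 'outerReq -> 'innerReq)
          (fromInner : 'innerRes -> 'outerRes)
          (app       : App<'innerReq, 'innerRes>) : App<'outerReq, 'outerRes> =
    toInner >> app >> fromInner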

I’ve been working on the Frank syntax lately…

I’ve been working on the Frank syntax lately. Following some help from @talljoe, Frank now maps paths at the resource level rather than per HTTP method. Frank can then take advantage of combinators that add HEAD and OPTIONS based on the HTTP methods explicitly supported by the developer … something missing from most web frameworks, the notable exception being OpenRasta.
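
As a purely hypothetical illustration of the resource-level idea, not the actual Frank syntax: map one path to a set of handlers keyed by HTTP method and derive OPTIONS from whatever the developer explicitly supplies.

// Purely hypothetical sketch; none of these names are the real Frank API.
type Req  = { Method : string; Path : string }
type Resp = { Status : int; Body : string }

let resource path (handlers: (string * (Req -> Resp)) list) =
    let methods = Map.ofList handlers
    let allowed = handlers |> List.map fst |> String.concat ", "
    let withOptions =
        methods |> Map.add "OPTIONS" (fun _ -> { Status = 200; Body = allowed })
    fun (req: Req) ->
        if req.Path <> path then { Status = 404; Body = "Not Found" }
        else
            match Map.tryFind req.Method withOptions with
            | Some handler -> handler req
            | None -> { Status = 405; Body = "Method Not Allowed" }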

Below are samples of two approaches: function and resource. Both are quite noisy at the moment, but I plan to continue reducing the footprint.

Thoughts?

I haven’t had a lot of time recently…

I haven’t had a lot of time recently to do much with OWIN, WCF Web API, Frack, Frank, or any of the other projects I’ve been working on lately. I’m starting to pick back up on Frack, which will be undergoing some heavy refactoring, mostly for performance.

I’m also going to be finishing out the HTTP parser (finally). The big hold up so far has been trying to figure out how to do it in pieces and keep track of entire messages. Instead, I’m just going to go the old-fashioned route of expecting a complete message and parsing from beginning to end. (I should have started there.)

Frank will get a complete rewrite. Why? At this point, I’ve almost completely forgotten how or why I did some things. That’s never a good sign. It’s also a lot more complex than I was ever intending. I might arrive at some similarities, but I think a rewrite will serve it well.

In the meantime, I’ve been working on porting some sites to WebSharper. I love it. The only things I’m trying to work out are how to pull in markdown and .fsx files to render content for a lightweight git-based CMS. I’m also trying to see how this might run on top of Frack.