**travels through space**

Time passes, and space with it. I don't like airplanes overmuch, so I caught a train up to Paris last weekend, and from there Kate & I took a plane to Dublin to visit Jan Jaime Jingle Hemmett Schmidt.

Dublin was lovely, lovely -- performing for us in weather and in song, in light and dark -- the dark being the fine stout, of course. I couldn't pick a favorite part, though the one that keeps popping into mind is Newgrange, a 5000-year-old neolithic passage tomb, forty-some meters in diameter, aligned so that a shaft of light would pass into the chamber on the winter solstice.

That image aligns nicely with my Mumford readings of late. Before the construction of such artifacts, either as sticks planted in the ground or as massive monuments, *time did not exist* -- at least, not as we know it now, as an outside thing ticking on discretely without us -- as opposed to the spurting yet continuous flow experienced within.

I spent the rest of last week hanging out in deserted August Paris, the city-time of open-air cinema and ambles in unpeopled streets. But I won't lie: I spent a fair piece of it indoors, hack on the mind.

**travels through time**

At my last dispatch, I was in 1987, implementing flat closures, described in Dybvig's dissertation.

The recent spatial travel to Dublin, unaccompanied by the laptop, left my mind unusually clear. So spatially back in Paris on Wednesday I left again for 2002, implementing *Fixing Letrec*. The important contribution of that paper was a systematic, direct way to translate mutually recursive lambda expressions to a generalized form of the fixed-point operator, also known as the Y combinator. Quoth the prophets Waddell, Sarkar & Dybvig:

The transformation converts `letrec` expressions into an equivalent mix of `let`, `set!`, and `fix` expressions. A `fix` expression is a variant of `letrec` that binds only unassigned variables to `lambda` expressions. It represents a subset of `letrec` expressions that can be handled easily by later passes of a compiler. In particular, no assignments through external variables are necessary to implement mutually recursive procedures bound by `fix`. Instead, the closures produced by a `fix` expression can be block allocated and "wired" directly together. This leaves the `fix`-bound variables unassigned, thus simplifying optimizations such as inlining and loop recognition.

`fix` is identical to the `labels` construct handled by Steele's Rabbit compiler and the `Y` operator of Kranz's Orbit compiler and Rozas' Liar compiler.

For me, this passage was tantalizing, but I didn't quite understand the optimizations that a transformation to `fix` allowed. Yes, I had read Steele's dissertation a number of times, but Guile's compiler doesn't do a CPS rewrite, so it was unclear how some of the optimizations applied.

But my recent "flat closure" work had pointed out one optimization, the "block allocation" mentioned in the above paragraph. Consider two mutually recursive functions:

```scheme
(define (analyze n)
  (define (even? n) (or (= n 0) (odd? (- n 1))))
  (define (odd? n) (or (= n 1) (even? (- n 1))))
  (cond
   ((< n 0) 'negative-number)
   ((even? n) 'even-number)
   (else 'odd-number)))
```

Ignore for the moment the dubious "analysis" that this function performs. (Can you spot the bug?) What I want to focus on are the internal definitions, `even?` and `odd?`. Using `define` in a nested context like this is simply sugar around `letrec`, so this is equivalent to:

```scheme
(define (analyze n)
  (letrec ((even? (lambda (n) (or (= n 0) (odd? (- n 1)))))
           (odd? (lambda (n) (or (= n 1) (even? (- n 1))))))
    (cond
     ((< n 0) 'negative-number)
     ((even? n) 'even-number)
     (else 'odd-number))))
```

Here we see the mutual recursion expressed more clearly. Looking more closely at the `even?` and `odd?` bindings, we see that each has one free variable:

```scheme
(lambda (n) (or (= n 0) (odd? (- n 1))))
                        ;; ^ odd? is free in even?
(lambda (n) (or (= n 1) (even? (- n 1))))
                        ;; ^ even? is free in odd?
```

If `even?` and `odd?` are compiled as mutually recursive closures (a point to which I will return shortly), they need to capture each other's bindings -- kind of a chicken-and-egg problem, no?

In the lambda calculus, this problem is solved via the Y combinator, which basically involves passing the functions of interest to another function, which in turn invokes the functions of interest, *passing themselves as their arguments*. It's pretty neat. Thomas Holubar and Jos Koot have been discussing this very topic at FLIBUG, incidentally (see Thomas's slides on the need for Y for a brief introduction).
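Since Scheme evaluates arguments eagerly, one uses the applicative-order variant of `Y` (sometimes called `Z`). Here is a minimal sketch -- not Guile's implementation, just an illustration of the self-application trick described above:

```scheme
;; Applicative-order Y combinator (the "Z" variant, eta-expanded
;; so that the self-application is not evaluated eagerly).
(define Y
  (lambda (f)
    ((lambda (x) (f (lambda (v) ((x x) v))))
     (lambda (x) (f (lambda (v) ((x x) v)))))))

;; The function of interest receives *itself* as an argument:
(define fact
  (Y (lambda (self)
       (lambda (n)
         (if (zero? n) 1 (* n (self (- n 1))))))))

(fact 5) ; => 120
```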

**origins of letrec**

To continue the digression, I understand that the `letrec` construct was originally defined by Peter Landin in a 1965 paper, *A Correspondence between ALGOL 60 and Church's Lambda-Notation*. He uses a very schemely intermediate language to model ALGOL 60's semantics with sugar on top of a slightly extended lambda calculus.

In the following discussion, from the paper, Landin describes how mutually recursive definitions and labels (`goto` targets) can be expressed using `letrec`. Take *φn* to mean an arbitrary expression; italics are mine.

Definitions can also arise from the block-body -- their definees being the labels that are local to the block. These are defined in terms of locals, including each other, and they may be referred to by procedure and switch declarations. Hence labels must be grouped with switches and procedures as a single simultaneously recursive definition. The overall treatment of a block is therefore as follows.

[On the left we have ALGOL 60, on the right Landin's "Applicative Expression" intermediate language.]

```
begin                    let a = φ1
  real a;                and A = φ2
  array A φ2;            let rec P = φ3
  procedure P φ3;                and S = φ4
  switch S φ4;                   and L = φ6
  φ5;                            and M = φ7
  L: φ6;                 φ5
  M: φ7;
end
```

[Here is the translation into the lambda calculus. Note the `Y`!]

```
{λ(a,A).{λ(P,S,L,M).φ5}
        [Y λ(P,S,L,M).(φ3,φ4,φ6,φ7)]}
 [φ1,φ2]
```

Applying `Y` to mutually recursive procedures/labels *binds them together*, allowing them to see themselves. So you see, `Y` was in `letrec` from the very beginning.
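To make Landin's trick concrete, here is a hedged sketch in Scheme of a `Y` over a *pair* of procedures, applied to the `even?`/`odd?` generator from our earlier example. (`Y2` is a name I am inventing for illustration; it is not a standard procedure.)

```scheme
;; A two-procedure Y: the generator receives (even? odd?) and
;; returns a list of new definitions in terms of them.  The
;; self-application (x x) rebuilds that list on demand.
(define (Y2 gen)
  ((lambda (x) (gen (lambda (n) ((car (x x)) n))
                    (lambda (n) ((cadr (x x)) n))))
   (lambda (x) (gen (lambda (n) ((car (x x)) n))
                    (lambda (n) ((cadr (x x)) n))))))

(define even&odd
  (Y2 (lambda (even? odd?)
        (list (lambda (n) (or (= n 0) (odd? (- n 1))))
              (lambda (n) (or (= n 1) (even? (- n 1))))))))

((car even&odd) 10) ; => #t
```

The generator never names itself; `Y2`'s self-application supplies the mutual references, just as in Landin's translation.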

Note that labels, denoting basic blocks, are modelled as procedures of 0 arguments. In the light of Steele's 1977 contribution, which I will discuss shortly, Landin's 1965 paper was particularly prescient. Landin says:

The treatment of jumps springs from the observation that the symbol '`go to`' in ALGOL 60 is redundant, and could be ignored by a preprocessor. That is to say, there is a considerable similarity between labels and the identifiers of parameterless nontype procedures. It is possible to use the same "calling mechanism" for both, leaving any difference to be made by the thing that is "called"... It might therefore be supposed that labels can be eliminated formally by considering each labelled segment of program as a parameterless procedure declaration (and hence as a definition whose definiens is a λ()-expression).

Steele turns this isomorphism around, saying instead:

In general, procedure calls may be usefully thought of as GOTO statements which also pass parameters, and can be uniformly encoded as JUMP instructions. This is a simple, universal technique... Our approach results in all tail-recursive procedure calls being compiled as iterative code, almost without trying, for it is more a matter of the code generation strategy than of a specific attempt to remove recursions.

Landin passed away in June of this year.

**mischief managed**

Anyway, back to 2002. Scheme's `letrec` construct is more general than Landin's `let rec`; it allows arbitrary expressions on the right-hand side, not just procedures. So the first trick Waddell shows is how to transform `letrec` into "an equivalent mix of `let`, `set!`, and `fix`". Once he has the `fix`, which is the same as `Y`, we need to compile it -- and that's where we were before longjmping back to 1965.
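As a rough sketch of the transformation's shape (notation only -- `fix` is not standard Scheme, and this is not Guile's exact output):

```scheme
;; A letrec mixing a lambda binding with a complex expression...
(letrec ((f (lambda (x) (g x)))  ; unassigned, bound to a lambda
         (y (compute)))          ; complex: not a lambda
  body)

;; ...might become, roughly:
(let ((y #f))                    ; complex binding: let + set!
  (fix ((f (lambda (x) (g x))))  ; lambda bindings go to fix
    (set! y (compute))
    body))
```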

The lambda calculus is a very minimal language, though; when compiling to a real machine, there are often more efficient ways to implement higher-order constructs such as `fix` than literally applying `Y`.

So `fix`-bound procedures need to capture an environment including themselves, fine. The standard way (in Scheme now, not the lambda calculus) is to bind e.g. `even?` and `odd?` (as in our example above) to empty "boxes", then `set!` the contents of the boxes to the procedures. The `fix`-bound lambda-expressions have bound the right variables, and after the boxes have been filled in via `set!`, those expressions hold each other's value: mischief managed. Like this:

```scheme
(let ((even? #f) (odd? #f))
  (set! even? (lambda (n) (or (= n 0) (odd? (- n 1)))))
  (set! odd? (lambda (n) (or (= n 1) (even? (- n 1)))))
  ...)
```

But this is not ideal, because it forces the `fix`-bound lambda expressions to be allocated on the heap, in boxes, whereas in fact they are never `set!` in the source text.

We can do better. Since the lambda expressions don't actually run until after the bindings have been made, and thus don't reference their free bindings until then, we can allocate them on the stack, *then* fix up their free variables, mutating the closures in place.

Thus we don't introduce extraneous `set!` constructs into the code. This simplifies inlining and loop detection, as Waddell notes, and also allows the `fix`-bound variables to be allocated on the stack instead of in boxes, removing an indirection at runtime.
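Here is a toy model of that block-allocate-and-wire strategy, representing flat closures as vectors. This is an illustration of the idea, not Guile's actual runtime representation:

```scheme
;; Model a flat closure as a vector: slot 0 is the code, slot 1 the
;; single free variable.
(define (closure-code c) (vector-ref c 0))
(define (closure-free c) (vector-ref c 1))

;; Allocate both closures first, leaving the free-variable slots
;; uninitialized...
(define even-c
  (vector (lambda (self n)
            (or (= n 0)
                (let ((odd-c (closure-free self)))
                  ((closure-code odd-c) odd-c (- n 1)))))
          #f))
(define odd-c
  (vector (lambda (self n)
            (or (= n 1)
                (let ((even-c (closure-free self)))
                  ((closure-code even-c) even-c (- n 1)))))
          #f))

;; ...then wire them together, mutating the closures in place.
(vector-set! even-c 1 odd-c)
(vector-set! odd-c 1 even-c)

((closure-code even-c) even-c 10) ; => #t
```

Note that neither `even-c` nor `odd-c` is ever rebound: only the closures' free-variable slots are mutated, once, before any of the code runs.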

**even better**

But we can do even better than that. In this example, we still allocate a closure for `even?` and for `odd?` -- but we can avoid that too. Notice that `even?` and `odd?` are always called in tail position with respect to the `letrec` construct. Landin tells us that labels, the targets of `go to`, may be viewed as procedures of 0 arguments, with respect to the lambda calculus; Steele goes the other way, in his 1977 paper, Debunking the 'Expensive Procedure Call' Myth, or, Procedure Call Implementations Considered Harmful, or, Lambda: The Ultimate GOTO. (If you program, and want just one article to read out of all of these, read this paper.)

So, for a tail-call to a `fix`-bound procedure, we can simply render the bindings of the procedures inline in the parent procedure, jump over them, then enter the body of the `fix`. Any tail-call to a `fix`-bound procedure compiles to a `goto` -- after setting its arguments. Since the procedure is lexically bound, and never `set!`, we know exactly where those arguments are going to be, so we set them directly -- no need to shuffle them around.

So an empty loop:

```scheme
(letrec ((loop (lambda () (loop))))
  (loop))
```

Looks like this:

```
;; jump over definition of `loop'
   0    (br :L124)              ;; -> 16
;; definition of `loop'
   8    (br :L125)              ;; -> 8
;; letrec body
  16    (br :L125)              ;; -> 8
```

(You get this by typing `,x (lambda () (let loop () (loop)))` into the Guile REPL.)

A loop counting down from 100 looks like this:

```
scheme@(guile-user)> ,x (lambda () (let lp ((n 100)) (or (zero? n) (lp (1- n)))))
Disassembly of #<program b742e6d0 at <unknown port>:51:3 ()>:

   0    (br :L141)              ;; -> 32
   8    (local-ref 0)           ;; `n'
  10    (make-int8:0)           ;; 0
  11    (ee?)
  12    (local-set 1)           ;; `t'
  14    (local-ref 1)           ;; `t'
  16    (br-if-not :L142)       ;; -> 24
  19    (local-ref 1)           ;; `t'
  21    (return)
  24    (local-ref 0)           ;; `n'
  26    (sub1)
  27    (local-set 0)           ;; `n'
  29    (br :L143)              ;; -> 8
  32    (make-int8 100)         ;; 100
  34    (local-set 0)
  36    (br :L143)              ;; -> 8
```

I mean, that's pretty damn good. We could do better if we reordered the blocks a bit, and compiled the conditional branch `br-if-not` into something more specific, but still -- pretty tight for compiling scheme to bytecode.

(The `t`, if you are curious, is a result of the expansion of `or`.)

**conclusions**

Guile has applied Waddell's "Fixing Letrec" strategy to transform Scheme's general `letrec` into more primitive constructs, including `fix`. `fix`-bound lambda expressions that need to be allocated as closures are now faster and cons less, given that they aren't allocated in boxes, and an important subset of `fix` expressions is now rendered as inline code and wired together using `goto`, a pleasant illustration that indeed, lambda is the ultimate goto.

I didn't think I would get to this kind of bytecode this fast, but once the simplification to `fix` was made, the rest of the optimizations were obvious. But the hack continues; there's always more to do. Expect to see this in Guile's 1.9.2 release, coming on Saturday. Happy hacking!