Timeline for Y combinator and tail call optimizations
Current License: CC BY-SA 3.0
6 events
| when | what | action | by | license | comment |
|---|---|---|---|---|---|
| Jul 17, 2013 at 19:51 | comment | added | Ingo | | @DanD. In the same article you linked, it reads further down that GHC "does a number of optimisations on that representation, before finally compiling it into real machine code (possibly via C using GCC)." |
| Jul 17, 2013 at 17:10 | comment | added | Dan D. | | Actually, it does use graph reduction: "GHC compiles to the spineless tagless G-machine (STG). This is a notional graph reduction machine (i.e., a virtual machine that performs graph reductions as described above)." From... For more on the STG machine, see Simon Peyton Jones's Implementing lazy functional languages on stock hardware: the Spineless Tagless G-machine. |
| Jul 17, 2013 at 16:35 | comment | added | Ingo | | @DanD. Thanks for the links; I'll try them later in a PostScript-aware browser. I'm sure there is something there for me to learn. But are you sure that compiled Haskell does graph reduction? I doubt this. |
| Jul 17, 2013 at 16:29 | comment | added | Dan D. | | As to the last paragraph, consider Haskell, which at its heart uses graph reduction to do lazy evaluation. But my favorite is optimal reduction, which always takes the path in the Church-Rosser lattice with the fewest reductions to full normal form, as appears in Asperti and Guerrini's The Optimal Implementation of Functional Programming Languages. Also see BOHM 1.1. |
| Jul 17, 2013 at 16:27 | comment | added | Dan D. | | The Y-combinator is like retying a knot that keeps getting untied after each use. Most systems shortcut this and tie the knot at the meta-level so that it never needs to be retied. |
| Jul 17, 2013 at 15:04 | history | answered | Ingo | CC BY-SA 3.0 | |
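The "retying the knot" remark above can be sketched in Haskell. The untyped Y combinator `\f -> (\x -> f (x x)) (\x -> f (x x))` does not typecheck directly, so this sketch routes the self-application through a `newtype` (the `Rec` wrapper and the names below are illustrative, not from the comments); `fix`, by contrast, ties the knot once using Haskell's own recursive `let`:

```haskell
-- Sketch of the knot-tying remark, assuming GHC and lazy evaluation.
-- Rec lets a value be applied to itself despite the type system.
newtype Rec a = Rec { unRec :: Rec a -> a }

-- y rebuilds the self-application on every unfolding ("reties the knot").
y :: (a -> a) -> a
y f = g (Rec g) where g r = f (unRec r r)

-- fix ties the knot once at the meta-level, via Haskell's recursive let.
fix :: (a -> a) -> a
fix f = let x = f x in x

-- A recursive definition abstracted over its own recursive call.
factF :: (Integer -> Integer) -> Integer -> Integer
factF rec n = if n <= 0 then 1 else n * rec (n - 1)

main :: IO ()
main = do
  print (y   factF 5)  -- 120
  print (fix factF 5)  -- 120
```

Both produce a factorial function; the difference is only in how the fixed point is reached, which is the shortcut the comment describes.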