
Talk:Haskell/Denotational semantics

From Wikibooks, open books for an open world

Comment by Apfelmus

  • Look of ⊥: in a mathematical context, ⊥ is preferred, whereas in a code context, _|_ should be the way to go.
  • Draw real pictures showing the various poset structures. Especially the ones with the recursive data type definitions!
  • It's very cool that semantics can partly be explored in Haskell itself. Are there paragraphs where this opportunity was forgotten?
  • Improve examples and exercises. Most have been invented very quickly and are not very good. Those that are, are most likely ones that I remembered from different textbooks. Exercises where one must repeat or slightly modify the explanation from the text are helpful for a first understanding but should be kept to a minimum. True insight can only be achieved by tasks where one has to twist one's mind. Train your brain, don't train a pattern.
  • Add more exercises and examples? A spot where readers frequently get stuck is clearly a good place for additional examples and exercises. Hopefully, these will emerge at this Talk-page.

Apfelmus 19:39, 22 December 2006 (UTC)Reply

Denotational semantics


Imperative languages can certainly be given denotational semantics. It's just that a direct semantics doesn't work anymore. You need to model the store, and usually continuations as well.

Furthermore, imperative languages can be given other non-operational semantics, such as Floyd-Hoare-Dijkstra style axiomatic semantics. You (Apfelmus) may be interested to search the EWD archive for what Dijkstra had to say about operational reasoning, most of it in regard to imperative programming. It was I who removed the erroneous and misleading paragraph mentioning imperative languages. I apologise for not adding an edit summary; this was due to my stupidity, not rudeness.
Of course, you are right. Indeed, just by using a MonadState, one already gives denotational semantics to imperative stuff. I somehow wanted to pin down that reasoning is much easier in purely functional languages because the semantics are very direct. So I just denied denotational semantics for imperative stuff altogether and hoped nobody would complain :) Compared to pure functional programs, it doesn't matter much whether I have operational ("machine, change your state!") or denotational ("plan of how machine state changes") semantics, the reasoning doesn't get much easier. Denotational semantics tries to recapture as much pure stuff as possible from the operational semantics, similar in power to the finding of a suitable loop invariant. Ah, and thanks for the EWD link. Apfelmus 22:29, 27 December 2006 (UTC)Reply

forall a . a


The type 'forall a . a' doesn't have to give an error. It can also loop forever. (Which is an error, but not observable.)

Fixed, "any function of the polymorphic type forall a . a must denote ⊥". Why didn't I choose that compelling formulation in the first place? Apfelmus 21:04, 27 December 2006 (UTC)Reply
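
For readers experimenting along, a minimal sketch (definitions mine, not from the page) of the two ways an inhabitant of the fully polymorphic type can denote ⊥ — non-termination and a runtime error:

```haskell
-- Sketch: the only inhabitants of the fully polymorphic type are bottoms.
bottom1 :: a
bottom1 = bottom1        -- loops forever: denotes ⊥

bottom2 :: a
bottom2 = error "bottom" -- raises an error: also denotes ⊥
```

Thanks to laziness, both can be passed around harmlessly as long as they are never forced, e.g. const 1 bottom2 evaluates to 1.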

Mistake in the definition of ⊑?


Isn't there a mistake in the definition, which states that "a ⊑ b denotes that a is more defined than b"? It seems inconsistent with "every concrete number is more defined than ⊥". Shouldn't it be "less defined"?

I was going to point this out too. How about "a ⊑ b denotes that b is more defined than a"? 71.202.203.117 00:34, 27 December 2006 (UTC)Reply

Oops, it's a mistake of course. Fixed. At the same time, I reverted "completely equal programs" to "equal", because of the subtlety that programs can be syntactically equal as well. Of course, they can be syntactically different and still denote the same "mathematical object". Apfelmus 20:47, 27 December 2006 (UTC)Reply

Love it!


Has really helped my understanding. Thank you.

I did some images, they're perhaps a bit big, anyone who cares can resize them. DavidHouse 17:32, 26 December 2006 (UTC)Reply

Non-strict constructors


"because we can write a function that tells them apart"

f (Just _) = 4
f _        = 7

"As f ignores the contents of the Just constructor, f (Just _|_) is 4 but f _|_ is 7." This argument is truly ingenious but unfortunately wrong :) With the above definition, we have f ⊥ = ⊥. The stated properties cannot hold because they violate monotonicity. The fact that Just ⊥ is different from ⊥ cannot be "proved", it's a design decision. How can this be formulated nicely? The author above clearly wanted to appeal to a "proof" by example like in the case of const 1. Apfelmus 13:38, 28 December 2006 (UTC)Reply

What's currently in the article is clearly wrong, as you point out, but the fact that f _|_ = _|_ and f (Just _|_) = 4 should be enough to achieve the author's point, that _|_ and (Just _|_) are distinct. -Chinju
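
Chinju's point can be observed directly in GHC, with an error call standing in for ⊥ (the probe helper is mine, and exceptions only approximate ⊥):

```haskell
import Control.Exception (SomeException, evaluate, try)

-- the function under discussion
f :: Maybe a -> Int
f (Just _) = 4
f _        = 7

-- report whether forcing a value hits bottom (here: an error call)
probe :: Int -> IO String
probe x = do
  r <- try (evaluate x) :: IO (Either SomeException Int)
  return (either (const "bottom") show r)
```

probe (f (Just undefined)) gives "4" while probe (f undefined) gives "bottom", so Just ⊥ and ⊥ are observably distinct, even though f ⊥ = ⊥ rather than 7.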

Approximations of the Factorial Function


The sentence "That is, it transforms less defined functions to more defined ones." seems to mean that f ⊑ g(f) for all f, which isn't true.

You're right. I just put in a clarification that hopefully makes more sense. Stereotype441 15:28, 8 February 2007 (UTC)Reply

Denotional semantics & Equational reasoning


I just found an interesting article Fast and loose reasoning is morally correct. Basically it says that even though having Bottom results in inconsistent logic, with a few simple constraints equational proofs still work. I wonder whether some of it can be useful here. I'll be away the rest of the weekend but then will try to read the paper more carefully. I also wonder if the subject of w:paraconsistent logic has any applicability here. </clueless newbie> 64.160.39.153 05:28, 4 March 2007 (UTC)Reply

Hello, sorry for this late response. I planned a chapter Haskell/Equation reasoning that covers reasoning formally about Haskell programs. As you point out, it would be great to have a subsection based on the article you mentioned that shows how equational reasoning keeps working with ⊥ although it usually assumes that all programs terminate. Thanks for your suggestion :) I don't know enough about paraconsistent logic, but equational reasoning that ignores ⊥ is like calculating with infinitesimal numbers: done right, it works, done wrong, it doesn't :). Related to that, I'd also like to outsource the infinite lists from this chapter to a place where all forms of program proofs (equational, denotational semantics, coinduction) live together, maybe even as a separate chapter. -- apfeλmus 12:10, 24 April 2007 (UTC)Reply

Continuity


The text says that for any type a, any monotone function f :: a -> Integer must be continuous. Why? I can only prove sup f(xn) ⊑ f(sup xn). Why is there not a type a and a sequence xn of values of type a such that, say, f(xn) = ⊥ for any n and f(sup xn) = 0?

In the light of your comment, the argument in the text indeed doesn't show anything.
The fundamental question I wanted to answer is: although allowing or disallowing continuity is an arbitrary choice, every programming language can only formulate continuous functions. What magic makes this possible? I think that one can of course prove for every given language (given by its programming constructs like case, if .. then .. else, recursion and so on) that only continuous functions can be written down. However, I feel that this is not the right way to prove it because this property is universal for all programming languages, hence there ought to be a proof working for all languages at the same time.
If the type a has only chains with finitely many elements, continuity is granted for free. But for example, Integer->Integer has infinite chains, one being the sequence of approximations hn to the everywhere-zero function, where hn k = 0 for k < n and hn k = ⊥ otherwise:
h0 ⊑ h1 ⊑ h2 ⊑ … .
Of course, this chain has a least upper bound, the everywhere-zero function itself. Now, imagine a non-continuous function f :: (Integer->Integer) -> Integer with the requirements

f(hn) = ⊥ for every n, but f(sup hn) = 0.

Can this function be written down in Haskell? No, for it would have to test its function argument at all possible values and check whether the outcome is 0. Clearly, this would take an infinite amount of time. What I had in mind when writing the text is that the transition of f(hn) from ⊥ to 0 must happen at some finite point in the chain; the decision whether it's ⊥ or 0 may not be postponed to infinity. This is due to the finite nature of computation. With this in mind, the current text argues that the image of any function has in essence only finite chains so that the decision about the result value can indeed be done after a finite number of steps.
Of course, this whole argument isn't formally correct, but I think it hints at what's going on. I currently don't know a correct argument.
Clearly, the text needs a rewrite, though I'm unsure on how to do that. -- apfeλmus 11:31, 24 April 2007 (UTC)Reply
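
To make the "finite testing" point above tangible, here is a small sketch (the names hn and zeroUpTo are mine): a program can only check the zero-everywhere property on finitely many inputs, so it can never separate the chain elements from their limit:

```haskell
-- hn n: the n-th approximation of the everywhere-zero function;
-- defined (as 0) below n, bottom from n upwards.
hn :: Integer -> (Integer -> Integer)
hn n k = if k < n then 0 else undefined

-- Any computable test can only inspect a finite prefix of inputs:
zeroUpTo :: Integer -> (Integer -> Integer) -> Bool
zeroUpTo m h = all (\k -> h k == 0) [0 .. m - 1]
```

zeroUpTo m (hn n) is True whenever m ≤ n, but a test for "zero at every input" would need m to go to infinity and could never terminate — which is why the non-continuous f above cannot be written down.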

I have an informal counterexample (maybe), but I haven't the knowledge to formalize it. It is inspired by Fran (indeed, I found this book when trying to understand Fran semantics). Let us define

data Time = Exact Float | Interval Float Float

A value Exact t means "time is t" and a value Interval t1 t2 with t1 ≤ t2 means "time is in between t1 and t2". Thus, according to this meaning, we define Interval t1 t2 ⊑ Exact t for any t with t1 ≤ t ≤ t2, and Interval t1 t2 ⊑ Interval s1 s2 iff t1 ≤ s1 and s2 ≤ t2. Now consider the sequence xn = Interval 0 (1/n), which converges to Exact 0, and consider the function f = g . len where

len (Interval t1 t2) = t2 - t1
len (Exact t) = 0

and g is any function such that g 0 = 0 and g x = ⊥ for any x ≠ 0. This is only an informal example, I don't know if it is valid.

I don't think it's a counterexample because the meaning of ⊑ is fixed as "less defined than". I mean, neither Interval t1 t2 ⊑ Exact t nor Exact t ⊑ Interval t1 t2 hold. In other words, your ordering is quite different. Your example basically tries to introduce the real numbers into denotational semantics by redefining ⊑. Of course, not all functions on real numbers are continuous anymore. (In fact, they all are continuous again in intuitionistic logic but that leads too far here)
Also note that Float is a bad choice, because there is a smallest nonzero floating point number, i.e. the sequence xn doesn't converge to Exact 0. But you can use an exact representation of rational numbers instead to make at least this convergence work.
-- apfeλmus 15:50, 26 April 2007 (UTC)Reply

Not all Functions in Strict Languages are Strict


I think this section is wrong. The reason that the lifting trick works is not because functions are not strict in function types, but because lambda terms like (\x:Int.1) or (\x:Int._|_) are in normal form and so cannot take a step of evaluation. Other terms of type Int->Int, like (\x:Int->Int.x)(\x:Int.x) will be reduced.

In fact (ignoring type classes), evaluation in a Haskell-like language ought to be completely independent of type information, which can be erased and forgotten once type checking is completed. In a strict language, if const1 (undefined :: Int) = _|_, then const1 (undefined :: Int->Int) = _|_.

The following Standard ML code supports this:

exception Undefined
fun const1 (x:int->int) = 1
const1 (raise Undefined)

In the above code, const1 has type (int->int)->int and raise Undefined, which has equivalent behaviour to Haskell's undefined, has type int->int.

The line const1 (raise Undefined) gives an exception (bottom), not 1.

A strict language shown in chapter 14 of TAPL has an 'error' term, very similar to Haskell's 'undefined'. It also has this behaviour.

Rebooted 15:32, 6 September 2007 (UTC)Reply

Good point. The problem is that ML has "two" undefined functions, namely
raise Undefined :: Int -> Int
and
λx.raise Undefined :: Int -> Int
or better
fix (λf x.f x) :: Int -> Int
In other words, we can distinguish between a function that is an exception and a function that raises an exception on every value. Put differently, the function type is lifted by an additional ⊥ ~= raise Undefined and every function is indeed strict with respect to this ⊥. Unfortunately, this ⊥ is useless for fixpoint iteration which depends on ⊥ = λx.⊥ (as defined in the beginning of the chapter).
So, yes, the section is not correct but the denotational semantics of call-by-value are even trickier than the section is wrong :-) I'm unsure what to do. I wanted the section to somehow show the () -> a trick which can simulate call-by-name in a call-by-value language. Hm, maybe it doesn't belong to denotational semantics at all?
-- apfeλmus 16:59, 6 September 2007 (UTC)Reply
I'd recommend removing the section and replacing it with one entitled "Emulating non-strict behavior in strict languages" which just demonstrates the trick.
Rebooted 15:31, 9 September 2007 (UTC)Reply
On second thought, I'm not sure whether this solves the problem because the issue is even larger now: the discussion of "strict language" is wrong, too. I mean, the whole chapter implicitly assumes that λx.⊥ = ⊥ which doesn't work for call-by-value. Also, the trick still works thanks to some kind of non-strictness, namely the fact that f (λx.⊥) =/= ⊥ in contrast to f ⊥ = ⊥.
It's probably best to look up the exact denotational semantics of call-by-value in a paper/book to figure out how (or whether) to present strict semantics.
--apfeλmus 08:33, 10 September 2007 (UTC)Reply
Self-note: fixpoints in call-by-value are fine, it's just that the iterated functional is not strict in the value (and thus not internal to the language). In other words, a function fix that starts the iteration with ⊥ would be rather useless in a call-by-value language.
--apfeλmus 20:05, 6 October 2007 (UTC)Reply

Partiality of ⊑ (in "Strict Functions")


FIXME: ≤ defines a totally ordered set (toset) on Integer, so we have either 2 ≤ 5 or 5 ≤ 2 or both; if both were true, then 2 = 5, which would be effectively a contradiction, but we can still have either 2 ⊑ 5 or 5 ⊑ 2 where one of these two predicates is true and the other is false. Such a thing is easy to define by giving a semantics to x ⊑ y where x and y are both non-undefined integers, using a "friend" toset on Integer − {undefined} (such as ≤). So 2 and 5 can be compared with ⊑.

Isn't there a confusion here between the toset ≤ and the poset ⊑?

-- 90.50.99.233

Eh? The main point about ⊑ is that it's not a total ordering, exemplified in Haskell/Denotational semantics#Partial Functions and the Semantic Approximation Order. The choice that neither 2 ⊑ 5 nor 5 ⊑ 2 hold is intentional, the semantic approximation order only says whether one value (like ⊥) is an approximation to another one (like 1). The usual ordering of integers is irrelevant here.
--apfeλmus 20:02, 6 October 2007 (UTC)Reply

Definition of ⊑


The definition of ⊑ seems pretty counter-intuitive to me: aren't the values on both sides equally defined?

⊑ still means "equal". In other words, x ⊑ y is "the same value, less defined" but not "some values ordered by some degree of definedness". apfeλmus 08:54, 18 June 2008 (UTC)Reply

There seem to be several people confused about the details of the ⊑ relation (there's a chat about it under the 'Partial Functions and Approximation Order' heading, and I found it non-obvious upon first reading as well). Perhaps it would be clearer if the relation were defined in one go instead of being spread over several examples - e.g. a ⊑ b iff "you can replace parts of b with ⊥ to get a" Yet Another Haskell User (discusscontribs) 12:40, 21 August 2020 (UTC)Reply

Functions with several Arguments


Should not the code associated with "We see that it is strict in y depending on whether ..." be

cond True x ⊥ = x
cond False x ⊥ = ⊥

instead the current one? --AndresSR (talk) 21:46, 28 January 2008 (UTC)Reply

That's right, thanks :-) Fixed now. -- apfeλmus 08:23, 29 January 2008 (UTC)Reply
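
For completeness, a runnable version of the point being made, assuming the chapter's cond is the usual if-then-else function:

```haskell
-- the standard conditional function discussed in the section
cond :: Bool -> a -> a -> a
cond True  x _ = x
cond False _ y = y
```

cond True 1 undefined evaluates to 1 (not strict in y), while cond False 1 undefined is ⊥ (strict in y) — matching the corrected semantic equations.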


take 3 nats

take 2 (map (+1) nats) = take 3 nats 

Is this really true? Doesn't the left expression evaluate to [2, 3] while the right expression evaluates to [1, 2, 3]?

--Md2perpe (talk) 00:03, 17 February 2008 (UTC)Reply

Oops, that's wrong of course. I'm going to fix that. -- apfeλmus 22:01, 20 February 2008 (UTC)Reply
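
The discrepancy is easy to check directly, assuming nats = [1..] as in the chapter:

```haskell
nats :: [Integer]
nats = [1 ..]                  -- assumption: the chapter's nats starts at 1

lhs, rhs :: [Integer]
lhs = take 2 (map (+1) nats)   -- evaluates to [2,3]
rhs = take 3 nats              -- evaluates to [1,2,3]
```

The two sides differ both in length and in their first element, confirming Md2perpe's observation.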

Recursive data types and infinite lists diagram


In the illustration of the data type

 data List = [] | () : List 

The value ():[] is shown below ():_|_. Is this a mistake? Thanks.

Oops, it indeed is. -- apfeλmus 21:11, 8 May 2008 (UTC)Reply

Naïve Sets are unsuited for Recursive Data Types


Nevermind, didn't read carefully enough 76.105.193.33 (talk) 13:45, 22 September 2008 (UTC)Reply

Is g in section 3.1 a monotone functional?


Take f such that f(2)=1 and f = _|_ everywhere else. Now g(f)(2) = 2*f(1) = 2 * _|_ = _|_ , which is less defined than f(2) ?

Oh. Right, thank you for pointing this out. Looks like there's a fundamental flaw in the argument presented. I'll fix that.
-- apfeλmus 08:06, 12 October 2008 (UTC)Reply
Ah wait, no, that's no problem. Monotone doesn't mean that the result is more defined than the original (f ⊑ g(f)), it means that a larger original will be turned into a larger result (x ⊑ y implies g(x) ⊑ g(y)). So, the fixed point iteration only works if we start with ⊥.
-- apfeλmus 12:24, 12 October 2008 (UTC)Reply
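
As a concrete companion to this exchange, here is a sketch of the iteration (my reconstruction of the section's functional g as the usual factorial functional):

```haskell
-- Reconstructed factorial functional (assumption: the section's g
-- is the standard one for factorial).
g :: (Integer -> Integer) -> (Integer -> Integer)
g f n = if n == 0 then 1 else n * f (n - 1)

-- Fixed-point iteration starting from the least element ⊥
-- (modelled here by undefined):
approxs :: [Integer -> Integer]
approxs = iterate g undefined
```

(approxs !! k) n is defined exactly when n < k, e.g. (approxs !! 4) 3 yields 6 — each iteration step produces a strictly more defined approximation, as monotonicity plus starting at ⊥ guarantees.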

Functions Integer -> Integer is a (d)cpo


I don't understand why the fact that monotone sequences may be of infinite length implies that the functions Integer -> Integer form a (d)cpo.




Define



every chain of functions Integer -> Integer has a least upper bound.
Therefore, the functions Integer -> Integer form a (d)cpo. --LungZeno (talk) 03:55, 7 November 2008 (UTC)Reply

What the text wants to say is the following: if monotone sequences are always finite, like in the case of Integer, then it's clear that the ordering is complete. Of course, this doesn't work for Integer->Integer, but there is a (slightly) different argument that does show that the partial functions are a cpo, too.
apfeλmus 09:51, 7 November 2008 (UTC)Reply
Does "finite" mean that the number of elements in a monotone sequence is finite?
Or that the number of monotone sequences in Integer is finite?
--LungZeno (talk) 11:10, 7 November 2008 (UTC)Reply
I saw the explanation.
You mean that the number of elements in every monotone sequence in Integer is finite.
--LungZeno (talk) 11:42, 7 November 2008 (UTC)Reply
My proof is wrong, because it only proves that every chain of functions Integer -> Integer has a minimal upper bound. Changing my proof to:
every chain of functions Integer -> Integer has a least upper bound.
--LungZeno (talk) 12:24, 7 November 2008 (UTC)Reply

Why is Nothing less defined than Just True?


And why is it at the same level as Just _|_ ? --Colin-adams (talk) 21:03, 22 November 2008 (UTC)Reply

It's not. Nothing and Just True are incomparable, and Just ⊥ is less defined than Just True.
In the diagrams, "every link between two nodes specifies that the one above is more defined than the one below" (Section 2.2). Should there be a reminder in the section about data types?
-- apfeλmus 10:09, 23 November 2008 (UTC)Reply

Haskell Recursion Example


I think it might be useful to point out that the "let x = f x in x" construction for the "fix" function actually uses the same least fixpoint semantics under the hood. Thus "x = f x" semantically means that "x = fix(g)", where g = \x -> f x, leading to x = lfp(f); Thus you are using the same semantics for recursive functions to denote recursive elements.

I think some extra explanation such as the above is really useful, as I can see many people becoming puzzled by reading this "let x = f x in x" construction because there could be many fixpoints of f and it is not clear that the Haskell semantics for functions are the same for these values. jjanssen (talk) 18:51, 2 April 2009 (UTC)Reply

Sounds good. A new subsection on how the discussion applies to Haskell (touching on the examples fib n = fib (n-1) + fib (n-2), fix and let x = f x in) in the recursion chapter is probably the best way to incorporate that.
-- apfeλmus 20:51, 2 April 2009 (UTC)Reply
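
A small self-contained sketch of the correspondence being discussed (standard definitions, not quoted from the page):

```haskell
-- fix via a self-referential let: exactly the "let x = f x in x" construction
fix :: (a -> a) -> a
fix f = let x = f x in x

-- factorial as the least fixed point of its defining functional
fac :: Integer -> Integer
fac = fix (\rec n -> if n == 0 then 1 else n * rec (n - 1))
```

The recursive binding and the explicit fix denote the same least fixed point, which is the point jjanssen suggests spelling out.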

Request additional Algebraic Data Types examples


More examples of data types (such as bool, lifted bool, unit, nat) and operations (lift, cartesian conjunction, square, disjunctive join) in the form of trees (one picture has the weight of a thousand words..) would help understand the topic. Can more examples be added? --194.1.130.37 (talk) 09:48, 16 November 2010 (UTC)Reply

"Recursive Data Types and Infinite Lists", the diagram


There is a mistake in the diagram in the chapter "Recursive Data Types and Infinite Lists": the nodes "():⊥" and "():[]" are swapped. Does anyone have a source code for that diagram? --Beroal (discusscontribs) 15:07, 23 January 2011 (UTC)Reply

I think it was an SVG file; maybe it's still available? In any case, I very much like the diagrams by E.Z. Yang. Maybe we can ask him to release them with a CC license?
--apfeλmus 08:16, 27 January 2011 (UTC)Reply

Syntax and semantics


The word semantics is usually singular, much like physics: “Denotational semantics gives meanings to programs,” “this language has this semantics,” etc. The CS literature is consistent on this point AFAIK. (The word can also be plural: “We consider several possible semantics, including call-by-value and call-by-name.” But that's not the usage anywhere in this text.)

Luke Maurer (discusscontribs) 08:18, 2 February 2011 (UTC)Reply

A "wrong" section?


I just noticed that someone added a brisk "This section is wrong." note to the top of the "Not all Functions in Strict Languages are Strict". I don't know enough theory to evaluate that statement; so could anyone more knowledgeable than me tell us whether that section is wrong (and, if so, why)? Thanks, Duplode (discusscontribs) 04:51, 25 April 2012 (UTC)Reply

Section missing


In Section 6.3 "Naïve Sets are unsuited for Recursive Data Types" there's a link to Naïve Sets are unsuited for Recursive Definitions, but it does not exist.

The section was renamed to What to choose as Semantic Domain?. The link is now fixed. Duplode (discusscontribs) 03:14, 11 April 2014 (UTC)Reply

Stumbling block: Partial Functions and Approximation Order


Under "Partial Functions and Approximation Order", I found the section discussing ⊑ rather indigestible. We have the para:

This is contrasted to the ordinary order predicate ≤, which can compare any two numbers. A quick way to remember this is the sentence: "1 and 2 are different in terms of information content but are equal in terms of information quantity". That's another reason why we use a different symbol: ⊑.


"A quick way to remember this" -- What is "this" in this sentence? Does it pertain to the immediately preceding statement? Or is it trying to describe ⊑? And what is content vs quantity?

Further, the sentence "1 and 2 are [...in some sense equal, in another sense not equal...]" is followed by this:

neither 1 ⊑ 2 nor 2 ⊑ 1 hold, but 1 ≤ 2 holds.

Huh? How does this statement using ⊑, where both expressions involving 1 and 2 do not hold, relate to the previous sentence which says that something about 1 and 2 does hold?

Hopefully someone can clarify this. Gwideman (discusscontribs) 06:17, 5 February 2013 (UTC)Reply


I am similarly confused by this section. First it defines ⊑ as:

... a ⊑ b will denote that either b is more defined than a or both are equal (and so have the same definedness).

A little later we read:

neither 1 ⊑ 2 nor 2 ⊑ 1 hold,

I would think that given the definition of ⊑, both 1 ⊑ 2 and 2 ⊑ 1 would hold. Aren't 1 and 2 equally defined, so both the relations hold?

Thanks in advance. --Dougc (discusscontribs) 11:57, 13 April 2013 (UTC)Reply


disclaimer: I'm no expert

If both 1 ⊑ 2 and 2 ⊑ 1 held, then the antisymmetry law of posets would be broken, since

Antisymmetry: if both x ⊑ y and y ⊑ x hold, then x and y must be equal: x = y.

So you either don't define 1 ⊑ 2 and 2 ⊑ 1, or you say both are false. In both cases you break a law of totally ordered sets, which seems to be the reason Integer is a poset instead of a totally ordered set.

Maybe you could just deny that Integer is a poset (or total order) and make both 1 ⊑ 2 and 2 ⊑ 1 true, but I guess that would be unnecessarily complicated.

--89.246.167.57 (discuss) 19:48, 14 July 2013 (UTC)Reply

Why is f0 = _|_ and why do we have to start at f0?


Section: Approximations of the Factorial Function.

And why ?

is not true here.

For example or

--89.246.167.57 (discuss) 20:30, 14 July 2013 (UTC)Reply

Interpreting factorial as a set


The article currently says:

Now that we have a means to describe partial functions, we can give an interpretation to recursive definitions. Let's take the prominent example of the factorial function whose recursive definition is

Although we saw that interpreting this directly as a set description may lead to problems, …

When did we see that? What is that referring to? Is it another article? (In that case, I think a link should be there.) Or is it something that was removed since? (Then this should be rewritten.) Or did I just miss something? Svick (discusscontribs) 22:04, 19 March 2014 (UTC)Reply

The reference is to the discussion about recursive functions in Haskell/Denotational semantics#What to choose as Semantic Domain?. I reworded the passage slightly to make it more obvious. Duplode (discusscontribs) 03:18, 11 April 2014 (UTC)Reply

Bottom type vs. Bottom value


Someone should note the difference between the bottom value (without type) of this article and the bottom type (requiring subtyping) from Wikipedia: Bottom type. At first I assumed they were the same and couldn't follow the point of creating a bottom value requiring definedness to avoid solving the halting problem, when you could just create a bottom type without value.

On a tangential note, the type with no values (sans bottom) is usually known in Haskell as Void. --Duplode (discusscontribs) 00:29, 11 May 2019 (UTC)Reply

Approximations of the Factorial Function: Use of next/previous


There are two ways to iterate the factorial function:

The first is the operational approach:

The second is the denotational approach:

The operational approach iterates from n towards 0, while the denotational approach iterates from ⊥ towards the factorial function. In both cases the function call order remains unchanged: call the next function with a diminishing argument, i.e. f (n-1). What changes is the definition of 'next' or 'previous'. In the operational case, next is towards 0. In the denotational case, next is towards the factorial function. Try as an exercise recreating the steps using:

In this sentence:

we calculate a sequence of functions with the property that each one consists of the right hand side applied to the previous one

the use of 'previous' corresponds to the denotational approach. Interpreted operationally it seems that the function call order has to be reversed, which means:

This is wrong! The correct solution is simply:

This mistake isn't possible with the factorial function because its call order can't be inverted (possibly related to the fact that it converges).

I believe that sentence of the text can be made clearer by replacing "one" with "function", so I'm doing that. --Duplode (discusscontribs) 05:50, 6 May 2019 (UTC)Reply

Monotonicity Inverse


It is also crucial that the inverse does not hold:

For example, f(x) ⊑ f(y) does not imply x ⊑ y. If that were the case, then these examples would not be valid:

Convergence Terminology


A directed-complete partial order refers to a partial order in which every directed subset ("every monotone sequence") is complete ("has a least upper bound"). A dcpo with a least element is termed a pointed directed-complete partial order, which this book abbreviates simply as cpo ("we will only meet dcpos which have a least element which are called complete partial orders (cpo)"). See Wikipedia: Complete Partial Order.

Continuity refers to Scott continuity. See Wikipedia: Scott Continuity.

Interpretation as Least Fixed Point: Multiple fixed points


The example with multiple fixed points is not an assignment of a value to f (that would use := rather than =); rather it defines a fixed-point equality (i.e., an equation):

,

The corresponding solution would be a fixed point of:

,

Here are two fixed-point solutions (thanks to monochrom). The first corresponds to f n = if n == 0 then 1 else f (n+1) and is also the least fixed point:

This second solution is clearly more defined than the first:

Strict Functions: power2


The question regarding power2 was somewhat vague. Yes, a direct proof is possible, but what does a proof involving fixed-point iteration mean? If you iterate this functional:

It converges to power2. If it weren't for the if-then-else stop condition, it would simply converge to ⊥. Unfortunately these iterations don't answer the question; rather the intention was simply to iterate power2 starting with the input ⊥.
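
For reference, a plausible reconstruction of power2 (the page's exact definition isn't reproduced in this discussion); it is strict, since the if-then-else condition forces its argument:

```haskell
-- reconstructed definition (assumption: the chapter's power2 computes 2^n)
power2 :: Integer -> Integer
power2 n = if n == 0 then 1 else 2 * power2 (n - 1)
```

Applying power2 to ⊥ immediately hits the comparison n == 0, so power2 ⊥ = ⊥, which is the strictness the question was after.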

Recursive Data Types and Infinite Lists: iterate ones in Haskell


I think it's interesting to see a fixed-point iteration of ones in Haskell:

ones :: [Integer]
ones = 1 : ones

g :: [Integer] -> [Integer]
g x = 1 : x

x0 :: [Integer]
x0 = undefined

(f0:f1:f2:f3:f4:fs) = iterate g x0
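
The approximations produced this way can be observed finitely; a self-contained variant (same g as in the snippet, the helper firstOnes is mine):

```haskell
g :: [Integer] -> [Integer]
g x = 1 : x

-- fixed-point iteration of the ones functional starting from ⊥
approxs :: [[Integer]]
approxs = iterate g undefined

-- the n-th approximation is 1 : 1 : ... : ⊥ with n ones,
-- so taking at most n elements succeeds:
firstOnes :: Int -> [Integer]
firstOnes n = take n (approxs !! n)
```

Taking more than n elements of the n-th approximation would run into ⊥, which makes the chain structure of the approximations directly visible.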

Recursive Data Types and Infinite Lists: Exercise #4 possible solution


Hidden assumption


Expanding the definition of factorial:

at some point we have to assume that n + ⊥ = ⊥, e.g., in:

Is this something obvious?

(Bartosz (discusscontribs) 19:34, 7 September 2019 (UTC))Reply

Fortunately, almost exactly this gets explained a few paragraphs later, in the 'Strict Functions' section, which shows why n + ⊥ = ⊥.
Unfortunately, it does come a few paragraphs too late. Yet Another Haskell User (discusscontribs) 12:22, 24 August 2020 (UTC)Reply
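
For experimentation, a small probe (my helper; an error call stands in for ⊥, and exceptions only approximate it) confirming n + ⊥ = ⊥ for Integer addition:

```haskell
import Control.Exception (SomeException, evaluate, try)

-- True iff forcing the value raises an exception (our stand-in for ⊥)
isBottom :: Integer -> IO Bool
isBottom x = do
  r <- try (evaluate x) :: IO (Either SomeException Integer)
  return (either (const True) (const False) r)
```

isBottom (3 + undefined) gives True while isBottom (3 + 4) gives False, illustrating that (+) is strict in both arguments.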

Stumbling Block: sup function reads like argmax


The `sup` function for finding the supremum of a set is used in the "Convergence" section without explaining what it does.

On first reading, it looks like it just selects a maximum element from a (potentially infinite) set (i.e. `sup`=`argmax`), whereas in fact one of its critical properties is that it gets to draw its results from some broader set than that which it is passed.

The term 'least upper bound' is mentioned a few times, but doesn't help all that much for a first-time reader since if you don't know it has a precise technical meaning, it's very easy to just consider it a synonym for 'maximum'.

I suggest adding a few words to emphasize that the supremum is allowed to be from outside the set in question. Yet Another Haskell User (discusscontribs) 12:48, 21 August 2020 (UTC)Reply

'Interpretation as Least Fixed Point' subsection seems out of place


The whole of the 'Recursive Definitions as Fixed Point Iterations' section seems like its purpose is to assemble all the mathematical machinery needed to give recursive functions a solid denotation (it even says as much at the start - "Now that we have the means to describe partial functions, we can give an interpretation to recursive definitions. ").

Once we get to the end of reading the 'Convergence' subsection, it seems like the problem is solved; i.e. we can define the denotation of a recursive function to be the supremum of a particular infinite set of partial functions.

As such, it's quite confusing to read on and find that there is a whole other (quite long) subsection about fixed point iterations and continuity.

It's not at all clear to me why it's necessary to have this subsection at all - the preceding text seems to offer a satisfying enough explanation.

Is the subsection there just because it happens to be interesting that recursive functions satisfy a fixed-point equation? Or is the subsection necessary to prove some details (e.g. existence, uniqueness) which the previous explanation doesn't?

It would be helpful to either add a short intro explaining why we can't just stop after the 'Convergence' (+ 'Bottom includes non-termination') subsections, or to move the subsection into the 'Other Selected Topics' section where it wouldn't break the article's flow as much. Yet Another Haskell User (discusscontribs) 13:26, 21 August 2020 (UTC)Reply

After doing some further reading, I get the impression that:
- The supremum-based definition is actually OK for defining the denotation
- Interpretation as a fixed point is mainly of use for manually simplifying the denotation of particular expressions to something usable in proofs about (e.g.) program correctness
Is this the case? Yet Another Haskell User (discusscontribs) 12:28, 26 August 2020 (UTC)Reply

Is 'converges' the best term to use?


This may be nit-picking, but using the word 'converges' (e.g. "... the sequence converges to the factorial function..") seems to suggest that there is a Metric Space [1] involved.

I know next to nothing about metric spaces, but the general idea seems to be that 'points' (here, partial functions) need to have 'distances' between them (difference in definedness?). And that it's necessary to have all of this (plus some axioms) in order to be using the word 'converges' precisely.

While this may well be the case, it seems a bit beyond the scope of this article.

How about using a more informal word like 'approaches'? Yet Another Haskell User (discusscontribs) 14:10, 21 August 2020 (UTC)Reply

Monotonicity section could be clearer


More defined arguments will yield more defined values:

In particular, a monotone function g :: Integer -> Integer with g ⊥ = 1 is constant: g n = 1 for all n. Note that here it is crucial that 1 ⊑ 2 etc. don't hold.

made me overthink. It would be clearer to sketch the logic explicitly like

⊥ ⊑ n implies g ⊥ ⊑ g n, i.e. 1 ⊑ g n. Since a proper integer like 1 is maximal in the approximation order, any value at least as defined as it must equal it. Consequently, g n = 1 for all n.

L0mars01 (discusscontribs) 18:20, 14 May 2025 (UTC)Reply