Journal


Strict or Lazy

Daerax

Can someone please clarify this for me?

I have come to enjoy this thing - a boon of pure functional programming - called Lazy Evaluation. In general, lazy evaluation allows for a very flexible style of programming and lets one implement very close to the specification. The ability to simply create and handle infinite data structures is really quite useful, especially if one is implementing something mathematical -- say, illustrating Cantor's diagonalization theorem, or creating your own new number system such as the hyperreals (implementing the hyperreals will be my first proper project in Haskell; I'll document my progress as I go).
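
Here is the kind of thing I mean, as a quick Haskell sketch (names are mine):

nats :: [Integer]
nats = [0 ..]             -- an infinite list; laziness builds cells only on demand

firstTen :: [Integer]
firstTen = take 10 nats   -- [0,1,2,3,4,5,6,7,8,9]; only ten cells are ever forced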

Edited in: [As a simple example of the concept, I ran across a thread in the C# Workshop where I learned that for boolean expressions, OR, AND, etc. are evaluated lazily when the conditional operators || and && are used, as opposed to strictly when the logical operators | and & are used.]
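
Haskell's (&&) behaves like the conditional operator by definition; a small sketch of the idea (myAnd and demo are made-up names):

myAnd :: Bool -> Bool -> Bool
myAnd False _ = False   -- the right operand is never inspected: short-circuit
myAnd True  y = y

demo :: Bool
demo = myAnd False (error "never evaluated")   -- returns False; no error is raised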

But apparently Lazy Evaluation is bad? In general, I mean, not with respect to the above example. It makes your program slower and harder to reason about, I hear. It also uses a lot of memory.

But I do not understand how a method for deferred evaluation creates slower programs. Maybe something to do with the compiler? As for being harder to reason about, I can only assume that refers to the compiler and its ability to make optimizations on your code, since it won't know what happens in certain code sequences. I do not think it means harder for the person, since lazy evaluation allows your code to stay closer to your human reasoning (assuming you have been training it to reason efficiently).

However, the language Clean is lazy and yet still manages to produce quite fast code. Does lazy evaluation make your program require more memory? Explanations, anyone? I can only guess that the real reason for avoiding laziness is a want of freedom: lazy evaluation forces one to program in a descriptive, functional manner, something that does not lend itself well to all problems.




Recommended Comments

Your language does not have to have lazy evaluation to support infinite data structures. You just need to support coinductive datatypes as well as inductive datatypes. Charity does this.

The type of finite lists is something like this:


data List a where
  Nil  :: List a
  Cons :: a -> List a -> List a

And the type of streams is something like this:


codata Stream a where
  Head :: Stream a -> a
  Tail :: Stream a -> Stream a


An inductive/algebraic type is defined by its constructors and is an initial algebra for an endofunctor. A coinductive/coalgebraic type is defined by its destructors and is a final coalgebra for an endofunctor.
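
Roughly, in Haskell (a sketch, with primed names to avoid clashing with the Prelude): the inductive type comes with a fold that consumes values, and the coinductive type comes with an unfold that produces them.

data List' a = Nil' | Cons' a (List' a)

-- catamorphism: collapse a finite list along the shape of its constructors
foldList :: b -> (a -> b -> b) -> List' a -> b
foldList nil _    Nil'         = nil
foldList nil cons (Cons' x xs) = cons x (foldList nil cons xs)

data Stream' a = Stream' a (Stream' a)

-- anamorphism: grow an infinite stream from a seed, dual to the fold above
unfoldStream :: (s -> (a, s)) -> s -> Stream' a
unfoldStream step s = let (x, s') = step s in Stream' x (unfoldStream step s')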

The problem with Haskell is that it does not distinguish between inductive and coinductive data. Like false and zero, streams and lists are different concepts and should have different types (here is a recent blog post on this). You could put them in a type class to allow writing code that operates on both, though.
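
Such a class might look like this (a hypothetical sketch; the class and method names are made up):

data Stream a = SCons a (Stream a)

class Sequential f where
  view :: f a -> Maybe (a, f a)

instance Sequential [] where
  view []       = Nothing
  view (x : xs) = Just (x, xs)

instance Sequential Stream where
  view (SCons x xs) = Just (x, xs)   -- a stream never runs out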


The cause of the efficiency problems is that a thunk (which is either the value, or a function used to get the value if it has not yet been evaluated) must be associated with each value, causing extra memory use and a decrease in speed. It is also harder to reason about the efficiency of code, because the evaluation order is less predictable. A tail-recursive implementation is usually the most efficient in a strict language, but that is not always so in a lazy language. Big-O-based reasoning is not as helpful either.
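
The canonical example (a sketch): foldl is tail recursive, yet without strictness it accumulates one big thunk.

import Data.List (foldl')

-- foldl builds the thunk ((((0+1)+2)+3)+ ...) and only collapses it at the
-- end, using O(n) memory; foldl' forces the accumulator at each step instead.
slowSum, fastSum :: Integer
slowSum = foldl  (+) 0 [1 .. 1000000]
fastSum = foldl' (+) 0 [1 .. 1000000]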

I suspected it would be you who replied. Once again, you enlighten me on my journey to self-learn computer science. I am thankful for your help.

Quote:
Original post by Rebooted
Your language does not have to have lazy evaluation to support infinite data structures. You just need to support coinductive datatypes as well as inductive datatypes. Charity does this.

The type of finite lists is something like this:


data List a where
  Nil  :: List a
  Cons :: a -> List a -> List a

And the type of streams is something like this:


codata Stream a where
  Head :: Stream a -> a
  Tail :: Stream a -> Stream a


Charity. Is that language still being actively developed? There seems to be no community for it, scant documentation, and some of the people who were working on it don't seem to have updated their webpages in 10 years or more.

Quote:
An inductive/algebraic type is defined by its constructors and is an initial algebra for an endofunctor. A coinductive/coalgebraic type is defined by its destructors and is a final coalgebra for an endofunctor.


So they are duals of each other, then. I can only form a vague appreciation of this, based on knowing what a functor is and the concept of initial and terminal objects. The significance of it is lost on me for now, though I will clarify that in time. The wiki said something about fold being an initial algebra, so is there an arrow from fold to every algebraic type?

Quote:
The problem with Haskell is that it does not distinguish between inductive and coinductive data. Like false and zero, streams and lists are different concepts and should have different types (here is a recent blog post on this). You could put them in a type class to allow writing code that operates on both, though.


Why is this bad? I can see the difference between streams and lists; the difference is glaring even if subtle. Not many things can be the same as their opposite. For programming though, when does this distinction become important? How does Haskell do coinductive data? I can only guess that Haskell does not quite do both, but in fact implements something that is at least a homomorphism of the union of inductive and coinductive data.

Quote:
The cause of the efficiency problems is that a thunk (which is either the value, or a function used to get the value if it has not yet been evaluated) must be associated with each value, causing extra memory use and a decrease in speed. It is also harder to reason about the efficiency of code, because the evaluation order is less predictable. A tail-recursive implementation is usually the most efficient in a strict language, but that is not always so in a lazy language. Big-O-based reasoning is not as helpful either.


Big Oh. Oh, that kind of reasoning. I hope I never have to write fast programs that require me to know it. Looks dull.

In what kind of situations will lazy evaluation compromise the efficiency of tail recursion?

Quote:
I suspected it would be you who replied. Once again, you enlighten me on my journey to self-learn computer science. I am thankful for your help.

Thanks [smile]

Quote:
Charity. Is that language still being actively developed? There seems to be no community for it, scant documentation, and some of the people who were working on it don't seem to have updated their webpages in 10 years or more.
No, I don't think anyone is still developing it. It had some good ideas that I think will end up in future languages though.

Quote:
The wiki said something about fold being an initial algebra, so is there an arrow from fold to every algebraic type?
Every algebraic type has a fold/catamorphism, which acts as the destructor for breaking down values you have built up from the constructors. For lists, this is foldr. Charity generates the fold for an algebraic type automatically, and pattern matching is just a special case of fold; it is syntactic sugar. (If you add general recursion as in Haskell, pattern matching becomes the more primitive operation, because folds can only express structurally recursive functions.) Coalgebraic types, on the other hand, come with unfold operations for building up values that you will later break down using the destructors.
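
Concretely, in Haskell (standard functions; the examples are sketches):

import Data.List (unfoldr)

-- foldr is the catamorphism for lists: it uniformly replaces (:) and [].
size :: [a] -> Int
size = foldr (\_ n -> 1 + n) 0

-- unfoldr is the corresponding unfold: it grows a list outward from a seed.
countdown :: Int -> [Int]
countdown = unfoldr (\n -> if n <= 0 then Nothing else Just (n, n - 1))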

Quote:
Why is this bad? I can see the difference between streams and lists; the difference is glaring even if subtle. Not many things can be the same as their opposite. For programming though, when does this distinction become important?
That blog post gives some motivation. Basically, when you write a function that works on a list, you are usually writing a function that works on only one of lists or streams. Lists and streams have different interfaces.


Quote:
How does Haskell do coinductive data? I can only guess that Haskell does not quite do both, but in fact implements something that is at least a homomorphism of the union of inductive and coinductive data.
I just mean that, for example, it bundles both streams and lists into one list type. I don't mean you can represent every coalgebraic type in Haskell.
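
For instance, the single list type covers both uses (a quick sketch):

finite :: [Int]
finite = [1, 2, 3]   -- inductive use: the list ends in []

ones :: [Int]
ones = 1 : ones      -- coinductive use: an infinite "stream", legal only under laziness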

Quote:
In what kind of situations will lazy evaluation compromise the efficiency of tail recursion?

Basically, tail recursion is not optimized out because the tail call is not immediately evaluated. Instead, a thunk is built that makes the call when it is forced.
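
A sketch of the problem and the usual fix (assuming GHC's BangPatterns extension):

{-# LANGUAGE BangPatterns #-}

sumTo, sumTo' :: Integer -> Integer

sumTo = go 0
  where
    go acc 0 = acc
    go acc n = go (acc + n) (n - 1)    -- tail call, but (acc + n) stays a thunk

sumTo' = go 0
  where
    go !acc 0 = acc                    -- !acc is forced at every step
    go !acc n = go (acc + n) (n - 1)   -- constant-space loop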


Quote:
Also, what are your thoughts on Clean? Cheers!
I've not used it, but it looks nice: basically the same as Haskell, except that it uses uniqueness types rather than the IO monad to support effects while keeping referential transparency. I don't see a reason to switch to it, though.

Uniqueness types are a form of substructural types (as in substructural logics, like linear logic), which are another important concept that will probably end up in future languages. Some really impressive stuff -- like type systems which can verify code is free of deadlock -- is possible through substructural typing.

Quote:
Original post by Rebooted
Quote:
How does Haskell do coinductive data? I can only guess that Haskell does not quite do both, but in fact implements something that is at least a homomorphism of the union of inductive and coinductive data.
I just mean that, for example, it bundles both streams and lists into one list type. I don't mean you can represent every coalgebraic type in Haskell.


I know you meant that; that's why I said homomorphism and not isomorphism :D.

Quote:
Quote:
In what kind of situations will lazy evaluation compromise the efficiency of tail recursion?

Basically, tail recursion is not optimized out because the tail call is not immediately evaluated. Instead, a thunk is built that makes the call when it is forced.


Thanks for the explanation.

Quote:
Quote:
Also, what are your thoughts on Clean? Cheers!
I've not used it, but it looks nice: basically the same as Haskell, except that it uses uniqueness types rather than the IO monad to support effects while keeping referential transparency. I don't see a reason to switch to it, though.

Uniqueness types are a form of substructural types (as in substructural logics, like linear logic), which are another important concept that will probably end up in future languages. Some really impressive stuff -- like type systems which can verify code is free of deadlock -- is possible through substructural typing.


Well, monads in Haskell are a bit unnatural and fairly different from their category-theoretic motivations. They do not quite fit in the language and seem to have been shoehorned in/bolted on, like objects onto C in C++. I don't know anything about uniqueness typing, but perhaps it is a more natural means for side effects? Also, Clean seems to produce fast and efficient code, especially compared to Haskell. Those seem fair reasons to take it up.

I'm pretty fond of monads in general - Either and Maybe, for example, provide a very neat way of dealing with errors, and the "DSLs" built with monads for things like logic programming and parsing are also very nice. You don't need language support for that, though, and Haskell has none except for the do notation. Clean has type classes, so you could easily write a Monad class and definitions for Either, Maybe, etc. You could do it in OCaml too, using functors rather than type classes.
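
To make the error-handling point concrete, here is a small Maybe sketch: the first failing step aborts the whole chain.

safeDiv :: Int -> Int -> Maybe Int
safeDiv _ 0 = Nothing             -- division by zero becomes a recoverable failure
safeDiv x y = Just (x `div` y)

calc :: Int -> Int -> Int -> Maybe Int
calc a b c = do
  q <- safeDiv a b                -- if this yields Nothing, the rest is skipped
  safeDiv q c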

I don't have any love for the special built-in IO monad, though; it does feel sort of bolted on. Maybe substructural typing or comonads would be better for effects.

BTW, what are monads used for in category theory?

It is true that not all monads we meet in Haskell are for the purpose of side effects. And you are right, monads are just structures; I wonder if I could implement them in Nemerle. Do not get me wrong, I think the use of monads in Haskell is a brilliant and elegant solution. It just still seems like a loophole kind of solution. For the purposes of side effects, I still feel they are a bit displaced. The monads of category theory, though, are a bit more general than those in Haskell. You have undoubtedly met "monad-like things" much earlier than you think, probably as early as grade school.

A monoid structure is formed of an object paired with unit and multiplication morphisms (the multiplication is conceptually similar to a cartesian pairing). The natural numbers are a monoid object in the category of sets, with the unit picking out 0 and the multiplication given by addition, + : N × N → N. Monads are a similar structure, save that our category is one of endofunctors (functors from a category to itself) and our morphisms are natural transformations (mappings between functors which preserve shape/structure). Monoids, and monads especially, are important in that they tell us how the arrows of a category can modify each other while maintaining structure within the category. Monoidal categories are a generalization of cartesian closed categories in which the restriction of cartesian closure is removed. The study of monads can be seen as a generalization of abstract algebra. In terms of adjoints, they are a way to relate the structure of different categories.
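
In Haskell terms, the naturals-under-addition example might be sketched with the standard Monoid class:

import Data.Monoid (Sum(..))

-- mempty plays the unit (0) and (<>) plays the multiplication (+)
total :: Sum Integer
total = Sum 1 <> Sum 2 <> Sum 3   -- Sum {getSum = 6}

zero :: Sum Integer
zero = mempty                     -- Sum {getSum = 0}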

In Haskell, monads are really strong monads, and the category is cartesian closed. You can interpret monads in terms of list comprehensions in Haskell. The unit and multiplication (join) transformations are not precisely the categorical ones; they are only such up to natural isomorphism. So they are more or less the same things, but not quite (for example, they lean on polymorphic functions, whereas in category theory natural transformations are simpler: merely morphisms between functors). By interpreting the lambda calculus over monads, one can construct sections of the language where functions are not merely from values to values but from values to computations, passing back and forth not just values but also state, and thereby emulating side effects. For example, the identity natural transformation can be used to assign a value.
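
The categorical unit and multiplication have direct Haskell counterparts; a sketch of how bind falls out of them:

import Control.Monad (join)

-- return :: a -> m a        -- the unit (eta)
-- join   :: m (m a) -> m a  -- the multiplication (mu)

bindViaJoin :: Monad m => m a -> (a -> m b) -> m b
bindViaJoin m f = join (fmap f m)   -- (>>=) recovered from fmap and join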

