Lisp Pimpin'

387 comments, last by Mercury 18 years, 7 months ago
Quote:Original post by flangazor
What have we learned here?


I think Lisp should be evaluated in the context of modern, high-level programming languages like ML, rather than comparing it to C++.

Compared to languages like ML, Haskell, Ruby and so on, Lisp has a bigger user base, more existing code and a wider range of free compilers. In particular, Lisp is probably better for embedding DSLs where performance is important but not critical (i.e. faster than a naive interpreter but not as fast as a real compiler).

However, Lisp does not statically check code as much as languages like ML and Haskell, it can be considerably slower, it is harder to optimise, arguably harder to read and it is considerably more verbose (particularly for code of comparable performance).
Quote:Original post by random_thinker
[...]Each implementation seems to have its own ffi[...]
This is because there is no such thing as a 'Common Lisp FFI'. If you want a unified interface to all the implementation-specific FFIs, you can use the previously mentioned UFFI library

"Walk not the trodden path, for it has borne its burden." -John, Flying Monk
Quote:Original post by jdh30
Quote:Original post by flangazor
What have we learned here?


I think Lisp should be evaluated in the context of modern, high-level programming languages like ML, rather than comparing it to C++.

Compared to languages like ML, Haskell, Ruby and so on, Lisp has a bigger user base, more existing code and a wider range of free compilers. In particular, Lisp is probably better for embedding DSLs where performance is important but not critical (i.e. faster than a naive interpreter but not as fast as a real compiler).

However, Lisp does not statically check code as much as languages like ML and Haskell, it can be considerably slower, it is harder to optimise, arguably harder to read and it is considerably more verbose (particularly for code of comparable performance).


It's true that a declarative functional language such as Lisp or Haskell shouldn't be compared to C++. C++ is a systems programming language where speed is critical, whereas a high-level language such as Haskell is aimed more at expressing the computation and at thinking out a correct algorithm. Since it's declarative, by definition what goes on underneath isn't really what interests us... Results are what interest us, and so do algorithms...

As for Lisp I don't know, but Haskell has implementations that can compile. With GHC (the Glasgow Haskell Compiler) you can compile Haskell programs, but Haskell can also be interpreted (and it is in this form that it is most used) in GHCi, Hugs, NHC and a couple more compilers/interpreters. I also think that NHC can compile, but I'm not sure.

If anyone wants to see Haskell....
Quote:Original post by jdh30
Quote:Original post by flangazor
What have we learned here?

I think Lisp should be evaluated in the context of modern, high-level programming languages like ML, rather than comparing it to C++.

I'd compare it to modern, high-level programming languages like Java, or Python.
Quote:
However, Lisp does not statically check code as much as languages like ML and Haskell,

Of course that's neither true nor false. Languages don't check anything; implementations do. Some implementations of Lisp perform more checking than others.

The best you can say is that ML requires ML implementations to perform more static type checking than Common Lisp. That's true, but it also imposes limitations.

The relaxed attitude of the Lisp standard towards compile-time checking allows simple implementations to remain simple, but also allows for efficiency-oriented implementations to provide comprehensive static checks.

Additionally, some of the checks that ML performs don't make sense from the perspective of Lisp. Many 'type errors' that ML detects simply aren't errors in Lisp.
Quote:
it is harder to optimise,

I don't know about that.

I'm a Lisp novice who knows next-to-nothing about raytracing or writing efficient Lisp code (or writing efficient code in general for that matter), and I optimised the ray tracer in several hours spread over just over a week.

And the final optimisations that actually had an effect (inlining and using scratch variables) are simple rule-of-thumb optimisations which, had I any formal training in Lisp, I would certainly already have known about.
Quote:
arguably harder to read

It's only harder to read for someone who finds ML easier to read. I find myself in the opposite situation.

Additionally, it's inarguably harder to modify ML's syntax if you find it doesn't intuitively fit the particular program you're trying to write.
Quote:
and it is considerably more verbose (particularly for code of comparable performance).

My final tracer has only 1.1× as many words in it as the equivalent OCaml tracer, and if I follow the same hideous code layout policy as you observed in your code, only 1.6× as many lines.

Furthermore, it is well known that Lisp's conciseness wins out for large programs. This is precisely because of macros. Functions compress certain expressions into (generally) far shorter expressions, without (usually) sacrificing clarity. Macros may be viewed as generalisations of this principle. They can be applied to a wider class of expressions. In particular, macros can be applied to factor out expressions which are dependent upon context.
Quote:Original post by Extrarius
Quote:Original post by random_thinker
[...]Each implementation seems to have its own ffi[...]
This is because there is no such thing as a 'Common Lisp FFI'. If you want a unified interface to all the implementation-specific FFIs, you can use the previously mentioned UFFI library


Agreed, but the only 'semi-cross-platform UFFI' is either the Allegro or Lispworks one. The others are Linux-based, and the UFFI is not available for CLISP. But Allegro and Lispworks have their own complete FFIs, so we are really back to square one. In essence, you're better off using the Allegro or Lispworks FFI, as these are as 'cross-platform' as you'll get. The UFFI is really 'cross-implementation' rather than 'cross-platform'.

[Edited by - random_thinker on August 28, 2005 5:49:22 AM]
--random_thinkerAs Albert Einstein said: 'Imagination is more important than knowledge'. Of course, he also said: 'If I had only known, I would have been a locksmith'.
Quote:Original post by k0ns0l3It's true that a declarative functional language such as lisp or Haskell shouldn't be compared to C++. That's because C++ is a systems programming language where speed is critical...

While this is true, it's hardly deniable that C++ isn't only used for low-level programming; at least I don't consider today's applications and (PC) games low-level.
E.g. the Boost library, over which many C++ coders are drooling all the time, is for the most part trying to pimp C++ up to a higher level. Just have a look at some "C++ vs <higher language>" discussion; there is always the argument that you can do everything, and much more, and much better, with funky C++ libraries.
------------------------------------------------------------Jawohl, Herr Oberst!
Quote:Original post by random_thinker
Agreed, but the only 'semi-cross-platform UFFI' is either the Allegro or Lispworks one. The others are Linux-based, and the UFFI is not available for CLISP. But Allegro and Lispworks have their own complete FFIs, so we are really back to square one. In essence, you're better off using the Allegro or Lispworks FFI, as these are as 'cross-platform' as you'll get. The UFFI is really 'cross-implementation' rather than 'cross-platform'.

ISTR that there are some patches to make CLISP work with UFFI. There might be more info about it on the CLISP mailing list. Then there are projects like CFFI which look promising and support a wide range of implementations. I get the feeling that CL is just starting to take off with respect to its library base, and especially foreign libraries.
---from www.yellow-hut.com/blog
Quote:Original post by Nathan Baum
Quote:Original post by jdh30
Quote:Original post by flangazor
What have we learned here?

I think Lisp should be evaluated in the context of modern, high-level programming languages like ML, rather than comparing it to C++.

I'd compare it to modern, high-level programming languages like Java, or Python.


Python makes me cringe. I'd probably prefer Lisp to Java because it is roughly as fast but more expressive, and probably not much more verbose.

Quote:
Quote:
However, Lisp does not statically check code as much as languages like ML and Haskell,

Of course that's neither true nor false. Languages don't check anything; implementations do. Some implementations of Lisp perform more checking than others.

The best you can say is that ML requires ML implementations to perform more static type checking than Common Lisp. That's true, but it also imposes limitations.


Yes. In the case of OCaml, the implementation implicitly defines the language.

Quote:
The relaxed attitude of the Lisp standard towards compile-time checking allows simple implementations to remain simple, but also allows for efficiency-oriented implementations to provide comprehensive static checks.


Unfortunately, it is so difficult for a compiler to optimise Lisp code to run as fast as many other languages that none of the free Lisp compilers are capable of it.

Stalin can do it for Scheme code. However, it takes 1,000x longer to compile than ocamlopt.

Quote:
Additionally, some of the checks that ML performs don't make sense from the perspective of Lisp. Many 'type errors' that ML detects simply aren't errors in Lisp.


Yes. Static type checking is definitely a trade-off.
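To make the trade-off concrete, here's a small OCaml sketch (the `value` type and its constructor names are illustrative, not from the thread): a list mixing ints, strings and floats is a compile-time error in OCaml unless the possibilities are encoded in a variant up front, whereas in Lisp `(list 1 "two" 3.0)` is perfectly legal.

```ocaml
(* In Lisp, (list 1 "two" 3.0) is legal: lists are heterogeneous.
   In OCaml, [1; "two"; 3.0] is rejected at compile time, so the
   possibilities must be declared in a variant type up front. *)
type value =
  | Int of int
  | Str of string
  | Flt of float

let mixed = [Int 1; Str "two"; Flt 3.0]

(* The pay-off: the compiler forces every consumer of [value]
   to handle all three cases, or warns about the ones it misses. *)
let describe = function
  | Int n -> string_of_int n
  | Str s -> s
  | Flt f -> string_of_float f

let () = List.iter (fun v -> print_endline (describe v)) mixed
```

The Lisp version needs no such declaration; the OCaml version catches a missing case at compile time. Which is the "error" depends on which language's perspective you take.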

Quote:
Quote:
it is harder to optimise,

I don't know about that.

I'm a Lisp novice who knows next-to-nothing about raytracing or writing efficient Lisp code (or writing efficient code in general for that matter), and I optimised the ray tracer in several hours spread over just over a week.


I think we should "compare time spent to optimise the code to run at a given speed". I can well believe that it is just as easy to apply any given optimisation to Lisp code as it is to any other program. Indeed, it is probably easier thanks to macros.

However, people optimise code to make it run fast enough. So we need to compare the performances of the different implementations.

The first working Lisp implementations were 10-100x slower than my first working OCaml implementation.

I applied a few low-level optimisations to the OCaml to get it into its current state. I manually unboxed a sphere type, inlining "vec * float" into the variant type constructors "Sphere" and "Group". I used records instead of 3-tuples to represent vectors as I know ocamlopt will unbox float records but not tuples.
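The record-vs-tuple point can be sketched like this (the `vec` type and function names are illustrative, not the actual tracer code): ocamlopt stores a record whose fields are all floats as a flat block of unboxed doubles, while each component of a `float * float * float` tuple is boxed separately.

```ocaml
(* A record whose fields are all floats is stored by ocamlopt as a
   flat block of unboxed doubles; a (float * float * float) tuple
   boxes each float, costing an extra indirection per component. *)
type vec = { x : float; y : float; z : float }

let dot a b = a.x *. b.x +. a.y *. b.y +. a.z *. b.z
let add a b = { x = a.x +. b.x; y = a.y +. b.y; z = a.z +. b.z }

let () =
  let v = add { x = 1.0; y = 2.0; z = 3.0 } { x = 0.0; y = 1.0; z = 0.0 } in
  Printf.printf "%g\n" (dot v v)
```

The source-level change is trivial (`v.x` instead of pattern-matching a tuple), which is why this counts as a low-level optimisation rather than a restructuring.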

Of the Lisp implementations, only two (both by Juho Snellman, one of the authors of the SBCL compiler) are faster than my first OCaml version on either of my machines.

Given that only a compiler writer has been able to write a Lisp implementation as fast as the (basically unoptimised) OCaml, I'd say it is a lot harder to optimise Lisp to achieve a given speed than it is to optimise OCaml.

Quote:
And the final optimisations that actually had an effect (inlining and using scratch variables) are simple rule-of-thumb optimisations which, had I any formal training in Lisp, I would certainly already have known about.


I've made similar optimisations to the OCaml and it can be made a lot faster as well.

Quote:
Quote:
arguably harder to read

It's only harder to read for someone who finds ML easier to read. I find myself in the opposite situation.


Ok. Lisp has prefix operators but OCaml has separate "+" and "+." operators.
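For what it's worth, the operator difference looks like this in practice (a minimal sketch): OCaml separates integer `+` from float `+.` at the type level, while Lisp's prefix `+` is one generic, variadic operator.

```ocaml
(* OCaml distinguishes integer and float arithmetic by operator:
   + works on int, +. on float, and mixing them is a compile error.
   Lisp's prefix (+ 1 2 3.0) is a single generic, variadic operator. *)
let isum = 1 + 2            (* int addition *)
let fsum = 1.0 +. 2.0       (* float addition; 1 + 2.0 would not compile *)
let () = Printf.printf "%d %g\n" isum fsum
```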

Quote:
Additionally, it's inarguably harder to modify ML's syntax if you find it doesn't intuitively fit the particular program you're trying to write.


True.

Quote:
Quote:
and it is considerably more verbose (particularly for code of comparable performance).

My final tracer has only 1.1× as many words in it as the equivalent OCaml tracer, and if I follow the same hideous code layout policy as you observed in your code, only 1.6× as many lines.


What about bytes?

Quote:
Furthermore, it is well known that Lisp's conciseness wins out for large programs.


I believe Lisp and OCaml both have this advantage over C. That doesn't make for a clear winner between Lisp and OCaml though.

Quote:
This is precisely because of macros. Functions compress certain expressions into (generally) far shorter expressions, without (usually) sacrificing clarity. Macros may be viewed as generalisations of this principle. They can be applied to a wider class of expressions. In particular, macros can be applied to factor out expressions which are dependent upon context.


Higher-order functions do that in OCaml. I don't know why Lisp programmers prefer macros to HOFs but I've heard that the vast majority of uses of macros are equivalent to HOFs.

Macros can do more than HOFs, of course. But I'd bet you write functions vastly more often than you define new syntaxes.
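A typical case where OCaml uses a HOF for what Lisp idiomatically does with a macro (the `with_out_file` name here is illustrative, not a stdlib function): Lisp's `with-open-file` macro wraps a code body in open/cleanup logic, while in OCaml the "body" is simply a closure passed as an argument.

```ocaml
(* Lisp's with-open-file is a macro that wraps a body in
   open/cleanup code.  In OCaml the same factoring is a
   higher-order function: the "body" is a closure argument. *)
let with_out_file name f =
  let oc = open_out name in
  (* Fun.protect guarantees the channel is closed even if f raises. *)
  Fun.protect ~finally:(fun () -> close_out oc) (fun () -> f oc)

let () =
  with_out_file "hello.txt" (fun oc ->
    output_string oc "hello\n")
```

The macro version can additionally capture variables or rewrite the body's syntax, which a HOF cannot; but for this very common "wrap a body in setup/teardown" pattern the two are equivalent.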

The only data point I have for long Lisp vs OCaml programs is a pair of Mathematica implementations.

Richard Fateman wrote a Mathematica implementation (MockMMA) in 20kLOC of Lisp. It was roughly 10x less code and 10x faster than the real Mathematica.

I wrote one in OCaml. The first working version (a simple interpreter) was only 800LOC but it was 5x slower than the real Mathematica. I then wrote a 2kLOC JIT compiler that is 600x faster than the real Mathematica.

The functionality of my code is roughly the same as that of Richard's, but mine is 10x shorter (in lines) and vastly faster.

So I see no evidence that Lisp is asymptotically more concise than OCaml, as you imply.
Real LISP Pimpin'

