Lisp Pimpin'

Started by
387 comments, last by Mercury 18 years, 8 months ago
Quote:Original post by CoffeeMug
Quote:Original post by Tron3k
Yes, good point, but the thing is that I am not fully revealing my cards yet: See, the REAL point of me making an MMORPG is so that I can have a virtual world in which to create little AI units. I want to experiment with some AI ideas I have. Hence, Lisp [wink]

How is that related to boilerplate networking code?
Firstly, it is related in that I don't want to use two languages at once. It is too much trouble.

Secondly, I'm a do-it-yourselfer. I want to build everything up from pure Win32 API and OpenGL.

Thirdly, I'm doing this project to popularize Lisp and bring it into the mainstream. My major tagline is to say, "Yes, I did all this, the whole client and server, ALL IN LISP!"

And finally, this is not a *serious* MMORPG project. I can see 30 users online as the absolute maximum - and that only if I'm lucky! This project is for my own fun. How hard is Winsock, anyway? (Not very hard, as I know from my previous online RPG. [grin])
“[The clergy] believe that any portion of power confided to me, will be exerted in opposition to their schemes. And they believe rightly: for I have sworn upon the altar of God, eternal hostility against every form of tyranny over the mind of man” - Thomas Jefferson
Quote:Original post by Erik S. Andersson
Quote:Original post by Nathan Baum
1. Functions can be redefined at runtime. This means they must use a consistent calling convention, which means that if you pass scalar values (i.e. integers or floats) to a function, it must be wrapped in a pointer. You can remove this inefficiency by declaring the function inline, but then you also lose the utility of redefining it.

This was something I was thinking about when I wrote a simple Scheme. Don't you think it possible to save dependency links between functions and recompile functions that depend on a function that's been changed?

That's entirely possible, and is something I've also considered. Function redefinition will be an efficiency issue in any implementation which doesn't practice dynamic recompilation, and since I don't know of any implementations which do, I'd tend to avoid doing it unless I particularly needed to.

Having said that, the Common Lisp standard allows function calls to be inlined when they appear in the file which contains the function's definition. In principle, I didn't need to inline any of the functions in my tracer.
Quote:
In many cases the changes you make don't affect the "signature" of the function anyway,

I'm not keen on redefining functions at runtime, unless perhaps they're imported from an external library, so I don't have any clue as to how often a function changes its signature. I'd guess that you're absolutely right, though.
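For what it's worth, redefinition itself is easy to see at any Common Lisp REPL (the function names here are invented for illustration). Callers pick up the new definition because calls normally go through the symbol's function cell, which is precisely the indirection that inlining removes:

```lisp
(defun scale (x) (* x 2))      ; initial definition
(defun caller (x) (scale x))   ; calls SCALE through its function cell

(caller 10)                    ; => 20

(defun scale (x) (* x 3))      ; redefine SCALE at runtime

(caller 10)                    ; => 30: CALLER sees the new definition,
                               ; unless SCALE had been declaimed inline
```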
Quote:Original post by Nathan Baum
Quote:Original post by jdh30
Quote:
For this simple example, you might wonder why we wouldn't just have sum and stable-sum, say. The power of the 'special' variable is that the call to sum doesn't need to be directly within the let. If we put a call to some function which does, amongst other things, summing, that sum will be done with accurate semantics. That means we can use the same function for fast and accurate calculation, and that function doesn't need to know which we're doing.

How is that different from having a global boolean determining which of fast and accurate is used inside "sum"?

It depends what you mean.

If you're referring to the fact that *float-semantics* isn't a boolean, then the difference is that there could be other semantics. There might be a :very-accurate semantic which converts all the floats to rationals before summing them, and then converts the result back to a float.

If you're asking "why not just a global variable, of whatever type", then *float-semantics* is a global variable.
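Sketching such a :very-accurate semantic is straightforward, since Common Lisp has exact rational arithmetic built in (the function name below is made up for illustration):

```lisp
;; Hypothetical :very-accurate summation: convert each float to an
;; exact rational, sum without any rounding error, then coerce back.
(defun sum-very-accurate (floats)
  (float (reduce #'+ (mapcar #'rational floats) :initial-value 0)
         1.0))
```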


I think that is roughly equivalent to using a global polymorphic variant in OCaml:

let float_accuracy = `accurate
...
let sum = match float_accuracy with
    `accurate -> ...
  | `fast -> ...


The "float_accuracy" is immutable, of course. If you wanted to change it inside a function then you'd probably pass it as state. Haskell can probably handle this more elegantly using monads.

Quote:
The main advantage of special variables over 'normal' global variables from other languages is convenience. When a special variable appears in a let binding list, the old value is stored away and is guaranteed to be restored when the let returns.

Assuming that *foo* is a special variable,

(let ((*foo* bar))
  (do-something))


is a convenient shorthand for

(let ((old-*foo* *foo*))
  (setf *foo* bar)
  (unwind-protect
      (do-something)
    (setf *foo* old-*foo*)))


IMHO, it's a misfeature that one uses let to do this, since the semantics of let on a special variable are entirely different from the semantics of let on a normal variable. Lisp attempts to resolve this issue by recommending that special variables be named like *name*, and normal variables never be named like that. I'd prefer something like special-let or slet.
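A minimal REPL sketch of the behaviour in question (names invented for illustration):

```lisp
(defvar *foo* 1)               ; DEFVAR proclaims *foo* special
(defun show-foo () *foo*)      ; reads the current dynamic binding

(show-foo)                     ; => 1
(let ((*foo* 2)) (show-foo))   ; => 2: the rebinding is visible inside
                               ;       SHOW-FOO, not just lexically
(show-foo)                     ; => 1: old value restored on exit
```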


Yes, that is a bit weird. In OCaml, the "let" keyword is used for both statements and definitions:

# let x=2;;
x : int = 2
# let x=2 in x;;
- : int = 2


That can get confusing for the same reason.

Quote:
Quote:
Polymorphic variants are not used much in real OCaml programs. A notable exception is their use in "shorthand" code. For example, the lablGL bindings to OpenGL replace the huge C enumeration GL_* with polymorphic variants. These are then succinct, type inferred and statically checked. Is there a simple Lisp equivalent that can do that?

That depends upon exactly what the variants are like. There is a simple Lisp way to check for invalid uses of GL_* constants at compile time.

(deftype gl:primitive-type ()
  '(member gl:POINTS
           gl:LINES gl:LINE-STRIP gl:LINE-LOOP
           gl:TRIANGLES gl:TRIANGLE-STRIP gl:TRIANGLE-FAN
           gl:QUADS gl:QUAD-STRIP
           gl:POLYGON))

(defun gl:begin (type)
  (declare (gl:primitive-type type))
  ...)

(gl:begin 'gl:LINE)


Here, SBCL and CMUCL warn that 'GL:LINE isn't one of the members of the type. It can be more sophisticated than that, because a particular symbol isn't restricted to belonging to just one type. This is particularly useful here, because some of the GL_ constants are used in different ways by different functions.


Yes, that is exactly the benefit provided by polymorphic variants. I guess another advantage of polymorphic variants is the ability to carry data. For example, lablGL's GlTex module:

type parameter =
  [ `border_color of Gl.rgba
  | `generate_mipmap of bool
  | `mag_filter of [ `linear | `nearest ]
  | `min_filter of GlTex.filter
  | `priority of Gl.clampf
  | `wrap_s of GlTex.wrap
  | `wrap_t of GlTex.wrap ]


Can you do the equivalent in Lisp, statically checked?

Quote:
An example of a constant with two even more unrelated meanings is FRONT, which is both a culling mode and a color buffer. In Lisp, gl:FRONT can easily be a member of the gl:culling-mode type and the gl:color-buffer type.


Yes. Similarly with lablGL that is `front, which appears in more than one type.

Quote:
Quote:
Quote:
With just tuples, the compiler couldn't detect that you were accidentally using a "float * float * float * float * float" (a torus -- x; y; z; outer radius; inner radius) as though it were a "float * float * float * float * float" (a cylinder -- x; y; z; radius; height).

Yes, exactly. This is precisely how you leverage static type checking.

It appears you're saying that you use static type checking to make sure you can tell the difference between the two. It isn't static type checking which does that: just type checking in general.


No. I am saying that ML guarantees to statically verify the correctness of your code in this case. In contrast, Lisp may try to statically check the code but is perfectly within its rights to postpone the checking until run time, when it may or may not actually occur.

Quote:
Lisp would never confuse a torus for a cylinder if you'd been sensible and declared a type for each, and in general Lisp will warn you if it can determine at compile-time that you're using something as though it were a torus and a sphere.


ML will always catch such errors at compile time. You can work around the static type checking but it requires effort, so code is normally thoroughly checked. In some cases, it helps to sit down and think about how you could remove run-time checks.

Quote:
Quote:
One problem with records in OCaml is that field names can shadow each other:

# type vec2 = {x:float; y:float };;
type vec2 = { x : float; y : float; }
# type vec3 = {x:float; y:float; z:float };;
type vec3 = { x : float; y : float; z : float; }
# {x=1.; y=2.};;
Some record field labels are undefined: z


But that is the price you pay for brevity. So if I wanted to use records I'd divide the declarations into separate modules (Torus and Cylinder). Convention is to have a type called "t" in each module.

Lisp also has this limitation, in that accessors for different structures cannot have the same name. Like in OCaml, you need to use objects to do that. However, Lisp also allows a structure to include a single structure.


Ok. I think you can do something almost equivalent in SML with records but in OCaml you'd probably have to use objects. Unfortunately, objects are much less efficient than records.

Quote:
The included structure's slots are then available under the same name as the old slots.

(struct (vec2 (:conc-name nil))
  x y)

(struct (vec3 (:conc-name nil) (:include vec2))
  z)


You can only include a single structure. This means you avoid any tricky issues of where a slot might be stored, such as you would have with multiple-inheritance: you know the vec2 slots are stored before the vec3 slots. This makes it easier to optimise slot access.


Ah, ok.

Quote:
Quote:
Quote:
The real problem is that ML is being rude: in this aspect at least, it forces you to choose between efficiency and clarity. In Lisp, low-level optimisation is enabled not by changing the code, but by adding to it.

I'm not quite sure what the difference is here. Can you give examples of optimisations that require you to change the code in OCaml but add to it in Lisp?

None that work for me, unfortunately.


:-)

Quote:
In principle, a dynamic-extent declaration may allow a Lisp compiler to perform some optimisations that the OCaml compiler might not be able to do. Naturally, it doesn't work in any Lisp compiler I currently have access to.

That may not be true, though. My experimentation shows that OCaml is not as dumb about values with dynamic extent as you believe:

let rec times count fn =
  let vl = fn count in
  if count > 0 then times (count - 1) fn else vl;;

let p = ref 0;;

let rec zoo x = [ ((p := (!p + 1)); !p) ];;

let rec xoo x = let p = zoo x in p;;

times 1000000000 xoo;;

!p;;


In this code, I've had the list allocated by zoo go through several layers of indirection. Nonetheless, memory usage is constant. OCaml's GC doesn't appear to kick in until the computer's running out of memory, so it's not simply that the GC is very efficient: either the list is being recycled after each call to xoo (except the last), or the list is not being constructed at all. Either way, OCaml isn't being dumb.


I believe that is a consequence of generational garbage collection, rather than region analysis. The ordinary GC code invoked at the end of the function is looking and finding that the data can be deleted immediately. If you make the data live a little longer then it will probably go into the old generation and will suddenly be GC'd much less efficiently.

Quote:
On the hunch that the inefficiency in my implementation of the tracer is mainly caused by consing inside ray_sphere and intersect, I jostled things about so as to avoid said consing.

The particular changes are: ray_sphere now uses a scratch vector for displacing the ray to the sphere's origin; intersect only allocates one hit structure, aux scribbles over it as necessary. I suspect that OCaml does the same thing automatically: it can tell when a particular structure isn't being used any more and will reuse it if it can.


Again, I think this is done implicitly by an efficient GC rather than explicitly by program analysis. But the result is almost the same. I believe MLton has a much more sophisticated GC, so it may well be doing the kind of analysis that you're talking about. Also, the Scheme compiler Stalin does region analysis, IIRC.

Quote:
let rec f n l =
  if n > 0 then
    (match l with
     | []     -> f (n - 1) [n]
     | _ :: t -> f (n - 1) (n :: t))
  else l;;


OCaml executes this function in constant space, regardless of the initial value of n. I believe it knows that when "f" is called recursively, the list it passes will have no other references and can therefore be recycled when constructing the list for the next invocation. This doesn't require that anything be stored on the stack.

It appears to be a limitation of the Lisp implementation that it cannot perform the same optimisation; I can't see any reason why the semantics of the language would prevent it.


I get the impression that OCaml and MLton have much better GCs than the Lisp and Scheme compilers that I'm using. In OCaml's case, Lisp can probably compete. In MLton's case, I think the Lisp language is too stifling to let it compete. MLton's GC does an enormous number of heinously ingenious low-level optimisations that exploit all manner of high-level (particularly type) information about data.

Quote:
I've also implemented the specialised shadow intersection. Although it has an effect, it is not so pronounced as I might have hoped. Toggle +use-intersect2+ to nil if you want to compare times.

My current timings are:

Without shadow specialisation -- 3.6 seconds.

With shadow specialisation -- 3.2 seconds.

On my system, that's nearly one second faster than the original unoptimised OCaml program.


Eh? Are you saying that you have a Lisp program algorithmically equivalent to my 1st OCaml program that is outperforming OCaml?

I'll check it out...

Also, if you haven't already, could you post the double precision implementation? I couldn't do the translation for some reason.

Quote:
It doesn't appear to scale well. Assuming that the times here are in seconds, it's approximately a fourth of the speed of Java. Of course, if the times are in minutes, Lisp rules all. But I imagine that's wishful thinking.


Yes, those would be seconds. :-)

Quote:
Edit: Unless you mistyped on that page. You say you used 16x supersampling, but the source has 4x, and you say 4x on the Usenet threads. At 4x, it takes 36 seconds, which would again be roughly equivalent to the slowest OCaml implementation.


For ss=4, it's doing a 4x4 grid of rays for each pixel. That's 16x as many rays as pixels, or "16x supersampling" to me. I thought that was standard nomenclature but I may be wrong.
Quote:Original post by Nathan Baum
That's entirely possible, and is something I've also considered. Function redefinition will be an efficiency issue in any implementation which doesn't practice dynamic recompilation, and since I don't know of any implementations which do, I'd tend to avoid doing it unless I particularly needed to.


If over-invoked, dynamic recompilation can be a huge performance cost, of course.
Quote:Original post by jdh30
Yes, that is exactly the benefit provided by polymorphic variants. I guess another advantage of polymorphic variants is the ability to carry data. For example, lablGL's GlTex module:
type parameter =
  [ `border_color of Gl.rgba
  | `generate_mipmap of bool
  | `mag_filter of [ `linear | `nearest ]
  | `min_filter of GlTex.filter
  | `priority of Gl.clampf
  | `wrap_s of GlTex.wrap
  | `wrap_t of GlTex.wrap ]

Can you do the equivalent in Lisp, statically checked?

There are two feasible approaches.

The exact equivalent is quite possible, but one would need to use a macro to achieve equivalent conciseness, and you'd have to cope with the inefficiency of heap-allocated structures for the polymorphic variants with data.

The more orthodox Lisp solution would probably be to use keyword arguments.

(defun glTexParameter (target &key border-color generate-mipmap mag-filter
                                  min-filter priority wrap-s wrap-t)
  (declare (GLtextarget target)
           (GLrgba border-color)
           (bool generate-mipmap)
           (GLmagfilter mag-filter)
           (GLminfilter min-filter)
           (GLclampf priority)
           (GLwrap wrap-s)
           (GLwrap wrap-t))
  ...)


Then you'd call (glTexParameter GL_TEXTURE_2D :border-color "bad"). This might also allow for more concise code when setting lots of parameters at the same time, (glTexParameter GL_TEXTURE_2D :generate-mipmap t :priority 0.7 :wrap-s GL_REPEAT).

Using the OCaml-esque version, you'd call it like (glTexParameter GL_TEXTURE_2D (border-color "bad")), which would still generate a static warning, since "bad" couldn't be used to initialise a GLrgba, although you might say that e.g. (border-color "Dark Red") would be allowed.
Quote:
No. I am saying that ML guarantees to statically verify the correctness of your code in this case. In contrast, Lisp may try to statically check the code but is perfectly within its rights to postpone the checking until run time, when it may or may not actually occur.

In general, Lisp produces warnings at run time for type errors that ML would consider to be errors. Lisp is of course different in that it has a wider gamut of permissible types: not all Lisp expressions have an ML equivalent.

e.g. Lisp can't do 'full' static type inference on:

(defun foo (n l)
  (if (= n 0) l
      (foo (1- n) (list l))))


Neither can ML, but there

let rec foo n l = if n = 0 then l else foo (n - 1) [l];;


is considered erroneous.
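To make the contrast concrete, the Lisp version runs fine because the nesting depth of the result depends on a run-time value, which is exactly what Hindley-Milner inference cannot express:

```lisp
(defun foo (n l)
  (if (= n 0) l
      (foo (1- n) (list l))))

(foo 2 'a)   ; => ((A))   -- two levels of nesting
(foo 3 'a)   ; => (((A))) -- three: the result's "type" varies with N
```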
Quote:
I get the impression that OCaml and MLton have much better GCs than the Lisp and Scheme compilers that I'm using.

I believe SBCL and CMUCL have only acquired generational garbage collection fairly recently. It's likely their implementations are not as mature.
Quote:
Eh? Are you saying that you have a Lisp program algorithmically equivalent to my 1st OCaml program that is outperforming OCaml?

So long as +use-intersect2+ is set to nil, it's equivalent so far as I can tell.
Quote:
Also, if you haven't already, could you post the double precision implementation? I couldn't do the translation for some reason.

With my last program, just search/replace 'single-float' with 'double-float'. Here, double-float is roughly half a second slower.
Quote:
For ss=4, it's doing a 4x4 grid of rays for each pixel. That's 16x as many rays as pixels, or "16x supersampling" to me. I thought that was standard nomenclature but I may be wrong.

No, that makes sense now that you point it out. That means that my implementation does scale well: it's about the performance of your first OCaml program for both the benchmark scene and the larger scene.
Quote:Original post by jdh30
Quote:Original post by Nathan Baum
That's entirely possible, and is something I've also considered. Function redefinition will be an efficiency issue in any implementation which doesn't practice dynamic recompilation, and since I don't know of any implementations which do, I'd tend to avoid doing it unless I particularly needed to.

If over-invoked, dynamic recompilation can be a huge performance cost, of course.

Undoubtedly. I don't imagine that functions would be redefined in a tight loop, but in the event that it did happen, the recompiler would have to limit the rate of recompilation. If recompilations were exceeding the rate limit, then only necessary recompilations would be performed, whilst purely optimising recompilations would be deferred until the program had free cycles.

Necessary recompilation is defined as when the signature of the function changes, which will happen when an argument or the return value changes from being a raw primitive value to a pointer descriptor.

Whatever happens, the implementation would be well advised to display a debug warning if dynamic recompilation was happening 'suspiciously often'.
Quote:Original post by Tron3k
Firstly, it is related in that I don't want to use two languages at once. It is too much trouble.

If this is true, it's a huge weakness of Lisp implementations (and therefore of the Common Lisp standard itself, for all practical purposes), because in the modern world very few serious systems are written in a single language. If it isn't true, then this point is a non-issue.
Quote:Original post by Tron3k
Secondly, I'm a do-it-yourselfer. I want to build everything up from pure Win32 API and OpenGL.

That's reasonable. I'm writing a Lisp interpreter at the moment along with a web framework and a website on top of it [grin]
Quote:Original post by Tron3k
Thirdly, I'm doing this project to popularize Lisp and bring it into the mainstream. My major tagline is to say, "Yes, I did all this, the whole client and server, ALL IN LISP!"

We seem to share this goal (though I am doing it to popularize myself as well [smile]). I still think it's better to popularize the best tool for the job. Blind Lisp advocacy is barely any different from, say, blind Java advocacy.
Quote:Original post by CoffeeMug
Quote:Original post by Tron3k
Firstly, it is related in that I don't want to use two languages at once. It is too much trouble.

If this is true, it's a huge weakness of Lisp implementations (and therefore of the Common Lisp standard itself, for all practical purposes), because in the modern world very few serious systems are written in a single language. If it isn't true, then this point is a non-issue.

Using two languages at once is generally a lot of trouble with any two languages where the underlying data and control models are vastly different.

The way I see it, there are two reasons you'd want to use another language: because you've already got code in that language you can reuse; because that language is better suited to the problem domain.

Lisp's saving grace in this regard is that if you can't use another language, you can at least spend some time making a DSL which is similarly well-suited to the problem domain. For example, you could make a concurrent and distributed DSL which is comparable to Erlang. Doing the same in a less extensible language would be impracticable.

Obviously this doesn't mean Lisp should get away with not providing a useful interface to other languages. The real stumbling block here is that whilst Lisp is a high-performance general-purpose applications programming language, it has the data model of an interpreted scripting language.

Only languages for which compiled programs include rich typing information are likely to be at all easy to load into Lisp at runtime. This is particularly inconvenient because the major systems programming languages -- C and C++ -- are not such languages; you need to provide your own glue. I am uncertain whether there are any effective programs which can scan C/C++ header files and generate e.g. UFFI interfaces to the functions therein. Such a tool would be very useful.

A version of Lisp which targets the .NET VM, the JVM or some other 'high level' virtual machine would probably find it far easier to get along with other programs.
Quote:Original post by CoffeeMug
Blind Lisp advocacy is barely any different from, say, blind Java advocacy.
Well, not really, but anyway...
I think of what I'm doing as "Lisp is cool for games" advocacy. Lisp is so far off the gaming radar screen it's not even funny. [grin]
Quote:Original post by Nathan Baum
Lisp's saving grace in this regard is that if you can't use another language, you can at least spend some time making a DSL which is similarly well-suited to the problem domain. For example, you could make a concurrent and distributed DSL which is comparable to Erlang.
Erlisp? [lol]

Link 1
Link 2

It is still under development. I believe they are being funded by Google as part of their "Summer of Code".
