Quote:Original post by Nathan Baum
Quote:Original post by jdh30
Quote:
For this simple example, you might wonder why we wouldn't just have sum and stable-sum, say. The power of the 'special' variable is that the call to sum doesn't need to be directly within the let. If we put a call to some function which does, amongst other things, summing, that sum will be done with accurate semantics. That means we can use the same function for fast and accurate calculation, and that function doesn't need to know which we're doing.
How is that different from having a global boolean determining which of fast and accurate is used inside "sum"?
It depends what you mean.
If you're referring to the fact that *float-semantics* isn't a boolean, then the difference is that there could be other semantics. There might be a :very-accurate setting which converts all the floats to rationals before summing them, and then converts the result back to a float.
If you're asking "why not just a global variable, of whatever type", then *float-semantics* is a global variable.
I think that is roughly equivalent to using a global polymorphic variant in OCaml:
let float_accuracy = `accurate
...
let sum = match float_accuracy with
  | `accurate -> ...
  | `fast -> ...
The "float_accuracy" is immutable, of course. If you wanted to change it inside a function then you'd probably pass it as state. Haskell can probably handle this more elegantly using monads.
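To make that concrete, here is a minimal self-contained sketch of the approach (the names fast_sum and accurate_sum are hypothetical, and Kahan compensated summation stands in for whatever "accurate" semantics one actually wants):

```ocaml
(* Hypothetical sketch: a polymorphic variant selects the summation
   semantics, instead of a Lisp special variable. *)
type accuracy = [ `fast | `accurate ]

(* Naive left-to-right sum. *)
let fast_sum xs = List.fold_left (+.) 0. xs

(* Kahan compensated summation: c carries the running rounding error. *)
let accurate_sum xs =
  let sum, _ =
    List.fold_left
      (fun (s, c) x ->
        let y = x -. c in
        let t = s +. y in
        (t, (t -. s) -. y))
      (0., 0.) xs
  in
  sum

let sum ~(accuracy : accuracy) xs =
  match accuracy with
  | `fast -> fast_sum xs
  | `accurate -> accurate_sum xs
```

Note that, unlike a special variable, the accuracy here has to be threaded explicitly through every intermediate function that eventually calls sum.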
Quote:
The main advantage of special variables over 'normal' global variables from other languages is convenience. When a special variable appears in a let binding list, the old value is stored away and is guaranteed to be restored when the let returns.
Assuming that *foo* is a special variable,
(let ((*foo* bar)) (do-something))
is a convenient shorthand for
(let ((old-*foo* *foo*))
  (setf *foo* bar)
  (unwind-protect
      (do-something)
    (setf *foo* old-*foo*)))
IMHO, it's a misfeature that one uses let to do this, since the semantics of let on a special variable are entirely different from the semantics of let on a normal variable. Lisp attempts to resolve this issue by recommending that special variables be named like *name*, and normal variables never be named like that. I'd prefer something like special-let or slet.
Yes, that is a bit weird. In OCaml, the "let" keyword is used for both statements and definitions:
# let x = 2;;
val x : int = 2
# let x = 2 in x;;
- : int = 2
That can get confusing for the same reason.
Quote:
Quote:
Polymorphic variants are not used much in real OCaml programs. A notable exception is their use in "shorthand" code. For example, the lablGL bindings to OpenGL replace the huge C enumeration GL_* with polymorphic variants. These are then succinct, type inferred and statically checked. Is there a simple Lisp equivalent that can do that?
That depends upon exactly what the variants are like. There is a simple Lisp way to check for invalid uses of GL_* constants at compile time.
(deftype gl:primitive-type ()
  '(member gl:POINTS gl:LINES gl:LINE-STRIP gl:LINE-LOOP
           gl:TRIANGLES gl:TRIANGLE-STRIP gl:TRIANGLE-FAN
           gl:QUADS gl:QUAD-STRIP gl:POLYGON))

(defun gl:begin (type)
  (declare (gl:primitive-type type))
  ...)

(gl:begin 'gl:LINE)
Here, SBCL and CMUCL warn that 'GL:LINE isn't one of the members of the type. It can be more sophisticated than that, because a particular symbol isn't restricted to belonging to just one type. This is particularly useful here, because some of the GL_ constants are used in different ways by different functions.
Yes, that is exactly the benefit provided by polymorphic variants. I guess another advantage of polymorphic variants is the ability to carry data. For example, lablGL's GlTex module:
type parameter = [
  | `border_color of Gl.rgba
  | `generate_mipmap of bool
  | `mag_filter of [ `linear | `nearest ]
  | `min_filter of GlTex.filter
  | `priority of Gl.clampf
  | `wrap_s of GlTex.wrap
  | `wrap_t of GlTex.wrap ]
Can you do the equivalent in Lisp, statically checked?
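For reference, here is roughly how such data-carrying variants get consumed on the OCaml side (a simplified sketch, not lablGL's actual code; describe is a hypothetical function):

```ocaml
(* Hypothetical: dispatch on data-carrying polymorphic variants.
   The cases mirror a subset of lablGL's GlTex.parameter. *)
let describe = function
  | `mag_filter `linear  -> "mag filter: linear"
  | `mag_filter `nearest -> "mag filter: nearest"
  | `generate_mipmap b   -> Printf.sprintf "generate mipmap: %b" b
  | `priority p          -> Printf.sprintf "priority: %g" p
```

The compiler checks both the tags and their payload types at every call site: a misspelt tag or a payload of the wrong type is rejected at compile time.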
Quote:
An example of a constant with two even more unrelated meanings is FRONT, which is both a culling mode and a color buffer. In Lisp, gl:FRONT can easily be a member of the gl:culling-mode type and the gl:color-buffer type.
Yes. Similarly with lablGL that is `front, which appears in more than one type.
Quote:
Quote:
Quote:
With just tuples, the compiler couldn't detect that you were accidentally using a "float * float * float * float * float" (a torus -- x; y; z; outer radius; inner radius) as though it were a "float * float * float * float * float" (a cylinder -- x; y; z; radius; height).
Yes, exactly. This is precisely how you leverage static type checking.
It appears you're saying that you use static type checking to make sure you can tell the difference between the two. It isn't static type checking which does that: just type checking in general.
No. I am saying that ML guarantees to statically verify the correctness of your code in this case. In contrast, Lisp may try to statically check the code, but it is perfectly within its rights to postpone the checking until run time, when it may or may not actually occur.
Quote:
Lisp would never confuse a torus for a cylinder if you'd been sensible and declared a type for each, and in general Lisp will warn you if it can determine at compile time that you're using something as though it were both a torus and a cylinder.
ML will always catch such errors at compile time. You can work around the static type checking but it requires effort, so code is normally thoroughly checked. In some cases, it helps to sit down and think about how you could remove run-time checks.
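As an illustration of the point (hypothetical types, not code from the ray tracer): give the two shapes distinct record types and the confusion becomes a compile-time error rather than something a test has to catch.

```ocaml
(* Hypothetical: distinct record types make the shapes incompatible. *)
type torus    = { tx : float; ty : float; tz : float;
                  outer_r : float; inner_r : float }
type cylinder = { cx : float; cy : float; cz : float;
                  radius : float; height : float }

(* Volume of a torus: V = 2 * pi^2 * c * a^2, reading outer_r as the
   distance c from the centre to the tube centre and inner_r as the
   tube radius a. *)
let torus_volume ({ outer_r = c; inner_r = a; _ } : torus) =
  2. *. (Float.pi ** 2.) *. c *. (a ** 2.)

(* torus_volume some_cylinder   <- rejected at compile time: a value
   of type cylinder cannot be used where a torus is expected. *)
```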
Quote:
Quote:
One problem with records in OCaml is that field names can shadow each other:
# type vec2 = { x : float; y : float };;
type vec2 = { x : float; y : float; }
# type vec3 = { x : float; y : float; z : float };;
type vec3 = { x : float; y : float; z : float; }
# {x = 1.; y = 2.};;
Some record field labels are undefined: z
But that is the price you pay for brevity. So if I wanted to use records I'd divide the declarations into separate modules (Torus and Cylinder). Convention is to have a type called "t" in each module.
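A minimal sketch of that module convention (field names hypothetical; real code would also put construction and arithmetic functions inside each module):

```ocaml
(* One module per shape, each with a type t: the field names no
   longer clash because each record lives in its own namespace. *)
module Torus = struct
  type t = { x : float; y : float; z : float;
             outer_r : float; inner_r : float }
end

module Cylinder = struct
  type t = { x : float; y : float; z : float;
             radius : float; height : float }
end

(* Qualifying the field picks the right record type: *)
let torus_x (t : Torus.t) = t.Torus.x
let cylinder_x (c : Cylinder.t) = c.Cylinder.x
```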
Lisp also has this limitation, in that accessors for different structures cannot have the same name. Like in OCaml, you need to use objects to do that. However, Lisp also allows a structure to include a single structure.
Ok. I think you can do something almost equivalent in SML with records but in OCaml you'd probably have to use objects. Unfortunately, objects are much less efficient than records.
Quote:
The included structure's slots are then available under the same name as the old slots.
(defstruct (vec2 (:conc-name nil)) x y)
(defstruct (vec3 (:conc-name nil) (:include vec2)) z)
You can only include a single structure. This means you avoid any tricky issues of where a slot might be stored, such as you would have with multiple-inheritance: you know the vec2 slots are stored before the vec3 slots. This makes it easier to optimise slot access.
Ah, ok.
Quote:
Quote:
Quote:
The real problem is that ML is being rude: in this aspect at least, it forces you to choose between efficiency and clarity. In Lisp, low-level optimisation is enabled not by changing the code, but by adding to it.
I'm not quite sure what the difference is here. Can you give examples of optimisations that require you to change the code in OCaml but add to it in Lisp?
None that work for me, unfortunately.
:-)
Quote:
In principle, a dynamic-extent declaration may allow a Lisp compiler to perform some optimisations that the OCaml compiler might not be able to do. Naturally, it doesn't work in any Lisp compiler I currently have access to.
That may not be true, though. My experimentation shows that OCaml is not as dumb about values with dynamic extent as you believe:
let rec times count fn =
  let vl = fn count in
  if count > 0 then times (count - 1) fn else vl;;

let p = ref 0;;
let rec zoo x = [ (p := !p + 1; !p) ];;
let rec xoo x = let p = zoo x in p;;

times 1000000000 xoo;;
!p;;
In this code, I've had the list allocated by zoo go through several layers of indirection. Nonetheless, memory usage is constant. OCaml's GC doesn't appear to kick in until the computer is running out of memory, so this isn't simply a case of a very efficient GC: either the list is being recycled after each call to xoo (except the last), or the list is never constructed at all. Either way, OCaml isn't being dumb.
I believe that is a consequence of generational garbage collection rather than region analysis. When the ordinary GC runs at the end of the function, it finds that the data is already garbage and reclaims it immediately. If you make the data live a little longer, it will probably be promoted to the old generation and will suddenly be GC'd much less efficiently.
Quote:
On the hunch that the inefficiency in my implementation of the tracer is mainly caused by consing inside ray_sphere and intersect, I jostled things about so as to avoid said consing.
The particular changes are: ray_sphere now uses a scratch vector for displacing the ray to the sphere's origin; intersect only allocates one hit structure, aux scribbles over it as necessary. I suspect that OCaml does the same thing automatically: it can tell when a particular structure isn't being used any more and will reuse it if it can.
Again, I think this is done implicitly by an efficient GC rather than explicitly by program analysis. But the result is almost the same. I believe MLton has a much more sophisticated GC, so it may well be doing the kind of analysis that you're talking about. Also, the Scheme compiler Stalin does region analysis, IIRC.
Quote:
let rec f n l =
  if n > 0 then
    match l with
    | [] -> f (n - 1) [n]
    | h :: t -> f (n - 1) (n :: t)
  else l;;
OCaml executes this function in constant space, regardless of the initial value of n. I believe it knows that when "f" is called recursively, the list it passes will have no other references and can therefore be recycled when constructing the list for the next invocation. This doesn't require that anything be stored on the stack.
It appears to be a limitation of the Lisp implementation that it cannot perform the same optimisation; I can't see any reason why the semantics of the language would prevent it.
I get the impression that OCaml and MLton have much better GCs than the Lisp and Scheme compilers that I'm using. In OCaml's case, Lisp can probably compete. In MLton's case, I think the Lisp language is too stifling to let it compete. MLton's GC does an enormous number of heinously ingenious low-level optimisations that exploit all-manner of high-level (particularly type) information about data.
Quote:
I've also implemented the specialised shadow intersection. Although it has an effect, it is not so pronounced as I might have hoped. Toggle +use-intersect2 to nil if you want to compare times.
My current timings are:
Without shadow specialisation -- 3.6 seconds.
With shadow specialisation -- 3.2 seconds.
On my system, that's nearly one second faster than the original unoptimised OCaml program.
Eh? Are you saying that you have a Lisp program algorithmically equivalent to my 1st OCaml program that is outperforming OCaml?
I'll check it out...
Also, if you haven't already, could you post the double precision implementation? I couldn't do the translation for some reason.
Quote:
It doesn't appear to scale well. Assuming that the times here are in seconds, it's approximately a fourth of the speed of Java. Of course, if the times are in minutes, Lisp rules all. But I imagine that's wishful thinking.
Yes, those would be seconds. :-)
Quote:
Edit: Unless you mistyped on that page. You say you used 16x supersampling, but the source has 4x, and you say 4x on the Usenet threads. At 4x, it takes 36 seconds, which would again be roughly equivalent to the slowest OCaml implementation.
For ss=4, it's doing a 4x4 grid of rays for each pixel. That's 16x as many rays as pixels, or "16x supersampling" to me. I thought that was standard nomenclature, but I may be wrong.