raytracing - ideas for which API to start?

Started by
13 comments, last by Evil Steve 19 years, 6 months ago
Hey all! I've started to mess with a few raytracing ideas and wanted to begin implementing some things. My first idea was to start with OpenGL and use glDrawPixels to draw the raytraced frame. But, from what I understand, this should be rather slow because it isn't direct access to video memory, right? (Please correct me if I'm wrong on this.) I don't have *any* knowledge of DirectX and DirectDraw, but I was told I'd be better off with DirectDraw than with OpenGL for this! Is that correct? Anyway, I wanted to stay away from DirectX for the sake of system independence. So besides DirectX and OpenGL, is there another good API that gives direct access to video memory and is platform independent? (My graphics knowledge in terms of APIs just stops at OpenGL :( )
I've used both DirectX and OpenGL.

DirectX is better for bigger systems.
OpenGL is best for making small graphical implementations, like raytracing.

Those are my opinions. But since OpenGL doesn't require all the upfront setup you need with DirectX, it's muuuuch simpler to get up and running.

What you mean about getting to the graphics card... I fail to see why that's even relevant to a raytracing implementation, except for speed. Either way, both are a layer between the code and the hardware.

If you want to make a raytracing demo, ignore all the stuff involved with DirectX programming. Use GLUT to get a window up and running quickly, and use OpenGL to get the graphics on screen. You should be concentrating on the implementation of the algorithms instead of building an uber-big graphics engine.

Check NeHe for some good tutorials to get you started.

And for info on rendering techniques, check out the homepage for the Real-Time Rendering book here.

I saw an assignment my friend was doing for the course the author of that book teaches. He was making a small game with raytraced terrain. His terrain was made of polygons (triangles), with a light source (actually just a point, in the middle of a flying ship). He calculated the light strength at each vertex using raytracing, then just interpolated those values when rendering. I think that's done by setting glColor3*() for each vertex he calculated, using equal values for red, green and blue to set the intensity of the light. Looked good on a moving terrain.
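The per-vertex lighting described above can be sketched roughly like this. This is a minimal, hypothetical reconstruction (the original code isn't in the post): Lambertian intensity at a vertex, zeroed out when a shadow ray toward the light is blocked. The shadow test itself is omitted and passed in as a flag.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };

double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

Vec3 normalize(const Vec3& v) {
    double len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Lambertian intensity at a vertex. 'blocked' would come from casting a
// shadow ray from the vertex toward the light (omitted here).
double vertexIntensity(const Vec3& normal, const Vec3& toLight, bool blocked) {
    if (blocked) return 0.0;
    double d = dot(normalize(normal), normalize(toLight));
    return d > 0.0 ? d : 0.0;
}
```

The resulting value would be fed as equal R, G and B components per vertex (e.g. glColor3f(i, i, i)), and the rasterizer interpolates across each triangle.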
[ ThumbView: Adds thumbnail support for DDS, PCX, TGA and 16 other imagetypes for Windows XP Explorer. ] [ Chocolate peanuts: Brazilian recipe for home made chocolate covered peanuts. Pure coding pleasure. ]
Quote:Original post by wolverine
Hey all!

I've started to mess with a few raytracing ideas and wanted to begin implementing some things.

My first idea was to start with OpenGL and use glDrawPixels to draw the raytraced frame. But, from what I understand, this should be rather slow because it isn't direct access to video memory, right? (Please correct me if I'm wrong on this.)

I don't have *any* knowledge of DirectX and DirectDraw, but I was told I'd be better off with DirectDraw than with OpenGL for this! Is that correct?

Anyway, I wanted to stay away from DirectX for the sake of system independence. So besides DirectX and OpenGL, is there another good API that gives direct access to video memory and is platform independent? (My graphics knowledge in terms of APIs just stops at OpenGL :( )


To implement a raytracer, the sole functionality you need (literally nothing else) is the ability to set the colours of the pixels of an image. Seriously, you do not need any other API facilities. You could even implement it as a console application and use different coloured characters as pixels. You could even write your output to an image file and view your raytraced image in an image viewer. And don't worry about the speed of setting pixels either; all your computing time will be spent working out the colour of the pixels. Think about it: Unreal Tournament 2004 updates the screen 60 times a second. A raytracer only really has to draw one image, so the actual drawing part is not the bottleneck.
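The "write to an image file" option above really is this small. Here is a minimal sketch (names are illustrative): a flat RGB framebuffer plus a binary PPM (P6) writer, which most image viewers can open.

```cpp
#include <cstdint>
#include <fstream>
#include <vector>

// Framebuffer as a flat RGB byte array; the raytracer's only job is to
// fill in these values.
struct Image {
    int width, height;
    std::vector<std::uint8_t> rgb; // 3 bytes per pixel, row-major
    Image(int w, int h) : width(w), height(h), rgb(static_cast<std::size_t>(w) * h * 3, 0) {}
    void set(int x, int y, std::uint8_t r, std::uint8_t g, std::uint8_t b) {
        std::size_t i = (static_cast<std::size_t>(y) * width + x) * 3;
        rgb[i] = r; rgb[i + 1] = g; rgb[i + 2] = b;
    }
};

// Dump the buffer as a binary PPM (P6): a tiny text header, then raw bytes.
void writePPM(const Image& img, const char* path) {
    std::ofstream out(path, std::ios::binary);
    out << "P6\n" << img.width << " " << img.height << "\n255\n";
    out.write(reinterpret_cast<const char*>(img.rgb.data()),
              static_cast<std::streamsize>(img.rgb.size()));
}
```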
I use SDL.

One buffer in RAM which I write my pixels to, and one in video memory. When I'm done with the raytracing, I just blit the whole bunch in one go to video memory. I don't think it gets much simpler or more efficient than that.
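The pattern here can be sketched without any SDL at all; in this hypothetical stand-in, `videoMemory` plays the role of a locked SDL surface's pixel pointer.

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Trace into a plain RAM buffer at your leisure.
std::vector<std::uint32_t> traceFrame(int w, int h) {
    std::vector<std::uint32_t> pixels(static_cast<std::size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            pixels[static_cast<std::size_t>(y) * w + x] = (x ^ y) & 0xFFu; // dummy shading
    return pixels;
}

// Then push the finished frame to the display in one bulk copy,
// rather than one API call per pixel.
void blitFrame(std::uint32_t* videoMemory, const std::vector<std::uint32_t>& frame) {
    std::memcpy(videoMemory, frame.data(), frame.size() * sizeof(std::uint32_t));
}
```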
Thxs for the replies. They cleared up a few confusing points I had.

Seriema: that's how I started to think about it. GLUT+OpenGL. And, as Seanw pointed out, I didn't even need to worry about other features of the API. It's actually quite easy to put a picture on the screen!

My idea was not to produce only one image and leave it at that, but to also be able to "walk around" in the world! (I'm not trying to be the next Carmack here! This is mostly just for experimentation.) I know that, technically, I only have to worry about finding the color of a given pixel. But I was under the impression that putting the pixels on the screen was a big bottleneck of the method. Apparently, as you've said, that's not so.

So OpenGL stays, as it seems to be a good working method. I once even considered the hypothesis of learning SDL, but never actually delved into that.

Thxs everyone!
Quote:Original post by Eelco
I use SDL.

One buffer in RAM which I write my pixels to, and one in video memory. When I'm done with the raytracing, I just blit the whole bunch in one go to video memory. I don't think it gets much simpler or more efficient than that.


I find it beneficial to be able to see the raytracing output as it is calculated, to give you feedback on the image. This is especially useful whilst debugging, because you don't want to wait 60 seconds to find out your lighting code is faulty. Displaying the current image every X pixels as they are calculated can be useful. A better approach is progressive refinement: render at a coarse resolution first and refine as more colour values are known. For example, the first pass fills 5x5-pixel blocks, then 4x4, etc., so the resolution increases over time. It gives you very quick feedback on what the image will look like, and you can wait longer to make it look better.
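A minimal sketch of one such coarse pass (names and details are my own, not from the post): trace one real sample per block and flood-fill the block with it, then repeat with a smaller block size.

```cpp
#include <vector>

// Progressive preview: one traced sample per block, flood-filled across
// the block. Call repeatedly with shrinking 'block' (e.g. 5, 4, 3, 2, 1)
// to sharpen the image over time. 'shade' stands in for the real
// per-pixel raytrace.
template <typename Shade>
void coarsePass(std::vector<int>& pixels, int w, int h, int block, Shade shade) {
    for (int y = 0; y < h; y += block)
        for (int x = 0; x < w; x += block) {
            int c = shade(x, y);                  // one real ray per block
            for (int by = y; by < y + block && by < h; ++by)
                for (int bx = x; bx < x + block && bx < w; ++bx)
                    pixels[by * w + bx] = c;      // flood-fill the block
        }
}
```

Driving it is just a loop: `for (int b = 5; b >= 1; --b) coarsePass(pixels, w, h, b, shade);` with a redraw between passes.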
Quote:Original post by wolverine
Thxs for the replies. They cleared up a few confusing points I had.

Seriema: that's how I started to think about it. GLUT+OpenGL. And, as Seanw pointed out, I didn't even need to worry about other features of the API. It's actually quite easy to put a picture on the screen!

My idea was not to produce only one image and leave it at that, but to also be able to "walk around" in the world! (I'm not trying to be the next Carmack here! This is mostly just for experimentation.) I know that, technically, I only have to worry about finding the color of a given pixel. But I was under the impression that putting the pixels on the screen was a big bottleneck of the method. Apparently, as you've said, that's not so.


When you put a pixel on the screen, all you are doing is telling the computer to set a couple of bytes in memory to a certain value. To actually work out the colour to set a pixel to (the value to put in that memory), you have to work out the eye ray, find all the intersections of this ray with your world objects, pick the closest one, fire off reflection, refraction and shadow rays, etc. Putting the pixels on the screen is going to be the fastest part. :)
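To make the cost comparison concrete, here is the classic ray-sphere intersection test, one of the many computations behind every single pixel (a standard textbook formulation, not code from the thread):

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };
double dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
Vec3 sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }

// Nearest positive hit distance of a ray (origin o, unit direction d)
// against a sphere, or -1 if it misses. Solves the quadratic
// t^2 + b*t + c = 0 that substituting the ray into the sphere gives.
double hitSphere(const Vec3& o, const Vec3& d, const Vec3& center, double r) {
    Vec3 oc = sub(o, center);
    double b = 2.0 * dot(oc, d);
    double c = dot(oc, oc) - r * r;
    double disc = b * b - 4.0 * c;      // quadratic 'a' == 1 for a unit direction
    if (disc < 0.0) return -1.0;        // ray misses the sphere
    double t = (-b - std::sqrt(disc)) / 2.0;
    return t > 0.0 ? t : -1.0;          // hit must be in front of the origin
}
```

A raytracer runs tests like this against every object, per pixel, plus the secondary rays; next to that, writing a few bytes into a framebuffer is nothing.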
A quick tip:
"premature optimization is the root of all evil" gives me 2900 hits on Google.

Trust me, don't optimize by theory unless you're a guru in the subject. Do what you want, then run your program through a profiler.

But in your case, whatever method you choose, the slow part will be your algorithms and not the plotting/rendering.

If you need input from the keyboard etc., use SDL; it's really easy to use and has lots of good documentation (the docs work as tutorials too). Plus it lets you run OpenGL just like you're used to.

Now, get hacking! ;)

[cool]

Again, thxs for the replies. Perhaps I did give that detail too much importance.

[hacking process started!]
Quote:Original post by Seriema
A quick tip:
"premature optimization is the root of all evil" gives me 2900 hits on Google.

Trust me, don't optimize by theory unless you're a guru in the subject. Do what you want, then run your program through a profiler.

But in your case, whatever method you choose, the slow part will be your algorithms and not the plotting/rendering.

If you need input from the keyboard etc., use SDL; it's really easy to use and has lots of good documentation (the docs work as tutorials too). Plus it lets you run OpenGL just like you're used to.

Now, get hacking! ;)


The argument about speed is semi-correct. I found that glDrawPixels really does consume a significant amount of time in the OpenGL version of my renderer, and it can even hang the system with bad drivers.
(I've made three versions: DirectX, OpenGL, and one without a screen / non-interactive, which can only save to a file.) The renderer is 100% separated from the API and placed in a different module; it just writes the image into memory. The API-related part handles everything API-related (copying the image onto the screen, reading keyboard input, etc.), nothing more.
The OpenGL version is mainly for use under Linux.
edit: my renderer is structured like
API-dependent-thing <---> user-interface-thing <---> renderer
and I have several different API-dependent parts.

The user-interface thing also has a console:
user-interface-thing-console <---> console-parser <---> renderer

And the version without a user interface is like:
reader-from-file <---> console-parser <---> renderer

But it wasn't like that from the beginning... some time ago it had one API and almost everything placed in a single source file.
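That layering can be sketched as an interface the renderer writes through; every name below is illustrative (not from the original post), but it shows how one API-independent renderer can drive several backends:

```cpp
#include <cstdint>
#include <vector>

// The renderer only knows how to hand over a pixel buffer; each platform
// backend (SDL, OpenGL, file writer...) implements Display its own way.
class Display {
public:
    virtual ~Display() = default;
    virtual void present(const std::vector<std::uint32_t>& pixels,
                         int w, int h) = 0;
};

// A backend that just records the frame -- the non-interactive,
// save-to-file version would look much like this.
class MemoryDisplay : public Display {
public:
    std::vector<std::uint32_t> lastFrame;
    void present(const std::vector<std::uint32_t>& pixels, int, int) override {
        lastFrame = pixels;
    }
};

// The renderer itself is 100% API-independent: it accepts any Display.
void renderFrame(Display& out, int w, int h) {
    std::vector<std::uint32_t> pixels(static_cast<std::size_t>(w) * h,
                                      0xFF00FFu); // dummy traced frame
    out.present(pixels, w, h);
}
```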

This topic is closed to new replies.
