
walkingcarcass

hybrid raytracers and programmable pipelines


I was thinking of using a simple raytracing engine to do things like grass, which could just be a few polys repeated zillions of times. The rest of the world would be HW-accelerated polys. The point being, the raytracer is fast because there are very few surfaces to check against, and the resulting image quality should be very high (a self-shadowing lawn). It's basically texture generation which returns RGBZ, i.e. RGB with depth.

Is there an easy way to draw a texture on a rendered scene where the pixel is only plotted if it's nearer? Can textures be drawn straight from system memory, or is it easier to modify a loaded texture? Can pixel shaders say "this pixel's z-value is ..." before the z check, or is there an abort-if instruction? Is this possible without shaders?

********
A Problem Worthy of Attack
Proves Its Worth by Fighting Back

quote:

draw a texture on a rendered scene where the pixel is only plotted if it's nearer?

Isn't that just the standard z-buffer algorithm?

I can almost certainly say that no matter what you do, raytracing grass on current hardware will be slower (you want self-shadowing as well? Hmm...). Check out some of the multi-pass grass demos on nVidia's site for nice low-poly grass with movement and everything.

glDrawPixels() writes a block of pixels to the frame buffer with any of various components set, such as stencil, depth, RGB, luminance, and so on. Use glPixelStore and glPixelTransfer to control how the pixel data is unpacked and transferred (scales, biases, and so on).

How fast this is, I don't know.

NOTE: you render the bitmap, with depth, and then use glDrawPixels to transfer it into the framebuffer, where the hardware rasterizes it, performing all of the fragment comparisons and operations along the way. (A single glDrawPixels call writes either color or depth, not both, so an RGBZ image takes two passes.)
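To make that concrete, here is a rough fixed-function sketch of the "only plot if nearer" splat; rt_color and rt_depth are placeholder buffers assumed to be filled by the raytracer (depths already in window space), and the stencil buffer is assumed to start cleared to 0. Pass 1 z-tests the raytraced depths and tags the survivors in stencil; pass 2 writes color only where tagged:

#include <GL/gl.h>

extern const unsigned char *rt_color;   /* W*H RGBA from the raytracer */
extern const float *rt_depth;           /* W*H window-space depths in [0,1] */

void splat_rgbz(int w, int h)
{
    glWindowPos2i(0, 0);                        /* OpenGL 1.4 */

    /* pass 1: depth only. Where the raytraced z is nearer, update the
       depth buffer and tag the pixel in stencil. */
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_LESS);
    glEnable(GL_STENCIL_TEST);
    glStencilFunc(GL_ALWAYS, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_REPLACE);  /* set stencil on depth pass */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, rt_depth);

    /* pass 2: color only, restricted to the tagged pixels */
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDisable(GL_DEPTH_TEST);
    glStencilFunc(GL_EQUAL, 1, 0xFF);
    glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, rt_color);
    glDisable(GL_STENCIL_TEST);
    glEnable(GL_DEPTH_TEST);
}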

[edited by - bishop_pass on December 25, 2002 7:12:29 AM]

glDrawPixels is horribly slow. Better would be to upload it into a texture and render that, but even so...

However, I'm willing to be proven wrong; go ahead and write that raytraced grass, I'd certainly be interested.
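For reference, the texture route is roughly this (tex is assumed to be a texture already allocated with glTexImage2D, rt_color an assumed raytracer output buffer, and the matrices identity). Note it only carries color, so per-pixel depth still needs a trick like the glDrawPixels one above:

#include <GL/gl.h>

extern GLuint tex;                      /* allocated once via glTexImage2D */
extern const unsigned char *rt_color;   /* W*H RGBA from the raytracer */

void draw_rt_texture(int w, int h)
{
    glBindTexture(GL_TEXTURE_2D, tex);
    /* glTexSubImage2D reuses the existing allocation; on most consumer
       drivers this path is much faster than glDrawPixels */
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, rt_color);

    glEnable(GL_TEXTURE_2D);
    glBegin(GL_QUADS);                  /* screen-aligned quad */
    glTexCoord2f(0, 0); glVertex2f(-1, -1);
    glTexCoord2f(1, 0); glVertex2f( 1, -1);
    glTexCoord2f(1, 1); glVertex2f( 1,  1);
    glTexCoord2f(0, 1); glVertex2f(-1,  1);
    glEnd();
    glDisable(GL_TEXTURE_2D);
}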

Actually, his idea might not be so far off. Full 3D grass is pretty expensive to render, and requires complex LOD systems to get it interactive at all. It has a computational complexity proportional to the number of blades: more blades = more power required. Raytracing, OTOH, has a complexity proportional to the screen resolution. It should be just as fast on 10 grass blades as on 10 billion. That's the theory. In practice, the limiting factor is the ray/blade intersection test, which makes it just as dependent on the blade count as the 3D polygonal solution.

Now, if you had an analytic grass field model, like a simple equation that would give you back the colour at a certain ray position and direction, then things would get interesting: a kind of noise field, using a Wolf3D-style raycasting approach. Of course, to get quality you would need heavy supersampling, and lighting is a whole different matter. You could render it using a pixel shader and a depth texture.
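A crude sketch of the kind of thing I mean, with a made-up hash-based height field standing in for the analytic model (grass_height() and the shading below are placeholders, not a real grass model):

#include <math.h>

static float hash2(int x, int z)             /* cheap integer hash -> [0,1) */
{
    unsigned int h = (unsigned int)x * 374761393u
                   + (unsigned int)z * 668265263u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return (float)(h & 0xffff) / 65536.0f;
}

static float grass_height(float x, float z)  /* analytic "blade top" field */
{
    return 0.15f * hash2((int)floorf(x * 8.0f), (int)floorf(z * 8.0f));
}

/* Wolf3D-style constant-step march: returns 1 on a hit, filling rgb+depth */
int raycast_grass(const float o[3], const float d[3],
                  float rgb[3], float *depth)
{
    const float step = 0.01f, tmax = 5.0f;
    for (float t = step; t < tmax; t += step) {
        float x = o[0] + d[0] * t;
        float y = o[1] + d[1] * t;
        float z = o[2] + d[2] * t;
        float h = grass_height(x, z);
        if (y <= h) {                        /* ray dipped below the blades */
            float g = 0.3f + 0.7f * (h / 0.15f);   /* taller = lighter */
            rgb[0] = 0.1f; rgb[1] = g; rgb[2] = 0.1f;
            *depth = t;                      /* this is the Z of the RGBZ */
            return 1;
        }
    }
    return 0;                                /* miss: leave the pixel alone */
}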

I guess a raytracing approach would surely have interesting advantages over a polygonal system, but it would require lots of research and heavy optimization on the algorithmic part. I'm a bit sceptical regarding the quality of such an approach, at least at interactive rates on current hardware. I have a nicely working (and good-looking) polygonal grass system, so I won't investigate grass raytracing methods at this point. But if anyone gets some results, I'd be interested to hear about them.

Thinking of it: for the lighting, using deep shadow maps would probably get you excellent shadowing quality. But then you can kiss the realtime part goodbye...

/ Yann

[edited by - Yann L on December 25, 2002 12:00:35 PM]

I think this idea has great merit for anything that has extremely fine detail composed of thousands of (semi-repeating) objects which will be represented on an area of the final rendering that is not too large. Examples include grass, distant fields of shrubs, forests, talus fields, crowds of people...

The key is to work out a proper intersection structure, one that minimizes the number of intersection tests.
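For instance, a uniform grid over the grass patch keeps the tests per ray roughly constant. A minimal sketch, assuming blades sit on a unit [0,1) x [0,1) patch; Blade, GRID and hit_blade() are illustrative placeholders, and the cell walk is a crude fixed-step stand-in for a proper 2D DDA:

#define GRID 64
#define MAX_PER_CELL 32

typedef struct { float base[3], tip[3], radius; } Blade;

typedef struct {
    const Blade *blades[MAX_PER_CELL];
    int count;
} Cell;

static Cell grid[GRID][GRID];                /* covers x,z in [0,1) */

void insert_blade(const Blade *b)            /* bucket by blade root */
{
    Cell *c = &grid[(int)(b->base[2] * GRID)][(int)(b->base[0] * GRID)];
    if (c->count < MAX_PER_CELL)
        c->blades[c->count++] = b;
}

extern int hit_blade(const Blade *b, const float o[3], const float d[3],
                     float *t);              /* assumed ray/blade test */

int trace_cells(const float o[3], const float d[3], float *t_hit)
{
    for (float t = 0.0f; t < 2.0f; t += 0.5f / GRID) { /* walk the cells */
        float x = o[0] + d[0] * t, z = o[2] + d[2] * t;
        if (x < 0.0f || x >= 1.0f || z < 0.0f || z >= 1.0f)
            continue;
        Cell *c = &grid[(int)(z * GRID)][(int)(x * GRID)];
        for (int i = 0; i < c->count; ++i)
            if (hit_blade(c->blades[i], o, d, t_hit))
                return 1;                    /* cells behind a hit: untouched */
    }
    return 0;
}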

I would go with fur rendering: shells ("slices") and fins. It looks awesome and is quite fast (and fully animated and physically correct with gravity and all, no problems). Check the ATI homepage, they have quite a bunch of stuff about it.

Grass is the fur of the world :D
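Roughly, the "shells" half looks like this in GL; draw_mesh_extruded() and fur_alpha_tex are assumed helpers, not ATI's actual code. You redraw the surface N times, pushed out along the normals, with an alpha-tested noise texture so only the blade cross-sections survive:

#include <GL/gl.h>

extern void draw_mesh_extruded(float offset);  /* assumed: emits the mesh
                                                  displaced along normals */
extern GLuint fur_alpha_tex;                   /* assumed noise/alpha map */

void draw_grass_shells(int layers, float height)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, fur_alpha_tex);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);             /* discard between blades */
    for (int i = 1; i <= layers; ++i) {
        float f = (float)i / (float)layers;
        glColor3f(0.2f * f, 0.6f * f, 0.2f * f);   /* darker near roots */
        draw_mesh_extruded(f * height);
    }
    glDisable(GL_ALPHA_TEST);
    glDisable(GL_TEXTURE_2D);
}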

"take a look around" - limp bizkit
www.google.com

I have to agree with daveperm; for the moment, "IBR"-type approaches are probably the best.

If you use LOD-based simplification, which an efficient model would need anyway, you likely won't need raytracing.

If you just want to use ridiculous geometry detail and rely on the output sensitivity of raytracing, then you will need adaptive supersampling; it is the only way aliasing can be "controlled" in such situations. You will need a reasonable lower bound on the number of samples per pixel, though, so you can be reasonably sure (but never entirely) that your solution is actually converging... this isn't really realtime at the moment either, and certainly not something you would want to try to accelerate on present-day 3D hardware.
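For what it's worth, a minimal sketch of the adaptive part, assuming a trace(x, y) placeholder that shoots one ray and returns a scalar radiance. The four corner samples are the per-pixel lower bound; corners that disagree trigger subdivision up to a fixed depth cap:

#include <math.h>

extern float trace(float x, float y);   /* assumed: one ray, one radiance */

static float sample_area(float x, float y, float size, int depth)
{
    /* four corner samples of this (sub)pixel region */
    float c00 = trace(x, y),        c10 = trace(x + size, y);
    float c01 = trace(x, y + size), c11 = trace(x + size, y + size);
    float avg = 0.25f * (c00 + c10 + c01 + c11);

    /* contrast check: if the corners agree, or we hit the depth cap,
       accept the average; otherwise recurse into the four quadrants */
    float spread = fmaxf(fmaxf(fabsf(c00 - avg), fabsf(c10 - avg)),
                         fmaxf(fabsf(c01 - avg), fabsf(c11 - avg)));
    if (spread < 0.05f || depth >= 4)
        return avg;

    float h = size * 0.5f;
    return 0.25f * (sample_area(x,     y,     h, depth + 1) +
                    sample_area(x + h, y,     h, depth + 1) +
                    sample_area(x,     y + h, h, depth + 1) +
                    sample_area(x + h, y + h, h, depth + 1));
}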

Actually, at my work we built an engine (not a game) which combined a voxel-rendered terrain engine and polygonal objects (like in the game Outcast, but more sophisticated). The voxel renderer ran on a cluster of PCs under MPI, and the polygonal objects (trees, buildings, vehicles, etc.) were rendered in DirectX. The system was very complicated and quite slow, though it gave more "photorealistic" quality than a simple polygonal one. Now it's been discarded in favor of a common polygonal renderer.

quote:
Original post by serg3d
Actually, at my work we built an engine (not a game) which combined a voxel-rendered terrain engine and polygonal objects (like in the game Outcast, but more sophisticated). The voxel renderer ran on a cluster of PCs under MPI, and the polygonal objects (trees, buildings, vehicles, etc.) were rendered in DirectX. The system was very complicated and quite slow, though it gave more "photorealistic" quality than a simple polygonal one. Now it's been discarded in favor of a common polygonal renderer.


I remember Outcast; it's one of the games I paid for, and one of the games I wouldn't want to miss at all. It was just awesome, felt that natural to play somehow... very well done.

I dislike Outcast 2, they dropped voxels.

"take a look around" - limp bizkit
www.google.com

Thanks for your input, but this is not a raytracing feasibility discussion.

Z-buffer algorithms get z by interpolating the vertices. Here, z is one of the texture components, or is supplied in another buffer.

Given a texture and the z value for each pixel, how do I draw only the pixels that are not hidden?

********
A Problem Worthy of Attack
Proves Its Worth by Fighting Back

Due to a paper that yannl posted quite a while ago, I have a modified realtime polygonal grass demo, and it works with localised, procedurally defined weather too. All I need to do now is get it to look nice instead of each blade looking like a 5-point polygon. I'm trying to get a fragment shader to do it, but I just can't get the right 'look'.

I have 4 levels of detail too, and it runs at ~36 fps on a landscape with nothing but grass, on a Radeon 9700 Pro. I'll fix it eventually when I have time to actually optimise it and use it.

The easiest way would be to do it the other way around: write the results of the raytracing to a color buffer and depth texture first, then render the polygonal part with those as the rendering target.

You will need to know the internal format of depth textures, of course...

If you don't want to simply write native Z-values to video memory, you could always use DX9-level hardware and store an extra depth value in alpha or in an offscreen buffer, then use that in the pixel shader to kill fragments where needed, comparing it against the raytraced depth in your texture. That will need both a floating-point framebuffer and a floating-point texture, of course, so it might be slow. If possible, it might be best to use a normal framebuffer plus a single-component offscreen floating-point buffer to store the depth of the polygonal stuff, and a normal RGB texture plus a separate single-component floating-point texture for your raytracing results... or does DX9 hardware support 8-8-8-24 formats?
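In plain GL terms (DX9 would be analogous), the "other way around" composition might look like this rough sketch; rt_color and rt_depth are placeholder buffers from the raytracer, with depths assumed already converted to window space:

#include <GL/gl.h>

extern const unsigned char *rt_color;   /* W*H RGBA, raytraced grass */
extern const float *rt_depth;           /* W*H window-space depths in [0,1] */

void compose_frame(int w, int h)
{
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    glWindowPos2i(0, 0);                /* OpenGL 1.4 */

    /* 1. raytraced color first; no depth test needed on a fresh frame */
    glDisable(GL_DEPTH_TEST);
    glDrawPixels(w, h, GL_RGBA, GL_UNSIGNED_BYTE, rt_color);

    /* 2. raytraced depth, with color writes off (the depth buffer only
       updates while the depth test is enabled, hence GL_ALWAYS) */
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glEnable(GL_DEPTH_TEST);
    glDepthFunc(GL_ALWAYS);
    glDrawPixels(w, h, GL_DEPTH_COMPONENT, GL_FLOAT, rt_depth);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthFunc(GL_LESS);

    /* 3. the polygonal world, z-tested against the raytraced depths */
    /* draw_scene(); */
}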

[edited by - PinkyAndThaBrain on December 31, 2002 9:47:21 PM]

[edited by - PinkyAndThaBrain on December 31, 2002 9:48:13 PM]
