hybrid raytracers and programmable pipelines

Started by
11 comments, last by walkingcarcass 21 years, 3 months ago
I was thinking of using a simple raytracing engine to do things like grass, which could just be a few polys repeated zillions of times. The rest of the world would be HW-accelerated polys. The point being the raytracer is fast because there are very few surfaces to check against, and the resulting image quality should be very high (a self-shadowing lawn). It's basically texture generation which returns RGBZ, i.e. RGB with depth.

Is there an easy way to draw a texture onto a rendered scene where a pixel is only plotted if it's nearer? Can textures be drawn straight from system memory, or is it easier to modify a loaded texture? Can pixel shaders say "this pixel's z-value is ..." before the z check, or is there an abort-if instruction? Is this possible without shaders?

********

A Problem Worthy of Attack Proves Its Worth by Fighting Back
spraff.net: don't laugh, I'm still just starting...
quote:
draw a texture on a rendered scene where the pixel is only plotted if it's nearer?


Isn't that just the standard z-buffer algorithm?

I can almost certainly say that no matter what you do, raytracing grass on current hardware will be slower (you want self-shadowing as well? hmm...). Check out some of the multi-pass grass demos on nVidia's site for nice low-poly grass with movement and everything.
glDrawPixels() writes a block of pixels to the framebuffer with any of various components set, such as stencil, depth, RGB, luminance, and so on; the format argument selects which. Use glPixelStore and glPixelTransfer to control how the data is unpacked and transferred.

How fast this is, I don't know.

NOTE: you render the bitmap, with depth, and then call glDrawPixels once to transfer it into the framebuffer where the hardware rasterizes it performing all of the fragment comparisons and operations along the way.
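The per-fragment depth comparison the hardware performs during that transfer can be sketched in plain C. This is a hypothetical software version (the struct and function names are made up for illustration): merge a raytraced RGBZ tile into a framebuffer, plotting a pixel only if it is nearer.

```c
#include <stddef.h>

/* Hypothetical sketch of the z-test applied to an RGBZ tile:
 * a pixel from the raytraced source replaces the framebuffer
 * pixel only if its depth is smaller (nearer). */
typedef struct { unsigned char r, g, b; float z; } RGBZ;

void merge_rgbz(RGBZ *fb, const RGBZ *src, size_t count) {
    for (size_t i = 0; i < count; ++i) {
        if (src[i].z < fb[i].z) {  /* depth test: nearer wins */
            fb[i] = src[i];
        }
    }
}
```

This is exactly what the fragment pipeline does for you when the depth test is enabled; the point of glDrawPixels with a depth component is that you never have to write this loop yourself.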

[edited by - bishop_pass on December 25, 2002 7:12:29 AM]
_______________________________
"To understand the horse you'll find that you're going to be working on yourself. The horse will give you the answers and he will question you to see if you are sure or not."
- Ray Hunt, in Think Harmony With Horses
ALU - SHRDLU - WORDNET - CYC - SWALE - AM - CD - J.M. - K.S. | CAA - BCHA - AQHA - APHA - R.H. - T.D. | 395 - SPS - GORDIE - SCMA - R.M. - G.R. - V.C. - C.F.
glDrawPixels is horribly slow. Better would be to upload it into a texture and render that, but even so..

However, I'm willing to be proven wrong; go ahead and write that raytraced grass, I'd certainly be interested.
Actually, his idea might not be so far off. Full 3D grass is pretty expensive to render, and requires complex LOD systems to get it interactive at all. It has a computational complexity proportional to the number of blades: more blades = more power required. Raytracing, OTOH, has a complexity proportional to the screen resolution. It should be equally fast on 10 grass blades as on 10 billion. That's the theory. In practice, the limiting factor is the ray/blade intersection test, which will make it just as dependent on the blade count as the 3D polygonal solution.

Now, if you had an analytic grass field model, like a simple equation that would give you back the colour at a certain ray position and direction, then things would get interesting. A kind of noise field, using a Wolf3D-style raycasting approach. Of course, to get quality, you would need heavy supersampling. And lighting is a whole different matter. You could render it using a pixelshader and a depth texture.

I guess a raytracing approach would surely have interesting advantages over a polygonal system. But it would require lots of research and heavy optimization on the algorithmic part. I'm a bit sceptical regarding the quality of such an approach, at least at interactive rates on current hardware. I have a nicely working (and good-looking) polygonal grass system, so I won't investigate grass raytracing methods at this point. But if anyone got some results, I'd be interested to hear about it.

Thinking of it: for the lighting, using deep shadow maps would probably get you excellent shadowing quality. But then you can kiss the realtime part goodbye...

/ Yann

[edited by - Yann L on December 25, 2002 12:00:35 PM]
I think this idea has great merit for anything with extremely fine detail composed of thousands of semi-repeating objects, which will be represented on an area of the final rendering that is not too large. Examples include grass, distant fields of shrubs, forests, talus fields, crowds of people...

The key is to work out the proper intersection algorithm which is able to minimize the number of intersection tests.
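One common way to cut the number of intersection tests is a uniform grid: bucket the blades by position, so a ray sample only tests the handful of blades in its cell instead of all of them. A toy 1D sketch (all names and the fixed-size buckets are illustrative assumptions, not a real engine's structure):

```c
/* Hypothetical sketch: bucket grass blades into a uniform 1D grid
 * so a sample at position x tests only one cell's blades. */
#define NCELLS 16
#define MAXPER 8

typedef struct { double x0, x1; } Blade;   /* blade footprint on x */
typedef struct {
    int count[NCELLS];
    int idx[NCELLS][MAXPER];               /* blade indices per cell */
    double xmin, cell_w;
} Grid;

void grid_build(Grid *g, const Blade *b, int n, double xmin, double xmax) {
    g->xmin = xmin;
    g->cell_w = (xmax - xmin) / NCELLS;
    for (int c = 0; c < NCELLS; ++c) g->count[c] = 0;
    for (int i = 0; i < n; ++i) {
        int c0 = (int)((b[i].x0 - xmin) / g->cell_w);
        int c1 = (int)((b[i].x1 - xmin) / g->cell_w);
        for (int c = c0; c <= c1 && c < NCELLS; ++c)
            if (c >= 0 && g->count[c] < MAXPER)
                g->idx[c][g->count[c]++] = i;
    }
}

/* How many candidate blades must be tested at sample position x? */
int grid_candidates(const Grid *g, double x) {
    int c = (int)((x - g->xmin) / g->cell_w);
    if (c < 0 || c >= NCELLS) return 0;
    return g->count[c];
}
```

A real grass raytracer would use a 2D or 3D grid and walk the ray through the cells it crosses (DDA style), but the payoff is the same: per-sample cost drops from "all blades" to "blades in a few cells".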
I would go with fur rendering: shells (slices) and fins. It looks awesome and is quite fast (and fully animated and physically correct with gravity and all, no problems). Check the ATI homepage, they have quite a bunch of stuff about it.

grass is the fur of the world :D

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

I have to agree with davepermen; for the moment, "IBR"-type approaches are probably the best.

If you use LOD-based simplification, which an efficient model would need anyway, you likely won't need raytracing.

If you just want to use ridiculous geometry detail and rely on the output sensitivity of raytracing, then you will need adaptive supersampling; it is the only way aliasing can be "controlled" in such situations. You will need a reasonable lower bound on the number of samples per pixel so you can be reasonably sure (but never entirely) that your solution is actually converging... this isn't really realtime at the moment either, and certainly not something you would want to try to accelerate on present-day 3D hardware.
Actually, at my work we built an engine (not a game) which combined a voxel-rendered terrain engine and polygonal objects (like in the Outcast game, but more sophisticated). The voxel renderer ran on a cluster of PCs under MPI, and the polygonal objects (trees, buildings, vehicles, etc.) were rendered in DirectX. The system was very complicated and quite slow, though it gave more "photorealistic" quality than a simple polygonal one. Now it's been discarded in favor of a common polygonal renderer.
quote:Original post by serg3d
Actually, at my work we built an engine (not a game) which combined a voxel-rendered terrain engine and polygonal objects (like in the Outcast game, but more sophisticated). The voxel renderer ran on a cluster of PCs under MPI, and the polygonal objects (trees, buildings, vehicles, etc.) were rendered in DirectX. The system was very complicated and quite slow, though it gave more "photorealistic" quality than a simple polygonal one. Now it's been discarded in favor of a common polygonal renderer.


I remember Outcast; it's one of the games I paid for, and one of the games I don't want to miss at all. It was just awesome, felt that natural to play somehow.. very well done..

I dislike Outcast 2, they dropped voxels

"take a look around" - limp bizkit
www.google.com
If that's not the help you're after then you're going to have to explain the problem better than what you have. - joanusdmentia

My Page davepermen.net | My Music on Bandcamp and on Soundcloud

This topic is closed to new replies.
