new water rendering demo

very, very nice

Those aren't bugs, they're added features
Very nice indeed! The only commercial game where I've seen water look this good is Far Cry (the whole game looks awesome). If you're not sure what I'm talking about, download the demo from http://www.gamershell.com/. It's just under 500MB.

Well done, though!

-hellz
Actually, I just found a link to in-game screenshots from Far Cry, for those who want to see more but don't want to download that hefty demo. You can get them here.

Edit: Incidentally, the link to the Far Cry demo is at the bottom of that page. I can't link directly to the download as the site has referrer checks in place.

-hellz

[edited by - hellz on March 22, 2004 8:01:35 PM]
Hi, great-looking shots, and a very interesting idea!

I have some questions about this technique, if anyone would be kind enough to discuss them. Apologies if I've misunderstood anything - I hadn't heard of it until I read this thread.

First, am I right in thinking that it's essentially an adaptive tessellation technique, which uses the projection's implicit knowledge of the scaling due to perspective to provide the 'right' density of vertices at a given depth (after transformation back to world space)?
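
If I've got that right, the core of it would be something like this rough GLSL sketch (the invViewProj uniform and the y = 0 water plane are my own assumptions, not anything taken from the demo):

```glsl
// Rough sketch of the projected grid, as I understand it: each vertex
// of a regular screen-space grid is unprojected into world space and
// intersected with the water base plane y = 0. Perspective then gives
// denser vertices near the camera for free.
uniform mat4 invViewProj;   // inverse of the projector's view-projection

vec3 gridToWorld(vec2 uv)   // uv in [0,1]^2 over the grid
{
    // Unproject the grid point onto the near and far clip planes.
    vec4 nearP = invViewProj * vec4(uv * 2.0 - 1.0, -1.0, 1.0);
    vec4 farP  = invViewProj * vec4(uv * 2.0 - 1.0,  1.0, 1.0);
    nearP /= nearP.w;
    farP  /= farP.w;

    // Intersect the resulting ray with the plane y = 0.
    float t = -nearP.y / (farP.y - nearP.y);
    return mix(nearP.xyz, farP.xyz, t);
}
```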

Second, if this is the case, then while the technique could be expected to produce meshes with an optimal 'vertex density' at a given distance, it cannot be expected to select a truly representative vertex from the heightmap. E.g. if the vertex's actual position comes from a simple sampling/interpolation of the heightmap, then the selection is essentially similar to a very simple LOD technique like geomipmapping - nothing ensures the selected vertex is the most representative in the area. This could lead to some bad pops with fast motion, I think.

I can see how this is not too much of a problem for water, as it is always in motion anyway, so the pops wouldn't be as noticeable (if at all), but for terrain I think it might be too much. That said, if it's fast enough that you can have a good level of detail into the distance, it may be hard to notice anyway. Maybe there's scope for a hybrid method which uses the projected grid to get a distribution of vertices, then matches them to nearby vertices from an appropriate LOD heightmap (OK, I need to think this through more).

Next, I'm fairly sure glslang (and probably other shading languages) allows vertex shaders to read from a texture. I can see that this may be too basic for nice water (as you're doing more than just displacing a mesh by a precalculated heightfield - you're actually generating the heightfield too, right?), but it would work fine for terrain, I think. That way it's essentially displacement mapping, with the projected grid controlling tessellation.

Again, apologies if ignorance has led to me talking gibberish ;¬)

Dan Groom
Hi Dan,

It seems like you've understood the concept correctly; glad you enjoyed the demo.

The problem that the "right" vertex doesn't always get selected causes an artifact which I have called "swimming". In effect, the vertices sampled using a regular non-moving grid are no more representative than those of the projected grid, as both have an error that moves the sampled point away from the truly representative one. However, in the case of the non-moving grid the error is consistent, so it's generally not noticeable. With the projected grid the errors will change when the camera moves and thus become noticeable.

The correct way to reduce this artifact (according to signal theory) is to low-pass filter (blur) the height data to make sure that the linearly interpolated output is within an acceptable margin of error. This can be implemented by using mipmapping with a bias so that the height data reads are slightly blurry. The mipmap levels are already band-limited, so they are, in effect, representative. Mipmap bias isn't the best kind of blur though, so better results are obtained by not using bias but by doing a separate blurring pass instead.
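
In later GLSL terms the biased read might look something like this sketch (the log2-of-distance level selection and the lodBias uniform are illustrative choices, not the exact code from the demo; as discussed further down, a per-vertex read like this wasn't actually available on the hardware of the time, so it would live in a separate texture-generating pass):

```glsl
// Illustrative low-pass read: pick a mip level that roughly matches
// the local grid spacing (which grows with distance under perspective),
// then push it slightly blurrier with a bias. textureLod is used since
// reads outside the fragment stage have no derivatives for automatic
// mip selection.
uniform sampler2D heightMap;
uniform vec3  cameraPos;
uniform float lodBias;      // > 0 biases reads toward blurrier mips

float sampleHeight(vec2 uv, vec3 worldPos)
{
    float lod = log2(max(distance(cameraPos, worldPos), 1.0)) + lodBias;
    return textureLod(heightMap, uv, lod).r;
}
```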

I suspect your hybrid could be very hard to get good results from, as there will be another form of popping (if I've understood it correctly) when a projected vertex switches between two vertices in the LOD heightmap. I assume the switch between vertices is supposed to be discrete, since if it were continuous (like with linear interpolation) this hybrid would be identical to the mipmapped method in the paragraph above.

I haven't played around with glslang yet, but I'm pretty sure it requires that the hardware actually has texture units on the vertex pipeline; otherwise it will use a software callback. I'm generating the heightfield on the GPU, but that is a separate pass, so a simple texture read per vertex would be all that is required.
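
Schematically, that generation pass is just a render-to-texture fragment shader along these lines (a toy example; the demo's real height function is noise-based, not these two hard-coded sine octaves):

```glsl
// Heightfield-generation pass: render into a float texture that the
// displacement pass will later read once per grid vertex. Two scrolling
// sine octaves stand in here for the real noise-based height function.
uniform float time;

in vec2 uv;                 // texture coordinate across the heightmap
out float height;

void main()
{
    height = 0.30 * sin(dot(uv, vec2(40.0, 12.0)) + 1.7 * time)
           + 0.15 * sin(dot(uv, vec2(-23.0, 31.0)) + 2.3 * time);
}
```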

cheers,
- claes
Hi Claes,

You're right about the hybrid idea. It would indeed pop anyway, but would maybe give a more representative vertex. Probably not worth it.

Further reading revealed that texture reads in a vertex shader aren't mipmapped (i.e. you just get the base level), so it wouldn't work in a vertex shader without manually selecting mipmap levels anyway.

I've no idea if the texture read is done solely in hardware, but you're probably right - I haven't found any evidence of people using texture reads in vertex shaders in a while's googling, so chances are it's a wait-for-the-next-gen thing, as you said before. Funny though, I thought Matrox had pushed hardware displacement mapping a few years ago, which I would have thought would be done as a texture read.

Thanks,

Dan
Just as an aside: I started getting into GLSlang recently and had a look through the specs. At this point, vertex shaders CAN read textures, but they ALWAYS use the base level. You cannot set them to read other mipmap levels. Of course, fragment shaders use the correct mipmap level.
quote:Original post by mrbastard
Further reading revealed that texture reads in a vertex shader aren't mipmapped (i.e. you just get the base level), so it wouldn't work in a vertex shader without manually selecting mipmap levels anyway.


I see. But this wouldn't have to be a problem if a generated heightmap is rendered (with mipmapping and post-blurring) to a texture first and read in the vertex shader in another pass. If the vertex buffer is of the same resolution as the texture, it would be a perfect match and no filtering would be required. The upcoming OpenGL extension überbuffers should allow writing into vertex buffers as if they were textures, which would make it all possible on the current hardware generation.
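
In later GLSL the per-vertex read would reduce to a single texelFetch, as in this sketch (texelFetch is a modern stand-in; the point of überbuffers was to get the same 1:1 effect by writing the texture straight into the vertex buffer instead):

```glsl
// 1:1 match between grid vertices and heightmap texels: each vertex
// addresses exactly one texel, so no filtering is involved at all.
uniform sampler2D heightMap;

in ivec2 gridTexel;         // integer texel coordinate of this vertex

float fetchHeight()
{
    return texelFetch(heightMap, gridTexel, 0).r;
}
```
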
quote:Original post by rick_appleton
Just as an aside: I started getting into GLSlang recently and had a look through the specs. At this point, vertex shaders CAN read textures, but they ALWAYS use the base level. You cannot set them to read other mipmap levels. Of course, fragment shaders use the correct mipmap level.

Quite right, but I didn't mean setting them to use different driver-generated mipmap levels, but creating your own in a pre-process and just binding them as 3 (or so) separate textures. Any idea if the texture reads are done in the current pass or if it's a readback thing? From what people have been saying (and the lack of evidence to the contrary), it sounds like a slow copy.
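
I.e. something along these lines (a sketch; the distance thresholds are made up):

```glsl
// Manual "mipmapping" for vertex-stage reads: three pre-blurred copies
// of the heightmap bound as separate samplers, chosen by distance,
// since only the base level is accessible from the vertex shader.
uniform sampler2D height0;  // full detail
uniform sampler2D height1;  // blurred once
uniform sampler2D height2;  // blurred twice
uniform vec3 cameraPos;

float readHeight(vec2 uv, vec3 worldPos)
{
    float d = distance(cameraPos, worldPos);
    if (d < 50.0)  return texture(height0, uv).r;
    if (d < 200.0) return texture(height1, uv).r;
    return texture(height2, uv).r;
}
```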

quote:Original post by vember
I see. But this wouldn't have to be a problem if a generated heightmap is rendered (with mipmapping and post-blurring) to a texture first and read in the vertex shader in another pass. If the vertex buffer is of the same resolution as the texture, it would be a perfect match and no filtering would be required. The upcoming OpenGL extension überbuffers should allow writing into vertex buffers as if they were textures, which would make it all possible on the current hardware generation.


Yes, definitely. I suppose I keep looking for ways to use it for pregenerated heightmaps in a vertex shader, expecting there to be a nice fast way to do it in one pass (materials notwithstanding). Looks like my instinct is wrong.

Ah, right, so überbuffers imply there is a hardware path for a texture read for vertex shaders (even if it's done per vertex buffer/heightmap, rather than per vertex in a vertex shader)? Roll on the next gen of drivers, then.

Anyway, I'm off to read the ARB notes for hints.

thanks guys
