Vertex Texture Displacement & Sampling Radial Grid

Original post by reaper93


I am working on a project and am looking to re-create an effect I saw in GPU Gems for the water part of my demo; however, there is a part of the chapter I am having trouble understanding. Article source: GPU Gems 2, Chapter 18, "Using Vertex Texture Displacement for Realistic Water Rendering".
Quote:
18.2.3 Sampling Height Maps

Our implementation samples the height maps per vertex and computes the resulting displacement value in the vertex program. For sampling, we use a radial grid, centered at the camera position. This grid is tessellated in such a way that it provides more detail closer to the viewer, as shown in Figure 18-3.

[Figure 18-3: Radial Grid for Sampling Vertex Textures]

The following equations show how the vertex positions for the radial grid are computed:

r_i = a_0 + a_1 * i^4
x_{i,j} = r_i * cos(2*pi*j / M)
y_{i,j} = r_i * sin(2*pi*j / M)

where i = [0..N-1], j = [0..M-1]. We choose a_0, a_1 so that:

r_0 = a_0 = 10 cm
r_{N-1} = a_0 + a_1 * (N-1)^4 = 40 km

With this approach, we naturally get distance-based tessellation, which provides a simple level-of-detail (LOD) scheme. Other approaches, such as the ROAM or SOAR terrain-rendering algorithms, could be used here, but they require a significant amount of work on the CPU side, which would eliminate all the benefits of using vertex textures. See Chapter 2 of this book, "Terrain Rendering Using GPU-Based Geometry Clipmaps," for another approach to rendering height fields on the GPU with adaptive tessellation.

Listing 18-1 shows the simple vertex shader that implements sampling from a single height map with a radial grid.
Does the article suggest that the water plane I am using should have an LOD system in place, and that the vertices should be laid out to form a radial grid? There is no explanation in the formula of what each component stands for; can anybody make sense of it? Also, I am a bit unsure about the input format of the vertex. If you scroll down a little and look at the shader code, it states the following. I am not too sure why the vertices are packed like that, and what is j?
Quote:
// Read vertex packed as (cos(), sin(), j)
float4 INP = position;
Any pointers would be much appreciated. I have tried to look for different sources with better explanations, but resources seem to be limited on this topic.

Well, I can give it a go:

It looks like i, j are your coordinates into the "radial grid", and N, M are the maximum coordinates. I think j is the coordinate around the circles, and i is the distance out from the center.

x and y are 2D Cartesian coordinates, so x_{i,j} is the Cartesian coordinate of "radial grid" point (i, j).

The "p" is actually meant to be pi; the symbol just got mangled in the article's formatting.

a_0 and a_1 are general scaling factors, calculated as shown, and I think r is the radius of each circle in the radial grid.

Hope that is correctish and helpfulish!
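
To make that concrete, here is a minimal C++ sketch of those equations: it solves for a_0 and a_1 from the chapter's boundary conditions (r_0 = 10 cm, r_{N-1} = 40 km) and evaluates every grid point. The values of N and M, and all the names, are my own picks; the article does not specify them.

#include <cmath>
#include <cstdio>
#include <vector>

// Radial-grid equations from section 18.2.3.
// N rings (i = 0..N-1), M vertices per ring (j = 0..M-1).
int main()
{
    const int   N  = 64;        // ring count, arbitrary choice
    const int   M  = 128;       // vertices per ring, arbitrary choice
    const float r0 = 0.1f;      // innermost radius: 10 cm
    const float rN = 40000.0f;  // outermost radius: 40 km

    // r_i = a_0 + a_1 * i^4, with r_0 = a_0 and r_{N-1} = a_0 + a_1*(N-1)^4
    const float a0 = r0;
    const float a1 = (rN - a0) / std::pow(float(N - 1), 4.0f);

    std::vector<float> xy;
    for (int i = 0; i < N; ++i)
    {
        const float r = a0 + a1 * std::pow(float(i), 4.0f);
        for (int j = 0; j < M; ++j)
        {
            const float angle = 2.0f * 3.14159265f * float(j) / float(M);
            xy.push_back(r * std::cos(angle));   // x_{i,j}
            xy.push_back(r * std::sin(angle));   // y_{i,j}
        }
    }
    std::printf("a0 = %f, a1 = %f, vertices = %zu\n", a0, a1, xy.size() / 2);
    return 0;
}

The i^4 term is what gives the distance-based tessellation: ring spacing grows rapidly with i, so most of the rings cluster near the centre of the grid.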

I see, that clears up making sense of the formula a bit, thanks.

Any ideas on what the vertex positions are supposed to be packed as? The comments suggest cos(), sin(), and j.

Does it mean performing the sin/cos calculations from the following:

x_{i,j} = r_i * cos(2*pi*j / M)
y_{i,j} = r_i * sin(2*pi*j / M)

on the CPU at vertex-generation time?

If so, I imagine this suggests that, instead of using a standard flat square 2D plane, I will need a radial grid with an LOD system to allow for more detailed geometry closer to the camera?

I think I'm missing something here. What's the point of this weird vertex format? Why not just set up a radial grid on the CPU, with positions stored in the vertex, and then simply apply an offset so that it's always centred on the camera? That would give you a simple LOD scheme and let you take more height map samples near the camera.

Or is that in fact what the article's saying? I'm finding it slightly opaque.

Quote:
Original post by myers
I think I'm missing something here. What's the point of this weird vertex format? Why not just set up a radial grid on the CPU, with positions stored in the vertex, and then simply apply an offset so that it's always centred on the camera? That would give you a simple LOD scheme and let you take more height map samples near the camera.

Or is that in fact what the article's saying? I'm finding it slightly opaque.


I think the article is saying to do it on the GPU, but part of the calculation for transforming from a standard grid to a radial grid is done on the CPU and then stored in the vertex data; the rest of the transformation then looks to be done in the vertex shader.

Quote:

// Read vertex packed as (cos(), sin(), j)
float4 INP = position;

// Transform to radial grid vertex
INP.xy = INP.xy * (pow(INP.z, 4) * VOfs.z);


The article is pretty poor, to be honest; it does not give you enough information to implement this method. I might just go for a different approach, as it's giving me a headache now :)

Regarding your suggestion about the radial grid, I suppose it would mean I would have to have a dynamic vertex buffer and rebuild my plane every frame on the CPU. This was something I was hoping to avoid, considering we are already using vertex textures to modify the vertex positions on the GPU.
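
That said, going by that comment and the pow(INP.z, 4) in the shader, here is how I would guess the packed buffer gets built on the CPU. This is a C++ sketch of my reading, not the article's actual code; note the comment calls the third component j, but since the shader raises it to the fourth power, it behaves like the ring index i from the equations.

#include <cmath>
#include <cstddef>
#include <vector>

// One packed vertex as the quoted shader appears to expect:
// xy = unit-circle direction for slice j, z = ring index i.
// The shader then reconstructs the radius as pow(INP.z, 4) * VOfs.z.
struct PackedVertex { float x, y, z; };

std::vector<PackedVertex> BuildPackedGrid(int rings, int slices)
{
    std::vector<PackedVertex> verts;
    verts.reserve(std::size_t(rings) * std::size_t(slices));
    for (int i = 0; i < rings; ++i)          // ring index, stored in z
    {
        for (int j = 0; j < slices; ++j)     // slice index around the circle
        {
            const float angle = 2.0f * 3.14159265f * float(j) / float(slices);
            verts.push_back({ std::cos(angle), std::sin(angle), float(i) });
        }
    }
    return verts;
}

So, if that reading is right, the sin/cos are done once on the CPU at build time, and the per-frame radius and centring are pure vertex-shader work.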

Quote:
Original post by myers
Why not just set up a radial grid on the CPU, with positions stored in the vertex, and then simply apply an offset so that it's always centred on the camera? That would give you a simple LOD scheme and let you take more height map samples near the camera.

Or is that in fact what the article's saying? I'm finding it slightly opaque.


That's how I would do it too. There doesn't seem to be a strong argument in favour of constructing the radial mesh in the vertex shader (except being able to modify the DMParameters on the fly). Depending on the requirements of your application, you can probably find a set of parameters that is good in most or all cases. If so, constructing the entire radial mesh on the CPU at initialization would save instructions in the VS. The important things in this article are, IMO, the reminders to make sure the mesh is detailed enough (particularly in the center), to do the lighting in the FS, and to experiment with clipping in the VS as an optimization if you target older hardware (< shader model 4).
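
If you do construct the whole mesh on the CPU, the triangulation is just a regular grid that wraps around in j. A quick C++ sketch; the ring-major vertex layout and the names are my own choices, not from the article.

#include <cstdint>
#include <vector>

// Index buffer for a radial grid of `rings` x `slices` vertices stored
// ring-major (vertex (i, j) sits at i * slices + j). Each cell becomes
// two triangles; j wraps so the last slice connects back to the first.
std::vector<std::uint32_t> BuildGridIndices(int rings, int slices)
{
    std::vector<std::uint32_t> idx;
    for (int i = 0; i + 1 < rings; ++i)
    {
        for (int j = 0; j < slices; ++j)
        {
            const std::uint32_t j1 = std::uint32_t((j + 1) % slices);
            const std::uint32_t a  = i * slices + j;         // inner ring
            const std::uint32_t b  = i * slices + j1;
            const std::uint32_t c  = (i + 1) * slices + j;   // outer ring
            const std::uint32_t d  = (i + 1) * slices + j1;
            idx.insert(idx.end(), { a, c, b,   b, c, d });
        }
    }
    return idx;
}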

Quote:
Original post by reaper93
Regarding your suggestion about the radial grid, I suppose it would mean I would have to have a dynamic vertex buffer and rebuild my plane every frame on the CPU. This was something I was hoping to avoid, considering we are already using vertex textures to modify the vertex positions on the GPU.


Why would you have to rebuild the plane every frame? The only thing that ever changes is the position of the grid's centre, which can be achieved by using the identity world matrix (so that it's always at the camera position). The data in the vertex buffer itself remains static.
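
In D3D9 terms, for example, the per-frame work then shrinks to a constant update. A sketch: the register number is a placeholder, and the idea that VOfs.xy is the grid centre and VOfs.z the radial scale is my reading of the quoted shader, not something the article spells out.

#include <d3d9.h>

// Per-frame: the radial grid's vertex buffer never changes; only the
// shader constant holding the grid centre (and radial scale) does.
void UpdateWaterGridConstants(IDirect3DDevice9* device,
                              float camX, float camY, float a1)
{
    // xy = grid centre (camera position), z = radial scale, w unused;
    // mirrors the VOfs constant used by the quoted vertex shader.
    const float vofs[4] = { camX, camY, a1, 0.0f };
    device->SetVertexShaderConstantF(4, vofs, 1);   // register c4, placeholder
}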

Quote:
Original post by myers
Quote:
Original post by reaper93
Regarding your suggestion about the radial grid, I suppose it would mean I would have to have a dynamic vertex buffer and rebuild my plane every frame on the CPU. This was something I was hoping to avoid, considering we are already using vertex textures to modify the vertex positions on the GPU.


Why would you have to rebuild the plane every frame? The only thing that ever changes is the position of the grid's centre, which can be achieved by using the identity world matrix (so that it's always at the camera position). The data in the vertex buffer itself remains static.


Sure, we could just use the world matrix to translate a radial plane, but it would mean we would have to have a huge plane even if we had a relatively small amount of water. I suppose the natural LOD of a radial plane would make the wasted geometry acceptable.

The approach in the article is really only suitable for ocean rendering. For smaller, constrained areas of water, e.g. lakes and ponds, you would probably just use a regular mesh with discrete levels of detail.

If a lot of the ocean can be occluded by land (a large island, an archipelago), it's probably also a good idea to do a depth-only pass first. The fragment shader will be quite complex, and thus something you want to avoid executing as much as possible :)
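
In D3D9 terms, the depth-only pass is just the occluders drawn with colour writes disabled, so early-z then culls the hidden water pixels. A sketch; DrawTerrain and DrawWater are placeholders for the application's own draw calls.

#include <d3d9.h>

void DrawTerrain(IDirect3DDevice9* device);   // placeholder: application draw call
void DrawWater(IDirect3DDevice9* device);     // placeholder: application draw call

// Lay down depth for the land first with colour writes off, then do the
// colour passes; the expensive water fragment shader only runs where
// the ocean actually survives the depth test.
void RenderScene(IDirect3DDevice9* device)
{
    device->SetRenderState(D3DRS_COLORWRITEENABLE, 0);
    DrawTerrain(device);                       // depth-only pre-pass

    device->SetRenderState(D3DRS_COLORWRITEENABLE,
        D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
        D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
    DrawTerrain(device);                       // colour pass for the land
    DrawWater(device);                         // water last
}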
