BloodOrange1981

Creating a displacement map for oceans based on Gerstner waves



Hi everybody. If you're not playing Hotline Miami 2 right now (I will be after posting this!), I would like some help with what might be a simple question.

 

I'm currently trying to render an ocean scene, as I thought it would be a good practical project for learning new techniques along the way. I have been using this article as a reference: http://http.developer.nvidia.com/GPUGems/gpugems_ch01.html

 

I currently have a very basic Gerstner wave "generator" that runs on the CPU side, but in a single-threaded app I can't really play with more than 5 or so waves at the current level of tessellation without some nasty slowdown.
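For context, a stripped-down sketch of the kind of per-vertex sum I mean (simplified C++, not my exact code; the Wave struct and names are just illustrative):

#include <cmath>
#include <vector>

// Illustrative only - parameters follow the GPU Gems chapter:
// amplitude A, steepness Q, frequency w, phase speed phi, direction D.
struct Wave {
    float amplitude;
    float steepness;
    float frequency;
    float phaseSpeed;
    float dirX, dirY;   // normalized horizontal direction
};

struct Vec3 { float x, y, z; };

// Accumulate the displacement of every wave for one grid vertex at (x0, y0).
Vec3 gerstnerDisplace(float x0, float y0, float time, const std::vector<Wave>& waves)
{
    Vec3 d = { 0.0f, 0.0f, 0.0f };
    for (const Wave& w : waves) {
        float theta = w.frequency * (w.dirX * x0 + w.dirY * y0) + w.phaseSpeed * time;
        d.x += w.steepness * w.amplitude * w.dirX * std::cos(theta); // horizontal push
        d.y += w.steepness * w.amplitude * w.dirY * std::cos(theta);
        d.z += w.amplitude * std::sin(theta);                        // height
    }
    return d; // added to the vertex's rest position
}

Doing that for every vertex of a finely tessellated grid each frame is where the slowdown comes from, hence my interest in moving it to a shader.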

In the article I linked to there is a method of rendering a displacement/height map to a render target, which is a pretty quick way to accumulate the effects of 15 or more individual waves in a shader. Each channel (r, g, b) of the map stores the displacement of a vertex in the x, y and z directions respectively.
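If I understand the chapter correctly, the per-texel work would be something like this (a rough GLSL sketch; the uniform names and the 32-wave limit are my own guesses, not from the article):

#version 330 core
// Sketch: sum N Gerstner waves per texel and write the x/y/z displacement
// into the r/g/b channels of a floating-point render target.
uniform float u_time;
uniform int   u_numWaves;            // e.g. 15+
uniform vec4  u_waveParams[32];      // per wave: (amplitude, steepness, frequency, phaseSpeed)
uniform vec2  u_waveDir[32];         // per wave: normalized horizontal direction
uniform vec2  u_patchSize;           // world-space size covered by the map

in  vec2 v_uv;                       // 0..1 across the render target
out vec4 fragColor;

void main()
{
    vec2 pos  = v_uv * u_patchSize;  // world-space sample position
    vec3 disp = vec3(0.0);
    for (int i = 0; i < u_numWaves; ++i) {
        float A     = u_waveParams[i].x;
        float Q     = u_waveParams[i].y;
        float w     = u_waveParams[i].z;
        float phi   = u_waveParams[i].w;
        float theta = w * dot(u_waveDir[i], pos) + phi * u_time;
        disp.x += Q * A * u_waveDir[i].x * cos(theta);
        disp.y += Q * A * u_waveDir[i].y * cos(theta);
        disp.z += A * sin(theta);    // height
    }
    fragColor = vec4(disp, 1.0);     // raw, unnormalized displacement
}

The ocean grid's vertex shader would then sample the map and add the r/g/b values to each vertex's rest position.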

 

Outside of deferred rendering I have little experience with rendering to a render target and reading custom data back from the texture, so before I go ahead I wanted to check: is normalization/packing of the values really necessary?

 

I see textures as big buffers of data, so there shouldn't be a problem holding arbitrary float values from anything like -34.0f to 500.0f, for example. Loading would be simple too, right? Unless there's something about texture samplers I don't know.

 

However, I'd like to visually check the created texture, as that is a pretty simple way to see if everything is OK. Without knowing the maximum displacement created by summing the sines beforehand, how can I normalize values to the 0.0f - 1.0f range? Do I assume that the maximum possible displacement is -textureSize/2.0f < d < textureSize/2.0f, where 'd' is the displacement?

 

Thanks


I see textures as big buffers of data, so there shouldn't be a problem holding arbitrary float values from anything like -34.0f to 500.0f, for example. Loading would be simple too, right? Unless there's something about texture samplers I don't know.

Yep, use a 16-bit floating-point texture to save some bandwidth.
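In plain OpenGL that could look something like this (just a sketch; width/height and the rest of the FBO handling are whatever your setup already does):

// Create an RGBA16F texture and attach it as the colour target of an FBO.
GLuint tex, fbo;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
// Half-float channels keep sign and range without any manual packing.
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, tex, 0);
// Render the wave-accumulation pass into this FBO, then sample `tex`
// in the ocean grid's vertex shader.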


However, I'd like to visually check the created texture, as that is a pretty simple way to see if everything is OK.

Take a look at tone mapping. It is the same concept: you need to compress an unbounded range into a limited one. A very simple tone mapper would be

1 - (1/(1 + x)), or, if you want to scale it, 1 - (1/(1 + s*x)). Putting this into a simple shader:

float wave_height = ...; // signed displacement read from the map
vec3 color;
if (wave_height < 0.0) {
    // red for negative values: 1 - 1/(1 + |h|) stays in [0, 1)
    color = vec3(1.0 - 1.0/(1.0 - wave_height), 0.0, 0.0);
} else {
    // green for positive values
    color = vec3(0.0, 1.0 - 1.0/(1.0 + wave_height), 0.0);
}
// ...then write `color` out as the fragment colour.
Edited by Ashaman73
