Boat wakes on projected grid water

2 comments, last by Reitano 8 years, 10 months ago

I have implemented a water system that uses FFT to generate water waves and a projected grid to generate the water mesh. Now I want to add V-shaped boat wakes; does anyone have suggestions on this? I have read Tessendorf's iWave paper, but it works on a regular grid and doesn't seem to work with projected grid water. Thanks!


I have a couple of ideas on how I'd do that; the first tends towards higher fidelity, the second towards "good enough for gaming".

You'll need to model dynamic wave displacements over an area, and combine those with your "standing" FFT-generated waves. When you sample the height for each vertex in your projected grid, you'll also sample for any live dynamic waves in the area, and combine them if they exist. Naturally you'll want the dynamic waves to fade or die off eventually, or else you may as well be doing a completely dynamic wave simulation.
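A minimal sketch of that combination step, in Python pseudocode rather than shader code. The `fft_height` stand-in, the `Wake` class, and all constants are assumptions for illustration, not part of any real engine:

```python
import math

def fft_height(x, z, t):
    # Stand-in for the FFT ocean sample; a single sine here for illustration.
    return 0.5 * math.sin(0.3 * x + 0.7 * t)

class Wake:
    def __init__(self, cx, cz, amplitude, radius, lifetime):
        self.cx, self.cz = cx, cz
        self.amplitude = amplitude
        self.radius = radius
        self.lifetime = lifetime   # seconds until the wake has fully faded

    def height(self, x, z, age):
        if age >= self.lifetime:
            return 0.0                       # dead wakes contribute nothing
        fade = 1.0 - age / self.lifetime     # fade out so the wake dies off
        d = math.hypot(x - self.cx, z - self.cz)
        if d > self.radius:
            return 0.0                       # outside the wake's area
        # Simple radial ripple; a real boat wake would use a V-shaped kernel.
        return self.amplitude * fade * math.cos(math.pi * d / self.radius)

def sample_height(x, z, t, wakes):
    # Water waves combine linearly, so just sum the contributions.
    h = fft_height(x, z, t)
    for wake, spawn_time in wakes:
        h += wake.height(x, z, t - spawn_time)
    return h
```

The key point is the last function: each projected-grid vertex samples the standing waves once, then adds whatever live dynamic waves overlap it.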

You could also fake it with some well-defined displacement textures in the form of particle/sprites you can apply to your standing waves. In game-like scenarios, dynamic waves usually happen as linear displacements (your V-shaped wakes) or as radial impulses (think explosions), so drop those in as particle effects that spread and fade. If you're generating projected-grid vertex positions in a shader, and thus cannot afford spatial-tree lookups to find particles that might affect a given vertex... project the particles onto the average water plane and render them into a texture from the camera's position, and then sample that texture when sampling your projected grid.
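A CPU-side sketch of the particle-splat idea: additively draw each wake sprite into a small displacement texture over the water plane, then have vertices sample that texture instead of searching for nearby particles. The resolution, extent, and falloff are illustrative assumptions:

```python
import math

W = H = 64                      # texture resolution (assumption)
EXTENT = 100.0                  # world-space size covered by the texture

def splat_particles(particles):
    # particles: list of (world_x, world_z, amplitude, radius)
    tex = [[0.0] * W for _ in range(H)]
    for px, pz, amp, radius in particles:
        for j in range(H):
            for i in range(W):
                # Texel centre back in world space.
                wx = (i + 0.5) / W * EXTENT
                wz = (j + 0.5) / H * EXTENT
                d = math.hypot(wx - px, wz - pz)
                if d < radius:
                    # Additive blend, matching linear wave superposition.
                    tex[j][i] += amp * (1.0 - d / radius)
    return tex

def sample(tex, wx, wz):
    # Nearest-texel lookup; a GPU would do filtered sampling here.
    i = min(W - 1, max(0, int(wx / EXTENT * W)))
    j = min(H - 1, max(0, int(wz / EXTENT * H)))
    return tex[j][i]
```

On the GPU the splat pass is just ordinary additive-blended sprite rendering into a render target; the nested loops above only stand in for the rasterizer.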

RIP GameDev.net: launched 2 unusably-broken forum engines in as many years, and now has ceased operating as a forum at all, happy to remain naught but an advertising platform with an attached social media presence, headed by a staff who by their own admission have no idea what their userbase wants or expects. Here's to the good times; shame they exist in the past.

With the projected grid method there is no water mesh grid data on the CPU, so it's not easy to run a completely dynamic wave simulation like Tessendorf's iWave algorithm. What I want is a simple wake-propagation method that runs entirely in a pixel shader: simulate the wake propagation in a pixel shader to generate a wave height map, then generate a normal map from that height map, and combine the wake normal map when rendering the water.
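That kind of pixel-shader propagation is typically a discrete 2D wave equation over a ping-ponged height texture, followed by a normal-map pass. Here is a hedged CPU sketch of both passes; the resolution, wave-speed, and damping constants are illustrative assumptions, not values from iWave or any engine:

```python
N = 32           # height-map resolution
C = 0.25         # wave-speed term (kept small for stability)
DAMP = 0.996     # energy loss so wakes eventually die out

def propagate(curr, prev):
    # One simulation step; each texel reads only its neighbours, which is
    # exactly the access pattern a pixel shader can do with two height maps.
    nxt = [[0.0] * N for _ in range(N)]
    for j in range(1, N - 1):
        for i in range(1, N - 1):
            lap = (curr[j][i - 1] + curr[j][i + 1] +
                   curr[j - 1][i] + curr[j + 1][i] - 4.0 * curr[j][i])
            nxt[j][i] = DAMP * (2.0 * curr[j][i] - prev[j][i] + C * lap)
    return nxt

def normals(height, texel_size=1.0):
    # Central differences give the slope; the normal is (-dx, 1, -dz),
    # normalised. This is the height-map-to-normal-map pass.
    out = [[(0.0, 1.0, 0.0)] * N for _ in range(N)]
    for j in range(1, N - 1):
        for i in range(1, N - 1):
            dx = (height[j][i + 1] - height[j][i - 1]) / (2.0 * texel_size)
            dz = (height[j + 1][i] - height[j - 1][i]) / (2.0 * texel_size)
            inv = (dx * dx + 1.0 + dz * dz) ** -0.5
            out[j][i] = (-dx * inv, inv, -dz * inv)
    return out
```

On the GPU, `curr` and `prev` become two render targets swapped each frame, and the boat injects new displacement into `curr` along its hull before each step.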

The projected grid method only relates to water rendering, not the physical simulation of waves. For that you'll need other techniques, like iWave, wave particles, procedural effects, artist-authored textures etc.
Most of these techniques use textures to store the wave state. They often operate in a coordinate space that lies parallel to the water surface. The coordinate space of the projected grid is, by definition, screen space (approximately). Therefore, you'll need a mechanism to move data from one coordinate space to the other, taking into account aliasing of course.
To this end, in my engine I use two render targets. The first has the size of the projected grid and stores the total displacement at each vertex of the grid. First, the 3D displacement of the ambient (FFT) waves is written into this target. Then, other wave types are added to it (remember that water waves combine linearly). In my case all wave types are represented by textures. For antialiasing, I rely on the mipmapping capability of the GPU in conjunction with tex2Dgrad, to which I pass the analytical derivatives at each vertex.
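The point of passing analytical derivatives to tex2Dgrad is that the hardware picks a mip level from the footprint of the sample in texels. A rough sketch of that selection, with the formula simplified for illustration (real hardware LOD computation has more cases):

```python
import math

def mip_level(dudx, dvdx, dudy, dvdy, tex_size):
    # Footprint of one pixel in texels, along each screen axis, from the
    # UV derivatives supplied analytically per vertex.
    fx = math.hypot(dudx * tex_size, dvdx * tex_size)
    fy = math.hypot(dudy * tex_size, dvdy * tex_size)
    rho = max(fx, fy)
    # One mip level per doubling of the footprint.
    return max(0.0, math.log2(rho)) if rho > 0 else 0.0
```

A footprint of one texel per pixel selects mip 0; four texels per pixel selects mip 2, which is why high-frequency wave textures stop shimmering at grazing angles.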
The second render target has the size of the backbuffer and stores the horizontal components of the water normal. As above, it is initialized first with the ambient waves. Then, other wave types are blended onto it, again relying on mipmapping and, this time, hardware derivatives at each pixel.
You can blend procedural effects such as circular or Gerstner waves (trochoids) directly onto these two render targets, without baking them into textures (which would be lossy as well as unnecessary). Again, anti-alias these effects based on their frequency and the derivatives at the pipeline stage where they are drawn.
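A single Gerstner (trochoidal) wave of the kind described above can be evaluated analytically at draw time. A hedged sketch, with all parameters illustrative:

```python
import math

def gerstner(x, z, t, amplitude=0.5, wavelength=10.0, speed=2.0,
             dir_x=1.0, dir_z=0.0, steepness=0.6):
    k = 2.0 * math.pi / wavelength          # wavenumber
    dlen = math.hypot(dir_x, dir_z)
    dx, dz = dir_x / dlen, dir_z / dlen     # normalised travel direction
    phase = k * (dx * x + dz * z) - speed * k * t
    # Trochoid: crests are sharpened by also displacing points horizontally
    # toward the crest; steepness = 0 degenerates to a plain sine wave.
    horiz = steepness * amplitude * math.cos(phase)
    return (x + dx * horiz, amplitude * math.sin(phase), z + dz * horiz)
```

Because the function is analytic, its derivatives (and hence the normal and the anti-aliasing frequency mentioned above) can be computed in closed form rather than from neighbouring texels.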

This topic is closed to new replies.
