# OpenGL Breaking waves on shorelines

## Recommended Posts

Hi,

The other post on here about shorelines reminded me of a feature I would like to implement for my game but am not sure about. I was inspired by this image and would like to replicate the surf, except in realtime. A bit of background about my program: screenshots here and here.. be nice, it's nowhere near finished.

The landscape uses a standard heightmap, and I currently have a single quad covering the visible terrain acting as water. I see no reason to increase the polycount of this to produce a wave effect, as it will not add much to the overall visuals and will just waste polygons I could put to better use elsewhere (and there is a lot of stuff still to put into this). The helicopter is constantly moving across the terrain, in a similar fashion to Virus or Zeewolf, the game I am trying to remake.

One way of doing this might be a particle system that periodically shoots a few polys out from the shore, textured with an image of surf, wherever the water plane intersects the shoreline. If all else fails this is the method I will use, although I can't think of a good way to make the individual particles look like one big piece of water instead of just a load of particles.

I would like to pick your brains: is there a better way to achieve this effect? How would you do it? I'm not looking to model water movement accurately, just well enough. It would be nice to see the surf gradually move into deeper water and blend together in the same way as shown in the deviantart image, although this may be more trouble than it's worth, and it's a feature I'm willing to scrap.

I am using OpenGL if it matters, and would like to avoid using shaders if at all possible. Reading back over this I suspect I've answered my own questions (at least the ones in my head), but I'm clicking submit anyway. If anyone has tips or suggestions, or has done something similar, I'd really like to hear from you.

Many thanks,
Drew

##### Share on other sites
Pass the elevation of the terrain into the pixel shader and animate a surf texture over time.

##### Share on other sites
Thanks for that, it makes a lot of sense, however I have no knowledge of shaders (this is a separate question though). Guess I'll go and find out!

##### Share on other sites
Hi,

This old forum post might be of interest for creating surf/foam
Foam

Sadly the pictures are no longer available, but I have an offline copy I can send you if you like. PM me with an email address.

BTW, you'll want to tessellate your water quad at some point: a single big quad over a terrain of smaller quads/triangles is likely to exhibit z-buffer precision issues.
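If it helps, tessellating the water plane is just a matter of emitting a regular grid of vertices instead of four corners. A minimal sketch, with invented names:

```c
/* Illustrative vertex type; a real app would match its own vertex format. */
typedef struct { float x, y, z; } Vec3;

/* Fills out[] with a flat (n+1)*(n+1) grid spanning
 * [x0, x0+size] x [z0, z0+size] at height y. Returns the vertex count;
 * the caller builds triangle-strip indices over it as usual. */
int buildWaterGrid(Vec3 *out, int n, float x0, float z0, float size, float y)
{
    float step = size / (float)n;
    int count = 0;
    for (int row = 0; row <= n; ++row)
        for (int col = 0; col <= n; ++col) {
            out[count].x = x0 + col * step;
            out[count].y = y;
            out[count].z = z0 + row * step;
            ++count;
        }
    return count;
}
```

With the grid in place, the depth values of the water interpolate at a similar rate to the terrain beneath it, which is what reduces the z-fighting.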

##### Share on other sites
Modelling water at the shore is an extremely difficult task, and I am not aware of any work that actually succeeds at it (at least in real time on common hardware). The work by noisecrime gives brilliant results and I suggest you implement that technique for its simplicity. I think Far Cry uses a similar approach.

Personally, I am investigating a novel technique to model the behaviour of water at the shoreline, where a different physical model (other than the FFT-based one) applies. My idea is to model the shoreline with a spline, whose control points are estimated in the neighbourhood of the intersection between the water plane and the terrain.

Water is usually drawn (at least in my engine) as axis-aligned quads but, ideally, it should be oriented point by point in the direction of the spline (that is, the shoreline). This alignment affects both the geometry (waves roll towards the shore) and all texturing effects (foam included). This requires a parametrization that remaps every (x,y) point of the quad to two parameters (u,v), where u is the distance from the spline and v is the distance along the spline. These (u,v) parameters are used in place of (x,y) for texturing. The required remapping is explained in http://wscg.zcu.cz/wscg2005/Papers_2005/Full/C61-full.pdf
We can store these parameters (u,v) in a texture. Additionally, we can store in this texture the depth of water, useful to distinguish shallow and deep water in the shaders.

For a foam effect, this texture is then accessed in a pixel shader and the (u,v) parameters are used to align an animated foam texture to the shoreline. On shader model 3.0 we can also access this texture in the vertex shader and use the water depth to change the appearance of the waves (i.e. waves become steeper and steeper until they break). Trochoids are quite handy for this.

Hopefully in the future I will find the time to try this approach in my engine.

##### Share on other sites
Just to point out, the foam tutorial was not written by me and has nothing to do with me at all. I just posted the link to the thread since it was something I remembered and thought would be of interest.

##### Share on other sites
Thanks for the replies, I'll definitely look into those two. I checked out Cg & got a couple of free shader designer programs but as I suspected, my card doesn't support very many modes unfortunately.
As for the z-fighting, I fixed that by modifying the view frustum before drawing the water. It's not perfect but it works for my needs.

explanation here - http://www.codemonkeysoftware.net/content.php?article.1

I have just been playing the Age of Empires 3 demo and got some ideas from that as well. From what I can tell just by looking, the foam is a static image, and a plane moulded to the shoreline shape moves across the water. As it does so, the texture coordinates are shifted in the opposite direction to the movement (so the foam appears not to move), and it fades out on the trailing edge. It does look quite good, but you can sometimes see hard edges which give the technique away. (useful for me!)
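The trick described there can be sketched in a few lines (all names invented, and this is only my reading of what the demo appears to do):

```c
/* A foam plane that slides shoreward while its texture coordinates scroll
 * the opposite way, so the foam pattern stays fixed in world space even
 * though the plane carrying it (and its fading trailing edge) moves. */
typedef struct { float planePos; float texOffset; } FoamStrip;

void updateFoamStrip(FoamStrip *f, float speed, float texelsPerUnit, float dt)
{
    f->planePos  += speed * dt;                 /* plane drifts toward shore   */
    f->texOffset -= speed * texelsPerUnit * dt; /* texcoords scroll opposite   */
    /* planePos * texelsPerUnit + texOffset is constant, so the sampled
     * foam pattern does not appear to move. */
}
```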

edit: also I went looking for that paradise island demo & it's now at http://indago.gamez.lv/i2004/?incl=sala.html .. I'm jealous :D

[Edited by - DrewGreen on September 8, 2005 9:30:41 PM]

##### Share on other sites
Moving waves along the beach would be very difficult to make without additional geometry, or really shader-intensive.
You could tessellate your grid and store texture coordinates for the wave texture: one coordinate is the distance to the shore, but the other is not so trivial to compute.
On second thought, you can tessellate the water into a regular grid and evaluate the distance to the nearest shore for every vertex (those beneath the terrain are 0).
Then take the vertices that are 0 (beneath the terrain) but have non-zero neighbours, and make a contour from them.
Then collect their neighbours with values 0 < x <= 1, and create a contour from them too.
And so on: extract as many contours as you want, like an onion.
Then walk each contour, compute the length from the start and assign it as the U texture coordinate. The V texture coordinate is the number of the onion contour. :)
This can be made to look good, especially if the contours' (u,v)s are smoothed by a spline.
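The onion labelling above amounts to a breadth-first flood fill over the water grid. A small sketch (grid size and names are illustrative only):

```c
#define W 4
#define H 4

/* Vertices under the terrain get ring 0, their wet neighbours ring 1, and
 * so on outward. The ring number becomes the V texture coordinate; the arc
 * length along each ring would then supply U. */
void onionRings(int underTerrain[H][W], int ring[H][W])
{
    /* initialise: land = ring 0, open water = unvisited (-1) */
    for (int y = 0; y < H; ++y)
        for (int x = 0; x < W; ++x)
            ring[y][x] = underTerrain[y][x] ? 0 : -1;

    /* grow one ring outward per pass until every vertex is labelled */
    for (int changed = 1, r = 0; changed; ++r) {
        changed = 0;
        for (int y = 0; y < H; ++y)
            for (int x = 0; x < W; ++x) {
                if (ring[y][x] != -1) continue;
                int touchesRing =
                    (x > 0     && ring[y][x-1] == r) ||
                    (x < W - 1 && ring[y][x+1] == r) ||
                    (y > 0     && ring[y-1][x] == r) ||
                    (y < H - 1 && ring[y+1][x] == r);
                if (touchesRing) { ring[y][x] = r + 1; changed = 1; }
            }
    }
}
```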

##### Share on other sites
Quote:
Original post by DrewGreen
As for the z-fighting, I fixed that by modifying the view frustum before drawing the water. It's not perfect but it works for my needs.
explanation here - http://www.codemonkeysoftware.net/content.php?article.1
Actually, pushing out the near plane is the preferred method of combating depth buffer issues. You should try to have the near plane as far away as possible and the far plane as close as possible, but the far plane distance doesn't have as much of an effect on depth precision as the near plane distance. Read here for more info. It's OpenGL-centric, but it applies equally to Direct3D.
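The asymmetry is easy to see from the perspective depth mapping. A small sketch (standard formula for a [0,1] depth range, variable names mine):

```c
#include <math.h>

/* Window-space depth for a perspective projection with near plane n and
 * far plane f, at positive eye-space distance z. Depth is hyperbolic in z,
 * so most of the [0,1] range is spent just beyond the near plane. */
float windowDepth(float n, float f, float z)
{
    return (f / (f - n)) * (1.0f - n / z);
}
```

For example, with n = 0.1 and f = 1000, a point only 10 units away already lands at roughly 0.99 of the depth range; pushing n out to 1.0 drops that to about 0.90, leaving far more precision for everything beyond it.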

##### Share on other sites
Breaking waves curl over on top of other water, so a heightfield is incapable of representing the geometry. There is plenty of research to check out, going back decades. It's not easy. Good luck.

##### Share on other sites
A lot of people way overuse shaders nowadays. In my opinion there are a lot of things that can be done the old-fashioned way just as fast or faster with a little clever engineering. (Before there is a flame war: I do think there are also many things shaders do a lot better, but it gets a little silly to bust out a pixel shader just to do really easy things. I once saw someone require pixel shader 2.0 for a serious demo just so they would have the ability to multiply two colored quads. Why this was supposedly better than multitexturing (GL_MODULATE) or a suitable glBlendFunc I will never know.)

With that being said, I immediately thought to myself: what would I do to achieve that effect? After looking at the picture, here is what I came up with:

Create a spline around your island. This spline will make up the inner edge of a triangle strip wrapping all the way around the island, moving up and down with the surface of the ocean. The outer edge will be about 5-6 feet out, creating a strip around the island. (Picture an island with a tutu and you will understand what I mean.) Then texture-map this strip with an alpha white texture. This texture should consist of several alpha lines snaking across it, with a thick line at the bottom (for the shore foam). Slowly transform the texture matrix to make all of the coordinates pulsate back and forth as the water rises and falls, in addition to slowly rotating the entire spline's texture in one direction around the island. Add 1-3 more of these tutus rotating in slightly different directions to make a really neat compound blending effect as the alpha maps layer on top of each other. Finally, throw in a small shimmering particle effect for the foam spraying around the edges where the water meets the shore. (It might help if you make your edge-definition spline also serve as an emitter for your particles.)
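A minimal sketch of building one such "tutu" strip, assuming you already have shoreline points and outward-pointing normals sampled from the spline (all names are invented):

```c
typedef struct { float x, y, z; } V3;

/* Emits 2*n vertices alternating inner/outer edge, suitable for one
 * GL_TRIANGLE_STRIP. Inner edge hugs the shoreline at sea level; outer
 * edge sits stripWidth units out to sea along each outward normal.
 * Texture-matrix animation then scrolls the foam texture over it. */
int buildSurfStrip(const V3 *shore, const V3 *outward, int n,
                   float stripWidth, float seaLevel, V3 *out)
{
    int count = 0;
    for (int i = 0; i < n; ++i) {
        /* inner edge: on the shoreline, at sea level */
        out[count].x = shore[i].x;
        out[count].y = seaLevel;
        out[count].z = shore[i].z;
        ++count;
        /* outer edge: pushed out to sea along the shoreline normal */
        out[count].x = shore[i].x + outward[i].x * stripWidth;
        out[count].y = seaLevel;
        out[count].z = shore[i].z + outward[i].z * stripWidth;
        ++count;
    }
    return count;
}
```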

Boom. Done. Ok, maybe trickier than you expected, but in my head this looks good.

OH! BTW, check out the source for my demo Ocean for a really cool texture coordinate generation effect that simulates reflections on moving water. You can get it under the creative contest listing for NeHe Creative 2004.

Tell me what you think if you like it!
