# OpenGL: Rendering Liquids

## Recommended Posts

Hi guys, I'm working on a simple 2D platform game (which uses OpenGL ES for rendering) that involves fluid simulation. So far, each particle of the fluid is rendered as a single textured sprite, but the effect isn't very good, so I'm looking for an alternative way to render the fluid when particles are in contact. What I want to achieve is something like PixelJunk Shooter's water rendering. The only thing I've tried so far is Delaunay triangulation on the particles: this creates a decent mesh out of a group of connected particles, which is then rendered with OpenGL. However, the result isn't what I expected, because Delaunay triangulation creates every possible triangle that can be generated from the given set of points. So, if you have a situation like this one:
|
|O
|OOO
|AOOOOOO
-------OO
       OO
       OO
       OO
       B

You end up having additional edges (for instance the one that connects A and B) that shouldn't be there. Since I'm on a mobile device, I can't afford to execute too much code, so what I was thinking was to remove all the edges that are longer than a given threshold (let's say that two particles interact when their distance is less than N; then I can safely remove all edges longer than N), and then render the fluid mesh below the obstacles on the screen. The effect won't be extremely good, but it should at least remove the connections that aren't needed. Do you have any other suggestions? Maybe an alternative approach that can generate a simpler mesh in less time?
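For what it's worth, the edge-length filter can be applied per triangle rather than per edge: drop any triangle whose longest edge exceeds the interaction radius N. A minimal sketch of the idea (the `Point`/`Tri` types are illustrative, not from any particular library):

```cpp
#include <vector>

struct Point { float x, y; };
struct Tri   { int a, b, c; };   // indices into the point list

static float dist2(const Point& p, const Point& q) {
    float dx = p.x - q.x, dy = p.y - q.y;
    return dx * dx + dy * dy;
}

// Keep only triangles whose edges are all shorter than maxEdge.
std::vector<Tri> filterLongTriangles(const std::vector<Point>& pts,
                                     const std::vector<Tri>& tris,
                                     float maxEdge) {
    std::vector<Tri> out;
    float m2 = maxEdge * maxEdge;          // compare squared lengths, no sqrt
    for (const Tri& t : tris) {
        if (dist2(pts[t.a], pts[t.b]) <= m2 &&
            dist2(pts[t.b], pts[t.c]) <= m2 &&
            dist2(pts[t.c], pts[t.a]) <= m2)
            out.push_back(t);
    }
    return out;
}
```

Comparing squared distances avoids a sqrt per edge, which matters when this runs every frame on mobile.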

##### Share on other sites
rubicondev    296
I'm sure a 2D iso-surface is what you need: basically a marching cubes implementation in 2D only. I'm sure this would be much simpler than the 3D case it's known for, but it still won't be trivial. I've not seen a demo of it done in 2D, but tbh I never looked either.

Some kind of 2D metaballs might work too. At least it's a good search term for you.

##### Share on other sites
PolyVox    712
Quote:
 Original post by Rubicon...Basically a marching cubes implementation in 2D only...

If you want to try this then 'Marching Squares' is the term to search for.
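For reference, the core of marching squares is just classifying each grid cell by which of its four corners lie inside the fluid; the resulting 4-bit case index (0-15) then selects a precomputed edge/triangle pattern for that cell. A minimal sketch of the classification step (my own illustration, computing only the index):

```cpp
#include <vector>

// inside[y][x] is true where the fluid field exceeds the iso-threshold.
// Returns the marching-squares case index for the cell whose lower-left
// corner is (x, y): 0 = empty cell, 15 = fully inside.
int cellCase(const std::vector<std::vector<bool>>& inside, int x, int y) {
    int c = 0;
    if (inside[y][x])         c |= 1;   // bottom-left corner
    if (inside[y][x + 1])     c |= 2;   // bottom-right corner
    if (inside[y + 1][x + 1]) c |= 4;   // top-right corner
    if (inside[y + 1][x])     c |= 8;   // top-left corner
    return c;
}
```

The 16-entry lookup table that maps each case to triangles is the other half of the algorithm; it's small enough to hard-code once.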

##### Share on other sites
taby    1265
I have done this before, and even so I have trouble following the example on wikipedia.org, so perhaps that isn't the best example to go from.

Anyway, if you need some working source code for the 2D case, you can send me a private message, or pry it out of http://nd-disconnectedness.googlecode.com/files/nd-disconnectednessv1.zip -- this zip file contains code for the 1D and 2D cases. In the 2D case I convert a grayscale image to a mesh.

##### Share on other sites
rubicondev    296
Quote:
Original post by PolyVox
Quote:
 Original post by Rubicon...Basically a marching cubes implementation in 2D only...

If you want to try this then 'Marching Squares' is the term to search for.
LOL, I guess I shoulda worked that one out for myself :)

##### Share on other sites
gabriele farina
Great, thanks for the code.

I already did a 2D marching squares implementation, but it wasn't working as expected. However, I think the problem might have been that the grid was too small.
The other problem I was having was that the generated mesh was too complex (too many triangles).

I'll take a look at the code and let you know if I can achieve a good result.

##### Share on other sites
taby    1265
Quote:
Original post by gabriele farina
Great, thanks for the code.

I already did a 2D marching squares implementation, but it wasn't working as expected. However, I think the problem might have been that the grid was too small.
The other problem I was having was that the generated mesh was too complex (too many triangles).

I'll take a look at the code and let you know if I can achieve a good result.

You will find that this code also gives lots of triangles. It's the nature of the beast.

Some kind of naive mesh simplification/decimation algorithm might work for you, like combining all adjacent squares on each "line" into a single rectangle.
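The row-merging idea above can be sketched like this (a naive, hypothetical version that only merges horizontally; each unit-height run becomes one quad, i.e. two triangles, instead of two triangles per cell):

```cpp
#include <vector>

struct Rect { int x, y, w; };  // unit-height rectangle, w cells wide

// Collapse each horizontal run of filled cells into a single rectangle.
std::vector<Rect> mergeRows(const std::vector<std::vector<bool>>& filled) {
    std::vector<Rect> rects;
    for (int y = 0; y < (int)filled.size(); ++y) {
        int x = 0, width = (int)filled[y].size();
        while (x < width) {
            if (!filled[y][x]) { ++x; continue; }
            int start = x;
            while (x < width && filled[y][x]) ++x;   // extend the run
            rects.push_back({start, y, x - start});
        }
    }
    return rects;
}
```

A full greedy meshing pass would also merge vertically adjacent runs of equal width, but even this one-axis version cuts the triangle count substantially on large filled regions.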

##### Share on other sites
rubicondev    296
Not sure why you're getting too many triangles, but if they're mainly internal then it should be fairly easy to navigate the complex mesh you get and find just the edges, then restitch them using ear clipping.
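The edge-finding step mentioned above can be done by counting how many triangles reference each edge: an edge used by exactly one triangle lies on the boundary. A rough sketch (illustrative types, not from any particular library):

```cpp
#include <map>
#include <utility>
#include <vector>

struct Tri { int a, b, c; };   // indices into a vertex list

// Return the edges that belong to exactly one triangle (the mesh boundary).
std::vector<std::pair<int,int>> boundaryEdges(const std::vector<Tri>& tris) {
    std::map<std::pair<int,int>, int> count;
    // Normalize edge direction so (u,v) and (v,u) map to the same key.
    auto key = [](int u, int v) { return u < v ? std::make_pair(u, v)
                                               : std::make_pair(v, u); };
    for (const Tri& t : tris) {
        ++count[key(t.a, t.b)];
        ++count[key(t.b, t.c)];
        ++count[key(t.c, t.a)];
    }
    std::vector<std::pair<int,int>> edges;
    for (const auto& e : count)
        if (e.second == 1)               // shared edges appear twice
            edges.push_back(e.first);
    return edges;
}
```

The resulting boundary loop could then be re-triangulated with ear clipping, as suggested.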

##### Share on other sites
taby    1265
Quote:
Original post by Rubicon
Not sure why you're getting too many triangles, but if they're mainly internal then it should be fairly easy to navigate the complex mesh you get and find just the edges, then restitch them using ear clipping.

I think you're probably right, that they're internal. Let's all remember though that the time taken to decimate the mesh on the CPU has to be roughly less than the time it takes to transfer the non-optimized mesh to GPU RAM and render it, otherwise decimation's not even worth it in the first place. I should have made this explicit beforehand.

##### Share on other sites
rouncED    103
This is an extremely awesome idea! I'd love to see it once you've got it working, I bet it's really cool :)

All I could think of is to grid up all the particles into a mesh, but then it wouldn't be able to spread apart, so that idea doesn't work.

##### Share on other sites
rubicondev    296
Quote:
Original post by taby
Quote:
Original post by Rubicon
Not sure why you're getting too many triangles, but if they're mainly internal then it should be fairly easy to navigate the complex mesh you get and find just the edges, then restitch them using ear clipping.

I think you're probably right, that they're internal. Let's all remember though that the time taken to decimate the mesh on the CPU has to be roughly less than the time it takes to transfer the non-optimized mesh to GPU RAM and render it, otherwise decimation's not even worth it in the first place. I should have made this explicit beforehand.
You're right actually. Even if you're targeting quite old hardware, "too many" triangles is usually many orders of magnitude above "a visually acceptable amount". I'd just run with it as is, but I was trying to help the poster.

This particular problem is going to be pure CPU and I'd suggest getting off it asap.

##### Share on other sites
taby    1265
Quote:
Original post by Rubicon
Quote:
Original post by taby
Quote:
Original post by Rubicon
Not sure why you're getting too many triangles, but if they're mainly internal then it should be fairly easy to navigate the complex mesh you get and find just the edges, then restitch them using ear clipping.

I think you're probably right, that they're internal. Let's all remember though that the time taken to decimate the mesh on the CPU has to be roughly less than the time it takes to transfer the non-optimized mesh to GPU RAM and render it, otherwise decimation's not even worth it in the first place. I should have made this explicit beforehand.
You're right actually. Even if you're targeting quite old hardware, "too many" triangles is usually many orders of magnitude above "a visually acceptable amount". I'd just run with it as is, but I was trying to help the poster.

This particular problem is going to be pure CPU and I'd suggest getting off it asap.

That's a good idea in and of itself... I assume you mean using a geometry shader to generate the triangles.

##### Share on other sites
emiel1    166
I was thinking of using 2D metaballs, but instead of triangulating them, using the GPU to calculate the "charges":

1. Draw precalculated textures with the "charges" of one metaball onto a render surface with additive alpha blending. The "charges" will accumulate.
2. Use a post-processing effect to apply the threshold.

This method computes the iso-surface with pixel precision. I don't know if it has any major flaws; I hope someone else can comment on that.

Edit: I found a reference: here.

Emiel1
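For reference, the two steps above can be mimicked on the CPU, which is useful for testing before writing shaders. A rough sketch; the inverse-square falloff and the sample threshold are my assumptions, not from the post:

```cpp
#include <vector>

struct Ball { float x, y, r; };   // metaball center and radius

// Accumulate per-metaball "charges" additively into a field (the blending
// pass), then threshold it to get the iso-surface mask (the post-process).
std::vector<std::vector<bool>> metaballMask(int w, int h,
                                            const std::vector<Ball>& balls,
                                            float threshold) {
    std::vector<std::vector<bool>> mask(h, std::vector<bool>(w, false));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            float charge = 0.0f;
            for (const Ball& b : balls) {          // additive blending step
                float dx = x - b.x, dy = y - b.y;
                charge += b.r * b.r / (dx * dx + dy * dy + 1e-6f);
            }
            mask[y][x] = charge >= threshold;      // threshold step
        }
    return mask;
}
```

On the GPU the inner loop becomes one textured quad per metaball blended into an FBO, and the threshold becomes a fragment shader over the result.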

##### Share on other sites
rubicondev    296
Quote:
Original post by taby
Quote:
Original post by Rubicon
Quote:
Original post by taby
Quote:
Original post by Rubicon
Not sure why you're getting too many triangles, but if they're mainly internal then it should be fairly easy to navigate the complex mesh you get and find just the edges, then restitch them using ear clipping.

I think you're probably right, that they're internal. Let's all remember though that the time taken to decimate the mesh on the CPU has to be roughly less than the time it takes to transfer the non-optimized mesh to GPU RAM and render it, otherwise decimation's not even worth it in the first place. I should have made this explicit beforehand.
You're right actually. Even if you're targeting quite old hardware, "too many" triangles is usually many orders of magnitude above "a visually acceptable amount". I'd just run with it as is, but I was trying to help the poster.

This particular problem is going to be pure CPU and I'd suggest getting off it asap.

That's a good idea in and of itself... I assume you mean using a geometry shader to generate the triangles.

Not particularly, just generate the verts in the fastest way (i.e. no edge finding and ear clipping) and send 'em up. However, if you can do this code in a GS and don't mind limiting your market, that would be even better for performance, yeah.

##### Share on other sites
jjanevski    200
Hey, I'm interested in this type of 2D fluid rendering that was shown in PixelJunk Shooter as well. Looking forward to seeing your results, as well as the expansion of this thread with some good info. Thanks.

##### Share on other sites
taby    1265
Quote:
Original post by emiel1
I was thinking of using 2D metaballs, but instead of triangulating them, using the GPU to calculate the "charges":

1. Draw precalculated textures with the "charges" of one metaball onto a render surface with additive alpha blending. The "charges" will accumulate.
2. Use a post-processing effect to apply the threshold.

This method computes the iso-surface with pixel precision. I don't know if it has any major flaws; I hope someone else can comment on that.

Edit: I found a reference: here.

Emiel1

That's a pretty ingenious approach, given its simplicity (simple conversion of grayscale data to binary data).

The edges will be aliased, mind you, but that's not really a show stopper. Also, if your field resolution is less than the screen resolution (e.g., you're stretching it out), don't use nearest neighbour interpolation or your result will look like something straight out of Q*Bert.
