OpenGL Water Waves

Steve5050


Hi. Using VB.NET 2003 with DX 9.0c, I've been trying to make water waves for many days now and I just cannot figure it out. This is code from NeHe Lesson 11 in OpenGL; I'll see if I can upload the code with the exe so you can see that I want to do something similar to the lesson. I already tried to change the code over to DX, but it's not working. Actually, the source and exe are Here.
Private Shared points(44, 44, 2) As Single
Private Shared Function DrawGLScene() As Boolean
			Dim x, y As Integer
			Dim float_x, float_y, float_xb, float_yb As Single

			Gl.glClear(Gl.GL_COLOR_BUFFER_BIT Or Gl.GL_DEPTH_BUFFER_BIT) ' Clear The Screen And The Depth Buffer
			Gl.glLoadIdentity() ' Reset The View
			Gl.glTranslatef(0, 0, -12)
			Gl.glRotatef(xrot, 1, 0, 0)
			Gl.glRotatef(yrot, 0, 1, 0)
			Gl.glRotatef(zrot, 0, 0, 1)
			Gl.glBindTexture(Gl.GL_TEXTURE_2D, texture(0))
			Gl.glBegin(Gl.GL_QUADS)
				For x = 0 To 43
					For y = 0 To 43
						float_x = x / 44.0f
						float_y = y / 44.0f
						float_xb = (x + 1) / 44.0f
						float_yb = (y + 1) / 44.0f
						Gl.glTexCoord2f(float_x, float_y)
						Gl.glVertex3f(points(x, y, 0), points(x, y, 1), points(x, y, 2))
						Gl.glTexCoord2f(float_x, float_yb)
						Gl.glVertex3f(points(x, y + 1, 0), points(x, y + 1, 1), points(x, y + 1, 2))
						Gl.glTexCoord2f(float_xb, float_yb)
						Gl.glVertex3f(points(x + 1, y + 1, 0), points(x + 1, y + 1, 1), points(x + 1, y + 1, 2))
						Gl.glTexCoord2f(float_xb, float_y)
						Gl.glVertex3f(points(x + 1, y, 0), points(x + 1, y, 1), points(x + 1, y, 2))
					Next y
				Next x
			Gl.glEnd()

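			' Every third frame, cycle each row's z (height) values one cell to the left,
			' wrapping the first value around to the end, so the wave travels across the mesh.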
			If wiggle_count = 2 Then
				For y = 0 To 44
					hold = points(0, y, 2)
					For x = 0 To 43
						points(x, y, 2) = points(x + 1, y, 2)
					Next x
					points(44, y, 2) = hold
				Next y
				wiggle_count = 0
			End If

			wiggle_count += 1

			xrot += 0.3f
			yrot += 0.2f
			zrot += 0.4f
			Return True ' Keep Going
	 End Function

Not only have I tried that code, I also tried making a grid and moving the vertices, and nothing is working for me; I'm screwing up and can't figure it out. If you run that lesson, it will give you some insight into what I'm trying to do. Thanks, Steve

Quote:
I already tried to change the code over to dx but it's not working.

Are you working in DirectX or OpenGL, Steve5050? You mention DX 9.0c but your question appears to be related to OpenGL. If so, you should consider posting to the OpenGL forum.

If you're trying to rewrite the OpenGL code for DirectX:
1. You need to describe what's "not working."
2. Post some DirectX code for people to look at.

I'm using DirectX. That is why I said I was using DX and posted in the DX forum. I was trying to convert the OGL code to use in DX. What isn't working is that it doesn't show up; just a small triangle was flickering if I aimed the camera just the right way. I also didn't have culling set to none while trying the code. No errors. Have you seen the source and exe in the link? I cut out the code that I was trying to use, but it basically wasn't much different from the OGL code.

Gl.glVertex3f(points(x, y, 0), points(x, y, 1), points(x, y, 2))

Hold on, I just spotted my code; this would be one of the ways I tried it:

Private Shared points(44, 44, 2) As Single

Public Function LoadVB() As Boolean
	vb = New VertexBuffer(GetType(CustomVertex.PositionNormalTextured), numVerts, Device, Usage.WriteOnly, CustomVertex.PositionNormalTextured.Format, Pool.Managed)
	verti = New CustomVertex.PositionNormalTextured(numVerts) {}
	vb.Lock(0, 0)
	Dim x, y As Integer
	Dim float_x, float_y, float_xb, float_yb As Single
	For x = 0 To 43
		For y = 0 To 43
			float_x = x / 44.0F
			float_y = y / 44.0F
			float_xb = (x + 1) / 44.0F
			float_yb = (y + 1) / 44.0F

			verti(0).X = points(x, y, 0)
			verti(0).Y = points(x, y, 1)
			verti(0).Z = points(x, y, 2)
			verti(0).Tu = float_x
			verti(0).Tv = float_y

			verti(1).X = points(x, y + 1, 0)
			verti(1).Y = points(x, y + 1, 1)
			verti(1).Z = points(x, y + 1, 2)
			verti(1).Tu = float_x
			verti(1).Tv = float_yb

			verti(2).X = points(x + 1, y + 1, 0)
			verti(2).Y = points(x + 1, y + 1, 1)
			verti(2).Z = points(x + 1, y + 1, 2)
			verti(2).Tu = float_xb
			verti(2).Tv = float_yb

			verti(3).X = points(x + 1, y, 0)
			verti(3).Y = points(x + 1, y, 1)
			verti(3).Z = points(x + 1, y, 2)
			verti(3).Tu = float_xb
			verti(3).Tv = float_y
		Next y
	Next x
	'vb.SetData(verti, 0, LockFlags.None)
	vb.Unlock()
	Return True
End Function
Ok, let me know what you think.

Thanks,
Steve

Quote:
Have you seen the source and exe in the link?

The link you provided was to the NeHe OpenGL tutorial. That was the reason for my question.

EDIT:
Have you tried to render just a single triangle, to ensure your rendering loop is correct?
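For example, a minimal sketch using pre-transformed vertices (screen-space coordinates, so no vertex buffer, matrices, or lighting are involved), assuming Device is your device and System.Drawing is imported:

Dim tri(2) As CustomVertex.TransformedColored
' (x, y, z, rhw, color) in screen space; wound clockwise so DirectX won't cull it.
tri(0) = New CustomVertex.TransformedColored(100.0F, 400.0F, 0.0F, 1.0F, Color.Red.ToArgb())
tri(1) = New CustomVertex.TransformedColored(300.0F, 100.0F, 0.0F, 1.0F, Color.Green.ToArgb())
tri(2) = New CustomVertex.TransformedColored(500.0F, 400.0F, 0.0F, 1.0F, Color.Blue.ToArgb())
Device.RenderState.Lighting = False
Device.VertexFormat = CustomVertex.TransformedColored.Format
Device.DrawUserPrimitives(PrimitiveType.TriangleList, 1, tri)

If that triangle shows up, the device setup and render loop are fine and the problem is in the vertex buffer contents.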

EDIT2:
Quote:
I also didn't have culling set to none

Try setting cullmode to CULLNONE.
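In Managed DirectX that's a single render state (assuming Device is the device from your code):

' Turn off back-face culling while debugging so winding order can't hide the mesh.
Device.RenderState.CullMode = Cull.None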

OpenGL uses a right-handed coordinate system and DirectX uses a left-handed one. If you use the same routine for setting up the points that the tutorial uses, all your triangles will be wound CCW instead of CW, and DirectX culls CCW faces by default.

If so, you should be able to make the change in your routine to:

verti(...).X = points(x, y, 2)
verti(...).Y = points(x, y, 1)
verti(...).Z = points(x, y, 0)


[Edited by - Buckeye on July 16, 2008 4:02:17 PM]

Ah, I put the link in because the source and exe are at the bottom of the tutorial. I can see why that would be confusing; usually you get sent to a file download. Sorry about that, I get the confusing part now.

See my edited comments above. I was editing during your last post.

EDIT: I think I misread your code. IGNORE the comment about changing the points!

You should be able to generate CW quads by changing the order of the vertex loading:

verti(3).X = points(x, y, 0)
verti(3).Y = points(x, y, 1)
verti(3).Z = points(x, y, 2)
...
verti(2).X = points(x, y, 0)
verti(2).Y = points(x, y, 1)
verti(2).Z = points(x, y, 2)
...
verti(1).X = points(x, y, 0)
verti(1).Y = points(x, y, 1)
verti(1).Z = points(x, y, 2)
...
verti(0).X = points(x, y, 0)
verti(0).Y = points(x, y, 1)
verti(0).Z = points(x, y, 2)

I'm going to go back and try it again; judging by your answer, you're not seeing anything wrong with the code. I'll be back.

I tried reversing the order and that showed nothing at all. I tried DrawUserPrimitives and didn't get anywhere, so I'll put up all the code; this is the closest I came to seeing anything. I did test a box in the code and it worked perfectly. I'm thinking that getting 4 vertices out of 3-vertex triangles might be the problem; I'm not coming up with the solution, that's for sure. I tried going with three verts and lost that battle.


Private Shared points(44, 44, 2) As Single
Private numVerts As Integer = 44
Private Verts() As CustomVertex.PositionTextured

Public Sub LoadVB()
	vb = New VertexBuffer(GetType(CustomVertex.PositionTextured), numVerts, Device, Usage.WriteOnly, VertexFormats.Position And VertexFormats.Texture0, Pool.Managed)
	Dim Verts(numVerts) As CustomVertex.PositionTextured
	Dim x, y As Integer
	Dim float_x, float_y, float_xb, float_yb As Single
	For x = 0 To 43
		For y = 0 To 43
			float_x = x / 44.0F
			float_y = y / 44.0F
			float_xb = (x + 1) / 44.0F
			float_yb = (y + 1) / 44.0F

			Verts(0) = New CustomVertex.PositionTextured(points(x, y, 0), points(x, y, 1), points(x, y, 2), float_x, float_y) '0
			Verts(1) = New CustomVertex.PositionTextured(points(x, y + 1, 0), points(x, y + 1, 1), points(x, y + 1, 2), float_x, float_yb) '1
			Verts(2) = New CustomVertex.PositionTextured(points(x + 1, y + 1, 0), points(x + 1, y + 1, 1), points(x + 1, y + 1, 2), float_xb, float_yb) '2
			Verts(3) = New CustomVertex.PositionTextured(points(x + 1, y, 0), points(x + 1, y, 1), points(x + 1, y, 2), float_xb, float_y) '3
		Next y
	Next x
	vb.SetData(Verts, 0, LockFlags.None)
End Sub

Public Sub Render()
	Device.RenderState.Lighting = False
	Device.RenderState.ZBufferEnable = True
	Device.Transform.World = Me.Mat1
	Device.VertexFormat = CustomVertex.PositionTextured.Format
	Device.SetTexture(0, Tex)
	Device.SetStreamSource(0, vb, 0)
	Device.DrawPrimitives(PrimitiveType.TriangleStrip, 0, numVerts / 3)
End Sub
Wow. You've got several problems, now that I see the code.

1. The biggest problem: you don't increment the Verts() subscript. You continually load data into just Verts(0)..Verts(3), so each pass through the loop overwrites the previous quad's data.

2. The OpenGL data is arranged to render quads ( Gl.glBegin(Gl.GL_QUADS) ) and you're rendering the same data as a triangle strip. I'm pretty sure that won't work. Your best bet may be to create 2 triangles ( 6 vertices ) for each set of quad data and render a triangle list.

3. As I mentioned in one of my previous posts, OpenGL and DirectX use opposite winding conventions: data set up the OpenGL way comes out CCW, and DirectX culls CCW faces by default, so the vertex order has to be rearranged.
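Here's a minimal, untested sketch of those fixes, reusing the names from your posts ( points(44, 44, 2), Device, vb, Tex, Mat1 ): two triangles per grid cell written into a triangle list, with the write index advanced as it goes. Note also that combining FVF flags needs Or, not And (And masks the bits to zero); using the prebuilt format avoids that entirely.

Private Const GridSize As Integer = 44
' 44 x 44 cells, 2 triangles each, 3 vertices per triangle.
Private numVerts As Integer = GridSize * GridSize * 6

Public Sub LoadVB()
	vb = New VertexBuffer(GetType(CustomVertex.PositionTextured), numVerts, Device, Usage.WriteOnly, CustomVertex.PositionTextured.Format, Pool.Managed)
	Dim Verts(numVerts - 1) As CustomVertex.PositionTextured
	Dim i As Integer = 0
	For x As Integer = 0 To GridSize - 1
		For y As Integer = 0 To GridSize - 1
			Dim u0 As Single = x / CSng(GridSize)
			Dim v0 As Single = y / CSng(GridSize)
			Dim u1 As Single = (x + 1) / CSng(GridSize)
			Dim v1 As Single = (y + 1) / CSng(GridSize)
			' The four corners of this grid cell.
			Dim p00 As New CustomVertex.PositionTextured(points(x, y, 0), points(x, y, 1), points(x, y, 2), u0, v0)
			Dim p01 As New CustomVertex.PositionTextured(points(x, y + 1, 0), points(x, y + 1, 1), points(x, y + 1, 2), u0, v1)
			Dim p11 As New CustomVertex.PositionTextured(points(x + 1, y + 1, 0), points(x + 1, y + 1, 1), points(x + 1, y + 1, 2), u1, v1)
			Dim p10 As New CustomVertex.PositionTextured(points(x + 1, y, 0), points(x + 1, y, 1), points(x + 1, y, 2), u1, v0)
			' Two triangles per quad, reversed from the OpenGL order; i advances so nothing is overwritten.
			Verts(i + 0) = p00
			Verts(i + 1) = p11
			Verts(i + 2) = p01
			Verts(i + 3) = p00
			Verts(i + 4) = p10
			Verts(i + 5) = p11
			i += 6
		Next y
	Next x
	vb.SetData(Verts, 0, LockFlags.None)
End Sub

Public Sub Render()
	Device.RenderState.Lighting = False
	Device.RenderState.ZBufferEnable = True
	Device.Transform.World = Me.Mat1
	Device.VertexFormat = CustomVertex.PositionTextured.Format
	Device.SetTexture(0, Tex)
	Device.SetStreamSource(0, vb, 0)
	' numVerts \ 3 triangles in the list.
	Device.DrawPrimitives(PrimitiveType.TriangleList, 0, numVerts \ 3)
End Sub

If the mesh still doesn't show, swap the p01/p11 (and p10/p11) order within each triangle, or set CullMode to Cull.None while testing: which winding ends up front-facing depends on how the points array and your view are set up.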
