Animated OpenGL water texture (without GLSL)

Sereath    102
Hi,

I am creating a 2D game using OpenGL and I would like to add some water to it.

I still have very little understanding of OpenGL, and many questions about it, but I will do my best to keep things simple and clear.

So basically I have two textures: the water texture and a Perlin noise texture. What I would like to do is alter the positions at which OpenGL samples the water texture, using values from the Perlin noise texture.

So, basically, something like:

[code]// loop through all the pixels to draw on screen for the water quad
for each pixel of the water quad
{
    texel_coordinate = glMagic();

    texel_offsetXY = perlinNoise_texture[pixel_coordinate].redValue(); // Or .intensity()... idk

    glSetTexelCoordinatesToUseInsteadOfWhatOpenGLIntendedAtFirst(
        texel_coordinate.X + texel_offsetXY,
        texel_coordinate.Y + texel_offsetXY);
}
// keep looping for all the pixels to draw[/code]

...and all that, without using a shading language (GLSL).

Is it possible?
Is it stupid?
Is it extremely computationally intensive (assuming it is possible, of course)?
Is there a shader technique that is widely supported across all computers from 2000 until now, or is all GLSL just as bad on the compatibility front? (If you have never experienced any compatibility issue with GLSL, this question isn't for you.)


PS: I don't want to start a debate about how many computers people think support GLSL. I just want to know whether it's possible and viable to draw water with a texel-displacement effect without using GLSL.

mhagain    13430
You're not going to get an acceptable solution for this that doesn't involve some kind of shader - what you're talking about is a [url="http://stackoverflow.com/questions/1054096/what-is-a-dependent-texture-read"]dependent texture read[/url] and that needs shaders (or some other kind of per-fragment programmability). The only way I can think of would be to tessellate your geometry really [i]really[/i] heavily (such that one triangle approximately maps to one pixel), keep the noise texture in system memory, and do your coord modification per-vertex. That's not going to run well at all - the combination of heavy tessellation and needing to resubmit all the texcoords for it every frame will pull performance down badly.

So, on to shaders.

The main question I'd ask is "are you really certain that you need to run on hardware going back to the year 2000?" That's 12-year-old hardware now, and assuming that you can even find any that still works, would somebody who's clinging on to such ancient junk even be considered part of your target audience? On the surface, it has to be said, you seem to be a great deal over-conservative here, and that's a requirement I seriously recommend you rethink.

Let's move on and assume that some kind of shader support is acceptable as a baseline requirement.

Here we have a number of options. Something like the old NV register combiners would work if you really want to target older hardware, but they're not going to be so broadly available. ARB assembly programs are another option (moving up a little more) and they will meet your requirement while still maintaining very broad hardware compatibility - even integrated Intels from 6 years back will support them. They are more limited (and more complex to write - although the C interface is simpler) than GLSL however. And that brings us to GLSL itself where the compatibility problem is really not so bad. Shooting for something like the GL 2.1 version of GLSL will give you very good hardware compatibility across the vast majority of hardware you'll find in people's machines today - it's only the really old integrated Intels I mentioned before that won't support it (more recent ones do) so the question changes to one of "do you need to run on ancient integrated Intels?"
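To make the idea concrete, here's a rough sketch of the kind of fragment shader this would be at the GL 2.1 / GLSL 1.20 level. Assume the water texture is on unit 0 and the noise texture on unit 1; the uniform names and the displacement scale are just illustrative:

[code]#version 120

uniform sampler2D waterTex;   // texture unit 0: the water image
uniform sampler2D noiseTex;   // texture unit 1: the Perlin noise
uniform float time;           // drives the animation
uniform float strength;       // how far texels get pushed around, e.g. 0.05

void main()
{
    // sample the noise at a slowly scrolling position
    vec2 noise = texture2D(noiseTex, gl_TexCoord[0].xy + vec2(time * 0.03, time * 0.02)).rg;

    // use the noise to offset where the water texture is sampled -
    // this is the dependent texture read
    vec2 uv = gl_TexCoord[0].xy + (noise - 0.5) * strength;

    gl_FragColor = texture2D(waterTex, uv);
}[/code]

On the application side you just bind the two textures, set the two sampler uniforms with glUniform1i, and update the time value with glUniform1f each frame - nothing else about how you draw the quad needs to change.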

Bear in mind here that these Intels were really only ever found in business PCs, laptops, and el-cheapo high-street "multimedia" PCs, so it comes back again to you needing to think more about who your target audience is. That's the first thing you need to define before any definitive recommendation on how to go forward can be given.

Sereath    102
[quote name='mhagain' timestamp='1334019505' post='4929713']you seem to be a great deal over-conservative[/quote]
I am being over-conservative because of a lack of knowledge on so many levels regarding this issue.

From what I could find on the web (along with what I could not find, and along with your helpful reply), I will assume that some kind of shader has to be used for this feature to be implemented in an intelligent way.

[quote name='mhagain' timestamp='1334019505' post='4929713']ARB assembly programs are another option[/quote]
I have been digging into this possibility for a while (with some real interest), yet found very few resources on the subject. Many times I've read ARB fragment programs being called "outdated" and such, which made me wonder if they would even be supported by newer graphics cards (another issue that may not exist but concerns me as well)... or maybe they are not supported by the latest versions of OpenGL, which seem to remove support for old stuff from time to time (from what I have read).

[quote name='mhagain' timestamp='1334019505' post='4929713']And that brings us to GLSL itself where the compatibility problem is really not so bad.[/quote]A debate for which I have read good arguments on both sides. Without any real statistics on the subject, I would rather take the conservative side, unless I don't have the skills for it (which is another possibility). I'm not here to debate whether the people who would like to play my game can actually play GLSL games; I have no stats to back me up.

[quote]"do you need to run on ancient integrated Intels?"[/quote]
I have no idea, but as I said, without stats I'll take the conservative choice. I myself own a not-so-old computer (5 years) with an integrated graphics card (Intel, I guess) that doesn't support GLSL, yet it can play some pretty decent games; go figure.

[quote]who your target audience is[/quote]
People who don't have the money to buy games that will undoubtedly be better than mine.

...from there I can only guess at some subcategories, such as:
- people who can only afford old computers.
- people who spent money on bad games and are now looking for free games (yet own a good computer)




Now that I have read your reply I have a better understanding of my choices.
I will now aim for ARB fragment programs (since I believe it might not be too hard, says the guy who knows nothing about anything), since this animated water will most likely be the one and only aspect of my game using shaders (and thus worth a little extra effort).

Also, I will take a look at the GL 2.1 version of GLSL. If I'm lucky, Nvidia has some software (hopefully free) that will let me code the shader once and export it to both GLSL and ARB form. From there I'll let my game probe the computer it is running on and decide whether GLSL shaders or ARB programs are supported, as sketched below.
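For the probing part, I'm imagining something like this at startup (just a sketch in plain C; the extension strings are the real ARB names, but the enum and helper function are made up):

[code]#include <string.h>
#include <GL/gl.h>

enum ShaderPath { PATH_GLSL, PATH_ARB_ASM, PATH_FIXED_FUNCTION };

/* crude substring check against the space-separated extension string */
static int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    return all != NULL && strstr(all, name) != NULL;
}

/* called once after the GL context is created */
enum ShaderPath pick_shader_path(void)
{
    if (has_extension("GL_ARB_vertex_shader") &&
        has_extension("GL_ARB_fragment_shader"))
        return PATH_GLSL;          /* GL 2.x class hardware: use GLSL        */

    if (has_extension("GL_ARB_fragment_program"))
        return PATH_ARB_ASM;       /* older hardware: ARB assembly programs  */

    return PATH_FIXED_FUNCTION;    /* no per-fragment programmability at all */
}[/code]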

Thank you very much for your help, mhagain!

mhagain    13430
Assembly programs are going to remain supported for quite some time yet - the reason being that Doom 3 used them, so no hardware vendor is going to be too willing to drop Doom 3 support.
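For reference, the dependent-read trick sketched in GLSL above would look roughly like this as an ARB fragment program (ARBfp1.0). Again, the texture unit assignments (water on unit 0, noise on unit 1) and the 0.05 scale are only illustrative:

[code]!!ARBfp1.0
# sample the noise texture (unit 1) at the incoming texcoord
TEMP noise, uv;
PARAM strength = { 0.05, 0.05, 0.0, 0.0 };
TEX noise, fragment.texcoord[0], texture[1], 2D;

# offset the coordinate by the noise value, then do the dependent
# read into the water texture (unit 0)
MAD uv, noise, strength, fragment.texcoord[0];
TEX result.color, uv, texture[0], 2D;
END[/code]

You load it with glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, length, source) and enable GL_FRAGMENT_PROGRAM_ARB before drawing the water quad; anything that needs to change per frame (a scroll offset for the animation, say) can come in through a program.env parameter.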

