jaronimo

OpenGL VBO generation at runtime


jaronimo    100
Hi everyone!

I'm currently trying to implement a random terrain generator using OpenGL, and since I want to go with the times I'm planning to do the whole thing in OpenGL 3.0, which means using VBOs.
Since the user will be able to adjust the terrain size at runtime, the point grid VBO will not have a fixed size. So I'm trying to come up with an algorithm that generates a vertex grid of resolution n*n (n is user specified) and also builds all the indices, which I guess is the tricky part.

What I wanted to ask: is there an established way this sort of thing should/could be done? I'm trying to figure something out myself, but I don't want to reinvent the wheel, so if anyone has suggestions, please let me know!
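
Just to make the index part concrete, what I'm picturing is roughly this (untested sketch; I'm assuming the vertices are laid out row by row, so vertex (x, y) sits at index y*n + x):

#include <vector>   // GLuint comes from your usual GL headers

// Untested sketch: two triangles per quad, (n-1)*(n-1) quads in an n*n grid.
std::vector<GLuint> buildGridIndices(int n)
{
    std::vector<GLuint> indices;
    indices.reserve((n - 1) * (n - 1) * 6);

    for (int y = 0; y < n - 1; ++y) {
        for (int x = 0; x < n - 1; ++x) {
            GLuint topLeft     = y * n + x;
            GLuint topRight    = topLeft + 1;
            GLuint bottomLeft  = (y + 1) * n + x;
            GLuint bottomRight = bottomLeft + 1;

            // first triangle of the quad
            indices.push_back(topLeft);
            indices.push_back(bottomLeft);
            indices.push_back(topRight);
            // second triangle of the quad
            indices.push_back(topRight);
            indices.push_back(bottomLeft);
            indices.push_back(bottomRight);
        }
    }
    return indices;
}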

thanks,
jaronimo

SuperVGA    1132
You could raycast the terrain in screen space with just a shader and a heightmap.
You can modify the texture quickly, and it can give a really neat-looking result while circumventing the need for vertices almost completely.

jaronimo    100
Sounds intriguing! But after googling around for quite a while now, I think I'll try the VBO approach, since I'm not really familiar with raycasting. Or do you know a good tutorial?

For example, with raycasting, would it be possible to rotate the heightmap? Or would I only be able to look at it from a fixed camera position?

SuperVGA    1132
Quote:
Original post by jaronimo
Sounds intriguing! But after googling around for quite a while now, I think I'll try the VBO approach, since I'm not really familiar with raycasting. Or do you know a good tutorial?

For example, with raycasting, would it be possible to rotate the heightmap? Or would I only be able to look at it from a fixed camera position?


Yes, it will be possible to rotate as much as you like. This old technique should work for 4DOF at first, as it uses a simple column approach.
That's the advantage of heightmaps here: when we cast rays for each column of pixels, we can guarantee that (if we go from the bottom of the screen to the top) the point where we hit the landscape is farther away every time, so after the first ray we can use the previous hit to approximate where the next one will land.
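
To give a rough idea of the column approach, here is a simplified CPU sketch (this is not code from the articles below; heightmap(), drawVerticalLine() and shade() are placeholders you'd supply yourself, and the constants are made-up tuning values). The same loop translates to a fragment shader:

// Placeholder helpers you'd supply yourself (hypothetical names):
float heightmap(float x, float z);                        // sample terrain height
void  drawVerticalLine(int column, int yTop, int yBottom, float shadeValue);
float shade(float x, float z, float dist);                // pick a colour/shade value

// Made-up tuning values, just for illustration:
const float MAX_DIST = 1000.0f;   // how far each ray marches
const float STEP     = 1.0f;      // march step size
const float SCALE    = 240.0f;    // perspective scale factor
const int   HORIZON  = 120;       // horizon line in screen rows

// March one screen column front to back; because the distance only grows,
// we only ever draw segments that poke out above what's already on screen.
void renderColumn(int column, float camX, float camZ, float camHeight,
                  float rayDirX, float rayDirZ, int screenHeight)
{
    int highestDrawn = screenHeight;                      // start at the bottom row
    for (float dist = 1.0f; dist < MAX_DIST; dist += STEP) {
        float worldX = camX + rayDirX * dist;
        float worldZ = camZ + rayDirZ * dist;
        float h      = heightmap(worldX, worldZ);

        // project the terrain height at this distance onto a screen row
        int screenY = (int)((camHeight - h) / dist * SCALE) + HORIZON;

        if (screenY < highestDrawn) {                     // pokes above previous hits
            drawVerticalLine(column, screenY, highestDrawn, shade(worldX, worldZ, dist));
            highestDrawn = screenY;                       // never look back down
        }
    }
    // (clamping screenY to the top of the screen is omitted for brevity)
}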

It's really simple, and this tutorial, though a little old, explains the technique just fine. You can do what it does in a fragment shader:

Voxel terrain on flipcode

This is also good:

Voxel terrain on codermind

More modern techniques handle full 6DOF, but I can't find any tutorials on those for you. You should give the above articles a go.

jaronimo    100
So, I continued with the VBO approach anyway, since that way I can reuse parts of my code for another course.

...and I hit a roadblock that I can't seem to get past. I'm quite sure it's a noob problem too, but I really can't figure it out.

My program has no problem rendering this points array for my VBO:

GLfloat points[] = { 0.0f, 0.0f, 0.0f, 1.0f,
                     1.0f, 0.0f, 0.0f, 1.0f,
                     2.0f, 0.0f, 0.0f, 1.0f,
                     0.0f, 1.0f, 0.0f, 1.0f,
                     1.0f, 1.0f, 0.0f, 1.0f,
                     2.0f, 1.0f, 0.0f, 1.0f,
                     0.0f, 2.0f, 0.0f, 1.0f,
                     1.0f, 2.0f, 0.0f, 1.0f,
                     2.0f, 2.0f, 0.0f, 1.0f };

But when I try this instead, nothing is rendered:

GLfloat* points = generateVBO(dimension);

generateVBO() looks like this:

GLfloat* generateVBO(int dim){

    GLfloat genPoints[] = { 0.0f, 0.0f, 0.0f, 1.0f,
                            1.0f, 0.0f, 0.0f, 1.0f,
                            2.0f, 0.0f, 0.0f, 1.0f,
                            0.0f, 1.0f, 0.0f, 1.0f,
                            1.0f, 1.0f, 0.0f, 1.0f,
                            2.0f, 1.0f, 0.0f, 1.0f,
                            0.0f, 2.0f, 0.0f, 1.0f,
                            1.0f, 2.0f, 0.0f, 1.0f,
                            2.0f, 2.0f, 0.0f, 1.0f };

    return genPoints;
}

Of course this isn't the original code; I dumbed it down to this to find out why it isn't rendering. I thought this HAS to work, since the array is not generated dynamically. I just return the same array as a pointer, but that shouldn't be a problem, should it?

I mean, glBufferData() even takes a pointer to the data, so why is nothing rendered when I give it a pointer instead of an actual array?
glBufferData(GL_ARRAY_BUFFER, 24*4*sizeof(GLfloat), (GLfloat*)points, GL_STATIC_DRAW);

karwosts    840
You need to allocate genPoints on the heap (hint, use 'new'), not on the stack. When generateVBO exits, the array genPoints is destroyed, leaving your GLfloat* pointer pointing to destroyed data.
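
For example, something along these lines (just a sketch; std::vector<GLfloat> would also work and would save you the manual delete[]):

#include <algorithm>   // for std::copy

// Minimal sketch: the same dumbed-down data, but copied to the heap so it
// survives after generateVBO() returns (dim is unused in this cut-down version).
GLfloat* generateVBO(int dim)
{
    static const GLfloat init[] = { 0.0f, 0.0f, 0.0f, 1.0f,
                                    1.0f, 0.0f, 0.0f, 1.0f,
                                    2.0f, 0.0f, 0.0f, 1.0f,
                                    0.0f, 1.0f, 0.0f, 1.0f,
                                    1.0f, 1.0f, 0.0f, 1.0f,
                                    2.0f, 1.0f, 0.0f, 1.0f,
                                    0.0f, 2.0f, 0.0f, 1.0f,
                                    1.0f, 2.0f, 0.0f, 1.0f,
                                    2.0f, 2.0f, 0.0f, 1.0f };
    const int count = sizeof(init) / sizeof(init[0]);

    GLfloat* genPoints = new GLfloat[count];
    std::copy(init, init + count, genPoints);
    return genPoints;   // the caller owns this now and must delete[] it
}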

jaronimo    100
Thanks for the tip. I've now found out what the real problem was ;(
Quite embarrassing really: it didn't work because the real code had an infinite loop, since I wrote i =+ 4 instead of i += 4 in the for loop... yikes
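
For anyone who stumbles over the same thing later, the fill loop basically has to look like this (simplified sketch, not my exact code):

// Simplified sketch: fill a dim*dim grid, 4 floats (x, y, z, w) per vertex.
GLfloat* points = new GLfloat[dim * dim * 4];
int i = 0;
for (int y = 0; y < dim; ++y) {
    for (int x = 0; x < dim; ++x) {
        points[i]     = (GLfloat)x;   // x
        points[i + 1] = (GLfloat)y;   // y
        points[i + 2] = 0.0f;         // z (random height goes here later)
        points[i + 3] = 1.0f;         // w
        i += 4;                       // this (not i =+ 4) was the culprit
    }
}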


