voodoodrul

  1. I switched to local rendering, placing chunks relative to the camera and keeping the camera at (0, y, 0), at least as far as glTranslatef() is concerned. I also switched to doubles throughout. I can now travel to locations extremely far from the origin. Thanks again, Wintertime!
  2. Thanks, wintertime! You're suggesting an idea similar to the "localized world" I was thinking of. That did indeed fix the gaps. However, now I have the same problem with my camera - it jitters around when far from the origin, simply because floats don't have enough decimal precision.

     My outer render loop does:
     1) glLoadIdentity();
     2) Look through the camera - rotate pitch/yaw, then translate to some obscenely large x,y,z. For some reason, I have to negate all the values when I do this: glTranslatef(-vectorPos.x, -vectorPos.y, -vectorPos.z);
     3) Render the world - first translate to the camera position.
     4) For each chunk, translate only by the difference between the camera position and the chunk's position.

     I wonder if I'm doing this all wrong. Should I keep the entire world near the origin (camera essentially fixed at (0, y, 0)) and just move the world around relative to the origin? Eh... that probably won't help. The fundamental problem is that there isn't enough decimal precision for acceleration on vectors to be smooth. I guess I need to convert to doubles instead of floats, or accept the fact that things jitter around when I go to (500000, 0, 500000) and beyond.
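The camera-relative translation described in step 4 above can be sketched as a tiny helper. This is a hypothetical "floating origin" illustration, not the poster's actual code: camera and chunk positions stay in doubles, and only the small difference is dropped to float for the GL translate, so float precision is spent near the viewer where it matters.

```java
// Hypothetical floating-origin helper (class and method names are illustrative).
public final class FloatingOrigin {
    // Compute the chunk's position relative to the camera in doubles,
    // then convert to float only at the end. The difference stays small
    // even when both absolute positions are huge.
    public static float[] chunkTranslation(double camX, double camY, double camZ,
                                           double chunkX, double chunkY, double chunkZ) {
        return new float[]{
            (float) (chunkX - camX),
            (float) (chunkY - camY),
            (float) (chunkZ - camZ)
        };
    }
}
```

Each chunk would then be drawn with glTranslatef(t[0], t[1], t[2]) while the camera itself stays at the origin, instead of translating by the absolute 500000-ish world coordinates.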
  3. It's a plain old Minecraft clone ("oh no... not another one of these..."), with chunked rendering and "infinite" terrain. I have a 2D plane of WorldChunk objects. Each chunk is converted to a minimal mesh of exposed faces and stored as a VBO. Once the chunk is ready for render, I might need to place it somewhere very far from the origin, because the chunk is at, say, (16384, 16384). What happens in my rendering is that gaps appear between the chunks and, as I pan my camera around, each chunk jitters a bit due to lack of precision in absolute placement. Each chunk might jitter around 1-2 pixels, sometimes landing exactly lined up with its neighbor, other times 2 pixels offset with a gap. The problem gets worse as you go farther from the origin. Near the origin, such as below (4096, 4096), the problem is not obvious. At the origin it is not possible to spot the problem at all. I know there must be limited precision in translates, so I need to do things in a fundamentally different way.

     For each chunk, when it has a VBO that is ready to draw, I do:

     glPushMatrix();
     glTranslatef(16384.0f, 0.0f, 16384.0f);
     ...
     glDrawArrays(GL_TRIANGLES, 1, this.numVerts);
     glPopMatrix();

     I thought about making my translates more like a "local world" where I simply place chunks relative to the viewer, but that makes a few other things difficult. I'm sure I'm doing this wrong by trying to use very large glTranslatef() values relative to the origin. Any advice out there?

     The rendering test app is @ https://voodoo.arcanelogic.net/CYDI-latest.jar if anyone cares to look at it, but it would take a while to fly far enough away from the origin to notice big gaps and jitter. Thanks!
  4. Awesome. This was my next logical question and it's good to know that blending nearest regions is a legitimate way to go about it - when needed. 
  5. Thanks! Looks like I was overcomplicating things. The perlin noise algorithm I was using was designed to generate the entire map in one go. I'm now trying to hijack https://code.google.com/p/mikeralib/source/browse/trunk/Mikera/src/main/java/mikera/math/PerlinNoise.java for this purpose, but it's highly optimized code and the algorithm is difficult to interpret. I'll keep plugging away at it until it works.
  6. I'm sure I'm coming at this from the wrong angle, but I wanted to know: if you use a chunked rendering scheme and Simplex or Perlin 2D or 3D noise, what's the best way to create a continuous map without ever building the entire world map? Should I offset the base seed by adding the chunk's position (x, y) to it and build a new map for that chunk? If I do that, the two neighboring maps won't blend with each other. I thought about blending the two maps using another pass that takes map A's edge values and seeds the neighbor's edge map with the same ones, but I think I just don't know how to use noise maps properly.

     In my mind, I have a limited amount of space to store one complete 2D perlin noise map. I use that for the entire world, but clearly that's not going to get me to "infinite". Any help is appreciated!
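One way to sidestep per-chunk seeding entirely is to treat the noise function as a single infinite field and always sample it at absolute world coordinates; chunks that share an edge then agree automatically, because they ask for the same world positions. A minimal value-noise sketch of that idea (the hash constants, the 16-block chunk size, and the class itself are illustrative, not from the original posts):

```java
// Hypothetical seamless world noise: one seed, sampled at world coordinates.
public final class WorldNoise {
    private final long seed;

    public WorldNoise(long seed) { this.seed = seed; }

    // Deterministic pseudo-random value in [0,1) for an integer lattice point.
    private double lattice(int x, int z) {
        long h = seed ^ (x * 0x9E3779B97F4A7C15L) ^ (z * 0xC2B2AE3D27D4EB4FL);
        h ^= h >>> 33;
        h *= 0xFF51AFD7ED558CCDL;
        h ^= h >>> 33;
        return (h >>> 11) / (double) (1L << 53);
    }

    private static double smooth(double t) { return t * t * (3 - 2 * t); }

    // Bilinear value noise at absolute world coordinates.
    public double sample(double wx, double wz) {
        int x0 = (int) Math.floor(wx), z0 = (int) Math.floor(wz);
        double fx = wx - x0, fz = wz - z0;
        double v00 = lattice(x0, z0),     v10 = lattice(x0 + 1, z0);
        double v01 = lattice(x0, z0 + 1), v11 = lattice(x0 + 1, z0 + 1);
        double a = v00 + (v10 - v00) * smooth(fx);
        double b = v01 + (v11 - v01) * smooth(fx);
        return a + (b - a) * smooth(fz);
    }

    // Height for block (localX, localZ) inside chunk (chunkX, chunkZ),
    // assuming 16-block chunks: the chunk index is folded into a world
    // coordinate, so no per-chunk map or edge blending is ever needed.
    public double heightAt(int chunkX, int chunkZ, int localX, int localZ) {
        return sample((chunkX * 16 + localX) / 32.0, (chunkZ * 16 + localZ) / 32.0);
    }
}
```

The same idea works for real Perlin/Simplex noise: keep one seed for the whole world and feed in (chunkX * chunkSize + localX) rather than re-seeding per chunk.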
  7. So I have an LWJGL renderer. It spends all of its time rendering. Imagine that. It seems that garbage collection waits until the last possible second to run, and that's becoming problematic. Even triggering a System.gc() request doesn't seem to help. If I stop moving around in my renderer, so that it won't need to create new large objects, it eventually seems to clean itself up.

     Should I deliberately halt the renderer, for maybe 100 msec, to convince Java to perform GC?

     Profiling my app doesn't seem to help much. I see a metric crapload of object allocations, but that's not "objects that are still alive", and looking at the heap doesn't show an obvious memory leak.
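A common way to take pressure off the collector in this situation is to stop allocating fresh buffers on every mesh rebuild and instead reuse one direct buffer per chunk, growing it only when a mesh outgrows it. A sketch under that assumption (the class, initial size, and growth policy are all hypothetical):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;

// Hypothetical reusable mesh buffer: one long-lived direct FloatBuffer
// per chunk instead of a new allocation per rebuild.
public final class MeshBuffer {
    private FloatBuffer buffer = newDirect(1024);

    private static FloatBuffer newDirect(int floats) {
        // Direct buffers are expensive to allocate and slow to collect,
        // which is exactly why holding on to them helps.
        return ByteBuffer.allocateDirect(floats * Float.BYTES)
                         .order(ByteOrder.nativeOrder())
                         .asFloatBuffer();
    }

    // Returns a cleared buffer with room for at least 'floats' values,
    // reusing the existing allocation whenever possible.
    public FloatBuffer beginRebuild(int floats) {
        if (buffer.capacity() < floats) {
            buffer = newDirect(Math.max(floats, buffer.capacity() * 2));
        }
        buffer.clear();
        return buffer;
    }
}
```

With a scheme like this, steady-state movement generates almost no garbage, so there is much less for the GC to do, and no need to pause the renderer to coax it into running.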
  8. Reusing VBOs

    Thanks MarkS. The community here is really helpful. I think I've solved most of my current issues and I think I'm getting some really decent performance out of my prototype voxel renderer. I want to share it with the world so I'll post a link to my app here:    https://voodoo.arcanelogic.net/CYDI-latest.jar   If anyone cares, this represents about 4-5 weeks of entirely from-scratch effort to teach myself OpenGL. Despite being a programmer for a few years, I have never worked with graphics programming before and I just wanted to see how long it would take to get something simple off the ground.    My renderer uses perlin noise to generate a seeded, random, and unique heightmap with each app restart and uses chunked rendering to page in and out chunks/tiles.    Controls are Minecraft-like. Space to jump, double space to fly up, shift to fly down. Turn off camera collision to fly outside the chunk boundaries. Move around faster with +/- keys. Then turn up the view distance (F5) and turn off vsync (F8) and take in the view.    Oddly, I get really excellent performance on Intel integrated graphics cards - on an HD 5000 card I'm generally getting 60fps with a million exposed block faces. 250-300fps with view distance 11, which I think is plenty. Still holds 60fps at view distance 33 which I think is nutty, especially for an integrated graphics card...   My GTX 690, though, holds steady at ~2450 fps.
  9. Reusing VBOs

    Solved. I blame this on 4 days of less than 3 hours of sleep per night. Ha.

     When I draw, I was doing:

     glDrawArrays(GL_TRIANGLES, 1, this.numVerts);

     I step over the first degenerate vertex, as I should. The problem was that, stupid me, this.numVerts was being calculated wrong. I was doing:

     this.numVerts = this.vbuffer.position() / 8;

     instead of:

     this.numVerts = this.vbuffer.position() / 12;

     You'll see my blocks hold 12 floats per vertex. So chalk this one up to me being stupid.

     I suppose it's useful to know that if you call glDrawArrays(GL_TRIANGLES, 1, x); where x is larger than the vertex count of the currently bound buffer, you can and will overrun it and spill over into surrounding memory. Lesson learned.
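One way to make that class of bug structurally impossible is to derive the stride from named component sizes instead of a magic divisor. A sketch (the class and the component breakdown are illustrative, not the poster's actual layout):

```java
// Hypothetical vertex layout: the stride is derived once from named
// component sizes, so draw-call math can never use a stale divisor.
public final class VertexLayout {
    public static final int POSITION = 3, NORMAL = 3, COLOR = 3, TEXCOORD = 2;
    public static final int FLOATS_PER_VERTEX = POSITION + NORMAL + COLOR + TEXCOORD;

    // Fails loudly if the buffer doesn't hold a whole number of vertices,
    // instead of silently letting glDrawArrays overrun the VBO.
    public static int vertexCount(int floatsWritten) {
        if (floatsWritten % FLOATS_PER_VERTEX != 0) {
            throw new IllegalStateException("buffer holds a partial vertex: " + floatsWritten);
        }
        return floatsWritten / FLOATS_PER_VERTEX;
    }
}
```

Then this.numVerts = VertexLayout.vertexCount(this.vbuffer.position()) stays correct even if the layout later gains or loses a component.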
  10. Reusing VBOs

    It's probably best to refer you to my earlier thread - http://www.gamedev.net/topic/644944-optimization-of-many-small-gldrawelements-calls/ Just focus on the last couple of entries there to get a feel for how I am creating vertex arrays. Sorry for creating another thread, but I guess I had diverged from the initial problem enough to warrant a new one.

     As you can see, that's how I'm drawing my cubes and populating a vertex array. It works well as long as I explicitly call glDeleteBuffers() and glGenBuffers() again before populating the VBO. I have discovered that these rogue faces occur on the edges of chunks - each chunk is a:

     glPushMatrix();
     glTranslatef();
     //draw VBO
     glPopMatrix();

     It's as if the degenerate vertices I am using to draw only certain faces are "bleeding over" into the next chunk.render(), but that's a completely separate glBindBuffer() operation and shouldn't impact the previous one.

     Is it possible that glGenBuffers() just slows things down enough to hide some race condition or state issue? Clearly everything in the render() loop occurs in a single thread anyway, so I doubt it. I do offload work into threads for things like building the vertex arrays (converting the chunk to a mesh) and then flag that chunk as "ready" for the render loop to actually push it into the VBO if not already done.
  11. Reusing VBOs

    That's why I thought I should be able to reuse it without flushing it. Each chunk has its own VBO, so I should be able to create/destroy them without worry. I don't need to rewrite portions of the buffer; I just need to overwrite it with data that might be larger or smaller.

     Think this might be a bug in LWJGL's JNI calls? *update* The latest version doesn't change anything.
  12. Reusing VBOs

    I have an odd problem. My render() code does this:

     render() {
         ...
         if (rebuildMesh == true) {
             buildMesh();
         }
         ...
     }

     buildMesh() {
         ...
         if (this.vboVertexHandle == 0) {
             this.vboVertexHandle = glGenBuffers();
         }
         glBindBuffer(GL_ARRAY_BUFFER, vboVertexHandle);
         glBufferData(GL_ARRAY_BUFFER, this.vbuffer, GL_STATIC_DRAW);
         glBindBuffer(GL_ARRAY_BUFFER, 0);
         ...
     }

     It creates a buffer if one doesn't exist, binds it, and buffers the data. The problem comes later, when another render() call decides to update the mesh. It creates the vertex buffer correctly and then tries to upload the data into the existing vboVertexHandle. This seems to result in a corrupted VBO with fragments of the original vertex data somehow "transposed" onto the existing buffer. I was under the impression that glBufferData(GL_ARRAY_BUFFER, this.vbuffer, GL_STATIC_DRAW) would replace the entire contents of the buffer, but it doesn't appear to. It seems to prepend the new data onto the buffer? I need to flush and upload all new data to the VBO. The VBO holds a mesh that changes infrequently.

     This code works:

     buildMesh2() {
         ...
         if (this.vboVertexHandle == 0) {
             this.vboVertexHandle = GL15.glGenBuffers();
         } else {
             GL15.glDeleteBuffers(vboVertexHandle);
             this.vboVertexHandle = GL15.glGenBuffers();
         }
         GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboVertexHandle);
         GL15.glBufferData(GL15.GL_ARRAY_BUFFER, this.vbuffer, GL15.GL_STATIC_DRAW);
         GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
         ...
     }

     But it seems wasteful to delete and gen a whole new buffer.
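For what it's worth, the GL spec says glBufferData creates a brand-new data store on every call, so a full-size upload should never leave old contents behind; the delete/gen dance shouldn't be necessary. If you still want to reuse the handle while forcing the driver to discard the old store, the usual idiom is buffer "orphaning": pass glBufferData only a size, then fill the fresh store. A sketch of that idiom, with the LWJGL calls shown as comments so this compiles without the GL classpath (byteSize is a hypothetical helper):

```java
import java.nio.FloatBuffer;

// Sketch of reusing one VBO handle per chunk via buffer orphaning.
public final class VboUpload {
    // Size in bytes of the data about to be uploaded.
    static long byteSize(FloatBuffer data) {
        return (long) data.remaining() * Float.BYTES;
    }

    // void upload(int vboHandle, FloatBuffer data) {
    //     GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboHandle);
    //     // "Orphan" the old store: allocate a fresh one of the right size...
    //     GL15.glBufferData(GL15.GL_ARRAY_BUFFER, byteSize(data), GL15.GL_STATIC_DRAW);
    //     // ...then fill it. Old contents can no longer bleed into the new mesh.
    //     GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, data);
    //     GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
    // }
}
```

This keeps the handle stable (no glDeleteBuffers/glGenBuffers churn) while still giving the driver a clean slate each rebuild.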
  13. Without the degeneration, I would have visible triangles drawn between disconnected faces of a block. If I have 3 blocks to draw, like this:   block1 -> front + back + right block2 -> right + back + top block3 -> front + left   Imagine trying to compute a closed, single connected triangle fan from that. It would be tricky. You'd need to start your first face on block 1, then stop drawing (degenerate vertex) and move the vertex array "pointer" to a back face corner and start drawing the back face. When you are done with that, you'll need to stop drawing (with a degenerate) and get a vertex pointer over to a corner on block2's right face (another degenerate), then commit to actually start drawing again with yet another degenerate vertex on block2's right face.    It sounds overly complicated, but I don't believe it to be - I think this is how optimal meshes are output and even Blender, when I draw a mesh like this, will output a similar mesh. Computing truly optimal meshes, however, is proven to be an NP-complete problem.    
I did get it functional with something that looks like this:

     new float[]{
         //Vertex                          Normals  Colors                         Texcoord
         Float.NaN, Float.NaN, Float.NaN,  0, 0, 0, 0, 0, 0,                       0, 0, //Degenerate reset
         x + 1, y + 1, z + 1,              0, 0, 1, color[0], color[1], color[2],  1, 0,
         x,     y + 1, z + 1,              0, 0, 1, color[0], color[1], color[2],  0, 0,
         x,     y,     z + 1,              0, 0, 1, color[0], color[1], color[2],  0, 1, // v0-v1-v2 front
         x,     y,     z + 1,              0, 0, 1, color[0], color[1], color[2],  0, 1,
         x + 1, y,     z + 1,              0, 0, 1, color[0], color[1], color[2],  1, 1,
         x + 1, y + 1, z + 1,              0, 0, 1, color[0], color[1], color[2],  1, 0, // v2-v3-v0
         x + 1, y + 1, z + 1,              0, 0, 1, color[0], color[1], color[2],  1, 0, //End this face
         Float.NaN, Float.NaN, Float.NaN,  0, 0, 0, 0, 0, 0,                       0, 0, //Degenerate reset
     },
     //Right face
     new float[]{
         Float.NaN, Float.NaN, Float.NaN,  0, 0, 0, 0, 0, 0,                       0, 0, //Degenerate reset
         x + 1, y + 1, z + 1,              1, 0, 0, color[0], color[1], color[2],  0, 0,
         x + 1, y,     z + 1,              1, 0, 0, color[0], color[1], color[2],  0, 1,
         x + 1, y,     z,                  1, 0, 0, color[0], color[1], color[2],  1, 1, // v0-v3-v4 right
         x + 1, y,     z,                  1, 0, 0, color[0], color[1], color[2],  1, 1,
         x + 1, y + 1, z,                  1, 0, 0, color[0], color[1], color[2],  1, 0,
         x + 1, y + 1, z + 1,              1, 0, 0, color[0], color[1], color[2],  0, 0, // v4-v5-v0
         x + 1, y + 1, z + 1,              1, 0, 0, color[0], color[1], color[2],  0, 0, //End this face
         Float.NaN, Float.NaN, Float.NaN,  0, 0, 0, 0, 0, 0,                       0, 0, //Degenerate reset
     },

     As you can see, there are several degenerates in there. Every time a face is drawn, I move the vertex "pointer" from a known bogus vertex - (Float.NaN, Float.NaN, Float.NaN) - and start drawing a new face with a real x,y,z. At the end of each face I restore the degenerate back to (Float.NaN, Float.NaN, Float.NaN). I need to ensure that, no matter where I am in drawing blocks, I won't send OpenGL a bunch of duplicate values.
If I used (0,0,0), for example, and I tried to draw a block actually at (0,0,0), this scheme would not work, because OpenGL would be fed even more degenerates and toggle drawing of those vertices off. The easiest thing to do is start and stop each face with degenerate vertices that you know will never actually need to be drawn and that won't collide with any actual drawn vertex in your mesh.

     This all works fine at the moment, with a few caveats. Performance is excellent - I get over 300fps on an Intel HD 4000 integrated card and over 2000fps on my GTX 690 with about 20 million blocks in the scene. Of course those 20 million blocks become relatively few faces to actually draw, since most are completely concealed - maybe a couple hundred thousand faces actually being drawn.

     I have one lingering problem that only occurs on Radeon cards. Notice these rogue faces: [screenshot] Each chunk is outlined in the red wires. Notice the "thrashing" of garbage vertex data near the origin of the chunk: [screenshot] Whereas Intel and Nvidia cards render the scene as expected: [screenshot]
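The stop-and-restart stitching described in this thread, as it's usually done with GL_TRIANGLE_STRIP, boils down to repeating the last vertex of one run and the first vertex of the next; the zero-area triangles this produces are discarded by the GPU, effectively restarting the strip. A tiny sketch of the index/vertex sequence using labels (the names and helper are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical helper: join two triangle-strip runs into one strip
// by inserting classic degenerate (repeated) vertices between them.
public final class StripStitch {
    public static List<String> stitch(List<String> strip1, List<String> strip2) {
        List<String> out = new ArrayList<>(strip1);
        out.add(strip1.get(strip1.size() - 1)); // repeat last vertex of strip 1
        out.add(strip2.get(0));                 // repeat first vertex of strip 2
        out.addAll(strip2);                     // strip 2 then continues normally
        return out;
    }
}
```

For example, stitching the runs a-b-c-d and e-f-g-h yields a,b,c,d,d,e,e,f,g,h: the triangles (c,d,d), (d,d,e), (d,e,e) have zero area and are culled. One caveat: if the first run has an odd vertex count, you'd repeat one extra vertex to keep the winding order of the second run intact.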
  14. Okay, so I have tweaked this a bit:

     //Interleaved array - vertex3f, normal3f, color3f, u, v
     //bottom face
     new float[]{
         x,     y, z,      0,  0, 0, 0, 0, 0, 0, 0,
         x,     y, z,      0, -1, 0, color[0], color[1], color[2], 0, 1,
         x + 1, y, z,      0, -1, 0, color[0], color[1], color[2], 1, 1,
         x + 1, y, z + 1,  0, -1, 0, color[0], color[1], color[2], 1, 0, // v7-v4-v3 bottom
         x + 1, y, z + 1,  0, -1, 0, color[0], color[1], color[2], 1, 0,
         x,     y, z + 1,  0, -1, 0, color[0], color[1], color[2], 0, 0,
         x,     y, z,      0, -1, 0, color[0], color[1], color[2], 0, 1, // v3-v2-v7
         x,     y, z,      0,  0, 0, 0, 0, 0, 0, 0
     },
     //back face
     new float[]{
         x + 1, y,     z,  0, 0,  0, 0, 0, 0, 0, 0,
         x + 1, y,     z,  0, 0, -1, color[0], color[1], color[2], 0, 1,
         x,     y,     z,  0, 0, -1, color[0], color[1], color[2], 1, 1,
         x,     y + 1, z,  0, 0, -1, color[0], color[1], color[2], 1, 0, // v4-v7-v6 back
         x,     y + 1, z,  0, 0, -1, color[0], color[1], color[2], 1, 0,
         x + 1, y + 1, z,  0, 0, -1, color[0], color[1], color[2], 0, 0,
         x + 1, y,     z,  0, 0, -1, color[0], color[1], color[2], 0, 1,
         x + 1, y,     z,  0, 0,  0, 0, 0, 0, 0, 0
     }

     As you can see, each face begins with a degenerate and ends with one. I basically stick these faces together in any order, so I start drawing a chunk and draw block1->face1, block1->face3, block2->face2, ..., blockN->faceN.

     I start drawing at offset one:

     GL11.glDrawArrays(GL11.GL_TRIANGLES, 1, this.numVerts);

     since I want to draw this array but I probably shouldn't start off by drawing a degenerate. The problem is that my degenerates are still wrong, or at least they are being toggled in ways I'm not expecting. They seem to be toggling on and off, which results in this trippy scene. [screenshot]
  15. Thanks, Sponji. You've been a big help.

     After spending most of the night being frustrated, I just took a step back. The solution is now working well. I am using interleaved vertex, normal, color, and texcoord arrays, but I'm having trouble dealing with the degenerate vertices.

     WorldChunk.buildMesh() calls:

     verticies.add(Block.generate(i, j, k, EXPOSED_FACES, new float[]{0.2f, 1.0f, 0.2f}));

     Block.generate(x, y, z, faces, colors) produces a vertex array including degenerates, like this:

     public static FloatBuffer generate(float x, float y, float z, boolean[] faces, float[] color) {
         float[][] cubeFaces = new float[][]{
             //Front face
             new float[]{
                 //Vertex             Normals     Colors                          Texcoord
                 (x+1), (y+1), (z+1), 0f, 0f, 1f, color[0], color[1], color[2], 1f, 0f,
                 (x),   (y+1), (z+1), 0f, 0f, 1f, color[0], color[1], color[2], 0f, 0f,
                 (x),   (y),   (z+1), 0f, 0f, 1f, color[0], color[1], color[2], 0f, 1f, // v0-v1-v2 (front)
                 (x),   (y),   (z+1), 0f, 0f, 1f, color[0], color[1], color[2], 0f, 1f,
                 (x+1), (y),   (z+1), 0f, 0f, 1f, color[0], color[1], color[2], 1f, 1f,
                 (x+1), (y+1), (z+1), 0f, 0f, 1f, color[0], color[1], color[2], 1f, 0f, // v2-v3-v0
                 (x+1), (y+1), (z+1), 0, 0, 0, 0, 0, 0, 0, 0
             },
             ...
             //back face
             new float[]{
                 (x+1), (y),   (z), 0f, 0f, -1f, color[0], color[1], color[2], 0f, 1f,
                 (x),   (y),   (z), 0f, 0f, -1f, color[0], color[1], color[2], 1f, 1f,
                 (x),   (y+1), (z), 0f, 0f, -1f, color[0], color[1], color[2], 1f, 0f, // v4-v7-v6 (back)
                 (x),   (y+1), (z), 0f, 0f, -1f, color[0], color[1], color[2], 1f, 0f,
                 (x+1), (y+1), (z), 0f, 0f, -1f, color[0], color[1], color[2], 0f, 0f,
                 (x+1), (y),   (z), 0f, 0f, -1f, color[0], color[1], color[2], 0f, 1f,
                 (x+1), (y),   (z), 0, 0, 0, 0, 0, 0, 0, 0
             }
         };
         int faceCount = 0;
         for (int i = 0; i < faces.length; i++) {
             if (faces[i] == true) {
                 faceCount++;
             }
         }
         float[] values = new float[faceCount * 11 * 7];
         int ptr = 0;
         float[] degenerate = new float[11]; //store the previous vertex from an earlier face draw to use as our next degenerate
         boolean degen = false; //Have we processed an earlier face and created a degenerate?
         for (int i = 0; i < faces.length; i++) { //foreach face
             if (faces[i] == true) { //if this face is to be drawn
                 float[] face = cubeFaces[i]; //get the vertex data for the face
                 for (int j = 0; j < face.length; j++) { //Copy the vertex data into the return array
                     if (degen && j < 11) { //prepend the previous degenerate vertex for the next draw
                         values[ptr] = degenerate[j];
                     } else {
                         values[ptr] = face[j]; //Otherwise just copy the face vertex data as-is
                     }
                     ptr++;
                     if (j > 66) { //Store a degenerate vertex by copying the last interleaved vertex data from this draw
                         degenerate[j - 66] = face[j];
                         degen = true;
                     }
                 }
             }
         }
         return Util.getFloatBuffer(values); //Return the final FloatBuffer
     }

     Are my degenerate vertices all wrong? I thought all I needed to do was declare the same x,y,z as a previous draw, so that I could avoid the hacky stuff here like remembering the last vertex from an earlier face. Can't I just stick the degenerate in the vertex data like this?
//Front face             new float[]{                 //Vertex                Normals  Colors                        Texcoord                 (x+1), (y+1),(z+1), 0f, 0f, 1f, color[0], color[1], color[2], 1f, 0f,                 (x), (y+1),(z+1), 0f, 0f, 1f, color[0], color[1], color[2], 0f, 0f,                 (x), (y),(z+1), 0f, 0f, 1f, color[0], color[1], color[2], 0f, 1f, // v0-v1-v2 (front)                 (x), (y),(z+1), 0f, 0f, 1f, color[0], color[1], color[2], 0f, 1f,                 (x+1), (y),(z+1), 0f, 0f, 1f, color[0], color[1], color[2], 1f, 1f,                 (x+1), (y+1),(z+1), 0f, 0f, 1f, color[0], color[1], color[2], 1f, 0f, // v2-v3-v0                 (x+1), (y+1),(z+1),0,0,0,0,0,0,0,0    //degenerate             },