# Cristian Decu

1. ## Relative to Camera rendering.

First of all, thank you for taking the time to read my long post; second of all, it worked, even though I had to rethink a few things. I stopped normalizing the vertices on the GPU and started creating a VBO for each node's grid on the CPU, projecting each vertex onto an imaginary sphere before uploading everything to the VBO. This made it easy to apply the method you described, and now I can render absurdly large planets with no visible fp32 precision problems at all! It may not be the best solution, but with some caching involved I believe I can optimize it a little. Thank you again!
2. ## OpenGL Relative to Camera rendering.

Hello fellow programmers,

For a couple of days now I've been building my own planet renderer, just to see how floating point precision issues can be tackled. As you can probably imagine, I quickly ran into precision issues when trying to render absurdly large planets.

I'm using the classical quadtree LOD approach. I generate my grids with 33 x 33 vertices (x: -1 to 1, y: -1 to 1, z = 0). Each grid is managed by a TerrainNode class that, depending on the side it represents (top, bottom, left, right, front, back), creates a special rotation-translation matrix that moves and rotates the grid away from the origin, so that when I finally normalize all the vertices in my vertex shader I get a perfect sphere.

```cpp
T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, 1.0));
R = glm::rotate(glm::dmat4(1.0), glm::radians(180.0), glm::dvec3(1.0, 0.0, 0.0));
sides[0] = new TerrainNode(1.0, radius, T * R, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_FRONT));

T = glm::translate(glm::dmat4(1.0), glm::dvec3(0.0, 0.0, -1.0));
R = glm::rotate(glm::dmat4(1.0), glm::radians(0.0), glm::dvec3(1.0, 0.0, 0.0));
sides[1] = new TerrainNode(1.0, radius, R * T, glm::dvec2(0.0, 0.0), new TerrainTile(1.0, SIDE_BACK));

// So on and so forth for the rest of the sides.
```

As you can see, for the front side grid I rotate it 180 degrees to make it face the camera and push it towards the eye; the back side is handled almost the same way, only I don't need to rotate it, just push it away from the eye. The same technique is applied to the rest of the faces (obviously, with the proper rotations/translations). The matrix that results from multiplying R and T (in that particular order) is sent to my vertex shader as `r_Grid`:

```glsl
// spherify
vec3 V = normalize((r_Grid * vec4(r_Vertex, 1.0)).xyz);
gl_Position = r_ModelViewProjection * vec4(V, 1.0);
```

The `r_ModelViewProjection` matrix is generated on the CPU in this manner:

```cpp
// Not the most efficient way, but it works.
glm::dmat4 Camera::getMatrix()
{
    // Create the view matrix.
    // Roll, Yaw and Pitch are all quaternions.
    glm::dmat4 View = glm::toMat4(Roll) * glm::toMat4(Pitch) * glm::toMat4(Yaw);

    // The model matrix is generated by translating in the opposite direction of the camera.
    glm::dmat4 Model = glm::translate(glm::dmat4(1.0), -Position);

    // Projection = glm::perspective(fovY, aspect, zNear, zFar);
    // zNear = 0.1, zFar = 1.0995116e12
    return Projection * View * Model;
}
```

I managed to get rid of z-fighting by using a technique called the logarithmic depth buffer, described in this article; it works amazingly well, no z-fighting at all, at least none visible. Each frame I render each node by sending the generated matrices this way:

```cpp
// Set the r_ModelViewProjection uniform.
// Sneak in mRadiusMatrix, which is a matrix that contains the radius of my planet.
Shader::setUniform(0, Camera::getInstance()->getMatrix() * mRadiusMatrix);

// Set the r_Grid matrix uniform I created earlier.
Shader::setUniform(1, r_Grid);
grid->render();
```

My planet's radius is around 6400000.0 units, absurdly large, but that's what I really want to achieve. Everything works well, the nodes split and merge as you'd expect; however, whenever I get close to the surface of the planet, the rounding errors start to kick in, giving me that lovely stairs effect.

I've read that if I rendered each grid relative to the camera, I could get better precision on the surface, effectively getting rid of those rounding errors. My question is: how can I achieve this relative-to-camera rendering in my scenario? I know that I have to do most of the work on the CPU in double precision, and that's exactly what I'm doing; I only use double on the CPU side, where I also do most of the matrix multiplications. As you can see from my vertex shader, I only do the usual r_ModelViewProjection * (some vertex coords).

Thank you for your suggestions!

4. ## Cracks between patches with the same LOD level.

Hello fellow programmers,

This morning I decided to write a simple quadtree based terrain renderer (no frustum culling or any kind of optimization). Since it was done in just a couple of hours, it's a little messy, so maybe some sharp eyes can figure out what's wrong here.

To explain everything: I'm using a quadtree to render my terrain. When the camera gets close enough to a patch, the patch splits into four baby patches, and so on. The vertex buffer is constant and so is the index buffer. (For now I only have one IBO; I'm planning to create 16 in order to prevent cracks between patches of different LOD.)

The VBO has 65 x 65 vertices, 64 x 64 units. When I split a patch, I render the VBO 4 times, scaling and translating everything to fit within the parent patch. I'm generating the height data using libnoise. The generated heightmaps are the four sub-tiles of the tile that belonged to the parent patch. I'm applying the elevation inside the vertex shader by lifting the Y value according to the r channel.

It could be libnoise generating heightmap tiles that are not perfectly seamless, or it could be something else that I missed.

I know it's a primitive way of rendering terrain, but I'm trying to get this right first and then try a more modern approach.

There are several pictures attached here to better illustrate the problem.

[attachment=31996:err1.png] [attachment=31997:err2.png] [attachment=31998:err3.png] [attachment=31999:err4.png] [attachment=32003:err5.png]

Thank you!
5. ## How to organize worlds/levels as seen in the new Doom

I believe it's all the same. If you think about it, size is just an illusion in video games. A map represented by one big mesh is too much even for the new GPUs, so there's obviously some visibility testing going on. Objects are probably streamed in on demand, based on the player's position. To overcome precision issues, the space is partitioned (octrees, kd-trees, BSPs, even quadtrees).

Basically, it's all the same, with the GPU taking some of the burden off the CPU where possible.

Read more about procedural planet renderers. They are the best large-world managers, since they have to render an enormous amount of data.

Cheers!
6. ## Terrain Rendering

For now I'm thinking of sticking with OpenGL 3.0, since 4.0 and later is not that widely adopted as far as I know.

Now this is interesting. You're saying that without sending any kind of geometry to the GPU, the GPU is capable of rendering a terrain based on a heightmap? I'm thinking this would be really fast.

One disadvantage of this seems to me to be collision detection. Even if I generate the mesh entirely on the GPU, I'll then have to somehow probe some regions of it to test for collisions. Nonetheless, interesting.

I need an approach that does well on OpenGL 3.0, so there are no tessellation shaders for me. I'll still need to send some geometry to the GPU to make this work, and it seems to me that the 1-VBO approach can be really slow. (No frustum culling yet, but I expect a 20% - 25% frame rate increase.)

I need something that could easily be adapted for spherical terrains.

That means I need to fetch the geometry out of my octree, test it against the frustum planes, weld it, stitch it, remove the cracks and then upload it to the GPU. That seems a little complicated, but then I'll be able to render everything with one draw call, so it should be fast. Maybe the geometry fetching part could cost my CPU a bunch, but thinking about it again, maybe not that much. And texturing could prove a pain in the neck, since by sending everything at once you'll need to texture everything as a whole, leaving me to find a proper way of distributing different resolution textures across the tiles (we won't have tiles anymore after stitching everything up, will we?).

I love this, so many techniques that I can use, all with their pros and cons. I'm not going to get bored this summer, folks!

Thank you for your support!
7. ## Terrain Rendering

Hello fellow game programmers,

After finally reading most of the forums and articles related to terrain rendering, I think I've reached a conclusion. I just wanted to share my ideas with you and leave this thread behind for other people looking into the subject.

The first problem I ran into was, obviously, solving the cracks between different LODs. Not a trivial thing to do; however, after proper documentation and experimentation I reached a final conclusion.

I will use a quadtree to split my terrain into multiple patches. I will have only 1 VBO containing 33 x 33 vertices, and multiple IBOs (16, I think). Each IBO has one or more edges altered so that it fits the neighbouring patches. I then need to make sure that the LOD level difference is never higher than 1, but that shouldn't be a problem.

Now, why one single VBO? Well, I see no reason to use multiple VBOs since I can scale down my patch. For instance, a level 0 patch of 33 x 33 vertices splits into 4 patches of 33 x 33 vertices, each 0.25 the size of the parent patch. (33 x 33 vertices means a width and height of 32; I love numbers that are a power of 2, probably an OCD thing or something.)

One single VBO for the whole terrain, multiple IBOs for solving the cracks. The downside is that I will need multiple draw calls, but I don't see that being too big of an issue. If it misbehaves, I could probably optimize this a little bit by merging some of the patches into a bigger VBO, thus reducing the number of draw calls. Moreover, frustum culling and horizon culling are going to cut down the number of draw calls per frame significantly.

One thing I can't get straight, however, is how to implement geomorphing with this technique. I'm thinking I could use an attribute that would slowly interpolate the position of my vertex, but that means that for a while I would need to update the attribute buffer every frame, and I'm not sure my GPU is going to like that.

What do you people think? Is it OK? Can I do it better?
8. ## Anyone here a self-taught graphics programmer?

Your question brings back many memories. When I turned 9, my parents bought me a Sinclair Spectrum computer. It was not much, but it helped me understand the basics of programming. That's when I learned to create and optimize algorithms. I then started plotting points on the screen, and later learned how to draw circles and squares. It was the start of my journey into the beautiful world of programming.

I just recently started studying IT, first year at the University of Computer Science, but as expected I aced most of the classes.

There's still much to learn and a lot of experience to gain, but (0, 0, -1) is the only way I know!

Good luck!
9. ## Mapping a Sphere to a Cube

Hello,

I've been working on my planet renderer for a while now, and I'm stuck on the quadtree sphere. I constructed 6 quadtrees and formed a cube, but how can I map my sphere to this cube?

For instance, think of a line tangent to the surface of my sphere: the center of the line is the closest point to the sphere, and the further you go sideways, the more the distance increases. But that's for a sphere; for the cube, all the points of a face are equally distant from the side of the cube.

To make this work, I'm thinking of somehow projecting a position from sphere space to cube space (excuse my terminology, I'm a self taught programmer).

Now, getting back to my tangent line: if projected onto a cube, my tangent line should look like a parabola, but how can this be achieved for an arbitrary point around the sphere? In other words, a circular orbit around my planet should describe a square path in the "deformed" space.

I need an algorithm that takes a vec3 as input, representing local (sphere) space, and outputs a vec3 representing the "deformed" cube space: `vec3 cubify(vec3 p);`

I've noticed that the quadtree way of managing sphere level of detail is used almost everywhere, and it makes the most sense; however, I couldn't find any information about this particular issue.

Thank you so much for your help!
10. ## Planet rendering: From space to ground

Hello fellow game programmers,

For quite a while now I've been trying to create a planet renderer, just for fun. The fun seems to have been taken over by frustration, however, since I can't really solve the LOD problem for spherical terrains. The thing is, I've read almost everything I could find and I'm still having trouble understanding the concepts behind some of the techniques.

1. Most of the techniques I hear about use quadtrees. They use 6 quadtrees forming a cube. Question is, how is that cube mapped to the actual sphere?
2. How can I make it fast? Some techniques use the old school glVertex* and some use VBOs. Using the deprecated features of OpenGL is out of the question, so how can I use VBOs here? Do I really have to create a VBO for all of my child quads, down to the lowest recursion level? If so, that could take a hell of a lot of VRAM. Or do I have to constantly update the VBO and IBO, abusing glBufferData every time I need to update the quadtree?
3. Cracks in the ground. How is this issue solved when using VBOs/IBOs? From what I've learned, it's all a matter of checking whether our neighbours have a lower LOD than us, and if they do, cutting some edges and welding some vertices. How do I do that with VBOs and IBOs?
4. Are there any documented open source projects that I can use for reference? I know about Proland, but I can't really find my way around its source code.

Thank you!
11. ## What is the best way to update a Quadtree/Octree?

I believe rebuilding the entire octree could prove faster than splitting/merging its child nodes.

```cpp
void MyEngine::Update()
{
    m_WorldOSP->Clear();
    for (int e = 0; e < getSceneEntityCount(); e++) {
        setEntityOctant(e, m_WorldOSP->Insert(getEntityBBox(e)));
    }
}
```
12. ## Getting objects within a range (In order, fast)

Well, there's nothing wrong with iterating as long as you are iterating through a small set. Thing is, there's no magic algorithm that can simply retrieve the objects within a range without any iteration. Space partitioning is your best friend in your case, and the implementation is not that complicated.

> Got it - any good resources for learning how to do octrees and stuff?

http://programmingmind.com/projects/basic-octree

Check out the source code, it's pretty self explanatory.

Good luck!
13. ## Localized subdivision

I finally found what I was looking for.

http://kerbalspace.tumblr.com/post/9056986834/on-quadtrees-and-why-they-are-awesome

Thank you!
14. ## Getting objects within a range (In order, fast)

Well, there's nothing wrong with iterating as long as you are iterating through a small set. Thing is, there's no magic algorithm that could simply retrieve the objects within a range without any iterations. Space partitioning is your best friend in your case and the implementation is not that complicated.
15. ## Getting objects within a range (In order, fast)

If you're dealing with a large set of objects, your best option is to partition the space. I'd use an octree and place the objects inside it based on their position. This way you don't have to iterate through all of your entities, only through those in your area.

LE: If you load your objects from a database, you can easily model it to behave like an octree, saving time by loading only what's needed.