tseval

  1. Hi guys, thanks again for the feedback!

     The actual problem I am trying to solve is to make a visibility map for line of sight. Basically, you have an observer which can be dragged about on a 2D map, and you want all the areas that are visible from the observer's position to be visualized.

     In the following screenshot you can see one example where it works more or less as planned, with the observer on a mountain. The green areas are visible, while the red are invisible.

     [attachment=18478:LineOfSight.png]

     In the next shot, I have placed the observer 5 meters above the sea.

     [attachment=18479:LineOfSight2.png]

     Here you can see we get a cutoff at a couple of kilometers' distance. I suspect this is due to the depth map resolution, as described in the original post. Another effect I don't understand is the peculiar "pillow" shape of the visibility map. This shape only appears when I account for the earth's curvature in the calculations (sketched below), but I can't see why it should be shaped like this; it should be circular and further out from the observer. If I do not account for the earth's curvature, the shape is more square.

     I have tried non-square projection matrices. The results are much better when I reduce the vertical FOV, but I still don't get the full range, even with the FOV at 5 degrees.

     I don't think cascaded shadow maps would help in this case. IIRC, in CSM you split the view frustum and make one shadow map for each section along the view distance, but in this case I would be more interested in splitting the light frustum.
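     For context, a minimal sketch of the kind of curvature correction meant here, assuming the usual spherical-Earth approximation where a point at ground distance d drops by roughly d^2/(2R) below the observer's horizontal plane (GLSL; all names are illustrative, and a z-up convention with xy horizontal is assumed):

         // Spherical-Earth curvature drop, assuming drop ~ d*d / (2*R).
         // observerPos is an illustrative uniform; z-up convention.
         uniform vec3 observerPos;
         const float earthRadius = 6371000.0; // metres

         float curvatureDrop(vec3 worldPos)
         {
             // Horizontal ground distance from the observer.
             float d = distance(worldPos.xy, observerPos.xy);
             // A point at distance d sits roughly d^2 / (2R) below the
             // observer's local horizontal plane.
             return (d * d) / (2.0 * earthRadius);
         }

         // Usage: lower the sampled terrain height before the depth
         // comparison, e.g. height -= curvatureDrop(worldPos);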
  2. Hi, and thanks for the answer :-) I don't think the z value distribution is the problem here; I'm currently using logarithmic depth values, which are supposed to increase the precision of the far values.

     The problem here is rather that the y values map to the same pixel in the depth map, and of course only the closest one is stored.
  3. Hi,

     I'm implementing a line-of-sight calculation shader in which I have an observer which can be dragged about on a map, and the areas visible to the observer are visualized. This is done by traditional shadow mapping: I render the 3D terrain surface in 4 directions with 90-degree frustums to produce depth maps, and then render everything from above, using the four depth maps and frustums for lookup (sketched below).

     This works fairly well as long as there is some variation in the terrain. However, on flat surfaces the depth map doesn't have enough precision to separate the z values as they get close to the horizon. I have tried to illustrate the problem in the following figure:

     [attachment=18393:DepthMapPrecision.png]

     This is one of the camera frustums as viewed from the side. When the camera C gets close to the ground, the two depth values Z1 and Z2 will map to the same pixel in the depth map, and only the closest one is used, causing the surface beyond this point to be visualized as invisible.

     I know that this method will have its limitations, and that the range can't be too far, but does anyone have any ideas about how I could reduce this problem and increase the useful range of this method?

     I have tried increasing the number of frustums, up to 8 frustums with 45-degree FOV, and it helps a little bit, but not very much.

     Cheers
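     For context, a minimal sketch of the lookup pass described above: pick the 90-degree frustum the fragment falls into, project into that frustum's depth map, and compare depths (GLSL; all names are illustrative, and the four frustums are assumed to face +x, -x, +y and -y):

         // Top-down lookup pass. lightMatrix[i] = bias * proj * view
         // for frustum i; depthMap0..3 are the four depth maps.
         uniform sampler2D depthMap0, depthMap1, depthMap2, depthMap3;
         uniform mat4 lightMatrix[4];
         uniform vec3 observerPos;

         varying vec3 worldPos; // interpolated world-space position

         float storedDepth(int face, vec2 uv)
         {
             // Old-style GLSL cannot index samplers dynamically,
             // hence the branch.
             if (face == 0) return texture2D(depthMap0, uv).r;
             if (face == 1) return texture2D(depthMap1, uv).r;
             if (face == 2) return texture2D(depthMap2, uv).r;
             return texture2D(depthMap3, uv).r;
         }

         void main()
         {
             // Choose the frustum by the dominant horizontal direction
             // from the observer to the fragment.
             vec2 d = worldPos.xy - observerPos.xy;
             int face = (abs(d.x) > abs(d.y)) ? ((d.x > 0.0) ? 0 : 1)
                                              : ((d.y > 0.0) ? 2 : 3);

             // Project into that frustum's depth map.
             vec4 p = lightMatrix[face] * vec4(worldPos, 1.0);
             vec3 uvz = p.xyz / p.w;

             // Visible if our depth is not behind the stored depth.
             bool visible = uvz.z <= storedDepth(face, uvz.xy) + 0.0005;
             gl_FragColor = visible ? vec4(0.0, 1.0, 0.0, 0.5)   // green
                                    : vec4(1.0, 0.0, 0.0, 0.5);  // red
         }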
  4. Quote: Original post by swiftcoder
        Quote: Original post by tseval
           Now, one challenge here is that the water should look nice from a large span of altitudes, i.e. it should look nice from a fairly close distance (walking along the water shore) as well as when flying 10 km above the water. Using a simple tiling noise texture may look good very close up, but when zooming out, the repeating tiling pattern will be noticeable, and the high frequency of the noise gets very disturbing.
        If you ever look out the window of an airplane, you will notice that the ocean *does* look a bit odd due to the high-frequency noise of the waves. I think the key here is to reduce the surface opacity with altitude, which will also reduce the intensity of surface noise.
     True, I'm still not sure how to reduce the tile patterns in the noise though. Any ideas?
  5. Hi, I'm implementing a simple water shader for a terrain model. The general idea is to use a noise function or noise texture to modify the normal vectors and do bump mapping on it. Currently I have a textured terrain where the water is tagged with a special alpha value in the terrain textures, and then I add a specular highlight for water in the terrain shader.

     Now, one challenge here is that the water should look nice from a large span of altitudes, i.e. it should look nice from a fairly close distance (walking along the water shore) as well as when flying 10 km above the water. Using a simple tiling noise texture may look good very close up, but when zooming out, the repeating tiling pattern will be noticeable, and the high frequency of the noise gets very disturbing.

     Any ideas how to solve this in a GLSL shader?

     Cheers
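     For context, one common approach is to blend the same noise texture at two very different scales and fade the detail layer with distance, so that no single tiling frequency dominates; a minimal GLSL sketch (illustrative names, not the actual shader; z-up with xy horizontal assumed):

         // Reduce visible tiling by mixing the same noise texture at
         // two incommensurate scales; their sum repeats far less often
         // than either layer alone. Fade the high-frequency layer with
         // distance so the noise calms down when seen from altitude.
         uniform sampler2D noiseTex;
         uniform vec3 cameraPos;

         varying vec3 worldPos;
         varying vec3 waterNormal; // interpolated surface normal

         vec3 bumpedNormal()
         {
             vec3 n1 = texture2D(noiseTex, worldPos.xy * 0.731).xyz * 2.0 - 1.0;
             vec3 n2 = texture2D(noiseTex, worldPos.xy * 0.113).xyz * 2.0 - 1.0;

             // Fade the detail layer out between 100 m and 5 km.
             float dist = distance(worldPos, cameraPos);
             float detail = 1.0 - smoothstep(100.0, 5000.0, dist);

             return normalize(waterNormal + (n2 + n1 * detail) * 0.3);
         }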
  6. Hi, I know that this particular library has been discussed earlier, and it's known to be quite buggy... However, I thought I'd check if anyone else had seen this particular issue and had a fix for it. The library in question is NVIDIA's NvTriStrip library for generating post-T&L vertex-cache-friendly triangle strips: http://developer.nvidia.com/object/nvtristrip_library.html

     My problem is that NvTriStrip from time to time will return part of a strip with reversed triangle order, so that those triangles are flipped relative to the rest. Anyone seen this? Alternatively, if there are other libraries that do the same, I could use one of those instead. NvTriStrip is very slow, but I usually get good results (when I don't get those flipped triangles), and I like that it returns everything stitched into one big triangle strip.

     Cheers
  7. Great! That solved it, thanks a lot :-D
  8. Hi folks, I have a little problem with a depth texture FBO. I want to render to a depth buffer (for shadow map rendering) and use the following code to initialize the FBO:

         glGenTextures(1, &depth_tex_);
         glBindTexture(GL_TEXTURE_2D, depth_tex_);
         glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24,
                      depth_size_*num_splits_, depth_size_, 0,
                      GL_DEPTH_COMPONENT, GL_UNSIGNED_BYTE, (GLvoid*)NULL);

         // Set up FBO with the depth texture as target.
         glGenFramebuffersEXT(1, &depth_fb_);
         glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, depth_fb_);

         // Attach texture to framebuffer depth buffer
         glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                   GL_TEXTURE_2D, depth_tex_, 0);

         glDrawBuffer(GL_NONE);
         glReadBuffer(GL_NONE);

         GLuint status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);

     This code seems to work nicely, since I get my shadows and everything. However, glCheckFramebufferStatusEXT returns GL_FRAMEBUFFER_UNSUPPORTED_EXT on my Linux computers with NVIDIA 8700 and 7950 graphics cards. On a MacBook Pro with an NVIDIA 9600 it returns GL_FRAMEBUFFER_COMPLETE_EXT. As I said, the code works, even if the status check returns an error condition on Linux, but I would like to know what happens here... Can anyone see what I have done wrong, or is this a driver problem or something?

     Cheers
  9. Great, thanks! I'll check it out
  10. I can adjust the near clipping plane in the frustum culling to avoid the problem, but I also ran into the interpolation problems mentioned on cameni's blog, so I went back to the split-scene solution for the time being. However, I was wondering: could this be done per fragment instead of per vertex? Wouldn't that eliminate the interpolation problem? (A sketch of the per-fragment variant follows below.)

      @cameni: The problem was that all objects close to the camera were culled by the near clipping plane in the software frustum culling, indicating that the z coordinate before projection was closer to the screen than the modified depth coordinate in GL.
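      For context, the per-fragment variant is commonly written like this: the vertex shader passes 1 + w through a varying (so interpolation stays linear), and the fragment shader takes the logarithm and writes gl_FragDepth. A minimal GLSL sketch, assuming Fcoef = 2.0 / log2(farPlane + 1.0); all names are illustrative:

          // Vertex shader: defer the logarithm to the fragment stage.
          uniform float Fcoef; // 2.0 / log2(farPlane + 1.0)
          varying float logz;

          void main()
          {
              gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
              logz = 1.0 + gl_Position.w;
          }

          // Fragment shader: write logarithmic depth per fragment.
          // Note that writing gl_FragDepth disables early-z rejection,
          // which is the usual cost of this fix.
          uniform float Fcoef;
          varying float logz;

          void main()
          {
              gl_FragDepth = log2(logz) * Fcoef * 0.5;
              gl_FragColor = vec4(1.0); // shading goes here
          }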
  11. Hi again, I tried the logarithmic z function and that seemed to work nicely! However, it seems that it shifts the entire scene in the z direction. This means that my frustum culling fails on close objects, because the near clipping plane seems to be in another place than expected. The frustum culling is done before the projection matrix is applied; the log z transform is done after projection. Am I doing something wrong here, or is this as expected?
  12. @Waterwalker: You're right, I can render those objects twice :-) However, I would really like a different solution than splitting the scene. I get a lot of overhead, since I use cascaded shadow maps and need to calculate shadow maps for both zones.

      @cameni: The logarithmic z buffer looks like just what I need! How do you implement this in practice? I use OpenGL, so I guess I have to introduce this calculation in all my fragment shaders? Or is there another clever way to insert it right after the projection step? (See the sketch below.)
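      For context, the form of the transform cameni describes on his blog goes in the vertex shader, right after the projection; a minimal GLSL sketch (C is a tunable constant and farPlane the far-plane distance, both illustrative):

          // Logarithmic depth, applied after projection. Multiplying by
          // w cancels the later perspective divide, so the hardware
          // stores log-distributed depth. C trades near vs. far
          // precision; C = 1.0 is a reasonable start.
          uniform float farPlane;
          const float C = 1.0;

          void main()
          {
              gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
              gl_Position.z = (2.0 * log(C * gl_Position.w + 1.0) /
                                     log(C * farPlane + 1.0) - 1.0)
                              * gl_Position.w;
          }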
  13. Hi folks, I'm working on an indie game where the players will be able to drive and fly vehicles from the ground all the way into space. We have a global terrain model, and the visual range will be all the way to the horizon.

      Now, when the player moves around on foot on the ground, this is OK; the far clipping plane will generally not be that far out. But when the player enters a vehicle and flies high up, we need a near clipping plane close enough not to clip the cockpit geometry and a far clipping plane distant enough to reach the horizon, which can be very far if we're high enough. Needless to say, this is bad for z buffer precision...

      Now, we've tried splitting the scene so that we render the far objects first, then clear the z buffer and render the near objects on top. This works most of the time, but we have some cases where objects from the "far" group will stretch into the near group, typically very large objects. Then we may have smaller objects from the "near" group that should have been behind parts of the large object, but still will be rendered in front of it.

      I just wondered if some of you had worked with similar problems before, and if you have some clever ideas to solve the problem of huge z distance spans.

      Cheers
  14. Hi folks, I'm trying to integrate skinning in an existing scene graph structure for a game engine. I'm struggling a bit to make it fit and make it efficient, and I hope someone can help me out with some input on how to solve this.

      Our models are modeled and animated in Maya and exported to Collada. We have implemented joints as transform nodes in the scene graph, more or less like any other transform nodes. Then we have mesh nodes that can be placed elsewhere in the scene graph, with references to the joints. To render, we first make an update pass in the scene graph to update all the animated transforms, joints and skinning. Next we do a culling pass where we gather all the visible meshes in render bins. Finally we render all the buffers in the render bins.

      Now, especially the update pass we do first is very inelegant and very inefficient, so we would like to get rid of it. Instead we want to update all transforms and joints by lazy evaluation and do the skinning in the culling pass. We have two fundamental problems here:

      1. The joints are part of the scene graph structure and need to be updated before we render the meshes, but the traversal order doesn't guarantee that the joints are visited first. Joints as scene graph nodes aren't a requirement in themselves, but we have models with multiple skeletons that can be placed deep in the scene graph with animated transforms as parents.

      2. The bounding volumes of the meshes depend on the joint transforms, so we can't determine visibility before we have done the skinning. I suppose we could do some kind of pre-calculation to determine the "worst case bounding volume" based on the currently loaded animations, but I would like to hear what other people do here.

      We have thought of several alternative solutions to this, but none of them are particularly elegant. We're surely not the first to have these problems, so hopefully someone can provide some useful tips on how to do this. Any suggestions are welcome; we're prepared to rewrite large parts of this to make it work, as long as we can make it efficient and flexible.

      Thanks!
  15. I was thinking about the variant with a blob-textured quad. I am, however, a bit uncertain how I would handle depth sorting and z-buffer issues with this method. I guess I could disable depth testing/depth writes and draw it after the terrain and before the shadowing object. I'm not sure how this would work when drawing other objects nearby, though. It would simplify things a lot if I could draw the shadows together with the shadowing object, instead of drawing all shadows before all objects. Any thoughts on this?