About Hyunkel
  1. Hyunkel

    OpenGL FBO Questions

     The reason I initially wanted to do this is that my shadow maps for directional lights have 4 cascades, and I tried generating all of them in a single pass by quadrupling the geometry in the geometry shader. By rendering only to the depth buffer, I was able to avoid allocating an additional four R32F 2048x2048 color attachment slices. After some profiling, however, I noticed that this isn't really any faster than generating the cascades in individual passes, provided I use decent frustum culling for each cascade.

     That is precisely what I am doing now. It is also much easier to keep either a linear or an exponential shadow map this way, which is needed for ESM.

     Thanks again!
     Hyu
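For readers following along: the placement of the split planes for cascades like these is commonly done with the "practical split scheme," which blends a uniform and a logarithmic distribution between the near and far planes. A minimal CPU-side sketch, with illustrative names (this is not the poster's code):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Practical split scheme for cascaded shadow maps: blend a uniform and a
// logarithmic distribution of split planes between nearZ and farZ.
// lambda = 0 gives purely uniform splits, lambda = 1 purely logarithmic.
std::vector<float> cascadeSplits(float nearZ, float farZ,
                                 int cascades, float lambda)
{
    std::vector<float> splits(cascades + 1);
    splits[0] = nearZ;
    splits[cascades] = farZ;
    for (int i = 1; i < cascades; ++i) {
        float f = static_cast<float>(i) / cascades;
        float logSplit = nearZ * std::pow(farZ / nearZ, f);  // logarithmic term
        float uniSplit = nearZ + (farZ - nearZ) * f;         // uniform term
        splits[i] = lambda * logSplit + (1.0f - lambda) * uniSplit;
    }
    return splits;
}
```

Each adjacent pair of split distances then defines one cascade's sub-frustum, which is what the per-cascade frustum culling mentioned above would test against.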
  2. Hyunkel

    OpenGL FBO Questions

     I've tried a few more combinations, and it seems it is not possible to create a texture that can be used both as a depth attachment and as a color attachment. But as you've mentioned, I can either output to gl_FragDepth in the final pass, or, during shadow map generation, output the depth to a color attachment while using a shared RBO as the depth attachment across all shadow maps.

     Thank you for all the helpful comments!
     Cheers,
     Hyu
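For context on why the R32F color attachment works here: ESM never needs the raw depth at lookup time, only the exponentially warped value exp(c * depth), which is ordinary color data that can be blurred like any other texture. A CPU-side sketch of that math with illustrative names (in the real renderer this runs per-fragment in the shaders; c = 80 is a typical choice, not a value from this thread):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// ESM shadow-map pass: store exp(c * occluderDepth) in an R32F attachment.
float esmStore(float occluderDepth, float c)
{
    return std::exp(c * occluderDepth);
}

// ESM lighting pass: visibility = clamp(exp(-c * receiverDepth) * stored, 0, 1).
// Fully lit surfaces (receiver == occluder) give ~1, occluded ones fall off
// exponentially with the depth difference.
float esmVisibility(float receiverDepth, float storedValue, float c)
{
    return std::clamp(std::exp(-c * receiverDepth) * storedValue, 0.0f, 1.0f);
}
```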
  3. Hyunkel

    OpenGL FBO Questions

     Good to know, thanks!

     But I can't use a GL_R32F texture as a depth attachment when generating the shadow map, can I?
  4. Hyunkel

    OpenGL FBO Questions

     Yes, that is correct. I would like to avoid allocating an extra texture for each shadow map if possible.

     Makes perfect sense, thanks!
  5. Hyunkel

     OpenGL FBO Questions

     Hello,

     I'm doing shadow mapping with exponential shadow maps (ESM) in OpenGL 4.3. This requires blurring the shadow map, which I do with a standard two-pass Gaussian blur. What I want to do is the following:

     Shadow Map -> (Vertical Blur Shader) -> Intermediate Texture -> (Horizontal Blur Shader) -> Shadow Map

     The shadow map is a DepthComponent32f texture and the intermediate texture uses R32F. The first pass works fine, but in the second pass, where I want to write back to the shadow map, I can't seem to use the shadow map as an FBO color attachment, so I'm unable to write back to it.

     I've also noticed, completely by accident, that I can sample from a texture that I am currently writing to, without any ill effects. For example, I can do:

     Texture -> (Vertical Blur Shader) -> Texture

     To recap:
       • Is there a way to use a DepthComponent texture as a color attachment in an FBO?
       • Why can I sample from a texture that I'm currently writing to? Is this legal in OpenGL 4.3, or is the behavior undefined?
       • What happens behind the scenes? Does the driver internally create a new texture to write to, and then discard the old one when the draw call finishes?

     Cheers,
     Hyu
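As background for the two-pass blur described above: a separable Gaussian uses the same 1D weight set for both the vertical and the horizontal pass, which is what makes the two-pass decomposition equivalent to a full 2D blur. A small sketch of how such weights might be computed (illustrative, not code from the thread; radius and sigma are assumed parameters):

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// 1D Gaussian weights for a (2 * radius + 1)-tap separable blur kernel.
// The weights are normalized to sum to 1 so the blur preserves overall
// brightness; both blur passes sample with the same weights, one along Y,
// one along X.
std::vector<float> gaussianWeights(int radius, float sigma)
{
    std::vector<float> w(2 * radius + 1);
    float sum = 0.0f;
    for (int i = -radius; i <= radius; ++i) {
        float v = std::exp(-(i * i) / (2.0f * sigma * sigma));
        w[i + radius] = v;
        sum += v;
    }
    for (float& v : w)
        v /= sum;  // normalize so the kernel sums to 1
    return w;
}
```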
  6. Hyunkel

    Directx 11 porting & other changes

     On a related note, since the XNA -> DX11 topic was brought up: I also started out prototyping what I wanted to do in XNA. I made the switch to C++/DX11 rather early, and the biggest difference really is the ability to use compute shaders. For example, I used to generate normal vectors with geometry shaders in XNA. I now do it in compute shaders, making use of group shared memory, which speeds things up a lot. Structured buffers are also really handy.
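The per-texel math behind that normal generation is typically a central-difference gradient over the heightfield; the group-shared-memory win comes from loading each neighborhood of heights once per thread group instead of refetching per thread. A CPU-side sketch of the per-sample formula, with illustrative names (a sketch, not the poster's shader):

```cpp
#include <cassert>
#include <cmath>
#include <functional>

struct Vec3 { float x, y, z; };

// Central-difference normal for a heightfield: sample the four axial
// neighbors of (x, z) and build a gradient-based normal. 'height' stands in
// for whatever height source the terrain uses; 'step' is the sample spacing.
Vec3 heightfieldNormal(const std::function<float(float, float)>& height,
                       float x, float z, float step)
{
    float hl = height(x - step, z);
    float hr = height(x + step, z);
    float hd = height(x, z - step);
    float hu = height(x, z + step);
    Vec3 n { hl - hr, 2.0f * step, hd - hu };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };  // normalize
}
```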
  7. Hyunkel

    Directx 11 porting & other changes

     Thank you so much for your reply, it really helps a lot. I was getting frustrated because I wasn't making any progress with my terrain shaping lately. Now I have new things to try that will hopefully give me better results.

     It seems we're using different methods after all, though. I generate a variable number of 33x33 terrain patches in a compute shader (position + normal) and store them in a buffer. During my geometry pass, when I want to render the planet, I use a NULL vertex buffer, an index buffer for a 33x33 patch, and a hardware instancing buffer that only provides patch IDs. Using those and SV_VertexID, I can sample the correct vertex positions and normals from my generated data.
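With a NULL vertex buffer, the vertex shader only receives SV_VertexID (plus the per-instance patch ID), so it has to compute the buffer fetch address itself. Assuming 33x33 vertices per patch stored row-major, one entry per vertex, the addressing might look like this (names are illustrative, not from the original post):

```cpp
#include <cassert>

// Vertices per patch edge: each patch is kPatchDim x kPatchDim vertices.
constexpr int kPatchDim = 33;

// Map a per-instance patch ID and a per-patch SV_VertexID to an index into
// the flat buffer of pre-generated positions/normals (row-major layout).
int patchBufferIndex(int patchId, int vertexId)
{
    int row = vertexId / kPatchDim;
    int col = vertexId % kPatchDim;
    return patchId * kPatchDim * kPatchDim + row * kPatchDim + col;
}
```

In HLSL the same arithmetic would run in the vertex shader, feeding a `StructuredBuffer` load instead of a conventional vertex fetch.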
  8. Hyunkel

    Directx 11 porting & other changes

     This is really interesting! I'm also working on DX11 procedural planet generation on the GPU (my master's thesis topic), and our approaches seem to be very similar. However, I'm wondering why you are CPU bound if you compute your terrain on the GPU? The only terrain-related work I do on the CPU is quadtree splitting, sorting and culling, which isn't very expensive and can be done in a separate thread. Everything else I do on the GPU, mostly in compute shaders. I get ~99% GPU usage because my CPU isn't really doing much besides uploading data to the GPU.

     I spend about ~1 ms in compute shaders each frame (at high LODs), generating vertex positions and normals and doing stitching for approximately 340,000 vertices on a GTX 580 (a total of 32 octaves of 3D Perlin multifractals). Because this is so fast, I haven't bothered optimizing yet and just regenerate the entire planet every frame.

     I have to say though, your terrain looks much better than mine, especially your mountains, and that is some very good looking water. I've seen in your other journal entry that you are using Voronoi/cell noise to generate your mountains, which I found very interesting. Would you be willing to say a few words about how you displace your Voronoi noise input to get such good-looking results, or how you generate your terrace effect?

     Cheers,
     Hyu
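The octave summation mentioned above can be sketched on the CPU like this. Note this is the simpler fBm-style fractal sum; a true multifractal additionally modulates each octave's amplitude by the signal accumulated so far. The noise source is a stand-in for 3D Perlin noise, and the parameter defaults are typical values, not the poster's:

```cpp
#include <cassert>
#include <cmath>
#include <functional>

// fBm-style fractal sum: accumulate 'octaves' layers of a [-1, 1] noise
// function, doubling frequency (lacunarity) and halving amplitude (gain)
// each octave. 'noise' stands in for 3D Perlin noise.
float fractalNoise(const std::function<float(float, float, float)>& noise,
                   float x, float y, float z, int octaves,
                   float lacunarity = 2.0f, float gain = 0.5f)
{
    float sum = 0.0f;
    float amplitude = 1.0f;
    float frequency = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum += amplitude * noise(x * frequency, y * frequency, z * frequency);
        frequency *= lacunarity;  // finer detail each octave
        amplitude *= gain;        // weaker contribution each octave
    }
    return sum;
}
```

With gain 0.5, 32 octaves converge to at most twice the base amplitude, which is why adding many octaves refines detail without blowing up the height range.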
