Abecederia

Members
  • Content count: 4

Community Reputation

0 Neutral

1 Follower

About Abecederia

  • Rank
    Newbie

Personal Information

  • Interests
    Art
    Programming
  1. OpenGL: The simplest OpenGL example

    Check this if you don't mind using a little library to help you: http://www.opengl-tutorial.org/beginners-tutorials/tutorial-1-opening-a-window/
  2. Yeah, I'm already scaling the sample count by angle and distance (and the offset by distance; you don't really see it, so that's great). I've also added clamping of the heightmap detail by massaging the mipmapping, and that's giving me a huge speed boost on large textures, since most of my textures are fairly smooth (like medieval brick and such). I'm doing it like this at the moment, and it works fine, but since I'm a shader noob, perhaps there's a better way?

     ```glsl
     // Two helper functions...
     float GetMipLevel(sampler2D tex, vec2 uv)
     {
         // textureQueryLod (core since GLSL 4.00): .y is the computed,
         // unclamped level of detail for this sample position.
         return textureQueryLod(tex, uv).y;
     }

     float GetMipLimit(sampler2D tex, float limit)
     {
         // Get texture size in pixels; presumes a square texture (!).
         float size = textureSize(tex, 0).x;
         // Convert to a power of two to get the number of mip levels.
         size = log2(size);
         // Mipmap 0 = nearest and largest sized texture. Get the
         // smallest required mip offset to avoid the large mip levels.
         if (limit < size) {
             return size - limit;
         } else {
             // Texture is already at or below the detail cap: no clamp.
             return 0.0;
         }
     }

     // Then inside the parallax function, but outside the loop...
     // Limit heightmap detail.
     float mipLimit = GetMipLimit(tex, 7);
     float mipLevel = GetMipLevel(tex, uv);
     float mipLod   = max(mipLevel, mipLimit);

     // And sample inside the loop...
     textureLod(tex, uv, mipLod);
     ```

     Yeah, the hierarchical traversal doesn't seem to be worth it in practice; a shame, really. Maybe it's worth it for soft shadows; the QDM paper seems to have an interesting approximation for shadowing. Another interesting thing I read was in the Cone Step Mapping paper, where he ditches the normals and instead uses vertical/horizontal derivatives, which allows him to trivially scale the normals alongside the height. Generating the derivative textures could also be crazy fast, I think... perhaps even worth doing that at load/async and shipping with only a heightmap. Seems kinda neat, but I'm not sure how much you buy with that. Thanks for the tips, I'll remember the BC4 unorm thing.
  3. Thanks a lot for the very complete answer! I've looked at your version and it's very close to my own. I do use textureLod (SampleLevel) inside the loop, since it throws anisotropic filtering out of the window [1] and doesn't look all that different. Will look into it some more once I implement texture compression.

     [1] http://www.diva-portal.org/smash/get/diva2:831762/FULLTEXT01.pdf
  4. So I've recently started learning some GLSL, and now I'm toying with a POM shader. I'm trying to optimize it, and I notice that it starts having issues at high texture sizes, especially with self-shadowing. I know POM is expensive either way, but would pulling the heightmap out of the normal map's alpha channel and into its own 8-bit texture make all those dozens of texture fetches cheaper? Or is everything in the cache aligned to 32 bits anyway? I haven't implemented texture compression yet; I assume that would help? But regardless, should there be a performance boost from decoupling the heightmap? I could also keep it at a lower resolution than the normal map if that would improve performance. Any help is much appreciated; please keep in mind I'm somewhat of a newbie. Thanks!
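The decoupling question in post 4 is essentially about bandwidth and cache footprint. A rough back-of-envelope sketch, in Python: the function name and the 32-step count are made up for this illustration, the per-texel costs are just the raw storage sizes of the formats mentioned in the thread, and it deliberately ignores cache-line granularity and texel reuse between steps, both of which matter a great deal on real GPUs.

```python
def bytes_touched(steps: int, bytes_per_texel: float) -> float:
    """Crude lower bound on unique texture bytes a POM loop pulls in
    when it takes `steps` height samples along the view ray."""
    return steps * bytes_per_texel

# Illustrative per-texel storage costs:
RGBA8 = 4.0  # height packed in the normal map's alpha: the whole 32-bit texel is fetched
R8    = 1.0  # heightmap decoupled into its own 8-bit texture
BC4   = 0.5  # block-compressed single channel (8 bytes per 4x4 block)

# e.g. for a 32-step loop: RGBA8 touches 128 bytes, R8 touches 32, BC4 touches 16
```

Halving the heightmap resolution on top of this shrinks the touched region by another factor of four, which is why a lower-resolution standalone heightmap (or a BC4 one) tends to help even before any other shader changes.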
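The mip-clamping arithmetic from post 2 can also be checked host-side. Below is a minimal Python mirror of that clamp logic, a sketch only: the function names are invented for this example, it assumes a square power-of-two texture just as the shader code does, and it returns 0 in the no-clamp case (i.e. when the texture is already at or below the detail cap).

```python
import math

def mip_limit(texture_size: int, limit: float) -> float:
    """Smallest mip level the parallax loop should be allowed to
    sample, so that no mip larger than 2**limit texels is fetched."""
    size = math.log2(texture_size)  # index of the largest mip's level count
    if limit < size:
        return size - limit
    return 0.0  # texture already at or below the cap: no clamp needed

def clamped_lod(computed_lod: float, texture_size: int, limit: float) -> float:
    """Mirror of mipLod = max(mipLevel, mipLimit) from the shader."""
    return max(computed_lod, mip_limit(texture_size, limit))

# A 2048x2048 heightmap capped at 2**7 = 128 texels of detail:
# even at computed LOD 0, the loop reads from mip 4 (2048 / 2**4 = 128).
```

This makes it easy to eyeball the clamp for each texture size before baking it into the shader.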