Lewa

Member
  • Content count

    67
  • Joined

  • Last visited

Community Reputation

426 Neutral

About Lewa

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Business
    Programming

  1. Thanks! The visualisation helped me grasp the concept better, and I was able to implement it in my C++ project. Some screenshots: It works quite well. (There are some issues, like light bleeding at the intersections between two planes, and the UV map isn't great, but that can be fixed.) I used Blender's icosphere to create uniformly distributed points on the sky. The sample count had to be quite high to avoid banding artifacts.

     Now the issue is that, because the shadow maps are only cast from above the ground (pointing downwards), all triangles that face downwards (face normal at 0,0,-1) end up completely black. An example: One possible solution would be to add points below the ground (basically creating a full point-cloud sphere instead of only a half-sphere), but the results were subpar, especially as the ground mesh occludes most of the scene anyway. Removing the floor mesh from the shadow-map passes whose origin lies below the ground might work, but that introduces artifacts on geometry that is in contact with the ground.

     I think the only proper solution is to keep the half-sphere (like in the screenshot above) and add at least one bounce of lighting to the AO calculation to brighten interiors a bit, but I wasn't able to find a solution that works well enough with this baking approach. (Maybe reflective shadow maps? The issue is that they don't seem to check for occlusion of the bounced light.)
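     A minimal sketch of how such a bake pass could look, assuming a lightmap-space fragment shader and a depth-array texture with one layer per icosphere direction; all names (uSkyDirections, uShadowMatrices, sShadowMaps) are made up for illustration, and the cosine weighting is my own addition that softens, but does not solve, the fully black down-facing faces:

         // Lightmap-space bake: accumulate sky visibility over the hemisphere samples.
         #define NUM_DIRECTIONS 64                       // high sample count to avoid banding
         uniform vec3 uSkyDirections[NUM_DIRECTIONS];    // icosphere vertices (upper half)
         uniform mat4 uShadowMatrices[NUM_DIRECTIONS];   // world -> shadow clip space
         uniform sampler2DArrayShadow sShadowMaps;       // one depth layer per direction

         float bakeAO(vec3 worldPos, vec3 worldNormal)
         {
             float visibility = 0.0;
             float weightSum  = 0.0;
             for (int i = 0; i < NUM_DIRECTIONS; ++i)
             {
                 float w = max(dot(worldNormal, uSkyDirections[i]), 0.0); // cosine weight
                 vec4 sc = uShadowMatrices[i] * vec4(worldPos, 1.0);
                 sc.xyz  = sc.xyz / sc.w * 0.5 + 0.5;                     // to [0,1]
                 visibility += w * texture(sShadowMaps, vec4(sc.xy, float(i), sc.z));
                 weightSum  += w;
             }
             return weightSum > 0.0 ? visibility / weightSum : 0.0;       // 1 = fully open sky
         }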
  2. I just tested this simple setup in Blender and baked AO there: The middle part is correctly occluded, but the edges on the sides wouldn't be lit by the shadow maps (because those only come from the top). I suppose that placing additional shadow maps at the bottom isn't enough, since those may then interfere with additional geometry (like a floor, for example). How did you handle this issue? Or is this just an artifact one has to accept with this technique? Baking those lightmaps during the loading process is a good idea. (Hopefully it doesn't drag the loading times out too much.)
  3. So, I'm currently on a quest to find a real-time world-space ambient occlusion algorithm for a game I'm making, or at least to check whether that's feasible. I wanted to avoid baking AO/lightmaps in a map editor, both to keep lightmap data out of my level files (to keep the file size down) and to avoid expensive, long precomputations in the first place.

     I stumbled upon an AO concept that works by rendering multiple shadow maps placed on the sky's hemisphere and then merging them together to create the ambient occlusion effect. Here is an old example/demo from NVIDIA: http://developer.download.nvidia.com/SDK/9.5/Samples/samples.html#ambient_occlusion I was also able to find a video which shows this in action. It seems to work rather well, although I can see a couple of issues with it:

     - Rendering multiple shadow maps is expensive (though that's expected with real-time AO).
     - As shadows are only cast from the top, every surface pointing downwards will be 100% in shadow/black. (Normally such a surface would receive a bit of light around the edges from light bouncing around. The technique works best for surfaces facing upwards/towards the sky.)
     - Flickering can be an issue if the shadow maps cover a large scene/area or if their resolution is too low. (Could be fixed?)

     It's incredibly hard to find information on this technique on the internet (demos, implementation/improvement details, etc.), I suppose because it's not that widely used. Did anybody implement AO in a similar style? Are there any known sources which cover this technique in more detail?
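     A minimal sketch of the merging step, assuming a deferred setup where the world-space position is available per pixel and one shadow map is rendered per hemisphere direction; every name here is an assumption, not taken from the NVIDIA demo:

         // One accumulation pass per hemisphere direction, blended additively
         // into an AO buffer; after all passes the buffer holds the average
         // visibility (the blend factor 1/numberOfDirections does the averaging).
         uniform mat4            uShadowMatrix;   // world -> shadow clip space for this direction
         uniform sampler2DShadow sShadowMap;      // depth map rendered from this direction
         uniform float           uPassWeight;     // 1.0 / number of directions

         float accumulateVisibility(vec3 worldPos)
         {
             vec4 sc = uShadowMatrix * vec4(worldPos, 1.0);
             sc.xyz  = sc.xyz / sc.w * 0.5 + 0.5;
             float lit = texture(sShadowMap, sc.xyz);  // 1 if the sky is visible along this direction
             return lit * uPassWeight;                 // written with additive blending (GL_ONE, GL_ONE)
         }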
  4. Yes, I have. I ended up scrapping the Uncharted tonemapper because it creates a rather dim image, and white-balancing it creates a rather unsaturated image which doesn't fit the overall look I'm going for. (It might work for a more realistic scene, but in this case, where everything is mostly grey/white, it doesn't seem to.) What I tried instead is a custom curve (still experimenting with it):

         vec3 custom( vec3 x )
         {
             float a = 10.2; // Mid
             float b = 1.4;  // Toe
             float c = 1.5;  // Shoulder
             float d = 1.5;  // Mid
             vec3 r = (x*(a*x+b))/(x*(a*x+c)+d);
             return r;
         }

     Although I think the best solution in this case would be a partially linear curve (basically starting out linearly and then rolling off at the end so the brightest values still fit in range) and then applying color grading (in LDR) to get the desired look.
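     Purely as an illustration of that idea, a sketch of a curve that is linear up to a threshold and then rolls off towards a white point (both constants are assumptions):

         // Identity below `shoulderStart`, exponential shoulder approaching `whitePoint` above it.
         vec3 linearWithShoulder(vec3 x)
         {
             const float shoulderStart = 0.8; // hypothetical: where the linear part ends
             const float whitePoint    = 1.0; // hypothetical: value the shoulder approaches
             vec3 overflow = max(x - shoulderStart, vec3(0.0));
             vec3 shoulder = (whitePoint - shoulderStart)
                           * (vec3(1.0) - exp(-overflow / (whitePoint - shoulderStart)));
             return min(x, vec3(shoulderStart)) + shoulder; // continuous, slope 1 at the join
         }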
  5. That makes sense. So basically you divide/multiply the value by the exposure (to bring it down into a smaller range) and let the tonemapper handle the values which are >1 so that they also end up in the LDR range. Correct?

     So here is what I did. The shader starts like this:

         void main() {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point texture with values from 0-10

     It looks like this: Obviously the values are mostly outside the range of 0-1. Then I divide this value by the exposure:

         color = color/uExposure; // exposure (in this case uExposure is manually set to 7)

     This brings the values down while still maintaining HDR. Now I apply the Uncharted tonemapper:

         color = Uncharted2Tonemap(color);

     And this results (again) in a darker image. I'm not sure if this is correct, but I tried to increase the values by dividing the tonemapped color by Uncharted2Tonemap(vec3(1,1,1)). Given that the tonemapper seems to converge to 1 (but very slowly), this is very likely wrong (and might not be necessary with other tonemappers?):

         //color = Uncharted2Tonemap(color);
         color = Uncharted2Tonemap(color)/Uncharted2Tonemap(vec3(1,1,1));

     Which results in this image: No idea if that's the correct approach (probably not, in the case of the tonemapper division). /Edit: Found a post on this site from someone who seemed to have the exact same issue. Though, it doesn't seem to be solved there either.
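     For comparison, a sketch of how the filmicworlds article (linked in the earlier post) pairs this curve with a white point; the constant 11.2 and the exposure-bias uniform are recalled from that article, so treat the exact values as assumptions:

         const float W = 11.2;                 // linear white point used in the article
         uniform float uExposureBias;          // hypothetical uniform, e.g. 2.0

         vec3 tonemapFilmic(vec3 hdrColor)
         {
             vec3 curr       = Uncharted2Tonemap(uExposureBias * hdrColor);
             vec3 whiteScale = vec3(1.0) / Uncharted2Tonemap(vec3(W));
             return curr * whiteScale;         // dividing by the curve at W (not at 1.0)
         }                                     // maps W, rather than 1.0, to pure white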
  6. So, is there a reference for what range my lights/values should be in? I also tried setting the sun value to 1000 and the ambient light to 1 while applying the Uncharted tonemapper: I think I'm getting something fundamentally wrong here. Code again:

         void main() {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point values from 0 - 1000

             //tonemap
             color = Uncharted2Tonemap(color);

             color = gammaCorrection(color);
             outputF = vec4(color,1.0f);
         }

     Tonemappers map the HDR range to LDR. But what I don't quite get is how this can work properly if they don't "know" the range of your maximum-brightness RGB value in the first place (they only take the RGB values of the specific pixel in your FP buffer as input). The range has to matter if you want to realize, for example, an S-curve in your tonemapper (like in the ACES filmic tonemapper). And that can only happen if either a) you pass the range into the tonemapper (if you have an arbitrary range of brightness) or b) the tonemapping algorithm assumes that your values are in a specific "correct" range in the first place.
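     One common answer to the "how does the tonemapper know the range" question is that the exposure step supplies it. A minimal sketch, assuming the average scene luminance has been measured somehow (the uniform name is made up):

         uniform float uAverageLuminance;   // hypothetical: e.g. from downsampling the HDR buffer
         const float key = 0.18;            // target "middle grey"

         vec3 applyAutoExposure(vec3 hdrColor)
         {
             float exposure = key / max(uAverageLuminance, 0.0001);
             return hdrColor * exposure;    // a sun of 1, 10 or 1000 now lands in a similar
         }                                  // range before the curve is applied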
  7. This is what your proposed tonemapper looks like:

         vec3 reinhardTone(vec3 color){
             vec3 hdrColor = color;
             // reinhard tone mapping
             vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
             return mapped;
         }

     And the image: The light values are between 0 and 10 (7 for directional, 3 for ambient). Yes, it brings the values down from 0-X to 0-1, but it doesn't look good at all. That's why I wonder whether the values of the lights and the sun have to be in a specific range (by adjusting exposure?) in order for this to work properly and create images like this: Even the screenshots from the Frictional Games blog post look neither too bright nor too dark: (Given that their image without any kind of tonemapping isn't overexposed, I suppose they used values from 0-1 for the light sources like I did before (instead of 0-10 or 0-100, etc.), but this doesn't explain why the Uncharted tonemapper results in a more natural image in their case compared to my darkened image in the first post.)

     This is how I apply the tonemappers:

         vec3 reinhardTone(vec3 color){
             vec3 hdrColor = color;
             // reinhard tone mapping
             vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
             return mapped;
         }

         vec3 gammaCorrection(vec3 color){
             // gamma correction
             color = pow(color, vec3(1.0/2.2));
             return color;
         }

         vec3 Uncharted2Tonemap(vec3 x)
         {
             float A = 0.15;
             float B = 0.50;
             float C = 0.10;
             float D = 0.20;
             float E = 0.02;
             float F = 0.30;
             return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
         }

         void main() {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // this texture is an FP16 RGBA
                                                                     // framebuffer which stores values from 0-10
             color = reinhardTone(color);
             //color = Uncharted2Tonemap(color);

             //gamma correction (use only if not done in tonemapping code)
             color = gammaCorrection(color);

             outputF = vec4(color,1.0f);
         }
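     Not from the thread, but for comparison: the "extended" Reinhard variant adds a white point so bright values can actually reach 1.0, which tends to look less flat than plain x/(x+1). A sketch, with the white point chosen arbitrarily to match the 0-10 buffer:

         vec3 reinhardExtended(vec3 color)
         {
             const float whitePoint = 10.0; // assumption: roughly the max value in the buffer
             vec3 numerator = color * (vec3(1.0) + color / vec3(whitePoint * whitePoint));
             return numerator / (vec3(1.0) + color); // inputs at whitePoint map exactly to 1.0
         }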
  8. I use an FP16 RGBA buffer to store the HDR values (they are also in linear space), and the image is gamma corrected. The image appears a bit dark because the textures have a maximum brightness value of 0.8 and the directional light a value of 1. (Thus, even if the dot product between the light direction and the triangle normal is 1, you will get at most a value of 0.8.)

     That's also one of the issues I haven't figured out yet. I'm using a PBR rendering pipeline, and while researching online I always stumble upon the suggestion that in PBR one should use "real world values" to light the scene, but it's never explained/shown what this should look like (no reference values to take note of). For example, setting the light values to 7 (directional light) and 3 (ambient light), meaning the maximum value in the HDR FP16 buffer can never exceed 10, the image looks like this: Without the Uncharted tonemap (obviously mostly white, because I'm mapping values >1 to the screen): With the Uncharted tonemap:

     So if that's the correct behaviour, how can I get a "normal looking" image? What HDR range is required (in the FP16 buffer) in order to get correct results after tonemapping? /Edit: IMHO there is a big difference between a value of 0.7 and 0.9 which then gets displayed on the screen. So, does this tonemapper expect you to have values >100 in order to "properly" map to the display's 0-1 range?
  9. So, I'm still on my quest to understand the intricacies of HDR and to implement them in my engine. Currently I'm at the step of implementing tonemapping. I stumbled upon these blog posts:

     http://filmicworlds.com/blog/filmic-tonemapping-operators/
     http://frictionalgames.blogspot.com/2012/09/tech-feature-hdr-lightning.html

     and tried to implement some of the tonemapping methods mentioned there in my post-processing shader. The issue is that none of them produces the same results as shown in the blog posts, which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity's sake I store values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7). This is the tonemapping code:

         vec3 Uncharted2Tonemap(vec3 x)
         {
             float A = 0.15;
             float B = 0.50;
             float C = 0.10;
             float D = 0.20;
             float E = 0.02;
             float F = 0.30;
             return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
         }

     This is without the Uncharted tonemapping: This is with the Uncharted tonemapping: Which makes the image a lot darker. The shader code looks like this:

         void main() {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;

             color = Uncharted2Tonemap(color);

             //gamma correction (use only if not done in tonemapping code)
             color = gammaCorrection(color);

             outputF = vec4(color,1.0f);
         }

     Now, my understanding is that tonemapping should bring the range down from HDR to 0-1. But the output of the tonemapping function heavily depends on the initial range of the values in the HDR buffer. (You can't set the sun intensity to 10 the first time and to 1000 the second time and expect the same result if you feed that into the tonemapper.) So I suppose this also depends on the exposure, which I still have to implement?

     To check this I plotted the tonemapping curve: You can see that the curve only goes up to a value of around 0.21 (when fed a value of 1) and then basically flattens out, which would explain why the image got darker. My question is: in what range should the values in the HDR buffer be before they get tonemapped? Do I have to bring them down to a range of 0-1 by multiplying by the exposure? For example, if I increase the values of the lights by 10 (directional light would be 7 and ambient light 3), then I would need to divide the HDR values by 10 in order to get a value range of 0-1, which could then be fed into the tonemapping curve. Is that correct?
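     As a sanity check of that plateau, evaluating the curve above at x = 1 by hand, followed by the usual way of feeding it (uExposure is an assumed uniform, not from the post):

         // Worked check using the constants above:
         //   numerator(1)   = 1*(0.15*1 + 0.10*0.50) + 0.20*0.02 = 0.204
         //   denominator(1) = 1*(0.15*1 + 0.50)      + 0.20*0.30 = 0.710
         //   curve(1)       = 0.204/0.710 - 0.02/0.30            ≈ 0.22
         // so an input of 1.0 really does come out at roughly 0.21-0.22.
         // A common way to feed the curve is to multiply by an exposure first,
         // rather than forcing the whole buffer into exactly 0-1:
         vec3 mapped = Uncharted2Tonemap(color * uExposure);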
  10. Which transformation are you referring to in this case? The camera? The object's transformation (which I don't have access to in the shader, only the position of the pixel, which is reconstructed from the depth)? And does setting w to zero refer to this line?

          vec3 normal = ((uNormalViewMatrix*vec4(normalize(texture2D(sNormals, vTexcoord).rgb),0.0)).xyz); // W set to zero
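      A small sketch of what setting w to 0 accomplishes, assuming uViewMatrix is the ordinary camera view matrix (the name is mine): with w = 0 the translation part of the matrix is ignored, so the same thing can be written with a mat3.

          // Equivalent ways to move a world-space direction into view space:
          vec3 n = normalize(texture2D(sNormals, vTexcoord).rgb);
          vec3 viewNormalA = (uViewMatrix * vec4(n, 0.0)).xyz;   // w = 0 drops the translation
          vec3 viewNormalB = mat3(uViewMatrix) * n;              // same result, rotation only
          // (if the view matrix contained non-uniform scale, the inverse-transpose
          //  would be needed instead; a pure camera view matrix does not)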
  11. So, I'm currently trying to implement an SSAO shader from THIS tutorial and I'm running into a few issues. This SSAO method requires view-space positions and normals. I store the normals in my deferred renderer in world space, so I had to do a conversion and reconstruct the position from the depth buffer. And something there goes horribly wrong (which probably has to do with the world-space to view-space transformations). (Here is the full shader source code if someone wants to take a look at it.)

      Now, I suspect that the normals are the culprit:

          vec3 normal = ((uNormalViewMatrix*vec4(normalize(texture2D(sNormals, vTexcoord).rgb),1.0)).xyz);

      "sNormals" is a 2D texture which stores the normals in world space in an RGB FP16 buffer. I can't use the camera's view matrix directly to transform the normals into view space, because the camera's position isn't at (0,0,0), which would skew the result. So what I did is create a new view matrix specifically for the normals, with the position at vec3(0,0,0):

          //"camera" is the camera which was used for rendering the normal buffer
          renderer.setUniform4m(ressources->shaderSSAO->getUniform("uNormalViewMatrix"),
              glmExt::createViewMatrix(glm::vec3(0,0,0),camera.getForward(),camera.getUp()) //parameters are (position,forwardVector,upVector)
          );

      Though I have the feeling this is the wrong approach. Is this right, or is there a better/correct way of transforming a world-space normal into view space?
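      Since the post also mentions reconstructing the position from depth, here is a minimal sketch of the usual inverse-projection approach; uInverseProjection and sDepth are assumed names, and the depth is assumed to be a standard hardware depth value in [0,1]:

          uniform sampler2D sDepth;             // hardware depth buffer, values in [0,1]
          uniform mat4 uInverseProjection;      // inverse of the camera projection matrix

          vec3 viewPositionFromDepth(vec2 uv)
          {
              float depth  = texture2D(sDepth, uv).r;
              vec4 ndc     = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0); // to NDC (-1..1)
              vec4 viewPos = uInverseProjection * ndc;
              return viewPos.xyz / viewPos.w;   // perspective divide gives the view-space position
          }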
  12. I think I understand now. So essentially we store the FP32 (linear) values in an sRGB (non-linear) buffer in order to preserve precision between steps. Does writing into an sRGB texture convert linear data to sRGB data? The only way this can work is if:

      - writing to an sRGB framebuffer converts the written linear data to non-linear (sRGB) data, and
      - reading/sampling the sRGB framebuffer converts the sampled sRGB data back to linear data (that's also how the textures work).

      Is this how sRGB framebuffers/textures behave? Sorry for all these questions. I've never worked in the sRGB color space and have absolutely no idea how reading/writing from/to sRGB textures actually behaves.
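      For reference, a sketch of the conversions the hardware applies for sRGB textures and framebuffers (the standard piecewise sRGB transfer functions, written out in GLSL purely as an illustration; the write-side conversion only happens while GL_FRAMEBUFFER_SRGB is enabled):

          // Applied by the hardware when SAMPLING an sRGB texture (sRGB -> linear):
          vec3 srgbToLinear(vec3 c)
          {
              return mix(c / 12.92,
                         pow((c + 0.055) / 1.055, vec3(2.4)),
                         step(vec3(0.04045), c));
          }

          // Applied by the hardware when WRITING to an sRGB framebuffer (linear -> sRGB):
          vec3 linearToSrgb(vec3 c)
          {
              return mix(c * 12.92,
                         1.055 * pow(c, vec3(1.0 / 2.4)) - 0.055,
                         step(vec3(0.0031308), c));
          }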
  13. Well, yes, if I output the framebuffer directly to the screen, then the sRGB framebuffer will do the conversion from linear to sRGB space for me. But more often than not (deferred rendering) we do additional post-processing steps (reading from the albedo buffer, etc.), so we need linear space. From my understanding, setting the sRGB flag for the framebuffer would convert the linear colors to sRGB; if I then access that framebuffer in a post-processing shader, this would lead to wrong results again (as I would be adding/multiplying sRGB colors). I found this post: https://stackoverflow.com/questions/11386199/when-to-call-glenablegl-framebuffer-srgb The first answer tells us that we should remain in linear space until the very end, and thus not use sRGB for post-processing purposes. However, as you said, the precision of the framebuffer needs to be increased in order to avoid losing precision due to the conversion. So the solution would be:

      - set textures to sRGB,
      - framebuffers should remain in linear space (RGB, not sRGB) but with increased precision (RGB10, FP16, etc.) in order to preserve precision,
      - at the end of the render pipeline, do gamma correction with a shader or a separate sRGB framebuffer to output to the screen in sRGB.

      Is this correct?
  14. So textures need to be loaded as GL_SRGB in order to gamma correct them for calculations, meaning we convert them from sRGB to linear space. Now, what I don't get is why the framebuffer also has to be set to sRGB. The texture values which are read/processed are converted to linear space and stored linearly in the framebuffer, so that should be fine? (As an example, if I read the framebuffer values in a shader for additional post-processing effects, then I already have them in linear space and don't need to convert anything with GL_SRGB.) The only thing we have to do is convert back from linear space to sRGB with (as an example) a post-processing shader at the end of the render stage. Am I missing something with the framebuffer?
  15. So, I stumbled upon the topic of gamma correction. https://learnopengl.com/Advanced-Lighting/Gamma-Correction Here is what I've been able to gather (please correct me if I'm wrong):

      - Old CRT monitors couldn't display color linearly; that's why gamma correction was necessary.
      - Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
      - All games have to apply gamma correction? (Unsure about that.)
      - All textures stored in file formats (.png for example) are essentially stored in the sRGB color space (as what we see on the monitor is skewed due to gamma correction; so the pixel information is the same, the perceived colors are just wrong).
      - This makes textures loaded into the GL_RGB format non-linear, so all lighting calculations are wrong.
      - You always have to use the GL_SRGB format to gamma correct/linearise textures which are in sRGB format.

      Now, I'm kind of confused about how to proceed with applying gamma correction in OpenGL. First off, how can I check whether my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (I can't modify it, though). I'm connected to my PC via an HDMI cable, and I'm using the full RGB range (0-255, not the 16 to ~240 range).

      What I tried is to apply the gamma correction shader shown in the tutorial above, which essentially looks like this (it's a post-process shader applied at the end of the render pipeline):

          vec3 gammaCorrection(vec3 color){
              // gamma correction
              color = pow(color, vec3(1.0/2.2));
              return color;
          }

          void main() {
              vec3 color;
              vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
              color = gammaCorrection(tex);

              outputF = vec4(color,1.0f);
          }

      The results look like this: No gamma correction: With gamma correction: The colors in the gamma-corrected image look really washed out, to the point that it's downright ugly, as if someone overlaid a half-transparent white texture. I want the colors to pop. Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma correct them, in addition to applying the post-process gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?
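      One common cause of this washed-out look is applying gamma twice: if the source textures stay in sRGB while the output also gets a pow(1.0/2.2), the image is brightened twice. A minimal sketch of the input side, as a manual alternative to loading the textures as GL_SRGB:

          // Decode the albedo texture to linear space before lighting
          // (approximate 2.2 gamma; the exact sRGB curve differs slightly).
          vec3 albedoSrgb   = texture2D(texture_diffuse, vTexcoord).rgb;
          vec3 albedoLinear = pow(albedoSrgb, vec3(2.2));
          // ... do lighting in linear space with albedoLinear ...
          // then encode ONCE at the very end of the frame:
          // outputF = vec4(pow(finalLinearColor, vec3(1.0/2.2)), 1.0);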