Lewa

Member
  • Content count: 64
  • Joined
  • Last visited
Community Reputation

423 Neutral

About Lewa

  • Rank
    Member

Personal Information

  • Role
    Programmer
  • Interests
    Business
    Programming


  1. Yes, I have. I ended up scrapping the Uncharted tonemapper because A) it creates a rather dim image, and B) white-balancing it creates a rather unsaturated image, which doesn't fit the overall look I'm going for. (It might work for a more realistic scene, but in this case, where everything is mostly grey/white, it doesn't seem to.) What I tried is a custom curve (still experimenting with it):

         vec3 custom(vec3 x)
         {
             float a = 10.2; // Mid
             float b = 1.4;  // Toe
             float c = 1.5;  // Shoulder
             float d = 1.5;  // Mid
             vec3 r = (x * (a * x + b)) / (x * (a * x + c) + d);
             return r;
         }

     Although, I think the best solution in this case would be a partially linear curve (basically starting out linear and then rolling off at the end to keep more of the range near white) and then applying color grading (in LDR) to get the desired look. A sketch of such a curve follows below.
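     A minimal sketch of such a "linear then roll off" curve (my own construction, purely illustrative, not from any particular paper): the identity up to a threshold t, then an exponential shoulder that asymptotes to 1.0. The curve is C1-continuous at x = t (value t, slope 1 on both sides):

         vec3 linearWithShoulder(vec3 x)
         {
             const float t = 0.7; // end of the linear section (tweak to taste)
             // exponential shoulder: approaches 1.0, matches slope 1 at x = t
             vec3 shoulder = vec3(1.0) - (1.0 - t) * exp(-(x - t) / (1.0 - t));
             // step(x, vec3(t)) is 1 where x <= t, so the linear part wins there
             return mix(shoulder, x, step(x, vec3(t)));
         }

     Tonemap with this first, then color-grade the LDR result as described above.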
  2. That makes sense. So basically you divide/multiply the value by the exposure (to bring it down into a smaller range) and let the tonemapper handle the values > 1 to bring those into the LDR range as well. Correct? So here is what I did. The shader starts like this:

         void main()
         {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point texture with values from 0-10

     It looks like this: [screenshot] Obviously the values are mostly outside the range of 0-1. Then I divide this value by the exposure:

         color = color / uExposure; // exposure (in this case uExposure is manually set to 7)

     This brings the values down while still maintaining HDR. Now I apply the Uncharted tonemapper:

         color = Uncharted2Tonemap(color);

     And this results (again) in a darker image. I'm not sure if this is correct, but I tried to increase the values by dividing the tonemapped color by Uncharted2Tonemap(vec3(1,1,1)). Given that the tonemapper seems to converge to 1 (but veeeeeeeery slowly), this is very likely wrong (and it might not be necessary with other tonemappers?):

         //color = Uncharted2Tonemap(color);
         color = Uncharted2Tonemap(color) / Uncharted2Tonemap(vec3(1,1,1));

     Which results in this image: [screenshot] No idea if that's the correct approach. (Probably not, in the case of the tonemapper division.)

     /Edit: Found a post on this site from someone who seemed to have the exact same issue: Though, it doesn't seem to be solved there either.
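     For reference, the usage in Hable's filmicworlds.com post (linked in an earlier post below) is very close to this attempt, with two differences: exposure is a multiplier (not a divisor), and the curve is normalized by its value at a linear white point W = 11.2 rather than at 1.0:

         const float W = 11.2;        // linear white point of the curve
         uniform float uExposureBias; // 2.0 in the original post; the input
                                      // is also pre-multiplied by a hardcoded
                                      // exposure there

         vec3 mapped     = Uncharted2Tonemap(uExposureBias * color);
         vec3 whiteScale = vec3(1.0) / Uncharted2Tonemap(vec3(W));
         color = mapped * whiteScale;
         color = pow(color, vec3(1.0 / 2.2)); // gamma afterwards

     Normalizing at W instead of 1.0 is what makes the curve actually reach white.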
  3. So, is there a reference for what range my lights/values should be in? I also tried setting the sun value to 1000 and the ambient light to 1 while applying the Uncharted tonemapper: [screenshot] I think I'm getting something fundamentally wrong here. The code again:

         void main()
         {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point values from 0-1000

             // tonemap
             color = Uncharted2Tonemap(color);
             color = gammaCorrection(color);

             outputF = vec4(color, 1.0f);
         }

     Tonemappers map the HDR range to LDR. But what I don't quite get is how this can work properly if they don't "know" the range of your maximum-brightness RGB value in the first place. (They only take the RGB values of the specific pixel in your FP buffer as input.) The range has to matter if you want to realize (for example) an S-curve in your tonemapper (as in the ACES filmic tonemapper). And that can only happen if you either A) pass the range into the tonemapper (if you have an arbitrary range of brightness) or B) the tonemapping algorithm assumes that your values are in a specific "correct" range in the first place.
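     Option A exists in practice: the "extended" Reinhard operator from Reinhard et al. 2002 takes the maximum scene luminance explicitly as a white-point parameter, so the curve knows the input range. A sketch (applied per-channel here for simplicity, which is a common shortcut):

         vec3 reinhardExtended(vec3 color, float maxWhite)
         {
             vec3 numerator = color * (vec3(1.0) + color / (maxWhite * maxWhite));
             return numerator / (vec3(1.0) + color);
         }

     A pixel at color == maxWhite maps exactly to 1.0; everything below follows a Reinhard-like curve. Fixed curves like Uncharted 2 are option B: the assumed range is baked in via the white point (W = 11.2) they are normalized against.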
  4. This is how your proposed tonemapper looks:

         vec3 reinhardTone(vec3 color)
         {
             vec3 hdrColor = color;
             // Reinhard tone mapping
             vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
             return mapped;
         }

     And the image: [screenshot] The light values are between 0 and 10 (7 for the directional light, 3 for the ambient light). Yes, it brings the values down from 0-X to 0-1, but it doesn't look good at all. That's why I wonder whether the values of the lights and the sun have to be in a specific range (adjusted via exposure?) in order to work properly and create images like this: [screenshot] Even the screenshots in the Frictional Games blog post look neither too bright nor too dark: [screenshot] (Given that their image without any tonemapping isn't overexposed, I suppose they used values from 0-1 for the light sources like I did before, instead of 0-10 or 0-100, etc. But this doesn't explain why the Uncharted tonemapper results in a more natural image in their case compared to my darkened image in the first post.) This is how I apply the tonemappers:

         vec3 reinhardTone(vec3 color)
         {
             vec3 hdrColor = color;
             // Reinhard tone mapping
             vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
             return mapped;
         }

         vec3 gammaCorrection(vec3 color)
         {
             // gamma correction
             color = pow(color, vec3(1.0 / 2.2));
             return color;
         }

         vec3 Uncharted2Tonemap(vec3 x)
         {
             float A = 0.15;
             float B = 0.50;
             float C = 0.10;
             float D = 0.20;
             float E = 0.02;
             float F = 0.30;
             return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
         }

         void main()
         {
             // this texture is an FP16 RGBA framebuffer which stores values from 0-10
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;

             color = reinhardTone(color);
             //color = Uncharted2Tonemap(color);

             // gamma correction (use only if not done in the tonemapping code)
             color = gammaCorrection(color);

             outputF = vec4(color, 1.0f);
         }
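     One way to get a brightness control into this kind of simple operator is the exponential tonemapper used in, e.g., the learnopengl.com HDR tutorial; a sketch, with the exposure parameter assumed to come from a uniform:

         // Raising exposure brightens the image without clipping,
         // since the curve still asymptotes to 1.0.
         vec3 exposureTone(vec3 hdrColor, float exposure)
         {
             return vec3(1.0) - exp(-hdrColor * exposure);
         }

     Plain Reinhard has no such knob, which is part of why it looks flat on a 0-10 scene.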
  5. I use an FP16 RGBA buffer to store the HDR values (they are also in linear space), and the image is gamma corrected. The image appears a bit dark because the textures have a maximum brightness value of 0.8 and the directional light a value of 1. (Thus, even if the dot product between the light normal and the triangle normals is 1, you will get at most a value of 0.8.) That's also one of the issues I haven't figured out yet: I'm using a PBR rendering pipeline, and while researching online I always stumble upon the suggestion that in PBR one should use "real world values" to light the scene, but it's never explained or shown what that should look like. (No reference values to take note of.) For example, setting the light values to 7 (directional light) and 3 (ambient light), so that the maximum value in the HDR FP16 buffer can never exceed 10, the image looks like this:

     Without the Uncharted tonemap: [screenshot] (Obviously mostly white, because I'm mapping values > 1 to the screen.)

     With the Uncharted tonemap: [screenshot]

     So if that's the correct behaviour, how can I get a "normal looking" image? What HDR range is required (in the FP16 buffer) in order to get correct results after tonemapping?

     /Edit: IMHO there is a big difference between values of 0.7 and 0.9 once they are displayed on screen. So, does this tonemapper expect you to have values > 100 in order to "properly" map to the display's 0-1 range?
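     A sketch of how engines usually decouple the scene range from the tonemapper: an exposure scale derived from the average scene luminance (the "key value" approach from Reinhard et al. 2002). With this, it matters much less whether the lights are 0-10 or 0-1000, because exposure normalizes the scene to mid-grey before the curve is applied. uAvgLuminance is an assumed uniform here; in practice it would come from a downsampled log-luminance buffer:

         uniform float uAvgLuminance; // log-average (geometric mean) of scene luminance
         const float keyValue = 0.18; // target mid-grey

         vec3 applyExposure(vec3 hdrColor)
         {
             float exposure = keyValue / max(uAvgLuminance, 0.0001);
             return hdrColor * exposure;
         }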
  6. So, I'm still on my quest to understand the intricacies of HDR and to implement them in my engine. Currently I'm at the tonemapping step. I stumbled upon these blog posts:

     http://filmicworlds.com/blog/filmic-tonemapping-operators/
     http://frictionalgames.blogspot.com/2012/09/tech-feature-hdr-lightning.html

     and tried to implement some of the tonemapping methods mentioned there in my post-processing shader. The issue is that none of them produces the results shown in the blog posts, which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity's sake I store values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7). This is the tonemapping code:

         vec3 Uncharted2Tonemap(vec3 x)
         {
             float A = 0.15;
             float B = 0.50;
             float C = 0.10;
             float D = 0.20;
             float E = 0.02;
             float F = 0.30;
             return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F;
         }

     This is without the Uncharted tonemapping: [screenshot]
     This is with the Uncharted tonemapping: [screenshot]

     It makes the image a lot darker. The shader code looks like this:

         void main()
         {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;

             color = Uncharted2Tonemap(color);

             // gamma correction (use only if not done in the tonemapping code)
             color = gammaCorrection(color);

             outputF = vec4(color, 1.0f);
         }

     Now, my understanding is that tonemapping should bring the range down from HDR to 0-1. But the output of the tonemapping function depends heavily on the initial range of the values in the HDR buffer. (You can't set the sun intensity to 10 one time and to 1000 another time and expect the same result when you feed that into the tonemapper.) So I suppose this also depends on the exposure, which I have yet to implement? To check this I plotted the tonemapping curve: [plot] You can see that the curve only reaches a value of around 0.21 when fed a value of 1, and then basically flattens out (which would explain why the image got darker). My question is: in what range should the values in the HDR buffer be before they get tonemapped? Do I have to bring them down to a range of 0-1 by multiplying with the exposure? For example, if I increase the light values by a factor of 10 (directional light would be 7, ambient light 3), would I then need to divide the HDR values by 10 to get back to a range of 0-1 before feeding them into the tonemapping curve? Is that correct? A sketch of where exposure would slot in is below.
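     A minimal sketch of where exposure would slot into the shader above (uExposure is an assumed uniform; this isn't from either blog post verbatim). Exposure scales scene-referred values into the part of the curve you want to use, and the tonemapper then compresses whatever still exceeds 1.0:

         uniform float uExposure; // e.g. ~0.1 if your lights sum to ~10

         void main()
         {
             vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;

             color *= uExposure;                // scene-referred -> curve input
             color  = Uncharted2Tonemap(color); // compress remaining highlights
             color  = gammaCorrection(color);   // linear -> display

             outputF = vec4(color, 1.0f);
         }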
  7. Which transformation are you referring to in this case? The camera? The object's transformation (which I don't have access to in the shader; I only have the position of the pixel, reconstructed from the depth buffer)? And does setting W to zero refer to this line?

         vec3 normal = ((uNormalViewMatrix * vec4(normalize(texture2D(sNormals, vTexcoord).rgb), 0.0)).xyz); // W set to zero
  8. So, I'm currently trying to implement an SSAO shader from THIS tutorial, and I'm running into a few issues. This SSAO method requires view-space positions and normals. I store the normals in my deferred renderer in world space, so I had to do a conversion and reconstruct the position from the depth buffer. And something there goes horribly wrong (which probably has to do with the world-space to view-space transformation). (Here is the full shader source code if someone wants to take a look at it.) I suspect that the normals are the culprit:

         vec3 normal = ((uNormalViewMatrix * vec4(normalize(texture2D(sNormals, vTexcoord).rgb), 1.0)).xyz);

     "sNormals" is a 2D texture which stores the world-space normals in an RGB FP16 buffer. Now, I can't use the camera's view matrix to transform the normals into view space, because the camera's position isn't at (0,0,0), which would skew the result. So what I did is create a new view matrix specifically for the normals, with the position at vec3(0,0,0):

         // "camera" is the camera which was used for rendering the normal buffer
         renderer.setUniform4m(ressources->shaderSSAO->getUniform("uNormalViewMatrix"),
             glmExt::createViewMatrix(glm::vec3(0,0,0), camera.getForward(), camera.getUp()) // parameters are (position, forwardVector, upVector)
         );

     Though I have the feeling this is the wrong approach. Is this right, or is there a better/correct way of transforming a world-space normal into view space?
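     The standard approach, as a sketch (uViewMatrix is assumed to be the camera's regular view matrix): directions ignore translation, so there is no need for a second matrix; just drop the translation, either by setting w = 0 in the vec4 or by casting to mat3:

         vec3 worldNormal = normalize(texture2D(sNormals, vTexcoord).rgb);
         vec3 viewNormal  = normalize(mat3(uViewMatrix) * worldNormal);
         // equivalent: (uViewMatrix * vec4(worldNormal, 0.0)).xyz

     For a pure rotation+translation view matrix the mat3 cast is enough; if the view matrix contained non-uniform scaling, the inverse-transpose (normal matrix) would be needed instead.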
  9. I think I understand now. So essentially we store the FP32 (linear) values in an sRGB (non-linear) buffer in order to preserve precision between steps. Does writing into an sRGB texture convert linear data to sRGB data? The only way this can work is if:

     - writing to an sRGB framebuffer converts the written linear data to non-linear (sRGB) data, and
     - reading/sampling an sRGB framebuffer converts the sampled sRGB data back to linear data (that's how the textures also work).

     Is this how sRGB framebuffers/textures behave? Sorry for all these questions. I've never worked in the sRGB color space and have absolutely no idea how reading/writing from/to sRGB textures actually behaves.
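     That is indeed how OpenGL behaves; a sketch of the relevant calls: sampling a texture with an sRGB internal format converts sRGB to linear automatically, and with GL_FRAMEBUFFER_SRGB enabled, writes to an sRGB-format color attachment convert linear to sRGB automatically (blending also happens in linear space):

         glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8_ALPHA8, width, height, 0,
                      GL_RGBA, GL_UNSIGNED_BYTE, pixels); // sampled as linear

         glEnable(GL_FRAMEBUFFER_SRGB); // shader writes linear, GL stores sRGB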
  10. Well, yes, if I output the framebuffer directly to the screen, then the sRGB framebuffer will do the conversion from linear to sRGB space for me. But more often than not (deferred rendering), we do additional post-processing steps (reading from the albedo buffer, etc.), and thus we need linear space. From my understanding, setting the sRGB flag on the framebuffer would convert the linear colors to sRGB, so accessing that framebuffer in a post-processing shader would lead to wrong results again (as I would be adding/multiplying sRGB colors). I found this post here:

     https://stackoverflow.com/questions/11386199/when-to-call-glenablegl-framebuffer-srgb

     The first answer says that we should remain in linear space until the very end, and thus not use sRGB for post-processing purposes. However, as you said, the precision of the framebuffer needs to be increased in order to avoid losing precision in the conversion. So the solution would be:

     - load textures as sRGB
     - framebuffers should remain in linear space (RGB, not sRGB), but with increased precision (RGB10, FP16, etc.) to preserve precision
     - at the end of the render pipeline, do gamma correction with a shader, or use a separate sRGB framebuffer, to output to the screen in sRGB

     Is this correct?
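     A sketch of that pass ordering (renderScene, runPostProcessing, and drawFullscreenTonemapPass are placeholder functions, not a real API; the GL calls are real, and the default framebuffer is assumed to be sRGB-capable). Everything intermediate stays linear in a high-precision target; sRGB encoding happens only once, on the final output:

         glBindFramebuffer(GL_FRAMEBUFFER, hdrFbo);   // FP16 (GL_RGBA16F) color target
         renderScene();                               // linear lighting
         runPostProcessing();                         // still linear, still FP16

         glBindFramebuffer(GL_FRAMEBUFFER, 0);        // default framebuffer
         glEnable(GL_FRAMEBUFFER_SRGB);               // linear -> sRGB on write
         drawFullscreenTonemapPass();                 // shader outputs linear; GL encodes
         glDisable(GL_FRAMEBUFFER_SRGB);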
  11. So textures need to be loaded as GL_SRGB in order to gamma correct them for calculations, meaning we convert them from sRGB to linear space. Now, what I don't get is why the framebuffer also has to be set to sRGB. The texture values that are read and processed are converted to linear space and stored linearly in the framebuffer, so it should be fine? (As an example, if I read the framebuffer values in a shader for additional post-processing effects, I already have them in linear space and don't need to convert anything with GL_SRGB.) The only thing we have to do is convert back from linear space to sRGB at the end of the render stage, with (for example) a post-processing shader. Am I missing something with the framebuffer?
  12. So, I stumbled upon the topic of gamma correction:

     https://learnopengl.com/Advanced-Lighting/Gamma-Correction

     From what I've been able to gather (please correct me if I'm wrong):

     - Old CRT monitors couldn't display color linearly; that's why gamma correction was necessary.
     - Modern LCD/LED monitors don't have this issue anymore but apply gamma correction anyway. (For compatibility reasons? Can this be disabled?)
     - All games have to apply gamma correction? (Unsure about that.)
     - All textures stored in file formats (.png, for example) are essentially stored in the sRGB color space (as what we see on the monitor is skewed due to gamma correction; the pixel information is the same, the perceived colors are just wrong).
     - This makes textures loaded with the GL_RGB format non-linear, and thus all lighting calculations are wrong.
     - You always have to use the GL_SRGB format to gamma correct/linearize textures which are in sRGB format.

     Now, I'm kind of confused about how to proceed with applying gamma correction in OpenGL. First of all, how can I check whether my monitor is applying gamma correction? I noticed in my monitor settings that my color format is set to "RGB" (I can't modify it, though). I'm connected to my PC via an HDMI cable, and I'm using the full RGB range (0-255, not the 16 to ~240 range). What I tried is the gamma correction shader shown in the tutorial above, which looks essentially like this (a post-processing shader applied at the end of the render pipeline):

         vec3 gammaCorrection(vec3 color)
         {
             // gamma correction
             color = pow(color, vec3(1.0 / 2.2));
             return color;
         }

         void main()
         {
             vec3 tex = texture2D(texture_diffuse, vTexcoord).rgb;
             vec3 color = gammaCorrection(tex);

             outputF = vec4(color, 1.0f);
         }

     The results look like this:

     No gamma correction: [screenshot]
     With gamma correction: [screenshot]

     The colors in the gamma-corrected image look really washed out. (To the point that it's damn ugly, as if someone overlaid a white half-transparent texture. I want the colors to pop.) Do I have to change the textures from GL_RGB to GL_SRGB in order to gamma correct them, in addition to applying the post-process gamma correction shader? Do I have to do the same thing with all FBOs? Or is this washed-out look the intended behaviour?
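     The washed-out look is the classic symptom of gamma correcting twice: the .png is already sRGB-encoded, and the shader encodes it again. Loading the texture with an sRGB internal format makes the sampler linearize it first, so the pow(1/2.2) at the end is applied exactly once. A sketch (variable names are placeholders):

         glTexImage2D(GL_TEXTURE_2D, 0, GL_SRGB8,   // was: GL_RGB8
                      width, height, 0,
                      GL_RGB, GL_UNSIGNED_BYTE, pixels);

     Intermediate FBOs that hold lighting results should stay linear (e.g. GL_RGBA16F), not sRGB; only the color inputs are sRGB.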
  13. I think I understand this now? Essentially the Fresnel value "F" resulting from this equation isn't the actual Fresnel reflectance of the geometry on its own; it gets fed into the BRDF equation, which produces the desired effect.

         float D = BRDF_D_GGX(NdotH, fragRoughness);          // normal distribution
         float G = BRDF_G_Smith(NdotV, NdotL, fragRoughness); // geometric shadowing
         vec3  F = BRDF_F_FresnelSchlick(VdotH, F0);          // Fresnel

         // note: the denominator needs parentheses -- "/" and "*" have equal
         // precedence, so without them NdotL*NdotV multiplies instead of divides
         vec3 specular = (D * F * G) / max(4.0 * max(NdotL, 0.0) * max(NdotV, 0.0), 0.001);

     I have to read up more on the theory. Implementing PBR without properly understanding its inner workings is kind of a dead end. Thank you for your thorough explanation.
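     For reference, the Schlick approximation that BRDF_F_FresnelSchlick presumably implements (the standard formula; the function body isn't shown in the post):

         vec3 BRDF_F_FresnelSchlick(float cosTheta, vec3 F0)
         {
             return F0 + (vec3(1.0) - F0) * pow(1.0 - cosTheta, 5.0);
         }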
  14. My F0 value is hardcoded at 0.04. What is kind of confusing to me is that the Fresnel effect doesn't depend on the surface normal at all. My vectors are defined like this:

         vec3 N = normalize(fragNormal);                    // normal vector of the surface
         vec3 L = normalize(fragToLightNormal);             // light vector
         vec3 V = normalize(uCameraPosition - fragPos.xyz); // vector from the surface pixel to the camera
         vec3 H = normalize(L + V);                         // half vector between light and eye vector

         float NdotH = max(dot(N, H), 0.0f);
         float NdotV = max(dot(N, V), 0.0f);
         float NdotL = max(dot(N, L), 0.0f);
         float VdotH = max(dot(V, H), 0.0f); // this gets passed into the Fresnel equation

     The Fresnel equation then takes VdotH:

         vec3 F = BRDF_F_FresnelSchlick(VdotH, F0); // Fresnel

     As far as I can tell, this means the Fresnel term isn't affected by the surface normal of the particular geometry. An example: this is the Fresnel effect rendered from the shadowed side (the light is on the other side). [screenshot] This time I tried using a directional light to see how it impacts the image (and an ambient light to give the dark areas a bit of light). The black spot is always the same size and doesn't really change with the form/geometry. Here is how the result of the Fresnel term looks: [screenshot] You can see that having a sphere in front of an empty skybox doesn't change the Fresnel effect. I'm just wondering, because you said: [quote] which implies that the surface normal should impact the Fresnel equation (which isn't the case in my equation). Here is the shader again, this time with the directional light: [shader source]
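     A possible source of the confusion, sketched here: with VdotH the Fresnel term is per-light (it peaks at grazing light geometry and never sees the surface normal directly), while the normal-dependent "rim" effect people usually picture comes from evaluating Schlick with NdotV, as is commonly done for the ambient/image-based part of the lighting:

         vec3 F_rim = BRDF_F_FresnelSchlick(NdotV, F0);

     Both uses are standard: VdotH inside the analytic-light BRDF, NdotV (often with a roughness adjustment) for environment reflections.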
  15. Is dithering the go-to solution for this kind of banding in the industry? Also good to know that the Fresnel equation is correct. I just wondered why I was never able to get the Fresnel effect to show up no matter the lighting conditions. Is it supposed to make only a small contribution to the surface? (It's barely, if at all, noticeable.) I did a test where I removed the Fresnel equation from the shader, and I wasn't really able to tell the difference between the renders with and without it.
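     A minimal sketch of the usual banding fix (my own illustrative version): add one least-significant-bit of screen-space noise before the image is quantized to 8 bits, which breaks the bands up into grain. The hash is an arbitrary common choice, not from any particular source:

         float hash12(vec2 p)
         {
             return fract(sin(dot(p, vec2(12.9898, 78.233))) * 43758.5453);
         }

         vec3 dither(vec3 color, vec2 fragCoord)
         {
             float noise = hash12(fragCoord) - 0.5; // [-0.5, 0.5]
             return color + noise / 255.0;          // +/- half an 8-bit step
         }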