About Lewa
1. I'm fully aware that PBR requires proper illumination to achieve the best results; I just don't have the ability to do that yet (I have neither realtime nor pre-baked GI). So what I'm trying to do is approximate the illumination as well as possible without deviating from PBR too much. The only thing I have is a very basic implementation of baked AO, and even then it isn't single-bounce: (Note that the floor is still unaffected by the shading, as I didn't include the floor mesh in the baking process.) The ambient lighting in this case represents the lighting of the sky (occluded areas get progressively darker the less they are lit by the sky). So what I tried is to have only one light source (the sun) and to simulate lighting from the sky/GI with an ambient light that fills in the occluded areas. To darken interiors, a baked AO texture is applied by multiplying with the AO value. The shader code looks roughly like this:

```glsl
vec3 color = texture2D(sLight, vTexcoord).rgb;     // accumulated light on this fragment (from the directional light)
float shadow = texture2D(sShadowMap, vTexcoord).r; // 0 = fragment occluded from the sun, 1 = lit by the sun
color *= shadow;                                   // occluded pixels become completely dark

// now approximate skydome lighting
vec3 ambient = uAmbientLightColor * texture2D(gAlbedo, vTexcoord).rgb; // fixed ambient color multiplied with the albedo texture
float AO = texture2D(sAmbientOcclusion, vTexcoord).r;                  // baked world-space ambient occlusion
color += ambient;

// apply AO only if the pixel is in sun shadow (shadow == 0)
color *= mix(AO, 1.0, shadow);
outputF = color; // write to screen
```

I'm fully aware that this isn't a physically correct solution. So in my case, is it either implementing a proper GI solution or hacking the PBR to achieve the results I want? How is the ambient lighting in Unreal (seen in my first post) implemented?
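For clarity, the compositing above can be sketched as scalar math (a pure-Python sketch; all names and numbers are illustrative, not from any engine):

```python
# Scalar sketch of the compositing done in the shader above; the numbers
# in the example calls are illustrative.

def mix(a, b, t):
    """GLSL-style lerp: a*(1-t) + b*t."""
    return a * (1.0 - t) + b * t

def shade(direct, shadow, ambient, ao):
    """direct: accumulated sun light; shadow: 0 = occluded, 1 = lit;
    ambient: sky term (already multiplied by albedo); ao: baked AO."""
    color = direct * shadow        # sun contribution, zeroed in shadow
    color += ambient               # add the sky approximation
    color *= mix(ao, 1.0, shadow)  # AO only darkens sun-shadowed pixels
    return color

print(shade(1.0, 1.0, 0.2, 0.5))  # fully lit: 1.2, AO has no effect
print(shade(1.0, 0.0, 0.2, 0.5))  # fully shadowed: 0.1, ambient darkened by AO
```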
2. So, there is one thing I don't quite understand (probably because I didn't dive that deep into PBR lighting in the first place). Currently I have a very basic PBR renderer (with a microfacet BRDF shading model) in my engine. The lighting system is pretty basic (one directional/sun light, deferred point lights, and one ambient light). I don't have a GI solution yet, only a very basic world-space ambient occlusion technique. Here is how it looks: Now, what I would like to do is give the shadows a slightly blueish tint (to simulate the blueish light from the sky). Unreal seems to do this too, which gives the scene a much more natural look: My renderer renders in HDR, and I use exposure/tonemapping to bring this down to LDR. The first image used a directional light with an RGB value of (40,40,40) and an ambient light of (15,15,15). Here is the same picture but with an ambient light of (15,15,15) * ((109,162,255) / (255,255,255)), which should give us this blueish tint. The problem is that it looks like this: The shadows get the desired color (more or less); the issue is that all lit pixels are also affected, giving the whole scene a blue tint. Reducing the ambient light intensity results in way too dark shadows; increasing the intensity makes the shadows look alright, but then the whole scene is affected way too much. In the shader I basically have:

```glsl
color = directionalLightColor * max(dot(normal, sunNormal), 0.0) + ambientLight;
```

The result is that the blue component of the color will always be higher than the other two. I could of course fix it by faking it (only adding the ambient light if the pixel is in shadow), but I want to stay as close to PBR as possible and avoid adding hacks like that. My question is: how is this effect done properly (with PBR / proper physically based lighting)?
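To make the problem concrete, here is the same math with the numbers quoted above, treated as linear HDR units (a toy sketch, not engine code):

```python
# Why an additive blue ambient tints lit pixels too: the ambient term is
# added unconditionally, so the blue channel wins everywhere. The RGB
# values are the ones quoted above.

def shade(n_dot_l, sun, ambient):
    return [s * n_dot_l + a for s, a in zip(sun, ambient)]

sun = [40.0, 40.0, 40.0]
ambient = [15 * 109 / 255, 15 * 162 / 255, 15 * 255 / 255]  # blue-tinted (15,15,15)

lit = shade(1.0, sun, ambient)       # fully lit pixel
shadowed = shade(0.0, sun, ambient)  # fully shadowed pixel

print(lit)       # blue channel is still the largest
print(shadowed)  # pure ambient: the desired blueish shadow color
```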
3. Thanks! The visualisation helped me grasp the concept better. I was able to implement this in my C++ project. Some screenshots: It works quite well. (There are some issues, like light bleeding at the intersections between two planes, and the UV map isn't that great, but these can be fixed.) I used Blender's icosphere to create uniformly distributed points on the sky. The sample count had to be quite high to avoid banding artifacts: Now the issue is that because the shadow maps are only cast from above the ground (pointing downwards), all triangles that point downwards (face normal at (0,0,-1)) end up completely black. An example: One possible solution would be to place additional points under the ground (basically creating a full point-cloud sphere instead of only a half-sphere), but the results were subpar (especially as the ground mesh occludes most of the scene anyway). Excluding the floor mesh when rendering shadow maps whose origin is under the ground might work, but that introduces artifacts on geometry which is in contact with the ground. I think the only proper solution would be to use the half-sphere (like in the screenshot above) and have at least one bounce of lighting in the AO calculation to lighten up interiors a bit, but I wasn't able to find a solution that works well enough with this baking approach. (Maybe reflective shadow maps? The issue is that they don't seem to check for occlusion of the bounced light.)
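As an aside, if you ever want to generate the hemisphere sample directions in code instead of exporting Blender's icosphere vertices, a Fibonacci spiral is one common option (a sketch of an alternative, not what was used above):

```python
import math

# Roughly uniform directions on the upper hemisphere via a Fibonacci
# spiral; an alternative to icosphere vertices, not necessarily better.

def fibonacci_hemisphere(n):
    golden_angle = math.pi * (3.0 - math.sqrt(5.0))
    dirs = []
    for i in range(n):
        z = (i + 0.5) / n           # height in (0, 1): upper hemisphere only
        r = math.sqrt(1.0 - z * z)  # radius of the circle at that height
        theta = golden_angle * i
        dirs.append((r * math.cos(theta), r * math.sin(theta), z))
    return dirs

dirs = fibonacci_hemisphere(64)
print(len(dirs), all(d[2] > 0.0 for d in dirs))  # 64 unit vectors, all above ground
```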
4. I just tested this simple setup in Blender and baked AO there: The middle part is correctly occluded, but the edges on the side wouldn't be lit by the shadow maps (because they come from the top). I suppose that placing additional shadow maps on the bottom isn't enough, as those may then interfere with additional geometry (like a floor, for example). How did you handle this issue? Or is this just an artifact one has to accept with this technique? Baking those lightmaps during the loading process is a good idea (hopefully it doesn't drag the loading times out too much).
5. So, I'm currently on a quest to find a realtime world-space ambient occlusion algorithm for a game I'm making (or at least to check whether that's feasible). I wanted to avoid baking AO/lightmaps in a map editor, both to keep lightmap data out of my level files (to reduce the file size as much as possible) and to avoid expensive/long precomputations in the first place. Now, I stumbled upon an AO concept that works by rendering multiple shadow maps placed on the sky's hemisphere and merging them together to create the ambient occlusion effect. Here is an old example/demo from NVIDIA: http://developer.download.nvidia.com/SDK/9.5/Samples/samples.html#ambient_occlusion I was able to find a video which shows this in action: It seems to work rather well, although I can see a couple of issues with it:
- Rendering multiple shadow maps is expensive (though that's expected with realtime AO).
- As shadows are only cast from the top, every surface pointing downwards will be 100% in shadow/black. (Normally such a surface would have a bit of light around the edges due to light bouncing around. The technique works best for surfaces facing upwards/towards the sky.)
- Flickering can be an issue if the shadow map covers a large scene/area or if its resolution is too low. (Could be fixed?)
It's incredibly hard to find information on this technique on the internet (demos, implementation/improvement details, etc.). I suppose that's because it's not widely used? Did anybody implement AO in a similar style? Are there any known sources covering this technique in more detail?
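The merge step itself boils down to averaging the per-direction shadow tests; a minimal sketch of that idea (not NVIDIA's actual code):

```python
# AO as "fraction of sky visible": average the binary shadow-test results
# over all hemisphere shadow maps. A minimal sketch of the accumulation.

def ambient_occlusion(visibilities):
    """visibilities: one result per shadow map, 1.0 = lit, 0.0 = occluded."""
    return sum(visibilities) / len(visibilities)

print(ambient_occlusion([1.0, 1.0, 0.0, 0.0]))  # 0.5: half the sky is blocked
print(ambient_occlusion([1.0] * 8))             # 1.0: fully open to the sky
```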
6. Yes, I have. I ended up scrapping the Uncharted tonemapper because A) it creates a rather dim image, and B) white-balancing it creates a rather unsaturated image that doesn't fit the overall look I'm going for. (It might work for a more realistic scene, but in this case, where everything is mostly grey/white, it doesn't seem to.) What I tried is a custom curve (still experimenting with it):

```glsl
vec3 custom(vec3 x)
{
    float a = 10.2; // Mid
    float b = 1.4;  // Toe
    float c = 1.5;  // Shoulder
    float d = 1.5;  // Mid
    vec3 r = (x*(a*x+b)) / (x*(a*x+c)+d);
    return r;
}
```

Although I think the best solution in this case would be a partially linear curve (basically starting out linear and then falling off at the end to bring more values into the white range) and then applying color grading (in LDR) to get the desired look.
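A scalar Python version of the curve above can be handy for probing its shape (the constants are the ones from the snippet):

```python
# Scalar version of the custom curve above, for checking where common
# inputs land.

def custom(x, a=10.2, b=1.4, c=1.5, d=1.5):
    return (x * (a * x + b)) / (x * (a * x + c) + d)

print(round(custom(1.0), 4))   # ~0.8788: an input of 1 already lands near white
print(round(custom(10.0), 4))  # ~0.9976: approaches 1 from below
```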
7. That makes sense. So basically you divide/multiply the value by the exposure (to bring it down into a smaller range) and let the tonemapper handle the values > 1 to bring them into the LDR range as well. Correct? So here is what I did. First, the shader starts like this:

```glsl
void main()
{
    vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point texture with values from 0-10
```

It looks like this: Obviously the values are mostly outside the range 0-1. Then I divide this value by the exposure:

```glsl
color = color / uExposure; // exposure (in this case uExposure is manually set to 7)
```

This brings the values down while still maintaining HDR. Now I apply the Uncharted tonemapper:

```glsl
color = Uncharted2Tonemap(color);
```

And this results (again) in a darker image. I'm not sure if this is correct, but I tried to increase the values by dividing the tonemapped color by Uncharted2Tonemap(vec3(1,1,1)). Given that the tonemapper seems to converge to 1 (but veeeeeeeery slowly), this is very likely wrong (it might not be necessary with other tonemappers?):

```glsl
//color = Uncharted2Tonemap(color);
color = Uncharted2Tonemap(color) / Uncharted2Tonemap(vec3(1,1,1));
```

Which results in this image: No idea if that's the correct approach (probably not, in the case of the tonemapper division). /Edit: Found a post on this site from someone who seemed to have the exact same issue, though it doesn't seem to be solved there either.
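For what it's worth, the division is the standard "white point" normalization for this operator, but it is usually done with a linear white of about 11.2 rather than 1.0; dividing by f(1) declares a linear value of 1 to be pure white, which over-brightens. A quick numeric check (constants are the ones quoted elsewhere in the thread):

```python
# White-point normalization for the Uncharted 2 operator: f(x)/f(W) maps
# x == W exactly to 1.0. W = 11.2 is the value commonly paired with these
# constants; dividing by f(1.0) instead treats 1.0 as pure white.

A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30

def uncharted2(x):
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

W = 11.2
print(uncharted2(W) / uncharted2(W))              # 1.0 by construction
print(round(uncharted2(1.0) / uncharted2(W), 3))  # ~0.304: a linear 1.0 stays mid-grey
```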
8. So, is there a reference for what range my lights/values should be in? I also tried setting the sun value to 1000 and the ambient light to 1 while applying the Uncharted tonemapper: I think I'm getting something fundamentally wrong here. Code again:

```glsl
void main()
{
    vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // floating point values from 0-1000

    // tonemap
    color = Uncharted2Tonemap(color);
    color = gammaCorrection(color);
    outputF = vec4(color, 1.0f);
}
```

Tonemappers map the HDR range to LDR. But what I don't quite get is how this can work properly if they don't "know" the range of your maximum brightness in the first place (they only take the RGB values of the specific pixel in your FP buffer as input). The range has to be important if you want to realize, for example, an S-curve in your tonemapper (like in the ACES filmic tonemapper). And that can only happen if A) you pass the range into the tonemapper (if you have an arbitrary range of brightness) or B) the tonemapping algorithm assumes your values are in a specific "correct" range in the first place.
9. This is what your proposed tonemapper looks like:

```glsl
vec3 reinhardTone(vec3 color)
{
    vec3 hdrColor = color;
    // Reinhard tone mapping
    vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
    return mapped;
}
```

And the image: The light values are between 0 and 10 (7 for directional, 3 for ambient). Yes, it brings the values down from 0-X to 0-1, but it doesn't look good at all. That's why I wonder if the values of the lights and the sun have to be in a specific range (by adjusting exposure?) for this to work properly and create images like this: Even the screenshots in the Frictional Games blog post look neither too bright nor too dark: (Given that their image without any tonemapping isn't overexposed, I suppose they used values from 0-1 for the light sources, like I did before (instead of 0-10 or 0-100, etc.), but this doesn't explain why the Uncharted tonemapper results in a more natural image in their case compared to my darkened image in the first post.) This is how I apply the tonemappers:

```glsl
vec3 reinhardTone(vec3 color)
{
    vec3 hdrColor = color;
    // Reinhard tone mapping
    vec3 mapped = hdrColor / (hdrColor + vec3(1.0));
    return mapped;
}

vec3 gammaCorrection(vec3 color)
{
    // gamma correction
    color = pow(color, vec3(1.0/2.2));
    return color;
}

vec3 Uncharted2Tonemap(vec3 x)
{
    float A = 0.15;
    float B = 0.50;
    float C = 0.10;
    float D = 0.20;
    float E = 0.02;
    float F = 0.30;
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}

void main()
{
    vec3 color = texture2D(texture_diffuse, vTexcoord).rgb; // FP16 RGBA framebuffer storing values from 0-10

    color = reinhardTone(color);
    //color = Uncharted2Tonemap(color);

    // gamma correction (use only if not done in tonemapping code)
    color = gammaCorrection(color);
    outputF = vec4(color, 1.0f);
}
```
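One common tweak worth noting here (a hedged sketch, not something from the thread): plain Reinhard x / (1 + x) never reaches 1, which is part of why it looks flat. The "extended" variant adds a white point so a chosen maximum value maps exactly to white:

```python
# Extended Reinhard: like x / (1 + x) but with a white point, so a chosen
# maximum scene value maps exactly to 1.0. The white value of 10 is an
# assumption matching the 0-10 light range described above.

def reinhard_extended(x, white):
    return x * (1.0 + x / (white * white)) / (1.0 + x)

print(round(reinhard_extended(10.0, 10.0), 3))  # 1.0: the max value hits white
print(round(reinhard_extended(1.0, 10.0), 3))   # 0.505: vs 0.5 for plain Reinhard
```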
10. I use an FP16 RGBA buffer to store the HDR values (they are also in linear space), and the image is gamma corrected. The image appears a bit dark because the textures have a maximum brightness value of 0.8 and the directional light a value of 1 (thus, even if the dot product between the light normal and the triangle normal is 1, you get at most a value of 0.8). That's also one of the issues I haven't figured out yet. I'm using a PBR rendering pipeline, and while researching online I always stumble upon the suggestion that in PBR one should use "real world values" to light the scene, but it's never explained/shown what that should look like (no reference values to take note of). For example, setting the light values to 7 (directional light) and 3 (ambient light), so that the maximum value in the HDR FP16 buffer can never exceed 10, the image looks like this: Without the Uncharted tonemap: (obviously mostly white, because I'm mapping values > 1 to the screen.) With the Uncharted tonemap: So if that's the correct behaviour, how can I get a "normal looking" image? What HDR range is required (in the FP16 buffer) in order to get correct results after tonemapping? /Edit: IMHO there is a big difference between a value of 0.7 and 0.9 once displayed on the screen. So, does this tonemapper expect you to have values > 100 in order to "properly" map to the display's 0-1 range?
11. So, I'm still on my quest to understand the intricacies of HDR and implement this in my engine. Currently I'm at the step of implementing tonemapping. I stumbled upon these blog posts: http://filmicworlds.com/blog/filmic-tonemapping-operators/ http://frictionalgames.blogspot.com/2012/09/tech-feature-hdr-lightning.html and tried to implement some of the tonemapping methods mentioned there in my post-processing shader. The issue is that none of them produces the same results as shown in the blog posts, which definitely has to do with the initial range in which the values are stored in the HDR buffer. For simplicity's sake I store values between 0 and 1 in the HDR buffer (ambient light is 0.3, directional light is 0.7). This is the tonemapping code:

```glsl
vec3 Uncharted2Tonemap(vec3 x)
{
    float A = 0.15;
    float B = 0.50;
    float C = 0.10;
    float D = 0.20;
    float E = 0.02;
    float F = 0.30;
    return ((x*(A*x+C*B)+D*E)/(x*(A*x+B)+D*F))-E/F;
}
```

This is without the Uncharted tonemapping: This is with the Uncharted tonemapping: Which makes the image a lot darker. The shader code looks like this:

```glsl
void main()
{
    vec3 color = texture2D(texture_diffuse, vTexcoord).rgb;

    color = Uncharted2Tonemap(color);

    // gamma correction (use only if not done in tonemapping code)
    color = gammaCorrection(color);
    outputF = vec4(color, 1.0f);
}
```

Now, my understanding is that tonemapping should bring the range down from HDR to 0-1. But the output of the tonemapping function heavily depends on the initial range of the values in the HDR buffer. (You can't set the sun intensity to 10 the first time and 1000 the second time and expect the same result if you feed that into the tonemapper.) So I suppose this also depends on the exposure, which I have to implement? To check this I plotted the tonemapping curve: You can see that the curve only goes up to a value of around 0.21 (when fed a value of 1) and then basically flattens out, which would explain why the image got darker.
My question is: In what range should the values in the HDR buffer be before they get tonemapped? Do I have to bring them down to a range of 0-1 by multiplying with the exposure? For example, if I increase the light values by a factor of 10 (directional light 7, ambient light 3), would I then need to divide the HDR values by 10 to get back to a 0-1 range that can be fed into the tonemapping curve? Is that correct?
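To sanity-check the plot: evaluating the operator numerically confirms the flattening, and pre-scaling the input by an exposure factor is exactly what moves you along the curve (constants are the ones from the snippet above; the exposure value here is arbitrary, purely for illustration):

```python
# Evaluating the Uncharted 2 operator to confirm the plotted behaviour,
# and showing that exposure is just a pre-scale of the input.

A, B, C, D, E, F = 0.15, 0.50, 0.10, 0.20, 0.02, 0.30

def uncharted2(x):
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

print(round(uncharted2(1.0), 3))  # ~0.221: matches the flat curve in the plot

exposure = 2.0  # arbitrary illustration value
print(round(uncharted2(1.0 * exposure), 3))  # ~0.357: exposure moves you up the curve
```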
12. Which transformation are you referring to in this case? The camera's? The object's transformation (which I don't have access to in the shader; only the position of the pixel, which is reconstructed from the depth)? And does setting w to zero refer to this line?

```glsl
vec3 normal = ((uNormalViewMatrix * vec4(normalize(texture2D(sNormals, vTexcoord).rgb), 0.0)).xyz); // w set to zero
```
13. So, I'm currently trying to implement an SSAO shader from THIS tutorial, and I'm running into a few issues. This SSAO method requires view-space positions and normals. I store the normals in my deferred renderer in world space, so I had to do a conversion and reconstruct the position from the depth buffer. And something there goes horribly wrong (which probably has to do with the world-space to view-space transformation). (Here is the full shader source code if someone wants to take a look at it.) Now, I suspect the normals are the culprit:

```glsl
vec3 normal = ((uNormalViewMatrix * vec4(normalize(texture2D(sNormals, vTexcoord).rgb), 1.0)).xyz);
```

sNormals is a 2D texture which stores the normals in world space in an RGB FP16 buffer. Now, I can't use the camera's view matrix to transform the normals into view space, as the camera's position isn't at (0,0,0), which skews the result. So what I did is create a new view matrix specifically for the normals, with the position at vec3(0,0,0):

```cpp
// "camera" is the camera which was used for rendering the normal buffer
renderer.setUniform4m(ressources->shaderSSAO->getUniform("uNormalViewMatrix"),
    glmExt::createViewMatrix(glm::vec3(0,0,0), camera.getForward(), camera.getUp()) // parameters: (position, forwardVector, upVector)
);
```

Though I have the feeling this is the wrong approach. Is this right, or is there a better/correct way of transforming a world-space normal into view space?
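A possibly simpler alternative to building a second view matrix is to transform the normal with w = 0, which makes the translation column of the view matrix drop out, leaving only the rotation. A tiny numeric sketch of the underlying math (pure Python; the matrix values are illustrative):

```python
# Why w = 0 works for normals: with w = 0 the translation column of a 4x4
# view matrix contributes nothing, so only the rotation part is applied.
# Row-major 4x4 math; identity rotation is used for clarity.

def mat4_mul_vec4(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

view = [[1.0, 0.0, 0.0,  5.0],
        [0.0, 1.0, 0.0, -3.0],
        [0.0, 0.0, 1.0,  2.0],
        [0.0, 0.0, 0.0,  1.0]]

normal = [0.0, 0.0, 1.0]
as_direction = mat4_mul_vec4(view, normal + [0.0])[:3]  # w = 0: translation ignored
as_position  = mat4_mul_vec4(view, normal + [1.0])[:3]  # w = 1: translation leaks in

print(as_direction)  # [0.0, 0.0, 1.0]
print(as_position)   # [5.0, -3.0, 3.0]
```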
14. I think I understand now. So essentially we store the FP32 (linear) values in an sRGB (non-linear) buffer in order to preserve precision between steps. Does writing into an sRGB texture convert linear data to sRGB data? The only way this can work is if: writing to an sRGB framebuffer converts linear (written) data to non-linear (sRGB) data, and reading/sampling the sRGB framebuffer converts the sampled sRGB data back to linear data (which is how the textures also work). Is this how sRGB framebuffers/textures behave? Sorry for all these questions; I've never worked in the sRGB color space and have absolutely no idea how reading/writing from/to sRGB textures actually behaves.
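For reference, the conversions applied on sRGB write (linear to sRGB) and on sampling (sRGB to linear) are the standard sRGB transfer functions, sketched here per channel in Python:

```python
# The standard sRGB transfer functions: linear -> sRGB is applied when
# writing to an sRGB framebuffer, sRGB -> linear when sampling an sRGB
# texture. Scalar per-channel versions for a value in [0, 1].

def linear_to_srgb(c):
    return 12.92 * c if c <= 0.0031308 else 1.055 * c ** (1.0 / 2.4) - 0.055

def srgb_to_linear(c):
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

x = 0.5
roundtrip = srgb_to_linear(linear_to_srgb(x))
print(round(linear_to_srgb(x), 4), round(roundtrip, 4))  # ~0.7354, back to 0.5
```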
15. Well, yes, if I output the framebuffer directly to the screen, then the sRGB framebuffer will do the conversion from linear to sRGB space for me. But more often than not (deferred rendering), we do additional post-processing steps (reading from the albedo buffer, etc.), so we need linear space. From my understanding, setting the sRGB flag on the framebuffer would convert the linear colors to sRGB when I access the framebuffer in a post-processing shader, which would lead to wrong results again (as I would be adding/multiplying sRGB colors). I found this post: https://stackoverflow.com/questions/11386199/when-to-call-glenablegl-framebuffer-srgb And the first answer tells us that we should remain in linear space until the very end, thus not using sRGB for post-processing purposes. However, as you said, the precision of the framebuffer needs to be increased in order to avoid losing precision due to the conversion. So the solution would be:
- Set color textures to sRGB.
- Framebuffers remain in linear space (RGB, not sRGB) but with increased precision (RGB10, FP16, etc.) to preserve precision.
- At the end of the render pipeline, do gamma correction with a shader or a separate sRGB framebuffer to output to the screen.
Is this correct?