
Styves

Member Since 17 Dec 2009

#5302438 Fast Way To Determine If All Pixels In Opengl Depth Buffer Were Drawn At Leas...

Posted by Styves on 25 July 2016 - 02:51 AM

Writing your own rasterizer isn't really going to solve your problem, since you won't be utilizing your GPU at all (or if you use compute, not as efficiently as you could be). Just leave that stuff to the GPU guys, they know what they're doing. :)

 

Anyway, do you really need such precise culling? I mean, are you absolutely sure you're GPU bound? Going into such detail just to cull a few triangles might not be worth it, and could hurt your performance rather than help if you're actually CPU bound, since modern GPUs prefer to chew through big chunks of data rather than receive a draw call for each individual triangle. If you have bounding box culling on your objects, plus frustum culling, then I think that's all you'll really need unless you're writing a big AAA title with very high scene complexity.

 

Just bear in mind that Quake levels were built for very different hardware constraints, so you should probably break the obj model up into smaller sections rather than processing the entire mesh as one chunk; that lets those two culling systems do a little more for you.

 

That said, if you really want proper occlusion culling for triangles, you can either check out the Frostbite approach (it's quite complicated iirc), or try implementing a simple Hi-Z culling system using geometry shaders (build a simple quad-tree out of your zbuffer and do quad-based culling on each triangle in the geometry shader). The latter is simpler to implement and I've had pretty good results with it.
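For a rough idea of what the geometry-shader version can look like, here's a minimal sketch (not the exact shader I used; it assumes a Hi-Z mip chain HiZMap storing the max depth per region, clip-space input positions, and made-up names like ViewportSize, and it skips edge cases such as triangles crossing the near plane):

Texture2D<float> HiZMap : register(t0); // depth mip chain, each mip = max depth of its footprint
SamplerState PointClamp : register(s0);

cbuffer CullParams : register(b0)
{
    float2 ViewportSize; // render target size in pixels
};

struct Vert { float4 posCS : SV_Position; };

[maxvertexcount(3)]
void HiZCullGS(triangle Vert tri[3], inout TriangleStream<Vert> outStream)
{
    float2 ndcMin = float2(1e9, 1e9), ndcMax = float2(-1e9, -1e9);
    float minDepth = 1.0;

    [unroll]
    for (int i = 0; i < 3; ++i)
    {
        float3 ndc = tri[i].posCS.xyz / tri[i].posCS.w;
        ndcMin = min(ndcMin, ndc.xy);
        ndcMax = max(ndcMax, ndc.xy);
        minDepth = min(minDepth, ndc.z); // closest point of the triangle
    }

    // Screen-space bounding box in UVs, then pick the mip where it covers roughly one texel
    float2 uvMin = float2(ndcMin.x, -ndcMax.y) * 0.5 + 0.5;
    float2 uvMax = float2(ndcMax.x, -ndcMin.y) * 0.5 + 0.5;
    float2 sizePx = (uvMax - uvMin) * ViewportSize;
    float mip = ceil(log2(max(max(sizePx.x, sizePx.y), 1.0)));

    // Conservative max occluder depth over the box (4 corner taps)
    float maxZ = HiZMap.SampleLevel(PointClamp, uvMin, mip);
    maxZ = max(maxZ, HiZMap.SampleLevel(PointClamp, float2(uvMax.x, uvMin.y), mip));
    maxZ = max(maxZ, HiZMap.SampleLevel(PointClamp, float2(uvMin.x, uvMax.y), mip));
    maxZ = max(maxZ, HiZMap.SampleLevel(PointClamp, uvMax, mip));

    // Keep the triangle only if its closest point could still pass the depth test
    if (minDepth <= maxZ)
    {
        outStream.Append(tri[0]);
        outStream.Append(tri[1]);
        outStream.Append(tri[2]);
    }
}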




#5287363 How could I use bent normal map

Posted by Styves on 17 April 2016 - 04:41 PM

Calculating bent normals is just an extension of AO calculation: along with accumulating the occlusion amount, you also average the directions of the unoccluded samples.
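Roughly speaking, it looks like this (a hedged sketch, not Substance Painter's actual baker; SampleDirection, TraceOcclusion and sampleCount are hypothetical stand-ins for whatever sampling scheme you use):

float3 bentNormal = 0.0;
float occlusion = 0.0;

for (int i = 0; i < sampleCount; ++i)
{
    float3 dir = SampleDirection(surfaceNormal, i);   // direction on the hemisphere around the normal
    float occluded = TraceOcclusion(position, dir);   // 1 = blocked, 0 = visible

    occlusion  += occluded;
    bentNormal += dir * (1.0 - occluded);             // only unoccluded directions contribute
}

float ao = 1.0 - occlusion / sampleCount;             // the usual AO term
bentNormal = normalize(bentNormal + 1e-5);            // average unoccluded direction
// With zero occlusion the average direction is just the surface normal, i.e. straight
// up in tangent space - which is why flat, unoccluded areas look "empty" in the map.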

 

 

[attached image]

 

 

The bent normals from Substance Painter seem correct. In any region where there is no AO, your normal won't be bent in any direction, so in tangent space it's pointing straight up along the surface normal. This is why you only see details in the ears.

 

I have no idea how Unity handles bent normals, but the usual way is, instead of applying AO as a flat multiplier on the lighting result (which gives that weird muddy look), to use the dot product of the bent normal and the light direction. This is how we handle SSDO in CryENGINE, for example.
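In shader terms, the idea is roughly this (a hedged sketch with made-up names, not Unity's or CryENGINE's actual code; bentNormalWS is assumed already decoded and in world space):

// Instead of:  diffuse = lightColor * albedo * saturate(dot(N, L)) * ao;
// weight the light by the bent normal:
float NdotL     = saturate(dot(normalWS, lightDir));
float bentNdotL = saturate(dot(bentNormalWS, lightDir));   // directional occlusion toward the light
float3 diffuse  = lightColor * albedo * NdotL * bentNdotL;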

 

I've only ever used a texture-based version of this technique once; in that version the bent normals also contained information from the original tangent-space normal map, and they were used directly during lighting in place of the original.




#5281181 Doing SubSurfaceScattering / backlight

Posted by Styves on 14 March 2016 - 05:52 AM

Why not try Pre-integrated skin shading? It was used in The Order 1886. You can check this blog post for a bit more info, maybe it's what you're looking for.




#5277465 DoF - near field bleeding

Posted by Styves on 22 February 2016 - 10:44 AM

You don't need to premul the sharp image by 1-cocNear. :) You just need to premul far and near.

 

What you want is to blend from back to front, something like this:

float4 sharp = tex2D(sharpTex, coord);
float4 near = tex2D(nearTex, coord);
float4 far = tex2D(farTex, coord);

// undo premultiplication
far.rgb /= far.a > 0 ? far.a : 1.0; // catching division by 0
far.a = saturate(far.a);

near.rgb /= near.a > 0 ? near.a : 1.0; // catching division by 0
near.a = saturate(near.a); // you can multiply by some value to avoid the "soft focus" effect at low CoC values

// compute sharp far CoC at full res (in case farTex is half res)
float depth...
float farCoC...

// composite, starting from back to front
float4 final = lerp(sharp, far, farCoC); // SHARP far CoC, not blurred, can be stored in sharp or far alpha if those are full res
final = lerp(final, near, near.a); // near.a is BLURRED near CoC



#5274213 DoF - near field bleeding

Posted by Styves on 04 February 2016 - 05:01 AM


Okay, we're moving forward. But:
This:
sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)
should be this:
sampleNearCoC >= maxCoC ? 1 : (1.0 - saturate(maxCoC - sampleNearCoC))
YMMV, whatever works for you is fine. :) The second method kills the near field for me as it rejects most samples.
 


Also, I don't think that multiplying near field value by near field coc is correct because that will erase any colors from points with near coc == 0 that should have other pixels bleed onto them.
Nah, you're pre-multiplying the near field with the near CoC, not applying it after blurring. The amount a sample contributes to a pixel depends on how strong that sample's CoC is, so things closer to the camera bleed onto other pixels more. A pixel with CoC == 0 will still sample pixels that have a higher CoC and thus will still have those pixels bleed onto it.
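Concretely, the pre-multiply when writing out the near field (before the blur) looks something like this (hedged sketch, hypothetical names):

// Writing the near-field buffer before the blur pass:
float4 nearOut;
nearOut.rgb = sceneColor.rgb * nearCoC;   // pre-multiply the color by the near CoC
nearOut.a   = nearCoC;                    // the CoC rides along in alpha and gets blurred too
// A pixel with nearCoC == 0 writes 0 here, but neighbours with a larger CoC will
// still bleed onto it during the blur, which is exactly the behaviour you want.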
 


I think I have one last problem: how do I decide whether to write near field or far field into the buffer? It becomes very problematic for pixels that should be blurred by both fields.
For near field/far field, we output both to separate textures using MRT and blur them separately. We do this in a single shader, first blurring the near field (if the max tile has CoC) and then the far field. Then we composite both textures with the sharp image, starting with far and then near.
 

If you're not using multiple passes to increase the sample count as mentioned in the paper, then an alternative idea is to try the CoD approach which doesn't separate the near/far into different textures but performs a scatter-as-gather just like the McGuire motion blur approach. You can read about that here: http://advances.realtimerendering.com/s2014/sledgehammer/Next-Generation-Post-Processing-in-Call-of-Duty-Advanced-Warfare-v17.pptx




#5274114 DoF - near field bleeding

Posted by Styves on 03 February 2016 - 02:44 PM

1. You meant "using the MAX tile"?

Sorry, yeah, I meant the max tile. :)

 

2. Let's say we're processing a pixel that has near CoC = 1 and is near the edge with the background (a pixel on the left hand side of the black line on my picture, near the line). Are we processing all samples in the kernel in this case, average them and output the result?

Yes, you always sample your entire kernel and then average the results. Weighting is done in two ways (see below).

 

 

3. Let's say we're now processing a background pixel with CoC = 0. What samples should I use for this pixel and what weights should they have? From what I understand I definitely need samples with CoC = 0.0 (the left side ones) but I also need the center sample, right?

Same as above.

 

Weighting is done in two ways. First, you pre-multiply the color by the CoC value, so that every color you sample has already been weighted by its CoC. Second, you apply the max tile comparison during sampling. It looks something like:

sampleNearCoC >= maxCoC ? 1 : saturate(maxCoC - sampleNearCoC)

That only applies to the near field; far is just:

sampleFarCoC >= farCoC ? 1 : saturate(sampleFarCoC)
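Put together, the near-field gather ends up looking roughly like this (a hedged sketch with hypothetical names like kernel and SAMPLE_COUNT, offsets assumed already in UV units; not the exact shader):

// Near-field gather with both weightings applied.
// nearTex holds color pre-multiplied by CoC in rgb and the CoC itself in alpha;
// maxCoC comes from the max tile for this pixel.
float4 sum = 0.0;
float weightSum = 0.0;

for (int i = 0; i < SAMPLE_COUNT; ++i)
{
    float2 offset = kernel[i] * maxCoC;            // scatter-as-gather: kernel scaled by the tile max
    float4 s = tex2D(nearTex, coord + offset);     // rgb already pre-multiplied by the sample's CoC
    float sampleNearCoC = s.a;

    float weight = sampleNearCoC >= maxCoC ? 1.0 : saturate(maxCoC - sampleNearCoC);

    sum       += s * weight;
    weightSum += weight;
}

float4 nearResult = sum / max(weightSum, 1e-5);    // blurred, still pre-multiplied; divide rgb by alpha at composite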



#5274087 DoF - near field bleeding

Posted by Styves on 03 February 2016 - 11:52 AM

It's your lucky day, since I happened to work on the exact shader you're talking about. ;)

 

What you said is exactly right, we blur the near field CoC. We store the CoC in the alpha channel which gets blurred along with the color. We blend the sharp color with the far (using sharp CoC) and then the near field over that result.

 

A few important things to take note of:

  • Clamp the near field alpha before the blurring process (saturate(nearField.a)). If you don't, your near field bleeding will turn into a big blob of solid color and will look awful. This isn't mentioned in the paper (it's something I fixed after the fact) but you've likely seen it in a lot of Sprite DOF implementations.
  • Make sure you're pre-multiplying your color with the CoC as mentioned in the paper (you can/probably should do this on the near field too). Pics in the paper as to why this is important.
  • Don't forget to undo the above in the final blend pass just before blending so that your colors are restored to their original brightness (rgb /= alpha). Either check for 0 and don't divide, or make sure it can't be 0. Otherwise you'll probably get some darkening issues around focus boundaries in places.
  • "Scatter as gather" refers to the motion blur section of the paper, if it wasn't clear. Basically it means we reject samples using the min tile CoC like we do in motion blur. We only do this on the near field. The far field is tested against the full CoC.
  • If your blur amount/CoC is too weak, then you won't have enough "range" in your alpha to properly blend, causing a "soft focus" look. You can try to do some image balancing to bring the brightest point up to 1 to mitigate this and have a proper 0-1 blend range/mask, but we left this out because it's not really too important... and you need to figure out what the brightest pixel in the CoC map is in order to actually do this kind of compensation. Though, if you're really hellbent on it, you could probably merge it with an auto-exposure luminance downscaling shader if you have that.

Hope that helps. :)




#5273162 Combining shadows from multiple light sources

Posted by Styves on 29 January 2016 - 05:17 AM

Why are you taking the min of both shadow terms and applying it to both light values? This is wrong (unless it's an artistic choice you're after). You should be multiplying colorDirectional by shadowDirectional, and colorPoint by shadowPoint.

 

You should also only add ambient at the end, as a separate "light" value, not as part of the PhongShade function; otherwise you're adding the ambient term again for every new light.
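In other words, something along these lines (just a sketch; your actual PhongShade signature will differ):

// Each light is attenuated by its own shadow term; ambient is added exactly once.
float3 directionalLit = PhongShade(directionalLight, normal, viewDir) * shadowDirectional;
float3 pointLit       = PhongShade(pointLight, normal, viewDir) * shadowPoint;
float3 finalColor     = ambient + directionalLit + pointLit;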




#5263830 Normal mapping erroneous results for diffuse lighting

Posted by Styves on 27 November 2015 - 08:46 AM

Just a quick thought while I'm passing by, but:

 


output.tangentSpaceViewDir = float3(dot(normal.xyz, viewDir),
dot(tangent.xyz, viewDir),
dot(bitangent, viewDir));

output.tangentSpaceLightDir = float3(dot(normal.xyz, lightDir.xyz),
dot(tangent.xyz, lightDir.xyz),
dot(bitangent, lightDir.xyz));

 

Are you sure this is the correct order? I'm wondering if, instead of moving from WS to tangent space, you're doing the opposite (or ending up in some other space entirely). Maybe you need to transpose this matrix?
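For comparison, the arrangement I'd expect for a world-to-tangent transform looks like this (a sketch, assuming normal, tangent and bitangent are world-space basis vectors):

// World -> tangent: dot against tangent, bitangent, normal in that order, so the
// result's z lines up with the surface normal like the normal map expects.
float3x3 worldToTangent = float3x3(tangent.xyz, bitangent, normal.xyz);

output.tangentSpaceViewDir  = mul(worldToTangent, viewDir);
output.tangentSpaceLightDir = mul(worldToTangent, lightDir.xyz);

// Going the other way (tangent -> world) is the transpose:
// float3 worldDir = mul(transpose(worldToTangent), tangentSpaceDir);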




#5263679 HDR post-processing order

Posted by Styves on 26 November 2015 - 04:42 AM

With the lerp approach you don't have to do a brightpass at all, so you can downscale your bloom target directly from the original source. Skips a step. ^_^

 

As for the lerp: apply exposure onto the result *after* lerping. So long as your image is exposed properly you should have the right amount of bloom. This is how we're doing it in CryEngine. :)




#5263560 HDR post-processing order

Posted by Styves on 25 November 2015 - 07:53 AM


- In the old days, the bright pass ran in LDR with a threshold value. Is there a better way to handle this in HDR?
  Maybe scale the threshold by max white, or lower the exposure?

 

A lot of modern games (usually with PBR) don't use any threshold at all and instead try to do some energy conservation - the simplest I know of is to lerp your HDR color and bloom color by a small amount (ex: lerp(color, bloom, 0.1)). This can help avoid aliasing from the threshold and can save you a pass (and is also "more correct" - quotes because bloom is a hack).
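As a sketch of that idea, combined with the exposure-after-lerp point from the reply above (hypothetical names; Tonemap stands in for whatever operator you use):

// Thresholdless bloom combine; exposure is applied after the lerp.
float3 hdrColor   = tex2D(sceneTex, uv).rgb;        // full-res HDR scene
float3 bloomColor = tex2D(bloomTex, uv).rgb;        // blurred downscaled copy of the same HDR scene

float3 combined = lerp(hdrColor, bloomColor, 0.1);  // small, roughly energy-conserving contribution
float3 exposed  = combined * exposure;
float3 ldr      = Tonemap(exposed);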




#5262456 Post-Process HDR or LDR nowadays

Posted by Styves on 17 November 2015 - 01:58 PM

This really depends on the quality and post processes you want. Some color operations only work on 8-bit images, so you should do those after tone mapping for sure (FXAA, for example). Others, like bloom, DOF or motion blur look best before tone mapping because of the extra precision. But there's nothing preventing you from doing it after, if you need the performance... though bloom is a bit trickier to do in LDR without making it look awful...

 

I think the most common option in AAA games is to use optimized shaders and fast formats for sampling. For example, motion blur can be done using R11G11B10F format for all samples except the center one (in fact, if you use blending, you can skip the center fetch completely). This is what we did for Ryse (and in CryENGINE) - not sure about other engines though, but I'd expect they do similar optimizations.

 

If R11G11B10F isn't enough precision, then one trick that works is to pre-expose your image (i.e. pre-scale it with your exposure value) but not tone map it. Instead, you output that to a 10-bit texture (R10G10B10A2 or something) and then feed that into your HDR post processes. This still looks quite good, since you still have 0-1023 values, which after exposure is arguably enough* - and you're still only using a 32-bit texture, which should be better for bandwidth/sampling (IIRC sampling R11G11B10F is fine but writing to it is half rate compared to full rate for R10G10B10A2 if you're not alpha blending, though I could be wrong or that could be outdated info). Of course you can still use R11G11B10F, whatever works for you. :)

 

When you're done you can pipe that into your tone mapping shader as you would normally, composite it with bloom (which should also be scaled by exposure), etc.

 

*Many camera RAW formats are 10-bit (or 12-bit); 10-bit is also a common format in color correction software.
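A minimal sketch of that flow, under the assumptions above (hypothetical names like sceneTex, postTex, bloomTex, exposure and Tonemap; the exact passes depend on your pipeline):

// Pass right after lighting: pre-expose, don't tone map, write to R10G10B10A2_UNORM.
float4 PreExposePS(float2 uv : TEXCOORD0) : SV_Target
{
    float3 hdrColor = sceneTex.Sample(linearClamp, uv).rgb;
    return float4(saturate(hdrColor * exposure), 0.0); // values above 1.0 clip in the UNORM target
}

// ...motion blur / DOF / bloom then read this pre-exposed texture...

// Final pass: composite and tone map as usual (bloom also scaled by exposure).
float4 TonemapPS(float2 uv : TEXCOORD0) : SV_Target
{
    float3 color = postTex.Sample(linearClamp, uv).rgb + bloomTex.Sample(linearClamp, uv).rgb;
    return float4(Tonemap(color), 1.0); // no extra exposure here - it was applied up front
}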




#5257488 Rendered texture is transparent in DirectX11

Posted by Styves on 16 October 2015 - 08:34 AM


float4 PSMain(VertexShaderOutput input) : SV_Target
{
return ShaderTexture.Sample(Sampler, input.TextureUV).rgb;
}

 

First thing I'd look at is this. You're returning a float3 (.rgb) but the return type for your PSMain function is float4. Is that actually compiling?
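If that's the problem, a fix would look like this (sketch):

float4 PSMain(VertexShaderOutput input) : SV_Target
{
    // Return all four components, with an explicit alpha of 1.
    return float4(ShaderTexture.Sample(Sampler, input.TextureUV).rgb, 1.0f);
}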




#5223126 Phong reflection on a water surface.

Posted by Styves on 14 April 2015 - 05:29 AM



The shader looks somewhat like this:


float PhongReflection(vec3 DirFromVertToLight, vec3 surfnormal, vec3 PerfectlyReflectedDir, vec3 DirVertexCam)
{
return 0.5*dot(DirFromVertToLight,surfnormal) + 0.8*dot(PerfectlyReflectedDir,DirVertexCam);
}


gl_FragColor = vec4(reflection_color,1.0)*PhongReflection(dfvtl, u_normalMap, Reflect(-dfvtl, vertex_normal), dvc);

 

I'm assuming the 0.5 and 0.8 are intensity tweaks?

 

One thing that comes to mind is that you're not clamping your dot products to the 0-1 range. That means your specular result will subtract from your diffuse light whenever the dot product is negative. Same goes for diffuse.

 

The Phong result should also be multiplied by the Lambertian term for correctness, and raised to a power for glossiness.

 

It should look more like this (I've added your 0.5 and 0.8 to this, but I'm not sure you need or even want those):

float PhongReflection(vec3 DirFromVertToLight, vec3 surfnormal, vec3 PerfectlyReflectedDir, vec3 DirVertexCam)
{
float nDotL = clamp(dot(DirFromVertToLight,surfnormal), 0.0, 1.0);
float rDotV = clamp(dot(PerfectlyReflectedDir,DirVertexCam), 0.0, 1.0);
return 0.5*nDotL + (0.8 * nDotL * pow(rDotV, 32.0));
}

The 32 is the "glossiness". Higher = sharper highlights. Currently you're effectively using a power of 1, which gives you a super spread-out highlight.

 

Also, what is "reflection_color"? If all you want is the reflection highlight and not direct/diffuse lighting, then you don't need to add "0.5*nDotL" to that last line at all.




#5195706 Voxel Cone Tracing Problem

Posted by Styves on 01 December 2014 - 08:54 AM

Are you using point filtering? You should be using trilinear filtering (GL_LINEAR_MIPMAP_LINEAR).

 

Otherwise, are you rounding your coordinates to whole voxels (floor(pos), etc.)? If so, you shouldn't be. :)





