

CryZe

Member Since 19 Jun 2011
Offline Last Active Nov 05 2013 09:48 PM

#5008035 SSAO and skybox artifact

Posted by CryZe on 07 December 2012 - 02:15 AM

It will reduce performance of your SSAO shader

It will actually improve the performance of his shader. Here are the rules for ifs in shaders:
- Ifs get compiled away if they can be evaluated at compile time, and thus don't reduce your performance.
- Ifs don't reduce your performance if they branch on values from a constant buffer (except for the additional instructions for checking the condition).
- Ifs don't reduce your performance if they only have a single code path, i.e. no else branch (except for the additional instructions for checking the condition).
- Ifs don't reduce your performance if the whole warp chooses the same code path. A warp is a group of 32 threads executing together (AMD's equivalent, the wavefront, is 64 threads). So even a whole thread group / block diverging into different code paths doesn't necessarily affect performance negatively, as long as each individual warp only takes one code path (except for the additional instructions for checking the condition).
- When none of these conditions are met, your ifs will reduce the performance.

In his case there's only one code path, so a warp either takes it or skips it. If all the threads inside a warp are working on pixels that belong to the sky, the whole warp skips the code inside the if entirely, which results in a performance increase.
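To make the last two rules concrete, here's a tiny Python sketch of a per-warp cost model. The warp size and instruction costs are illustrative assumptions, not measurements of any real GPU:

```python
WARP_SIZE = 32
CHECK_COST = 1    # instructions to evaluate the branch condition
BODY_COST = 100   # instructions in the (hypothetical) SSAO body

def warp_cost(is_sky_per_thread):
    """Cost for one warp: the condition check always runs; the body
    runs only if at least one thread in the warp is a non-sky pixel."""
    if all(is_sky_per_thread):
        return CHECK_COST            # whole warp skips the SSAO body
    return CHECK_COST + BODY_COST    # any non-sky lane -> body executes

# A warp covering only sky pixels skips the expensive body entirely:
sky_warp = [True] * WARP_SIZE
mixed_warp = [True] * (WARP_SIZE - 1) + [False]

print(warp_cost(sky_warp))    # 1
print(warp_cost(mixed_warp))  # 101
```

Since sky pixels tend to cluster on screen, many warps will be all-sky and pay only the check cost, which is where the speedup comes from.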


#5004508 DrawInstanced problem

Posted by CryZe on 27 November 2012 - 07:37 AM

Think of it this way (pseudo code):

void DrawInstanced(..., int vertexCount, int instanceCount, ...)
{
    for (int instanceId = 0; instanceId < instanceCount; instanceId++)
    {
        for (int vertexId = 0; vertexId < vertexCount; vertexId++)
        {
            Vertex vertex;
            for (int inputLayoutIndex = 0; inputLayoutIndex < inputLayout.getCount(); inputLayoutIndex++)
            {
                InputLayoutElement element = inputLayout.get(inputLayoutIndex);
                int vertexBufferIndex = element.getVertexBufferIndex();
                int vertexBufferOffset = element.getVertexBufferOffset();

                VertexBuffer vertexBuffer = vertexBuffers[vertexBufferIndex];
                int stride = vertexBuffer.getByteStride();

                // Per-instance elements are indexed by the instance id,
                // per-vertex elements by the vertex id.
                Object value;
                if (element.getClassification() == Classification.Instance)
                {
                    value = vertexBuffer[instanceId * stride + vertexBufferOffset];
                }
                else
                {
                    value = vertexBuffer[vertexId * stride + vertexBufferOffset];
                }

                // The semantic decides which vertex shader input
                // this value ends up in.
                String semantic = element.getSemantic();
                vertex.setValue(semantic, value);
            }

            VertexShader(vertex);
        }
    }
}

That means that your input layout could look like this:

[Semantic, VertexBufferIndex, VertexBufferOffset, Classification]
["VERTEX_VALUE_0", 0, 0, Vertex]
["VERTEX_VALUE_1", 0, 8, Vertex]
["VERTEX_VALUE_2", 0, 16, Vertex]
["INSTANCE_VALUE_0", 1, 0, Instance]
["INSTANCE_VALUE_1", 1, 8, Instance]

You then simply use 2 vertex buffers:

VertexBuffer 0: [[vertexValue0, vertexValue1, vertexValue2], [vertexValue0, vertexValue1, vertexValue2], [vertexValue0, vertexValue1, vertexValue2], ...]
VertexBuffer 1: [[instanceValue0, instanceValue1], [instanceValue0, instanceValue1], [instanceValue0, instanceValue1], ...]

The vertex shader is then effectively called for the Cartesian product of both sets.
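Here's the pseudocode above as a runnable Python sketch. The layouts and buffer contents are illustrative (tuple indices stand in for byte offsets), not the actual D3D11 structures:

```python
VERTEX, INSTANCE = "vertex", "instance"

# (semantic, buffer_index, element_offset, classification)
input_layout = [
    ("POSITION", 0, 0, VERTEX),
    ("COLOR",    1, 0, INSTANCE),
]

def draw_instanced(vertex_buffers, layout, vertex_count, instance_count):
    """Yield the input values each vertex-shader invocation would receive."""
    for instance_id in range(instance_count):
        for vertex_id in range(vertex_count):
            vertex = {}
            for semantic, buf_idx, offset, classification in layout:
                buf = vertex_buffers[buf_idx]
                # Per-instance elements index by instance_id, per-vertex by vertex_id.
                index = instance_id if classification == INSTANCE else vertex_id
                vertex[semantic] = buf[index][offset]
            yield vertex

vertex_buffers = [
    [("p0",), ("p1",), ("p2",)],   # buffer 0: per-vertex data
    [("red",), ("blue",)],         # buffer 1: per-instance data
]

calls = list(draw_instanced(vertex_buffers, input_layout, 3, 2))
print(len(calls))  # 6 = vertexCount * instanceCount (the Cartesian product)
print(calls[0])    # {'POSITION': 'p0', 'COLOR': 'red'}
```

Every vertex is paired with every instance, which is exactly the Cartesian-product behavior described above.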


#4999901 Lens blur in frequency space

Posted by CryZe on 11 November 2012 - 07:57 AM

However the biggest problem is that for a filter-based DOF to look decent you need the ability to not filter across depth discontinuities, in order to avoid background bleeding onto the foreground and other similar artifacts. This isn't simple to do in the frequency domain. Plus you really want to vary the kernel size per pixel based on the depth, which also isn't simple.

I've heard that it's possible to use a three-dimensional FFT where the depth is an additional dimension, if you want to use an FFT for your depth of field implementation. But that would probably not be worth it.


#4998802 Math for computing relative sun direction

Posted by CryZe on 08 November 2012 - 03:28 AM

As MJP correctly pointed out, the homogeneous clip space coordinates need to be divided by w. So the correct code would look like this:

// Transform sun position to screen space
SunPos = mul( SunPos, ViewProj );

SunPos /= SunPos.w;

float4x4 TexAdj = { 0.5f,  0.0f, 0.0f, 0.0f,
                    0.0f, -0.5f, 0.0f, 0.0f,
                    0.0f,  0.0f, 1.0f, 0.0f,
                    0.5f,  0.5f, 0.0f, 1.0f };

SunPos = mul( SunPos, TexAdj );


// Calculate vector from pixel to light source in screen space.
float2 deltaTex = TexPos - SunPos.xy;

If your world space SunPos isn't (x, y, z, 1) you need to set w to 1 before the projection.

When I did this, I simply drew the sun on the screen to check if it aligns with the already drawn sun:

float3 color = distance(TexPos, SunPos.xy) < 0.005 ? float3(1, 0, 0) : tex2D(sampler0, input.texCoord).rgb;
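The projection math above can be checked on the CPU as well. This Python sketch takes a position that has already been transformed by ViewProj (a hypothetical clip-space value, since the actual matrices aren't in the post) through the perspective divide and the TexAdj mapping:

```python
def clip_to_texcoord(clip):
    """clip is (x, y, z, w) after the ViewProj transform."""
    x, y, z, w = clip
    # Perspective divide: homogeneous clip coords -> NDC in [-1, 1]
    x, y = x / w, y / w
    # Same mapping as the TexAdj matrix: NDC -> texture space [0, 1],
    # with y flipped because texture coordinates grow downward.
    u = x * 0.5 + 0.5
    v = y * -0.5 + 0.5
    return u, v

# A point at the center of the view lands at (0.5, 0.5):
print(clip_to_texcoord((0.0, 0.0, 5.0, 10.0)))    # (0.5, 0.5)
# The top-right NDC corner maps to (1.0, 0.0) in texture space:
print(clip_to_texcoord((10.0, 10.0, 5.0, 10.0)))  # (1.0, 0.0)
```

Skipping the divide by w is exactly what breaks the alignment: the result would then depend on the raw clip-space magnitudes instead of the on-screen position.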



#4989387 In game Minimap Issue

Posted by CryZe on 12 October 2012 - 02:00 AM

It could be a problem with any of these:
- Backface / Frontface Culling
- Scissor Rectangles
- Depth / Stencil State and / or -Buffer
- The Viewport

My guess is that your viewport might be wrong or not assigned at all.


#4988727 Simple diffuse light. Weird Behavior.

Posted by CryZe on 10 October 2012 - 08:11 AM

Looks ok as far as I can tell. But you should multiply the factor by the image color instead of adding it. You might want to add a constant "ambient term" to your factor before multiplying, to get some kind of ambient lighting, so that the unlit areas are not completely black (spherical harmonics or environment mapping are better approaches to ambient lighting, though).

Also, the normals in your image are stored in the range (0, 1). But what you actually want are normals in the range (-1, 1). So you have to load them like this:
vec4 normalColor = 2 * texture2D( normal, tc ) - 1;

As a distance attenuation factor you might want to divide the resulting illumination by the light vector's squared length. This is the most physically correct distance attenuation factor.
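Putting the three suggestions together, here's a minimal Python sketch of the shading for a single point light. The function name and the ambient value are illustrative assumptions:

```python
import math

def shade(image_color, encoded_normal, light_vec, ambient=0.1):
    # Decode the normal from texture range (0, 1) to (-1, 1)
    n = [2.0 * c - 1.0 for c in encoded_normal]

    dist_sq = sum(c * c for c in light_vec)
    dist = math.sqrt(dist_sq)
    l = [c / dist for c in light_vec]          # normalized light direction

    n_dot_l = max(0.0, sum(a * b for a, b in zip(n, l)))
    factor = ambient + n_dot_l / dist_sq       # diffuse with 1/d^2 attenuation

    # Multiply (not add) the factor with the image color
    return [c * factor for c in image_color]

# Normal facing a light 1 unit away: factor = 0.1 + 1.0
print(shade([1.0, 0.5, 0.0], [0.5, 0.5, 1.0], [0.0, 0.0, 1.0]))
```

Note that the encoded normal (0.5, 0.5, 1.0) decodes to (0, 0, 1), which is why flat areas of a normal map look uniformly blue.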


#4988713 Simple diffuse light. Weird Behavior.

Posted by CryZe on 10 October 2012 - 07:39 AM

Sorry about that, format messed up when I copied it. It should have line breaks now.

The forum can't handle code inside the code-tag right now unfortunately, so it's still a single line :(

Well, the texture on the left right now is the actual image of the object that I'm applying lighting to (since it's a 2D game). And the normal map has RGB values that are the XYZ of the normal at each pixel, as I understand it.

Like he said, it seems like it has some kind of lighting baked into it (hand painted though), which might look a bit odd when you're lighting it.

I mean, I'm getting the angle between the light source and the normal, and the brightness is proportional to that, shouldn't that work on its own?

The brightness should, in the simplest case, be proportional to the cosine of the angle between the light direction and the normal. Not quite sure what you're doing, as the code is unreadable with this forum bug :(
You might want to paste it directly without the code-tag, change the font to Courier New, and maybe color the syntax a bit to help the reader :D


#4988679 Unreal 4 voxels

Posted by CryZe on 10 October 2012 - 05:41 AM

The voxels are not used for the actual rendering, just for the secondary bounces of the lighting.


#4985997 PixelShaderWrapper has no SetUnorderedAccessView methods

Posted by CryZe on 02 October 2012 - 02:45 AM

This seems to be due to the fact that there is no native ID3D11DeviceContext::PSSetUnorderedAccessViews method either; pixel shader UAVs are bound through the output merger stage. A bit mysterious if you ask me. Simply use the OutputMergerWrapper methods instead:
http://slimdx.org/do...pper_SetTargets


#4981695 Reinhard operator and HDR

Posted by CryZe on 19 September 2012 - 08:39 AM

Ok, let's call it the Serr Tone Mapping operator then.

I've just tweaked some settings so that it matches the Uncharted 2 Tone Mapping operator exactly. With the right parameter values you get the exact same tone mapping operator. Here's a plot:

[Plot comparing the two tone mapping curves]

The thing is that it's way lighter on the GPU than the one from Uncharted 2. With these settings it basically reduces to a single, much cheaper expression.


#4981635 Self-shadowing on curved shapes problem

Posted by CryZe on 19 September 2012 - 05:17 AM

It's faster if implemented this way, though: those are expensive instructions, which is why you should avoid them whenever possible.


#4981236 Reinhard operator and HDR

Posted by CryZe on 18 September 2012 - 07:17 AM

You could generalize it even more by adding an exponent to increase the "contrast", or let's say to make it more filmic.

With the right exponent, you basically get the same curve as the Uncharted 2 curve, except for the really bright colors: the upper tail conserves more color instead of abruptly cutting it off. In the actual shader you only need to calculate a single pow, so it's not much slower than the version without the additional parameter.

Have you some comparison shots (color gradient in linear, Reinhard, Uncharted and modified Reinhard)? This would be quite interesting.

As soon as I can test it out, I'm going to create some comparison shots. Even though I can already tell you that it should look basically the same as the Uncharted 2 Tone Mapping operator, except that bright colors converge more smoothly to pure white. The rest of the colors look as "filmic" as the Uncharted 2 operator does.
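The exact modified operator discussed here isn't recoverable from the post (the formula images were lost), but the two well-known baselines it builds on can be sketched in Python: classic Reinhard and the white-point ("extended") variant:

```python
def reinhard(x):
    """Classic Reinhard: approaches 1 asymptotically, never reaches it."""
    return x / (1.0 + x)

def reinhard_extended(x, white=4.0):
    """Extended Reinhard: a chosen luminance `white` maps exactly to 1.0,
    so very bright values converge smoothly to pure white."""
    return x * (1.0 + x / (white * white)) / (1.0 + x)

print(reinhard(1.0))           # 0.5
print(reinhard_extended(4.0))  # 1.0  (the white point maps to pure white)
```

The "smooth convergence to white" mentioned above is the key difference from operators that clip the upper tail abruptly.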


#4980873 Blinn-Phong Specular Exponent to Trowbridge-Reitz Roughness

Posted by CryZe on 17 September 2012 - 07:54 AM

Is there a good formula to convert the specular exponent (glossiness) of the Blinn-Phong NDF to the roughness value of the Trowbridge-Reitz NDF? I've tried a simple mapping, but that doesn't work that well. Is there a better approximation?

I'm currently changing the BRDF to an actual Cook-Torrance BRDF with Trowbridge-Reitz distribution, Schlick Fresnel and Smith-Trowbridge-Reitz geometry factor. The BRDF itself is only 27 clock cycles on a Fermi or Kepler GPU (NDotL, NDotH, LDotH, ... not included). It's fast enough, so there's no reason for me to use a weak approximation of Cook-Torrance. But all my models still store Blinn-Phong glossiness, which is why I need to convert them.

I actually would want to use an approximation for the geometry factor, though. That's the worst part of the whole BRDF: the 2 square roots alone take 12 clock cycles.
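One commonly used approximation for this kind of conversion maps a Blinn-Phong specular exponent n to a microfacet roughness via alpha = sqrt(2 / (n + 2)). Whether it matches Trowbridge-Reitz closely enough for a given asset library is a judgment call; treat it as a starting point, not the definitive mapping:

```python
import math

def blinn_phong_to_roughness(n):
    """Approximate conversion: Blinn-Phong exponent -> microfacet roughness."""
    return math.sqrt(2.0 / (n + 2.0))

# Higher exponents (glossier surfaces) map to lower roughness:
for n in (2, 16, 128, 2048):
    print(n, round(blinn_phong_to_roughness(n), 4))
```

The mapping is monotonically decreasing, so sorting assets by glossiness is preserved even if the absolute roughness values need per-material tweaking.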


#4980053 Some questions about bloom

Posted by CryZe on 14 September 2012 - 07:52 AM

So thinking of a classic deferred approach I render my light in the light accumulation buffer, then combine that with albedo and everything else (ssao) in the composition pass and after that apply bloom & tonemapping as a quasi post-effect ? Is that correct ?

Yes, they are separate passes.
G-Buffer Generation -> Light Accumulation -> Light Composite (combine light accumulation with albedo) -> ... (some other passes) -> Bloom (and lens flares, ... ) -> Average Luminance Calculation -> Exposure Adjustment -> Tone Mapping


#4979966 Some questions about bloom

Posted by CryZe on 14 September 2012 - 12:36 AM

I calculate my lighting and write that to a light accumulation texture. This texture gets filtered in a bright pass for values >= a threshold, which is then written to a separate RT that gets blurred a few times. This texture then gets added to the original light accumulation texture again (originalColor + blurredColor) and then sent to the tonemapper.
If there's anything wrong with that tell me

Yes, there is something wrong. Like I said, the light accumulation texture is just an intermediate texture in your "Incoming Radiance Simulation Stage". Bloom is what you want to apply after you're done with everything. That means you want to combine your Light Accumulation Texture with your Albedo beforehand, in your Light Composite Pass (I = LightAccumulationDiffuse * Albedo + LightAccumulationSpecular).

Also, there's no difference between Bloom and Glow. What you probably mean is using a texture for an object as a glow map. You can do that: you basically extend your Light Composite Pass so that the Glow Map can increase the visible radiance without the need for a light. The Bloom will make it seem all glowy afterwards: I = LightAccumulationDiffuse * Albedo + LightAccumulationSpecular + Emissive

Which leaves me with question 3. There still appears to be a problem when combining the blurred lightmap with the albedo in the end. The corners seem to bleed with the texture color, which should not happen, as only the front face of the cube should be lit and everything else should not. Is there any way to fix this problem?

This problem won't occur anymore once you apply Bloom after combining the Light Accumulation and the Albedo.
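The ordering argument can be sketched in Python: composite first (light accumulation times albedo, plus specular and emissive), then run bloom on the composited image. The blur here is a crude 1D box filter purely for illustration; the function names are illustrative, not from any engine:

```python
def composite(diffuse, specular, albedo, emissive):
    # I = LightAccumulationDiffuse * Albedo + LightAccumulationSpecular + Emissive
    return [d * a + s + e for d, s, a, e in zip(diffuse, specular, albedo, emissive)]

def bloom(pixels, threshold=1.0):
    bright = [max(0.0, p - threshold) for p in pixels]     # bright pass
    blurred = [sum(bright[max(0, i - 1):i + 2]) / 3.0      # 1D box blur
               for i in range(len(bright))]
    return [p + b for p, b in zip(pixels, blurred)]        # add back

diffuse  = [2.0, 0.0, 0.0]
specular = [0.5, 0.0, 0.0]
albedo   = [1.0, 1.0, 1.0]
emissive = [0.0, 0.0, 0.0]

# Bloom runs on the composited image, so it only spreads actual bright
# radiance; unlit albedo can't bleed into the result on its own.
print(bloom(composite(diffuse, specular, albedo, emissive)))  # [3.0, 0.5, 0.0]
```

Blurring the raw light accumulation before the albedo multiply is what caused the corner bleeding: the blur smeared light across texels that the albedo multiply then re-tinted.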



