CryZe

Member Since 19 Jun 2011
Offline Last Active Nov 05 2013 09:48 PM

#4979621 GenerateMipMaps

Posted by CryZe on 13 September 2012 - 01:43 AM

Inside your DeviceContext, as far as I know.
http://slimdx.org/docs/#M_SlimDX_Direct3D11_DeviceContext_GenerateMips


#4979610 Some questions about bloom

Posted by CryZe on 13 September 2012 - 12:59 AM

I'm not quite sure what you're doing, so I'm going to describe how your engine should be structured if you want it to be as realistic as possible. I'll divide it into ordered stages:

1. Incoming Radiance Simulation: In this stage you approximate the lighting that shines into your virtual camera / eye. This value should be completely unclamped and can range anywhere from roughly 0.0001 to 1,000,000 (high dynamic range).
Typical Passes in this stage are: G-Buffer Generation, Light Accumulation, Shadow Mapping, SSAO, Sky Rendering, Volumetric Light Scattering, ...

2. Lens Simulation: In this stage you simulate how the incoming light gets modified by travelling through the lenses. Lenses typically cause interreflections of the incoming radiance, which appear as lens flares in the final image; lenses might also need to focus on a specific distance (depth of field); bloom might occur because the glass is not perfectly pure; and the lenses might refract the light, which causes chromatic aberration on cheap lenses.
Typical Passes in this stage are: Lens Flares, Bloom, Depth of Field, Chromatic Aberration

3. Aperture Simulation: The camera / eye needs to adapt to the current average luminance of the incoming lighting, unless you want to set the exposure manually. That adaptation is what you simulate in this stage.
Typical Passes in this stage are: Average Luminance Calculation, Exposure Adjustment

4. Retina Simulation: In this stage you simulate how the adjusted incoming lighting affects the retina. You need to map the incoming high dynamic range lighting into the range [0, 1]. The resulting image can be shown on the screen, but you might want to add HUD elements beforehand.
Typical Passes in this stage are: Tone Mapping

You could swap the second and third stage, because the iris of an eye and the aperture of a camera are actually in front of the lenses. But the result should be the same, and this is typically the way it's implemented, because exposure adjustment is often done inside the tone mapper.
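As a rough sketch of stages 3 and 4 combined (the resource names and the Reinhard operator here are just my own choices, not the only option): scale the HDR color by an exposure derived from the average luminance, then tone map the result into [0, 1]:

Texture2D HdrTexture : register(t0);
Texture2D AverageLuminanceTexture : register(t1); // e.g. the 1x1 mip of a luminance chain
SamplerState LinearSampler : register(s0);

float4 PSToneMap(float4 position : SV_POSITION, float2 uv : TEXCOORD0) : SV_TARGET0
{
    float3 hdrColor = HdrTexture.Sample(LinearSampler, uv).rgb;
    float averageLuminance = AverageLuminanceTexture.Sample(LinearSampler, uv).r;

    // Stage 3: exposure adjustment, mapping the average luminance to a middle grey key value
    float keyValue = 0.18f;
    float exposure = keyValue / max(averageLuminance, 0.0001f);
    float3 exposedColor = hdrColor * exposure;

    // Stage 4: tone mapping (Reinhard), compressing the result into [0, 1]
    float3 ldrColor = exposedColor / (1.0f + exposedColor);
    return float4(ldrColor, 1.0f);
}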

Also, in your case, you don't want to apply bloom to an untextured render of your scene. Bloom happens inside your eye / camera in the real world, and I don't think your eyes see the world untextured.

1. Should I use bloom & tonemapping on the specular component ? (to me it seems to distort the form and "energy" of my specular quite a bit)

You should apply them to everything the virtual eye sees.


#4978106 Separable gaussian blur too dark ?

Posted by CryZe on 08 September 2012 - 03:30 PM

He wasn't applying a weight to the sample with index 0, so both samples with index 1 ended up with higher weights than the center sample, which would not have resulted in a proper Gaussian blur.


#4978056 Separable gaussian blur too dark ?

Posted by CryZe on 08 September 2012 - 12:33 PM

float gaussianKernel(float x, float standardDeviation)
{
	return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}

float4 PS(VSO input) : SV_TARGET0
{
   const int numSamples = 3;
   const float standardDeviation = numSamples / 3.0;

   const float offset[numSamples] = { 0.0, 1.3846153846, 3.2307692308 };
   const float weights[numSamples] = { 0.40261952, 0.2442015368, 0.0544886997 }; // Either use these precomputed weights or the gaussianKernel function

   // This is the weight you forgot: the center sample has to be weighted as well
   float3 texColor = TargetTexture.Sample(TargetTextureSampler, input.UV).xyz * gaussianKernel(0, standardDeviation);

   for (int i = 1; i < numSamples; i++)
   {
	  float weight = gaussianKernel(i, standardDeviation);
	  texColor += TargetTexture.Sample(TargetTextureSampler, input.UV + float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight;
	  texColor += TargetTexture.Sample(TargetTextureSampler, input.UV - float2(offset[i], 0.0f) / _ScreenSize.x).rgb * weight;
   }
   return float4(texColor, 1.0f);
}

You might also want to check out the implementation I'm currently using in my engine (even though I'm currently switching to a more optimized compute shader implementation with a runtime of O(log n) per pixel):
#ifndef MIN_WEIGHT
	 #define MIN_WEIGHT 0.0001f
#endif
#ifndef FILTER
	 #error You have to define the filter. (FILTER = (GAUSSIAN|EXPONENTIAL))
#endif
#define GAUSSIAN 0
#define EXPONENTIAL 1
#if FILTER == GAUSSIAN
	 #ifndef STANDARD_DEVIATION
		  #error You have to define the standard deviation when using a gaussian kernel. (STANDARD_DEVIATION = float)
	 #endif
#elif FILTER == EXPONENTIAL
	 #ifndef MEAN_VALUE
		  #error You have to define the mean value when using an exponential kernel. (MEAN_VALUE = float)
	 #endif
#endif
#ifndef DIRECTION
	 #error You have to define the direction. (DIRECTION = (HORIZONTAL|VERTICAL|int2(x,y)))
#endif
#ifndef MIP
	 #define MIP 0
#endif
#define HORIZONTAL int2(1, 0)
#define VERTICAL int2(0, 1)
Texture2D SourceTexture : register(t0);
cbuffer InfoBuffer : register(b0)
{
	 float Width;
	 float Height;
};
struct PSIn
{
	 float4 Position : SV_POSITION;
	 float2 TexCoord : TEXCOORD0;
	 float2 ScreenPos : SCREEN_POSITION;
};
float gaussianKernel(int x, float standardDeviation)
{
	 return exp(-(x * x) / (2 * standardDeviation * standardDeviation)) / (sqrt(2 * 3.14159265) * standardDeviation);
}
float integratedExponentialKernel(float x, float m)
{
	 return 0.5 * (1 - exp(-x / m) / 2) * (sign(x) + 1) - 0.25 * exp(x / m) * (sign(x) - 1);
}
float exponentialKernel(int x, float m)
{
	 return integratedExponentialKernel(x + 0.5, m) - integratedExponentialKernel(x - 0.5, m);
}
float filter(int x)
{
	 #if FILTER == GAUSSIAN
		  return gaussianKernel(x, STANDARD_DEVIATION);
	 #elif FILTER == EXPONENTIAL
		  return exponentialKernel(x, MEAN_VALUE);
	 #endif
}
float3 sample(int2 position, int offset)
{
	 float3 textureColor = 0.0f;
	 float2 newOffset = offset * DIRECTION;

	 if (newOffset.x >= -8 && newOffset.x <= 7 && newOffset.y >= -8 && newOffset.y <= 7)
		  textureColor = SourceTexture.Load(
		   int3(position, MIP),
		   newOffset);
	 else
		  textureColor = SourceTexture.Load(int3(position + newOffset, MIP));
	
	 return textureColor;
}
float4 PSMain(PSIn Input) : SV_Target
{
	 float3 accumulatedColor = 0.0f;
 
	 float accumulatedWeight = 0, weight = 0;
	 [unroll]
	 for (int x = 0; (weight = filter(x)) > MIN_WEIGHT; ++x)
	 {
		  accumulatedWeight += (x != 0) ? (2 * weight) : weight;
	 }
	 [unroll]
	 for (int x = 0; (weight = filter(x)) > MIN_WEIGHT; ++x)
	 {
		  accumulatedColor += weight / accumulatedWeight * sample((int2)Input.ScreenPos, x);
		  if (x != 0)
			   accumulatedColor += weight / accumulatedWeight * sample((int2)Input.ScreenPos, -x);
	 }
	 return float4(accumulatedColor, 1);
}



#4977120 An idea for rendering geometry

Posted by CryZe on 06 September 2012 - 02:51 AM

Well, if the geometry were completely static, you could bake the geometry into a buffer containing as many prerendered hash table images (images containing all the geometry needed for each pixel) as possible, rendered from as many directions on the hemisphere as possible (orthographic projection). You could then use this buffer to render the geometry from any point with any view direction in constant time (as long as the hash tables only contain one geometry intersection per bucket).

You'd still only have one sample per pixel though :/

Also, this could only work in theory. The buffer would probably be multiple terabytes in size to support rendering HD images in constant time.


#4975468 Best way of handling different Shader functions

Posted by CryZe on 01 September 2012 - 10:15 AM

If you're compiling them separately, there's no way to do both passes in a single shader. A shader is a program running on a warp / wavefront. To be able to blur the result of the SSAO, the whole pipeline would have to be synchronized across all the different warps / wavefronts, and only the driver, controlled through DirectX, can do that. That's why it's impossible within one shader. The Effect framework introduces such features in the HLSL language, but it actually executes them as DirectX function calls and splits the different passes and techniques into individual shaders, hiding that from the user.
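To illustrate (the shader functions and pass names below are hypothetical, not from your code), this is roughly what the Effect framework's pass syntax boils down to; each pass is compiled into its own shader, and the application rebinds resources and issues a separate draw between them:

// Hypothetical fullscreen vertex shader and the two pixel shader stubs
float4 VSFullscreen(uint id : SV_VertexID) : SV_POSITION
{
    // Generates a fullscreen triangle from the vertex id
    float2 uv = float2((id << 1) & 2, id & 2);
    return float4(uv * float2(2, -2) + float2(-1, 1), 0, 1);
}

float4 PSSSAO(float4 position : SV_POSITION) : SV_TARGET0 { /* compute raw occlusion */ return 1; }
float4 PSBlur(float4 position : SV_POSITION) : SV_TARGET0 { /* blur the SSAO result bound as a texture */ return 1; }

technique11 SSAOWithBlur
{
    pass SSAO
    {
        SetVertexShader(CompileShader(vs_5_0, VSFullscreen()));
        SetPixelShader(CompileShader(ps_5_0, PSSSAO()));
    }
    pass Blur
    {
        SetVertexShader(CompileShader(vs_5_0, VSFullscreen()));
        SetPixelShader(CompileShader(ps_5_0, PSBlur()));
    }
}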


#4973697 Dynamic vertex buffers?

Posted by CryZe on 27 August 2012 - 01:23 AM

Also, you shouldn't transform every vertex every frame on the CPU. Instead use a vertex shader. The GPU is optimized to transform your vertices. Simply upload a transformation matrix to the graphics card and let a vertex shader deal with the transformation.
If you actually want to transform each vertex in a different way, updating your vertex buffer every frame is the way to go.
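As a minimal sketch of the first case (the names PerObject, WorldViewProjection and VSMain are just placeholders): the per-object transform lives in a constant buffer, and every vertex is transformed on the GPU.

cbuffer PerObject : register(b0)
{
    float4x4 WorldViewProjection; // uploaded once per object per frame
};

struct VSIn  { float3 Position : POSITION; };
struct VSOut { float4 Position : SV_POSITION; };

VSOut VSMain(VSIn input)
{
    VSOut output;
    output.Position = mul(float4(input.Position, 1.0f), WorldViewProjection);
    return output;
}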


#4972104 Passing matrices vs passing floats in instance buffer?

Posted by CryZe on 22 August 2012 - 01:28 AM

So there are no SV semantics for a matrix per vertex. Honestly you probably don't need a matrix per vertex unless you're doing skinning, in which case it's better to just pass it in as a matrix array in a constant buffer.

You can store per instance data in a second vertex buffer. The Input Assembler combines the per vertex data and the per instance data for each vertex shader call.

BTT: It's better to upload just a single matrix to the GPU, because transforming with it results in just 4 DP4 instructions, while uploading position, rotation and scale separately would result in considerably more instructions. Quaternions are probably faster though.

Also, you don't need to use the TEXCOORD# semantics anymore. Since DirectX 10 you can use any semantic name you want. To upload a matrix you simply upload four float4 values with the same semantic name but different indices, e.g. WVP0, WVP1, WVP2, WVP3.
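A minimal sketch of what that per-instance layout can look like in HLSL (the semantic name WVP and the struct are just examples, not required names):

struct VSIn
{
    // Per-vertex data, fed from the first vertex buffer
    float3 Position : POSITION;

    // Per-instance data, fed from the second vertex buffer: one float4 per matrix row
    float4 WVP0 : WVP0;
    float4 WVP1 : WVP1;
    float4 WVP2 : WVP2;
    float4 WVP3 : WVP3;
};

float4 TransformInstanced(VSIn input)
{
    // Reassemble the per-instance matrix and transform the vertex with it
    float4x4 wvp = float4x4(input.WVP0, input.WVP1, input.WVP2, input.WVP3);
    return mul(float4(input.Position, 1.0f), wvp);
}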


#4970687 Horrible performance or not ?

Posted by CryZe on 17 August 2012 - 04:16 PM

You should never clear the whole G-Buffer. Simply clearing the depth buffer should be enough.


#4970508 f32tof16 confusion

Posted by CryZe on 17 August 2012 - 05:14 AM

Why are you manually converting the results anyway? If you're rendering to an R16G16B16A16_FLOAT resource, the Output Merger converts the values for you.

Also, your original code converts the single precision float to a half precision float and then reinterprets the bits as a single precision float. Since the most significant 16 bits are always zero, the reinterpreted single precision value is a denormal, which is effectively zero.
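For reference, a minimal sketch of how these intrinsics are usually used (the function names are my own): f32tof16 returns the half bits in the low 16 bits of a uint, so two halves are normally packed into one uint and unpacked again with f16tof32.

uint PackTwoHalves(float x, float y)
{
    // Low 16 bits hold x, high 16 bits hold y
    return f32tof16(x) | (f32tof16(y) << 16);
}

float2 UnpackTwoHalves(uint packed)
{
    return float2(f16tof32(packed), f16tof32(packed >> 16));
}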


#4969414 Weird issue while rendering screen quad

Posted by CryZe on 14 August 2012 - 05:54 AM

If SV_Position is a float4, the rasterizer assumes that it's a homogeneous coordinate, so it computes the actual position by dividing x, y and z by the w component.
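A minimal sketch of a fullscreen-quad vertex shader that accounts for this (names are placeholders): the positions are already in clip space, so w is set to 1 and the divide leaves them unchanged.

float4 VSFullscreenQuad(float2 clipPosition : POSITION) : SV_POSITION
{
    // x and y are already in [-1, 1]; with w = 1 the perspective divide is a no-op
    return float4(clipPosition, 0.0f, 1.0f);
}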


#4962965 Dual Sphere-Unfolding/Omni-directional shadow maps

Posted by CryZe on 25 July 2012 - 09:43 AM

It's just sphere mapping. The only difference is that they use two sphere maps and let the vertex shader decide which one the vertex gets projected onto, based on whether its y coordinate is positive or negative. They say that it's a one-pass method, but I really doubt it: a triangle that has vertices on both sides of the xz-plane would most likely cause artifacts. You'd probably still need a geometry shader or two passes to render these shadow maps without artifacts. I think it's worse than dual paraboloid shadow mapping, but I'll give it a try.


#4959780 Collada and SlimDX issues

Posted by CryZe on 16 July 2012 - 04:21 PM

To me it looks like you rendered the back faces instead of the front faces.


#4867367 [SOLVED] Dynamic Shader Linkage - What am I doing wrong?

Posted by CryZe on 29 September 2011 - 03:28 PM

I'm trying to implement dynamic shader linkage, but the compiler just doesn't recognize the interface.


D3D11: ERROR: ID3D11DeviceContext::PSSetShader: NumClassInstances should be zero for shaders that don't have interfaces. [ STATE_SETTING ERROR #2097306: DEVICE_SETSHADER_INTERFACE_COUNT_MISMATCH ]



The Shader:

struct PSIn
{
    float4 Position : SV_POSITION;
    float4 Normal : NORMAL0;
    float2 Texcoord : TEXCOORD0;

    noperspective float4 Dist : Dist;
};


interface IAlbedoProvider
{
    float3 ProvideAlbedo(PSIn input);
};

class ConstantAlbedoProvider : IAlbedoProvider
{
    float3 ProvideAlbedo(PSIn input)
    {
        return float3(0.5f, 0.3f, 0.7f);
    }
};

cbuffer Interfaces : register(b0)
{
    ConstantAlbedoProvider constantAlbedo;
}

IAlbedoProvider albedoProvider;


PSOut PSMain(PSIn Input)
{
    PSOut Output;

    Output.Albedo = float4(albedoProvider.ProvideAlbedo(Input), 1.f);

    ...

    return Output;
}

On the CPU side I'm basically creating a class linkage per effect, creating the pixel shader object referring to the class linkage object, retrieving the class instance from the class linkage ("constantAlbedo", 0), and finally, while setting the pixel shader, passing over the class instance. Debugging with PIX revealed that everything went perfectly well on the CPU side. But it just doesn't recognize the interface in the shader...

One might guess that the cbuffer is getting optimized away at compile time, since the shader itself is not explicitly using it. But the variable inside the cbuffer is only used as a specific class instance, and I could just as well create the class instance outside the shader with the D3D11 API (ID3D11ClassLinkage::CreateClassInstance). Since the error implies that the shader doesn't have any interfaces that the class instance could be assigned to, the class instance "constantAlbedo" can't really be the problem. For some reason it just doesn't recognize the interface variable "albedoProvider", and I can't figure out why.

What am I doing wrong?


Update: I can't work on it at the moment, but I figured out that the class instance could indeed be the problem. The compiler throws away all zero-sized objects, so the class must have at least one data member. I guess it might even throw out the interface, since the classes are already thrown out because they don't have any data members. I'm going to try it out later today, but maybe there's still another bug in there.

Update 2: Ok, that actually was the mistake...
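For anyone running into the same thing, a sketch of the fix described above (the dummy member name is arbitrary): give the class at least one data member so neither the class nor the interface gets optimized away.

class ConstantAlbedoProvider : IAlbedoProvider
{
    float Dummy; // any data member prevents the zero-sized class from being thrown away

    float3 ProvideAlbedo(PSIn input)
    {
        return float3(0.5f, 0.3f, 0.7f);
    }
};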


#4846109 Declaring class objects as a pointer or not

Posted by CryZe on 08 August 2011 - 03:22 AM



Using pointers is more work, more code and increased risk for mistakes so I always avoid pointers unless there is a good reason to use pointers.


If you aren't using pointers the object itself is getting saved on the stack. So you should always use pointers. This way only the pointer variable itself is located on the stack and the object is located on the heap. That's it.


That's not true. If the variable is declared inside a heap-allocated class instance as a non-pointer, it will still be allocated on the heap. If you're inside function scope then sure, the variable will be allocated on the stack. Recommending someone to always use pointers is quite a blanket statement however.


Next time I shouldn't try to write my posts in like 10 seconds. I already knew all that, and yes, you shouldn't always allocate all of your objects on the heap. Sorry for the misunderstanding... I thought that, as a beginner, he should just allocate all of them on the heap so that he doesn't run into problems like stack overflows. I simply should've said that. >.<



