
TiagoCosta

Member Since 26 Nov 2008

#5179086 Help with GPU Pro 5 Hi-Z Screen Space Reflections

Posted by TiagoCosta on 09 September 2014 - 08:29 AM

1) It seems as if the author moved away from using spheres aligned with the traced cone (using cubes instead). I wonder why (better coverage of the cone, most likely)


I don't think so. He still uses spheres. He uses each sphere's center (texture coordinates) and radius (mip level) to sample the color/visibility buffers. In the video he is drawing the texels (pink squares) used along the cone (if you pause the video you can actually see the cone behind some of the squares). I'm not sure what the colors of the squares mean, though... My guess is that they show the weight based on visibility and trilinear interpolation that he mentions in the article.
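If it helps to make that concrete, here's a minimal HLSL sketch of that kind of fetch. The texture/sampler names and the SampleSphere function are my own, not the article's:

Texture2D    colorBuffer      : register(t0); // pre-convolved color mip chain
Texture2D    visibilityBuffer : register(t1); // hierarchical visibility mip chain
SamplerState linearSampler    : register(s0); // trilinear

// One cone-traced sphere = one trilinear fetch: the sphere's center gives
// the UV, and its radius (in pixels) picks the mip level.
float4 SampleSphere(float2 sphereUV, float sphereRadiusPixels)
{
    float  mip = log2(sphereRadiusPixels);
    float3 col = colorBuffer.SampleLevel(linearSampler, sphereUV, mip).rgb;
    float  vis = visibilityBuffer.SampleLevel(linearSampler, sphereUV, mip).r;
    return float4(col, vis);
}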
 

2) The cone angle varies!!! Meaning the glossiness term drives the length of the side opposite the cone angle (as opposed to the actual cone angle)?! See how, as the reflected point gets closer to the character, the cone angle widens dramatically. I wonder if I misunderstood the article or if the author is using a different approach in the video...

 

I don't know about that either. I think the angle should stay the same (assuming the glossiness of the ground is constant) as the reflection gets closer to the object, and the cone (and therefore the spheres) should simply get smaller.
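In other words (just a sketch, with my own names; the article derives the angle from the BRDF, which I'm not reproducing here):

// For a fixed glossiness the cone half-angle is fixed, so the sphere
// radius grows linearly with the distance traveled along the ray; as the
// hit point gets closer, the spheres simply get smaller.
float SphereRadius(float distAlongRay, float glossiness)
{
    // Illustrative glossiness-to-angle mapping only.
    float halfAngle = lerp(radians(45.0), radians(1.0), glossiness);
    return distAlongRay * tan(halfAngle);
}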




#5178975 Help with GPU Pro 5 Hi-Z Screen Space Reflections

Posted by TiagoCosta on 08 September 2014 - 06:06 PM

Here's an example of the case that needs to be solved (note: you can see the color values that will be fetched by each sphere; the bigger the sphere, the blurrier the fetch). This is the case where a reflection approaches an edge. In the top image the ray hits the sphere; in the bottom image the ray hits the background, and all of the contribution comes from the edge sphere (so the reflection result is the blurred sky). The question is: how should the weight of each of those spheres be set so that there is a smooth transition between the left and the right reflection?

 

Are you weighting the spheres using the visibility buffer? The article mentions 3 weighting methods: basic averaging, distance-based weighting, and hierarchical pre-integrated visibility buffer weighting.
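For what it's worth, this is how I picture visibility-based weighting working (just a sketch with my own names, not the article's code):

// Blend the fetched spheres front-to-back, letting each sphere's
// visibility decide its contribution and how much weight is left for
// the spheres behind it.
float3 BlendSpheres(float4 fetches[4]) // rgb = color fetch, a = visibility fetch
{
    float3 blended   = float3(0.0, 0.0, 0.0);
    float  remaining = 1.0;
    for (int i = 0; i < 4; ++i)
    {
        float w = fetches[i].a * remaining;
        blended   += fetches[i].rgb * w;
        remaining -= w;
    }
    return blended;
}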


[Attached images: ct2.jpg, ct3.jpg]

 

Btw, shouldn't each of those circles have a solid color (single texture fetch)?




#5178922 Help with GPU Pro 5 Hi-Z Screen Space Reflections

Posted by TiagoCosta on 08 September 2014 - 01:10 PM

Hi Tiago, not sure if you saw my post #18 but I proposed the same solution. I think it's probably what was intended.

 

Yes, I missed that post! Do you have any idea how the 37.5% was calculated? By applying the modified pre-integration pass, Mip 2 should also have 50% visibility; however, I'm not sure that is correct.

 

Anyway, the formula in post #21 is probably incorrect, because it should only calculate the percentage of empty volume between the coarser cell's min and max z, right?




#5178769 Help with GPU Pro 5 Hi-Z Screen Space Reflections

Posted by TiagoCosta on 07 September 2014 - 06:14 PM

Thinking about it, if really the output is meant to be "the percentage of empty voxel volume relative to the total volume of the cells", then (I think) we should calculate the integration value as:

 

Reading pages 172/173, I think visibility is supposed to be "the percentage of empty space within the minimum and maximum of a depth cell", modulated with the visibility of the previous mip.

 
So I also think there is an error in the pre-integration pass, but the correct code would be:
float4 integration = (fineZ.xyzw - minZ) * abs(coarseVolume) * visibility.xyzw;

This makes the MIP 1 diagram on page 159 correct, but I still have no idea how the 37.5% visibility on MIP 2 was calculated.
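For context, here's how I picture that line slotting into the pre-integration pass (just a sketch; I'm reusing the variable names from the code above, and the input fetches are omitted):

// Downsample step: combine 4 finer cells into 1 coarser cell.
float PreIntegrate(float4 fineZ,      // max depth of the 4 finer cells
                   float4 visibility, // visibility of the 4 finer cells (1.0 at mip 0)
                   float  minZ,       // min depth of the coarser cell
                   float  maxZ)       // max depth of the coarser cell
{
    float coarseVolume = 1.0f / (maxZ - minZ);

    // Percentage of empty space between the coarser cell's min and max z,
    // modulated by the visibility already accumulated in the finer mip:
    float4 integration = (fineZ.xyzw - minZ) * abs(coarseVolume) * visibility.xyzw;

    // Average the 4 finer cells into the coarser one:
    return dot(integration, float4(0.25, 0.25, 0.25, 0.25));
}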

 

Can one of you try the line of code above in your implementation and see how it looks? I haven't had time to implement the article myself.

 

Btw, has anyone tried to contact the article author about the source code? I wasn't able to find it anywhere.

 




#5178402 Very fast 2D frustum culling

Posted by TiagoCosta on 05 September 2014 - 03:13 PM

Since you don't take the object's size into account, how can you tell it's outside the frustum? E.g. a large wall whose origin (returned by getPos()) is outside the frustum while part of it is still visible.




#5177650 Culling out-of-frustum models that contribute shadows

Posted by TiagoCosta on 02 September 2014 - 07:31 AM

Is there a clever general approach to this? Should I be calculating the volume of the lights (I can do this in shaders already for debugging), calculating what models intersect those volumes and stop them from being frustum culled? Is that the best general approach?

 

You almost have the right idea.

 

When rendering shadow maps you should still perform frustum culling, but instead of using the camera frustum you should build a frustum from each light's view and projection matrices (the ones you use to draw the models into the shadow map).

 

So when creating shadow maps you should do something like this:

foreach shadow_casting_light i
{
    generate frustum from light i's view/projection matrices;

    test models against frustum;

    render visible models to shadow map;
}

So if you have n shadow casting lights, you'll have n lists of visible models (one for each light) plus another list for the final rendering from the camera perspective.

 

For point lights, you can generate six lists per light (one for each face of the cube shadow map), simply test which models are within the light's bounding sphere, or combine both approaches.
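In C++ it might look roughly like this (just a sketch: Frustum, Light, Model, and the two helpers are placeholders, and the geometry tests are elided):

#include <vector>

struct Model;
struct Frustum;
struct Light { /* view/projection matrices, shadow map, ... */ };

Frustum BuildFrustum(const Light& light);             // from the light's view * projection
bool    Intersects(const Frustum& f, const Model& m); // frustum vs. bounding volume

// One visible-model list per shadow-casting light, rebuilt each frame.
std::vector<std::vector<const Model*>> CullForShadows(
    const std::vector<Light>& lights, const std::vector<Model>& models)
{
    std::vector<std::vector<const Model*>> visible(lights.size());
    for (size_t i = 0; i < lights.size(); ++i)
    {
        const Frustum frustum = BuildFrustum(lights[i]);
        for (const Model& model : models)
            if (Intersects(frustum, model))
                visible[i].push_back(&model);
    }
    return visible;
}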




#5177228 Constant buffer - Memory error

Posted by TiagoCosta on 31 August 2014 - 10:57 AM

First of all, enable the D3D Debug Layer, it will help you debug D3D related errors in the future.

 

That error is probably thrown because the size of a constant buffer must be a multiple of 16, and sizeof(CBPixelShader) == 36, which isn't a multiple of 16.

 

Check the remarks of D3D11_BUFFER_DESC:

If the bind flag is D3D11_BIND_CONSTANT_BUFFER, you must set the ByteWidth value in multiples of 16, and less than or equal to D3D11_REQ_CONSTANT_BUFFER_ELEMENT_COUNT.

 

To fix this, add some padding variables to your structures:

struct CBChangesEveryFrame
{
    XMMATRIX mWorld;
    int cPrzez;
    int cKolorP;
    int pad0, pad1; // Padding to reach a multiple of 16 bytes (80)
};

struct CBPixelShader
{
    XMFLOAT4 kolorS;
    XMFLOAT4 kolorL;
    int Trans;
    int pad0, pad1, pad2; // Padding to reach a multiple of 16 bytes (48)
};

struct INS
{
    XMFLOAT4X4 IN_N;
    float Light_Simple;
    float pad0, pad1, pad2; // Padding to reach a multiple of 16 bytes (80)
};

You don't have to add padding in your .fx (HLSL) files, because the HLSL compiler automatically adds the necessary padding to pack data on 4- and 16-byte boundaries.
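Alternatively, you can keep your C++ structs unpadded and just round the size up when filling the buffer description (the member offsets must still match the HLSL packing, which they do here):

// Round sizeof(CBPixelShader) up to the next multiple of 16 at buffer
// creation time, instead of padding the struct itself.
D3D11_BUFFER_DESC desc = {};
desc.Usage          = D3D11_USAGE_DYNAMIC;
desc.BindFlags      = D3D11_BIND_CONSTANT_BUFFER;
desc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
desc.ByteWidth      = static_cast<UINT>((sizeof(CBPixelShader) + 15) & ~15); // 36 -> 48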

 

Some more info regarding packing rules for HLSL. I strongly advise you to read it.




#5176212 Directx 11 instancing

Posted by TiagoCosta on 26 August 2014 - 10:00 AM

Internally the compiler expands

row_major matrix instancePos : INSTANCEPOS; 

into something like:

float4 instancePos : INSTANCEPOS0;
float4 instancePos : INSTANCEPOS1;
float4 instancePos : INSTANCEPOS2;
float4 instancePos : INSTANCEPOS3;

 
So the input layout should be:

{ "INSTANCEPOS", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 0, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
{ "INSTANCEPOS", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 16, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
{ "INSTANCEPOS", 2, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 32, D3D11_INPUT_PER_INSTANCE_DATA, 1 },
{ "INSTANCEPOS", 3, DXGI_FORMAT_R32G32B32A32_FLOAT, 1, 48, D3D11_INPUT_PER_INSTANCE_DATA, 1 }



#5169272 Anyone guess what exactly all these mean?

Posted by TiagoCosta on 26 July 2014 - 04:24 AM

To my understanding of vectors, the look-at vector should be the camera's position minus the position of the look-at target.


That's incorrect. The look-at vector should be (position of look-at target) - (camera's position).
 
The vector AB = B - A.
 
So:
cam.mvFacing = OgreVec3ToBourneVec3(mSceneMgr->getSceneNode("Worker00001Node")->getPosition()) - cam.mvPosition;

But it perhaps looks in the opposite direction. I have tried reversing the order of subtraction, to no avail.

 
What exactly happens when you reverse the order of subtraction?
 
Do Ogre and Dolly use the same coordinate system? Maybe you are mixing left-handed and right-handed coordinates...


#5168105 Anyone guess what exactly all these mean?

Posted by TiagoCosta on 21 July 2014 - 04:04 AM

Assuming that mvFacing is the look direction, mvCross is probably the right (or left) vector, because it can be computed as the cross product of mvUp and mvFacing.

 

I'm not sure about mvView. My guess is that it is the position the camera is looking at, so mvFacing = mvView - mvPosition.
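If that guess is right, the relationships would look like this (a sketch using DirectXMath, assuming the members are stored as XMFLOAT3):

#include <DirectXMath.h>
using namespace DirectX;

struct Camera { XMFLOAT3 mvPosition, mvView, mvUp, mvFacing, mvCross; };

void UpdateBasis(Camera& cam)
{
    XMVECTOR position = XMLoadFloat3(&cam.mvPosition);
    XMVECTOR view     = XMLoadFloat3(&cam.mvView);
    XMVECTOR up       = XMLoadFloat3(&cam.mvUp);

    // facing = look-at target minus camera position.
    XMVECTOR facing = XMVector3Normalize(XMVectorSubtract(view, position));
    // cross = right vector in a left-handed system, left in a right-handed one.
    XMVECTOR cross  = XMVector3Cross(up, facing);

    XMStoreFloat3(&cam.mvFacing, facing);
    XMStoreFloat3(&cam.mvCross, cross);
}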

 

Can you post the code where those vectors are initialized? 




#5154636 GPU bottlenecks and Sync Points

Posted by TiagoCosta on 19 May 2014 - 09:02 AM

Hi,

 

After reading a few presentations from past GDC about DX performance I'm a little confused:

 

1 - (From GDC 2012, slide 44) How is it possible to be vertex-shading limited? Aren't ALU units shared between shader stages (on D3D11 hardware, anyway)? So there shouldn't be any hardware resources sitting idle waiting for the vertex shader to finish...

 

2 - Regarding CPU-GPU sync points: currently my engine uses the same buffer to draw almost every object (so the same cbuffer is Map()/Unmap()ed with DISCARD hundreds or thousands of times per frame, every frame). Is this crazy? Most samples do it this way, but they're samples...

Anyway I'll add an option in debug builds to detect sync points like suggested in the presentation.

 

3 - "Buffer Rename operation (MAP_DISCARD) after deallocation" (slide 9 from 1st link above) - What are these rename operations?

 

Thanks.




#5154139 [SOLVED] D3DX11CreateShaderResourceViewFromFile - deprecated.

Posted by TiagoCosta on 16 May 2014 - 04:56 PM

DDS (DirectDraw Surface) is a texture format.
 
If you want to load a .jpg texture, use the WICTextureLoader (since it supports BMP, JPEG, PNG, TIFF, GIF, etc).
 
Anyway, I think that error is being caused by something else. Which Visual Studio version are you using? And did you use the appropriate project file to compile DirectXTex?

 

Did you call CoInitialize or CoInitializeEx? 

The library assumes that the client code will have already called CoInitialize or CoInitializeEx as needed by the application before calling any DirectXTex routines
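In code, the order would be something like this (a sketch; the file name is a placeholder and device creation is omitted):

#include <objbase.h>          // CoInitializeEx
#include <d3d11.h>
#include <wrl/client.h>       // Microsoft::WRL::ComPtr
#include "WICTextureLoader.h" // DirectX::CreateWICTextureFromFile (DirectXTK)

using Microsoft::WRL::ComPtr;

// COM must be initialized before any WIC-based loading.
HRESULT LoadJpgTexture(ID3D11Device* device, ComPtr<ID3D11ShaderResourceView>& srv)
{
    HRESULT hr = CoInitializeEx(nullptr, COINIT_MULTITHREADED);
    if (FAILED(hr))
        return hr;

    return DirectX::CreateWICTextureFromFile(
        device, L"texture.jpg", nullptr, srv.GetAddressOf());
}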

 

 

EDIT:



I have tried compiling it into a library using the Visual Studio project files from "DirectXTex\DirectXTex" and linking via "additional library paths" (or a similarly named option), putting a dependency on the generated ".lib" in the linker. This failed to even build, with an unresolved external symbol (needless to say, the linker didn't like it).

 

I'm pretty sure you have to compile it to a library in order to use it.

 

Can you write a copy of the errors here?

I've only used DirectXTK (not DirectXTex), but I didn't have any problems linking.




#5152005 Managing instancing

Posted by TiagoCosta on 07 May 2014 - 04:02 AM

Hi,
 
Currently my engine only supports instancing in a few limited cases, and I'm trying to fix that by implementing a more generic system.
 
When do engines usually find objects that can be instanced? Dynamically every frame after culling? Or at a "higher-level" by keeping a list of objects that use the same model?
 
Currently my scene struct looks like this:
 

struct Scene
{
    uint             num_actors;
    InstanceData*    actors_instance_data;
    uint*            actors_node_handles; //handles of actors' scene graph node(used to get world matrix)
    Model**          actors_models;
    BoundingSphere** bounding_spheres;
};

I could do it every frame after culling, by sorting the actors by model: if two actors a and b use the same model (actors_models[a] == actors_models[b]), they can be instanced by copying actors_instance_data[a] and actors_instance_data[b] to a constant buffer, as in the sketch below.
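Something like this (a sketch; it reuses the Scene struct above and assumes 'visible' holds the indices of the actors that survived culling):

#include <algorithm>
#include <cstdint>
#include <vector>

// Sort the visible actor indices by model pointer, then every run of
// equal models becomes one instanced batch.
void BuildInstanceBatches(const Scene& scene, std::vector<uint32_t>& visible)
{
    std::sort(visible.begin(), visible.end(),
        [&](uint32_t a, uint32_t b) {
            return scene.actors_models[a] < scene.actors_models[b];
        });

    for (size_t i = 0; i < visible.size(); )
    {
        size_t j = i + 1;
        while (j < visible.size() &&
               scene.actors_models[visible[j]] == scene.actors_models[visible[i]])
            ++j;

        // Copy actors_instance_data for visible[i] .. visible[j - 1] into a
        // constant buffer and issue one instanced draw for (j - i) instances.
        i = j;
    }
}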

 

Does this seem reasonable or should I go with a "higher level" option?

 

Thanks.




#5148558 HLSL float4x4 vs float3x3

Posted by TiagoCosta on 21 April 2014 - 11:13 AM

No.

For example, in order to transform a tangent-space normal to object space (while applying normal mapping), I use a 3x3 matrix, since translation doesn't affect normals (vectors).
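For example (a sketch, my own names):

// Vectors ignore translation, so a 3x3 matrix is enough. (With non-uniform
// scaling you would use the inverse transpose of the world matrix instead.)
float3 TransformNormal(float3 tangentNormal, float3x3 tangentToObject, float4x4 world)
{
    float3 objectNormal = mul(tangentNormal, tangentToObject);
    // Casting the 4x4 world matrix to 3x3 drops the translation row.
    return normalize(mul(objectNormal, (float3x3)world));
}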




#5146494 Rendering a subset in DX11

Posted by TiagoCosta on 12 April 2014 - 05:12 AM

You just have to call DrawIndexed with the correct IndexCount (the number of indices in the subset you want to draw) and StartIndexLocation (the position within the index buffer of the first index of that subset). The other draw calls have similar arguments.

struct Subset
{
    uint index_count;
    uint start_index;
    Shader* shaders;
    Texture* textures;
    CBuffer* cbuffers;
};

void EFFECTMESH::Render(uint num_subsets, Subset* subsets)
{
    // Set vertex buffer stride and offset.
    unsigned int stride = sizeof(NMVertex);
    unsigned int offset = 0;

    // Set the vertex buffer to active in the input assembler so it can be rendered.
    GE->devcon->IASetVertexBuffers(0, 1, &VB, &stride, &offset);

    // Set the index buffer to active in the input assembler so it can be rendered.
    GE->devcon->IASetIndexBuffer(IB, DXGI_FORMAT_R32_UINT, 0);

    // Set the type of primitive that should be rendered from this vertex buffer, in this case triangles.
    GE->devcon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);

    for (uint i = 0; i < num_subsets; i++)
    {
        // Set correct shaders (subsets[i].shaders)
        // Bind correct textures (subsets[i].textures)
        // Bind correct cbuffers (subsets[i].cbuffers)

        // Draw current subset
        GE->devcon->DrawIndexed(subsets[i].index_count, subsets[i].start_index, 0);
    }
}




