  • Similar Content

    • By Jason Smith
While working on a project using D3D12, I was getting an exception while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is written in plain C, so it uses the COBJMACROS. The following application reproduces the problem.
      #define COBJMACROS
      #pragma warning(push, 3)
      #include <Windows.h>
      #include <d3d12.h>
      #include <dxgi1_4.h>
      #pragma warning(pop)

      IDXGIFactory4 *factory;
      ID3D12Device *device;
      ID3D12DescriptorHeap *rtv_heap;

      int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
      {
          (hinst), (pinst), (cline), (cshow);

          HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
          hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device);

          D3D12_DESCRIPTOR_HEAP_DESC desc;
          desc.NumDescriptors = 1;
          desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
          desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
          desc.NodeMask = 0;
          hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

          D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
          (rtv);
          return 0;
      }

      The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
      mov  qword ptr [rdx],rax
      which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.
       
    • By lubbe75
      As far as I understand, there is no true random or noise function in HLSL.
      I have a big water polygon, and I'd like to fake water-wave normals in my pixel shader. I know it's not efficient and the standard way is to use a pre-calculated noise texture, but anyway...
      Does anyone have quick and dirty HLSL shader code that fakes water normals and doesn't look too repetitive?
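      For reference, a quick-and-dirty sketch of the usual hash-based trick; all names, frequencies and amplitudes below are illustrative, not from the thread:

      // Classic quick-and-dirty HLSL hash; visibly repeats at large scales.
      float hash(float2 p)
      {
          return frac(sin(dot(p, float2(12.9898f, 78.233f))) * 43758.5453f);
      }

      // Smoothed value noise built on the hash.
      float valueNoise(float2 p)
      {
          float2 i = floor(p);
          float2 f = frac(p);
          f = f * f * (3.0f - 2.0f * f);              // smoothstep fade
          float a = hash(i);
          float b = hash(i + float2(1, 0));
          float c = hash(i + float2(0, 1));
          float d = hash(i + float2(1, 1));
          return lerp(lerp(a, b, f.x), lerp(c, d, f.x), f.y);
      }

      // Fake a water normal from finite differences of the noise height field.
      float3 fakeWaterNormal(float2 posXZ, float time)
      {
          float e = 0.1f;                             // sampling offset
          float2 p = posXZ * 4.0f + time * 0.5f;      // frequency/scroll speed, tune to taste
          float h0 = valueNoise(p);
          float hx = valueNoise(p + float2(e, 0.0f) * 4.0f);
          float hz = valueNoise(p + float2(0.0f, e) * 4.0f);
          return normalize(float3(h0 - hx, e * 4.0f, h0 - hz));
      }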
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent?
      I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer is throwing error messages that this is not supported by the spec, so it might not work on other hardware. There is also the possibility to flip the clip-space position Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I'd need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
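      For what it's worth, the vertex shader change is small; a sketch, assuming the usual gWorldViewProj-style transform and a VULKAN define supplied at shader compile time (both names are illustrative):

      // Vertex shader epilogue: flip clip-space Y only for the Vulkan backend,
      // so the same D3D-style projection matrices work on both devices.
      float4 posH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);
      #ifdef VULKAN
          posH.y = -posH.y;   // Vulkan clip space has Y pointing the other way
      #endif
      vout.PosH = posH;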
    • By NikiTo
      Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.
       
      if (color.A < 0.1f)
      {
          //discard;
          clip(-1);
      }
      // tons of texture reads follow here, and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

      From MSDN:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).
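      For reference, the documented relationship between the two can be sketched like this (the helper name is mine):

      // Conceptually, clip(x) for a scalar x behaves like:
      void clip_equivalent(float x)
      {
          if (x < 0)
              discard;
      }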

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after the discard, I have an expensive branch decision that makes the approved cheap-branch neighbour pixels stall for nothing? This is crazy.)
    • By NikiTo
      I have a problem. My shaders are huge, meaning they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until the long, huge neighbouring shaders complete.
      Initially I wanted to use the stencil to discard pixels before the execution flow even enters the shader, before the GPU distributes/allocates resources for it, avoiding a stall of the pixel shader execution flow. I assumed that Depth/Stencil discards pixels before the pixel shader, but I see now that it happens inside the very last Output Merger stage. It seems extremely inefficient to render a little mirror in a scene with a big viewport that way. Why did they put the stencil test in the output merger anyway? Handling of the stencil is so limited compared to other resources. Do people use the stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will the GPU immediately start using the freed-up resources to render another pixel?!?!
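      On the "test before the pixel shader" point: the logical pipeline places depth/stencil in the Output Merger, but hardware is free to run the test early, and HLSL (Shader Model 5.0) exposes an attribute to force it. A sketch, with an illustrative shader body:

      // [earlydepthstencil] forces the depth/stencil test to run before this
      // pixel shader executes; in exchange the shader must not do anything
      // that could change the test's result (such as writing SV_Depth).
      [earlydepthstencil]
      float4 PS(float4 color : COLOR) : SV_Target
      {
          return color;
      }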



       

DX12: Using a world transform to move objects stored in a single vertex buffer


I'm currently learning how to store multiple objects in a single vertex buffer for efficiency reasons. So far I have a cube and a pyramid rendered using ID3D12GraphicsCommandList::DrawIndexedInstanced, but when the scene is drawn I can't see the pyramid, because it is drawn inside the cube. I'm told to "Use the world transformation matrix so that the box and pyramid are disjoint in world space".

 

Can anyone give insight on how this is accomplished? 

 

     First I initialize the vertices in local space:

std::array<VPosData, 13> vertices =
    {
        //Cube
        VPosData({ XMFLOAT3(-1.0f, -1.0f, -1.0f) }),
        VPosData({ XMFLOAT3(-1.0f, +1.0f, -1.0f) }),
        VPosData({ XMFLOAT3(+1.0f, +1.0f, -1.0f) }),
        VPosData({ XMFLOAT3(+1.0f, -1.0f, -1.0f) }),
        VPosData({ XMFLOAT3(-1.0f, -1.0f, +1.0f) }),
        VPosData({ XMFLOAT3(-1.0f, +1.0f, +1.0f) }),
        VPosData({ XMFLOAT3(+1.0f, +1.0f, +1.0f) }),
        VPosData({ XMFLOAT3(+1.0f, -1.0f, +1.0f) }),

        //Pyramid
        VPosData({ XMFLOAT3(-1.0f, -1.0f, -1.0f) }),
        VPosData({ XMFLOAT3(-1.0f, -1.0f, +1.0f) }),
        VPosData({ XMFLOAT3(+1.0f, -1.0f, -1.0f) }),
        VPosData({ XMFLOAT3(+1.0f, -1.0f, +1.0f) }),
        VPosData({ XMFLOAT3(0.0f,  +1.0f, 0.0f) })
    };

Then the data is stored in a container so sub-meshes can be drawn individually:

SubmeshGeometry submesh;
submesh.IndexCount = 36;            // the cube's 12 triangles; (UINT)indices.size()
                                    // would count the whole buffer, not this submesh
submesh.StartIndexLocation = 0;
submesh.BaseVertexLocation = 0;

SubmeshGeometry pyramid;
pyramid.IndexCount = 18;            // the pyramid's 6 triangles
pyramid.StartIndexLocation = 36;    // pyramid indices start after the cube's 36
pyramid.BaseVertexLocation = 8;     // pyramid vertices start after the cube's 8

mBoxGeo->DrawArgs["box"] = submesh;
mBoxGeo->DrawArgs["pyramid"] = pyramid;

 

Then the objects are drawn:

mCommandList->DrawIndexedInstanced(
    mBoxGeo->DrawArgs["box"].IndexCount,
    1, 0, 0, 0);

mCommandList->DrawIndexedInstanced(
    mBoxGeo->DrawArgs["pyramid"].IndexCount,
    1,
    mBoxGeo->DrawArgs["pyramid"].StartIndexLocation,
    mBoxGeo->DrawArgs["pyramid"].BaseVertexLocation,
    0);

 

Vertex Shader

 

cbuffer cbPerObject : register(b0)
{
    float4x4 gWorldViewProj;
};

struct VertexIn
{
    float3 PosL  : POSITION;
    float4 Color : COLOR;
};

struct VertexOut
{
    float4 PosH  : SV_POSITION;
    float4 Color : COLOR;
};

VertexOut VS(VertexIn vin)
{
    VertexOut vout;

    // Transform to homogeneous clip space.
    vout.PosH = mul(float4(vin.PosL, 1.0f), gWorldViewProj);

    // Just pass vertex color into the pixel shader.
    vout.Color = vin.Color;

    return vout;
}

float4 PS(VertexOut pin) : SV_Target
{
    return pin.Color;
}

 




Where's your code that makes the worldviewproj matrix?  Where's your code that makes the world matrix?  Do you understand what the world matrix does?  Do you understand what the view matrix does?  Do you understand what the Projection matrix does?

20 hours ago, Infinisearch said:

Where's your code that makes the worldviewproj matrix?  Where's your code that makes the world matrix?  Do you understand what the world matrix does?  Do you understand what the view matrix does?  Do you understand what the Projection matrix does?

    //Initialization
    XMFLOAT4X4 mWorld = MathHelper::Identity4x4();
    XMFLOAT4X4 mView = MathHelper::Identity4x4();
    XMFLOAT4X4 mProj = MathHelper::Identity4x4();

    // Build the view matrix.
    XMVECTOR pos = XMVectorSet(x, y, z, 1.0f);
    XMVECTOR target = XMVectorZero();
    XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);

    XMMATRIX view = XMMatrixLookAtLH(pos, target, up);
    XMStoreFloat4x4(&mView, view);

    // Combine world, view and projection.
    XMMATRIX world = XMLoadFloat4x4(&mWorld);
    XMMATRIX proj = XMLoadFloat4x4(&mProj);
    XMMATRIX worldViewProj = world * view * proj;

    // Update the constant buffer with the latest worldViewProj matrix.
    ObjectConstants objConstants;
    XMStoreFloat4x4(&objConstants.WorldViewProj, XMMatrixTranspose(worldViewProj));

The world matrix is used when converting coordinates from local space to world space.

The view matrix is used when converting coordinates from world space to camera space.

The projection matrix is used to map 3D coordinates onto a 2D plane.

 

I think I understand these concepts well enough; I'm just not sure how to apply them to only the first 8 or the last 5 vertices of the single vertex buffer.


When you trigger a draw call, you provide a base vertex location and a start index location. If you want to draw a cube and a pyramid from the same vertex and index buffer, you just have to issue two draw calls with the proper triangle counts and offsets to the wanted geometry.

If you want to draw both the cube and the pyramid in a single draw call, considered as a single geometry, you will have to do something like skinned geometry: provide a bone index in the vertices and read it in the vertex shader to index into an array of world matrices, as sketched below.
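A minimal sketch of that idea, reusing the VertexIn/VertexOut shapes from the shader earlier in the thread; the cbuffer layout, the array size and the ObjIndex field are illustrative assumptions:

cbuffer cbPerPass : register(b0)
{
    float4x4 gViewProj;
    float4x4 gWorld[2];   // element 0: cube, element 1: pyramid (illustrative)
};

struct VertexIn
{
    float3 PosL     : POSITION;
    float4 Color    : COLOR;
    uint   ObjIndex : OBJINDEX;   // which object this vertex belongs to
};

struct VertexOut
{
    float4 PosH  : SV_POSITION;
    float4 Color : COLOR;
};

VertexOut VS(VertexIn vin)
{
    VertexOut vout;

    // Pick this vertex's world matrix, then project as usual.
    float4 posW = mul(float4(vin.PosL, 1.0f), gWorld[vin.ObjIndex]);
    vout.PosH = mul(posW, gViewProj);
    vout.Color = vin.Color;
    return vout;
}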

 

Also, I would advise against describing everything as matrices; think in terms of transformations instead, because a transformation is not required to be a matrix. For example, a world transformation could be a quaternion for orientation, a vector3 for translation and a scalar for scale (no, don't do non-uniform scale, it is bad!).
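A sketch of such a representation in DirectXMath terms (struct layout and names are illustrative):

// Hypothetical compact world transform: quaternion + translation + uniform scale.
struct Transform
{
    XMFLOAT4 Rotation;     // orientation quaternion
    XMFLOAT3 Translation;
    float    Scale;        // uniform scale only
};

// Expand to a matrix only at the point one is actually needed.
XMMATRIX ToMatrix(const Transform& t)
{
    return XMMatrixAffineTransformation(
        XMVectorReplicate(t.Scale),    // scaling
        XMVectorZero(),                // rotation origin
        XMLoadFloat4(&t.Rotation),     // rotation quaternion
        XMLoadFloat3(&t.Translation)); // translation
}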

 

 

5 hours ago, galop1n said:

When you trigger a draw call, you provide a base vertex location and a start index location. If you want to draw a cube and a pyramid from the same vertex and index buffer, you just have to issue two draw calls with the proper triangle counts and offsets to the wanted geometry.

If you want to draw both the cube and the pyramid in a single draw call, considered as a single geometry, you will have to do something like skinned geometry: provide a bone index in the vertices and read it in the vertex shader to index into an array of world matrices.

Also, I would advise against describing everything as matrices; think in terms of transformations instead, because a transformation is not required to be a matrix. For example, a world transformation could be a quaternion for orientation, a vector3 for translation and a scalar for scale (no, don't do non-uniform scale, it is bad!).

 

 

The concept makes sense to me, but I still don't understand where to place this transformation in my code. 


The "world" part of a WorldViewProj describes the objects position, rotation and size.

To make different objects not be all in the same place, instead of a single gWorldViewProj in the shaders, you would need a seperate "world" for each object (or a seperate gWorldViewProj if you wish to do the combining of world,view,projection in the cpu like you are at the moment)

You could either have an array of them inside a constantbuffer, or have a structurebuffer. If you are using executeindirect you dont need an array of them as seen by the vertexshader as you can use it to change the start of your constant buffer view to a different piece of data in a single larger buffer which is an array of all of them.

Your vertex could store its objectnumber so it can choose the correct array element in the vertexshader, but thats not ideal if you wanted to draw multiple of the same object in different positions in the world.

 

As this is DX12 you could solve the problem of transmitting the objectnumber to the gpu without imbedding it in a vertex by using a executeindirect instead of a drawindexed as it is tailor made to draw multiple different objects in a single draw

Executeindirect can change values in the rootsignature with each draw it issues so it could either be used to pass the objectnumber as a rootconstant or even better can change a single gWorldViewProj constantbufferview and not need an array in the vertexshader and this would be faster.

See the D3D12ExecuteIndirect sample https://msdn.microsoft.com/en-us/library/windows/desktop/mt186624(v=vs.85).aspx

That sample does this very thing by having a different position for each object by changing the constantbufferview to start at a different element in a bigger buffer which is basically an array of them.
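For the simpler two-draw-call route the thread started with, here is a minimal sketch. It assumes the root signature binds a root-level constant buffer view at parameter 0 (with a descriptor table you would instead switch descriptors between draws), reuses view and proj from the code above, and names like mObjectCB and mappedData are illustrative:

// Hypothetical: two ObjectConstants slots in one upload buffer, each
// aligned to 256 bytes as D3D12 requires for constant buffer views.
const UINT cbSize = (sizeof(ObjectConstants) + 255) & ~255u;

// Distinct world matrices translate the two objects apart in world space.
ObjectConstants cubeConstants, pyramidConstants;
XMStoreFloat4x4(&cubeConstants.WorldViewProj,
    XMMatrixTranspose(XMMatrixTranslation(-2.0f, 0.0f, 0.0f) * view * proj));
XMStoreFloat4x4(&pyramidConstants.WorldViewProj,
    XMMatrixTranspose(XMMatrixTranslation(+2.0f, 0.0f, 0.0f) * view * proj));

// Copy each object's constants into its own 256-byte slot.
BYTE* mappedData = nullptr;
mObjectCB->Map(0, nullptr, reinterpret_cast<void**>(&mappedData));
memcpy(mappedData,          &cubeConstants,    sizeof(cubeConstants));
memcpy(mappedData + cbSize, &pyramidConstants, sizeof(pyramidConstants));

// Re-point the root CBV between the two draws.
D3D12_GPU_VIRTUAL_ADDRESS cbAddress = mObjectCB->GetGPUVirtualAddress();
mCommandList->SetGraphicsRootConstantBufferView(0, cbAddress);
mCommandList->DrawIndexedInstanced(36, 1, 0, 0, 0);    // cube

mCommandList->SetGraphicsRootConstantBufferView(0, cbAddress + cbSize);
mCommandList->DrawIndexedInstanced(18, 1, 36, 8, 0);   // pyramid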


No one has said it, so I will: "If you are not yet an expert at DX11, or if you don't know why your application is in the 1% that would explicitly benefit from DX12, then you should stick to DX11."

DX12 is not a replacement for DX11; it is for edge-case scenarios, just as Vulkan is an edge-case counterpart to OpenGL.

You are struggling with rendering two objects; DX12 is not for you until you have many years of intense DX11 usage behind you.

9 hours ago, Infinisearch said:

Look at the first few lines of your vertex shader.  Also look at the code that makes that constant buffer.

I get that I can create new world and MVP matrices to separately describe the positions, but it still doesn't solve the problem of how I tell the shader to use MVPA on the cube and MVPB on the pyramid.

