


Community Reputation

176 Neutral

About Corefanatic

  1.   Thank you, that worked.
  2. Hi all, I've been trying to start up the D3D12 debug layer, but have not been able to. Here's what I do:

         #include <d3d12sdklayers.h>
         ...
         HMODULE sdk_layers = LoadLibraryA("d3d12SDKLayers.dll");
         ...
         ID3D12Device* device = nullptr;
         D3D12CreateDevice(
             nullptr,
             D3D_FEATURE_LEVEL_11_1,
             __uuidof(ID3D12Device),
             (void**)&device);
         // device created successfully

         ID3D12Debug* debug = nullptr;
         // get the debug interface from the device
         device->QueryInterface(__uuidof(ID3D12Debug), (void**)&debug);
         // debug is null

     I am not sure what is wrong here; has anyone been able to get the debug layer working? Thanks
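A likely cause of the null `debug` pointer above: the debug interface is not queried from the device; it is obtained with `D3D12GetDebugInterface` and must be enabled *before* `D3D12CreateDevice` is called. A minimal sketch (Windows-only; assumes `d3d12.lib` is linked, and the helper name is my own):

```cpp
// Sketch: enable the D3D12 debug layer, then create the device.
// Key point: EnableDebugLayer must run BEFORE device creation.
#include <d3d12.h>

bool CreateDeviceWithDebugLayer(ID3D12Device** outDevice)
{
    ID3D12Debug* debug = nullptr;
    if (SUCCEEDED(D3D12GetDebugInterface(__uuidof(ID3D12Debug), (void**)&debug)))
    {
        debug->EnableDebugLayer(); // must happen before D3D12CreateDevice
        debug->Release();
    }
    return SUCCEEDED(D3D12CreateDevice(
        nullptr,                   // default adapter
        D3D_FEATURE_LEVEL_11_1,
        __uuidof(ID3D12Device),
        (void**)outDevice));
}
```

With the layer enabled this way, API misuse is reported through the debug output rather than failing silently.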
  3. Thank you both, will try this later today once I get to my Win10 machine.
  4. > You can't map a default buffer; it's GPU-access only, you have to use an upload buffer. So desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT should be changed to D3D12_HEAP_TYPE_UPLOAD. There's more information here, in the heap types section. Edit: I should really use the correct nomenclature. What I meant to say is that you can't map a resource that was created on a default heap; you can only map resources created on upload heaps (or readback heaps).

     That makes sense, thank you. The reason I went with a default heap is that the documentation recommends it for data that will be used across multiple frames, such as the static buffer I am creating here. So the question is: how do I get data into a resource on a default heap?
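The standard answer to that question is a staging copy: fill a second buffer on an upload heap from the CPU, then record a GPU-side copy into the default-heap buffer. A sketch (untested here; the helper name and parameters are my own, and it assumes the default-heap buffer was created in the `COPY_DEST` state and the upload buffer in `GENERIC_READ`):

```cpp
// Sketch: CPU -> upload heap via Map/memcpy, then upload -> default heap
// via CopyBufferRegion, then a barrier into the final usable state.
#include <d3d12.h>
#include <cstring>

void UploadToDefaultBuffer(ID3D12GraphicsCommandList* cmdList,
                           ID3D12Resource* defaultBuffer, // on a default heap, COPY_DEST
                           ID3D12Resource* uploadBuffer,  // on an upload heap, GENERIC_READ
                           const void* data, UINT64 size)
{
    // CPU writes go into the upload buffer; range {0,0} means we read nothing back.
    void* mapped = nullptr;
    D3D12_RANGE readRange = { 0, 0 };
    uploadBuffer->Map(0, &readRange, &mapped);
    memcpy(mapped, data, size);
    uploadBuffer->Unmap(0, nullptr);

    // GPU copies upload -> default.
    cmdList->CopyBufferRegion(defaultBuffer, 0, uploadBuffer, 0, size);

    // Transition the default buffer to its usable state.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Transition.pResource = defaultBuffer;
    barrier.Transition.Subresource = 0;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_COPY_DEST;
    barrier.Transition.StateAfter = D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER;
    cmdList->ResourceBarrier(1, &barrier);
}
```

The copy only executes once the command list is submitted, so the upload buffer must stay alive until the GPU has finished with it (e.g. fence on completion before releasing it).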
  5. I am having a problem uploading data to a buffer created on a heap. The heap is created as follows:

         D3D12_HEAP_DESC desc;
         ZeroMemory(&desc, sizeof(desc));

         desc.Alignment = 0;
         desc.SizeInBytes = 1024 * 1024;
         desc.Flags = D3D12_HEAP_FLAG_ALLOW_ONLY_BUFFERS;

         desc.Properties.CPUPageProperty = D3D12_CPU_PAGE_PROPERTY_UNKNOWN;
         desc.Properties.Type = D3D12_HEAP_TYPE_DEFAULT;
         desc.Properties.MemoryPoolPreference = D3D12_MEMORY_POOL_UNKNOWN;
         desc.Properties.CreationNodeMask = 0; // single GPU
         desc.Properties.VisibleNodeMask = 0;  // single GPU

         ID3D12Heap* heap = nullptr;

         device->CreateHeap(&desc, __uuidof(ID3D12Heap), (void**)&heap);

     The heap is created fine. Then I go to create the resource:

         D3D12_RESOURCE_DESC desc;
         ZeroMemory(&desc, sizeof(desc));

         desc.Alignment = 0;
         desc.DepthOrArraySize = 1;
         desc.Dimension = D3D12_RESOURCE_DIMENSION_BUFFER;
         desc.Flags = D3D12_RESOURCE_FLAG_NONE;
         desc.Format = DXGI_FORMAT_UNKNOWN;
         desc.Height = 1;
         desc.Layout = D3D12_TEXTURE_LAYOUT_ROW_MAJOR;
         desc.MipLevels = 1;
         desc.SampleDesc.Count = 1;
         desc.SampleDesc.Quality = 0;
         desc.Width = sizeof(Vertex) * 3; // (32)

         ID3D12Resource* new_resource = nullptr;

         HRESULT ok = device->CreatePlacedResource(
             heap, 0, &desc,
             D3D12_RESOURCE_STATE_VERTEX_AND_CONSTANT_BUFFER,
             nullptr, __uuidof(ID3D12Resource), (void**)&new_resource);

     The resource is also created fine. Then I try to map the resource:

         UINT8* data = 0;
         HRESULT ok = new_resource->Map(0, nullptr, reinterpret_cast<void**>(&data));

     The mapping fails with E_INVALIDARG. Does anyone know what might be the problem?
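As the replies note, `Map` fails here because the resource lives on a default heap. If the goal were simply a CPU-writable buffer, a minimal change to the code above (a sketch, untested; `buffer_desc` stands for the resource desc, renamed to avoid the duplicate `desc`) would be to place it on an upload heap; resources on upload heaps must be created in the `GENERIC_READ` state, and only then does `Map` succeed:

```cpp
// Sketch: same code as above with only the heap type and initial state changed.
desc.Properties.Type = D3D12_HEAP_TYPE_UPLOAD; // was D3D12_HEAP_TYPE_DEFAULT

HRESULT ok = device->CreatePlacedResource(
    heap, 0, &buffer_desc,
    D3D12_RESOURCE_STATE_GENERIC_READ, // required initial state on an upload heap
    nullptr, __uuidof(ID3D12Resource), (void**)&new_resource);

// Map now succeeds; an empty read range signals the CPU won't read the data back.
UINT8* data = nullptr;
D3D12_RANGE no_read = { 0, 0 };
new_resource->Map(0, &no_read, reinterpret_cast<void**>(&data));
```

For truly static data the default heap is still the right long-term home; the upload-heap buffer then serves only as the staging source for a GPU copy.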
  6. Agreed; the viewport sets the area of the render target you render into, so whenever the render target size changes, you need to set the viewport accordingly for the scene to render into the whole render target.
  7. Corefanatic

    AMD's Mantle API

    Some developers, including the one I work for, have been given access to the API documentation so that we can get a better idea of what's coming. I think that AMD developer partners will get access to the API before/around GDC, with a public release later in the year, as mentioned in this thread.
  8. Corefanatic

    AMD's Mantle API

      This is my dream...
  9. Corefanatic

    AMD's Mantle API

    An interesting situation would arise if Nvidia came out with a similar low-level API. Also, I am wondering whether this will let developers have control over CPU-GPU synchronization. If it did, and developers could tightly control when and how data is transferred to and from the GPU, this would be a real winner.
  10. So, I tried building the mesh as in the second image; unfortunately, the problems did not go away. (images: 16x16 mesh; normals)
  11. Setting the light to (0, 1, 0) doesn't help; the problem persists. Rendering the normals into the frame buffer shows the same problem.

      I am building the mesh the way shown in the first picture; I will try the other way, thanks.

      So, I moved the rendering to indexed rendering, so there is only one normal per vertex, and I still get the same result.

      > That may or may not mean that there is exactly one vertex normal being used at each grid point. How many vertices are in your vertex buffer, how many indices in your index buffer, and how many primitives are you drawing? Compare that with your grid size and make sure that you only have (N+1) x (N+1) vertices for a grid of size N x N.

      For a 16x16 grid, there are 256 vertices, and the index buffer holds 1350 indices.
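Those counts are consistent with a shared-vertex grid: 16x16 = 256 vertices means 15x15 quads, and 15 * 15 quads * 2 triangles * 3 indices = 1350. A small sketch of the bookkeeping (helper name is my own):

```cpp
// Sanity check of the grid bookkeeping discussed above: a grid of
// quadsX x quadsY quads has (quadsX+1) x (quadsY+1) shared vertices
// and quadsX * quadsY * 2 triangles, i.e. quadsX * quadsY * 6 indices.
struct GridCounts { int vertices; int indices; };

GridCounts CountsForQuadGrid(int quadsX, int quadsY)
{
    GridCounts c;
    c.vertices = (quadsX + 1) * (quadsY + 1); // corners are shared between quads
    c.indices  = quadsX * quadsY * 6;         // two triangles per quad
    return c;
}
```

For the mesh in this thread, `CountsForQuadGrid(15, 15)` gives 256 vertices and 1350 indices, matching the numbers reported above.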
  12. So, I moved the rendering to indexed rendering, so there is only one normal per vertex, and I still get the same result.
  13. Hi all, I am having a weird problem with normals on my generated terrain. I am not sure whether it is a shader or a mesh issue, but here is how it looks: (images)

      As you can see, I get this pattern along the edges of the triangles. This reminds me of per-vertex shading; however, I am aiming for per-pixel shading. Here are my vertex and pixel shaders.

      VS:

          cbuffer cbToProjection
          {
              float4x4 matToProj;
          }

          struct VS_IN
          {
              float4 Position : POSITION;
              float3 Normal : NORMAL;
          };

          struct VS_OUT
          {
              float4 Position : SV_POSITION;
              float3 NormalWS : TEXCOORD1;
          };

          VS_OUT main(VS_IN IN)
          {
              VS_OUT OUT;
              OUT.Position = mul(IN.Position, matToProj);
              OUT.NormalWS = normalize(IN.Normal);
              return OUT;
          }

      PS:

          struct PS_IN
          {
              float4 Position : SV_POSITION;
              float3 NormalWS : TEXCOORD1;
          };

          float4 main(PS_IN IN) : SV_TARGET0
          {
              float3 normal = normalize(IN.NormalWS);
              float3 toLight = normalize(float3(1, 3, -2));
              float NDotL = saturate(dot(toLight, normal));
              float4 color = float4(1.0f, 1.0f, 1.0f, 1.0f);
              color.rgb *= NDotL;
              return color;
          }

      So what am I doing wrong?
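The shaders above look fine for per-pixel lighting, so a common culprit for this faceted pattern is the mesh side: each vertex carrying a face normal instead of a smoothed normal averaged over its neighborhood. A sketch of smooth per-vertex normals for a heightfield via central differences (assumptions: heights stored row-major, Y-up, uniform cell spacing; names are my own):

```cpp
// Sketch: one smooth normal per height sample, computed from the slope of the
// heightfield rather than from any single triangle's face normal.
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

std::vector<Vec3> SmoothNormals(const std::vector<float>& h,
                                int w, int d, float cell)
{
    // Clamped sampler so border vertices reuse the edge height.
    auto H = [&](int x, int z) {
        if (x < 0) x = 0; if (x >= w) x = w - 1;
        if (z < 0) z = 0; if (z >= d) z = d - 1;
        return h[z * w + x];
    };
    std::vector<Vec3> n(h.size());
    for (int z = 0; z < d; ++z)
        for (int x = 0; x < w; ++x) {
            // Central differences approximate the terrain slope at the vertex.
            float dx = (H(x + 1, z) - H(x - 1, z)) / (2.0f * cell);
            float dz = (H(x, z + 1) - H(x, z - 1)) / (2.0f * cell);
            Vec3 v = { -dx, 1.0f, -dz };
            float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
            n[z * w + x] = { v.x / len, v.y / len, v.z / len };
        }
    return n;
}
```

Because neighboring triangles then share genuinely smooth normals, the interpolated normal varies continuously across triangle edges and the per-pixel lighting no longer shows the triangle pattern.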
  14. In this case it is best to create your own format based on the needs of your application. Let's say your meshes use the following per-vertex layout:

      Vec3 position
      Vec3 normal
      Vec2 uv

      Then it is easy to output your mesh from an application such as Maya or 3ds Max, or to convert formats such as OBJ into your own format outside of your application. What you have to do is create the vertex buffer and save it as binary. The final binary format could look something like this:

      int numberOfVertices
      int vertexSize (in chars)
      {vertices}
      int numberOfIndices
      {indices}

      It is as simple as that. I have done it for my meshes, and it speeds up loading dramatically, as there is little or no parsing done, and the vertex and index data can be sent straight to DirectX to create the buffers. I hope this helps.
  15. Corefanatic

    Simulating object damage using shaders

    I suggest you mask out the pixels where the damage occurs, something along the lines of this:
