Porting to DX12


piluve

Hey guys,

I started working on a port from DX11 to DX12. The first thing was to set everything up to work with D3D11On12, which is done now. Basically, the render frame goes as follows:


D3D12Prepare();         // sets up the command list and command allocators (as well as basic clear and set render targets)
GetWrappedResources();  // D3D11On12 step to acquire the wrapped resources

Render();               // Basically all the D3D11 rendering code, etc.

D3D12End();             // In D3D12Prepare we left the command list open so we can add additional
                        // commands; now close and execute it


Flush();                // Flush all the D3D11 code

Dx12Sync();             // Wait for fence


That setup is working, and I have already changed some commands inside Render() from DX11 to DX12 (basic stuff like setting the viewport).
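For reference, here is a minimal sketch of what the frame flow above typically looks like with the real D3D11On12 API. The member names (mDevice11on12, mContext11, mWrappedBackBuffer, the fence members, etc.) are assumptions for illustration, not the actual code:

```cpp
// Hypothetical frame loop using the D3D11On12 interop API.
// Assumes the wrapped resource was created with CreateWrappedResource,
// declaring D3D12_RESOURCE_STATE_RENDER_TARGET as its in-state.

void Frame()
{
    // D3D12Prepare(): reset allocator and list, clear/set render targets...
    mCmdAllocator->Reset();
    mCmdList->Reset(mCmdAllocator.Get(), nullptr);

    // GetWrappedResources(): AcquireWrappedResources makes the wrapped
    // D3D11 resources usable and transitions them to their declared state.
    ID3D11Resource* wrapped[] = { mWrappedBackBuffer.Get() };
    mDevice11on12->AcquireWrappedResources(wrapped, 1);

    Render(); // all the D3D11 rendering through the 11-on-12 device context

    // D3D12End(): close and submit the D3D12 command list.
    mCmdList->Close();
    ID3D12CommandList* lists[] = { mCmdList.Get() };
    mCmdQueue->ExecuteCommandLists(1, lists);

    // Release before flushing so 11-on-12 can transition the resource back.
    mDevice11on12->ReleaseWrappedResources(wrapped, 1);

    // Flush(): submits the deferred D3D11 work to the shared D3D12 queue.
    mContext11->Flush();

    // Dx12Sync(): signal the fence and wait until the GPU reaches it.
    const UINT64 fenceValue = ++mFenceValue;
    mCmdQueue->Signal(mFence.Get(), fenceValue);
    if (mFence->GetCompletedValue() < fenceValue)
    {
        mFence->SetEventOnCompletion(fenceValue, mFenceEvent);
        WaitForSingleObject(mFenceEvent, INFINITE);
    }
}
```

One detail worth noting: D3D11 work recorded between Acquire and Release is not submitted until ID3D11DeviceContext::Flush, so the ordering of ReleaseWrappedResources and Flush relative to the D3D12 ExecuteCommandLists determines which work the GPU sees first.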

I want to start porting more things inside Render(). For example, we have a simple method that draws a quad without vertex or index buffers (we use the vertex ID inside the shader).

Basically, it should translate to this:


mCmdList->DrawInstanced(4, 1, 0, 0);

But even that simple piece of code is just not working. I would like some advice from someone who has gone through a similar process (using D3D11On12): what are the limitations, which things won't work, etc.?

My main concern right now is that if I want to start issuing commands that touch the IA stage, I will also have to create the PSO, root signature, etc.
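That concern is well founded: on the pure D3D12 path, a DrawInstanced call records nothing usable unless a PSO, root signature, topology, viewport, scissor, and render targets are all bound on the command list first. Below is a hedged sketch of the minimum for the bufferless quad; the shader blobs (quadVS, quadPS) and member names are assumptions, and the RTV format is a placeholder:

```cpp
// Hypothetical minimal state for mCmdList->DrawInstanced(4, 1, 0, 0).
// Uses the d3dx12.h helper structs; quadVS/quadPS are precompiled blobs.

// 1. Empty root signature: the quad shader only reads SV_VertexID.
CD3DX12_ROOT_SIGNATURE_DESC rsDesc(0, nullptr, 0, nullptr,
    D3D12_ROOT_SIGNATURE_FLAG_NONE);
ComPtr<ID3DBlob> blob, error;
D3D12SerializeRootSignature(&rsDesc, D3D_ROOT_SIGNATURE_VERSION_1,
    &blob, &error);
mDevice->CreateRootSignature(0, blob->GetBufferPointer(),
    blob->GetBufferSize(), IID_PPV_ARGS(&mRootSignature));

// 2. PSO: no input layout at all; 4 vertices as a triangle strip.
D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
psoDesc.pRootSignature        = mRootSignature.Get();
psoDesc.VS                    = CD3DX12_SHADER_BYTECODE(quadVS.Get());
psoDesc.PS                    = CD3DX12_SHADER_BYTECODE(quadPS.Get());
psoDesc.RasterizerState       = CD3DX12_RASTERIZER_DESC(D3D12_DEFAULT);
psoDesc.BlendState            = CD3DX12_BLEND_DESC(D3D12_DEFAULT);
psoDesc.DepthStencilState.DepthEnable = FALSE;
psoDesc.SampleMask            = UINT_MAX;
psoDesc.PrimitiveTopologyType = D3D12_PRIMITIVE_TOPOLOGY_TYPE_TRIANGLE;
psoDesc.NumRenderTargets      = 1;
psoDesc.RTVFormats[0]         = DXGI_FORMAT_R8G8B8A8_UNORM; // placeholder
psoDesc.SampleDesc.Count      = 1;
mDevice->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&mQuadPSO));

// 3. At record time, everything must be set explicitly; unlike D3D11
//    there is no state carried over between command lists.
mCmdList->SetPipelineState(mQuadPSO.Get());
mCmdList->SetGraphicsRootSignature(mRootSignature.Get());
mCmdList->IASetPrimitiveTopology(D3D_PRIMITIVE_TOPOLOGY_TRIANGLESTRIP);
mCmdList->RSSetViewports(1, &mViewport);
mCmdList->RSSetScissorRects(1, &mScissorRect);
mCmdList->OMSetRenderTargets(1, &mRtvHandle, FALSE, nullptr);
mCmdList->DrawInstanced(4, 1, 0, 0);
```

The key difference from D3D11 is that none of this state is inherited or defaulted per draw: a command list reset with a null PSO has no pipeline bound, so a lone DrawInstanced is silently invalid.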








  • Similar Content

    • By NikiTo
      Would it be a problem to create ~50 uninitialized arrays of ~300,000 cells each in HLSL and then use them for my algorithm (which is what I currently do in C++, where I had stack overflow problems because of the large arrays)?
      It is all internal to the shader. The shader will create the arrays at the beginning, use them, and then not need them anymore. It does not take data for the arrays from the outside world, and does not give data from the arrays back to the outside world either. Nothing is shared.
      My question is not very specific; it is about memory consumption considerations when writing shaders in general, because my algorithm still has to be polished. I will leave the writing of the HLSL for when the algorithm is totally finished and working (because I expect writing HLSL to be just as unpleasant as GLSL). Still, it is useful to know beforehand which problems to consider.
    • By mark_braga
      I am working on optimizing our descriptor management code. Currently, I am following most of the guidelines like sorting descriptors by update frequency,...
      I have two types of descriptor ranges: Static (DESCRIPTOR_RANGE_FLAG_NONE) and Dynamic (DESCRIPTORS_VOLATILE). So let's say I have this scenario:

      pCmd->bindDescriptorTable(pTable);
      for (uint32_t i = 0; i < meshCount; ++i)
      {
          // The descriptor is created in a range with flag DESCRIPTORS_VOLATILE.
          // setDescriptor will call CopyDescriptorsSimple to copy descriptor
          // handle pDescriptor[i] to the appropriate location in pTable.
          pTable->setDescriptor("descriptor", pDescriptor[i]);
      }

      Do I need to call bindDescriptorTable inside the loop?
    • By nbertoa
      I want to implement anti-aliasing in BRE, but first, I want to explore what it is, how it is caused, and what are the techniques to mitigate this effect. That is why I am going to write a series of articles talking about rasterization, aliasing, anti-aliasing, and how I am going to implement it in BRE.
      Article #1: Rasterization
      All suggestions and improvements are very welcome! I will update this post with new articles.
    • By mark_braga
      I am working on optimizing barriers in our engine but for some reason can't wrap my head around split barriers.
      Let's say, for example, I have a shadow pass, followed by a deferred pass, followed by the shading pass. From what I have read, we can put a begin-only split barrier for the shadow map texture after the shadow pass and an end-only barrier before the shading pass. Here is how the code would look in that case:

      DrawShadowMapPass();
      ResourceBarrier(BEGIN_ONLY, pTextureShadowMap, SHADER_READ);
      DrawDeferredPass();
      ResourceBarrier(END_ONLY, pTextureShadowMap, SHADER_READ);
      // Uses the shadow map for shadow calculations
      DrawShadingPass();

      Now, if I just put one barrier before the shading pass, here is how the code looks:

      DrawShadowMapPass();
      DrawDeferredPass();
      ResourceBarrier(NORMAL, pTextureShadowMap, SHADER_READ);
      // Uses the shadow map for shadow calculations
      DrawShadingPass();

      What's the difference between the two?
      Also if I have to use the render target immediately after a pass. For example: Using the albedo, normal textures as shader resource in the shading pass which is right after the deferred pass. Would we benefit from a split barrier in this case?
      Maybe I am completely missing the point, so any info on this would really help. The MSDN documentation doesn't really help, and another topic I read didn't really help either.
    • By ZachBethel
      I'm reading through the Microsoft docs trying to understand how to properly utilize aliasing barriers to alias resources properly.
      "Applications must activate a resource with an aliasing barrier on a command list, by passing the resource in D3D12_RESOURCE_ALIASING_BARRIER::pResourceAfter. pResourceBefore can be left NULL during an activation. All resources that share physical memory with the activated resource now become inactive or somewhat inactive, which includes overlapping placed and reserved resources."
      If I understand correctly, it's not necessary to actually provide the pResourceBefore* for each overlapping resource, as the driver will iterate the pages and invalidate resources for you. This is the Simple Model.
      The Advanced Model is different:
      Advanced Model
      The active/inactive abstraction can be ignored, and the following lower-level rules must be honored instead:
        • An aliasing barrier must be between two different GPU resource accesses of the same physical memory, as long as those accesses are within the same ExecuteCommandLists call.
        • The first rendering operation to certain types of aliased resource must still be an initialization, just like the Simple Model.
      I'm confused because it looks like, in the Advanced Model, I'm expected to declare pResourceBefore* for every resource which overlaps pResourceAfter* (so I'd have to submit N aliasing barriers). Is the idea here that the driver can either do it for you (null pResourceBefore), or you can do it yourself (specify every overlapping resource)? That seems like the tradeoff here.
      It would be nice if I can just "activate" resources with AliasingBarrier (NULL, activatingResource) and not worry about tracking deactivations.  Am I understanding the docs correctly?