
DX12 How to render my DirectX C++ Engine to a C# Panel


Recommended Posts

So the title pretty much sums it all up. I wanted to know how to set the HWND for my DirectX engine to a C# Panel. I know it involves making a C++/CLI wrapper. I'm learning DirectX from Frank Luna's DirectX 12 book, so the engine is the one here: https://github.com/d3dcoder/d3d12book. If someone could download his source code and make a C++/CLI wrapper out of Chapter 4, that would be great. All you need to do is go to Chapter 4 and open it in Visual Studio, and you'll have all the files there. I know this is a lot to ask, but I've been trying to do this for days and any help would be much appreciated.


What exactly are you trying to do? Use a C++ renderer in a C# application? If so, passing the HWND to the renderer is just one of many, many things you need to implement, so us doing that one thing for you won't get you very far. What seems to be the problem with handing the window handle to the C++ side?

 

FWIW, my editor, which is a C# application, does this to pass the handle to my C++ renderer. In my case the renderer runs in another process, so I use the network rather than a C++/CLI wrapper for the communication, but since the value is just an integer you should have no problem sending it across the language border.

public void InitRenderWindow()
{
    if (EditorApp.Instance.IsConnectedToGame)
    {
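        // Handle is the Win32 HWND of the render panel control; since it is
        // just an integer it can be sent to the native renderer together with
        // the panel's size and position.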
        IntPtr handle = m_renderPanel.Handle;
        EditorApp.Instance.GameRPC.initRenderWindow(handle.ToInt32(), m_renderPanel.Width, m_renderPanel.Height, m_renderPanel.Left, m_renderPanel.Top);
    }
}

But again, if you are having trouble with understanding C++/CLI itself we are not going to do this one thing for you because then you will be stuck on the very next task.


This is a forum, not a hiring platform, so do it yourself. You first need to create a C++ function, exported from the DLL, that takes a void* pointer.

You do this by declaring it as follows (the extern "C" keeps the exported name unmangled so it can be found from C#):

#include <Windows.h>

extern "C" __declspec(dllexport) void SetNetHandle(void* ptr)
{
    // Convert back into an HWND here and pass it on to the DX side
    HWND hwnd = (HWND)ptr;
}

You then bind it from C# with a DllImport attribute on an extern function (keep an eye on the calling convention) and pass an IntPtr from your C# control to that function.
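For illustration, a minimal C# sketch of that binding could look like the one below. The DLL name "Engine.dll" and the Cdecl calling convention are assumptions here; they have to match your actual native project and its export.

using System;
using System.Runtime.InteropServices;

static class EngineInterop
{
    // "Engine.dll" is a placeholder; Cdecl matches the default calling
    // convention of the exported C++ function shown above.
    [DllImport("Engine.dll", CallingConvention = CallingConvention.Cdecl)]
    public static extern void SetNetHandle(IntPtr hwnd);
}

From the form that owns the panel you would then call something like EngineInterop.SetNetHandle(myRenderPanel.Handle), where myRenderPanel stands in for your actual WinForms panel.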


As others have indicated, this forum is not the appropriate place to ask people to do work for you. If you have specific questions or need some general guidance then we'll be happy to help, but please refrain from asking others to write code for you.


So I asked a similar question over on the GameDev Stack Exchange (don't worry, this time I didn't ask someone to do it for me) and here's the answer I got: http://gamedev.stackexchange.com/questions/124249/how-do-i-render-my-directx-c-engine-to-a-c-panel/124275#124275

So I went and tried to implement this, but I think I might've messed up some of my DLL engine code, because I get this error when I run it:

 

An unhandled exception of type 'System.BadImageFormatException' occurred in GUI.exe

Additional information: An attempt was made to load a program with an incorrect format. (Exception from HRESULT: 0x8007000B)

 

So at this point I have no idea how to fix this, so I was wondering if someone here could check out my code for the project and point out what's causing the issue. I know this is really broad; if I could narrow down where the error is myself I would, and then ask for help with that specifically. My code: https://drive.google.com/open?id=0BwmdbcDXQMDyMHNNV2RtVzlvWmM


BadImageFormatException is likely a bitness problem.

 

If you mix in C++/CLI you're usually fixed to either 32-bit or 64-bit. Make sure the C# part matches that bitness (e.g. change "Any CPU" to the platform your C++/CLI wrapper is built for: x64 = 64-bit, x86 = 32-bit).
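As a quick sanity check (a minimal sketch, assuming you have somewhere in the C# host to log from), you can print at startup whether the managed process actually runs as 64-bit and compare that against the platform the C++/CLI wrapper was built for:

using System;

static class BitnessCheck
{
    public static void Report()
    {
        // True when the current process is 64-bit; this must match the
        // platform (x64 vs. x86/Win32) the C++/CLI wrapper DLL targets.
        Console.WriteLine(Environment.Is64BitProcess ? "Running as 64-bit" : "Running as 32-bit");
    }
}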
