  • Similar Content

    • By Jason Smith
      While working on a project that uses D3D12, I was getting an exception thrown while trying to get a D3D12_CPU_DESCRIPTOR_HANDLE. The project is written in plain C, so it uses the COBJMACROS-style interface. The following application replicates the problem happening in the project.
      #define COBJMACROS
      #pragma warning(push, 3)
      #include <Windows.h>
      #include <d3d12.h>
      #include <dxgi1_4.h>
      #pragma warning(pop)

      IDXGIFactory4 *factory;
      ID3D12Device *device;
      ID3D12DescriptorHeap *rtv_heap;

      int WINAPI wWinMain(HINSTANCE hinst, HINSTANCE pinst, PWSTR cline, int cshow)
      {
          (hinst), (pinst), (cline), (cshow);

          HRESULT hr = CreateDXGIFactory1(&IID_IDXGIFactory4, (void **)&factory);
          hr = D3D12CreateDevice(0, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device);

          D3D12_DESCRIPTOR_HEAP_DESC desc;
          desc.NumDescriptors = 1;
          desc.Type = D3D12_DESCRIPTOR_HEAP_TYPE_RTV;
          desc.Flags = D3D12_DESCRIPTOR_HEAP_FLAG_NONE;
          desc.NodeMask = 0;
          hr = ID3D12Device_CreateDescriptorHeap(device, &desc, &IID_ID3D12DescriptorHeap, (void **)&rtv_heap);

          D3D12_CPU_DESCRIPTOR_HANDLE rtv = ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart(rtv_heap);
          (rtv);
          return 0;
      }

      The call to ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart throws an exception. Stepping into the disassembly for ID3D12DescriptorHeap_GetCPUDescriptorHandleForHeapStart shows that the error occurs on the instruction
      mov  qword ptr [rdx],rax
      which seems odd since rdx doesn't appear to be used. Any help would be greatly appreciated. Thank you.
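      If the Windows SDK headers in use declare the C binding of GetCPUDescriptorHandleForHeapStart as returning the struct by value, the crash above is consistent with a known mismatch: the actual vtable entry follows the C++ member-function ABI and writes its result through a hidden return-slot pointer (passed in rdx on x64), which the C-style call never sets. A commonly reported workaround, sketched below under that assumption, is to call the vtable slot through a corrected signature that passes the output explicitly; newer SDK headers reportedly declare the method this way, so updating the SDK may also resolve it. The typedef name PFN_GetCPUHandle here is only illustrative.

      /* Sketch: call the vtable entry with an explicit return-slot pointer,
         assuming the implementation expects it as the second argument (rdx). */
      typedef void (STDMETHODCALLTYPE *PFN_GetCPUHandle)(
          ID3D12DescriptorHeap *self,
          D3D12_CPU_DESCRIPTOR_HANDLE *out);

      D3D12_CPU_DESCRIPTOR_HANDLE rtv;
      ((PFN_GetCPUHandle)rtv_heap->lpVtbl->GetCPUDescriptorHandleForHeapStart)(rtv_heap, &rtv);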
       
    • By lubbe75
      As far as I understand there is no real random or noise function in HLSL. 
      I have a big water polygon, and I'd like to fake water wave normals in my pixel shader. I know it's not efficient and the standard way is really to use a pre-calculated noise texture, but anyway...
      Does anyone have any quick and dirty HLSL shader code that fakes water normals, and that doesn't look too repetitious? 
    • By turanszkij
      Hi,
      I finally managed to get the DX11-emulating Vulkan device working, but everything is flipped vertically now because Vulkan has a different clip space. What are the best practices out there to keep these implementations consistent? I tried using a vertically flipped viewport, and while it works on an Nvidia 1050, the Vulkan debug layer throws errors saying this is not supported by the spec, so it might not work on other hardware. There is also the possibility of flipping the clip-space Y coordinate before writing it out from the vertex shader, but that requires changing and recompiling every shader. I could also bake it into the camera projection matrices, though I want to avoid that because then I need to track down everywhere in the engine where I upload matrices... Any chance of an easy extension or something? If not, I will probably go with changing the vertex shaders.
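      For what it's worth, negative viewport heights are allowed once VK_KHR_maintenance1 is enabled (promoted to core in Vulkan 1.1), so the validation errors described above are likely reported because that extension was not requested at device creation. A minimal sketch of the flipped-viewport approach under that assumption; cmd, render_width and render_height are placeholder names:

      /* Flip the viewport instead of patching every shader. Requires
         VK_KHR_maintenance1 (or Vulkan 1.1+); without it the validation
         layer flags the negative height. */
      VkViewport vp = {0};
      vp.x        = 0.0f;
      vp.y        = (float)render_height;   /* start at the bottom edge...   */
      vp.width    = (float)render_width;
      vp.height   = -(float)render_height;  /* ...and flip Y to point upward */
      vp.minDepth = 0.0f;
      vp.maxDepth = 1.0f;
      vkCmdSetViewport(cmd, 0, 1, &vp);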
    • By NikiTo
      Some people say "discard" has no positive effect on optimization. Other people say it will at least spare the texture fetches.
       
      if (color.A < 0.1f) {
          //discard;
          clip(-1);
      }
      // tons of reads of textures following here
      // and loops too
      Some people say that "discard" will only mask out the output of the pixel shader, while still evaluating all the statements after the "discard" instruction.

      MSDN:
      discard: Do not output the result of the current pixel.
      clip: Discards the current pixel.

      As usual it is unclear, but it suggests that "clip" could discard the whole pixel (maybe stopping execution too).

      I think that, at least for thermal and energy-consumption reasons, the GPU should not evaluate the statements after "discard", but some people on the internet say that the GPU computes the statements anyway. What I am more worried about are the texture fetches after discard/clip.

      (What if, after the discard, I have an expensive branch decision that makes the approved cheap-branch neighbor pixels stall for nothing? That would be crazy.)
    • By NikiTo
      I have a problem. My shaders are huge, meaning they have a lot of code inside. Many of my pixels should be completely discarded. I could use a comparison and discard at the very beginning of the shader, but as far as I understand, the discard statement does not save workload at all, as the pixel has to stall until the long, huge neighboring shaders complete.
      Initially I wanted to use the stencil to discard pixels before the execution flow enters the shader, even before the GPU distributes/allocates resources for it, avoiding stalls in the pixel shader execution flow. I assumed that depth/stencil discards pixels before the pixel shader, but I see now that it happens in the very last Output Merger stage. It seems extremely inefficient to render a little mirror that way in a scene with a big viewport. Why did they put the stencil test in the Output Merger anyway? Handling of the stencil is so limited compared to other resources. Do people use stencil functionality at all for games, or do they prefer discard/clip?

      Will the GPU stall the pixel if I issue a discard at the very beginning of the pixel shader, or will it already start using the freed-up resources to render another pixel?



       

DX12 Something something NVidia dropping DX9 from GTX-2080 and will only be supporting DX12 something something.


Recommended Posts

Hi everyone,

 

I recently saw a post on Reddit where a gamer said that NVidia will drop DX9 support with their next generation of cards, and that the developers of <insert DX9 game here> would need to update to DX12 to work with the new cards. The same user also said that it's no big deal since DX9 is dead and the devs would only need to flip a switch to use the DX12 libs instead of DX9, so he obviously doesn't fully understand what he's talking about. I asked for the source(s) where NVidia has "clearly stated" that DX9 will no longer function and that people who wanted to keep using DX9 would have to get another card, but I have not yet received any. I assume there's a misunderstanding and that what NVidia actually said is something along the lines of "Our DX9 support will be layered on top of DX12 from now on to simplify driver development" or similar, but I was wondering if anyone here could clarify what NVidia actually said?

 

Will NVidia (and/or AMD) base their legacy DX support on top of DX12 from now on, and if so, will it affect gamers and developers? What about performance? Should Microsoft offer a legacy DX compatibility layer on top of DX12 or newer, so that GPU vendors can focus on current APIs only?

Edited by DvDmanDT


Should Microsoft offer a legacy DX compatibility layer on top of DX12 or newer, so that GPU vendors can focus on current APIs only?

 

Given D3D11On12 exists, I don't think it'd be too much of a stretch to say that doing the same thing to D3D9 would be possible. I'm not aware of any plans from anyone to drop native D3D9 driver support though.
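For reference, this is roughly how the existing D3D11On12 layer is wired up from the application side; a hypothetical "9on12" layer would presumably follow the same pattern. This is a minimal sketch with error handling omitted, not production code.

#define COBJMACROS
#include <d3d12.h>
#include <d3d11on12.h>

ID3D12Device        *device12;
ID3D12CommandQueue  *queue12;
ID3D11Device        *device11;
ID3D11DeviceContext *context11;

void create_11on12(void)
{
    D3D_FEATURE_LEVEL level = D3D_FEATURE_LEVEL_11_0;

    /* Create the underlying D3D12 device and a direct command queue. */
    D3D12CreateDevice(NULL, D3D_FEATURE_LEVEL_11_0, &IID_ID3D12Device, (void **)&device12);

    D3D12_COMMAND_QUEUE_DESC qdesc = { D3D12_COMMAND_LIST_TYPE_DIRECT, 0, D3D12_COMMAND_QUEUE_FLAG_NONE, 0 };
    ID3D12Device_CreateCommandQueue(device12, &qdesc, &IID_ID3D12CommandQueue, (void **)&queue12);

    /* The D3D11 runtime is then created on top of the D3D12 device and
       submits its work to the given D3D12 queue. */
    D3D11On12CreateDevice((IUnknown *)device12,
                          D3D11_CREATE_DEVICE_BGRA_SUPPORT,
                          &level, 1,
                          (IUnknown *const *)&queue12, 1,
                          0,
                          &device11, &context11, NULL);
}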


I would double-check your source(s). The only way that is possible is if Nvidia also says that you cannot use the card with Windows XP (and I know it's long since dead, but still)...

Ah, the rumor mill is already producing for next-generation GPUs. All I have to say is to take everything you see with an ocean of salt, at least until the manufacturer does a press release or something.


This smells a lot like somebody trying to start a "my hardware vendor is better than your hardware vendor" war.  Treat with extreme suspicion.


I would double-check your source(s). The only way that is possible is if Nvidia also says that you cannot use the card with Windows XP (and I know it's long since dead, but still)...

 

I don't really have a source except a seemingly clueless user. I do think he read something, though; I don't think he made it all up, but rather misunderstood what the source actually said. According to him, the source says the GTX 2080 will be a DX12-only card. It can't possibly mean the card won't run anything older than DX12, but it could mean that the card is only supported on DX12-compatible OSes (that is, Windows 10+). It seems unlikely to me, but not impossible. I can imagine that it would be really tempting for NVidia etc. to drop DX9 and older and rely on something similar to D3D11On12, especially if Microsoft is the one who maintains that layer. I haven't heard anything about it myself though, which is why I'm asking.

 

Ah, the rumor mill is already producing for next-generation GPUs. All I have to say is to take everything you see with an ocean of salt, at least until the manufacturer does a press release or something.

 

My interpretation/guess is that NVidia has made a statement, either as a press release or some form of magazine preview/teaser. My guess is that it was worded in such a way that some readers misunderstood the meaning. Kind of like the whole "no OpenGL on Vista" thing, where panic spread like wildfire because some reporter/blogger/whatever did not understand what Microsoft actually said in their statement.

Many of the old games moved to Steam. I am playing an ol' skool game, Commandos: Behind Enemy Lines. It runs on DirectX 8 and a Pentium II 300 minimum, and it is still as challenging as it was back then.

DirectX libraries will be just drivers, and I presume that Nvidia keeps older DirectX versions compatible by boxing the previous DirectX driver with new hardware. We in the gaming industry, or any other particular industry, are made to think that DirectX and graphics cards are only for gaming software.
But the fact is that DirectX and graphics cards play a bigger role in software applications for totally distinct industries, such as medical, environmental, geological, neuroscience, oil and gas, etc. These industries have critical hardware, oil and gas for example, that still runs on older versions and gets the job done. Millions are poured into it, and this hardware is required to run for decades. Changing is not an option, as it's costly, software even more so.
It totally makes sense that backward compatibility needs to be maintained. Otherwise we might have a bit of chaos.

Some industries, such as automotive services, are still using DOS-based FoxPro. They do not care to upgrade because we all know that electronic hardware/software will rake your pockets if you keep upgrading every couple of years. Just think about upgrading a washing machine or juicer every year. It totally sucks. We really need electronics that last at least a decade, just like a car.


It would be interesting to see how they would remap SM3.0...

 

EDIT: how do current drivers & OS deal with pre-DX9 code?

Edited by Alessio1989

EDIT: how do current drivers & OS deal with pre-DX9 code?

 

DX8 and below "support" was discontinued in Vista. Programs using those APIs run on a software virtualization layer written by MS and integrated into Windows, which implements the entire API functionality in shader code. It's drastically slower than DX8 in XP, but by the time we got to that point nobody cared.

Edited by Promit


DX8 and below "support" was discontinued in Vista. Programs using those APIs run on a software virtualization layer written by MS and integrated into Windows, which implements the entire API functionality in shader code. It's drastically slower than DX8 in XP, but by the time we got to that point nobody cared.
 

 

Where'd you hear that? 
