
DirectX12's terrible documentation -- some questions


FrozenS    190

Having only used OpenGL and read Mantle's documentation (which is excellent) before, I'm having a very hard time learning DirectX12. The documentation is almost nonexistent, some of it accidentally hyperlinks to pages from an old preliminary version that differs greatly, and the code examples contain mistakes such as trying to call CreateConstantBufferView on a command list instead of the device. So not only is it very sparse, it's plain wrong. This may not seem like a big deal to some since it's easily caught, but it's very exhausting to have to deal with wrong documentation on top of sparse documentation.
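
For reference, here's a minimal sketch of the corrected call, assuming a device, a descriptor heap, and an already-created buffer resource exist (the variable names are mine, not from the docs sample):

```cpp
#include <d3d12.h>

// The view is created through the device into a descriptor heap;
// ID3D12GraphicsCommandList has no CreateConstantBufferView method.
D3D12_CONSTANT_BUFFER_VIEW_DESC cbvDesc = {};
cbvDesc.BufferLocation = constantBuffer->GetGPUVirtualAddress();
cbvDesc.SizeInBytes    = 256; // CBV sizes must be multiples of 256 bytes

device->CreateConstantBufferView(
    &cbvDesc, descriptorHeap->GetCPUDescriptorHandleForHeapStart());
```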

 

Concepts are not explained at all, which makes learning the API very exhausting. You have to memorize the whole API without understanding it, and try to interpret what you're supposed to do based on how the API has been laid out and on what little has been said. It would be great if someone put in the time to create a short tutorial explaining concepts such as Constant Buffer Views, which aren't even mentioned in the documentation except to acknowledge that they exist. Some knowledge can be carried over from DirectX11, which has better resources online, but not everything.

 

Therefore I'm hoping to use this thread to get answers to a few questions I have now and questions I may have later. Here are my questions:

 

1) What are the differences between Shader Resource Views and Constant Buffer Views?

2) What are the different ways for uploading data to the GPU and does the recommended manner differ based on usage patterns?

 

Thanks.


FrozenS    190

Thanks, Radikalizm,

 

Though it's still a bit unclear to me when one would pick a CBV over an SRV, since SRVs can reference buffers, which presumably can be accessed in shaders. And if they can be accessed in shaders, then presumably it's in a similar manner?

 

On a different note, it's interesting to see the different approaches Mantle and DX12 have taken with memory management. Both seem to have the concept of preferred memory pools: in the DX12 heap properties, the memory pool entry is named "MemoryPoolPreference". So it sounds to me like the driver makes the final decision for you, as it does in Mantle, though I'm not sure. However, only Mantle has you specify a priority for the heap, which says how important it is to keep the heap in the preferred memory pool versus having it in less preferred pools or paged out altogether.
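
This is the structure I mean; a minimal sketch with example values (the field names are from the D3D12 headers, the values are just illustrative):

```cpp
// D3D12_HEAP_TYPE_CUSTOM is the one case where you fill these in yourself;
// the pool is only a preference, so the OS/driver can still override it.
D3D12_HEAP_PROPERTIES heapProps = {};
heapProps.Type                 = D3D12_HEAP_TYPE_CUSTOM;
heapProps.CPUPageProperty      = D3D12_CPU_PAGE_PROPERTY_NOT_AVAILABLE;
heapProps.MemoryPoolPreference = D3D12_MEMORY_POOL_L1; // prefer video memory
heapProps.CreationNodeMask     = 1;
heapProps.VisibleNodeMask      = 1;
```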

 

Then, in Mantle, when you submit a Command List to a queue you have to specify all the heaps it uses so the driver can make sure they're resident. In DX12, however, everything is automatically resident unless you call Evict, which suggests to the driver that it can, well... evict the memory.
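
A minimal sketch of that DX12 side, assuming a device and a heap already exist:

```cpp
// Both calls take an array of ID3D12Pageable*, so heaps, resources and
// query heaps can all be evicted and brought back the same way.
ID3D12Pageable* pageables[] = { heap };

device->Evict(1, pageables);        // hint: this memory may be paged out
// ... later, before submitting GPU work that touches it:
device->MakeResident(1, pageables); // guarantee it's resident again
```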

 

It seems to me that you have more control with the DirectX API, whereas Mantle holds your hand a bit and helps you (or forces you) to have a page-out-friendly implementation. That's my newbie take on the matter and it may be wrong. In any case, it's interesting to learn two such APIs and their differences, because it helps you recognize which decisions the driver developers have chosen to handle for you, versus which decisions were made simply to fit modern GPU hardware. Both approaches are interesting and I look forward to seeing what Vulkan will do.


Quat    568


Though it's still a bit unclear to me when one would pick a CBV over an SRV, since SRVs can reference buffers, which presumably can be accessed in shaders. And if they can be accessed in shaders, then presumably it's in a similar manner?

 

This is a good question. Yes, you can put data you would typically put in a constant buffer into a structured buffer, bind an SRV to the structured buffer, and index it in your vertex shader. You would have to profile and see which one performs better.
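
To make the comparison concrete, here's a rough sketch of what that structured-buffer path looks like (resource and variable names are just placeholders):

```cpp
// HLSL side, for reference:
//   cbuffer PerDraw : register(b0) { float4x4 world; };   // bound via a CBV
//   StructuredBuffer<float4x4> worlds : register(t0);     // bound via an SRV
//
// C++ side: a structured-buffer SRV over an array of matrices.
D3D12_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.ViewDimension              = D3D12_SRV_DIMENSION_BUFFER;
srvDesc.Format                     = DXGI_FORMAT_UNKNOWN;  // structured => untyped
srvDesc.Shader4ComponentMapping    = D3D12_DEFAULT_SHADER_4_COMPONENT_MAPPING;
srvDesc.Buffer.FirstElement        = 0;
srvDesc.Buffer.NumElements         = objectCount;          // assumed element count
srvDesc.Buffer.StructureByteStride = sizeof(float) * 16;   // one float4x4
srvDesc.Buffer.Flags               = D3D12_BUFFER_SRV_FLAG_NONE;

device->CreateShaderResourceView(matrixBuffer, &srvDesc, srvCpuHandle);
```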

 

In the D3D11 days, I assumed constant buffers were distinguished in that they were designed for changing a small number of constants often (per draw call), and so had special optimizations for this usage, whereas a structured buffer would not be changed by the CPU very often and would be accessed more like a texture.

 

I'm not sure if future hardware will continue to make a distinction or if it is all the same.

TheChubu    9454

This might be related:

 

http://www.yosoygames.com.ar/wp/2015/01/uniform-buffers-vs-texture-buffers-the-2015-edition/

 

I'm not terribly sure what the difference is between a "shader storage buffer" (a "structured buffer" in D3D land) and a "texture buffer object". I mean, I know that TBOs use samplers, the texture cache and all of that, but it looks like both *could* be resolved the same way in the GPU; I'm not sure. If that is the case, the particulars noted in the article could be useful (typed buffer access vs untyped buffer access). Just in case: a "uniform buffer" is a "constant buffer" in D3D.

 

I hope it helps. Maybe Matias will pop into the discussion...

Matias Goldberg    9580


From an API perspective, shader storage buffers (aka SSBOs, or structured buffers in D3D) are very different from texture buffer objects (aka TBOs). SSBOs can have read and/or write access, may alias other objects (e.g. a structured buffer is read-only, but it turns out another variable points to the same memory and has write access; see restrict in C for an explanation of the same problem on CPUs), and are subject to fewer restrictions than TBOs (more relaxed alignment rules, unbounded indexing).

If it were C/C++, a TBO is more like a read-only array on the stack that may need lots of padding (an array that can be huge, though... like 128MB huge) and that is guaranteed not to alias anything that has write access, while an SSBO is much more like a raw C pointer with less padding/packing.
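
A tiny CPU-side sketch of that aliasing point (illustrative only; __restrict is the MSVC/GCC/Clang spelling in C++):

```cpp
// Without restrict, the compiler must assume dst and src may overlap,
// so it generates conservative code, reloading src after every store.
void scale(float* dst, const float* src, int n) {
    for (int i = 0; i < n; ++i) dst[i] = src[i] * 2.0f;
}

// With restrict, the "never aliases" promise lets the compiler keep values
// in registers and vectorize -- the same kind of guarantee a TBO gives the
// shader compiler.
void scale_noalias(float* __restrict dst, const float* __restrict src, int n) {
    for (int i = 0; i < n; ++i) dst[i] = src[i] * 2.0f;
}
```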

 

From a GCN arch point of view, there is virtually no difference between the two; however, the shader compiler may produce better code with TBOs due to the strong assumptions enforced by the API (always read-only, size may be known at compile time, doesn't alias other variables that may have write access, the alignment restrictions can help avoid bank conflicts, etc.).

 

Needless to say, from a historical point of view, lots of old GPUs couldn't do SSBOs (a feature introduced with DX11 hardware), but they could do TBOs (supported since the GeForce 8).

 

NVIDIA has now also published its three-part series of "recommended practices" on structured buffers. Note that alignment still plays a vital role with SSBOs, but now you have to pad your objects explicitly, whereas TBOs forced the padding (e.g. either you performed two fetches of RGBA_FLOAT32, or avoided padding with six fetches of R_FLOAT32).
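
For instance, a sketch of what that explicit padding looks like (the struct and the 16-byte target are just an example):

```cpp
// Pad structured-buffer elements yourself so each element keeps the
// alignment you want; with a TBO the fetch format forced this on you.
struct InstanceData {
    float position[3];
    float pad0;          // 12 -> 16 bytes
    float velocity[3];
    float pad1;          // 28 -> 32 bytes
};
static_assert(sizeof(InstanceData) % 16 == 0,
              "keep structured buffer elements 16-byte aligned");
```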

@alisonst    130

 

1) What are the differences between Shader Resource Views and Constant Buffer Views?

2) What are the different ways for uploading data to the GPU and does the recommended manner differ based on usage patterns?

 

 

Re #1: Take a look at Amar's binding tutorial on our official DirectX YT channel: https://www.youtube.com/playlist?list=PLeHvwXyqearVU8fvo2Oq7otKDlLLDAaHW

Re #2: We're hoping to post a video relating to #2 on the channel soon as well. I'll let you know when we do.
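
In the meantime, the basic shape of the most common approach is to write data through an UPLOAD-heap buffer and then copy it into a DEFAULT-heap buffer on the GPU; a minimal sketch (all variable names assumed, not from the video):

```cpp
// CPU-visible UPLOAD heap: map, write, unmap.
void* mapped = nullptr;
D3D12_RANGE noRead = { 0, 0 };        // we won't read from this mapping
uploadBuffer->Map(0, &noRead, &mapped);
memcpy(mapped, cpuData, byteSize);
uploadBuffer->Unmap(0, nullptr);

// Then record a GPU copy into the fast DEFAULT-heap resource.
commandList->CopyBufferRegion(defaultBuffer, 0, uploadBuffer, 0, byteSize);
```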


