
D3D10 / D3D11 for OpenGL developers

Recommended Posts

Starfox    504
Any good references for that which MINIMALLY depend on D3DX? The SDK samples aren't so good. If I want to know something like (real example here) what the D3D10 equivalent of glBindTexture() is, where do I go? :D

ET3D    810
Why not D3DX? D3DX for D3D10/11 is pretty limited in scope, and in any case I can't see how it'd interfere with your learning of these APIs. I'm sure that once you follow the D3D10 tutorials in the SDK, you'll know the equivalent of glBindTexture.

So my suggestion is that you read the tutorials and the programming guide in the SDK. You can also try this free book, which teaches advanced rendering techniques using Direct3D 10. Unfortunately, I can't find the D3D10 tutorial I came across a couple of days ago while looking for something else.

Starfox    504
The reason I don't want to use D3DX is that this is mainly a learning experience - I'm not aiming to make a real commercial product, I just want to glean some lessons from the interface design of D3D10. I've been working with GPUs at a very low level (RSX, libgcm), and some GL extensions seem perfectly rational in how they expose functionality once you're used to how GPUs work at a low level. I'd like to know more about how the D3D10 model maps to GPU hardware.

Nik02    4349
While D3D10 is still a relatively high-level abstraction of what actually happens in GPU hardware, there are some key architectural improvements over the D3D9 style (just a few examples):

- IMHO the biggest change is that the same video memory block can be used for multiple purposes, with multiple layouts. This reflects the fact that GPU memory is usually just a linear blob of memory that only gets its meaning from the systems that access it (texture samplers, input assembler, etc.). In order to use a GPU resource such as a texture, you need a separate "view" of the underlying data so the hardware can interpret it in a useful way. While this may sound restrictive, note that you can create several different views of the same physical GPU memory. This, in turn, means that a lot of redundant copying of data can be avoided, increasing performance and decreasing memory usage (see the first sketch after this list).

- State blocks are immutable. The hardware commonly has blocks of memory allocated for state settings, and if you modify even a single byte of such a block, it is just as expensive as updating the whole block, because of the API transition, driver work, and hardware-layer transition. Therefore, it makes sense to update states only on a per-block basis (second sketch below).

- Constant registers are now grouped into constant buffers. This reflects the fact that hardware commonly allocates large blocks of memory for this purpose anyway (larger than one register), so that it doesn't have to constantly allocate new registers on demand. As a side effect, since you can create multiple constant buffers, you can optimize the uploading of your constant data by grouping the buffers by update frequency, and only upload data to a buffer when its contents actually need to change (third sketch below).
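
A minimal sketch of the view mechanism, assuming an existing ID3D10Device* named device (sizes and formats are illustrative, error handling omitted): one typeless texture, interpreted two different ways through two views of the same allocation.

#include <d3d10.h>

// One texture allocation with no fixed element interpretation yet.
D3D10_TEXTURE2D_DESC desc = {};
desc.Width = 512;
desc.Height = 512;
desc.MipLevels = 1;
desc.ArraySize = 1;
desc.Format = DXGI_FORMAT_R8G8B8A8_TYPELESS;
desc.SampleDesc.Count = 1;
desc.Usage = D3D10_USAGE_DEFAULT;
desc.BindFlags = D3D10_BIND_SHADER_RESOURCE;

ID3D10Texture2D* tex = NULL;
device->CreateTexture2D(&desc, NULL, &tex);

// View 1: read the texels as normalized floats.
D3D10_SHADER_RESOURCE_VIEW_DESC srvDesc = {};
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UNORM;
srvDesc.ViewDimension = D3D10_SRV_DIMENSION_TEXTURE2D;
srvDesc.Texture2D.MipLevels = 1;
ID3D10ShaderResourceView* srvAsFloat = NULL;
device->CreateShaderResourceView(tex, &srvDesc, &srvAsFloat);

// View 2: read the very same memory as unsigned integers. No copy is
// made; both views reference the same underlying allocation.
srvDesc.Format = DXGI_FORMAT_R8G8B8A8_UINT;
ID3D10ShaderResourceView* srvAsUint = NULL;
device->CreateShaderResourceView(tex, &srvDesc, &srvAsUint);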
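
A second sketch, same assumptions, for the immutable-state model: you describe the whole block up front, bake it into a state object, and bind it atomically.

#include <d3d10.h>

// Describe the entire rasterizer block once, up front.
D3D10_RASTERIZER_DESC rsDesc = {};
rsDesc.FillMode = D3D10_FILL_SOLID;
rsDesc.CullMode = D3D10_CULL_BACK;
rsDesc.DepthClipEnable = TRUE;

// Bake it into an immutable object (typically done at load time).
ID3D10RasterizerState* rsState = NULL;
device->CreateRasterizerState(&rsDesc, &rsState);

// Binding swaps the whole block in one call; individual fields cannot
// be poked afterwards.
device->RSSetState(rsState);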
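
And a third sketch for frequency-grouped constant buffers; the struct names and slot assignments are illustrative, not anything the API mandates.

#include <d3d10.h>

// Two constant buffers, split by how often their contents change.
struct PerFrame  { float viewProj[16]; };   // uploaded once per frame
struct PerObject { float world[16]; };      // uploaded per draw call

D3D10_BUFFER_DESC cbDesc = {};
cbDesc.Usage = D3D10_USAGE_DYNAMIC;
cbDesc.BindFlags = D3D10_BIND_CONSTANT_BUFFER;
cbDesc.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;

ID3D10Buffer* cbPerFrame = NULL;
ID3D10Buffer* cbPerObject = NULL;
cbDesc.ByteWidth = sizeof(PerFrame);
device->CreateBuffer(&cbDesc, NULL, &cbPerFrame);
cbDesc.ByteWidth = sizeof(PerObject);
device->CreateBuffer(&cbDesc, NULL, &cbPerObject);

// Bind both to the vertex shader stage (slots 0 and 1); only the buffer
// whose contents changed needs a re-upload on any given frame.
ID3D10Buffer* buffers[2] = { cbPerFrame, cbPerObject };
device->VSSetConstantBuffers(0, 2, buffers);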

The SDK docs have a "programming guide" section for each API that enumerates this stuff, albeit with varying levels of detail. For example, the topic "Creating Texture Resources (Direct3D 10)" in the programming guide is a very detailed guide to creating and binding textures to the pipeline, with and without D3DX.

All this said, I recommend learning D3D11 instead of D3D10. It is even further optimized, and it also offers more flexibility in targeting both older and newer hardware for maximum performance.

InvalidPointer    1842
Quote:
Original post by Nik02
- IMHO the biggest change is that the same video memory block can be used for multiple purposes, with multiple layouts. This reflects the fact that GPU memory is usually just a linear blob of memory that only gets its meaning from the systems that access it (texture samplers, input assembler, etc.). In order to use a GPU resource such as a texture, you need a separate "view" of the underlying data so the hardware can interpret it in a useful way. While this may sound restrictive, note that you can create several different views of the same physical GPU memory. This, in turn, means that a lot of redundant copying of data can be avoided, increasing performance and decreasing memory usage.

He's(?) probably used to this already; though I haven't actually done any meddling with libgcm or the DirectX variant the 360 uses, my understanding is that you can do this sort of 'casting' there already, even going beyond simple format conversions and treating MSAA resources as non-MSAA and vice versa. This is something I really wish were included in more traditional versions of DirectX, as it's really useful for fill-rate management.

To answer the OP's question, I would honestly suggest firing up either MSDN or the API reference included with the DX SDK (in Documentation\DirectX9\windows_graphics.chm from the SDK root directory) and looking at the various interfaces available. Most of the interesting stuff is going to be in the ID3D10Device object.

And, to speed things along, the rough equivalent of glBindTexture (ugh, I cringe every time I say it D:) would be GSSetShaderResources/PSSetShaderResources/VSSetShaderResources, depending on which shader stage(s) you want to sample the texture from.
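
As a rough sketch of what that looks like in practice, assuming an existing ID3D10Device* device and an ID3D10Texture2D* tex created with D3D10_BIND_SHADER_RESOURCE (error handling omitted):

#include <d3d10.h>

// Wrap the texture in a shader resource view; passing NULL for the view
// description gives a default view matching the resource's format.
ID3D10ShaderResourceView* srv = NULL;
device->CreateShaderResourceView(tex, NULL, &srv);

// Bind it to pixel shader slot 0 (register t0 in HLSL). Note that the
// sampler state is a separate object bound via PSSetSamplers, unlike GL,
// where filtering state historically lives on the texture object.
device->PSSetShaderResources(0, 1, &srv);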

Finally, any reason why you aren't using DX11? It's a strict superset of 10, and the API is fairly similar, barring the splitting of resource *creation* and *use* into two separate interfaces (i.e. ID3D11Device and ID3D11DeviceContext, respectively).
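
To give a feel for that split, a minimal sketch (error handling and feature-level negotiation omitted):

#include <d3d11.h>

// One call hands back both halves of the API.
ID3D11Device* device = NULL;
ID3D11DeviceContext* context = NULL;
D3D11CreateDevice(NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                  NULL, 0,               // default feature levels
                  D3D11_SDK_VERSION, &device, NULL, &context);

// Resource creation stays on the (free-threaded) device:
//   device->CreateBuffer(...); device->CreateTexture2D(...);
// while per-frame binding and drawing move to the context:
//   context->PSSetShaderResources(...); context->Draw(...);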

Starfox    504
Well, I think the grouping of states into state blocks in D3D10 makes sense more from a logical standpoint than from a hardware one; not all HW groups things into blocks.

The reason I'm not using D3D11 is petty: we use the August '07 SDK across all our dev machines, and I don't want to get into multi-SDK hell. I want to make it clear that we're only using DInput and DSound, so sticking with the August '07 SDK is not problematic - if we were using D3D commercially, it would have been disastrous. My D3D ventures are for learning purposes only.

I have a question about textures: can I make a buffer that I use BOTH as a texture and as a vertex attribute source? From what I gleaned from the docs, I can make a 32-bit x 4 texture and use it as float32 or int32 per element, which makes sense from what I know about console GPUs, but I couldn't confirm or deny whether using a buffer as both a vertex attribute source and a texture is possible. I know it would involve some problems on console GPUs (the preferred texture storage there is tiled, for good reasons relating to texture filtering).

InvalidPointer    1842
Quote:
Original post by Starfox
I have a question about textures: can I make a buffer that I use BOTH as a texture and as a vertex attribute source? From what I gleaned from the docs, I can make a 32-bit x 4 texture and use it as float32 or int32 per element, which makes sense from what I know about console GPUs, but I couldn't confirm or deny whether using a buffer as both a vertex attribute source and a texture is possible. I know it would involve some problems on console GPUs (the preferred texture storage there is tiled, for good reasons relating to texture filtering).

Like a bastardized render-to-vertex-buffer? No, I do not believe so. That being said, however, it would be possible to fetch the data with some SV_VertexID dark magic and a few custom logic routines in the vertex shader. I guess it really depends on what you're trying to do.
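
A sketch of that dark magic, assuming the vertex data lives in an ID3D10Buffer created with D3D10_BIND_SHADER_RESOURCE and bound as a buffer SRV on the vertex shader stage (the HLSL is held in a C++ string here just to keep the example in one place; names are illustrative):

#include <d3d10.h>

// Vertex shader that ignores the input assembler entirely and pulls
// its data out of a buffer SRV, indexed by the system-generated
// vertex ID.
const char* vsSource =
    "Buffer<float4> vertexData : register(t0);         \n"
    "float4 main(uint id : SV_VertexID) : SV_POSITION  \n"
    "{                                                  \n"
    "    return vertexData.Load(id);                    \n"
    "}                                                  \n";

// On the C++ side: compile vsSource, bind the buffer SRV with
// VSSetShaderResources(0, 1, &srv), bind no vertex buffers at all,
// and kick off Draw(vertexCount, 0).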

MJP    19791
Quote:
Original post by Starfox
I have a question about textures: can I make a buffer that I use BOTH as a texture and as a vertex attribute source? From what I gleaned from the docs, I can make a 32-bit x 4 texture and use it as float32 or int32 per element, which makes sense from what I know about console GPUs, but I couldn't confirm or deny whether using a buffer as both a vertex attribute source and a texture is possible. I know it would involve some problems on console GPUs (the preferred texture storage there is tiled, for good reasons relating to texture filtering).


This isn't possible, to my knowledge. What you can definitely do is simply sample your texture in the vertex shader, or you can stream out a vertex buffer from the geometry shader stage.
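
A sketch of the stream-out route, assuming an existing ID3D10Device* device (sizes, strides, and the draw calls are illustrative; error handling omitted):

#include <d3d10.h>

// A buffer usable both as a stream-output target and as a vertex buffer.
D3D10_BUFFER_DESC desc = {};
desc.ByteWidth = 1024 * 4 * sizeof(float);   // room for 1024 float4s
desc.Usage = D3D10_USAGE_DEFAULT;
desc.BindFlags = D3D10_BIND_STREAM_OUTPUT | D3D10_BIND_VERTEX_BUFFER;

ID3D10Buffer* soBuffer = NULL;
device->CreateBuffer(&desc, NULL, &soBuffer);

// Pass 1: route geometry shader output into the buffer. The GS itself
// must be created with CreateGeometryShaderWithStreamOutput so it
// declares what it writes.
UINT offset = 0;
device->SOSetTargets(1, &soBuffer, &offset);
// ... Draw(...) ...

// Pass 2: unbind it from stream output and feed it back in as a
// vertex buffer.
ID3D10Buffer* nullBuffer = NULL;
device->SOSetTargets(1, &nullBuffer, &offset);
UINT stride = 4 * sizeof(float);
device->IASetVertexBuffers(0, 1, &soBuffer, &stride, &offset);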
