D3D10 / D3D11 for OpenGL developers

Starfox


Any good references for this that MINIMALLY depend on D3DX? The SDK samples aren't so good. If I wanna know something like (real example here) what the D3D10 equivalent of glBindTexture() is, where do I go? :D

Why not D3DX? D3DX for D3D10/11 is pretty limited in scope, and in any case I can't see how it'd interfere with your learning of these APIs. I'm sure that once you work through the D3D10 tutorials in the SDK you will know the equivalent of glBindTexture.

So my suggestion is that you read the tutorials and the programming guide in the SDK. You can also try this free book, which teaches advanced rendering techniques using Direct3D 10. Unfortunately I can't find the D3D10 tutorial I came across a couple of days ago while looking for something else.

The reason I don't want to use D3DX is that this is mainly a learning experience - I'm not aiming to make a real commercial product; I just want to glean some lessons from the interface design of D3D10. I've been working with GPUs at a very low level (RSX, libgcm), and some GL extensions seem perfectly rational in how they expose functionality once you're used to how GPUs work at that level. I'd like to know more about how the D3D10 model maps to GPU hardware.

While D3D10 is still a relatively high-level abstraction of what actually happens in the GPU hardware, there are some key architectural improvements over the D3D9 style (just a few examples):

-IMHO the biggest change is that the same video memory block can be used for multiple purposes, with multiple layouts. This reflects the fact that GPU memory is usually just a linear blob that only gets its meaning from the systems that access it (texture samplers, the input assembler etc.). In order to use a GPU resource such as a texture, you need a separate "view" of the underlying data so the hardware can interpret it in a useful way. While this may sound restrictive, note that you can create several different views of the same physical GPU memory. This, in turn, means that a lot of redundant copying of data can be avoided, increasing performance and decreasing memory usage.

-State blocks are immutable. The hardware commonly has blocks of memory allocated for state settings, and modifying even a single byte of such a block is just as expensive as updating the whole block, because of the API call, driver work and hardware-level transition involved. Therefore, it makes sense to update states only on a per-block basis.

-Constant registers are now grouped into constant buffers. This reflects the fact that hardware commonly allocates large blocks of memory (larger than one register) for this purpose anyway, since that is cheaper than constantly allocating individual registers as needed. As a side effect, since you can create multiple constant buffers, you can optimize the uploading of your constant data by grouping buffers based on update frequency, and only upload data to a given buffer when you actually need to change it. (A minimal code sketch of the last two points follows this list.)
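
To make the last two points concrete, here is a minimal, hedged sketch of how this looks in D3D10 code. It assumes you already have a created ID3D10Device* called device, omits error handling, and the struct layout and values are made up for illustration.

#include <d3d10.h>

// Immutable state block: describe the whole blend state up front, create the
// object once, then just bind it when needed. Individual fields cannot be
// poked afterwards.
D3D10_BLEND_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.BlendEnable[0] = TRUE;
bd.SrcBlend       = D3D10_BLEND_SRC_ALPHA;
bd.DestBlend      = D3D10_BLEND_INV_SRC_ALPHA;
bd.BlendOp        = D3D10_BLEND_OP_ADD;
bd.SrcBlendAlpha  = D3D10_BLEND_ONE;
bd.DestBlendAlpha = D3D10_BLEND_ZERO;
bd.BlendOpAlpha   = D3D10_BLEND_OP_ADD;
bd.RenderTargetWriteMask[0] = D3D10_COLOR_WRITE_ENABLE_ALL;

ID3D10BlendState* alphaBlend = NULL;
device->CreateBlendState(&bd, &alphaBlend);

FLOAT blendFactor[4] = { 0, 0, 0, 0 };
device->OMSetBlendState(alphaBlend, blendFactor, 0xffffffff);

// Constant buffer: constants are updated a whole block at a time and bound to
// a register slot (b0 here). The size must be a multiple of 16 bytes in D3D10.
struct PerFrame { float viewProj[16]; float time; float padding[3]; };

D3D10_BUFFER_DESC cbd;
ZeroMemory(&cbd, sizeof(cbd));
cbd.ByteWidth      = sizeof(PerFrame);
cbd.Usage          = D3D10_USAGE_DYNAMIC;
cbd.BindFlags      = D3D10_BIND_CONSTANT_BUFFER;
cbd.CPUAccessFlags = D3D10_CPU_ACCESS_WRITE;

ID3D10Buffer* perFrameCB = NULL;
device->CreateBuffer(&cbd, NULL, &perFrameCB);

// Once per frame: rewrite the whole block, then bind it to the vertex shader.
void* mapped = NULL;
perFrameCB->Map(D3D10_MAP_WRITE_DISCARD, 0, &mapped);
// memcpy(mapped, &perFrameData, sizeof(PerFrame));  // perFrameData: your CPU-side copy
perFrameCB->Unmap();
device->VSSetConstantBuffers(0, 1, &perFrameCB);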

The SDK docs have a "programming guide" section for each API that enumerates this stuff, albeit with varying levels of detail. For example, the topic "Creating Texture Resources (Direct3D 10)" in the programming guide is a very detailed guide on how to create and bind textures to the pipeline with and without D3DX.
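
For reference, creating a texture without D3DX boils down to filling out a description struct and supplying the initial texel data yourself. A rough sketch, assuming 'device' is your ID3D10Device* and 'pixels' points to 256x256 RGBA8 data you loaded with your own code:

D3D10_TEXTURE2D_DESC desc;
ZeroMemory(&desc, sizeof(desc));
desc.Width            = 256;
desc.Height           = 256;
desc.MipLevels        = 1;
desc.ArraySize        = 1;
desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count = 1;
desc.Usage            = D3D10_USAGE_DEFAULT;
desc.BindFlags        = D3D10_BIND_SHADER_RESOURCE;

D3D10_SUBRESOURCE_DATA init;
init.pSysMem          = pixels;    // your own image loader's output
init.SysMemPitch      = 256 * 4;   // bytes per row
init.SysMemSlicePitch = 0;

ID3D10Texture2D* tex = NULL;
HRESULT hr = device->CreateTexture2D(&desc, &init, &tex);
// D3DX would do roughly this (plus file decoding and mip generation) in a
// single D3DX10CreateTextureFromFile call.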

All this said, I recommend learning D3D11 instead of D3D10. It is even more optimized, and it also offers more flexibility in using both older and newer hardware for maximum performance.
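
(The "older and newer hardware" part comes from D3D11's feature levels: one API, with the runtime reporting the highest level the GPU supports. A minimal sketch of device creation with an explicit feature-level list - variable names here are just illustrative:)

#include <d3d11.h>

// Ask for the best available feature level; the same D3D11 interfaces then
// drive 9-, 10- and 11-class hardware, capabilities permitting.
D3D_FEATURE_LEVEL requested[] = {
    D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_1,
    D3D_FEATURE_LEVEL_10_0, D3D_FEATURE_LEVEL_9_3
};
D3D_FEATURE_LEVEL obtained;
ID3D11Device*        device  = NULL;
ID3D11DeviceContext* context = NULL;

HRESULT hr = D3D11CreateDevice(
    NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
    requested, ARRAYSIZE(requested), D3D11_SDK_VERSION,
    &device, &obtained, &context);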

Quote:
Original post by Nik02
-IMHO the biggest change is that the same video memory block can be used for multiple purposes, with multiple layouts. This reflects the fact that GPU memory is usually just a linear blob that only gets its meaning from the systems that access it (texture samplers, the input assembler etc.). In order to use a GPU resource such as a texture, you need a separate "view" of the underlying data so the hardware can interpret it in a useful way. While this may sound restrictive, note that you can create several different views of the same physical GPU memory. This, in turn, means that a lot of redundant copying of data can be avoided, increasing performance and decreasing memory usage.

He's(?) probably used to this already; though I haven't actually done any meddling with libgcm or the DirectX variant the 360 uses, my understanding is that you can do this sort of 'casting' there already, even going beyond simple format conversions to treating MSAA resources as non-MSAA and vice versa. This is something I really wish would be included in more traditional versions of DirectX, as it's really useful for fillrate management.

To answer the OP's question, I would honestly suggest firing up either MSDN or the API reference included with the DX SDK (in Documentation\DirectX9\windows_graphics.chm from the SDK root directory) and looking at the various interfaces available. Most of the interesting stuff is going to be in the ID3D10Device object.

And, in order to speed things along, the rough equivalent to glBindTexture (ugh, I cringe every time I say it D:) would be GSSetShaderResources/PSSetShaderResources/VSSetShaderResources, depending on which shader stage(s) you want to sample said texture from.
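
In code, the pattern looks roughly like this (a hedged sketch - it assumes 'tex' is an already-created ID3D10Texture2D* and 'sampler' an existing ID3D10SamplerState*):

// GL: glActiveTexture(GL_TEXTURE0); glBindTexture(GL_TEXTURE_2D, texId);
// D3D10: create a view of the resource once, then bind that view to a slot
// of the stage that samples it.
ID3D10ShaderResourceView* srv = NULL;
device->CreateShaderResourceView(tex, NULL, &srv);  // NULL desc: view matches the resource

device->PSSetShaderResources(0, 1, &srv);  // slot 0 -> register(t0) in the pixel shader
device->PSSetSamplers(0, 1, &sampler);     // filtering/wrapping is a separate state object,
                                           // not a per-texture property as in GL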

Finally, any reason why you aren't using DX11? It's a strict superset of 10 and the API is fairly similar, barring the splitting of resource *creation* and *use* into two separate interfaces (i.e. ID3D11Device and ID3D11DeviceContext, respectively).
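
A rough illustration of that split, assuming a device and immediate context already exist and that 'Vertex', 'vbDesc', 'vbData' and 'vertexCount' are your own vertex type, buffer description and data:

// D3D10: ID3D10Device both creates resources and issues state/draw calls.
// D3D11: ID3D11Device only creates; all binding and drawing goes through an
// ID3D11DeviceContext (which also enables deferred contexts / command lists).
ID3D11Buffer* vb = NULL;
device->CreateBuffer(&vbDesc, &vbData, &vb);                // creation: ID3D11Device

UINT stride = sizeof(Vertex), offset = 0;
context->IASetVertexBuffers(0, 1, &vb, &stride, &offset);   // use: ID3D11DeviceContext
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
context->Draw(vertexCount, 0);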

Well, I think the grouping of states into state blocks in D3D10 makes much more sense from a logical standpoint; not all HW groups things into blocks.

The reason I'm not using D3D11 is petty: we use the August 07 SDK across all our dev machines, and I don't wanna get into multi-SDK hell. To be clear, we only use DInput and DSound, so staying on the August 07 SDK is not a problem - if we were using D3D commercially it would have been disastrous. My D3D ventures are for learning purposes only.

I have a question about textures: can I make a buffer that I use BOTH as a texture and as a vertex attribute source? From what I gleaned from the docs, I can make a 32-bit-by-4 texture and use it as float32 or int32 per element, which makes sense from what I know about console GPUs, but I couldn't confirm or deny whether using a buffer as both a vertex attribute source and a texture is possible. I know it'd involve some problems on console GPUs (the preferred texture storage there is tiled, for good reasons relating to texture filtering).

Quote:
Original post by Starfox
Well, I think the grouping of states into state blocks in D3D10 makes much more sense from a logical standpoint; not all HW groups things into blocks.

The reason I'm not using D3D11 is petty: we use the August 07 SDK across all our dev machines, and I don't wanna get into multi-SDK hell. To be clear, we only use DInput and DSound, so staying on the August 07 SDK is not a problem - if we were using D3D commercially it would have been disastrous. My D3D ventures are for learning purposes only.

I have a question about textures: can I make a buffer that I use BOTH as a texture and as a vertex attribute source? From what I gleaned from the docs, I can make a 32-bit-by-4 texture and use it as float32 or int32 per element, which makes sense from what I know about console GPUs, but I couldn't confirm or deny whether using a buffer as both a vertex attribute source and a texture is possible. I know it'd involve some problems on console GPUs (the preferred texture storage there is tiled, for good reasons relating to texture filtering).

Like a bastardized render-to-vertex-buffer? No, I do not believe so. That said, it would be possible to sample from said texture using some SV_VertexID dark magic and a few custom logic routines. I guess it really depends on what you're trying to do.
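
A hedged sketch of that idea - the geometry lives in a resource bound as a shader resource (here a buffer view, though a Texture2D with Load() works the same way) and is fetched by index in the vertex shader instead of coming through the input assembler. Names like 'posData' and 'kVertexCount' are made up for illustration:

// C++ side: a buffer with a shader-resource binding (not a vertex-buffer
// binding), plus a buffer-typed view, bound to the vertex shader stage.
D3D10_BUFFER_DESC bd;
ZeroMemory(&bd, sizeof(bd));
bd.ByteWidth = kVertexCount * sizeof(float) * 4;
bd.Usage     = D3D10_USAGE_DEFAULT;
bd.BindFlags = D3D10_BIND_SHADER_RESOURCE;

D3D10_SUBRESOURCE_DATA init = { posData, 0, 0 };
ID3D10Buffer* posBuffer = NULL;
device->CreateBuffer(&bd, &init, &posBuffer);

D3D10_SHADER_RESOURCE_VIEW_DESC srvd;
ZeroMemory(&srvd, sizeof(srvd));
srvd.Format               = DXGI_FORMAT_R32G32B32A32_FLOAT;
srvd.ViewDimension        = D3D10_SRV_DIMENSION_BUFFER;
srvd.Buffer.ElementOffset = 0;
srvd.Buffer.ElementWidth  = kVertexCount;

ID3D10ShaderResourceView* posSRV = NULL;
device->CreateShaderResourceView(posBuffer, &srvd, &posSRV);
device->VSSetShaderResources(0, 1, &posSRV);

// HLSL side (shown as a comment, since it's a different language):
//   Buffer<float4> positions : register(t0);
//   float4 VSMain(uint id : SV_VertexID) : SV_Position
//   {
//       return mul(viewProj, positions.Load(id));
//   }
// Then draw with no vertex buffers bound: device->Draw(kVertexCount, 0);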

Quote:
Original post by Starfox
I have a question about textures: can I make a buffer that I use BOTH as a texture and as a vertex attribute source? From what I gleaned from the docs, I can make a 32-bit-by-4 texture and use it as float32 or int32 per element, which makes sense from what I know about console GPUs, but I couldn't confirm or deny whether using a buffer as both a vertex attribute source and a texture is possible. I know it'd involve some problems on console GPUs (the preferred texture storage there is tiled, for good reasons relating to texture filtering).


This isn't possible, to my knowledge. What you can definitely do is simply sample your texture in the vertex shader, or you can stream out a vertex buffer from the geometry shader stage.
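
For the stream-out route, a hedged sketch (assuming 'gsBytecode'/'gsSize' are a compiled geometry shader blob, 'srcVertexCount' is your input vertex count, and 'soTarget' is a buffer created with D3D10_BIND_STREAM_OUTPUT, plus D3D10_BIND_VERTEX_BUFFER if you want to draw from it afterwards):

// Declare which GS outputs get written to the stream-output buffer.
D3D10_SO_DECLARATION_ENTRY soDecl[] = {
    // SemanticName, SemanticIndex, StartComponent, ComponentCount, OutputSlot
    { "SV_POSITION", 0, 0, 4, 0 },
    { "TEXCOORD",    0, 0, 2, 0 },
};
ID3D10GeometryShader* gsWithSO = NULL;
device->CreateGeometryShaderWithStreamOutput(
    gsBytecode, gsSize, soDecl, 2, 6 * sizeof(float), &gsWithSO);

UINT soOffset[] = { 0 };
device->SOSetTargets(1, &soTarget, soOffset);
device->GSSetShader(gsWithSO);
device->Draw(srcVertexCount, 0);              // results accumulate in soTarget

ID3D10Buffer* noBuffer   = NULL;
UINT          zeroOffset = 0;
device->SOSetTargets(1, &noBuffer, &zeroOffset);  // unbind before reusing as input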
