crynas

OpenGL Direct3D 10 InputLayout


Hi everyone,

I'm working on my own simple abstract rendering API, which I would like to implement on top of Direct3D 10 and OpenGL. Currently I'm implementing the input layout abstraction, and I have just one question: why exactly does Direct3D 10 need a compiled shader when creating an ID3D10InputLayout instance? Just to compare the input layout with the shader's inputs? Is it OK to just pass in a compiled "dummy" shader and accept the warning?
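For reference, this is the call I mean (a minimal sketch; the element list and the "compiledVS" blob are just made up for illustration):

```cpp
// The two elements and the "compiledVS" blob are illustrative only.
D3D10_INPUT_ELEMENT_DESC elements[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 0,
      D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT, 0, 12,
      D3D10_INPUT_PER_VERTEX_DATA, 0 },
};

ID3D10InputLayout* inputLayout = NULL;
// The compiled shader bytecode is required so the runtime can check the
// layout against the shader's input signature right here, at creation.
HRESULT hr = device->CreateInputLayout(
    elements, 2,
    compiledVS->GetBufferPointer(), compiledVS->GetBufferSize(),
    &inputLayout);
```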

Thx in advance


It's possible to pass in a dummy one (at least on the NVIDIA card in my laptop), but I wouldn't recommend it. The reason Microsoft implemented this layer is that it, in a sense, precompiles the requirements for binding a vertex buffer to a shader, whereas in GL and DirectX 9 this was done by the driver *every time* you bound a vertex buffer and shader for rendering. I'd recommend mimicking the InputLayout interface in OpenGL, and using that to store VBO offsets per vertex stream, etc.
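Something along these lines, as a rough sketch (the struct names here are made up, and it assumes a GL 2.0+ context with the right VBO already bound):

```cpp
#include <GL/gl.h>   // plus a loader for GL 2.0+ entry points
#include <cstddef>
#include <vector>

// Hypothetical names, sketching an InputLayout-like object for GL.
struct GLVertexAttribute
{
    GLuint  index;    // attribute location in the shader
    GLint   size;     // component count, e.g. 3 for a vec3
    GLenum  type;     // e.g. GL_FLOAT
    GLsizei stride;   // byte stride of one vertex
    size_t  offset;   // byte offset within the vertex
};

struct GLInputLayout
{
    std::vector<GLVertexAttribute> attributes;

    // Apply the stored state; assumes the right VBO is already bound.
    void apply() const
    {
        for (size_t i = 0; i < attributes.size(); ++i)
        {
            const GLVertexAttribute& a = attributes[i];
            glEnableVertexAttribArray(a.index);
            glVertexAttribPointer(a.index, a.size, a.type, GL_FALSE,
                                  a.stride, (const void*)a.offset);
        }
    }
};
```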

You can also think of this requirement as a form of a unit test. If the input layout changes somehow, or your shader input signature changes somehow, then you will get an error when they don't match anymore - which will save you lots of debugging time down the road.

Are there some architectural problems that are making you not want to use them together?

Hi,

first of all, I would like to thank you for your time and responses. OK, back to the topic.

I wanted my render API to allow me usage like this:

1. Create some kind of a vertex layout object (like in Direct3D 10). No compiled shader would be necessary at creation time.
2. Use the created layout object with any compatible shader. In a debug build I would check the shader's inputs against the active layout (at shader bind time) myself; see the sketch after this list.
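Roughly like this; every name below is hypothetical, just a self-contained illustration of the debug-build check:

```cpp
#include <string>
#include <vector>

// Hypothetical types, standing in for my planned abstraction.
struct VertexElement { std::string semantic; int format; int offset; };
struct VertexLayout  { std::vector<VertexElement> elements; }; // built with no shader
struct ShaderInputs  { std::vector<VertexElement> inputs;   }; // from introspection

// Debug-build check at shader bind time: every input the shader expects
// must have a matching element in the active layout.
bool LayoutMatchesShader(const VertexLayout& layout, const ShaderInputs& shader)
{
    for (size_t i = 0; i < shader.inputs.size(); ++i)
    {
        bool found = false;
        for (size_t j = 0; j < layout.elements.size(); ++j)
        {
            if (layout.elements[j].semantic == shader.inputs[i].semantic &&
                layout.elements[j].format   == shader.inputs[i].format)
                found = true;
        }
        if (!found)
            return false;
    }
    return true;
}
```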

Is this design/usage a bad idea? The idea seems viable to me. The problem is that I don't know much about what is happening under the hood, so maybe it is not that viable at all.

Thx

Sounds viable. From what you described, the DX10 InputLayout would technically reside with the shader, since multiple shaders would refer to the one vertex declaration, and the input layout is what binds the two.

You can do this, but then how will you validate your shaders against the input layout in use? Unless you have another way to ensure that they match, why not just create one at shader compilation time? Unless you create lots and lots of shader objects, this shouldn't really pose a problem.

So you are proposing that I should mimic the Direct3D 10 API? Have a method like CreateVertexLayout(VertexLayoutDescription, CompiledShader)? I was thinking about checking the compatibility of the vertex layout and the shader in a debug build using shader introspection or something like that.
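For example, on the D3D10 side I could do the introspection with the shader reflection interface. A sketch of what I have in mind (assuming the compiled bytecode is already available):

```cpp
#include <d3d10.h>
#include <d3d10shader.h>

// Sketch: enumerate a vertex shader's input signature via reflection,
// so it can be compared against my own vertex layout description.
bool DumpShaderInputs(const void* bytecode, SIZE_T length)
{
    ID3D10ShaderReflection* reflector = NULL;
    if (FAILED(D3D10ReflectShader(bytecode, length, &reflector)))
        return false;

    D3D10_SHADER_DESC shaderDesc;
    reflector->GetDesc(&shaderDesc);

    for (UINT i = 0; i < shaderDesc.InputParameters; ++i)
    {
        D3D10_SIGNATURE_PARAMETER_DESC param;
        reflector->GetInputParameterDesc(i, &param);
        // param.SemanticName and param.SemanticIndex are what I would
        // match against the elements of my layout description.
    }

    reflector->Release();
    return true;
}
```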

Thx

Honestly, I am not up to speed on how OpenGL works with its shaders, so I can't really advise on that part. My suggestion is to use the validation provided by the InputLayout creation mechanism in D3D10. In my D3D11 framework, the input layout is created the first time a shader is used (this may not be best practice, but lazy creation makes things much simpler). However, if you use layout objects that are created once for multiple shaders, then you are taking on the responsibility of ensuring that they work with each shader (as opposed to letting the function do it for you).
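Roughly like this (a simplified sketch, not my actual framework code; keying the cache on the bytecode blob is an assumption to keep the example short):

```cpp
#include <d3d11.h>
#include <map>
#include <vector>

// Sketch of lazy input-layout creation: the layout is built (and
// validated by the runtime) the first time a given shader is used.
ID3D11InputLayout* GetOrCreateLayout(
    ID3D11Device* device,
    const std::vector<D3D11_INPUT_ELEMENT_DESC>& elements,
    ID3DBlob* vsBytecode,
    std::map<ID3DBlob*, ID3D11InputLayout*>& cache)
{
    std::map<ID3DBlob*, ID3D11InputLayout*>::iterator it = cache.find(vsBytecode);
    if (it != cache.end())
        return it->second;               // already created for this shader

    ID3D11InputLayout* layout = NULL;
    if (FAILED(device->CreateInputLayout(
            &elements[0], static_cast<UINT>(elements.size()),
            vsBytecode->GetBufferPointer(), vsBytecode->GetBufferSize(),
            &layout)))
        return NULL;                     // a layout/shader mismatch is caught here

    cache[vsBytecode] = layout;
    return layout;
}
```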

In your situation, it might make sense to use the shader inspection methods since you are dealing with two APIs that have slightly different functionality.
