

Member Since 12 May 2004
Offline Last Active Nov 10 2013 02:48 AM

Topics I've Started

Passing shader parameters from a scene node to the renderer

29 September 2012 - 06:05 AM

I commented on this post asking a further question, but I think it would be better in its own post and with the question expanded.
I've done a few toy landscape render programs in the past, but until now I always used a "traditional" OOP approach of making models, landscape segments, UI elements, etc. all implement an IDrawable interface, something like this -

class Model : public IDrawable
{
public:
    virtual void render(Device& device, Camera& camera);
};

where the render function uses the D3D11 device to draw the model directly.

After reading the post above I entirely understand how it might be better to instead make drawable objects produce a Mesh, which can then be drawn separately by the rendering system. It simplifies the code, separates concerns much better, and potentially makes the code much more reusable without lots of unnecessary abstraction.

Instead of drawing things directly, the scene node items will put the data for a rendering operation onto a queue... including the mesh, camera and so on.
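Something like this, I mean (the handle types and names are invented just to sketch the idea, not real API):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical lightweight handles into the renderer's resource pools.
struct MeshHandle   { uint32_t id; };
struct CameraHandle { uint32_t id; };

// One queued draw: everything the renderer needs for the operation,
// with no back-pointers into the scene graph.
struct RenderOperation {
    MeshHandle   mesh;
    CameraHandle camera;
    // ...material / shader parameters would need to go here too,
    // which is exactly the problem described below
};

class RenderQueue {
public:
    void submit(const RenderOperation& op) { ops_.push_back(op); }
    const std::vector<RenderOperation>& operations() const { return ops_; }
    void clear() { ops_.clear(); }
private:
    std::vector<RenderOperation> ops_;
};
```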

Now the problem I have is this -
Inside my Model class, the constructor would previously have obtained references to the shaders and textures it used, and when the render function is called, the model knows which shaders it's using, knows what parameters those shaders require, and can fill in a constant buffer and set it directly. Nothing outside the Model class needs any knowledge of the shaders it uses or the data they require, which is a good thing.

Except that it doesn't fit with the new way of drawing things.

I'm thinking I could create a structure for the parameters for each "class" of shader and pass that into the render function as opaque data to give to the shader, but that seems ugly. I could create a constant buffer for each class of shader and get my geometry classes to fill that in when they create the render operations, but that feels ugly too.

How do people suggest that the part of the code that wants to draw a model passes the shader parameters (and the material in general, I guess) to the renderer in an elegant way? Although I want my code to be for D3D11, I'd like to keep an eye on doing things in a way that would make it easy to change in future, so some form of abstraction is needed here, even if it's only passing a handle to a constant buffer or something...

Does this even make any sense?

edit: To explain further, what makes this hard is that there is no real common structure. Each shader requires different numbers of vectors, values, textures, etc., and the IRender interface shouldn't really have to know anything about the shader... I could pass a big map of name/value pairs for them all, but doing dynamic memory allocation for a map in a renderer doesn't seem like a great idea.
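One idea that might avoid both the per-shader structs and the allocating map is a fixed-size opaque parameter block: the scene node fills in a POD struct matching its shader's constant buffer layout, and the renderer just copies the bytes into the constant buffer without understanding them. A sketch of what I mean (all names invented, size limit arbitrary):

```cpp
#include <cstddef>
#include <cstdint>
#include <cstring>

// Largest constant buffer payload we allow per draw; just a guess.
constexpr std::size_t kMaxParamBytes = 256;

// Opaque, fixed-size shader parameter block. No heap allocation:
// the bytes live inline in the render operation.
struct ShaderParamBlock {
    uint32_t shaderId = 0;                 // which shader these bytes are for
    std::size_t size = 0;                  // bytes actually used
    unsigned char data[kMaxParamBytes]{};  // raw constant buffer contents

    // The scene node calls this with a struct mirroring the cbuffer layout;
    // the renderer later memcpy's 'data' into the mapped constant buffer.
    template <typename T>
    void set(uint32_t shader, const T& params) {
        static_assert(sizeof(T) <= kMaxParamBytes, "params too large");
        shaderId = shader;
        size = sizeof(T);
        std::memcpy(data, &params, sizeof(T));
    }
};
```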

How to handle input layouts in DX11?

12 March 2012 - 03:43 AM

How do people handle input layouts elegantly in DirectX 11?

Simplifying things a little, I have an Effect class which is responsible for how to draw something and, amongst other things, holds the vertex shader to use. I also have an Element class which holds the geometry to draw, amongst other things (vertex buffer, index buffer, etc.).

The problem is that only my Element class knows about the format of the vertex buffers it uses, so it's really the only object that can create and set the input layout. However, in order to do this it needs to know which vertex shader is being used for that draw call, and in fact needs to create a different input layout for each shader that can be used to draw it, as well as knowing at run time which shader is being used so it can select the appropriate input layout.

I can make my Element object cache the ID3D11InputLayout in a map indexed on vertex structure type and vertex shader type, and share that map between all of the elements rather than creating the input layout each time it's asked to draw something (which I suspect would be horribly inefficient anyway...), but that just seems inelegant and mixes together the responsibilities of the Effect and the Element far too much.
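For reference, the cache I'm describing would look something like this, with void* standing in for ID3D11InputLayout and a callback standing in for the ID3D11Device::CreateInputLayout call, so the idea is visible without the D3D boilerplate (all names invented):

```cpp
#include <cstdint>
#include <functional>
#include <map>
#include <utility>

// Shared cache of input layouts keyed on (vertex format, vertex shader).
// In real code the values would be ID3D11InputLayout* and 'create' would
// wrap ID3D11Device::CreateInputLayout with the element's vertex
// description and the shader's input signature blob.
class InputLayoutCache {
public:
    using Key = std::pair<uint32_t, uint32_t>;  // (vertexFormatId, shaderId)

    void* get(uint32_t vertexFormatId, uint32_t shaderId,
              const std::function<void*()>& create) {
        Key key{vertexFormatId, shaderId};
        auto it = cache_.find(key);
        if (it != cache_.end())
            return it->second;        // hit: reuse the existing layout
        void* layout = create();      // miss: create exactly once
        cache_.emplace(key, layout);
        return layout;
    }

private:
    std::map<Key, void*> cache_;
};
```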

So how do people handle selecting which input layout to use, creating the ID3D11InputLayout objects, and so on? Is there something I'm missing, or is my overall design flawed and could be altered?

Thanks for any suggestions :)

Multithreading on d3d 11

17 September 2011 - 04:48 AM

In DirectX 11, is it thread safe to call ID3D11Device::CreateBuffer and CreateTexture2D (and create the associated views) while other work is going on in another thread using a device context on the same device?
If it's not thread safe, is it allowed to call them on other threads as long as I protect the calls with suitable locks?
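To illustrate, the pattern I have in mind is something like this, with a counter standing in for the actual create calls (the mutex would only be needed if the device's create calls turn out not to be thread safe):

```cpp
#include <mutex>
#include <thread>

// Guard for resource creation, in case ID3D11Device isn't internally
// synchronized. The rendering thread would take the same lock (or not,
// if creation is documented as thread safe).
std::mutex deviceMutex;
int resourcesCreated = 0;  // stand-in for CreateBuffer/CreateTexture2D calls

void loaderThread() {
    for (int i = 0; i < 1000; ++i) {
        std::lock_guard<std::mutex> lock(deviceMutex);
        // real version: device->CreateBuffer(...) / CreateTexture2D(...)
        ++resourcesCreated;
    }
}
```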

I'm sure this is documented somewhere, but I can't find it anywhere.

Dynamically loading and unloading of directx

04 December 2010 - 05:19 AM

I'm working on a small game project where I want to be able to use d3d11 but to fall back to d3d9 if it's not available / installed.

I can abstract out the drawing interface so that my game will work with either, but if I have to link with the D3D11 DLLs then my program won't even load if they are missing.

So I thought the best thing to do was to use LoadLibrary and GetProcAddress to load d3d9.dll and locate the "Direct3DCreate9" function, to do something similar for D3D11, and just use whichever loaded. (I'm fully aware the two APIs are not very similar and will need a considerable amount of abstraction to present the same interface to my game.)
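For the D3D9 side, the sort of thing I mean is this (error handling trimmed; a sketch of the approach rather than tested code, and obviously Windows-only):

```cpp
#include <windows.h>
#include <d3d9.h>

// Signature of the entry point we look up at run time instead of
// linking against d3d9.lib.
typedef IDirect3D9* (WINAPI* PFN_Direct3DCreate9)(UINT sdkVersion);

IDirect3D9* tryCreateD3D9(HMODULE* outModule)
{
    HMODULE dll = LoadLibraryW(L"d3d9.dll");
    if (!dll)
        return nullptr;  // d3d9 not installed; fall back or fail

    auto create = reinterpret_cast<PFN_Direct3DCreate9>(
        GetProcAddress(dll, "Direct3DCreate9"));
    if (!create) {
        FreeLibrary(dll);
        return nullptr;
    }

    *outModule = dll;                // keep the handle for FreeLibrary later
    return create(D3D_SDK_VERSION);  // same call as when linking statically
}
```

The D3D11 case would look the same with "d3d11.dll" and "D3D11CreateDevice"; presumably the important part when unloading is releasing every interface obtained from the DLL before calling FreeLibrary.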

My question though, is are there any problems with loading directx dynamically like this? And if I shut down the directx device can I safely unload the DLLs should I wish to?

I'd ideally like to do the same with opengl as well, although that seems more painful as there are hundreds of entry points to use GetProcAddress on so I might leave that out.

My question, though, is as above: can I safely load d3d9 and d3d11 dynamically, and can I unload the DLLs when I'm finished with them? A quick check seems to indicate that I can load them successfully and use them, but perhaps there is something I'm unaware of.

DirectX with C#

28 June 2010 - 02:01 AM

I'm looking at doing a small 3D graphics project in C#, mostly to enhance my C# skills. My options for Direct3D seem to be: 1) Managed DirectX, which is obsolete and unsupported, and only DirectX 9 anyway; 2) SlimDX, which is a very thin wrapper around D3D9/10/11; and 3) XNA, which is more of a framework around DX9 than a library.

Assuming my understanding is correct, I'm not entirely happy with any of those approaches. XNA looks good, but I really want to try DirectX 11, and I'm not looking for a framework, just a library. SlimDX looks good too, but it's a thin wrapper over DirectX, and it somehow doesn't feel comfortable to me to make many D3D calls via the pinvoke interface. Even if it's not an overhead that slows things down, it just feels like the wrong level of abstraction for the native interface to me, whether that makes sense or not :)

I've done a reasonable amount of D3D in C++, so I was thinking of writing a small DLL that I could call from C# using pinvoke, with a higher-level API offering calls such as "DrawModel" or "DrawListOfVertexBuffersWithAssociatedRenderStates" (only with a better name). That would considerably reduce the number of calls I'd have to make from C# to native code and would move them up a level in abstraction.
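To make that concrete, the native side of the DLL would export a plain C API for C# to pinvoke, something like this (all names invented; the bodies here are stubs standing in for the real D3D code):

```cpp
#include <cstdint>

// Export with a C ABI so the functions are easy to P/Invoke from C#.
// On non-Windows compilers the dllexport attribute is simply dropped.
#if defined(_WIN32)
#  define GAME_API extern "C" __declspec(dllexport)
#else
#  define GAME_API extern "C"
#endif

static int g_drawCalls = 0;  // stub state; real version holds D3D objects

// High-level draw call: the C# side passes a model id and a 4x4 world
// matrix; the native side would bind buffers/shaders and issue the
// actual D3D draw calls.
GAME_API void DrawModel(int32_t modelId, const float* worldMatrix)
{
    (void)modelId;
    (void)worldMatrix;
    ++g_drawCalls;
}

GAME_API int32_t GetDrawCallCount()
{
    return g_drawCalls;
}
```

On the C# side each of these becomes a single [DllImport] declaration, so the chatty per-state-change calls all stay in native code.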

It would probably be more efficient too, and it just feels better to me and will let me concentrate on the high-level logic in the C# code (what to draw, and the game logic, rather than the lowest-level interface to the native API).

Does that approach make sense? Or are there any hidden pitfalls?