

Member Since 12 May 2004
Offline Last Active Nov 10 2013 02:48 AM

Posts I've Made

In Topic: How does one use Color Key in D3D 11?

13 October 2012 - 11:28 AM

I imagine you could do this in your pixel shader quite easily: if the pixel you sample matches your color key, simply discard it instead of outputting it.

In Topic: starting game engine

09 October 2012 - 05:23 AM

Interestingly, saying "I want to build a rendering component that has no unnecessary coupling to the rest of the program" sounds better than "I want to build an engine", although the two are not really very different.

In Topic: Passing shader parameters from a scene node to the renderer

30 September 2012 - 07:40 AM

Thanks for the replies, they are all very interesting :)

Looking at ATC's post, that is exactly what I want to do. My problem really is understanding how to define "Material" in a useful way.

I see "Material" as being a fairly dumb collection of data items: which shaders to use, and the parameters to send to those shaders.
The problem is that the parameters are different for each shader.

For example, if I'm calling Render::Draw to draw a segment of a landscape, the shader would require a heightmap, some textures, and a texture blend map.
If I was calling Render::Draw to draw a 3D model, it would require a texture, and perhaps some animation data to send to the shader.
If I was calling Render::Draw to draw a bit of water, it would require textures, wave heights, and frequencies to send to the shader (perhaps...).

The problem is that Render::Draw shouldn't have to look at which shader the material tells it to use and then pick out the appropriate parameters to set for that shader. My first thought was standard OO design: Material could have a virtual function called "setShaderParameters" which could do that for the Draw function, but that seems both ugly and highly inefficient.

My other thought is that the material could contain a set of type/value pairs that tell "draw" to set *this* value in *this* shader constant slot; then the draw function wouldn't have to know anything about what the data means.

In Topic: "Mesh" should be an interface or an abstract class?

28 September 2012 - 08:11 AM

I love the example given there.
A real render function would likely need a "material" as a parameter too, though?
I'm wondering how you'd go about representing this.
struct Material
{
    VertexShader* vertexShader;
    PixelShader* pixelShader;
    // ... shader parameters ...
};

How would you represent the shader parameters, though? They are likely to be different for each shader, and the render() code shouldn't really have to know what parameters exist and need to be set for each different type of shader, should it?

In my older code the display objects are responsible for drawing themselves (which is what you are avoiding here), so it's a lot easier: an "Animated3dModel" sets a specific shader on the device object, and since it already knows what that shader requires it can set the parameters directly. It's not so clear to me how to communicate all this in a RenderOperation object.

In Topic: How to handle input layouts in DX11?

12 March 2012 - 05:00 AM

Before issuing a draw-call, I look at the currently bound Element's vertex description and the currently bound Effect's vertex description (to use your terminology) and then use a 2D look-up-table to fetch the appropriate input layout to use.

(I don't like my terminology for "Element"; I plan to change it to something more descriptive!)
Do you calculate all the possible combinations in advance, or generate and cache them when needed?
I was trying to avoid my drawing code having to know anything about the geometry it was asking the element to draw, but I guess that's unavoidable because of the coupling between the two.