

Member Since 16 Oct 2006
Offline Last Active Sep 05 2014 11:35 AM

Topics I've Started

Efficient Omnidirectional Shadow Maps (ShaderX3)

21 November 2013 - 06:50 PM

I'm trying to implement some of the techniques described in this article for culling objects during shadow map generation. In particular, I'm trying to get the method of computing projected shadow-caster bounding volumes working, to cull objects that may be visible to a light frustum but don't actually cast a shadow into the camera's view frustum. The author describes this as follows:



Therefore, if shadow casting objects are represented as bounding boxes, a frustum can be used to represent the projected shadows, and we can reuse the frustum-frustum culling test developed above.


To build this frustum, the demo computes a tight bounding cone surrounding the light position and each shadow caster's world-space bounding box. This cone is trivially converted into a centered frustum.


It sounds like he's describing a cone with its vertex at the light position that extends towards the object and encloses its bounding box. However, this volume definitely will not contain the shadow projected by the object (since the shadow is going to extend beyond the object, away from the light).


The attached figure, from the chapter, shows the case this technique is meant to let us cull, complete with a picture of a frustum that looks like the one described, which doesn't seem very helpful for deciding to cull that object.



I don't have the source code to the demo so I can't be sure what his exact implementation looks like. Does anyone else have another interpretation of this that makes more sense?
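For concreteness, here's my reading of the construction as a sketch (my own code and names, not the book's demo): apex at the light, axis towards the box center, half-angle widened until all eight corners fit. As far as I can tell this bounds the caster itself, and to bound its shadow you'd have to extend the volume past the box, away from the light, which is exactly what confuses me.

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return { v.x / l, v.y / l, v.z / l }; }

struct Cone { Vec3 apex; Vec3 axis; float halfAngle; };

// Tight cone with apex at the light that encloses the caster's
// world-space AABB: axis points at the box center, half-angle is the
// widest angle from the axis to any of the eight corners.
Cone BoundingCone(Vec3 lightPos, Vec3 boxMin, Vec3 boxMax)
{
    Vec3 center = { (boxMin.x + boxMax.x) * 0.5f,
                    (boxMin.y + boxMax.y) * 0.5f,
                    (boxMin.z + boxMax.z) * 0.5f };
    Vec3 axis = normalize(sub(center, lightPos));

    float halfAngle = 0.0f;
    for (int i = 0; i < 8; ++i) {
        Vec3 corner = { (i & 1) ? boxMax.x : boxMin.x,
                        (i & 2) ? boxMax.y : boxMin.y,
                        (i & 4) ? boxMax.z : boxMin.z };
        Vec3 toCorner = normalize(sub(corner, lightPos));
        float c = std::clamp(dot(axis, toCorner), -1.0f, 1.0f);
        halfAngle = std::max(halfAngle, std::acos(c));
    }
    return { lightPos, axis, halfAngle };
}
```

Note this cone only just touches the far corners of the box; nothing about it covers the region the shadow is cast into.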

ID3D11ShaderReflection struct field names

23 May 2013 - 06:37 PM

I'm trying to use the ID3D11ShaderReflection API to basically dump all the constant information about my shaders: the location of every variable and every field in every structure. I'm iterating over each variable in each constant buffer, and when I encounter a variable of type D3D_SVC_STRUCT, I'm trying to iterate over each field. However, I can't seem to get the names of the struct fields.


The type description for the struct itself will have a name like "material_t" (the name of the struct type), but the field types obtained with GetMemberTypeByIndex just have names like "float3", "float4", etc. I realize this follows the model established by the struct type name, but I was hoping to get the actual field names as well, so I can use the information I'm extracting to locate specific fields within the struct.


I'm trying to fit this inside an existing code base that expects to be able to set individual fields by name. Does anyone know of a way I can get this information with the reflection API? I realize I can get the disassembly and parse the field names out of there, but that feels pretty clumsy when there is an (apparently almost-functional) reflection API provided by Microsoft.




I should add that I looked at ID3D11ShaderReflectionType::GetMemberTypeName, which takes an index and returns a name, and *sounds* like it returns the information I want; however, it seems to always return NULL on the shaders I've tested.




Of course: I was asking the field type itself for a member type name (and it has no members). I needed to ask the parent struct's type for the name! So this is resolved :D

Sampling with half texel offsets and point filtering

22 October 2012 - 08:03 PM

I'm looking at a shader that anti-aliases lines by sampling the 8 neighbors around each pixel and blending the colors together to smooth out the line. The offsets being used to find the neighbors, though, are apparently 0.5 / texture_size, and it's producing inconsistent behavior where the width of a particular line changes from frame to frame as the camera moves. This is especially evident for perfectly horizontal and vertical lines.

I assume sampling halfway between two texels has some meaning when using linear filtering, but this code is using point filtering. I'm not sure what the expected behavior should be here. Should it consistently round up or down? Doesn't it make more sense to use 1.0 / texture_size to locate neighboring texels?

I know I'm being a little vague in my description; I can provide more details about the implementation, but I wanted to see if anyone could educate me first on the behavior I should expect from these offsets with nearest-neighbor sampling.
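To make my confusion concrete: as I understand point (nearest) filtering, the hardware picks the texel whose footprint contains the coordinate, i.e. floor(u * size). Here's a tiny sketch of why the 0.5 / texture_size offset looks fragile to me (the function name is mine):

```cpp
#include <cmath>

// With point (nearest) filtering, the sampled texel index along one
// axis is floor(u * size), for u in [0, 1).
int NearestTexel(float u, int size)
{
    return (int)std::floor(u * (float)size);
}

// For a 256-wide texture, the center of texel 5 is u = 5.5 / 256.
// Offsetting by 0.5 / 256 gives u = 6.0 / 256, which sits exactly on
// the boundary between texels 5 and 6: in exact math floor() picks 6,
// but the tiniest interpolation error in the shader flips it to 5.
// Offsetting by 1.0 / 256 gives u = 6.5 / 256, the center of texel 6,
// which is robust against that error.
```

That boundary sensitivity would seem to explain lines whose width flickers as the camera moves.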


Emulating DrawPrimitiveUP/DrawIndexedPrimitiveUP with OpenGL

10 April 2011 - 01:41 PM

I'm trying to add support in my OpenGL renderer for what DX9 calls "user primitives". This basically means the user passes an array of vertex data (and index data, in the indexed case) that they want drawn as a list of triangles. Once the DrawUserPrimitives call returns, they should be able to do whatever they want with the original pointers/arrays they passed.

I could implement this with immediate mode but I'm trying to not use any deprecated functionality, so what I originally came up with was this:

void DrawUserIndexedPrimitives(vertexDeclaration, vertexArray, indexArray)
	foreach vertexElement in vertexDeclaration
		glEnableVertexAttribArray(vertexElement.index)
		glVertexAttribPointer(vertexElement, vertexArray + vertexElement.offset)

	glDrawElements(indexArray)

I'm getting strange behavior (missing and corrupted triangles) with this, though, which I'm thinking is because the driver is actually storing my pointers and not copying the data, so if I reuse or delete the arrays it ends up using bad vertex/index data when it gets around to actually rendering the triangles.

Does anyone know if this is actually the case? I can't seem to find any documentation on how this is implemented (whether it will make a copy or not).

If this *is* the case, then I'm thinking of using dynamic-draw vertex/index buffers to hold the user primitive data instead. But this leads to a similar uncertainty about when it's safe to discard the contents of the buffer. Can I safely discard it after I issue the glDrawElements call, or should I do it at the end of the frame?

EDIT: Gah, or it could be me making typos... nevermind.

Textures and sampler states

08 April 2011 - 02:04 PM

Since texture parameters are bound to a particular texture object in OpenGL, if you want to use the same texture in two places with different sampler settings (assuming pre-3.3, so no sampler objects), you have to create two different texture objects. Does anyone know if there's a way to at least let these two objects share the same underlying data, or do you have to physically duplicate *all* of it?