About doesnotcompute

  1. Efficient Omnidirectional Shadow Maps (ShaderX3)

    The larger frustum on the right was meant to be the camera; the one on the left was for the light. I guess I should have made that clearer. The algorithm you described earlier says to choose the camera planes that the light is in front of; in this case they are A, B, and C. The red object is also in front of all these planes, and it is contained in the light frustum, so it would pass all the plane tests and need to be rendered.
  2. Efficient Omnidirectional Shadow Maps (ShaderX3)

    I think the part I was missing in the implementation from the article is that the object's shadow frustum doesn't have to actually originate at the light position; the near plane can be pushed all the way out to the edge of the object's bounding box/sphere, which makes it a much tighter fit to the region where the shadow can be cast. Here's another picture showing one of these object frusta in green. I think this also shows a case where your algorithm will identify the object as needing to be drawn when it actually casts a shadow that is not visible to the view frustum.
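    To make the "pushed-out near plane" idea concrete, here's a minimal sketch (my own helper names, not the article's code) of computing the tightened near-plane distance for a light-to-object shadow frustum: the closest point of the object's bounding sphere along the light-to-center axis.

[CODE]
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Length(Vec3 v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Near-plane distance for the light->object shadow frustum: distance from
// the light to the bounding sphere's nearest surface point along the axis.
static float TightNearDistance(Vec3 lightPos, Vec3 center, float radius)
{
    Vec3 d = { center.x - lightPos.x, center.y - lightPos.y, center.z - lightPos.z };
    float dist = Length(d) - radius;
    return dist > 0.0f ? dist : 0.0f; // light inside the sphere -> degenerate, clamp
}

int main()
{
    Vec3 light  = { 0, 0, 0 };
    Vec3 center = { 10, 0, 0 };
    assert(TightNearDistance(light, center, 2.0f) == 8.0f);
    assert(TightNearDistance(light, center, 15.0f) == 0.0f); // light inside sphere
    return 0;
}
[/CODE]

    The far plane would still come from the light's range, so the frustum only spans the region the object's shadow can actually occupy.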
  3. Efficient Omnidirectional Shadow Maps (ShaderX3)

    I see. Your algorithm definitely seems more efficient than what I've been working on. I tried to implement what you just described and I'm getting fewer objects culled than I had previously, though. I implemented it like this:

[CODE]
static void GetVisibleForPointLight(BoundingFrustum const & viewfrustum,
                                    ForwardPoint const * light,
                                    BoundingFrustum const & lightFrustum,
                                    GeometryList const & everything,
                                    GeometryList & visible)
{
    Plane planes[12];
    int count = 0;
    for (int i = 0; i < 6; i++)
    {
        if (viewfrustum.Planes[i].DotCoordinate(light->Position()) > 0.0f)
            planes[count++] = viewfrustum.Planes[i];
    }
    Memory::Copy(&planes[count], lightFrustum.Planes, 6);
    count += 6;

    for (auto o = everything.Begin(); o != everything.End(); ++o)
    {
        BoundingSphere const & s = (*o)->WorldBound();
        int inFront = 0;
        for (int i = 0; i < count; i++)
        {
            if (planes[i].DotCoordinate(s.Center) < -s.Radius)
                break;
            ++inFront;
        }
        if (inFront == count)
            visible.Add(*o);
    }
}
[/CODE]

    I'm calling this once per light direction rather than for all directions at once; otherwise I would have to frustum-test the objects again per direction (if I'm not mistaken), since not all the objects are visible in each direction. In my old method (the one I was trying to implement from the ShaderX article) I was computing the light -> object frustum for each object, testing that against the camera frustum, putting everything in one big list, and then culling per direction again. There are about 300 objects in the scene I'm testing, and I'm seeing the new version cull anywhere from 20-30 fewer objects up to more than 100 fewer for some camera orientations. In both cases everything looks correct, so it's not incorrectly culling anything. Maybe I have an error in my code somewhere, though.
  4. Efficient Omnidirectional Shadow Maps (ShaderX3)

    Ok, that makes sense, thanks. I was also just generally interested in understanding what this article was describing, because it's presented like: you calculate this special frustum for each object, do a frustum-frustum test against the view frustum, and that's it. I think if you construct the cone/frustum described in those sentences I quoted and then extend it to the light's far clip plane, it should enclose any possible shadow cast by the object. But it doesn't account for the case shown in the picture, so I guess an extra test like the one you described is needed.
    I'm trying to implement some of the techniques described in this article for culling objects during shadow map generation. In particular, I'm trying to get working the method of computing projected shadow-caster bounding volumes to cull objects that may be visible to a light frustum but don't actually cast a shadow into the camera view frustum. The author describes this as follows:     It sounds like he's describing a cone with its vertex at the light position that extends towards the object and encloses its bounding box. However, this volume definitely will not contain the shadow projected by the object (since the shadow is going to extend beyond the object in the direction away from the light). The attached figure is from the chapter, showing the case this technique is meant to allow us to cull, complete with a picture of a frustum that looks like the one described, which doesn't look very helpful for deciding to cull that object. I don't have the source code to the demo so I can't be sure what his exact implementation looks like. Does anyone else have another interpretation of this that makes more sense?
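    For reference, here's how I'd construct the cone I think is being described: apex at the light, axis towards the bounding-sphere center, half-angle just wide enough to enclose the sphere. This is my own sketch (hypothetical names, assuming a bounding sphere rather than a box), not the chapter's code; the point of my question is that this cone bounds the *object*, and only bounds the shadow if you extend it out to the light's range.

[CODE]
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static float Length(Vec3 v)      { return std::sqrt(Dot(v, v)); }

// Half-angle of the smallest cone with apex at `light` that encloses the
// sphere (center, radius). Assumes the light is outside the sphere.
static float ConeHalfAngle(Vec3 light, Vec3 center, float radius)
{
    Vec3 d = { center.x - light.x, center.y - light.y, center.z - light.z };
    return std::asin(radius / Length(d));
}

int main()
{
    Vec3 light  = { 0, 0, 0 };
    Vec3 center = { 10, 0, 0 };
    // sin(half) = 5/10, so the half-angle is 30 degrees.
    float half = ConeHalfAngle(light, center, 5.0f);
    assert(std::fabs(half - 0.5235988f) < 1e-4f);
    return 0;
}
[/CODE]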
  6. OpenGL OpenGL Windows Question

    At a minimum, on Windows you should use GLEW. There are no system headers/libs for accessing modern OpenGL on Windows, so you need to query every function at runtime using wglGetProcAddress. GLEW automates all of this: it queries all the function pointers your card supports and stores them in global function pointers, so you can use it like a regular C API. It also provides a header with all the typedefs and constants you'll need to work with these functions.   It's worth pointing out that gDebugger is basically a legacy application now. It was purchased by AMD and has been integrated into their CodeXL profiler/debugger.
  7. Replacement for ID3DXConstantTable?

    Yes, the names are preserved. We use it to generate a look-up table so any constant can be accessed by its original name. Here's a sample of some disassembly:

[CODE]
//
// Generated by Microsoft (R) HLSL Shader Compiler 9.30.9200.16384
//
// Parameters:
//
//   struct
//   {
//       float4 diffuse;
//       float4 specular;
//
//   } MaterialColor;
//
//   row_major float4x4 WorldViewProj;
//
//
// Registers:
//
//   Name          Reg   Size
//   ------------- ----- ----
//   WorldViewProj c0       4
//   MaterialColor c4       1
//

    vs_2_0
    dcl_position v0  // vertex<0,1,2>

#line 521 "_vs_dump_1000_00000000_00000000_00040009.1955"
    mul r0, v0.y, c1
    mad r0, v0.x, c0, r0
    mad r0, v0.z, c2, r0
    add oPos, r0, c3  // ::main<0,1,2,3>

#line 516
    mov oD0, c4  // ::main<4,5,6,7>

// approximately 5 instruction slots used
[/CODE]

    Don't be surprised if the compiler does things like allow constants to overlap if it determines part of an array or struct is unreferenced. If you have a lot of different shaders, it may take some trial and error to work through some of these cases. In fact, I think MaterialColor.specular is unreferenced in this shader, so you'll see that it is reporting the size of the struct as 1 register instead of 2.
  8. Replacement for ID3DXConstantTable?

    We had to support run-time compilation/reflection here, so I implemented a solution using D3DDisassemble and then manually parsed the disassembly to generate a look-up table for constants. It wasn't exactly easy (or pretty), but it seems to be working out fine.
  9. The Features of Direct3D 11.2

    Another big 11.2 feature is the ability to use the HLSL compiler in Windows Store apps (and hence mobile apps) to do runtime shader compilation.
    10. I'm trying to use the ID3D11ShaderReflection API to basically dump all the constant information about my shaders: the location of every variable and every field in every structure. I'm iterating over each variable in each constant buffer, and when I encounter a variable of type D3D_SVC_STRUCT, I'm trying to iterate over each field. I cannot, however, seem to get the names of the struct fields.    The type description for the struct itself will have a name like "material_t" (the name of the struct type), but the actual field types obtained with GetMemberTypeByIndex just have names like "float3", "float4", etc. I realize this follows the model established by the struct type name, but I was hoping to be able to get the actual field name as well, so I can use the information I'm extracting to locate specific fields within the struct.   I'm trying to fit this inside an existing code base that expects to be able to set individual fields by name. Does anyone know of a way I can get this information with the reflection API? I realize I can get the disassembly and parse the field names out of there, but that feels pretty clumsy when there is an (apparently almost functional) reflection API provided by Microsoft.   EDIT:   I should add, I looked at ID3D11ShaderReflectionType::GetMemberTypeName, which takes an index and returns a name, and *sounds* like it returns the information I want; however, it seems to always return NULL on the shaders I've tested.   EDIT2:   Of course, I was asking the field type itself for a member type name (and it has no members); I needed to ask the parent variable's type for the name! So this is resolved :D
    11. I'm looking at a shader that is anti-aliasing lines by sampling at the 8 neighbors around each pixel and blending the colors together to smooth out the line. The offsets being used to find the neighbors, though, are apparently 0.5 / texture_size, and it's producing inconsistent behavior where the width of a particular line will change from frame to frame as the camera moves. This is especially evident for perfectly horizontal and vertical lines. I assume sampling halfway between two texels has some meaning when using linear filtering, but this code is using point filtering. I'm not sure what the expected behavior should be here? Should it be consistently rounding up or down? Doesn't it make more sense to use 1.0 / texture_size to locate neighboring texels? I know I'm being a little vague in my description; I can provide more details about the implementation, but I wanted to see if anyone could educate me first on the behavior I should be expecting using these offsets with nearest-neighbor sampling. Thanks.
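    For what it's worth, here's a tiny sketch (my own, not the shader in question) of why a 0.5 / texture_size offset looks fragile with point filtering: starting from a texel center, that offset lands the sample exactly on a texel boundary, so the slightest floating-point wobble in the interpolated UVs decides which texel you get, while a full 1.0 / texture_size offset lands squarely in the neighbor's center.

[CODE]
#include <cassert>
#include <cmath>

// Which texel does nearest-neighbor sampling pick for a given u in [0,1)?
static int NearestTexel(float u, int size)
{
    return static_cast<int>(std::floor(u * size));
}

int main()
{
    const int size = 8;
    float center = (3 + 0.5f) / size;        // center of texel 3

    // A half-texel offset lands exactly on the boundary between texels 3
    // and 4 -- which one you get depends on rounding.
    float halfOffset = center + 0.5f / size;
    assert(halfOffset * size == 4.0f);       // exactly on the 3|4 boundary

    // A full-texel offset lands on the center of texel 4 -- unambiguous.
    float fullOffset = center + 1.0f / size;
    assert(NearestTexel(fullOffset, size) == 4);
    return 0;
}
[/CODE]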
    12. So it's the renormalization that introduces the "non-linearity", I guess. Because otherwise you could just distribute the dot product into that expression for 'N' and get the same result as in the per-vertex case. Here are some screenshots from RenderMonkey showing per-vertex NdotL along with per-pixel, with and without the renormalization. Per vertex: [url="http://img194.imageshack.us/i/rmpervertex.png/"][img]http://img194.imageshack.us/img194/4374/rmpervertex.th.png[/img][/url]. Per pixel w/out normalize: [url="http://img850.imageshack.us/i/rmperpixelnonormalize.png/"][img]http://img850.imageshack.us/img850/7438/rmperpixelnonormalize.th.png[/img][/url] Per pixel w/normalize: [url="http://img27.imageshack.us/i/rmperpixel.png/"][img]http://img27.imageshack.us/img27/990/rmperpixel.th.png[/img][/url]
    13. [quote name='rdragon1' timestamp='1325276847' post='4898235'] [quote name='doesnotcompute' timestamp='1325274943' post='4898227'] I think you can make the same argument that the diffuse lighting calculation should be the same whether it's done per pixel or per vertex. In the per-vertex case you're computing the N*L dot product at the vertex and interpolating that to each pixel. In the per-pixel case you're interpolating the normal and computing the dot product per pixel, but the dot product is a linear operation so it interpolates the same. With specular lighting you have a non-linear term (a value raised to the specular exponent) which does not interpolate the same. Which is why vertex-lit meshes usually have weird-looking specular highlights. [/quote] Definitely not. You're talking about the difference between vertex lighting and per-pixel lighting. Calculating NdotL at each vertex and interpolating the result is not the same as interpolating the normal and calculating NdotL at each pixel. Think of a quad with vertex normals pointing away from the center, and a point light source directly above the center of the quad. [/quote] Yeah, you're right (I just confirmed it in RenderMonkey); I thought that seemed a little suspect as I was writing it. I don't quite see how it fails an interpolation calculation like the one I did above, though.
    14. I think you can make the same argument that the diffuse lighting calculation should be the same whether it's done per pixel or per vertex. In the per-vertex case you're computing the N*L dot product at the vertex and interpolating that to each pixel. In the per-pixel case you're interpolating the normal and computing the dot product per pixel, but the dot product is a linear operation so it interpolates the same. With specular lighting you have a non-linear term (a value raised to the specular exponent) which does not interpolate the same. Which is why vertex-lit meshes usually have weird-looking specular highlights.
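    To put numbers on rdragon1's counterexample, here's a small sketch (my own, just illustrative) using two vertex normals tilted away from each other with the light straight up: interpolating NdotL computed at the vertices gives one value at the midpoint, while dotting the renormalized interpolated normal gives another, so the two are not the same even for diffuse.

[CODE]
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

static float Dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

static Vec3 Normalize(Vec3 v)
{
    float len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

static Vec3 Lerp(Vec3 a, Vec3 b, float t)
{
    return { a.x + (b.x - a.x) * t, a.y + (b.y - a.y) * t, a.z + (b.z - a.z) * t };
}

int main()
{
    Vec3 lightDir = { 0, 1, 0 };          // light straight up
    Vec3 n0 = Normalize({  1, 1, 0 });    // vertex normals tilted apart
    Vec3 n1 = Normalize({ -1, 1, 0 });

    // Vertex lighting: compute NdotL at each vertex, interpolate the result.
    float perVertex = 0.5f * Dot(n0, lightDir) + 0.5f * Dot(n1, lightDir);

    // Per-pixel lighting: interpolate the normal, renormalize, then dot.
    float perPixel = Dot(Normalize(Lerp(n0, n1, 0.5f)), lightDir);

    assert(std::fabs(perVertex - 0.70710678f) < 1e-5f); // cos 45
    assert(std::fabs(perPixel - 1.0f) < 1e-5f);         // midpoint normal points at the light
    return 0;
}
[/CODE]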
  15. OpenGL gluLookat replacement

    This is my LookAt function. I think I started from the Mesa implementation originally. My matrix class is row-major, so that's why everything is transposed from yours; I have to pass transpose=true to glUniformMatrix4fv because of this.

[CODE]
INLINE static void CreateLookAt(Vector3 const & position, Vector3 const & target,
                                Vector3 const & upVector, Matrix & result)
{
    Vector3 forward;
    Vector3::Subtract(target, position, forward);
    forward.Normalize();

    Vector3 right;
    Vector3::Cross(forward, upVector, right);
    right.Normalize();

    Vector3 up;
    Vector3::Cross(right, forward, up);
    up.Normalize();

    result.elements[0]  = right.X;    result.elements[1]  = right.Y;    result.elements[2]  = right.Z;
    result.elements[4]  = up.X;       result.elements[5]  = up.Y;       result.elements[6]  = up.Z;
    result.elements[8]  = -forward.X; result.elements[9]  = -forward.Y; result.elements[10] = -forward.Z;
    result.elements[12] = 0.0f;       result.elements[13] = 0.0f;       result.elements[14] = 0.0f;

    result.elements[3]  = -Vector3::Dot(right, position);
    result.elements[7]  = -Vector3::Dot(up, position);
    result.elements[11] =  Vector3::Dot(forward, position);
    result.elements[15] = 1.0f;
}
[/CODE]
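    As a sanity check, here's a self-contained version of the same math with plain float arrays (my own re-derivation, not the class above) that verifies the two properties a view matrix built this way should have: the eye position maps to the origin, and the target lands on the negative Z axis.

[CODE]
#include <cassert>
#include <cmath>

static void Cross(const float a[3], const float b[3], float out[3])
{
    out[0] = a[1]*b[2] - a[2]*b[1];
    out[1] = a[2]*b[0] - a[0]*b[2];
    out[2] = a[0]*b[1] - a[1]*b[0];
}
static float Dot(const float a[3], const float b[3])
{ return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]; }
static void Normalize(float v[3])
{
    float len = std::sqrt(Dot(v, v));
    v[0] /= len; v[1] /= len; v[2] /= len;
}

// Row-major look-at matrix, same construction as the CreateLookAt above.
static void LookAt(const float eye[3], const float target[3],
                   const float up[3], float m[16])
{
    float f[3] = { target[0]-eye[0], target[1]-eye[1], target[2]-eye[2] };
    Normalize(f);
    float r[3]; Cross(f, up, r); Normalize(r);
    float u[3]; Cross(r, f, u);
    m[0] =  r[0]; m[1] =  r[1]; m[2]  =  r[2]; m[3]  = -Dot(r, eye);
    m[4] =  u[0]; m[5] =  u[1]; m[6]  =  u[2]; m[7]  = -Dot(u, eye);
    m[8] = -f[0]; m[9] = -f[1]; m[10] = -f[2]; m[11] =  Dot(f, eye);
    m[12] = 0; m[13] = 0; m[14] = 0; m[15] = 1;
}

static void TransformPoint(const float m[16], const float p[3], float out[3])
{
    for (int row = 0; row < 3; row++)
        out[row] = m[row*4+0]*p[0] + m[row*4+1]*p[1] + m[row*4+2]*p[2] + m[row*4+3];
}

int main()
{
    float eye[3] = { 0, 0, 5 }, target[3] = { 0, 0, 0 }, up[3] = { 0, 1, 0 };
    float m[16], out[3];
    LookAt(eye, target, up, m);

    TransformPoint(m, eye, out);     // the eye maps to the origin
    assert(std::fabs(out[0]) < 1e-5f && std::fabs(out[1]) < 1e-5f && std::fabs(out[2]) < 1e-5f);

    TransformPoint(m, target, out);  // the target lands on the -Z axis
    assert(std::fabs(out[0]) < 1e-5f && std::fabs(out[1]) < 1e-5f && std::fabs(out[2] + 5.0f) < 1e-5f);
    return 0;
}
[/CODE]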