Digitalfragment


#5104209 Spherical Harmonics: Need Help :-(

Posted by Digitalfragment on 24 October 2013 - 04:58 PM


 

This is what I don't understand from your reply: is it necessary to render the cube maps? I mean, I understand that SH will help me "compress" the cube maps (because storing them for, let's say, a 32x32x32 grid of probes would require too much memory)... and then look them up using SH... but wouldn't it be too expensive to create those cube maps in real time? I'm using RSM to store virtual point lights of the scene... is it possible to store incoming radiance into SH without using cube maps?

Most video cards can render the scene into 6 128x128 textures pretty damn quick, given that most video cards can render complex scenes at resolutions of 1280x720 and up without struggling. You would then also perform the integration of the cubemap in a shader, instead of pulling the rendered textures down from the GPU and performing it on the CPU. This yields a table of colours in a very small texture (usually a 9 pixel wide texture) which can then be used in your shaders at runtime. You don't need to recreate your SH probes every frame either, so the cost is quite small.
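To illustrate that integration step, here's a minimal CPU-side sketch (function names are made up; the constants are the standard 9-term second-order real SH basis, so the shader version is the same math). Summing this over every cubemap texel, weighted by the texel's solid angle, gives the 9 RGB coefficients that fit in that tiny texture:

#include <cmath>

struct Vec3 { float x, y, z; };

// Evaluate the 9 band-0..2 real SH basis functions for a unit direction.
void SHBasis(const Vec3& d, float sh[9])
{
    sh[0] = 0.282095f;                             // Y(0, 0)
    sh[1] = 0.488603f * d.y;                       // Y(1,-1)
    sh[2] = 0.488603f * d.z;                       // Y(1, 0)
    sh[3] = 0.488603f * d.x;                       // Y(1, 1)
    sh[4] = 1.092548f * d.x * d.y;                 // Y(2,-2)
    sh[5] = 1.092548f * d.y * d.z;                 // Y(2,-1)
    sh[6] = 0.315392f * (3.0f * d.z * d.z - 1.0f); // Y(2, 0)
    sh[7] = 1.092548f * d.x * d.z;                 // Y(2, 1)
    sh[8] = 0.546274f * (d.x * d.x - d.y * d.y);   // Y(2, 2)
}

// Accumulate one cubemap texel into the 9 RGB coefficients.
// 'weight' is the texel's solid angle on the sphere.
void SHAccumulate(const Vec3& dir, const Vec3& radiance, float weight, Vec3 coeffs[9])
{
    float sh[9];
    SHBasis(dir, sh);
    for (int i = 0; i < 9; ++i)
    {
        coeffs[i].x += radiance.x * sh[i] * weight;
        coeffs[i].y += radiance.y * sh[i] * weight;
        coeffs[i].z += radiance.z * sh[i] * weight;
    }
}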

The Stupid SH Tricks paper also has functions for projecting directional, point and spot lights directly onto the SH coefficient table; however, the cubemap integration is more flexible, as you can trivially incorporate area and sky lights.




#5090246 Passing an array for each vertex

Posted by Digitalfragment on 29 August 2013 - 05:28 PM

Vertex attributes cannot be arrays. Attributes can only be one of the known DXGI formats (and even then a few are reserved for texture use only). Any change to a vertex format to add or remove members typically requires a matching change to the vertex shaders that refer to it.

If you really want to go down this route, settle on a count that's a multiple of 4 and just use float4 attributes, so they take up entire attribute registers.
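For example (a hypothetical D3D11 layout, not from the thread), eight extra floats per vertex packed as two float4 attributes:

#include <d3d11.h>

// C++ side: two float4 attributes following the position.
D3D11_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 0,                            D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
    { "TEXCOORD", 1, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, D3D11_APPEND_ALIGNED_ELEMENT, D3D11_INPUT_PER_VERTEX_DATA, 0 },
};

// Shader side, the vertex shader unpacks the two float4s however it likes:
//   float3 position : POSITION;
//   float4 data0    : TEXCOORD0;
//   float4 data1    : TEXCOORD1;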




#5089705 skeletal animation on pieces of a model

Posted by Digitalfragment on 27 August 2013 - 10:40 PM

Are you talking about packing the matrices in a texture rather than sending it as a uniform?

 

Also my render state may not be sorted by animation state, but by shaders and materials and meshes, although I'm considering sorting by animation state as well.

 

I'm also considering trying instancing at some point.

Packing them into a texture is the preferred approach. The texture containing the bones is only written once, and can then be read by as many meshes as needed.
It avoids running into shader constant limits. It also makes use of the texture fetch hardware, which runs in parallel to the shader math and can yield faster execution than shader constant fetches (google "shader constant waterfalling" for an explanation of this).
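A minimal sketch of the upload side (names hypothetical; assumes a dynamic (4 * boneCount) x 1 RGBA32F texture, four texels per bone):

#include <cstring>
#include <d3d11.h>

// Write the skinning matrices into the bone texture once per frame;
// every skinned draw then reads from the same texture.
void UploadBoneTexture(ID3D11DeviceContext* ctx, ID3D11Texture2D* boneTex,
                       const float* boneMatrices, // boneCount * 16 floats
                       unsigned boneCount)
{
    D3D11_MAPPED_SUBRESOURCE mapped;
    if (SUCCEEDED(ctx->Map(boneTex, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        // 4 texels (rows of the matrix) per bone, 4 floats per texel;
        // a height-1 texture so the row pitch can be ignored here.
        memcpy(mapped.pData, boneMatrices, boneCount * 16 * sizeof(float));
        ctx->Unmap(boneTex, 0);
    }
}

// In the vertex shader the matrix is rebuilt from 4 loads, along the lines of:
//   float4 r0 = boneTexture.Load(int3(bone * 4 + 0, 0, 0));
//   ...then float4x4(r0, r1, r2, r3) gives the skinning matrix.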




#5089660 Deferred Shading Decals on Animated Characters

Posted by Digitalfragment on 27 August 2013 - 07:13 PM

It's a matter of having the projection matrix for the decal texture attached to the skeleton appropriately. This way, as the rigged character moves around, the projection axis for the blood splatter moves with the model.

 

For characters that are skinned to multiple bones, you effectively want to anchor your decal projection matrix to multiple bones too. Basically, take the vertex closest to the center of the projection and use its skinning weights etc.
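Something like this for the single-bone case (a sketch; the matrix helpers and the column-major convention are assumptions, not from the thread). The decal's offset relative to the bone is captured at spawn time, and its view-projection is rebuilt from the animated bone each frame:

struct Matrix4 { float m[16]; }; // column-major, translation in m[12..14]

Matrix4 Multiply(const Matrix4& a, const Matrix4& b)
{
    Matrix4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

// Inverse of a rigid transform (rotation + translation only).
Matrix4 InverseRigid(const Matrix4& a)
{
    Matrix4 r{};
    for (int c = 0; c < 3; ++c)            // transpose the 3x3 rotation
        for (int row = 0; row < 3; ++row)
            r.m[c * 4 + row] = a.m[row * 4 + c];
    for (int row = 0; row < 3; ++row)      // translation = -R^T * t
        r.m[12 + row] = -(r.m[0 + row] * a.m[12] +
                          r.m[4 + row] * a.m[13] +
                          r.m[8 + row] * a.m[14]);
    r.m[15] = 1.0f;
    return r;
}

// decalLocal was captured at spawn: InverseRigid(boneWorldAtSpawn) * decalWorldAtSpawn.
// Rebuilding the view-projection from the animated bone keeps the splat glued on.
Matrix4 BuildDecalViewProj(const Matrix4& boneWorldNow, const Matrix4& decalLocal,
                           const Matrix4& decalProjection)
{
    Matrix4 decalWorld = Multiply(boneWorldNow, decalLocal);
    return Multiply(decalProjection, InverseRigid(decalWorld));
}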




#5089316 Question about downsampling

Posted by Digitalfragment on 26 August 2013 - 05:07 PM

Going straight to 1/4 of the size can lose single-pixel bright spots, due to the 2x2 sampling you pointed out. For bloom this can make a huge difference, as your highlights can then shimmer in and out of existence as a bright pixel moves in screenspace. Upsampling each level gives a smoother result than going straight from 1/4 back to 1/1.

 

That said, depending on your hardware (I'm looking at both main current-gen consoles here, but it probably applies to PC GPUs), it's faster to go straight to the 1/4 size with a shader that samples 4 times instead. This is because the physical write to texture memory for the intermediate step is expensive in itself. Doing this also avoids the precision loss that would otherwise be incurred by downsampling RGBA8 textures, as the shader does the filtering in 32-bit.
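A CPU reference of what that shader computes (name hypothetical): the 4 bilinear taps sit between the 2x2 texel pairs, so together they average the full 4x4 source block behind each output pixel, in full float precision:

// Downsample a float RGBA image straight to 1/4 size by averaging each
// 4x4 source block - the same result the 4-tap bilinear shader produces.
void Downsample4x(const float* src, int srcW, int srcH, float* dst)
{
    int dstW = srcW / 4, dstH = srcH / 4;
    for (int y = 0; y < dstH; ++y)
    for (int x = 0; x < dstW; ++x)
    {
        float sum[4] = { 0, 0, 0, 0 };
        for (int sy = 0; sy < 4; ++sy)
        for (int sx = 0; sx < 4; ++sx)
        {
            const float* p = &src[((y * 4 + sy) * srcW + (x * 4 + sx)) * 4];
            for (int c = 0; c < 4; ++c) sum[c] += p[c];
        }
        float* d = &dst[(y * dstW + x) * 4];
        for (int c = 0; c < 4; ++c) d[c] = sum[c] / 16.0f; // 32-bit filtering
    }
}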




#5081819 Normals Calculation and Z-Culling

Posted by Digitalfragment on 30 July 2013 - 05:56 PM

Z-culling is culling objects that are obscured by depth. Culling triangles that are not facing the camera is typically done instead by checking the winding order of the triangles, i.e. whether a triangle's vertices, after projection to the screen, are plotted clockwise or counter-clockwise.

This can be done by checking, for triangle A,B,C, which side of line AB point C falls on:

 

float2 AB = B - A;                          // edge from A to B
float2 AC = C - A;                          // edge from A to C
float2 tangentAB = float2(AB.y, -AB.x);     // perpendicular to AB
bool frontFacing = dot(tangentAB, AC) >= 0; // sign tells which side C is on

 

If you want to cull by depth, you need to implement a depth buffer of some description.




#5078310 "Rough" material that is compatible with color-only lightmaps?

Posted by Digitalfragment on 16 July 2013 - 05:23 PM

No, it's not possible to remove the viewing direction from a BRDF that requires it without breaking its appearance.

In these cases, a lot of engines use lighting baked onto multiple basis vectors, such as RNM or SH maps. There is also directional lightmapping, where you bake the direction to the most dominant light per pixel.
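For instance, a sketch of how a directional lightmap feeds a view-dependent BRDF (all names hypothetical; Blinn-Phong is just a stand-in for whatever "rough" specular you use). Because a direction survives the bake, the view vector can come back into the equation:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  add(Vec3 a, Vec3 b)  { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3  mul(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
static float dot3(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  normalize(Vec3 v)    { float l = std::sqrt(dot3(v, v)); return mul(v, 1.0f / l); }

// bakedDir/bakedColor come from the directional lightmap texels.
Vec3 ShadeDirectionalLightmap(Vec3 n, Vec3 view, Vec3 bakedDir, Vec3 bakedColor,
                              Vec3 albedo, float gloss)
{
    float ndl  = std::fmax(dot3(n, bakedDir), 0.0f);
    Vec3  h    = normalize(add(view, bakedDir));                  // half vector
    float spec = std::pow(std::fmax(dot3(n, h), 0.0f), gloss);    // view-dependent lobe
    // diffuse + specular, both lit by the baked colour along the baked direction
    Vec3 combined = add(mul(albedo, ndl), Vec3{ spec, spec, spec });
    return { bakedColor.x * combined.x, bakedColor.y * combined.y, bakedColor.z * combined.z };
}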




#4986248 Proper order of operations for skinning

Posted by Digitalfragment on 02 October 2012 - 07:44 PM

Export the inverse of the worldspace bind pose for each joint.

Is the worldspace bind pose the same as the bind pose used to transform the vertices to get the "post-transformed vertex data"? i.e. The matrix obtained by rootMatrix*modelMatrix? If so the bind pose is the same for each joint... so I don't understand.

The worldspace bindpose matrix for a bone is the world space matrix for that bone itself in Blender. If you only have local space matrices for the bones in Blender, then you have to work up the tree from the root node to generate them.
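A sketch of that walk (names hypothetical; assumes a flat bone array sorted so parents come before children):

#include <vector>

struct Matrix4 { float m[16]; };            // column-major
struct Bone { int parent; Matrix4 local; }; // parent index, -1 for the root

Matrix4 Multiply(const Matrix4& a, const Matrix4& b)
{
    Matrix4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r.m[c * 4 + row] += a.m[k * 4 + row] * b.m[c * 4 + k];
    return r;
}

// A bone's worldspace bind pose is its parent's world matrix times its own
// local matrix; the root's world matrix is just its local matrix.
std::vector<Matrix4> BuildWorldBindPoses(const std::vector<Bone>& bones)
{
    std::vector<Matrix4> world(bones.size());
    for (size_t i = 0; i < bones.size(); ++i)
        world[i] = bones[i].parent < 0
                 ? bones[i].local
                 : Multiply(world[bones[i].parent], bones[i].local);
    return world;
}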

So you're saying the animation matrix should be created like the following?

Matrix4f animationTransform = new Quaternion(rotation).toMatrix();
animationTransform.m00 *= scale.x;
animationTransform.m01 *= scale.x;
animationTransform.m02 *= scale.x;

animationTransform.m10 *= scale.y;
animationTransform.m11 *= scale.y;
animationTransform.m12 *= scale.y;

animationTransform.m20 *= scale.z;
animationTransform.m21 *= scale.z;
animationTransform.m22 *= scale.z;

animationTransform.m30 = position.x;
animationTransform.m31 = position.y;
animationTransform.m32 = position.z;


Where do the armatureMatrix and parent animation transforms come into play?

That animationTransform matrix is the local-space matrix generated by a single frame of animation, relative to the space of the parent.
The parent animation transforms come into play when generating the world transform for the parent, as it's the world transform of the parent that you pass down to the children in the tree.

The armatureMatrix in your case might be a pre-concatenation matrix for the animation transform?
I've never needed anything beyond: parentBoneWorldspace * childAnimatedLocalspace * inverseBindPose
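In code form, reusing the Bone/Matrix4/Multiply helpers from the sketch above (still hypothetical names; the inverse bind poses are the exported matrices discussed earlier):

// animatedLocal comes from the animation clip, one matrix per bone.
// World matrices are built exactly like the bind pose walk, then each is
// multiplied by the exported inverse bind pose to get the skinning matrix.
void BuildSkinningMatrices(const std::vector<Bone>& bones,
                           const std::vector<Matrix4>& animatedLocal,
                           const std::vector<Matrix4>& inverseBindPose,
                           std::vector<Matrix4>& outSkin)
{
    std::vector<Matrix4> world(bones.size());
    outSkin.resize(bones.size());
    for (size_t i = 0; i < bones.size(); ++i)
    {
        world[i] = bones[i].parent < 0
                 ? animatedLocal[i]
                 : Multiply(world[bones[i].parent], animatedLocal[i]);
        // identity whenever the animated pose matches the bind pose
        outSkin[i] = Multiply(world[i], inverseBindPose[i]);
    }
}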

I know that

   Matrix4f.mul(transformMatrix, boneMatrix, transformMatrix);
   Matrix4f.mul(transformMatrix, skinMatrix, transformMatrix);

Results in the identity matrix. But if I multiply in the animationTransform from above, things get wonky. I made an animation that's just the bind pose. But the animationTransform is not the identity matrix when the model is in the bind pose, so what's the deal?

animationTransform is a local space matrix. When your animationTransforms are converted to world space by walking the tree and concatenating the transforms, the resulting matrices should look like the boneMatrix values, unless your boneMatrix (and corresponding skinMatrix) are in a different coordinate space.

I suggest writing some debug rendering code that draws the matrices as axis-markers connected by lines showing the hierarchy.


#4985911 Proper order of operations for skinning

Posted by Digitalfragment on 01 October 2012 - 05:45 PM

Edit: I missed a section of your post and misread the question as a result.

I'm not sure why you have a separate "skinMatrix" and "transformMatrix" being applied to the animation:

Export your vertex data post-transformed, so that the model looks correct when skinned with identity matrices.

Export the inverse of the worldspace bind pose for each joint.
This matrix is used to bring the vertex back into local space as necessary.

At runtime, the skinning matrix is the animated worldspace matrix multiplied by the inverse-bind-pose matrix.
If the worldspace matrices match the original bind pose, you get identity matrices and so your model is correct. If you move the joints around, you will see the mesh move correctly.


The local-space animation matrix is built by composing a rotation matrix from the quaternion (which should just end up being the top-left 3x3), multiplying that 3x3 by the scale components (no need to multiply the bottom row or right column), then setting the translation row (or column, depending on whether your matrices are transposed). You are better off writing the matrix manually instead of multiplying in the data.
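Something along these lines (a sketch; the names and the column-major, translation-in-the-fourth-column convention are assumptions):

struct Quat { float x, y, z, w; };
struct Vec3 { float x, y, z; };
struct Matrix4 { float m[16]; }; // column-major

// Build local = translation * rotation * scale with no matrix multiplies:
// the quaternion fills the top-left 3x3, scale multiplies its columns,
// and translation goes straight into the fourth column.
Matrix4 ComposeLocal(const Vec3& t, const Quat& q, const Vec3& s)
{
    Matrix4 r{};
    r.m[0]  = (1 - 2 * (q.y * q.y + q.z * q.z)) * s.x;
    r.m[1]  = (2 * (q.x * q.y + q.z * q.w))     * s.x;
    r.m[2]  = (2 * (q.x * q.z - q.y * q.w))     * s.x;
    r.m[4]  = (2 * (q.x * q.y - q.z * q.w))     * s.y;
    r.m[5]  = (1 - 2 * (q.x * q.x + q.z * q.z)) * s.y;
    r.m[6]  = (2 * (q.y * q.z + q.x * q.w))     * s.y;
    r.m[8]  = (2 * (q.x * q.z + q.y * q.w))     * s.z;
    r.m[9]  = (2 * (q.y * q.z - q.x * q.w))     * s.z;
    r.m[10] = (1 - 2 * (q.x * q.x + q.y * q.y)) * s.z;
    r.m[12] = t.x;  r.m[13] = t.y;  r.m[14] = t.z; // translation column
    r.m[15] = 1.0f;
    return r;
}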



On the note of inverse bind pose matrices, the big reason to stick with storing the inverse bind pose and leaving the mesh data in transformed space on export is for multiple influences per vertex. If you were to transform the mesh into local space, it would be a unique local space per bone influence.

When dealing with multiple influences you can either run the vertex through each of the matrices separately and then blend the results, or blend the matrices together and then multiply the vertex through the result. There are other ways of doing this blending that give nicer results (less 'candy wrapping' etc.) and are worth looking at once you have your core steps up and running.
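A sketch of both options for a 4-influence vertex (plain linear blend skinning, helper names made up). For positions the two give the same answer; option B just pays for four full transforms:

struct Vec3 { float x, y, z; };
struct Matrix4 { float m[16]; }; // column-major

Vec3 TransformPoint(const Matrix4& a, const Vec3& p)
{
    return { a.m[0] * p.x + a.m[4] * p.y + a.m[8]  * p.z + a.m[12],
             a.m[1] * p.x + a.m[5] * p.y + a.m[9]  * p.z + a.m[13],
             a.m[2] * p.x + a.m[6] * p.y + a.m[10] * p.z + a.m[14] };
}

// Option A: blend the four skinning matrices, then transform once.
Vec3 SkinBlendMatrices(const Matrix4 bone[4], const float w[4], const Vec3& p)
{
    Matrix4 blended{};
    for (int b = 0; b < 4; ++b)
        for (int i = 0; i < 16; ++i)
            blended.m[i] += bone[b].m[i] * w[b];
    return TransformPoint(blended, p);
}

// Option B: transform by each matrix, then blend the results.
Vec3 SkinBlendResults(const Matrix4 bone[4], const float w[4], const Vec3& p)
{
    Vec3 r{ 0, 0, 0 };
    for (int b = 0; b < 4; ++b)
    {
        Vec3 t = TransformPoint(bone[b], p);
        r.x += t.x * w[b];  r.y += t.y * w[b];  r.z += t.z * w[b];
    }
    return r;
}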


#4975039 Direct3D 11 question about loading .OBJ models

Posted by Digitalfragment on 30 August 2012 - 11:03 PM

Faces are polygons in the OBJ format; there can be any number of points listed. The standard approach is to split them like a triangle fan when converting to triangles.
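A minimal sketch of the fan split (names and index types are just for illustration):

#include <vector>

// Split an n-sided face into n-2 triangles fanning from the first vertex:
// (0,1,2), (0,2,3), (0,3,4), ...
void TriangulateFan(const std::vector<unsigned>& face, std::vector<unsigned>& indices)
{
    for (size_t i = 1; i + 1 < face.size(); ++i)
    {
        indices.push_back(face[0]);
        indices.push_back(face[i]);
        indices.push_back(face[i + 1]);
    }
}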


#4967996 Q: Fast and simple way to store/manipulate pixels?

Posted by Digitalfragment on 10 August 2012 - 01:25 AM

Hodgman: Thanks, I think I get it. I'm unfamiliar with static_cast, but it looks like you can read a section of memory as a different data type. I'm going to check it out.


Memory is just blocks of bytes, nothing more. You can read it however you want, but to prevent people from shooting themselves in the foot, C-based languages are type-strict - you have to be explicit when you want to treat memory as a different type.

In C++ there are 4 types of casts: static_cast, const_cast, dynamic_cast & reinterpret_cast.

static_cast is the same as doing a cast in C, like (Uint32*)Screen; it takes the pointer and gives you a pointer to the same memory as a new type, assuming that type is 'castable' from the previous type.

For times where the types aren't castable (for example, casting Cheese* to Monkey*) you can call reinterpret_cast, which is like a C cast with a void* layer in between.

dynamic_cast is akin to the "as" cast in C#.

const_cast will cast a const pointer to a non-const pointer.
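A quick example of each (the types here are made up; the pixel-pointer case mirrors the (Uint32*)Screen cast discussed above):

#include <cstdint>

struct Monkey { virtual ~Monkey() {} };
struct Howler : Monkey {};

void CastExamples(void* pixels, const char* text, Monkey* animal)
{
    // static_cast: the C-style "(Uint32*)Screen" case - related/compatible types.
    std::uint32_t* p32 = static_cast<std::uint32_t*>(pixels);

    // reinterpret_cast: unrelated pointer types, reading the raw bytes.
    const std::uint8_t* bytes = reinterpret_cast<const std::uint8_t*>(text);

    // dynamic_cast: like C#'s "as" - null if the object isn't really a Howler.
    Howler* howler = dynamic_cast<Howler*>(animal);

    // const_cast: strips const (only safe if the object wasn't born const).
    char* mutableText = const_cast<char*>(text);

    (void)p32; (void)bytes; (void)howler; (void)mutableText;
}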


#4966490 HLSL Assembly Question

Posted by Digitalfragment on 05 August 2012 - 05:36 PM

The first. The z in c0[a0.w].xxyz would get dropped, as it doesn't have a corresponding output component in r4.yzw.


#4957184 Stream Multiplication Performance with Functions

Posted by Digitalfragment on 09 July 2012 - 01:35 AM

If you want to gauge the performance difference, just look at the assembly difference between your different versions - if you see it calling out to a function on the inside of the loop as opposed to running it inline, then you have a perf hit right there.

As ApochPiQ pointed out, the values in memory can have some heavy impact on the performance of your functionality.

But, assembly and data aside, there's also the layout of your memory and whether or not your data is being pre-fetched into the cache in time.


#4954719 Roads with kerbs

Posted by Digitalfragment on 01 July 2012 - 07:06 PM

Personally, I've tried both procedural approaches and artist-built tiling models. Both work well; the artist-built method yields better results, but fitting the tiles properly is a bit of a pain.

In the tiled-model approach, the models occupy a square and are tessellated enough that they can be bent to fit a spline in a vertex shader. They have skirting geometry to fill in any cracks that may arise from T-junctions, and also punch down under the ground enough to compensate for changes in topology. This does yield a bit of potential overdraw, but by drawing the ground before the roads, early-z optimisations make this a non-issue.

With the procedural approach, I build a triangle strip from the spline data following the path, like a thick line renderer, then use a simple grammar to describe how far to extrude, how to shape the kerbs, how wide to make the footpaths, and so on. The same grammar is also used to populate the sidewalk with street lights and the like.
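A sketch of just the strip-building step (names hypothetical; the spline is assumed pre-sampled into points, and the grammar-driven extrusion is omitted):

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

// Turn a polyline into a triangle-strip vertex list by emitting a
// left/right pair offset along the local path normal at each point.
std::vector<Vec2> BuildRoadStrip(const std::vector<Vec2>& path, float halfWidth)
{
    std::vector<Vec2> strip;
    for (size_t i = 0; i < path.size(); ++i)
    {
        // direction from the previous point to the next (clamped at the ends)
        const Vec2& a = path[i == 0 ? 0 : i - 1];
        const Vec2& b = path[i + 1 < path.size() ? i + 1 : i];
        float dx = b.x - a.x, dy = b.y - a.y;
        float len = std::sqrt(dx * dx + dy * dy);
        if (len > 0) { dx /= len; dy /= len; }
        Vec2 normal{ -dy, dx };                  // perpendicular to the path
        strip.push_back({ path[i].x + normal.x * halfWidth,
                          path[i].y + normal.y * halfWidth }); // left edge
        strip.push_back({ path[i].x - normal.x * halfWidth,
                          path[i].y - normal.y * halfWidth }); // right edge
    }
    return strip;
}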


#4953785 Roads with kerbs

Posted by Digitalfragment on 28 June 2012 - 05:44 PM

A lot of games companies will use a mix of their own hand-written tools and external tools such as Maya/3DSMax.

As far as roads with kerbs go, it's nothing hard, just time consuming: the road tool boolean-subtracts the road's x/z shape from the triangles supplied by the terrain system, and reshapes its y values to follow what the terrain dictates along the edge of the subtracted region.

Don't build everything in Max as one massive mesh. Buildings should be made separately, along with appropriate LOD models, then introduced into the world via locators so that your game can switch between LODs as needed (and so that you can re-use assets where possible).

With terrain, you can poly-mesh the entire thing and auto-generate LODs if you want overhangs etc. Or you can look at voxel-modelling the terrain, then generating your poly mesh from that (or even just rendering the voxel mesh directly if you choose).



