Digitalfragment

Member Since 29 Aug 2002
Offline Last Active Jul 30 2015 05:11 PM

#5115458 HLSL keywords in inout

Posted by Digitalfragment on 08 December 2013 - 04:17 PM

'in', IIRC, denotes a vertex input going into a main function in Cg.

'out' acts just like 'out' in C#: no value is passed into the function, but the function must write a value to it, which is then passed back to the calling function.

'inout' acts like 'ref' in C#, as you guessed: the argument is passed both ways, allowing the function to read and modify the value.
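
A quick illustration of all three qualifiers in one signature (the function and its names are made up for the example, not taken from any particular shader):

void ComputeLighting(in float3 normal,         // read-only input (the default if no qualifier is given)
                     out float3 diffuse,       // no value comes in; must be written before returning
                     inout float3 accumulated) // arrives with the caller's value, changes go back out
{
    diffuse = saturate(normal.y).xxx;
    accumulated += diffuse;
}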




#5112577 Difference Tiled and Clustered shading

Posted by Digitalfragment on 27 November 2013 - 04:16 PM

Essentially yes. Though IIRC, the author of the original clustered shading whitepaper talks about adding several more dimensions by also bucketing fragments with similar normal angles.
So it's (x/y) -> (position x / position y / position z / normal x / normal z)
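
A rough sketch of what that extra bucketing might look like when computing a cluster index (the layout, bin counts and names here are assumptions, not taken from the paper):

// position x/y come from the screen tile, position z from a depth slice,
// and two extra dimensions from a quantized normal direction
uint ClusterIndex(float2 screenUV, float normalizedDepth, float3 normal,
                  uint2 tileCount, uint depthSlices, uint normalBins)
{
    uint2 tile   = min((uint2)(screenUV * tileCount), tileCount - 1);
    uint  zSlice = min((uint)(normalizedDepth * depthSlices), depthSlices - 1);
    uint2 nBin   = min((uint2)(saturate(normal.xz * 0.5 + 0.5) * normalBins), normalBins - 1);

    return tile.x + tileCount.x * (tile.y + tileCount.y *
           (zSlice + depthSlices * (nBin.x + normalBins * nBin.y)));
}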




#5109069 How many influencing weights?

Posted by Digitalfragment on 13 November 2013 - 05:30 PM

 

DX11, PS4 and XBone games are trending towards 8 weights for the face

I wonder how they weight the mesh then. Do they use some procedural trial-and-error technique? Considering other technological factors, I wonder whether the weights are dynamic or constant. I think they will go for per-vertex animation in the near future.

 

Check out the presentation "Ryse: Son of Rome - Defining the next gen"
http://www.crytek.com/cryengine/presentations

They do both skinning and vertex animation. Both have pros & cons, so take the best of both.




#5108357 Voxel Cone Tracing Experiment - Part 2 Progress

Posted by Digitalfragment on 10 November 2013 - 04:42 PM

 


Obviously your profiler is broken somehow, as I doubt your experiment manages to hold ever-increasing data in exactly the same amount of RAM.

 

Actually, I'm using Task Manager to get the amount of RAM that my application is using.

 

Sounds like you hit your video card's memory limit and the drivers are now falling back to system memory - which is also why your frame rate tanks. Task Manager only shows system memory usage, not the memory internal to the video card.




#5108356 How to compute the bounding volume for an animated (skinned) mesh ?

Posted by Digitalfragment on 10 November 2013 - 04:37 PM

Calculate a bounding box for each joint in the body in the orientation of that joint, based only on the vertices that are skinned to that joint.

Then at runtime, transform each joint-oriented bounding box into worldspace and take the combined min and max over all of them.
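
A small sketch of that runtime step, assuming each joint stores a local-space min/max box (the structure and names are mine, and it assumes the column-vector mul(M, v) convention):

struct AABB { float3 mn; float3 mx; };

// Transform a joint-local box by that joint's animated world matrix.
// The resulting boxes are then merged with min/max across all joints.
AABB JointBoundsToWorld(AABB local, float4x4 jointToWorld)
{
    float3 centre  = (local.mn + local.mx) * 0.5;
    float3 extents = (local.mx - local.mn) * 0.5;

    float3 worldCentre  = mul(jointToWorld, float4(centre, 1)).xyz;
    float3 worldExtents = float3(dot(abs(jointToWorld[0].xyz), extents),
                                 dot(abs(jointToWorld[1].xyz), extents),
                                 dot(abs(jointToWorld[2].xyz), extents));

    AABB result;
    result.mn = worldCentre - worldExtents;
    result.mx = worldCentre + worldExtents;
    return result;
}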

 

There's no real reason to have an artist make the bounding volumes.




#5108353 How many influencing weights?

Posted by Digitalfragment on 10 November 2013 - 04:33 PM

DX11, PS4 and XBone games are trending towards 8 weights for the face, but staying with 4 for everywhere else.




#5104209 Spherical Harmonics: Need Help :-(

Posted by Digitalfragment on 24 October 2013 - 04:58 PM


 

This is what I don't understand from your reply: is it necessary to render the cube maps? I understand that SH will help me "compress" the cube maps (because storing them for, let's say, a 32x32x32 grid of probes would require too much memory)... and then look them up using SH... but wouldn't it be too expensive to create those cube maps in real time? I'm using RSM to store virtual point lights of the scene... is it possible to store incoming radiance into SH without using cube maps?

Most video cards can render the scene into 6 128x128 textures pretty damn quick, given that most video cards can render complex scenes at resolutions of 1280x720 and up without struggling. You would then also perform the integration of the cubemap in a shader, instead of pulling the rendered textures down from the GPU and performing it on the CPU. This yields a table of colours in a very small texture (usually a 9-pixel-wide texture) which can then be used in your shaders at runtime. You don't need to recreate your SH probes every frame either, so the cost is quite small.
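
As a rough sketch of that integration step (the basis constants are the standard order-2 real SH; everything else, including how the per-texel direction and solid angle are derived, is an assumption):

// Accumulate one cubemap texel's radiance into 9 SH coefficients per channel.
// Called in a loop over all texels; 'weight' is that texel's solid angle.
void AccumulateSH(float3 dir, float3 radiance, float weight, inout float3 sh[9])
{
    float basis[9];
    basis[0] = 0.282095;
    basis[1] = 0.488603 * dir.y;
    basis[2] = 0.488603 * dir.z;
    basis[3] = 0.488603 * dir.x;
    basis[4] = 1.092548 * dir.x * dir.y;
    basis[5] = 1.092548 * dir.y * dir.z;
    basis[6] = 0.315392 * (3.0 * dir.z * dir.z - 1.0);
    basis[7] = 1.092548 * dir.x * dir.z;
    basis[8] = 0.546274 * (dir.x * dir.x - dir.y * dir.y);

    for (int i = 0; i < 9; ++i)
        sh[i] += radiance * basis[i] * weight;
}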

The Stupid SH Tricks paper also has functions for projecting directional, point and spot lights directly onto the SH coefficient table; however, the cubemap integration is preferable, as you can trivially incorporate area and sky lights.




#5090246 Passing an array for each vertex

Posted by Digitalfragment on 29 August 2013 - 05:28 PM

Vertex attributes cannot be arrays. Attributes can only be one of the known DXGI formats (and even then, a few are reserved for texture use only). Any change to a vertex format to add/remove members typically requires a change to the vertex shaders that refer to it.

If you really want to go down this route, settle on a count that's a multiple of 4 and just use float4 attributes, so they take up entire attribute registers.
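
For example, eight per-vertex scalars would be declared as two float4 attributes rather than an array (the semantics here are just placeholders):

struct VSInput
{
    float3 position : POSITION;
    float4 valuesA  : TEXCOORD0;  // elements 0..3 of the "array"
    float4 valuesB  : TEXCOORD1;  // elements 4..7
};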




#5089705 skeletal animation on pieces of a model

Posted by Digitalfragment on 27 August 2013 - 10:40 PM

Are you talking about packing the matrices in a texture rather than sending them as uniforms?

 

Also my render state may not be sorted by animation state, but by shaders and materials and meshes, although I'm considering sorting by animation state as well.

 

I'm also considering trying instancing at some point.

Packing them into a texture is the preferred approach. The texture containing the bones is only written to once and can then be read by as many meshes as needed.
It avoids running into shader constant limits. It also makes use of the texture fetch hardware, which runs in parallel to the shader math and can yield faster execution than shader constant fetching (google "shader constant waterfalling" for an explanation of this).
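
A minimal sketch of the shader side, assuming each bone's matrix is written as four float4 texels along one row of the texture (the layout and names are assumptions):

Texture2D<float4> gBoneTexture;   // written once with every bone's matrix, read by all skinned meshes

float4x4 FetchBoneMatrix(uint boneIndex)
{
    float4 r0 = gBoneTexture.Load(int3(boneIndex * 4 + 0, 0, 0));
    float4 r1 = gBoneTexture.Load(int3(boneIndex * 4 + 1, 0, 0));
    float4 r2 = gBoneTexture.Load(int3(boneIndex * 4 + 2, 0, 0));
    float4 r3 = gBoneTexture.Load(int3(boneIndex * 4 + 3, 0, 0));
    return float4x4(r0, r1, r2, r3);
}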




#5089660 Deferred Shading Decals on Animated Characters

Posted by Digitalfragment on 27 August 2013 - 07:13 PM

It's a matter of having the projection matrix for the decal texture attached to the skeleton appropriately. This way, as the rigged character moves around, the projection axis for the blood splatter also moves with the model.

 

For characters that are skinned to multiple bones, you effectively want to anchor your decal projection matrix to multiple bones too. Basically, take the vertex closest to the center of the projection and use its skinning weights etc.




#5089316 Question about downsampling

Posted by Digitalfragment on 26 August 2013 - 05:07 PM

Going straight to 1/4 of the size can lose single-pixel-sized bright spots - due to the 2x2 sample you pointed out. For bloom this can make a huge difference, as now your highlights can shimmer in and out of existence as that bright pixel moves in screenspace. The upsampling on each level gives a smoother result than going straight from 1/4 back to 1/1.

 

That said, depending on your hardware (I'm looking at both main current-gen consoles here, but it probably applies to PC GPUs), it's faster to go straight to the 1/4 size with a shader that samples 4 times instead. This is because the physical write to texture memory for the intermediate step is itself expensive. Doing this also means no precision loss that would otherwise be incurred by downsampling RGBA8 textures, as the shader does the filtering in 32-bit.
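
A sketch of that single-pass version (the offsets are chosen so four bilinear taps cover a 4x4 source footprint; the names are mine):

Texture2D    gSource;
SamplerState gLinearClamp;

float4 DownsampleQuarter(float2 uv, float2 sourceTexelSize)
{
    float4 sum = 0;
    sum += gSource.Sample(gLinearClamp, uv + sourceTexelSize * float2(-1, -1));
    sum += gSource.Sample(gLinearClamp, uv + sourceTexelSize * float2( 1, -1));
    sum += gSource.Sample(gLinearClamp, uv + sourceTexelSize * float2(-1,  1));
    sum += gSource.Sample(gLinearClamp, uv + sourceTexelSize * float2( 1,  1));
    return sum * 0.25;   // the filtering happens in 32-bit here, regardless of the source format
}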




#5081819 Normals Calculation and Z-Culling

Posted by Digitalfragment on 30 July 2013 - 05:56 PM

Z-culling is culling objects that are obscured by depth. Culling triangles that are not facing the camera is typically done instead by checking the winding order of the triangles - i.e. whether, after projection to the screen, a triangle's vertices are plotted clockwise or counter-clockwise.

This can be done by checking, for triangle A,B,C, which side of line A,B point C falls on:

 

float2 AB = (B - A);                         // screen-space edge A->B
float2 AC = (C - A);                         // screen-space edge A->C
float2 tangentAB = float2(AB.y, -AB.x);      // perpendicular to AB
bool frontFacing = dot(tangentAB, AC) >= 0;  // sign tells which side of AB point C lies on

 

If you want to cull by depth, you need to implement a depth buffer of some description.




#5078310 "Rough" material that is compatible with color-only lightmaps?

Posted by Digitalfragment on 16 July 2013 - 05:23 PM

No, it's not possible to remove the viewing direction from a BRDF that requires it without breaking its appearance.

In these cases, a lot of engines use baked lighting under multiple basis vectors, such as RNM or SH maps. There is also directional lightmapping, where you bake the direction to the most dominant light per pixel.
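
For the directional lightmap case, here is a rough sketch of how the baked dominant direction can stand in for a light vector in a view-dependent term (the textures, encoding and names are all assumptions):

Texture2D    gLightmapColour;
Texture2D    gLightmapDirection;   // dominant light direction, encoded into 0..1
SamplerState gLinear;

float3 ShadeWithDirectionalLightmap(float2 lightmapUV, float3 N, float3 V, float specPower)
{
    float3 colour = gLightmapColour.Sample(gLinear, lightmapUV).rgb;
    float3 L = normalize(gLightmapDirection.Sample(gLinear, lightmapUV).xyz * 2 - 1);
    float3 H = normalize(L + V);
    float  specular = pow(saturate(dot(N, H)), specPower);
    return colour * (saturate(dot(N, L)) + specular);
}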




#4986248 Proper order of operations for skinning

Posted by Digitalfragment on 02 October 2012 - 07:44 PM

Export the inverse of the worldspace bind pose for each joint.

Is the worldspace bind pose the same as the bind pose used to transform the vertices to get the "post-transformed vertex data"? i.e. the matrix obtained by rootMatrix*modelMatrix? If so, the bind pose is the same for each joint... so I don't understand.

The worldspace bindpose matrix for a bone is the world space matrix for that bone itself in Blender. If you only have local space matrices for the bones in Blender, then you have to work up the tree from the root node to generate them.

So you're saying the animation matrix should be created like the following?

Matrix4f animationTransform = new Quaternion(rotation).toMatrix();
   animationTransform.m00 *= scale.x;
   animationTransform.m01 *= scale.x;
   animationTransform.m02 *= scale.x;
  
   animationTransform.m10 *= scale.y;
   animationTransform.m11 *= scale.y;
   animationTransform.m12 *= scale.y;
  
   animationTransform.m20 *= scale.z;
   animationTransform.m21 *= scale.z;
   animationTransform.m22 *= scale.z;
  
   animationTransform.m30 = position.x;
   animationTransform.m31 = position.y;
   animationTransform.m32 = position.z;


Where do the armatureMatrix and the parent animation transforms come into play?

That animationTransform matrix is the local-space matrix generated by a single frame of animation, relative to the space of the parent.
The parent animation transforms come into play when generating the world transform for the parent, as it's the world transform of the parent that you need to pass down to the children in the tree.

The armatureMatrix in your case might be a pre-concatenation matrix for the animation transform?
I've never needed anything beyond: parentBoneWorldspace * childAnimatedLocalspace * inverseBindPose

I know that

   Matrix4f.mul(transformMatrix, boneMatrix, transformMatrix);
   Matrix4f.mul(transformMatrix, skinMatrix, transformMatrix);

results in the identity matrix. But if I multiply in the animationTransform from above, things get wonky. I made an animation that's just the bind pose. But the animationTransform is not the identity matrix when the model is in the bind pose, so what's the deal?

animationTransform is a local space matrix. When your animationTransforms are converted to world space by walking the tree and concatenating the transforms, the resulting matrices should look like the boneMatrix values. Unless your boneMatrix (and corresponding skinMatrix) are in a different coordinate space.

I suggest writing some debug rendering code that draws the matrices as axis-markers connected by lines showing the hierarchy.


#4985911 Proper order of operations for skinning

Posted by Digitalfragment on 01 October 2012 - 05:45 PM

Edited, as I missed a section of your post and ended up misreading the question as a result.

I'm not sure why you have a separate "skinMatrix" and "transformMatrix" being applied to the animation:

Export your vertex data post-transformed, so that the model looks correct when skinned with identity matrices.

Export the inverse of the worldspace bind pose for each joint.
This matrix is used to bring the vertex back into local space as necessary.

At runtime, the skinning matrix is the animated worldspace matrix multiplied by the inverse-bind-pose matrix.
If the worldspace matrices match the original bind pose, you get identity matrices and so your model is correct. If you move the joints around, you will see the mesh move correctly.


The local-space animation matrix is built by composing a rotation matrix from the quaternion (which should just end up being the top-left 3x3), multiplying that 3x3 by the scale components (no need to multiply the bottom row or right column), then setting the translation row (or column, depending on whether your matrices are transposed). You are better off writing the matrix out manually instead of multiplying the data in:
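(A sketch of that construction, written in shader-style syntax rather than the thread's Java; treat the exact layout as an assumption rather than the original post's code:)

// Build local = Translation * Rotation * Scale directly, for use as mul(M, float4(v, 1)).
// q is a unit quaternion (x, y, z, w); s and t are the scale and translation.
float4x4 ComposeTransform(float4 q, float3 s, float3 t)
{
    // rows of the rotation matrix derived from the quaternion
    float3 rx = float3(1 - 2*(q.y*q.y + q.z*q.z),     2*(q.x*q.y - q.z*q.w),     2*(q.x*q.z + q.y*q.w));
    float3 ry = float3(    2*(q.x*q.y + q.z*q.w), 1 - 2*(q.x*q.x + q.z*q.z),     2*(q.y*q.z - q.x*q.w));
    float3 rz = float3(    2*(q.x*q.z - q.y*q.w),     2*(q.y*q.z + q.x*q.w), 1 - 2*(q.x*q.x + q.y*q.y));

    // scale multiplies the rotation's columns; translation sits in the last column
    return float4x4(float4(rx * s, t.x),
                    float4(ry * s, t.y),
                    float4(rz * s, t.z),
                    float4(0, 0, 0, 1));
}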



On the note of inverse bind pose matrices, the big reason to stick with storing the inverse bind pose and leaving the mesh data in transformed space on export is for multiple influences per vertex. If you were to transform the mesh into local space, it would be a unique local space per bone influence.

When dealing with multiple influences you can either run the vertex through each of the matrices separately and then blend the results, or blend the matrices together and then multiply the vertex through the result. There are other ways of doing this blending that give nicer results (less 'candy wrapping', etc.) and are worth looking at once you have your core steps up and running.
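
A small sketch of the "blend the matrices, then transform once" option in a vertex shader (the constant buffer layout and names are assumptions):

cbuffer Bones
{
    float4x4 gSkinMatrices[128];   // animated worldspace * inverse bind pose, per bone
};

float3 SkinPosition(float3 localPosition, uint4 boneIndices, float4 boneWeights)
{
    float4x4 blended = gSkinMatrices[boneIndices.x] * boneWeights.x
                     + gSkinMatrices[boneIndices.y] * boneWeights.y
                     + gSkinMatrices[boneIndices.z] * boneWeights.z
                     + gSkinMatrices[boneIndices.w] * boneWeights.w;
    return mul(blended, float4(localPosition, 1)).xyz;
}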



