About ProgrammerDX

  1. Hi! So I'm struggling with a skinning problem. Let's say you have a vertex that is skin-blended to 4 bones. How do you build the bone matrix / transform in the vertex shader? Can you make a 'total' bone matrix from the sum of the bone matrices weighted by their blend weights, like in Urho3D:

```hlsl
// Skin() method
void Skin(inout VertexShaderInput input, uniform int boneCount)
{
    float4x4 skinning = 0;

    [unroll]
    for (int i = 0; i < boneCount; i++)
    {
        skinning += Bones[input.Indices[i]] * input.Weights[i];
    }

    input.Position = mul(input.Position, skinning);
}
```

Or what I see in every topic on gamedev, where you transform the position by each bone matrix separately and sum the weighted results, like here: https://www.gamedev.net/forums/topic/549125-hlsl-skinning/

```hlsl
VS_OUTPUT_BASE Out = (VS_OUTPUT_BASE) 0;
float3 Pos = 0.0f;
float4 src = float4(inPos.xyz, 1.0f);

Pos += mul(u_pose[int(indicies.x)], src) * weights.x / 255;
Pos += mul(u_pose[int(indicies.y)], src) * weights.y / 255;
Pos += mul(u_pose[int(indicies.z)], src) * weights.z / 255;
Pos += mul(u_pose[int(indicies.w)], src) * weights.w / 255;

Out.Pos = mul(u_model_view_proj_matrix, float4(Pos, 1.0f));
```

I think the first method is right, but I'm still curious. I think it's right because with the second method it seems impossible to build the vertex buffer with positions pre-multiplied by the inverse bind-pose bone matrix, but it's still weird that the second form is the one that's usually discussed.
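For what it's worth, the two forms above are algebraically identical, because matrix-vector multiplication is linear: (Σ wᵢMᵢ)·p = Σ wᵢ·(Mᵢ·p). A small CPU-side sketch that checks this (the `Mat4`/`Vec4` helpers are hypothetical, only there to make the algebra runnable):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Hypothetical minimal matrix/vector types, just to check the algebra.
using Vec4 = std::array<float, 4>;
using Mat4 = std::array<std::array<float, 4>, 4>;

Vec4 mul(const Mat4& m, const Vec4& v) {
    Vec4 r{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            r[i] += m[i][j] * v[j];
    return r;
}

// Method 1 (Urho3D style): blend the matrices, then transform once.
Vec4 skinBlendMatrices(const Mat4* bones, const int* idx,
                       const float* w, int count, const Vec4& p) {
    Mat4 blended{};
    for (int b = 0; b < count; ++b)
        for (int i = 0; i < 4; ++i)
            for (int j = 0; j < 4; ++j)
                blended[i][j] += bones[idx[b]][i][j] * w[b];
    return mul(blended, p);
}

// Method 2 (forum style): transform by each bone, then blend the results.
Vec4 skinBlendPositions(const Mat4* bones, const int* idx,
                        const float* w, int count, const Vec4& p) {
    Vec4 sum{};
    for (int b = 0; b < count; ++b) {
        Vec4 t = mul(bones[idx[b]], p);
        for (int j = 0; j < 4; ++j)
            sum[j] += t[j] * w[b];
    }
    return sum;
}
```

Both return the same position for any set of bones and weights; the practical difference is only instruction count (method 1 does one matrix-vector multiply, method 2 does one per bone).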
  2. Howdy! I was wondering: let's say you have a mesh whose most-blended vertex is attached to 4 bones (so 4 weights that aren't 0 or 1), which means the rest of the mesh's vertices also use a 4-bone layout. Now suppose one vertex is attached to only 1 bone, so it has a 1.0 weight. Which other 3 bones do you attach that vertex to? 1. The root bone with 0.0 weights? Or 2. 'no bone' (-1) and then an if() statement in the HLSL transformation calculation? Thanks for your input!
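Since a weight of 0 makes a bone's contribution vanish entirely, one branch-free option is to pad the unused slots with weight-0 influences pointing at any valid bone. A hedged sketch of such a padding helper (the struct layout and names are my own, not from the post):

```cpp
#include <cassert>

// Fixed 4-influence layout that the vertex declaration expects.
struct SkinInfluences {
    int   indices[4];
    float weights[4];
};

// Pad a vertex that has fewer than 4 real influences: unused slots
// repeat the first bone with weight 0, so they contribute nothing and
// the shader needs no if() branching. (Hypothetical helper; assumes
// count >= 1.)
SkinInfluences padTo4(const int* idx, const float* w, int count) {
    SkinInfluences out{};
    for (int i = 0; i < 4; ++i) {
        out.indices[i] = (i < count) ? idx[i] : idx[0];
        out.weights[i] = (i < count) ? w[i]   : 0.0f;
    }
    return out;
}
```

Whether the padding index is the root bone, bone 0, or the vertex's own bone repeated makes no difference to the result, because the weight zeroes it out either way.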
  3. ProgrammerDX

    Converting float4x4 to float4x3

    Thanks for the replies. Yes fais, the input was float3, so you are right that it got automatically converted into a float4 with w = 0.f in this operation. I read that if I declare the input as float4, HLSL will automatically set w to 1.f even though the vertex declaration for it is float3, so I might do that as well. I tried what you said, making it float4(input, 1), and it actually worked! I can't believe it, because I felt like I'd tried every combination. So this is the final line that works, if anyone is interested:

```hlsl
float4 position = float4(mul(float4(input.position, 1.0f), (float4x3)Bone), 1.0f);
```

    Hodgman, I think it doesn't matter that the last column will be interpreted as [0,0,0,0] when I remove it by casting to float4x3, because later I set the float4's w back to 1 anyway. I'm now going to try to optimize further by sending float4x3 matrices to the vertex shader, which was the final intent of this optimization.
  4. Hi there, Currently I have a working skinned shader that uses float4x4 bone matrices. Code snippet from the vertex shader:

```hlsl
float4 position = mul(float4(input.position, 1.0f), Bone);
```

Bone is the float4x4 matrix of the bone, and input.position is the vertex position as it comes into the vertex shader. This works perfectly fine. Now I want to convert the Bone matrix to float4x3 as an optimization, but it doesn't work:

```hlsl
float4 position = float4(mul(input.position, (float4x3)Bone), 1.0f);
```

What happens is that everything in the mesh is rendered at the origin of the model, so basically it seems that the translation of the bone is no longer being applied. I've been trying to figure it out by swapping arguments around and trying float3x4 and such, but nothing makes it right. Anyone with a clue?
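The "everything collapses to the origin" symptom is what the math predicts when the position reaches the 4x3 multiply with an implicit w = 0: in the row-vector convention HLSL's mul(vector, matrix) uses, row 3 of a 4x3 matrix carries the translation, and that row is scaled by w. A CPU-side sketch of the same arithmetic (types and names are hypothetical, for illustration only):

```cpp
#include <cassert>

// Row-vector convention, as in HLSL mul(v, m): a 4x3 matrix has 4 rows
// and 3 columns. Rows 0-2 hold rotation/scale, row 3 the translation.
struct Vec3 { float x, y, z; };
struct Vec4 { float x, y, z, w; };
using Mat4x3 = float[4][3];

// v * m for a 1x4 row vector and a 4x3 matrix, yielding a 1x3 result.
// Note how row 3 (the translation) is multiplied by v.w: with w = 0
// the translation drops out, which is the bug described above.
Vec3 mulRowVec(const Vec4& v, const Mat4x3& m) {
    return {
        v.x * m[0][0] + v.y * m[1][0] + v.z * m[2][0] + v.w * m[3][0],
        v.x * m[0][1] + v.y * m[1][1] + v.z * m[2][1] + v.w * m[3][1],
        v.x * m[0][2] + v.y * m[1][2] + v.z * m[2][2] + v.w * m[3][2],
    };
}
```

With w = 1 the point is translated; with w = 0 only the rotation/scale part applies, which is correct for normals but wrong for positions.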
  5. ProgrammerDX

    What is this artifact in my shadow map?

    Can you share the final HLSL shader with us? Thanks in advance.
  6. Hi all, I have this simple post-process blur shader:

```hlsl
sampler ColorSampler1 : register(s0);

#define SAMPLE_SIZE 15

float2 texelSize;
float offsets[SAMPLE_SIZE];
float weights[SAMPLE_SIZE];

float4 PS_BlurH(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 sum = float4(0.f, 0.f, 0.f, 1.f);

    [loop]
    for (int i = 0; i < SAMPLE_SIZE; i++)
        sum += tex2D(ColorSampler1, float2(texCoord.x + (offsets[i] * texelSize.x), texCoord.y)) * weights[i];

    clip(sum.a < 0.01f ? -1 : 1);

    return sum;
}

float4 PS_BlurV(float2 texCoord : TEXCOORD0) : COLOR0
{
    float4 sum = float4(0.f, 0.f, 0.f, 1.f);

    [loop]
    for (int i = 0; i < SAMPLE_SIZE; i++)
        sum += tex2D(ColorSampler1, float2(texCoord.x, texCoord.y + (offsets[i] * texelSize.y))) * weights[i];

    clip(sum.a < 0.01f ? -1 : 1);

    return sum;
}

technique Glow
{
    pass BlurHorizontal
    {
        PixelShader = compile ps_2_0 PS_BlurH();
    }

    pass BlurVertical
    {
        PixelShader = compile ps_2_0 PS_BlurV();
    }
}
```

But if I change ps_2_0 to ps_3_0, the shader doesn't work anymore, with no errors on compiling. Anyone with a clue?
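The offsets[] and weights[] arrays this shader reads have to be filled from the application side. One common choice is a normalized Gaussian kernel; a sketch of that CPU-side setup follows (the function name, the sigma parameter, and the "offsets in texels" convention are my assumptions, not from the post):

```cpp
#include <cmath>
#include <vector>

// Build SAMPLE_SIZE tap offsets (in texels; the shader multiplies them
// by texelSize) and Gaussian weights normalized to sum to 1, so the
// blur neither darkens nor brightens the image.
void buildGaussianKernel(int sampleSize, float sigma,
                         std::vector<float>& offsets,
                         std::vector<float>& weights) {
    offsets.assign(sampleSize, 0.0f);
    weights.assign(sampleSize, 0.0f);
    const int half = sampleSize / 2;
    float sum = 0.0f;
    for (int i = 0; i < sampleSize; ++i) {
        const float x = static_cast<float>(i - half);
        offsets[i] = x;
        weights[i] = std::exp(-(x * x) / (2.0f * sigma * sigma));
        sum += weights[i];
    }
    for (float& w : weights)
        w /= sum;   // normalize so the taps sum to exactly 1
}
```

The resulting arrays would then be uploaded as the shader constants `offsets` and `weights` each time the kernel changes.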
  7. ProgrammerDX

    Why is my struct destructor not called?

    Ah right it's the assignment operator that gets called. Weird, for some reason I expected the old a to be destructed. Must be because it's late. Thanks.
  8. Hey all, check this out:

```cpp
struct A
{
    int val = 0;
    A() { printf( "Construct A [%d]", val ); }
    ~A() { printf( "Destruct A [%d]", val ); }
};

int main()
{
    A a;
    a.val = 500;
    a = A();
    printf( "Finally %d", a.val );
    a.val = 300;
    return 0;
}
```

This produces the output:

```
Construct A [0]
Construct A [0]
Destruct A [0]
Finally 0
Destruct A [300]
```

What the!? Why am I not getting Destruct A [500]?! This would leak memory if I had pointers in A, because its destructor is never called for the old value!? I want to see something like this:

```
Construct A [0]
Destruct A [500]
Construct A [0]
Destruct A [0]
Finally 0
Destruct A [300]
```

What am I doing wrong? Help!
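What `a = A();` actually does is: default-construct a temporary A, call the (compiler-generated) copy assignment operator to overwrite a.val, then destroy the temporary; `a` itself is only destroyed at the end of the scope, with whatever value it holds then. An instrumented sketch that makes the order visible (the logging string and the explicitly written assignment operator are my additions; the compiler generates an equivalent operator in the original code):

```cpp
#include <cassert>
#include <string>

// Same struct as in the post, but logging to a string instead of printf
// so the sequence of events can be checked.
static std::string g_log;

struct A {
    int val = 0;
    A() { g_log += "ctor(" + std::to_string(val) + ") "; }
    // Writing the copy assignment out makes the hidden call visible.
    A& operator=(const A& other) {
        g_log += "assign(" + std::to_string(val) + "<-" +
                 std::to_string(other.val) + ") ";
        val = other.val;
        return *this;
    }
    ~A() { g_log += "dtor(" + std::to_string(val) + ") "; }
};

std::string runExample() {
    g_log.clear();
    {
        A a;          // ctor(0)
        a.val = 500;
        a = A();      // ctor(0) for the temporary, assign(500<-0),
                      // then dtor(0) for the temporary
        a.val = 300;
    }                 // a destroyed here: dtor(300) -- no dtor(500) anywhere
    return g_log;
}
```

The value 500 is never "destructed" because assignment replaces the contents of the still-alive object; it is the assignment operator's job (not the destructor's) to release any resources the old value owned.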
  9. ProgrammerDX

    Dynamic Textures for Skinning

    Hm, a buffer reorganisation / draw-splitting routine would be useful, yeah. It's like having a 16-bit index buffer but still allowing use of a vertex buffer containing more than 65k vertices?

    You were able to fit the whole vertex info into 128 bits in one buffer? What I currently do is keep vertex position (3 floats), normals (3 floats), bone index (1 float only; there's no blended skinning, so no weights either) and UV coords (2 floats) each in separate vertex buffers. At rendering, they are separate streams. So for me, I'd have a dynamic vertex buffer for the positions and normals only, to transform those at render time.

    I'm not planning to 'thread' the transformations into the dynamic vertex buffer, because I build the bone matrices for a given frame practically right before rendering, so it seems pointless for the rendering thread to wait for a separate thread to finish what the rendering thread itself could do. It might be useful if I did other things between building the bone matrices and the actual rendering, but it'd have to be faster than the whole thread overhead.

    Lastly, I noticed there are no usage flags for creating read-only vertex buffers? Should I just leave out D3DUSAGE_WRITEONLY when creating the vertex buffer, and use D3DLOCK_READONLY when I lock the vertex buffer for reading? Is this the fastest read-only static vertex buffer? Or should I not use D3D vertex buffers at all for this and go with std::vectors or so?
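For the single-bone-per-vertex layout described above (bone index stored as one float, no weights), the per-frame CPU pass that would fill the dynamic position buffer can be sketched like this (types and names are hypothetical; row 3 of the 4x3 matrix is assumed to hold the translation):

```cpp
#include <cassert>
#include <cstddef>

struct Vec3 { float x, y, z; };
// 4x3 bone matrix, row-vector convention: rows 0-2 rotation/scale,
// row 3 translation.
struct Mat4x3 { float m[4][3]; };

// Transform a point (implicit w = 1, so the translation row applies).
Vec3 transformPoint(const Mat4x3& b, const Vec3& p) {
    return {
        p.x * b.m[0][0] + p.y * b.m[1][0] + p.z * b.m[2][0] + b.m[3][0],
        p.x * b.m[0][1] + p.y * b.m[1][1] + p.z * b.m[2][1] + b.m[3][1],
        p.x * b.m[0][2] + p.y * b.m[1][2] + p.z * b.m[2][2] + b.m[3][2],
    };
}

// One bone index per vertex, no blending: transform each base position
// and write the result to the array that gets copied into the locked
// dynamic vertex buffer each frame.
void skinOnCpu(const Vec3* basePos, const float* boneIndex,
               const Mat4x3* bones, std::size_t vertexCount, Vec3* out) {
    for (std::size_t i = 0; i < vertexCount; ++i)
        out[i] = transformPoint(bones[static_cast<int>(boneIndex[i])],
                                basePos[i]);
}
```

The same loop would run over the normals with the translation row skipped, since normals are directions rather than points.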
  10. ProgrammerDX

    Dynamic Textures for Skinning

    Ah right, yeah, most players have cards where VTF works fine. It's just the few that don't that I still want to support (i.e. some Windows XP users). I noticed that even some people on Windows 7 don't support D3DFMT_A32B32G32R32F; they have built-in Intel chips. Kind of strange, because I always thought Windows 7 required DX11 cards.

    I prefer not to split the mesh into submeshes by bone; that seems like a lot of work, and I already split it per material. So I'm thinking about the other options.

    Dual quaternions: I looked them up, but it seems like a lot of work to convert to that at this point (not to mention any unexpected, dreadful artifacts that might appear). And storing just 4x3 matrices to be able to support 80 bones is still not enough.

    It looks like one static buffer with the base vertex data, plus one dynamic vertex buffer for rendering after transforming the vertices with the bone matrices on the CPU, might just be the best idea! And here I was thinking about going back to DrawPrimitiveUP, building big buffers on the CPU... good thing I started this topic.

    I'm also having trouble with some cards not supporting 32-bit index buffers. It's one really big mesh and I can't split it up. The only solution to that seems to be DrawPrimitiveUP (as a backup technique used only when the card doesn't support 32-bit indices, of course!).

    Thanks for your insights.
  11. ProgrammerDX

    Dynamic Textures for Skinning

    Ah, if tex2Dlod won't work then trying D3DFMT_A8R8G8B8 is useless.

    I don't know about feature levels because I'm solely using Direct3D 9.0c; I'm just trying to make everything work on that alone.

    I changed to textures because 'cbuffers' on vs_3_0 only support fewer than 64 matrices, and that's not enough. Quite annoying to have to think about it again. I'll see if I can use cbuffers.

    I was thinking of doing the matrix calculations on the CPU if D3DFMT_A32B32G32R32F is not supported. The thing is, I've designed my code so that I have the vertices, normals, tex coords etc. in different buffers for different streams. Is there any way I can transform those vertices inside a static vertex buffer on the CPU before they go to the GPU, without resorting to the 'DrawPrimitiveUP' functions? That would be the simplest solution.
  12. Hey there, I'm using dynamic textures with format D3DFMT_A32B32G32R32F to place the matrix palette of bones in. In the vertex shader I use tex2Dlod. However, some cards don't support this format. I was wondering if some of you would know whether D3DFMT_A8R8G8B8 could also hold the matrices, before I try. Thanks for your time.
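For reference, a common packing for a matrix-palette texture stores each bone's 4x3 transform transposed as 3 consecutive RGBA32F texels, which the vertex shader then rebuilds from three tex2Dlod fetches. A CPU-side sketch of that packing (the layout choice and names are assumptions, not from the post):

```cpp
#include <cassert>
#include <vector>

// One RGBA32F texel = 4 floats. Each 4x3 bone matrix (rows 0-2
// rotation/scale, row 3 translation) is stored transposed as 3 texels:
// texel k of bone b holds the four values of column k, so the shader
// can rebuild the transform from fetches at u = (b * 3 + k) / texWidth.
struct Texel { float r, g, b, a; };

std::vector<Texel> packPalette(const float (*bones)[4][3], int boneCount) {
    std::vector<Texel> tex(boneCount * 3);
    for (int b = 0; b < boneCount; ++b)
        for (int k = 0; k < 3; ++k)
            tex[b * 3 + k] = { bones[b][0][k], bones[b][1][k],
                               bones[b][2][k], bones[b][3][k] };
    return tex;
}
```

This is why D3DFMT_A8R8G8B8 is a poor fit here: 8 bits per channel cannot represent the range and precision of matrix elements without an encoding/decoding scheme.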
  13. I'll have a look at the D3DCaps viewer. Anyway, I do not specify the shaders inside the .fx file as vs_3_0_sw but simply as vs_3_0 (no sw), and it still works. Also, the PixelShaderVersion in the D3DCAPS9 structure gives 2.0, while specifying ps_3_0 shaders in the .fx file works normally too. Not sure if that's related.
  14. Not sure what happened here, but I'll just take the answer that the shaders run in software mode.
  15. Code to query caps:

```cpp
lpD3D->GetDeviceCaps( D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &d3dCaps );
```

Video card: Intel(R) G33/G31 Express Chipset Family, as reported by:

```cpp
lpD3D->GetAdapterIdentifier( D3DADAPTER_DEFAULT, 0, &d3dAdapterIdentifier );
```