Community Reputation

395 Neutral

About ProgrammerDX


  1. Changing to DX11 isn't that trivial, sadly. Do you happen to know what "D3DCOMPILE_PREFER_FLOW_CONTROL" is?
  2. Well that's sad, because I'll end up writing the exact same thing as ID3DXEffect, just because of a "define" problem. Anyway, do you know how to set a texture? I see there's ID3DXConstantTable that I'll need to use to set uniform values, but there's no SetTexture like there is for ID3DXEffect. Is there any way?

         texture BonesMap;
         sampler BonesMapSampler = sampler_state
         {
             Texture   = (BonesMap);
             MipFilter = None;
             MinFilter = Point;
             MagFilter = Point;
             AddressU  = Clamp;
             AddressV  = Clamp;
         };
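One possible route, sketched from memory and untested: samplers do show up in the ID3DXConstantTable, so you can look up the sampler register with GetSamplerIndex and bind the texture directly on the device. The names `constants`, `device`, and `bonesTexture` are assumptions, not code from the thread:

```
// Sketch (untested): bind a texture without ID3DXEffect, given the
// ID3DXConstantTable returned when the pixel shader was compiled.
D3DXHANDLE h = constants->GetConstantByName(NULL, "BonesMapSampler");
UINT samplerIndex = constants->GetSamplerIndex(h);
device->SetTexture(samplerIndex, bonesTexture);  // stage == sampler register
```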
  3. Hello, I'm using .fx files for my shader code. The problem is that when compiling an effect (ID3DXEffect), the vertex and pixel shader code are compiled together. The vertex shader has a limit of 4 sampler registers, while the pixel shader can hold 16, so the .fx file won't compile because I want more than 4.

     What I would like to do is put an #ifdef around the extra samplers (they're not used in the vertex shader code anyway) so that my .fx file will compile. But of course they still have to be visible when ID3DXEffect compiles the pixel shader part. So I need to be able to set a macro only for when ID3DXEffect compiles the pixel shader, but not the vertex shader. However, I can't seem to find out how 😥

     It seems I can only do it by compiling the vertex and pixel shaders myself and doing all that work by hand, basically losing the benefit of ID3DXEffect... I was really hoping D3DX sets a macro itself that I could use, but I can't find any information about it anywhere. Does anyone have the answer? 🙏
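For reference, the manual route being dreaded here is smaller than it sounds: D3DXCompileShader takes its own D3DXMACRO list per call, so each stage can get its own defines. A sketch (untested; the entry-point names and the PIXEL_SHADER macro are made up for illustration, not anything D3DX defines for you):

```
// In the .fx / .hlsl source, guard the extra samplers:
//     #ifdef PIXEL_SHADER
//     sampler ExtraSampler : register(s4);
//     #endif

D3DXMACRO psDefines[] = { { "PIXEL_SHADER", "1" }, { NULL, NULL } };

// Vertex shader: PIXEL_SHADER is not defined, so the guarded samplers vanish.
D3DXCompileShader(src, srcLen, NULL, NULL, "VS_Main", "vs_3_0", 0,
                  &vsCode, &vsErrors, &vsConstants);

// Pixel shader: PIXEL_SHADER defined, samplers visible again.
D3DXCompileShader(src, srcLen, psDefines, NULL, "PS_Main", "ps_3_0", 0,
                  &psCode, &psErrors, &psConstants);
```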
  4. Hi! So I'm struggling with this problem about skinning. Let's say you have a vertex that is skin-blended between 4 bones. How do you build the bone transform in the vertex shader? Can you make a 'total' bone matrix from the sum of the bone matrices weighted by their blend weights, like in Urho3D:

         // Skin() method
         void Skin(inout VertexShaderInput input, uniform int boneCount)
         {
             float4x4 skinning = 0;
             [unroll]
             for (int i = 0; i < boneCount; i++)
             {
                 skinning += Bones[input.Indices[i]] * input.Weights[i];
             }
             input.Position = mul(input.Position, skinning);
         }

     Or what I see in every topic on GameDev, where you transform the position by each bone matrix separately and sum the weighted results, like here: https://www.gamedev.net/forums/topic/549125-hlsl-skinning/

         VS_OUTPUT_BASE Out = (VS_OUTPUT_BASE) 0;
         float3 Pos = 0.0f;
         float4 src = float4(inPos.xyz, 1.0f);
         Pos += mul(u_pose[int(indicies.x)], src) * weights.x / 255;
         Pos += mul(u_pose[int(indicies.y)], src) * weights.y / 255;
         Pos += mul(u_pose[int(indicies.z)], src) * weights.z / 255;
         Pos += mul(u_pose[int(indicies.w)], src) * weights.w / 255;
         Out.Pos = mul(u_model_view_proj_matrix, float4(Pos, 1.0f));

     I think the first method is right, but I'm still curious. I think so because with the second method it seems impossible to build the vertex buffer with positions pre-multiplied by the inverse bind-pose bone matrix, but it's still weird that it's out there being discussed.
  5. Howdy! Let's say you have a mesh whose most-blended vertex is attached to 4 bones (so 4 weights that aren't 0 or 1), which means every vertex in the mesh carries 4 bone slots. Now suppose one vertex is attached to only a single bone, so it has a weight of 1.0. What do you put in the other 3 slots? 1. The root bone with 0.0 weights? Or 2. -1 ('no bone'), with an if() statement in the HLSL transformation calculation? Thanks for your input!
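For what it's worth, common practice is option 1 generalized: pad the unused slots with any valid index (0 works) and a weight of 0.0, so the usual blend loop ignores them with no branching; a per-bone if() in the shader would cost more than multiplying by zero. Using the Urho3D-style loop quoted in the skinning question above:

```
// A vertex bound to a single bone just carries padded influences:
//     Indices = { 7, 0, 0, 0 }   Weights = { 1.0, 0.0, 0.0, 0.0 }
float4x4 skinning = 0;
[unroll]
for (int i = 0; i < 4; i++)
    skinning += Bones[input.Indices[i]] * input.Weights[i];  // zero-weight
                                                             // slots drop out
input.Position = mul(input.Position, skinning);
```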
  6. ProgrammerDX

    Converting float4x4 to float4x3

    Thanks for the replies. Yes fais, the input was float3, so you are right that it got automatically converted into float4 with w as 0.f in this operation. I read that if I declare the input as float4, HLSL will automatically set w to 1.f even though the vertex declaration for it is float3, so I might do that as well. I tried what you said, making it float4(input, 1), and it actually worked! I can't believe it, because I felt like I had tried every combination. This is the final line that works, if anyone is interested:

        float4 position = float4(mul(float4(input.position, 1.0f), (float4x3)Bone), 1.0f);

    Hodgman, I think it doesn't matter that the last column would be interpreted as [0,0,0,0] after casting to float4x3, because later I set the float4's w to 1 again. I'm going to try to optimize further by sending float4x3 matrices to the vertex shader now, which was the final intent of this optimization.
  7. Hi there, currently I have a working skinned shader that uses float4x4 bone matrices. Code snippet in the vertex shader:

         float4 position = mul(float4(input.position, 1.0f), Bone);

     Bone is the float4x4 matrix of the bone, and input.position is the vertex position as it comes into the vertex shader. This works perfectly fine. Now I want to convert the bone matrix to float4x3 as an optimization, but it doesn't work:

         float4 position = float4(mul(input.position, (float4x3)Bone), 1.0f);

     What happens is that everything in the mesh is rendered at the origin of the model, so basically it seems the translation of the bone is no longer being applied. I've been trying to figure it out by swapping arguments around and trying float3x4 and such, but nothing makes it right. Anyone with a clue?
  8. ProgrammerDX

    What is this artifact in my shadow map?

    Can you share the final HLSL shader with us? Thanks in advance.
  9. Hi all, I have this simple post-process blur shader:

         sampler ColorSampler1 : register(s0);

         #define SAMPLE_SIZE 15

         float2 texelSize;
         float offsets[SAMPLE_SIZE];
         float weights[SAMPLE_SIZE];

         float4 PS_BlurH(float2 texCoord : TEXCOORD0) : COLOR0
         {
             float4 sum = float4(0.f, 0.f, 0.f, 1.f);

             [loop]
             for (int i = 0; i < SAMPLE_SIZE; i++)
                 sum += tex2D(ColorSampler1, float2(texCoord.x + (offsets[i] * texelSize.x), texCoord.y)) * weights[i];

             clip(sum.a < 0.01f ? -1 : 1);

             return sum;
         }

         float4 PS_BlurV(float2 texCoord : TEXCOORD0) : COLOR0
         {
             float4 sum = float4(0.f, 0.f, 0.f, 1.f);

             [loop]
             for (int i = 0; i < SAMPLE_SIZE; i++)
                 sum += tex2D(ColorSampler1, float2(texCoord.x, texCoord.y + (offsets[i] * texelSize.y))) * weights[i];

             clip(sum.a < 0.01f ? -1 : 1);

             return sum;
         }

         technique Glow
         {
             pass BlurHorizontal
             {
                 PixelShader = compile ps_2_0 PS_BlurH();
             }

             pass BlurVertical
             {
                 PixelShader = compile ps_2_0 PS_BlurV();
             }
         }

     But if I change ps_2_0 to ps_3_0, the shader doesn't work anymore? No errors on compiling... Anyone with a clue?
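One likely culprit, offered as a guess: in D3D9, a shader model 3.0 pixel shader can't be paired with the fixed-function vertex pipeline or a pre-3.0 vertex shader, so switching ps_2_0 to ps_3_0 silently breaks any pass that has no vs_3_0 vertex shader. A sketch of a pass-through vertex shader to pair with it (names assumed, untested):

```
// Minimal vs_3_0 pass-through for a full-screen post-process quad.
void VS_PassThrough(float4 pos : POSITION, float2 uv : TEXCOORD0,
                    out float4 oPos : POSITION, out float2 oUv : TEXCOORD0)
{
    oPos = pos;   // positions already in clip space for a full-screen quad
    oUv  = uv;
}

technique Glow
{
    pass BlurHorizontal
    {
        VertexShader = compile vs_3_0 VS_PassThrough();
        PixelShader  = compile ps_3_0 PS_BlurH();
    }
    pass BlurVertical
    {
        VertexShader = compile vs_3_0 VS_PassThrough();
        PixelShader  = compile ps_3_0 PS_BlurV();
    }
}
```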
  10. ProgrammerDX

    Why is my struct destructor not called?

    Ah right it's the assignment operator that gets called. Weird, for some reason I expected the old a to be destructed. Must be because it's late. Thanks.
  11. Hey all, check this out:

          struct A
          {
              int val = 0;
              A() { printf( "Construct A [%d]", val ); }
              ~A() { printf( "Destruct A [%d]", val ); }
          };

          int main()
          {
              A a;
              a.val = 500;
              a = A();
              printf( "Finally %d", a.val );
              a.val = 300;
              return 0;
          }

      This produces the output:

          Construct A [0]
          Construct A [0]
          Destruct A [0]
          Finally 0
          Destruct A [300]

      What the!? Why am I not getting Destruct A [500]?! This would leak memory if I had pointers in A, because its destructor is never called!? I want to see something like this:

          Construct A [0]
          Destruct A [500]
          Construct A [0]
          Destruct A [0]
          Finally 0
          Destruct A [300]

      What am I doing wrong? Hellup
  12. ProgrammerDX

    Dynamic Textures for Skinning

    Hm, a buffer reorganisation / draw-splitting routine would be useful, yeah. It's like having a 16-bit index buffer but still allowing use of a vertex buffer containing more than 65k vertices?

    You were able to fit the whole vertex info into 128 bits in one buffer? What I currently do is keep vertex position (3 floats), normals (3 floats), bone index (1 float only; there's no blended skinning, so no weights either), and UV coords (2 floats) each in separate vertex buffers. At rendering, they are separate streams. So for me, I'd have a dynamic vertex buffer for the positions and normals only, to transform those at render time.

    I'm not planning to 'thread' the transformations into the dynamic vertex buffer, because I build the bone matrices for a given frame practically right before rendering, so it seems pointless for the rendering thread to wait for a separate thread to finish what the rendering thread itself could do. It might be useful if I did other things between building the bone matrices and the actual rendering, but it'd have to be faster than the whole thread overhead.

    Lastly, I noticed there are no usage flags for creating read-only vertex buffers? I should just leave out D3DUSAGE_WRITEONLY when creating the vertex buffer, and use D3DLOCK_READONLY when I lock it for reading? Is this the fastest read-only static vertex buffer? Or should I not use D3D vertex buffers at all and go with std vectors or so...
  13. ProgrammerDX

    Dynamic Textures for Skinning

    Ah right, yeah most players have cards where VTF works fine. It's just the few that don't that I still want to support (i.e. some Windows XP users). I noticed that even some people with Windows 7 don't support D3DFMT_A32B32G32R32F; they have built-in Intel chips. Kind of strange, because I always thought Windows 7 required DX11 cards.

    I prefer not to split the mesh into submeshes by bones; that seems like a lot of work, and I already split it per material. So I'm thinking about the other options.

    Dual quaternions: I looked it up, but it seems like a lot of work to convert to that at this point (not to mention any unexpected dreadful artifacts that might appear). And storing just 4x3 matrices to be able to support 80 bones is still not enough.

    It looks like one static buffer with the base vertex data, plus one dynamic vertex buffer for rendering after transforming the vertices with the bone matrices on the CPU, might just be the best idea! And here I was thinking about going back to DrawPrimitiveUP, building big buffers on the CPU... good thing I started this topic.

    I'm also having trouble with some cards not supporting 32-bit index buffers. It's one really big mesh and I can't split it up. The only solution to that seems to be DrawPrimitiveUP (a backup technique used only when the card doesn't support 32-bit indices, of course!)

    Thanks for your insights
  14. ProgrammerDX

    Dynamic Textures for Skinning

    Ah, if tex2Dlod won't work then trying D3DFMT_A8R8G8B8 is useless.

    I don't know about feature levels because I'm solely using Direct3D 9.0c; I'm just trying to make everything work on that alone.

    I changed to texture buffers because 'cbuffers' on vs_3_0 only support fewer than 64 matrices, and that's not enough. Quite annoying to have to think about it again. I'll see if I can use cbuffers.

    I was thinking of doing the matrix calculations on the CPU if D3DFMT_A32B32G32R32F is not supported. Thing is, I've designed my code so that I have the vertices, normals, tex coords etc. in different buffers for different streams. Is there any way I can transform those vertices inside a static vertex buffer on the CPU before they go to the GPU, without resorting to the 'DrawPrimitiveUP' functions? That would be the simplest solution.
  15. Hey there, I'm using dynamic textures with format D3DFMT_A32B32G32R32F to place the matrix palette of bones in. In the vertex shader I use tex2Dlod. However, some cards don't support this format. I was wondering if some of you would know whether D3DFMT_A8R8G8B8 would also support matrices, before I try. Thanks for your time
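For context on the vertex-texture fetch itself, here is a sketch of how a bone matrix typically comes out of an A32B32G32R32F palette texture with tex2Dlod. The three-texels-per-bone layout, BoneMapWidth, and the sampler name are assumptions for illustration, not code from the project:

```
// Assumed layout: 3 float4 texels per bone (the rows of a 3x4 matrix)
// packed into row 0 of a BoneMapWidth x 1 A32B32G32R32F texture.
float BoneMapWidth;
sampler BonesMapSampler;

float3x4 GetBoneMatrix(float bone)
{
    float u  = (bone * 3.0f + 0.5f) / BoneMapWidth;  // center of first texel
    float du = 1.0f / BoneMapWidth;                  // one texel to the right
    float4 r0 = tex2Dlod(BonesMapSampler, float4(u,          0.5f, 0, 0));
    float4 r1 = tex2Dlod(BonesMapSampler, float4(u + du,     0.5f, 0, 0));
    float4 r2 = tex2Dlod(BonesMapSampler, float4(u + 2 * du, 0.5f, 0, 0));
    return float3x4(r0, r1, r2);   // use as mul(boneMatrix, float4(p, 1))
}
```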