Finoli

Member

  • Content Count: 7
  • Joined
  • Last visited

Community Reputation: 143 Neutral

About Finoli

  • Rank: Newbie

Personal Information

Social

  • Github: RavioliFinoli
  • Twitch: f1noli
  • Steam: finoli
  1. I was so sure it was a normalized value... Not sure how I've been getting by with that erroneous information. Anyway, thank you for correcting me on this! The anisotropic sampler was the one I had bound originally, but I had tried changing to point filtering. I wasn't aware you could simply index into a texture like that; great to know! Is it equivalent to using randomTexture.Load()?
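     For reference, the two access styles should fetch the same texel: bracket indexing on a Texture2D reads mip level 0, and Load does the same with the mip level passed explicitly. A minimal HLSL sketch (the % 64 wrap is an assumption for a 64x64 texture, since neither form goes through a sampler):

        // Bracket indexing reads mip level 0 directly, no sampler state involved.
        float3 a = randomTexture[uint2(screenPos.xy) % 64].xyz;
        // Load is equivalent here; the mip level is the third component.
        float3 b = randomTexture.Load(int3(uint2(screenPos.xy) % 64, 0)).xyz;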
  2. Hello, I'm implementing SSAO for my engine, but I noticed the kernel wasn't rotating properly. When I sample my random-vector texture I always get the same result, regardless of the input texture coordinates. Here's my shader:

        Texture2D randomTexture : register(t2);

        SamplerState smplr
        {
            Filter = D3D11_FILTER_ANISOTROPIC;
            AddressU = Wrap;
            AddressV = Wrap;
        };

        float4 PS_main(in float4 screenPos : SV_Position) : SV_TARGET0
        {
            // screen size 1280x720, randomTexture size 64x64
            float2 rvecCoord = float2(screenPos.x * 1280.f / 64.f, screenPos.y * 720.f / 64.f);
            float3 rvec = randomTexture.Sample(smplr, rvecCoord).xyz;
            return float4(rvec, 1.0f);
        }

     Off-topic code omitted. I can't for the life of me figure out why this sample always gives me the same result. Changing the line

        float2 rvecCoord = float2(screenPos.x * 1280.f / 64.f, screenPos.y * 720.f / 64.f);

     to

        float2 rvecCoord = float2(screenPos.xy);

     seems to make no difference. Here's a print of the shader state. Any help appreciated! ❤️
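     (Resolved in the reply above: SV_Position arrives in pixel units, not as a normalized 0..1 value. float2(screenPos.xy) used directly as a UV advances exactly one full wrap of the 64x64 texture per pixel, so every pixel lands on the same texel, and the scaled version overshoots similarly. A sketch of coordinates that would actually vary per pixel, assuming the 64x64 noise texture:)

        // SV_Position is in pixels; dividing by the noise texture size tiles
        // the 64x64 texture 1:1 across the screen with Wrap addressing.
        float2 rvecCoord = screenPos.xy / 64.0f;
        float3 rvec = randomTexture.Sample(smplr, rvecCoord).xyz;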
  3. m_ortho_matrix = glm::ortho(0.0f, (float)width, (float)height, 0.0f, 0.1f, 100.0f);

     width and height here should not be the screen width and height, but rather the width and height of the view volume in world space. The width usually depends on how "zoomed in" you want to be, and the height should be width * (resHeight / resWidth) to keep the aspect ratio. Hope this helps!
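     A minimal sketch of that computation (the 20-unit view width and the 1280x720 resolution are made-up example values):

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // A view 20 world units wide, rendered to a 1280x720 backbuffer:
        float viewWidth  = 20.0f;
        float viewHeight = viewWidth * (720.0f / 1280.0f); // keep the screen's aspect ratio

        glm::mat4 ortho = glm::ortho(0.0f, viewWidth, viewHeight, 0.0f, 0.1f, 100.0f);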
  4. Nothing unusual. Just the usual polymesh you'd expect. Haven't checked the node editor though. Anything I should look for? Can't check until tomorrow morning.
  5. Was just about to recommend Game Engine Architecture. That together with googling for resources is all you need. Frank D. Luna's book is also OK, but I wouldn't buy it just for that chapter.

     EDIT: Some useful links:
     http://mmmovania.blogspot.se/2012/11/skeletal-animation-and-gpu-skinning.html
     http://gamedevs.org/uploads/skinned-mesh-and-character-animation-with-directx9.pdf
     https://www.opengl.org/discussion_boards/showthread.php/179419-Need-help-with-skeletal-animation
     https://www.gamedev.net/resources/_/technical/graphics-programming-and-theory/skinned-mesh-animation-using-matrices-r3577
     https://www.gamedev.net/resources/_/technical/directx-and-xna/animating-characters-with-directx-r3632
  6. So I've been working on an FBX converter for our game engine. Saul Goodman, except sometimes when I export from Maya 2017 the SDK does not recognize my meshes as meshes. I tried exporting as ASCII in order to debug and found this:

        Objects:  {
            NodeAttribute: 1247074338064, "NodeAttribute::", "Null" {
                Properties70:  {
                    P: "Look", "enum", "", "",0
                }
                TypeFlags: "Null"
            }
            Geometry: 1247044636400, "Geometry::", "Mesh" {
                Vertices: *864 {
                    a: -1.07012701034546,26.473445892334,-2.64814710617065,-2.09461998939514,25.7309494018555,
                       -0.828222990036011,-0.172909006476402,27.0927505493164,-1.70674204826355,-0.626856029033661
                       ...
                       ...

     The "Null"s seem to be the cause of this issue, but I cannot find a way to fix it. It seems to happen whenever I load an OBJ into Maya and then export it as FBX, which I often need to do. Any idea what might be causing this? Our school project is due Friday, so I could really use some quick assistance. (Posting this on the Autodesk forums seems more appropriate, but I can't imagine it's a very active forum, and since this is quite a pressing matter I decided to try here first.) Thanks in advance!

     EDIT: Thought I should post the code that checks for eMesh.

        for (int i = 0; i < pFbxRootNode->GetChildCount(); i++)
        {
            FbxNode* pCurrentNode = pFbxRootNode->GetChild(i);

            // Skip nodes that have no attribute
            if (pCurrentNode->GetNodeAttribute() == NULL)
                continue;

            // Get the type of the current node
            FbxNodeAttribute::EType AttributeType = pCurrentNode->GetNodeAttribute()->GetAttributeType();

            // If the node is a mesh
            if (AttributeType == FbxNodeAttribute::eMesh)
            {
                // Do stuff
            }
        }
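     One thing the loop above would miss: it only inspects the root's direct children, so a mesh parented under a group ("Null") node is never visited. A hedged sketch of a recursive walk instead (ProcessMesh is a hypothetical handler):

        // Sketch: recurse through the whole hierarchy rather than only the
        // root's children, since an OBJ imported into Maya may end up
        // parented under a group ("Null") node when exported as FBX.
        void VisitNode(FbxNode* pNode)
        {
            FbxNodeAttribute* pAttribute = pNode->GetNodeAttribute();
            if (pAttribute && pAttribute->GetAttributeType() == FbxNodeAttribute::eMesh)
                ProcessMesh(pNode); // hypothetical handler

            for (int i = 0; i < pNode->GetChildCount(); i++)
                VisitNode(pNode->GetChild(i));
        }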
  7. Makes sense. We're supposed to make our own converter for the course we're taking, but we'll have homemade models soon anyway, so it won't be a big issue.
  8. EDIT: #1 was due to older formats of FBX. #2 was a bug in our FBX converter, where a vertex's weights were not properly initialized. #3 was due to a team member hardcoding a rotation to fix the wrong orientation the objects had when they were previously exported without converting to FbxAxisSystem::DirectX.

     Got a few problems importing and using data from the FBX SDK.

     #1: I've been working for a while on an importer for our game engine. Most things work as they should with homemade models; however, when I e.g. download a random model online and try to export the mesh, the FBX SDK does not recognize it as a mesh.

        FbxNodeAttribute::EType AttributeType = pCurrentNode->GetNodeAttribute()->GetAttributeType();
        if (AttributeType != FbxNodeAttribute::eMesh)
            continue; // Never passes unless I made the model myself directly in Maya

     Any ideas what might be the cause?

     #2: When importing our animated meshes we obviously export weights with the vertices. The odd thing is that some vertices do not have normalized weights, i.e. a vertex's weight for bone1 might be 0.95 and for bone2 0.45, or whatever. It is usually only a few vertices that are affected, but it needs a fix regardless. Any ideas? (A normalization sketch follows below.)

     #3: For testing, we have an animated sphere bouncing up and down in Maya. The problem is that in our engine the sphere bounces along the floor plane. I thought this had to do with a difference in the coordinate systems used, and tried this fix:

        // Convert the scene to the DirectX axis system
        FbxAxisSystem SceneAxisSystem = scene->GetGlobalSettings().GetAxisSystem();
        FbxAxisSystem OurAxisSystem(FbxAxisSystem::DirectX);
        if (SceneAxisSystem != OurAxisSystem)
        {
            OurAxisSystem.ConvertScene(scene);
        }

     Obviously we are not using the same coordinate system, since the scene is indeed converted. However, when animated in our engine, the model winds up quite distorted... no clue what's going on here. Feel free to address as many or as few of these problems as you wish. Any help is greatly appreciated! :) If you need more information I will gladly supply it, such as code or graphics debugging captures!
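     For #2, a minimal sketch of renormalizing per-vertex weights at import time (the four-weight Vertex layout is an assumption matching the vertex format discussed in the posts below):

        // Sketch: rescale the four bone weights of a vertex so they sum to 1.
        // Vertex is a hypothetical struct matching the input layout below.
        struct Vertex { float weights[4]; /* position, normal, bone indices, ... */ };

        void NormalizeWeights(Vertex& v)
        {
            float sum = v.weights[0] + v.weights[1] + v.weights[2] + v.weights[3];
            if (sum > 0.0f)
            {
                for (int i = 0; i < 4; i++)
                    v.weights[i] /= sum;
            }
        }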
  9. Quote: "Why do the indices and weights overlap? Indices are 16 bytes starting at offset 24 bytes, but weights start just 12 bytes later. You have an overlap between BoneIndices.w and Weights.x."

     Yes, I noticed this after I posted it. Fixed it, and the weights seem to work like they should now! This was the problem, thank you for pointing it out! Animation works as expected now! Thank you all for your help! Now I can finally start making some actual animations and clean up the code a bit! :D
  10. Quote: "You're reading them in the VS as integers. What type are they in the buffers and the input layout? You probably want to declare them as float inputs in the VS and use a _UNORM data format."

     The weights are float, the indices are int. :P The layout looks like this:

        D3D11_INPUT_ELEMENT_DESC input_desc[] =
        {
            { "POSITION",    0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "NORMAL",      0, DXGI_FORMAT_R32G32B32_FLOAT,    0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "BONEINDICES", 0, DXGI_FORMAT_R32G32B32A32_SINT,  0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
            { "WEIGHTS",     0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 36, D3D11_INPUT_PER_VERTEX_DATA, 0 }
        };

     Maybe the format is the problem? DXGI_FORMAT_R32G32B32A32_SINT
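     For reference, the overlap called out in the reply above: BONEINDICES is R32G32B32A32_SINT, 16 bytes starting at offset 24, so it occupies bytes 24-39, while WEIGHTS starts at 36. A sketch of the corrected last two entries:

        // BONEINDICES ends at byte 40, so WEIGHTS has to start at 40, not 36:
        { "BONEINDICES", 0, DXGI_FORMAT_R32G32B32A32_SINT,  0, 24, D3D11_INPUT_PER_VERTEX_DATA, 0 },
        { "WEIGHTS",     0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 40, D3D11_INPUT_PER_VERTEX_DATA, 0 }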
  11. Alright, so, I managed to get the locals to show up in VS2015, and... well... This is actually the first time I've tried uploading an array of matrices like this to a shader, and I'm quite obviously doing something wrong. The code that creates the buffer is in the original post. I update the matrices every frame like this:

        void Animator::DrawAndUpdate(float deltaTime)
        {
            PreDraw();
            this->Update(deltaTime);
            this->UpdateConstantBuffers();
            this->deviceContext->Draw(this->mesh->verts.size(), 0);
        }

        void Animator::UpdateConstantBuffers()
        {
            D3D11_MAPPED_SUBRESOURCE mappedResource;
            ZeroMemory(&mappedResource, sizeof(D3D11_MAPPED_SUBRESOURCE));
            deviceContext->Map(cbJointTransforms, 0, D3D11_MAP_WRITE_DISCARD, 0, &mappedResource);
            memcpy(mappedResource.pData, &aFinalMatrices, sizeof(XMMATRIX) * aFinalMatrices.size());
            deviceContext->Unmap(cbJointTransforms, 0);
        }

     Does anyone see a problem with it? If not, what could be the reason for these values? Worth noting is that the data I update the buffer with is only three XMMATRIX, since the mesh I'm using has only 3 joints. I don't see why that would be an issue, but it may be worth mentioning. Here's a print of what they should look like:
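     One detail worth flagging in the snippet above: &aFinalMatrices takes the address of the std::vector object itself, not of its elements. A sketch of the copy as presumably intended:

        // aFinalMatrices is a std::vector<DirectX::XMMATRIX>; copy its
        // contents, not the vector's internal bookkeeping, into the buffer:
        memcpy(mappedResource.pData, aFinalMatrices.data(),
               sizeof(DirectX::XMMATRIX) * aFinalMatrices.size());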
  12. Thanks for the quick reply! This might be part of the problem; however, I also noticed that my weights.x values are way off. It sounds really weird that they chose row-major for the built-in matrix structs and column-major as the HLSL default... oh well, I'm sure there's a reason for it (???). As for the shader debug info, how do I enable that? I compile the shaders at runtime with D3DCompileFromFile(). Thanks :D
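     For anyone with the same question: shader debug info is requested through the compile flags passed to D3DCompileFromFile. A sketch, with error handling omitted (the file name and entry point are hypothetical):

        #include <d3dcompiler.h>

        // Embed debug info and disable optimization so the graphics debugger
        // can map HLSL source to the running shader.
        UINT flags = D3DCOMPILE_DEBUG | D3DCOMPILE_SKIP_OPTIMIZATION;

        ID3DBlob* shaderBlob = nullptr;
        ID3DBlob* errorBlob  = nullptr;
        HRESULT hr = D3DCompileFromFile(L"Skinning.hlsl", nullptr, nullptr,
                                        "VS", "vs_5_0", flags, 0,
                                        &shaderBlob, &errorBlob);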
  13. So I'm trying to upload an array of matrices to my vertex shader, like so:

        cbuffer cbJointTransforms : register(b6)
        {
            float4x4 gBoneTransforms[96];
        };

     I think it is done correctly; however, when I try to draw my model using this data to transform the vertices, nothing shows up on screen. If I just do the basic WVP multiplication it draws fine. I can't really find any obvious mistake in the shader code, so for now I have to assume I'm uploading this data to the GPU wrong. Here's how I do it:

        // Constant buffer
        D3D11_BUFFER_DESC cbDesc;
        cbDesc.ByteWidth = sizeof(XMMATRIX) * aFinalMatrices.size();
        cbDesc.Usage = D3D11_USAGE_DYNAMIC;
        cbDesc.BindFlags = D3D11_BIND_CONSTANT_BUFFER;
        cbDesc.CPUAccessFlags = D3D11_CPU_ACCESS_WRITE;
        cbDesc.MiscFlags = 0;
        cbDesc.StructureByteStride = 0;

        D3D11_SUBRESOURCE_DATA InitData;
        InitData.pSysMem = &aFinalMatrices[0];
        InitData.SysMemPitch = 0;
        InitData.SysMemSlicePitch = 0;

        hr = this->device->CreateBuffer(&cbDesc, &InitData, &cbJointTransforms);

     aFinalMatrices is a vector<DirectX::XMMATRIX>.

     I don't know how to debug the shader the way you debug regular C++ code in Visual Studio (i.e., putting breakpoints and checking locals). I am compiling it at runtime. I tried using Graphics Debugging, but it doesn't show me any values in the buffer. Any help appreciated!! I've been trying to solve this for hours and have gotten nowhere. I just want to get going with debugging the actual animation, not the friggin' constant buffer...

     Shader code (relevant parts):

        cbuffer cbJointTransforms : register(b6)
        {
            float4x4 gBoneTransforms[96];
        };

        struct VS_IN
        {
            float3 pos : POSITION;
            float3 nor : NORMAL;
            //float2 UV : TEXCOORD;
            float4 weights : WEIGHTS;
            int4 boneIndices : BONEINDICES;
        };

        VS_OUT VS(VS_IN input)
        {
            VS_OUT output;

            // FRANK D. LUNA (p. 781)
            // Init weight array
            float weights[4] = { 0.0f, 0.0f, 0.0f, 0.0f };
            weights[0] = input.weights.x;
            weights[1] = input.weights.y;
            weights[2] = input.weights.z;
            weights[3] = input.weights.w;

            // Blend vertices
            float3 position = float3(0.0f, 0.0f, 0.0f);
            float3 nor = float3(0.0f, 0.0f, 0.0f);
            for (int i = 0; i < 4; i++)
            {
                if (input.boneIndices[i] >= 0) // unused bone indices are negative
                {
                    position += weights[i] * mul(float4(input.pos, 1.0f),
                        gBoneTransforms[input.boneIndices[i]]).xyz;
                    nor += weights[i] * mul(input.nor,
                        (float3x3)gBoneTransforms[input.boneIndices[i]]);
                }
            }

            output.pos = mul(Proj, mul(View, mul(World, float4(position, 1.0))));
            output.wPos = mul(World, float4(position, 1.0f));
            output.nor = mul(NormalMatrix, nor);
            output.nor = normalize(output.nor);
            output.uv = float2(0.0f, 0.0f);
            return output;
        }
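     One more thing worth checking, echoed by the row-major/column-major exchange above: DirectXMath's XMMATRIX is row-major, while HLSL cbuffer matrices default to column-major packing, so each matrix is usually transposed on upload. A sketch against the Map/Unmap update shown earlier:

        // Sketch: transpose each XMMATRIX on its way into the mapped
        // constant buffer, since HLSL defaults to column-major packing.
        D3D11_MAPPED_SUBRESOURCE mapped = {};
        deviceContext->Map(cbJointTransforms, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped);
        XMMATRIX* dst = reinterpret_cast<XMMATRIX*>(mapped.pData);
        for (size_t i = 0; i < aFinalMatrices.size(); ++i)
            dst[i] = XMMatrixTranspose(aFinalMatrices[i]);
        deviceContext->Unmap(cbJointTransforms, 0);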
  14. Quote: "Every API call has a direct CPU-side cost, but you should be able to use as many as 10k API calls without blowing the budget (depending on how much CPU time the rest of your game needs). It's good practice to sort your objects into an order that will minimise calls, e.g. drawing objects that share a material at the same time."

     Quote: "I would add that there is a direct AND an indirect CPU cost. With D3D11, a lot of black magic and lazy evaluation happens, which usually makes your Present call a big fat black box. It is better with D3D12, where the API has more defined behavior (e.g. shader compilation happens at CreatePSO and nowhere else, ...). But again, unless you diagnose a performance issue within your engine, or something really dumb and easy to fix, all you are doing is early over-optimisation, and that is just a waste of time :)"

     Indeed, that has been my most prominent problem thus far when it comes to programming: over-optimizing and underestimating processing power. ^_^
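     A minimal sketch of the sorting advice from the first quote (the DrawItem fields are hypothetical):

        #include <algorithm>
        #include <vector>

        // Hypothetical draw-call record: sorting by material lets consecutive
        // draws share state, minimizing shader/material bind calls.
        struct DrawItem
        {
            int materialId;
            int meshId;
        };

        void SortByMaterial(std::vector<DrawItem>& items)
        {
            std::sort(items.begin(), items.end(),
                      [](const DrawItem& a, const DrawItem& b)
                      { return a.materialId < b.materialId; });
        }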
  15. Quote: "The main idea: the VS and PS are bound to the pipeline. Only [0 or 1] VS and [0 or 1] PS may be bound. Each time you call context->PSSetShader, the previous PS is unbound and the new one is bound.

        RenderScene()
            RenderWater()
                context->VSSetShader(vsWater);
                context->PSSetShader(psWater);
                context->Draw();
            RenderGround()
                context->PSSetShader(psGround);
                context->Draw(); // will use the vsWater shader, already bound in RenderWater()
            RenderSoldiers()
                context->VSSetShader(vsSoldier); // switch from vsWater
                context->PSSetShader(psSoldier); // switch from psGround
                for (int i = 0; i < soldiers; ++i)
                {
                    <update soldier buffer>
                    context->Draw(); // uses the same shaders, no need to Set() them again here
                }
        m_swapChain->Present();"

     Yeah, I do get that. Probably one of the reasons I questioned this method is that I was told one "should minimize API calls", which makes perfect sense, but I took it too far: wanting to set new shaders only once or twice per frame. I get the sense now that it's not that performance-impacting.
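     If the redundant Set calls ever do show up in a profile, a common trick is a tiny state cache that skips re-binding an already-bound shader. A sketch (the cached pointer is hypothetical):

        // Sketch: filter out redundant PSSetShader calls by remembering the
        // last-bound pixel shader.
        static ID3D11PixelShader* s_currentPS = nullptr;

        void SetPixelShaderCached(ID3D11DeviceContext* ctx, ID3D11PixelShader* ps)
        {
            if (ps != s_currentPS)
            {
                ctx->PSSetShader(ps, nullptr, 0);
                s_currentPS = ps;
            }
        }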