
Community Reputation

190 Neutral

About inzombiak

  1. inzombiak

    Plasticity using Bullet

    I'm currently trying to implement deformable bodies in my game. I've integrated Bullet into my engine, and while everything is working fine, I can't figure out how to create a clay-like body. The most basic way I can explain what I want to do is that I need to modify the collision shapes of bodies in real time: I want the player to be able to deform certain bodies with the mouse. I've tried using Bullet's soft bodies, but so far I've been unable to get them to behave the way I want (making a stretchy box has proven difficult). They either collapse or don't stretch, and there isn't enough documentation to understand what all of the parameters do. I'm also concerned about the performance impact of using large, or too many, soft bodies.

    After getting stuck on soft bodies I decided to use a triangle mesh instead, but I can't find out how to access the mesh data when needed. The recommended way of changing a collision shape is to remove and re-add the body each time the shape needs to change, but this could happen frequently and I'm not sure what the performance impact would be. There is also the issue of updating objects that may end up inside the new shape.

    I was wondering if anyone has done this before or has any tips on how to approach the problem. I don't necessarily expect Bullet to support what I want to do; I just don't want to reinvent the wheel. If what I want is impossible with the default implementation of Bullet, can someone give me a starting point for implementing it myself?

    P.S. Sorry for any grammar mistakes, I haven't been getting much sleep.
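    A minimal sketch of the mesh-deformation half of this, assuming you keep a CPU-side copy of the triangle-mesh vertices (the names below are hypothetical, not Bullet API): push every vertex within some radius of the mouse hit point with a falloff, then rebuild the collision shape (e.g. a btTriangleMesh) and remove/re-add the body as the Bullet docs recommend.

    ```cpp
    #include <cmath>
    #include <vector>

    struct Vec3 { float x, y, z; };

    // Push every vertex within `radius` of `center` along `pushDir`, with a
    // linear falloff -- a very simple clay-like dent. The displaced vertices
    // would then be fed back into the collision shape (e.g. a btTriangleMesh),
    // removing and re-adding the rigid body after the rebuild.
    void DeformVertices(std::vector<Vec3>& vertices, const Vec3& center,
                        const Vec3& pushDir, float radius, float strength)
    {
        for (Vec3& v : vertices)
        {
            float dx = v.x - center.x, dy = v.y - center.y, dz = v.z - center.z;
            float dist = std::sqrt(dx * dx + dy * dy + dz * dz);
            if (dist >= radius)
                continue;
            float falloff = 1.0f - dist / radius; // 1 at the center, 0 at the edge
            v.x += pushDir.x * strength * falloff;
            v.y += pushDir.y * strength * falloff;
            v.z += pushDir.z * strength * falloff;
        }
    }
    ```

    This doesn't solve objects ending up inside the new shape; a common workaround is to only allow small per-frame displacements so the solver can push overlapping bodies out.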
  2. You should tie it to the animation. When I create animations for a character, I have special bones (marked by a naming convention) where I connect items in-game. So, in game you update the character's animation first, then you get the positions of these special bones and move/rotate your linked item there.

    I'm probably going to be using sprite-based animation instead of 3D. I guess I'm going to define bind-points on each frame and use those, right?
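    For the sprite case, the per-frame bind-point idea can be sketched like this (names are hypothetical): each frame of an animation stores named attachment points, and after advancing the animation you look up the point for the current frame and place the item there.

    ```cpp
    #include <cstddef>
    #include <map>
    #include <string>
    #include <vector>

    struct Point2D { float x, y; };

    // One frame of a sprite animation, with named attachment points
    // (e.g. "hand", "muzzle") authored per frame.
    struct SpriteFrame
    {
        std::map<std::string, Point2D> bindPoints;
    };

    // Look up a bind-point on the current frame; fall back to the sprite
    // origin if the frame doesn't define one.
    Point2D GetBindPoint(const std::vector<SpriteFrame>& frames,
                         std::size_t frameIndex, const std::string& name)
    {
        const auto& points = frames[frameIndex].bindPoints;
        auto it = points.find(name);
        if (it == points.end())
            return Point2D{0.0f, 0.0f};
        return it->second;
    }
    ```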
  3. I'm currently working on an isometric Diablo-style game and I want to start implementing the combat. The combat is of the basic magic, melee and ranged variety, with players being able to add modifiers to attacks. While I have the image in my head, I can't figure out a sound implementation method. I have some ideas and would like to hear people's opinions on them. The "engine" I've built is an Entity-Component system.

    The implementation for ranged and magic is a lot clearer in my head: create an entity with collision, movement and any special (timers, AI, etc.) components. My collision component stores function pointers to resolver functions that just apply different effects (heal, damage, etc.), so when a collision does happen I just go through the vector and call them one by one. If it's a special projectile, like a bomb, it would trigger a different function that creates an explosion entity.

    My problem with melee is the movement of the weapon. Should I just define a set path of motion for the entity that is created and have it follow the path, or somehow tie it into the animation? Aside from that it should be the same, right?

    I know this is very hand-wavy and unclear, but I've been struggling with it for a week. If anyone can offer advice, suggestions or a specific methodology, that would be great.
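    The resolver-function-pointer scheme described above can be sketched roughly like this (a simplification with assumed types, not anyone's actual code): a collision component carries a list of effect functions, and on contact they are applied to the other entity's state one by one.

    ```cpp
    #include <vector>

    // Minimal stand-in for an entity's mutable combat state.
    struct CombatState
    {
        int health = 100;
    };

    // An effect resolver applies exactly one effect to the target.
    using EffectFn = void (*)(CombatState&);

    void ApplyDamage(CombatState& target) { target.health -= 10; }
    void ApplyHeal(CombatState& target)   { target.health += 5;  }

    // Collision component: when a hit lands, walk the resolver list in order.
    struct CollisionComponent
    {
        std::vector<EffectFn> resolvers;

        void Resolve(CombatState& target) const
        {
            for (EffectFn fn : resolvers)
                fn(target);
        }
    };
    ```

    A melee swing fits the same shape: the weapon entity gets this component plus a path- or animation-driven movement component, so the hit resolution stays identical across magic, ranged and melee.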
  4. I'm trying to load and play an FBX scene using OpenGL. I can get the mesh and vertices properly, but I can't get the animations working.

    I followed the How to Work with FBX SDK article, which helped a lot, but I can't get the animations to display properly. My knowledge of FBX, OpenGL and 3D programming in general is fairly limited, which may be why it's been so difficult.

    Either way, here is the snippet that sets the animation. I believe the problem comes from how I'm trying to apply the affine matrix to each vertex.

    for (int i = 0; i < m_numTriangles; i++)
    {
        for (int j = 0; j < 3; j++)
        {
            currVertex = m_verticies[m_triangles[i].vertices[j]];
            posVec = currVertex.pos;
            normVec = currVertex.normal;
            for (int k = 0; k < currVertex.blendingInfo.size(); ++k)
            {
                jointIndex = currVertex.blendingInfo[k].index;
                jointWeight = currVertex.blendingInfo[k].weight;
                if (jointWeight != 0)
                {
                    transMat = m_skeleton.joints[jointIndex].animation[frameIndex1 - 1].transform;
                    temp = transMat.MultT(currVertex.pos);
                    posVec += temp * jointWeight;
                }
            }
            glNormal3f(normVec.mData[0], normVec.mData[1], normVec.mData[2]);
            glVertex3f(posVec.mData[0], posVec.mData[1], posVec.mData[2]);
        }
    }

    I know this method is slow, and I'm currently trying to implement the animation using the ViewScene example, but I'm still curious about what I'm doing wrong. I assume normVec should go through the same transformations as posVec, but I want to get the model to stop looking like something from a Silent Hill game first. Any help would be appreciated.
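    One likely culprit in the snippet above: posVec starts at the bind-pose position and the weighted joint transforms are then added on top, so the bind pose is counted twice (and normVec is never skinned at all). Standard linear blend skinning accumulates from zero: p' = sum_k w_k * (M_k * p). Here is a minimal sketch with translation-only "joint transforms" to keep it short; the types are stand-ins, not FBX SDK types.

    ```cpp
    #include <vector>

    struct V3 { float x, y, z; };

    struct JointInfluence
    {
        V3    translation; // stand-in for the joint's full transform at this frame
        float weight;
    };

    // Linear blend skinning: start from zero and accumulate the weighted,
    // joint-transformed bind position -- NOT bindPos plus weighted transforms.
    V3 SkinPosition(const V3& bindPos, const std::vector<JointInfluence>& influences)
    {
        V3 out{0.0f, 0.0f, 0.0f};
        for (const JointInfluence& inf : influences)
        {
            // "Transform" the bind position by this joint (translation only here).
            V3 transformed{bindPos.x + inf.translation.x,
                           bindPos.y + inf.translation.y,
                           bindPos.z + inf.translation.z};
            out.x += inf.weight * transformed.x;
            out.y += inf.weight * transformed.y;
            out.z += inf.weight * transformed.z;
        }
        return out;
    }
    ```

    With full matrices you would also skin the normal, using the rotation part of the blended matrix (or its inverse-transpose if there is non-uniform scale), which should fix the lighting half of the Silent Hill look.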
  5. Irlan: So if I understand what you're saying correctly, by separating my systems I can create an architecture where I can swap out low-level systems (e.g. swapping Bullet Physics for Box2D) without having to change the high-level ones.

    Krohm/Angus: I have each system creating and managing its corresponding component types, but in order to avoid a major system overhaul, I made the createComponent() functions and the containers that hold these components static, so they can be accessed from my component factory like so:

    m_componentFactory.Register<TransformComponent>(TransformComponent::COMPONENT_NAME, TransformManager::CreateTransformComponent);
    m_componentFactory.Register<RenderComponent>(RenderComponent::COMPONENT_NAME, EntityRenderer::CreateRenderComponent);
    m_componentFactory.Register<InputComponent>(CInputComponent::COMPONENT_NAME, InputManager::CreateInputComponent);

    To create a component I call:

    m_componentFactory.Create(GetIDFromName(currComponentNode->Attribute("name")));

    I know statics aren't the most popular kind of variable, but would this be considered a horrendous thing to do? It just makes my code a lot cleaner and a lot more organized. Sorry for the branching questions, and thanks for sticking around.
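    The static-creator registration works, but a common alternative that avoids the statics is to register callables bound to live manager instances; the factory call site stays the same. A rough sketch (all names hypothetical, not the code above):

    ```cpp
    #include <functional>
    #include <map>
    #include <memory>
    #include <string>

    struct Component
    {
        virtual ~Component() = default;
    };

    struct TransformComponent : Component { float x = 0.0f, y = 0.0f; };

    // Factory mapping a component name to a creator callable. Because the
    // creators are std::function, they can wrap static functions, lambdas,
    // or member functions bound to a manager instance -- no statics needed.
    class ComponentFactory
    {
    public:
        using Creator = std::function<std::unique_ptr<Component>()>;

        void Register(const std::string& name, Creator creator)
        {
            m_creators[name] = std::move(creator);
        }

        std::unique_ptr<Component> Create(const std::string& name) const
        {
            auto it = m_creators.find(name);
            return it != m_creators.end() ? it->second() : nullptr;
        }

    private:
        std::map<std::string, Creator> m_creators;
    };
    ```

    For example, factory.Register("transform", [&mgr]() { return mgr.CreateTransformComponent(); }); lets the manager keep a non-static container while the XML-driven Create() path is unchanged.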
  6. It was just a general system I added for the sake of explaining my situation, but will do.
  7. My current issue was (and is) with passing the information between the components. I'm not sure how this separation of systems would help.

    I'm currently trying out Krohm's/Angus's solution. It's not as clean as having everything come out of a factory, but I can see how it would be better on the performance side.
  8.     This makes all of the sense. Thank you. I had been following the "Game Coding Complete 4" book, but I couldn't find where the actual components are created, so I thought "just toss em on the heap".  
  9. Hi all,

    I recently decided to move my game from an inheritance-based architecture to an entity-component-based one. I have the whole component system set up, but I'm having problems figuring out how to store the components and pass them around. Below is a bad image of my architecture.

    [attachment=26133:Game Architecture.png]

    Right now the operations are as follows: the Game class passes information and objects between classes and contains the main loop. The LevelLoader class reads the tile info from a .tmx (Tiled) file and generates the tile entities, storing them in one of three vectors based on type: foreground, background and collision box. I separated them to make rendering and collision simpler. After loading is complete, ALL the entities are transferred to the GameObjectManager class. Every frame after this is processed as follows: the GameObjectManager updates all its entities, which update their components; then Game extracts the entities vector from GameObjectManager and hands either the vector, or its elements individually, to the different systems. The systems handle their operations and the cycle repeats.

    This is where I feel like I'm making a mistake. I have the entities and I have the components, I just don't know how to give the different systems (Physics, Renderer, Input, etc.) access to them. Currently I just get the std::vector<Entity> container from the GameObjectManager class and pass it into each system, but it just doesn't feel right. Objects that have no physics component have to be iterated over in the Physics system, and ones that are just empty collision boxes get passed to the Renderer.

    One idea that came to mind was having a vector of pointers for each type of component and passing the corresponding vector to the corresponding system. This would certainly help the game run smoother, but I think adding new component types would become more difficult.
Another is to sift through the vector of entities in the Game class and just pass each entity's components to the right system:

    for (int i = 0; i < entity.size(); ++i)
    {
        if (entity[i].hasComponent("physics"))
        {
            physicsSystem.Update(entity[i].getPhysicsComponent());
        }
        if (entity[i].hasComponent("render"))
        {
            renderSystem.Update(entity[i].getRenderComponent());
        }
        ....
    }

    With all this said, my question is: what is the most efficient way of handling entity/component storage and access? Should I throw them all in a single container, pass it around, and let the systems sift through it? Or should I manually sort the components beforehand and pass the systems only what they need? Is there a better/smarter way of handling this?
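    The "vector per component type" idea is the usual answer here, and it stays extensible if each system owns its own dense container and entities just hold IDs into it. A minimal sketch with hypothetical names: the Physics system iterates only its own array, so render-only entities never reach it.

    ```cpp
    #include <cstddef>
    #include <vector>

    struct PhysicsComponent { int entityId; float vx, vy; };

    // Each system owns a dense array of just its own component type,
    // so an update never touches unrelated entities.
    class PhysicsSystem
    {
    public:
        PhysicsComponent& Add(int entityId)
        {
            m_components.push_back(PhysicsComponent{entityId, 0.0f, 0.0f});
            return m_components.back();
        }

        // Integrate every physics component; entities without one never appear.
        void Update(float dt, std::vector<float>& xPositions)
        {
            for (const PhysicsComponent& c : m_components)
                xPositions[c.entityId] += c.vx * dt;
        }

        std::size_t Count() const { return m_components.size(); }

    private:
        std::vector<PhysicsComponent> m_components;
    };
    ```

    Adding a new component type then means adding one system with its own container, rather than widening a shared Entity class, which addresses the extensibility worry.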
  10. inzombiak

    DirectX Render Limit?

    Cool, thank you for your help guys.
  11. inzombiak

    DirectX Render Limit?

    Ran PIX, and the screen simply wasn't rendering: the calculations happened, but the buffers weren't swapping. So I eventually figured it was because EndScene wasn't being called. I went through the code checking the D3D, Model and Shader classes and eventually found that I had placed a length limit on the text that shows how many models are being rendered. The second 0 in 100 was pushing it over that limit, causing the Render loop to exit before reaching EndScene. Is that the answer to your question? Did I understand you correctly?
  12. inzombiak

    DirectX Render Limit?

    Well, I fixed it, and I'm ashamed of why it wasn't working. Turns out the text I was rendering was exceeding the predetermined length and breaking the render function before the D3D EndScene call. Sorry for the trouble, I'm just going to go sulk in the corner for a bit.
  13. inzombiak

    DirectX Render Limit?

    Yeah, I'm using the desktop version so no debugger, but I'm downloading the old SDK. Until then, where might the issue lie? I'd assume in either the D3D class, the Model class or the Shader class, correct?
  14. inzombiak

    DirectX Render Limit?

    Uh, I'm using VS 2013 Express, but I think it has the Graphics Debugger. I'll try it when I get home. What do you mean by "application problem"? I'm just confused as to what the number 100 has to do with anything. Thanks for the help, guys.
  15. inzombiak

    DirectX Render Limit?

    Tried changing it to if (true); no change. Here are the snippets.

    void LightShaderClass::RenderShader(ID3D11DeviceContext* deviceContext, int indexCount)
    {
        // Set the vertex input layout.
        deviceContext->IASetInputLayout(m_layout);

        // Set the vertex and pixel shaders that will be used to render this triangle.
        deviceContext->VSSetShader(m_vertexShader, NULL, 0);
        deviceContext->PSSetShader(m_pixelShader, NULL, 0);

        // Set the sampler state in the pixel shader.
        deviceContext->PSSetSamplers(0, 1, &m_sampleState);

        // Render the triangle.
        deviceContext->DrawIndexed(indexCount, 0, 0);

        return;
    }

    bool ModelClass::InitializeBuffers(ID3D11Device* device)
    {
        D3D11_BUFFER_DESC vertexBufferDesc, indexBufferDesc;
        D3D11_SUBRESOURCE_DATA vertexData, indexData;
        HRESULT result;

        // Create the vertex array.
        VertexType* vertices = new VertexType[m_vertexCount];
        if (!vertices)
        {
            return false;
        }

        // Create the index array.
        unsigned long* indices = new unsigned long[m_indexCount];
        if (!indices)
        {
            return false;
        }

        // Load the vertex array and index array with data.
        for (int i = 0; i < m_vertexCount; i++)
        {
            vertices[i].position = XMFLOAT3(m_model[i].x, m_model[i].y, m_model[i].z);
            vertices[i].texture = XMFLOAT2(m_model[i].tu, m_model[i].tv);
            vertices[i].normal = XMFLOAT3(m_model[i].nx, m_model[i].ny, m_model[i].nz);
            indices[i] = i;
        }

        // Set up the description of the static vertex buffer.
        vertexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        vertexBufferDesc.ByteWidth = sizeof(VertexType) * m_vertexCount;
        vertexBufferDesc.BindFlags = D3D11_BIND_VERTEX_BUFFER;
        vertexBufferDesc.CPUAccessFlags = 0;
        vertexBufferDesc.MiscFlags = 0;
        vertexBufferDesc.StructureByteStride = 0;

        vertexData.pSysMem = vertices;
        vertexData.SysMemPitch = 0;
        vertexData.SysMemSlicePitch = 0;

        // Now create the vertex buffer.
        result = device->CreateBuffer(&vertexBufferDesc, &vertexData, &m_vertexBuffer);
        if (FAILED(result))
        {
            return false;
        }

        // Set up the description of the static index buffer.
        indexBufferDesc.Usage = D3D11_USAGE_DEFAULT;
        indexBufferDesc.ByteWidth = sizeof(unsigned long) * m_indexCount;
        indexBufferDesc.BindFlags = D3D11_BIND_INDEX_BUFFER;
        indexBufferDesc.CPUAccessFlags = 0;
        indexBufferDesc.MiscFlags = 0;
        indexBufferDesc.StructureByteStride = 0;

        // Give the subresource structure a pointer to the index data.
        indexData.pSysMem = indices;
        indexData.SysMemPitch = 0;
        indexData.SysMemSlicePitch = 0;

        // Create the index buffer.
        result = device->CreateBuffer(&indexBufferDesc, &indexData, &m_indexBuffer);
        if (FAILED(result))
        {
            return false;
        }

        // Release the arrays now that the vertex and index buffers have been created and loaded.
        return true;
    }

    Shaders:

    cbuffer MatrixBuffer
    {
        matrix worldMatrix;
        matrix viewMatrix;
        matrix projectionMatrix;
    };

    //////////////
    // TYPEDEFS //
    //////////////
    struct VertexInputType
    {
        float4 position : POSITION;
        float2 tex : TEXCOORD0;
    };

    struct PixelInputType
    {
        float4 position : SV_POSITION;
        float2 tex : TEXCOORD0;
    };

    PixelInputType MultiTextureVertexShader(VertexInputType input)
    {
        PixelInputType output;

        // Change the position vector to be 4 units for proper matrix calculations.
        input.position.w = 1.0f;

        // Calculate the position of the vertex against the world, view, and projection matrices.
        output.position = mul(input.position, worldMatrix);
        output.position = mul(output.position, viewMatrix);
        output.position = mul(output.position, projectionMatrix);

        // Store the texture coordinates for the pixel shader.
        output.tex = input.tex;

        return output;
    }

    Texture2D shaderTextures[2];
    SamplerState SampleType;

    struct PixelInputType
    {
        float4 position : SV_POSITION;
        float2 tex : TEXCOORD0;
    };

    float4 MultiTexturePixelShader(PixelInputType input) : SV_TARGET
    {
        float4 color1;
        float4 color2;
        float4 blendColor;

        // Get the pixel color from the first texture.
        color1 = shaderTextures[0].Sample(SampleType, input.tex);

        // Get the pixel color from the second texture.
        color2 = shaderTextures[1].Sample(SampleType, input.tex);

        // Blend the two pixels together and multiply by the gamma value.
        blendColor = color1 * color2 * 2.0;

        // Saturate the final color.
        blendColor = saturate(blendColor);

        return blendColor;
    }