
eee

Member · Content Count: 16 · Community Reputation: 271 (Neutral) · Rank: Member
  1. This is because your .obj file was not exported to use indices: it defines every single vertex needed to draw the object, duplicates included, in a specific order. When you bind an index buffer you are forcing the object to be drawn with indices, but the original file was never built around them. You can rebuild the index data yourself after loading; a rough sketch follows.
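     A minimal sketch of that post-load deduplication, assuming a simple interleaved Vertex struct (the struct layout and the bitwise comparator are illustrative, and only bit-identical vertices are collapsed):

         #include <cstring>
         #include <map>
         #include <vector>

         struct Vertex { float pos[3]; float uv[2]; float nor[3]; };

         // Orders vertices by their raw bytes so std::map can find duplicates.
         struct VertexLess {
             bool operator()(const Vertex& a, const Vertex& b) const {
                 return std::memcmp(&a, &b, sizeof(Vertex)) < 0;
             }
         };

         // Collapses duplicate vertices and emits one index per input vertex.
         void BuildIndexedMesh(const std::vector<Vertex>& raw,
                               std::vector<Vertex>& unique,
                               std::vector<unsigned>& indices)
         {
             std::map<Vertex, unsigned, VertexLess> seen;
             for (const Vertex& v : raw) {
                 auto it = seen.find(v);
                 if (it == seen.end()) {
                     it = seen.emplace(v, (unsigned)unique.size()).first;
                     unique.push_back(v);
                 }
                 indices.push_back(it->second);
             }
         }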
  2. You're obviously trying to hack a game, and I'm not sure this forum tolerates such behaviour. Most people grab the device pointer by hooking IDirect3DDevice9::EndScene(). You could hook that function with MS Detours, or, if you don't like using external libraries, with what some call a mid-function hook: find the memory address of the function you want to hook, patch that location with a jump to your own code written in assembly, grab the pointer in your function, and then jump back (return) to the address right after your jump. A sketch of the Detours route follows.
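     A hedged sketch of the Detours approach, assuming you have already located the address of IDirect3DDevice9::EndScene (commonly read out of slot 42 of a dummy device's vtable; verify that for your setup). The names TrueEndScene, HookedEndScene, and g_pDevice are illustrative:

         #include <d3d9.h>
         #include <detours.h>

         typedef HRESULT (WINAPI *EndScene_t)(IDirect3DDevice9*);
         static EndScene_t        TrueEndScene = nullptr; // original function
         static IDirect3DDevice9* g_pDevice    = nullptr;

         static HRESULT WINAPI HookedEndScene(IDirect3DDevice9* pDevice)
         {
             g_pDevice = pDevice; // the device the game actually renders with
             // ... issue your own draw calls here ...
             return TrueEndScene(pDevice);
         }

         static void InstallHook(void* pEndScene) // address found beforehand
         {
             TrueEndScene = (EndScene_t)pEndScene;
             DetourTransactionBegin();
             DetourUpdateThread(GetCurrentThread());
             DetourAttach(&(PVOID&)TrueEndScene, HookedEndScene);
             DetourTransactionCommit();
         }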
  3. You are trying to draw on top of a game by injecting a DLL, but you are creating your own device, which is almost like creating an overlay. You need to hook the game and grab the device pointer the game itself uses for rendering; then you will be able to draw inside the game.
  4. This constructs the vertex shader input layout, but how do you create the actual vertex structure so you know how to interpret the data in a file? Do you derive it from the input layout that was just constructed? Do you have a set of premade vertex structures defined? Also, what does LSG_VERTEX_ELEMENT_DESC look like? It seems to mirror D3D11_INPUT_ELEMENT_DESC and to be defined so that multiple platforms can be supported. Where is LSG_MAX_VERTEX_ELEMENT_COUNT set? Could you post the mesh class? Thanks for your input!
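     For illustration only (a guess, not L. Spiro's actual definition): an API-agnostic element description typically mirrors D3D11_INPUT_ELEMENT_DESC with engine-side enums that each backend translates, roughly like this:

         // All names here are invented for the sketch.
         enum VE_SEMANTIC { VE_POSITION, VE_NORMAL, VE_COLOR, VE_TEXCOORD };
         enum VE_FORMAT   { VE_R32G32B32_F, VE_R32G32B32A32_F, VE_R32G32_F };

         struct VERTEX_ELEMENT_DESC {
             VE_SEMANTIC semantic;      // translated to the HLSL semantic name
             unsigned    semanticIndex; // e.g. TEXCOORD0 vs. TEXCOORD1
             VE_FORMAT   format;        // translated to a DXGI_FORMAT
             unsigned    offset;        // byte offset within the vertex
         };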
  5. I want to create a very simple rendering system in DirectX 11 and I have encountered a basic problem: in order to draw multiple objects, each entity needs to be drawn with a specific set of vertex, index, and constant buffers. How could I create a model/mesh/submesh class that is dynamic, in the sense that when it is created it knows which vertex structure to use and whether a constant buffer is needed, i.e. a class that can be used to draw all kinds of objects? Here is some code showing what I mean:

         // we NEED TO know what sort of layout to use, so an abstract
         // definition of an input element description needs to be created
         D3D11_INPUT_ELEMENT_DESC layout[] = {
             { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT,    0,  0, D3D11_INPUT_PER_VERTEX_DATA, 0 },
             { "COLOR",    0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D11_INPUT_PER_VERTEX_DATA, 0 }
         };
         CDirectX11::gDev->CreateInputLayout(layout, 2, vs->GetBufferPointer(),
                                             vs->GetBufferSize(), &mVertexLayout);

     This is needed for the shaders.

         D3D11_BUFFER_DESC vbd = { 0 };
         vbd.Usage     = D3D11_USAGE_DEFAULT;
         vbd.ByteWidth = sizeof(Vertex) * vCount; // the vertex structure's size must be known,
                                                  // so maybe the structure itself must be known
         vbd.BindFlags = D3D11_BIND_VERTEX_BUFFER;

         D3D11_BUFFER_DESC cbd = { 0 };
         cbd.Usage     = D3D11_USAGE_DEFAULT;
         cbd.ByteWidth = sizeof(cbPerObject); // some objects need no constant buffer at all,
                                              // or need differently structured ones

     And here is all the state that is set in order to draw one object (I realize some of it doesn't need to be set every time a model is rendered, but I am trying to start very simple):

         CDirectX11::gDevCon->IASetInputLayout(mVertexLayout);
         CDirectX11::gDevCon->VSSetShader(mVShader, 0, 0);
         CDirectX11::gDevCon->PSSetShader(mPShader, 0, 0);
         UINT stride = sizeof(Vertex);
         UINT offset = 0;
         CDirectX11::gDevCon->IASetVertexBuffers(0, 1, &mVBuffer, &stride, &offset);
         CDirectX11::gDevCon->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_TRIANGLELIST);
         CDirectX11::gDevCon->IASetIndexBuffer(mIBuffer, DXGI_FORMAT_R32_UINT, 0);
         cbPerObj.world = DirectX::XMMatrixTranspose(DirectX::XMLoadFloat4x4(&mWorld));
         CDirectX11::gDevCon->UpdateSubresource(mCBuffer, 0, 0, &cbPerObj, 0, 0);
         CDirectX11::gDevCon->VSSetConstantBuffers(0, 1, &mCBuffer);
         CDirectX11::gDevCon->DrawIndexed(mIndiceCount, 0, 0);

     I hope I explained this in a way someone actually understands.
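     One common answer, as a minimal hedged sketch: let each mesh carry its own layout, stride, and optional constant buffer, so the renderer never needs the concrete vertex structure at compile time. CDirectX11::gDevCon is the global from the post above; everything else is illustrative:

         class CMesh {
         public:
             void Draw() {
                 CDirectX11::gDevCon->IASetInputLayout(mLayout);
                 CDirectX11::gDevCon->IASetVertexBuffers(0, 1, &mVBuffer, &mStride, &mOffset);
                 CDirectX11::gDevCon->IASetIndexBuffer(mIBuffer, DXGI_FORMAT_R32_UINT, 0);
                 if (mCBuffer) // the constant buffer is optional per mesh
                     CDirectX11::gDevCon->VSSetConstantBuffers(0, 1, &mCBuffer);
                 CDirectX11::gDevCon->DrawIndexed(mIndexCount, 0, 0);
             }
         private:
             ID3D11InputLayout* mLayout  = nullptr; // built from a per-type element list
             ID3D11Buffer*      mVBuffer = nullptr;
             ID3D11Buffer*      mIBuffer = nullptr;
             ID3D11Buffer*      mCBuffer = nullptr; // null if this mesh needs none
             UINT mStride = 0, mOffset = 0, mIndexCount = 0; // stride stored at load time
         };

     The key point is that the stride is recorded per mesh when the data is loaded, instead of being computed from sizeof(Vertex) at draw time.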
  6. This is how I get the vertex positions:

         v.pos = XMFLOAT3(static_cast<float>(mesh->GetControlPointAt(i).mData[0]),
                          static_cast<float>(mesh->GetControlPointAt(i).mData[1]),
                          static_cast<float>(mesh->GetControlPointAt(i).mData[2]));

     and later on the normals:

         else if (fbxNormal->GetMappingMode() == FbxGeometryElement::eByPolygonVertex) {
             if (fbxNormal->GetReferenceMode() == FbxGeometryElement::eDirect) {
                 // right here
                 vertices[i].nor = XMFLOAT3(static_cast<float>(fbxNormal->GetDirectArray().GetAt(i)[0]),
                                            static_cast<float>(fbxNormal->GetDirectArray().GetAt(i)[1]),
                                            static_cast<float>(fbxNormal->GetDirectArray().GetAt(i)[2]));
             }
         }

     I also tried to swap the Y and Z axes, but then the models appear completely black. I do this before the parse:

         FbxAxisSystem axisSystem = FbxAxisSystem::eDirectX;
         if (scene->GetGlobalSettings().GetAxisSystem() != axisSystem) {
             axisSystem.ConvertScene(scene);
         }

     It would seem we are doing the same thing, yet for some odd reason my lighting is completely off. Maybe the problem is in how vertex indices and normals relate? As of now I just loop from i = 0 to the vertex count and plug i into every getter function. Maybe I need to take some indices into account?
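     That is very likely the problem: with eByPolygonVertex mapping there is one normal per polygon corner, not per control point, so indexing the normal array with a control-point index mixes two different index spaces. A hedged sketch of iterating by polygon vertex instead, assuming triangulated geometry (GetPolygonVertexNormal() lets the SDK resolve the mapping and reference modes for you; Vertex is the struct from your loader):

         for (int p = 0; p < mesh->GetPolygonCount(); ++p) {
             for (int v = 0; v < mesh->GetPolygonSize(p); ++v) {
                 int ctrlIndex = mesh->GetPolygonVertex(p, v); // control-point index
                 FbxVector4 pos = mesh->GetControlPointAt(ctrlIndex);
                 FbxVector4 nor;
                 mesh->GetPolygonVertexNormal(p, v, nor);

                 Vertex out;
                 out.pos = XMFLOAT3((float)pos[0], (float)pos[1], (float)pos[2]);
                 out.nor = XMFLOAT3((float)nor[0], (float)nor[1], (float)nor[2]);
                 vertices.push_back(out);
             }
         }

     This produces an un-indexed vertex list; duplicates can be merged afterwards if indexed drawing is wanted.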
  7. The light is defined by XMFLOAT3(0,0,1), which should shine straight down the +Z axis (note that I'm negating the value in the shader, so that shouldn't be the issue). All I'm doing is rotating the cube around the Y axis; the logical outcome should be that the side of the cube facing the light source (the camera, really) gradually lights up as the cube rotates. Trying out the normals/lighting shader on a Porsche Panamera, I get this result:

     http://postimg.org/image/ze4q9velr/full/
     http://postimg.org/image/ha1pp8gxb/full/
     http://postimg.org/image/cdphhxvoj/full/

     Does this mean my lighting calculations are wrong, or that the normals are wrong? I'm using 3ds Max to export to .fbx and then the FBX SDK to read the mesh data.
  8. I know there are a couple of threads here discussing this issue, but unfortunately I wasn't able to find a solution to the problem I'm having. Using the FBX SDK I can parse vertex coordinates and indices, and I can draw meshes correctly, but now I am trying to implement very rudimentary lighting. For this I obviously need to read normals; I'm testing with a simple cube that has a side length of 1, 8 vertices, and 36 indices. With the FBX SDK the normals for the cube are parsed like this:

         else if (fbxNormal->GetMappingMode() == FbxGeometryElement::eByPolygonVertex) {
             if (fbxNormal->GetReferenceMode() == FbxGeometryElement::eDirect) {
                 // this is where the if is true, nowhere else
                 vertices[i].nor = XMFLOAT3(static_cast<float>(fbxNormal->GetDirectArray().GetAt(i).mData[0]),
                                            static_cast<float>(fbxNormal->GetDirectArray().GetAt(i).mData[1]),
                                            static_cast<float>(fbxNormal->GetDirectArray().GetAt(i).mData[2]));
             }
         }

     I understand that for cube lighting each corner should really have 3 normals (one per adjoining face), but right now the code only ever runs in the block marked above. When I inspect the vertices, the normals look very weird; am I exporting the model wrong from 3ds Max? My very simple lighting shader looks like this:

         float4 diffuse = light.diffuse;
         float3 finalColor = diffuse * light.ambient;
         finalColor += saturate(dot(-light.dir, input.nor) * light.diffuse * diffuse);
         return float4(finalColor, diffuse.a);

     This is what the result looks like:

     http://postimg.org/image/nh8vjyfep/full/
     http://postimg.org/image/rj2s4fbm3/full/

     Very odd :(
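     One thing worth ruling out on the shader side, as a hedged variant of the same code: normalize both vectors before the dot product and saturate only the scalar. Interpolated normals are no longer unit length by the time they reach the pixel shader, which alone can produce results like the screenshots:

         float3 n = normalize(input.nor);
         float3 l = normalize(-light.dir);
         float4 diffuse    = light.diffuse;
         float3 finalColor = diffuse.rgb * light.ambient;
         finalColor += saturate(dot(l, n)) * light.diffuse.rgb * diffuse.rgb;
         return float4(finalColor, diffuse.a);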
  9. In order to create a model system, I need to choose a file format I am going to primarily support. I couldn't find much information on which ones are used in the game industry, but it would seem .fbx is a good choice? Would it be possible to export models as pre-built vertex and index buffers and then load them straight into the engine's buffers? That would be very fast and efficient, and it would make loading quite simple on the engine side (albeit not on the exporter side; the FBX SDK is quite poorly documented and I don't know where to start). What should my hierarchy look like?

         class CModel {
             std::vector<CMesh*> mMeshes;
             XMFLOAT3 mPos;
             // what else?
         };
         class CMesh {
             std::vector<CSubMesh*> mSubMeshes;
             // what else?
         };
         class CSubMesh {
             CMaterial* mMaterial;
             // vertex buffer, index buffer, pixel shader, vertex shader
             // what else?
         };

     Thank you!
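     Exporting pre-built buffers is a common pattern. A hedged sketch of the exporter half (the header layout is invented for illustration; a real pipeline would also version the format and describe the vertex layout):

         #include <cstdint>
         #include <cstdio>

         struct MeshBlobHeader {
             uint32_t vertexCount;
             uint32_t indexCount;
             uint32_t vertexStride; // bytes per vertex
         };

         bool WriteMeshBlob(const char* path,
                            const void* vertices, uint32_t vertexCount, uint32_t stride,
                            const uint32_t* indices, uint32_t indexCount)
         {
             FILE* f = std::fopen(path, "wb");
             if (!f) return false;
             MeshBlobHeader h = { vertexCount, indexCount, stride };
             std::fwrite(&h, sizeof(h), 1, f);                      // fixed-size header
             std::fwrite(vertices, stride, vertexCount, f);         // raw vertex block
             std::fwrite(indices, sizeof(uint32_t), indexCount, f); // raw index block
             std::fclose(f);
             return true;
         }

     Loading is the mirror image: read the header, then hand the two raw blocks straight to CreateBuffer().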
  10. Quoting L. Spiro: "While it is true that below the level of a 'mesh' there are no common or official terms, a single mesh can definitely have more than one material and render call. I outlined this on my site, where I referred to the sub-mesh level as 'render part' for lack of a better term at the time (since then I have found 'sub-mesh' to be better). In a case as simple as this you could possibly draw the red and green parts in one draw call (though it would be non-trivial in any generic sense), but the fact is that the red and green areas are two different materials on the same mesh, and it is meant to illustrate that in a more advanced case you will have no choice but to draw the red, switch materials and shaders, and then draw the green. If Assimp says that the red, green, and blue are three different meshes, it's wrong. If it says there are two meshes but a mesh can have only one material, it's wrong. I have never used Assimp and don't know its limitations, but if it says a mesh can have only one material, that sounds like a pretty bad limitation. ... Yes, but that's the active way. A better way is the last-minute way." (Basic Graphics Implementation, L. Spiro)

     I read your site and liked it; however, I couldn't fully understand how to implement what you call the last-minute way. For example, I don't understand what CStd::Min( LSG_MAX_TEXTURE_UNITS, CFndBase::m_mMetrics.ui32MaxTexSlot ); does. Am I right in understanding that you have two arrays: one of the last active states (for example m_psrvLastActiveTextures) and one of the states that are active at the time of the render call (for example m_psrvActiveTextures)? And then you set the last active ones to the current ones wherever they are not already equal? I am still fuzzy on the details, though; it seems similar to the active way, because you are still comparing a current state with a last one. What's the difference? Thanks!
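     For what it's worth, one reading of the idea, as a hedged sketch (the class and member names are illustrative, not L. Spiro's): setters only record the desired state, and the API is touched exactly once, right before the draw call, where the desired and applied states are diffed:

         class CRenderState {
         public:
             void SetTexture(unsigned slot, ID3D11ShaderResourceView* srv) {
                 mDesired[slot] = srv;                 // cheap: no API call here
             }
             void ApplyBeforeDraw(ID3D11DeviceContext* ctx) {
                 for (unsigned i = 0; i < MAX_TEXTURE_UNITS; ++i) {
                     if (mApplied[i] != mDesired[i]) { // only actual changes hit the API
                         ctx->PSSetShaderResources(i, 1, &mDesired[i]);
                         mApplied[i] = mDesired[i];
                     }
                 }
             }
         private:
             static const unsigned MAX_TEXTURE_UNITS = 16; // illustrative cap
             ID3D11ShaderResourceView* mDesired[MAX_TEXTURE_UNITS] = {};
             ID3D11ShaderResourceView* mApplied[MAX_TEXTURE_UNITS] = {};
         };

     The difference from the active way is where the comparison happens: here, setting the same texture a hundred times between draws costs a hundred pointer writes and one comparison, because nothing reaches the API until ApplyBeforeDraw().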
  11. Since I'm considering using Assimp: if I understood it correctly, a model is, say, a car; the car is broken up into meshes (each body panel, if they're different, and the glass of the windows), and each mesh has one material (which includes a texture, diffuse color, blending states, etc.). Quotes from the official page: "Each aiMesh refers to one material by its index in the array." and "A mesh represents a geometry or model with a single material." If there are better alternatives to Assimp, please point me in the right direction; I have tried implementing an .obj loader, but it was very slow :(. I have never written a wrapper, so would this be the way (pseudocode)?

         class CDirectX11States {
             void SetInputLayout(ID3D11InputLayout* inputLayout) {
                 if (mCurrentInputLayout != inputLayout) {
                     mCurrentInputLayout = inputLayout;
                     mDevCon->IASetInputLayout(inputLayout);
                 }
             }
             ID3D11InputLayout* mCurrentInputLayout;
         };
         class CDirectX11 {
             // device, device context, swap chain, etc.
         };

     Thanks for everyone's patience!
  12. Thanks for the answers; I am quite patient and fine with taking it slow. I want to do things right from the start, so that as the project grows, adding new features stays as clean and simple as possible. Currently, though, I don't even know how I would begin implementing a graphics engine. Do I create a wrapper around D3D11 to ensure I don't make any unnecessary calls (like setting a state that is already set)? Where would I begin? That is, I am not sure what the bare minimum is that I'd need to implement for a graphics engine.
  13. I want to start implementing a game engine, but I don't have a clear understanding of how it should be designed. I've read a lot on the topic, and it would seem the best way is to create standalone, generic components that are then composed into various objects. Would it look something like this?

         class CMaterial {
             // texture, diffuse color...
         };
         class CMesh {
             CMaterial mMaterial;
             // vertex buffer, index buffer
             // each mesh has exactly one material when using Assimp (I think)
         };
         class CModel {
             // constant buffer per model
             std::vector<CMesh> mMeshes;
         };

     Then one would create a generic model for a humanoid:

         class CHumanoid {
             CModel mModel;
             CPhysics mPhysics;
             CInputHandler mInputHandler;
         };

     which would include create, draw, and destroy functions. You see, I'm not sure how to implement the different components that make up a complete entity such as a human (it needs to be rendered, physics needs to affect it, it needs to be able to move, wield weapons, ...). Then there would need to be a rendering class that uses D3D11 to draw it; would I make that a static/global class, or what? I'm very lost here as to how one should create a game engine from generic components that handle themselves and are as decoupled as possible.
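     On the static/global question, one common alternative, sketched under the assumption of the classes above (CRenderer and its methods are illustrative names): pass a renderer object to whatever needs drawing, so game objects never talk to D3D11 directly and the renderer stays replaceable:

         class CRenderer {
         public:
             explicit CRenderer(ID3D11DeviceContext* ctx) : mCtx(ctx) {}
             void Draw(const CModel& model); // only the renderer knows D3D11
         private:
             ID3D11DeviceContext* mCtx;
         };

         class CHumanoid {
         public:
             void Render(CRenderer& renderer) { renderer.Draw(mModel); }
         private:
             CModel mModel;
             // physics, input, ...
         };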
  14. Beginners problem

     Thanks to everyone who has answered; the problem turned out to be with setting the constant buffers. I have two: one per frame and one per object. When I was setting the buffers I had forgotten to change the register slot in the call: m_pDevCon->PSSetConstantBuffers(0, 1, &m_pCBufferPerFrame); Off-topic: if I have other problems, am I supposed to create a new thread for each one, or can I just use this thread? Thanks.
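     For reference, a sketch of the fix described above, assuming the HLSL side declares the per-frame buffer in register(b0) and the per-object buffer in register(b1); m_pCBufferPerObject is the assumed name of the second buffer, and the first argument selects the slot:

         m_pDevCon->PSSetConstantBuffers(0, 1, &m_pCBufferPerFrame);  // b0
         m_pDevCon->PSSetConstantBuffers(1, 1, &m_pCBufferPerObject); // b1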