mrheisenberg

Member
  1. Assuming I use DXGI_FORMAT_R16_UINT rather than DXGI_FORMAT_R32_UINT as the index buffer format, with an unsigned short array for the indices, I get at most 65536 vertices for a mesh. Now this is fine, because it's not like I would ever bind a mesh that huge, but what if I bind a mesh that has 20000 vertices and use tessellation to generate new vertices, so that the new mesh has 4 times more vertices (80000)? Will this be an issue with a 16-bit index buffer?
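    For reference, a minimal sketch (with an illustrative helper name, not a real D3D11 call) of picking the index format from the base-mesh vertex count. Tessellation-generated vertices are produced on the GPU and never pass through the index buffer, so only the pre-tessellation count matters here:

```cpp
#include <cstdint>
#include <cstddef>

// Hypothetical helper: choose an index format width for a base mesh.
// Returns 16 for DXGI_FORMAT_R16_UINT, 32 for DXGI_FORMAT_R32_UINT.
// Only the pre-tessellation vertex count matters: the tessellator's
// generated vertices are never indexed by the index buffer.
int ChooseIndexFormatBits(std::size_t vertexCount) {
    // 16-bit indices can address vertices 0..65535.
    return vertexCount <= 65536 ? 16 : 32;
}
```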
  2. Do you use command lists and deferred contexts? I imagine they would complicate the sorting if you split the renderables between the different deferred contexts on different threads - for example, if a system that automatically distributes draw calls messes up the draw order. Also, to add: I'm not using a second vertex buffer as an instance buffer, I'm using a buffer bound as an SRV, so I suppose my method is less efficient. The people from DICE said this technique is used in Battlefield 2, but they didn't mention anything about draw order.
  3. Ok, just one question - why a 32-bit draw item index - do you mean an integer? Why not just a direct pointer to the draw item? Isn't that the fastest possible way? Also a question about L. Spiro's link - he talks about depth sorting, but what if I plan to instance a lot of objects? I just add their transform matrices to the instance buffer and pass instanceCount to the draw call. Does the way they are depth sorted correspond to how the GPU draws them? Or does it just draw them in random order (when instancing)?
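    For context, one common pattern (a general sketch, not necessarily what the thread's posters meant) is to pack a quantized depth together with a 32-bit draw-item index into a single 64-bit sort key. The index, unlike a pointer, keeps the key compact and sortable with an ordinary integer sort:

```cpp
#include <cstdint>

// Sketch of a 64-bit draw-call sort key (an illustrative layout, not a
// specific engine's). The high bits hold quantized depth, so sorting the
// keys as plain integers yields front-to-back order; the low 32 bits
// hold an index into the frame's draw-item array.
uint64_t MakeSortKey(float depth01, uint32_t drawItemIndex) {
    // Quantize normalized depth [0,1] to 24 bits.
    uint64_t depthBits = static_cast<uint64_t>(depth01 * 0xFFFFFF);
    return (depthBits << 32) | drawItemIndex;
}

uint32_t DrawItemIndexFromKey(uint64_t key) {
    return static_cast<uint32_t>(key & 0xFFFFFFFFu);
}
```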
  4. My renderer (based on D3D11) is currently built a lot like the one from the Hieroglyph3 engine. It's something like this:

     class Renderer
     {
         static ImmediateContextManager Immediate; // Encapsulates an immediate context, inherits from DeferredContextManager
         static vector<DeferredContextManager> Deferred; // Each one encapsulates a deferred context

         // ID3D11Device functionality - encapsulating methods like CreateBuffer, CreateTexture, etc.
     };

    In the ContextManager, each time you set some state it checks if it's been set before, and this way it prevents redundant API calls. However, that's quite a lot of if checks for each object I render. I thought about sorting objects by material type, but that means first I need to sort by shader, then by texture, then by vertex buffer and pretty much anything that would require a state change in the pipeline - and that's a huge amount of sorting each frame, and sometimes I still end up with cases where a redundant API call is made. Basically:
    - for automatic state monitoring you get tons of branching each time you render an object
    - for sorting you have to perform a huge amount of sorts to make sure everything turns out ok
    The performance hit of both these methods gets noticeable with large numbers of objects in the scene. The sorting actually comes out way heavier when I do it (I use Quicksort3) than just using the state monitoring method. Maybe I should make some combination of the two?
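    The "state monitoring" approach described above, reduced to a single pipeline slot, can be sketched like this (names are illustrative, not a real D3D11 wrapper):

```cpp
#include <cstdint>

// Minimal sketch of redundant-state filtering: remember what is bound
// and skip the API call when the same state is set again.
struct StateCache {
    uint32_t boundShader = 0; // 0 = nothing bound yet
    int apiCalls = 0;         // counts the calls that actually reach the API

    void SetShader(uint32_t shaderId) {
        if (shaderId == boundShader)
            return;            // redundant: skip the API call entirely
        boundShader = shaderId;
        ++apiCalls;            // stands in for e.g. context->VSSetShader(...)
    }
};
```

    One cache like this per deferred context keeps the branching cheap (a single compare per slot), which is why many engines combine a coarse sort (by shader, then texture) with this kind of filter as a safety net rather than relying on either alone.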
  5. I'm generating a mesh from a list of triangles and I can see the mesh properly in the Input Assembler in the Graphics Diagnostics of Visual Studio 2013. I can even see the proper vertex coordinates go through the vertex shader, however the mesh doesn't get drawn. I think the rasterizer doesn't see it as triangles or something. All my other meshes render properly, however this one doesn't. I'm building it by adding groups of 3 vertices to a list of triangles. What could be causing it to show up properly in the debugger, but not on the real screen?
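    A frequent cause of "visible in the IA stage but nothing on screen" is back-face culling: the triangles are wound the wrong way for the current rasterizer state. A library-agnostic sketch of checking a triangle's winding from its projected 2D positions (illustrative names):

```cpp
// Signed area of a triangle in 2D: positive means counter-clockwise
// winding, negative means clockwise. If all triangles come out with the
// winding the rasterizer culls, they pass the vertex shader but are
// discarded before pixel shading - exactly the symptom described above.
struct Vec2 { float x, y; };

float SignedArea2D(Vec2 a, Vec2 b, Vec2 c) {
    return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
}

bool IsClockwise(Vec2 a, Vec2 b, Vec2 c) {
    return SignedArea2D(a, b, c) < 0.0f;
}
```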
  6. Hello, I'm writing a program that will generate a CPU shadow for a 2D room with a light source and any number of polygons. The light is also blocked by the walls. This is the algorithm I've got so far:
    1. Sort all polygon vertices by their angle from the light.
    2. Generate an array of all lines between 2 vertices.
    3. Iterate the sorted list, and for each polygon vertex check the entire "lines" array to see if the light ray intersects a line, and whether the intersection point is closer than the current vertex.
    4. Every 3 generated vertices make a triangle; add it to the buffer that will be used for the shadow mesh.
    Is this the correct way to proceed?
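    The first and third steps above can be sketched as follows (illustrative helpers for a 2D visibility polygon, not a full implementation):

```cpp
#include <cmath>
#include <algorithm>
#include <vector>

struct P2 { float x, y; };

// Step 1: sort vertices by angle around the light position.
void SortByAngleFromLight(std::vector<P2>& verts, P2 light) {
    std::sort(verts.begin(), verts.end(), [light](P2 a, P2 b) {
        return std::atan2(a.y - light.y, a.x - light.x)
             < std::atan2(b.y - light.y, b.x - light.x);
    });
}

// Step 3's core test: parametric distance t along the ray light->vert at
// which it crosses segment (s0,s1); negative if there is no hit.
// t < 1 means the wall is closer to the light than the vertex.
float RayHitT(P2 light, P2 vert, P2 s0, P2 s1) {
    float rx = vert.x - light.x, ry = vert.y - light.y;
    float sx = s1.x - s0.x,      sy = s1.y - s0.y;
    float denom = rx * sy - ry * sx;          // cross(ray, segment)
    if (std::fabs(denom) < 1e-6f) return -1.0f; // parallel
    float t = ((s0.x - light.x) * sy - (s0.y - light.y) * sx) / denom;
    float u = ((s0.x - light.x) * ry - (s0.y - light.y) * rx) / denom;
    if (t < 0.0f || u < 0.0f || u > 1.0f) return -1.0f;
    return t;
}
```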
  7. I have a 2D scene of polygons rendered with D3D11 and a light source on the screen. I need to compute a shadow map to find out how much of the floor is lit (unoccluded by the polygons). The thing is, I'm not sure how to do it in 2D - should I just give the polygons a height and then render a cubemap from the light's point of view? And then download the shadow map to a staging texture and loop through the dark pixels? Can anyone suggest a technique?
  8. I'm actually using cubes so I can more easily see their rotation; for the final scene I'm trying to render a sphereflake fractal like this one: As you can see, each sphere is positioned relative to the center of its mother sphere, so the spheres never intersect or go inside each other. However, I couldn't find any guide on generating sphereflake positions online, mostly just raytracing articles.
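    A minimal sketch of child placement under simple assumptions (each child has 1/3 the parent's radius, and its center lies at distance parentR + childR from the parent's center, so they touch but never interpenetrate; spreading them around the equator is just for illustration - a full sphereflake also places rings at other latitudes):

```cpp
#include <cmath>
#include <vector>

struct V3 { float x, y, z; };

// Place `count` child spheres evenly around the parent's equator.
// childR = parentR / 3 is the classic sphereflake ratio; the touching
// distance parentR + childR guarantees no interpenetration.
std::vector<V3> EquatorChildren(V3 parent, float parentR, int count) {
    std::vector<V3> centers;
    float childR = parentR / 3.0f;
    float d = parentR + childR; // centers exactly touching
    for (int i = 0; i < count; ++i) {
        float a = 2.0f * 3.14159265f * static_cast<float>(i) / count;
        centers.push_back({ parent.x + d * std::cos(a),
                            parent.y,
                            parent.z + d * std::sin(a) });
    }
    return centers;
}
```

    Recursing on each child with parentR/3 as the new radius gives the nested levels; orienting each child's "up" along the line from the parent center also gives the facing rotation asked about in the later posts.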
  9. I changed the code to be like that:

     void Entity3D::Face(const Entity3D& target)
     {
         XMMATRIX matrixScale = XMMatrixScalingFromVector(Scale);
         XMMATRIX matrixTranslation = XMMatrixTranslationFromVector(Translation);
         Transform = matrixScale * Rotation * matrixTranslation; // Creates the transform
         if(HasParent())
         {
             Transform *= _parent->Transform; // Updates in relation to parent
         }
         XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
         up = XMVector3Transform(up, Rotation);
         Transform = XMMatrixLookAtLH(Transform.r[3], target.Transform.r[3], up); // Creates the final matrix
     }

    It still doesn't work properly. Did I mess up the order? It seems right to me. Maybe there's another way to set 2 entities to face each other? I'm not entirely familiar with the math behind XMMatrixLookAtLH, so I'm not sure what's going on. Here are 3 pictures of the result. The first two happen when I call:

     void Entity3D::Update()
     {
         XMMATRIX matrixScale = XMMatrixScalingFromVector(Scale);
         XMMATRIX matrixTranslation = XMMatrixTranslationFromVector(Translation);
         Transform = matrixScale * Rotation * matrixTranslation;
         if(HasParent())
         {
             Transform *= _parent->Transform;
         }
     }

    right after Face(). So it might be logical to comment out Update() and only use Face()? So here is what happens when I only use Face(): As you can see, no matter at what angle I arrange the cubes around the mother cube (and the even smaller ones), when I call Face() they don't face the mother, they just clump up. What I want is for them to face it in a way that the new ones don't intersect the mother cube. This is what I want to achieve:
  10. Isn't Translation in global/world space? The HasParent() branch simply updates the entity's transform to be relative to the parent's, so if the parent moves, the entity moves with it.
  11. Ok, so basically this is my method so far:

     // Inside the entity there are:
     // XMVECTOR Translation
     // XMVECTOR Scale
     // XMMATRIX Rotation
     void Entity3D::Face(const Entity3D& target)
     {
         XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
         XMMATRIX temp = XMMatrixRotationRollPitchYawFromVector(XMLoadFloat3(&_rotationEuler));
         Transform = XMMatrixScalingFromVector(Scale) * XMMatrixLookAtLH(Translation, target.Translation, up);
         if(HasParent())
         {
             Transform *= _parent->Transform;
         }
         Sphere.Transform(Sphere, Transform);
     }

    I'm calling this so I can get one entity to face another, however I'm not sure where to put the Scale and Translation. What I was originally doing was using polar-to-Cartesian coordinates to get cubes to rotate around a cube, and it worked well, but I need them to rotate and face the central cube. However, when I call the above method, half of them get clumped on the left of the central cube and half of them get clumped on the right of it (if it doesn't crash, that is).
    Edit: the reason I used Rotation in the first post was that my idea was basically to change the entity's rotation matrix such that it's facing the other entity. I can change it properly with XMMatrixRotationRollPitchYaw, however I can't possibly know the roll, pitch and yaw required for each cube to face the central cube.
  12. I'm using this code for my Entity class:

     void Entity::Face(FXMVECTOR target)
     {
         XMVECTOR up = XMVectorSet(0.0f, 1.0f, 0.0f, 0.0f);
         up = XMVector3Transform(up, Rotation);
         Rotation = XMMatrixLookAtLH(Translation, target, up); // Rotation is an XMMATRIX, Translation is an XMVECTOR
     }

    and trying to get it to face another entity. Basically I'm doing this:

     Entity a = //....
     Entity b = //.......
     a.Face(b.Translation);

    and what happens is one of 3 things:
    1. The function crashes because of an assertion.
    2. Entity 'a' is misplaced in a weird position.
    3. Entity 'a' disappears.
    This damn function XMMatrixLookAtLH has been the source of problems all over my project; everywhere I used it I had to spend hours to get it to work properly. If I create the rotation matrix with XMMatrixRotationRollPitchYawFromVector, it works perfectly, however I NEED to get it to face directly at the point it's been given and I don't know how to get it to work. Please, someone give me advice.
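    (Editor's note, for context: XMMatrixLookAtLH builds a view matrix - the inverse of a world-space "face the target" transform, intended for cameras - which is one common reason assigning its result to an object's Rotation misbehaves. A library-agnostic sketch of building the world-space look-at basis directly, with illustrative names:)

```cpp
#include <cmath>

// World-space look-at basis: the right/up/forward axes you would place
// into an object's rotation matrix, as opposed to XMMatrixLookAtLH,
// which returns the inverse (view) transform meant for a camera.
struct Vec3 { float x, y, z; };

Vec3 Sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 Cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vec3 Normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

struct Basis { Vec3 right, up, forward; };

// Left-handed: forward points from the object toward the target.
Basis LookAtBasis(Vec3 position, Vec3 target, Vec3 worldUp) {
    Basis b;
    b.forward = Normalize(Sub(target, position));
    b.right   = Normalize(Cross(worldUp, b.forward));
    b.up      = Cross(b.forward, b.right); // already unit length
    return b;
}
```

    The degenerate cases (target at the same position, or forward parallel to worldUp) are also what trigger the assertion mentioned above, so a real version would guard against them.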
  13. DirectXCollision BoundingFrustum usage?

    It's weird - objects flicker on the screen while the debug output stays at "Objects on screen: 2". Is it certain that the frustum must be transformed by the camera's view matrix? Or is it something else?
  14. I got my DirectXMath-using camera to compute view and projection matrices properly, however when using DirectXCollision structures I ran into some problems. When I frustum cull, everything is always culled away. Here is what I do:
    - Build a projection matrix.
    - Build a frustum from that projection matrix with BoundingFrustum::CreateFromMatrix.
    - Transform the frustum by the view matrix that is generated each frame.
    - Each screen object has a BoundingSphere that is transformed by its world matrix each frame.
    - Each frame I test whether entities intersect the frustum, and if they don't, they aren't rendered.
    Is this the proper usage?
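    The per-frame test in the last step reduces to sphere-versus-plane checks against the six frustum planes; a library-agnostic sketch (illustrative names, using a box-shaped "frustum" of six planes with inward-pointing normals):

```cpp
// A sphere is culled only when it lies entirely outside at least one
// plane - i.e. its center is farther than its radius beyond the plane.
struct Plane  { float nx, ny, nz, d; }; // normal points inward; n.p + d >= 0 is inside
struct Sphere { float x, y, z, r; };

bool SphereTouchesPlane(Sphere s, Plane p) {
    float dist = p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
    return dist >= -s.r; // fully outside only when beyond the radius
}

bool CullSphere(const Plane planes[6], Sphere s) {
    for (int i = 0; i < 6; ++i)
        if (!SphereTouchesPlane(s, planes[i]))
            return true; // entirely outside one plane: cull
    return false;
}
```

    If every object fails such a test, the usual suspect is a space mismatch - planes in one space, spheres in another - which matches the "everything is always culled" symptom.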
  15. Can't properly produce a ViewProjection matrix

    Maybe the issue is here?

     XMVECTOR Eye;
     XMVECTOR At;
     XMVECTOR Up;
     Eye = Translation;
     At = Translation + Transform.r[2];
     Up = Transform.r[1];
     return(XMMatrixLookAtLH(Eye, At, Up));

    I tried not transposing, transposing only one or the other, transposing both - it never works properly.