chillypacman

Member
  1. I'm trying to clone a very large number of animation keys in 3D Studio Max and move them all to the end of the track list. The problem is I have thousands of keys in the entire animation, so it's not a simple matter of shift+click+drag. Is there any way at all I can copy all keys within one time range to another time range WITHOUT dragging them across? It lags terribly, as it's basically drawing and shifting thousands of keys for several bones on the same rig...
  2. chillypacman

    Only alpha textures work?

    I don't have this issue (at least I don't think so), mind showing how you create the textures?
  3. Your suggestion actually sounds pretty good! I'm already using matrix arrays in a shader to do the bone transforms, so it would (in theory) be a simple matter of doing the separate animations and adding them to the array which gets passed in.
  4. Quote: Do you mean hierarchy frames (bones, joints)? If not, perhaps you can explain it differently
     Ah yes, I mean bones, joints etc. Sorry, that was ambiguously worded; I've been working with D3DXFRAME stuff for a while and I'm starting to forget that :P
     Quote: You probably know this: it's possible to blend two or more tracks together with a fraction of each track applied at a particular time. But it sounds like you, perhaps, want one track to be a "shoot" animation and another track to be a "run" animation and have them applied in full simultaneously to have the character run and shoot at the same time. That isn't directly supported.
     Yeah, I pretty much want stuff like that. Currently I basically load the hierarchical mesh and render it twice, once for the lower body and once for the upper body, and I sync up their animations as needed. It's a bit fiddly and honestly I'm not too proud of the code..
  5. I was wondering if it's possible to separate which frames get which animation tracks in DirectX. The animation controller seems to be very specific to the entire hierarchy and I'm not sure if there is a way to separate this without having to create a whole different clone to the animation controller and substituting the bone transforms (which I would imagine would be a fairly messy process).
  7. It should be fine, I have books with code that uses DXUT in it.
  8. nvm, just figured out I shouldn't be incrementing the index values by the number of indices the model has but by the largest index (+1). It makes so much sense actually; I don't understand why I thought doing the other thing would work... Now off to bed before I actually break code...
  9. I'm trying to make multiple copies of the same vertex buffer inside a larger vertex buffer. I can make the vertex copies fine (it would seem), but I'm having problems copying the indices: since it's the same vertex buffer, the indices have to change and can't simply be copy-pasted within the same index buffer. What I do is:
     - Create a vertex buffer x times larger than the vertex buffer I'm copying from.
     - Fill the large vertex buffer with values from the small one; once I reach the end of the small buffer I reset the counter to 0 and start over. I've checked the values and they're all correct (i.e. I know for sure I'm copying the buffer values properly, multiple times).
     - Create an index buffer x times larger than the index buffer I'm copying from.
     - Copy the smaller index buffer into the larger one; after filling it once, I offset the values of the small index buffer by the number of vertices and write those into the larger index buffer.
     This does not seem to work. I'm fairly certain my idea behind creating a larger index buffer is flawed, since when I render without an index buffer set the model shows up broken, but in a consistently broken way (i.e. all the triangles look the same but don't connect to other triangles properly). I'm really not sure what I should be doing. Maybe it's just late (4am) and I've been working on this for the past four-ish hours...
  10. ah fair enough, yeah I thought as much...
  11. If I have an ID3DXMesh object I can get its buffer by calling LockVertexBuffer() on it, is it possible to increase the buffer size? I don't mean to change its vertex declaration, just increase the number of vertices it has.
  12. chillypacman

    Direct3D why Depth Buffer doesn't work.

    Haven't really used DX10 before but you could try device->SetRenderState(D3DRS_ZENABLE, true)?
  13. chillypacman

    Why is instancing so fast?

    So if I were coding in OpenGL, DX10 or DX11, I wouldn't benefit as much from instancing? Would it still be worth implementing instancing with those APIs?
  14. Instancing seems to speed rendering up quite a bit, but I'm not sure why. Basically, if I have a model which I render in a shader, I set the appropriate matrices for it to be transformed by in the shader and call draw on it, and I do this multiple times for the same model. With instancing, I create a large vertex buffer and fill it with the same model multiple times, giving each instance of the model an index which is stored in the vertices; then in the shader I look up the stored index and use it to determine how the vertex should be transformed. The only difference I really see is one draw call versus multiple draw calls, but ultimately the size of the vertex buffer being submitted is the same (actually it could be slightly larger in an instanced vertex buffer). So why is it so fast? Is it because a draw call is so slow in DirectX? Why are draw calls slow then?
  15. Quote: Original post by MJP
     In most cases a deferred renderer will be bound by the number of pixels you end up shading in your lighting pass. This might seem obvious, but what isn't obvious is that there's not necessarily a direct relationship between the number of lights and the number of pixels shaded. Two reasons for this:
     1. The number of pixels shaded for a light depends on how big it is, and how close to the camera it is. If all of your lights are large, you could end up with 20 full-screen passes for 20 lights.
     2. Typically for a deferred renderer you use optimizations to reduce the number of pixels shaded for any particular light. This includes:
     - Using bounding volumes to restrict the shaded pixels to the area where the light is actually affecting pixels
     - Using depth testing to reject pixels that are "buried under ground" or "floating in mid-air"
     - Using stencil testing to reject pixels where no G-Buffer data has been rendered, or to enhance the depth test
     I've written a DirectX-based deferred renderer based on the XNA tutorials here, so I can confirm it uses bounding volume tests for point lights (namely spheres). However, I don't think the depth tests ever occur, so it might be worth it for the OP to try that.
     Quote: Consider taking Naughty Dog's (and some others) approach to lights and render using screen-space tiles instead of world-space spheres
     This sounds interesting, do you have a link to a paper or something discussing it?