About trojanfoe

  1. Yeah, I already do that - I keep position, origin, scale and rotation, plus a transform and an inverse transform, the latter two with dirty flags. The matrices are recalculated in their "get" methods when the dirty flag is set (I think we are on the same page). At the moment I can use SpriteBatch in a "flat" scene graph by simply providing the position, origin, etc. to its Draw() method, but to get a hierarchical scene graph working I would need to decompose the world transform back into position, origin, etc., and that's where my maths breaks down. Decomposing seems wasteful anyway, given that the vertex shader loves matrices so much, so I am thinking: 1) for SM5+ hardware, I will pass the transform and other per-sprite data in a structured buffer; 2) for < SM5, I think I am looking at passing the transform in a per-object constant buffer. I am currently trying to learn how to do that.
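The dirty-flag / lazy-matrix pattern described above can be sketched roughly like this (plain C++ with a toy 3x3 row-major matrix rather than DirectXTK's types; all names here are hypothetical, not the poster's actual classes):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// A 3x3 row-major matrix representing a 2D affine transform.
using Mat3 = std::array<float, 9>;

// Plain matrix product; used to accumulate world = parent * local
// while traversing the scene graph.
static Mat3 multiply(const Mat3& a, const Mat3& b) {
    Mat3 r{};
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            for (int k = 0; k < 3; ++k)
                r[row * 3 + col] += a[row * 3 + k] * b[k * 3 + col];
    return r;
}

class Transform2D {
public:
    void SetPosition(float x, float y) { m_posX = x; m_posY = y; m_dirty = true; }
    void SetRotation(float radians)    { m_rot = radians; m_dirty = true; }
    void SetScale(float s)             { m_scale = s; m_dirty = true; }

    // Rebuild the matrix lazily, only when a component has changed.
    const Mat3& GetMatrix() const {
        if (m_dirty) {
            const float c = std::cos(m_rot) * m_scale;
            const float s = std::sin(m_rot) * m_scale;
            m_matrix = { c, -s, m_posX,
                         s,  c, m_posY,
                         0,  0, 1 };
            m_dirty = false;
        }
        return m_matrix;
    }

private:
    float m_posX = 0, m_posY = 0, m_rot = 0, m_scale = 1;
    mutable bool m_dirty = true;   // mutable: caching inside a const getter
    mutable Mat3 m_matrix{};
};
```

During traversal a node's world transform would then be `multiply(parentWorld, node.GetMatrix())`, which is the accumulation step mentioned in the posts below.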
  2. Hey, thanks - already on that one. The major issue I currently have with SpriteBatch is that I want to pass it a world transform rather than position, origin, scale and rotation, as the transform is stored in my scene graph entities and accumulated as the graph is traversed (i.e. multiplied with the parent transform). At the moment I am having to decompose the transform back into position, origin, scale and rotation, but I cannot get the maths right for it to work. Passing transforms to the vertex shader on a per-object basis looks tricky anyway, unless you use structured buffers (SM5+?), so I think I am looking at a somewhat high-end sprite renderer. Oh well, it's fun learning as I go.
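For reference, the decomposition described above is recoverable as long as the world transform contains no shear, which holds if every node only applies translate/rotate/scale with positive scales. A minimal sketch under those assumptions, using a bare row-major 2D affine matrix rather than DirectXTK's Matrix type:

```cpp
#include <cassert>
#include <cmath>

// Decompose a 2D affine transform (no shear, positive scale assumed) back
// into the translation/rotation/scale that SpriteBatch-style APIs expect.
// Matrix layout (row-major):
//   [ m00 m01 tx ]
//   [ m10 m11 ty ]
struct Decomposed {
    float tx, ty;      // translation
    float rotation;    // radians
    float scaleX, scaleY;
};

Decomposed Decompose2D(float m00, float m01, float m10, float m11,
                       float tx, float ty) {
    Decomposed d;
    d.tx = tx;
    d.ty = ty;
    // Each scale is the length of the corresponding basis column.
    d.scaleX = std::sqrt(m00 * m00 + m10 * m10);
    d.scaleY = std::sqrt(m01 * m01 + m11 * m11);
    // Rotation comes from the direction of the first basis column.
    d.rotation = std::atan2(m10, m00);
    return d;
}
```

For a matrix built as rotation θ times uniform scale s, the columns are (s·cosθ, s·sinθ) and (-s·sinθ, s·cosθ), so the square roots recover s and atan2 recovers θ. With negative or non-uniform scales plus rotation the decomposition becomes ambiguous, which may be why the maths "breaks down" in the general case.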
  3. I thought the same and hacked a const in, and it made no difference. It doesn't really matter now - I've just added a conversion method to my Util class - however I was a bit stumped as to why it didn't work and wondered if it was a well-known issue on this forum. Thanks for your reply.
  4. I hope this is the right place to ask questions about DirectXTK which aren't really about graphics; if not, please let me know a better place. Can anyone tell me why I cannot do any of the following:

     DirectX::SimpleMath::Rectangle rectangle = {...};
     RECT rect = rectangle;
     RECT rect = static_cast<RECT>(rectangle);
     const RECT rect(m_textureRect);

     despite Rectangle having the following operator RECT:

     operator RECT()
     {
         RECT rct;
         rct.left = x;
         rct.top = y;
         rct.right = (x + width);
         rct.bottom = (y + height);
         return rct;
     }

     VS2017 tells me: error C2440: 'initializing': cannot convert from 'const DirectX::SimpleMath::Rectangle' to 'const RECT'. Thanks in advance
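One common cause of a C2440 like this is const-qualification: a conversion operator declared without a trailing `const` cannot be invoked on a const object, and the error message above mentions a `const DirectX::SimpleMath::Rectangle`. (Per the reply further up, adding `const` apparently did not resolve it in the poster's actual build, so this is only one candidate explanation.) A minimal standalone illustration with toy types, not the real DirectXTK declarations:

```cpp
#include <cassert>

// Stand-in for the Windows RECT struct.
struct RECT_ { long left, top, right, bottom; };

struct Rect {
    long x, y, width, height;
    // The trailing 'const' is what allows the conversion to be called on a
    // const Rect; without it, initialising a RECT_ from a const Rect fails
    // to compile with an error similar to C2440.
    operator RECT_() const {
        RECT_ rct;
        rct.left   = x;
        rct.top    = y;
        rct.right  = x + width;
        rct.bottom = y + height;
        return rct;
    }
};
```

With the operator const-qualified, `RECT_ rc = someConstRect;` compiles and performs the conversion.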
  5. But there is also the need to actually write a replacement SpriteBatch in order to support instancing... I'll go head-in-the-sand for the time being
  6. Hey, thanks for the replies. I think I was chasing down performance issues I don't actually have - premature optimisation. I already have a solution for depth sorting on the CPU by only allowing a small finite number of layers, and I will ignore GPU instancing until I have more experience with DirectX and stick with SpriteBatch.
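The "small finite number of layers" idea mentioned above can be sketched as simple bucketing (hypothetical names, plain C++): each sprite is appended to its layer's list in O(1), and the buckets are flushed back-to-front, so no per-frame depth sort is needed at all.

```cpp
#include <array>
#include <cassert>
#include <string>
#include <vector>

constexpr int kLayerCount = 8;   // the "small finite number of layers"

struct Sprite { std::string name; };

class LayeredQueue {
public:
    // O(1) insertion: just append to the chosen layer's bucket.
    void Add(const Sprite& s, int layer) {
        m_layers.at(layer).push_back(s);
    }

    // Visit sprites back-to-front (layer 0 first); insertion order is
    // preserved within a layer, so the draw order is stable.
    template <typename Fn>
    void Flush(Fn&& draw) {
        for (auto& layer : m_layers) {
            for (const auto& s : layer) draw(s);
            layer.clear();
        }
    }

private:
    std::array<std::vector<Sprite>, kLayerCount> m_layers;
};
```

In a SpriteBatch-based renderer, `draw` would be the callback that issues the actual SpriteBatch Draw() call for each sprite.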
  7. Hi there, this is my first post in what looks to be a very interesting forum. I am using DirectXTK to put together my 2D game engine, but I would like to use the GPU depth buffer to avoid sorting back-to-front on the CPU, and I think I also want to use GPU instancing. Can I do that with SpriteBatch, or am I looking at implementing my own sprite rendering? Thanks in advance!