I haven't really used the Geometry Shader stuff before. I'm old and don't like change. It frightens me.
But currently, with Gorgon, I'm recycling a dynamic vertex buffer to display my sprites. Basically, if the sprites share the same texture, shader, etc., each one gets appended to the buffer until the buffer is full or a state change occurs. When the buffer fills, it's discarded; otherwise sprites keep getting appended. On a state change or a discard, the batch gets flushed to the render target in a single draw primitive call. This seems pretty standard, and it's been working quite well for me.
However, I've been giving some thought to using a geometry shader to stream out the sprites. I've done a little reading, and it seems this is an ideal use case for a GS. But apparently there are a few gotchas with these shaders? Notably, the limit of 1024 32-bit values of output per invocation. I understand that the size of the output data might be an issue as well?
Currently, my sprite vertices carry a position, a color, and UV coordinates. I suspect I'd need to send all of these, and more besides, to the shader to meet my needs. Would that cause an issue? Also, I pre-transform my vertices before sending them to the vertex buffer (i.e. no world matrix; I've found this improves performance), and apply only the camera and projection matrices in the vertex shader. Will this cause an issue?
Has anyone here used a GS to power a sprite renderer? Did you compare its performance to a batched solution like the one I described above? I'm not able to set up a test project for the time being, so if I could get some feedback, that'd be swell.
Tape_WormMember Since 21 Apr 2000