Randy Gaul

2D Transparency : Uniform vs Vertex Attribute


I'm a bit new to graphics rendering and APIs, and wanted to ask anyone with rendering-API experience for some opinions. I have my own sprite API for pushing quads into a buffer; the quads are sorted on the CPU.

 

What is generally the preferred method for implementing transparency for 2D sprites, where the transparency of individual sprites can be adjusted over time? Would it be better to set a shader uniform across different draw calls, and try to sort sprites into those calls? This might work if the transparency were quantized to, say, 0-255.
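To make the sort-into-calls idea concrete, here's a minimal CPU-side sketch, assuming a hypothetical `Sprite` struct with alpha already quantized to 0-255 as the sort key. Each contiguous run of equal alpha would then get one uniform update followed by one draw call:

```cpp
#include <algorithm>
#include <cstdint>
#include <utility>
#include <vector>

// Hypothetical sprite record; only the fields needed for sorting are shown.
struct Sprite {
    float x, y;
    uint8_t alpha; // transparency quantized to 0-255 so equal values batch together
};

// Sort sprites so that runs of equal alpha become contiguous. Each run
// (first index, count) would become one uniform update plus one draw call.
std::vector<std::pair<size_t, size_t>> buildAlphaBatches(std::vector<Sprite>& sprites) {
    std::stable_sort(sprites.begin(), sprites.end(),
                     [](const Sprite& a, const Sprite& b) { return a.alpha < b.alpha; });
    std::vector<std::pair<size_t, size_t>> batches;
    size_t i = 0;
    while (i < sprites.size()) {
        size_t j = i;
        while (j < sprites.size() && sprites[j].alpha == sprites[i].alpha) ++j;
        batches.push_back({i, j - i});
        i = j;
    }
    return batches;
}
```

One caveat with this approach: sorting by alpha can fight with whatever draw-order sorting you already do on the CPU, so the batches you actually get depend on how often alpha values repeat.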

 

Or is it generally best to just place alpha values directly into the vertex attributes of the sprite quads? I'm a bit hesitant about this idea, since it seems wasteful to store the same alpha attribute six times per quad (I'm drawing each quad as two triangles).
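For reference, the per-vertex approach looks something like this sketch, assuming a hypothetical `Vertex` layout with position, texcoords, and the duplicated alpha. The waste is one extra float per vertex, i.e. 24 bytes per quad when drawn as 6 vertices:

```cpp
#include <vector>

// Hypothetical vertex layout: position, texcoords, and the duplicated alpha.
struct Vertex {
    float x, y; // position
    float u, v; // texture coordinates
    float a;    // alpha, repeated on every vertex of the quad
};

// Emit one quad as two triangles (6 vertices), all sharing the same alpha.
void pushQuad(std::vector<Vertex>& out, float x, float y, float w, float h, float alpha) {
    Vertex v[4] = {
        {x,     y,     0.f, 0.f, alpha},
        {x + w, y,     1.f, 0.f, alpha},
        {x + w, y + h, 1.f, 1.f, alpha},
        {x,     y + h, 0.f, 1.f, alpha},
    };
    // Triangle 1: corners 0,1,2.  Triangle 2: corners 2,3,0.
    out.push_back(v[0]); out.push_back(v[1]); out.push_back(v[2]);
    out.push_back(v[2]); out.push_back(v[3]); out.push_back(v[0]);
}
```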

 

Does anyone have any feedback or opinions to share?

 

Note: I've applied pre-multiplied alpha to all textures so complicated sorting shouldn't be necessary.
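For anyone unfamiliar with the term: pre-multiplying means baking alpha into the color channels ahead of time, after which the blend function becomes `(ONE, ONE_MINUS_SRC_ALPHA)`. A sketch of the per-pixel conversion for a straight-alpha RGBA8 texel:

```cpp
#include <cstdint>

// Convert one straight-alpha RGBA8 pixel to premultiplied alpha:
// each color channel is scaled by alpha / 255 (with rounding).
// Alpha itself is left untouched.
void premultiply(uint8_t rgba[4]) {
    const uint32_t a = rgba[3];
    for (int i = 0; i < 3; ++i)
        rgba[i] = static_cast<uint8_t>((rgba[i] * a + 127) / 255);
}
```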

Generally, yeah, you'd use uniforms... but drawing a large number of simple objects is a special case where the general solution might not be best. Using vertex attributes allows you to draw many more sprites in a single batch, so it's probably a great idea here.

The trade-off you're making here is a tiny extra cost in your vertex shader and a bit of wasted RAM, in exchange for larger batch sizes, which will save you a lot of CPU time and can help overall GPU efficiency.

By the way, you can render a quad with 4 vertices and 6 indices, instead of with 6 vertices.

Another option is instancing, where the transparency value lives in a "vertex buffer" holding per-instance data, which would save a bunch of RAM... but in practice, instancing is usually very inefficient when you've only got a few vertices per instance.


Instancing the alpha was a pretty interesting idea... For now I'm just setting the shader uniform before draw calls. Luckily, at the point in code where I'm pushing sprites into the buffer, sprites with the same alpha value are generally created together. So the result is very similar to instancing, but without implementing any kind of instancing, i.e. the problem is solved "outside of code" with some a-priori planning.

 

Does anyone else have any ideas or comments? It would be very appreciated :)


Since you're doing 2D, on almost all devices you'll become CPU-bound very quickly. Even on weak mobile phones, the rasterization and shader work that happens in a 2D game is lighter than all the CPU work done to get stuff drawn. Of course, if for some reason your game draws tons of alpha-blended full-screen rectangles with complicated shader operations and a lot of texture fetching in a small number of draw calls, this might be different, but that's just not the case for most 2D games. On desktop PCs, consoles, and faster mobile devices, the GPU work involved in 2D code will be nearly trivial by today's standards, and the GPU will just be idling most of the time.

 

So if you're at a relatively early stage of implementing your renderer, consider the fact that you will eventually add actual batching (one draw call, many objects) - if you're ever going to get into performance-sensitive situations. In 2D scenes, most types of objects can be drawn using the same vertex formats and geometry (sprites, bitmap fonts, particles, and tilemap tiles are all quad-based), the same textures (thanks to atlases), and the same render states (shaders, blending, etc.), so you can take a ton of work away from the CPU this way, drawing hundreds to thousands of distinct objects in single draw calls.

 

That doesn't mean you have to implement all of that now, but your architecture should allow for it. I've been optimizing a WebGL renderer recently whose architecture was practically the opposite of this idea, and I can tell you it's a huge pain to work that stuff in later.

 

You'd think that doing 2D would be trivial with how fast CPUs and GPUs are now, right? Unfortunately, my experience has been completely the opposite.

Edited by agleed
