2D vector graphics using shaders

This topic is 2586 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

Recommended Posts

I'm working on a 2D vector graphics engine that will (hopefully) draw anti-aliased circles, quads, and bezier curves using pixel shaders. Vertex-wise, everything is a quad. For a circle, the pixel shader checks how much of each pixel is covered by the circle and scales the color's alpha by that coverage. The result (hopefully) is a nicely anti-aliased, pixel-perfect circle.
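To make that concrete, here's a CPU-side Python sketch of the per-pixel test such a shader might run. The function name and the 1-pixel linear edge ramp are illustrative assumptions, not the actual shader code:

```python
import math

def circle_coverage_alpha(px, py, cx, cy, radius):
    """Approximate how much of the pixel at (px, py) lies inside the
    circle, using the signed distance from the pixel center to the
    circle edge. All distances are in pixel units."""
    dist_to_edge = math.hypot(px - cx, py - cy) - radius
    # Map the signed distance to coverage: fully inside -> 1.0,
    # fully outside -> 0.0, with a 1-pixel linear ramp at the edge.
    return min(1.0, max(0.0, 0.5 - dist_to_edge))

# A pixel well inside the circle is fully opaque...
print(circle_coverage_alpha(50, 50, 50, 50, 10))  # -> 1.0
# ...a pixel well outside is fully transparent...
print(circle_coverage_alpha(80, 50, 50, 50, 10))  # -> 0.0
# ...and a pixel centered on the edge gets partial alpha.
print(circle_coverage_alpha(60, 50, 50, 50, 10))  # -> 0.5
```

A real shader would use a smoother ramp (or the screen-space derivative of the distance) rather than a fixed 1-pixel band, but the structure is the same.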

I was hoping to use separate shaders for the different shapes, to try and keep the instruction cost and shader model requirements quite low. But for this to work I'll also be alpha-blending a lot.

If I do use different shaders for the different shapes, does that mean that drawing, say, a circle on top of a square on top of a circle requires 3 separate draw calls? That is, I can't batch the two circles into a single draw call, because then there's no way to draw the square in between them without using the depth buffer, and that gives me jagged alpha-blending artifacts.

That would quickly amount to a potentially large number of draw calls for any non-trivial scene, which might become a bottleneck. So it would seem I need to write a monolithic pixel shader that can draw all the 2D primitives I want to support, so I can batch my 2D primitives into reasonably sized chunks.
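The batching constraint can be sketched like this: since draw order must be preserved, only consecutive primitives that share a shader can be merged into one call. A hypothetical Python sketch of the resulting draw-call count (function name is made up):

```python
from itertools import groupby

def count_draw_calls(primitives):
    """Given a back-to-front list of primitive types, count the draw
    calls needed when each shader switch forces a new call: only runs
    of consecutive same-type primitives can be batched together."""
    return sum(1 for _type, _run in groupby(primitives))

# Interleaved shapes defeat batching entirely...
print(count_draw_calls(["circle", "square", "circle"]))            # -> 3
# ...while grouped shapes of one type collapse into fewer calls.
print(count_draw_calls(["circle", "circle", "circle", "square"]))  # -> 2
```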

But the code for drawing anti-aliased pixels will probably be quite chunky. So putting everything into a single shader might push up the minimum specs to shader model 3 or higher just from the instruction count, which I was hoping to avoid.

So I dunno, I'm just wondering if people have any thoughts or tips. Maybe there's some sneaky trick using different passes?

Depends how 'large' large is.

Let's take something simple like an eye:

You have a circle for the eyeball itself. You have a circle for the iris. You have a circle for the pupil. And you have a circle for the light shine. And maybe another eye shine if you want it to be extra cute looking.

That's 4-5 draw calls if each circle is a separate draw call. Just for one eye of a character. An average character might end up being constructed of a few dozen or low hundreds of primitives.

Compare that to maybe 6 draw calls for an entire 3D mesh, depending on passes and the like. The nearest analog would be having to draw every triangle in your 3D scene as a separate draw call.

I expect to be fill rate limited, but if every primitive is a separate draw call I could easily end up draw call limited. So I want to nip that in the bud early on.

Quote:
Original post by Relfos
Why don't you use just one shader for everything? That way you can batch draw calls easily; just pass the 'shape' type as a uniform to the shader.


Quote:
Original post by Numsgil
Which would seem to imply I need to write a monolithic pixel shader which can draw all the 2D primitives I want to support, so I can batch my 2D primitives into reasonably sized chunks.

But the code for drawing anti-aliased pixels will probably be quite chunky. So putting everything into a single shader might push up the minimum specs to shader model 3 or higher just from the instruction count, which I was hoping to avoid.


So yeah, that's my current plan, I'm just wondering if there's a better way.

I don't see a better way, if you want alpha blending and z-sorting. What about approximating the shapes with triangles instead of using quads for everything? That way you can still use one shader for everything, and the shader code wouldn't need to be very complex. With enough triangles you can get a very smooth circle, and you also get access to all kinds of complex shapes that would be difficult to define with just a formula.
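For illustration, a minimal Python sketch of the triangle-fan version of a circle (function and parameter names are made up):

```python
import math

def circle_triangle_fan(cx, cy, radius, segments):
    """Approximate a circle as a triangle fan: one center vertex plus
    `segments` points evenly spaced around the circumference. More
    segments give a smoother silhouette at the cost of more vertices."""
    ring = [(cx + radius * math.cos(2 * math.pi * i / segments),
             cy + radius * math.sin(2 * math.pi * i / segments))
            for i in range(segments)]
    # Each triangle is (center, ring[i], ring[i+1]), wrapping around.
    return [((cx, cy), ring[i], ring[(i + 1) % segments])
            for i in range(segments)]

tris = circle_triangle_fan(0.0, 0.0, 1.0, 32)
print(len(tris))  # -> 32
```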

Quote:
Original post by Relfos
I don't see a better way, if you want alpha blending and zsorting.


Actually, I only care about alpha blending. I have to sort all the quads and draw back to front to prevent artifacts, so I can't use the depth buffer anyway.

I'm wondering if there's a way to approximate alpha blending that would let me draw quads in a more arbitrary order? I can't use additive blending, for instance, because it makes things look glow-y, but maybe there's some other trick?
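For reference, the reason back-to-front ordering matters is that the standard "over" blend is not commutative. A single-channel Python sketch (illustrative only):

```python
def blend_over(dst, src, src_alpha):
    """Standard alpha 'over' blend of a source color onto a destination
    (single channel for brevity): result = src*a + dst*(1 - a)."""
    return src * src_alpha + dst * (1.0 - src_alpha)

background = 0.0
# Two half-transparent layers with colors 0.8 and 1.0, drawn in
# opposite orders over the same background:
order_a = blend_over(blend_over(background, 0.8, 0.5), 1.0, 0.5)  # 0.8 first
order_b = blend_over(blend_over(background, 1.0, 0.5), 0.8, 0.5)  # 1.0 first
print(order_a, order_b)  # roughly 0.7 vs 0.65: order changes the result
```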

Quote:

What about approximating the shapes with triangles instead of using quads for everything? That way you can still use one shader for everything, and the shader code wouldn't need to be very complex. With enough triangles you can get a very smooth circle, and you also get access to all kinds of complex shapes that would be difficult to define with just a formula.


That's an option, but then you have to dynamically tessellate everything every frame as you zoom in/out, which is a pain. If I were using SM4, I could use a geometry shader for that, which would be pretty cool.

Quote:

Original post by Halifax2
Chapter 25 of GPU Gems 3 might be of interest to you: Rendering Vector Art on the GPU.


Thanks, I'll read over it and see how they do it.

The GPU Gems 3 article doesn't solve the ordering problem. But how many translucent overlapping objects do you need to support? Have you verified this is actually a problem? Have you considered depth peeling techniques?

I would suggest you simply render the shapes triangulated. That way you benefit from a lot of features like MSAA, and you're not limited to simple shapes. You could also render everything with the same "material" in one pass. You would simply need to optimize in a different place, namely making the engine push as many triangles as possible. Things like pseudo or real hardware instancing are your friend in a scenario like that, and fairly easy to implement.
Another idea would be to approximate the shapes using distance fields. That has the benefit that you could still use quads, and even simple textures (holding the distance field) to define arbitrary shapes. You would also only have one shader to draw the shapes, so you wouldn't have the problem of too many state changes. Alpha blending is still needed, but that shouldn't be a problem.
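To sketch the distance-field idea: the shader samples a signed distance and maps it to alpha with a smoothstep, and those same few lines cover every shape. A CPU-side Python approximation (function names and the edge width are assumptions):

```python
def smoothstep(edge0, edge1, x):
    """GLSL-style smoothstep: 0 below edge0, 1 above edge1, smooth between."""
    t = min(1.0, max(0.0, (x - edge0) / (edge1 - edge0)))
    return t * t * (3.0 - 2.0 * t)

def alpha_from_distance(signed_distance, edge_width=1.0):
    """Turn a sampled signed distance (negative inside the shape,
    positive outside) into an anti-aliased alpha value. The same code
    works for any shape; only the distance field texture changes."""
    return 1.0 - smoothstep(-edge_width * 0.5, edge_width * 0.5,
                            signed_distance)

print(alpha_from_distance(-5.0))  # deep inside the shape -> 1.0
print(alpha_from_distance(5.0))   # far outside -> 0.0
print(alpha_from_distance(0.0))   # exactly on the edge -> 0.5
```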

Quote:
Original post by apatriarca
But how many translucent overlapping objects do you need to support? Have you verified this is actually a problem?


For things like circles, I'm basically creating a procedural texture in the pixel shader. Right there, all circles will need alpha blending. If I anti-alias the edges of shapes in the pixel shader, they'll all need alpha blending too. So if I do things in the pixel shader like I want to, pretty much every quad I draw needs to be alpha blended.

Quote:
Have you considered depth peeling techniques?


From what I understand, if this worked, it would just reduce fillrate. Which is a worthy goal, but not really the issue I'm wrestling with right now.

Quote:
Original post by mokaschitta
I would suggest you simply render the shapes triangulated.


As I said before, the issue with triangulating everything is that to keep things smooth you have to dynamically tessellate shapes as the zoom level changes, or you'll get artifacts (the circle won't look smooth at high zoom levels, because you can see the "kinks" in its outline). That's a huge pain to do. And if I want pixel-perfect shapes, we're talking about a lot of triangles.
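To put numbers on that: the maximum deviation of an inscribed n-gon from the true circle is r*(1 - cos(pi/n)), so the segment count needed for a fixed on-screen error grows with zoom. A quick Python sketch (the error tolerance is an arbitrary assumption):

```python
import math

def segments_for_circle(screen_radius, max_error_px=0.25):
    """Number of segments needed so an inscribed polygon deviates from
    the true circle by at most max_error_px pixels on screen. The max
    deviation of an n-gon edge from the circle is r*(1 - cos(pi/n))."""
    if screen_radius <= max_error_px:
        return 3  # tiny circle: any triangle is within tolerance
    n = math.pi / math.acos(1.0 - max_error_px / screen_radius)
    return max(3, math.ceil(n))

# Zooming in on the same circle demands ever more triangles:
for zoom in (1, 10, 100):
    print(zoom, segments_for_circle(50 * zoom))
```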

Quote:

That way you benefit from a lot of features like MSAA, and you're not limited to simple shapes.


Keep in mind that you can stretch the shapes using transforms. A pixel-perfect circle can be stretched into any ellipse without changing anything in the pixel shader, since the pixel-to-circle coverage test (with the pixel approximated as an ellipse) happens in texture coordinate space.

So circle, square, open/closed linear curve (that is, a polyline or a polygon), and open/closed bezier curve (or cubic spline) should be the only primitives necessary to build up all the other shapes. There's some elbow grease involved in getting the curves working properly in the pixel shader, but it seems fairly doable.
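A tiny Python sketch of that texture-space test (the mapping function here is a made-up example):

```python
import math

def inside_transformed_circle(x, y, to_texcoord):
    """Test a screen-space point against a unit circle in texture space.
    `to_texcoord` maps screen coords into the quad's texture space,
    where the shape is always the circle centered at (0.5, 0.5) with
    radius 0.5; any affine stretch of the quad then yields an ellipse
    without changing this test."""
    u, v = to_texcoord(x, y)
    return math.hypot(u - 0.5, v - 0.5) <= 0.5

# A quad stretched 2x horizontally: screen (x, y) -> texcoords (x/2, y).
stretch = lambda x, y: (x / 2.0, y)
print(inside_transformed_circle(1.9, 0.5, stretch))  # -> True (inside ellipse)
print(inside_transformed_circle(0.5, 0.95, stretch)) # -> False (outside it)
```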

Quote:

Another idea would be to approximate the shapes using distance fields. That has the benefit that you could still use quads, and even simple textures (holding the distance field) to define arbitrary shapes. You would also only have one shader to draw the shapes, so you wouldn't have the problem of too many state changes. Alpha blending is still needed, but that shouldn't be a problem.


I don't know what you mean by distance fields specifically, but using textures runs into the problem that at some zoom level things are going to look pixelated, which I'm trying to avoid (that's the whole point of vector graphics, after all). If you procedurally generate the texture in the pixel shader, you're basically back to what I'm doing right now anyway.

Distance field rendering lets you render vector-like graphics from low resolution textures (and anti-alias them). It's how Valve renders text and some other textures in Team Fortress 2. They scale exactly like you want; search Google and the forums, there have been many threads about the topic before.

Quote:
Original post by Halifax2
Chapter 25 of GPU Gems 3 might be of interest to you: Rendering Vector Art on the GPU.


Quote:
Original post by raigan
Rather than using circles, rectangles, etc. as primitives, you could use a single general/generic primitive, for instance beziers: Resolution Independent Curve Rendering using Programmable Graphics Hardware

You could then build circles, rectangles, etc. by combining many of these simpler primitives.


This is an area I have been attempting to look into as well. Have there been any implementations of these methods? I was trying to work out the alpha-blending method mentioned for graphics cards that don't support the ddx/ddy gradient instructions. Although it's not entirely necessary for more recent cards, I would like to better understand the math involved, and it would be nice to have support for it in those cases. Any help in that area would be greatly appreciated.

Haven't compared against using distance fields yet though.
