Lunpa

Rendering SVG on a vertex shader



Rendering SVG on a vertex shader: Bad idea or AWESOME idea?

While it's stored as XML, SVG is essentially a procedural format. As such, you can pretty much take the rendering commands in the order they come, condense them if you're clever, and store the commands in the pixels of an image file.

I actually have that worked out already; and if you do bitmap tracing (like with Inkscape) and optimize your SVG just right, you can get some pretty astronomical "compression".
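
To give an idea of the packing I mean, here's a minimal sketch in C++; the opcode values and the 8-bit quantization are made up for illustration, not any standard layout:

#include <cstdint>
#include <vector>

// One path command packed into one RGBA8 texel: R = opcode,
// G/B = coordinate payload quantized to bytes. These opcode values
// and the 8-bit quantization are assumptions for illustration; a
// real encoder would want more precision per coordinate.
enum Opcode : uint8_t { MOVE_TO = 0, LINE_TO = 1, QUAD_TO = 2, CLOSE = 3 };

struct Texel { uint8_t r, g, b, a; };

// Quantize a coordinate in [0,1] to one byte.
static uint8_t quantize(float x) {
    if (x < 0.f) x = 0.f;
    if (x > 1.f) x = 1.f;
    return static_cast<uint8_t>(x * 255.f + 0.5f);
}

// Append one command; a curve's extra control points would simply
// occupy the following texels.
void emitCommand(std::vector<Texel>& img, Opcode op, float x, float y) {
    img.push_back({ static_cast<uint8_t>(op), quantize(x), quantize(y), 255 });
}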

Say I were to implement the rendering commands for just the svg:path element (all my bitmap tracer really uses) on a vertex shader; how bad would the runtime complexity be? Basically, the rendering commands boil down to filling in a region bounded by lines and curves. I believe this is in the same realm as procedural textures, but I have very little experience with that.
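
To make the cost concrete: done naively, every pixel has to test against every segment, so it's O(pixels × segments). A CPU sketch of the per-pixel even-odd fill test (assuming the curves have already been flattened to line segments, which conveniently dodges the hard part):

#include <vector>

struct Vec2 { float x, y; };

// Even-odd fill test: cast a horizontal ray from (px, py) and count
// how many path edges it crosses. Curves are assumed to be flattened
// to line segments beforehand. A shader doing the same test per
// fragment pays O(segment count) per pixel.
bool insidePath(const std::vector<Vec2>& pts, float px, float py) {
    bool inside = false;
    for (size_t i = 0, j = pts.size() - 1; i < pts.size(); j = i++) {
        const Vec2& a = pts[i];
        const Vec2& b = pts[j];
        if ((a.y > py) != (b.y > py) &&
            px < (b.x - a.x) * (py - a.y) / (b.y - a.y) + a.x)
            inside = !inside;
    }
    return inside;
}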

Also, are there any pitfalls I should be aware of for this?



SVG path spec, for anyone who's interested:
http://www.w3.org/TR/SVG/paths.html

There is an article about vector graphics rendering in one of the GPU Gems books, written by the famous Jim Blinn. I don't recall off the top of my head how it was implemented, but I think it is available online at the NVIDIA developer site.

As a side note, I don't know if you have taken a look at the D3D11 tessellation stages or not, but they would be ideal for this type of thing. You can decide dynamically how finely to tessellate a mesh that represents the path, based on its on-screen size and the derivative of the path. This would produce a (near) optimal tessellation for a given shape...
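
In rough code, the idea is just to derive a factor from projected size (a sketch; the 8-pixels-per-subsegment target is an arbitrary quality knob, and a real D3D11 hull shader would compute this per patch edge):

#include <algorithm>

// Choose a tessellation factor for one curve segment from its
// projected screen-space length. pixelsPerSubSegment is an arbitrary
// quality knob; 64 is the D3D11 upper limit on tessellation factors.
float tessFactorForSegment(float screenSpaceLength,
                           float pixelsPerSubSegment = 8.0f) {
    return std::clamp(screenSpaceLength / pixelsPerSubSegment, 1.0f, 64.0f);
}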

Awesome; I found the article online here: http://http.developer.nvidia.com/GPUGems3/gpugems3_ch25.html

It is the spitting image of what I am after, with one caveat: I'm looking to use this for texturing 3D models, but the article seems to assume that the geometry has already been taken care of. I guess that keeps the sport in it, though; there wouldn't be any if the answer were all there right from the start.

I think a variation on tessellation would be to generate a single curve per polygon, which approximates however many curves actually pass through it. This could be combined with tessellation to simplify objects that are further away, too.
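
One way that might work is a least-squares fit: pin the endpoints, sample the original curves where they cross the polygon, and solve for the single middle control point of a quadratic that best matches the samples. A toy sketch, assuming each sample already comes with a parameter value t (by chord length, say):

#include <vector>

struct Vec2 { float x, y; };

// Fit one quadratic Bezier B(t) = (1-t)^2*P0 + 2t(1-t)*C + t^2*P1 to
// sample points from the original curves, with endpoints P0/P1 fixed.
// Only the middle control point C is unknown, so least squares has a
// closed-form answer.
Vec2 fitMiddleControlPoint(Vec2 p0, Vec2 p1,
                           const std::vector<Vec2>& samples,
                           const std::vector<float>& t) {
    float numX = 0.f, numY = 0.f, den = 0.f;
    for (size_t i = 0; i < samples.size(); ++i) {
        float u = 1.f - t[i];
        float b = 2.f * u * t[i];  // basis weight of C at this sample
        numX += b * (samples[i].x - u * u * p0.x - t[i] * t[i] * p1.x);
        numY += b * (samples[i].y - u * u * p0.y - t[i] * t[i] * p1.y);
        den  += b * b;
    }
    if (den == 0.f)  // no usable samples: fall back to the midpoint
        return { (p0.x + p1.x) * 0.5f, (p0.y + p1.y) * 0.5f };
    return { numX / den, numY / den };
}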

I'm not all that familiar with tessellation shaders; and in this case, I also have to do my best to keep my minimum system requirements as low as possible :P


But thank you! This is very helpful =)

The above resource is, if memory serves, co-authored by Blinn and Loop, who both work for Microsoft Research's graphics group. Check this out as well; there's a demo available that quite likely does what you're trying to do.

I have been using the method described in the GPU Gems article (which is slightly more elaborate than the original version of the paper (pdf)) and I'm quite satisfied with it. It is a (almost) completely pixel-driven approach, so there's no need for tessellation units.
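
For reference, the core of that method is tiny: each curve triangle's vertices carry texture coordinates chosen so the quadratic becomes the zero set of u^2 - v, and the fragment stage just evaluates the sign. Outside of any shader, the test is simply:

// Loop-Blinn interior test for a quadratic Bezier segment. The
// triangle's three vertices are assigned texture coordinates (0,0),
// (0.5,0) and (1,1); the hardware interpolates them, and the curve
// is exactly where u^2 - v = 0. Negative means the fragment lies on
// the filled side of the curve.
bool insideQuadratic(float u, float v) {
    return u * u - v < 0.0f;
}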

The only thing that you should be aware of, in my opinion, is the massive complexity of correctly triangulating the given shapes while maintaining curve information. There are a couple of triangulators around; if you're OK with them, you'll save a lot of work. I personally don't quite get how you could make it work for arbitrary 3D models.

Quote:
Original post by Krohm
I personally don't quite get how you could make it work for arbitrary 3D models.


Me neither =)
I think the key thing here is that you have one line/curve per triangle in the method described in the GPU Gems article. I guess the way I envisioned doing this would involve creating a texture with the vertex shader and then letting the fragment shader access it as a sampler; but that was grounded in a poor understanding of how samplers work.

Perhaps another way of doing this would be to render each SVG texture as a 2D object, as described above, at whatever resolution is desired, and then render the models normally in a second pass. These SVG files can have something like 20 different paths overlaid on one another; but I guess that just means up to 20 meshes per SVG texture, and a little bit of z-indexing keeps rasterization of each texture to a single pass.

Is one pass per texture a bad idea (say I have something like 60 textures)? Would it be better to do a mega-texture here (one pass for all textures)... considering that the textures are not always going to be rendered at the same size?
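
For what it's worth, the mega-texture route would look roughly like this; drawSvgMeshes and the naive shelf packing are made-up stand-ins for illustration, not any real API:

#include <vector>

// Hypothetical stand-ins, for illustration only.
struct SvgMeshSet {};                 // the triangulated paths of one SVG
struct Rect { int x, y, w, h; };
void drawSvgMeshes(const SvgMeshSet&, Rect /*viewport*/) {
    // imagined draw call: set the viewport and draw the SVG's triangles
}

// Bake every SVG into one shared atlas in a single render pass, rather
// than switching render targets once per texture. Naive shelf packing:
// fill rows left to right. Assumes each SVG's baked size is known and
// no single SVG is wider than the atlas.
std::vector<Rect> packAndBake(const std::vector<SvgMeshSet>& svgs,
                              const std::vector<Rect>& sizes,
                              int atlasWidth) {
    std::vector<Rect> placed;
    int x = 0, y = 0, rowHeight = 0;
    for (size_t i = 0; i < svgs.size(); ++i) {
        int w = sizes[i].w, h = sizes[i].h;
        if (x + w > atlasWidth) { x = 0; y += rowHeight; rowHeight = 0; }
        Rect r{ x, y, w, h };
        drawSvgMeshes(svgs[i], r);    // all bakes share one target
        placed.push_back(r);
        x += w;
        if (h > rowHeight) rowHeight = h;
    }
    return placed;
}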
