2D acceleration on modern cards

Posted by kuroioranda

I'm wondering if anybody knows whether modern consumer GPUs on PC (ATI, Nvidia, Intel) still have dedicated hardware for 2D rendering operations, or whether the drivers now repurpose the 3D hardware for 2D. It seems like it would be easy enough to emulate most 2D blitting operations with screen-aligned 3D primitives.
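
A minimal sketch of that idea, for concreteness: one blit expressed as two screen-aligned triangles in pixel coordinates. The struct and helper below are purely illustrative, not from D3D or any real driver.

    struct SpriteVertex
    {
        float x, y, z;   // position in pixels; z is just a constant filler value
        float u, v;      // texture coordinates into the source image
    };

    // Builds one "blit": a quad covering the destination rectangle, split into
    // the two triangles the 3D rasterizer actually wants. A trivial orthographic
    // projection (no perspective at all) maps the pixel coordinates to clip space.
    void BuildSpriteQuad(float dx, float dy, float dw, float dh,
                         float u0, float v0, float u1, float v1,
                         SpriteVertex out[6])
    {
        const SpriteVertex tl = { dx,      dy,      0.0f, u0, v0 };
        const SpriteVertex tr = { dx + dw, dy,      0.0f, u1, v0 };
        const SpriteVertex bl = { dx,      dy + dh, 0.0f, u0, v1 };
        const SpriteVertex br = { dx + dw, dy + dh, 0.0f, u1, v1 };
        out[0] = tl; out[1] = tr; out[2] = bl;   // first triangle
        out[3] = bl; out[4] = tr; out[5] = br;   // second triangle
    }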

3D operations end up being 2D operations by the time pixels are being processed. Yes, the common way of performing 2D processing is with flat "3D" primitives.

Considering that 2D is really just 3D with everything at the same Z, I don't see any reason why dedicated hardware would even be required.
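
A minimal sketch of what that usually boils down to: an orthographic projection that maps pixel coordinates straight to clip space and leaves Z alone, so every "2D" vertex can sit at the same depth. Column-major layout is assumed here, and the function name is just illustrative.

    // Maps pixel coordinates (0,0)..(width,height) to clip space [-1,+1].
    void BuildOrtho2D(float width, float height, float m[16])
    {
        for (int i = 0; i < 16; ++i)
            m[i] = 0.0f;
        m[0]  =  2.0f / width;   // x: 0..width  -> -1..+1
        m[5]  = -2.0f / height;  // y: 0..height -> +1..-1 (y grows downwards on screen)
        m[10] =  1.0f;           // z passes straight through
        m[12] = -1.0f;           // x translation
        m[13] =  1.0f;           // y translation
        m[15] =  1.0f;           // w
    }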

There used to be dedicated 2D hardware once upon a time, when 3D graphics accelerators were out of the question and blitting with a reduced CPU and bus workload was a godsend. That lasted until the late 1990s, when 3D acceleration went from really bleeding edge (e.g. the early Voodoo add-on cards that worked alongside a regular graphics card) to a barely more expensive alternative to a less capable, 2D-only mainstream graphics card.

I don't know if this is still relevant in 2011, but out of interest -- at work, there's one particular GPU from ~5 years ago that I've got complete control over (as in, I can manually write words to its instruction stream and control its program counter, instead of using an API like D3D).
On this GPU, there are still commands to switch it between 3D mode (which allows it to act like a GPU) and 2D mode (which allows it to perform raw memory copying operations in VRAM).
However, it also has a very deep pipeline, and switching modes causes a pipeline flush (which is just about the worst thing you can do for performance). So, in our new engine, we avoid using the 2D mode *at all* and perform any "2D"-type operations in 3D mode.

Thanks guys. That's one suspicion verified :).

As a further question, what about acceleration of primitives that don't have any easy 3D analogue? Blits and lines are pretty easy to do (as you say, just rasterized by the "3D" hardware without any perspective correction), but I'm curious about more complex 2D primitives such as filled multi-point polys and arcs. I think (although I'm not positive) that older cards had real 2D hardware dedicated to raster operations such as arcs and complex polygons that GDI could use. I can see how these things could be emulated using line and triangle strips with a bit of CPU preprocessing in the graphics driver, but is that how it's actually done?

Hodgman, out of curiosity what chip is it?
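
A rough sketch of the kind of CPU-side preprocessing described above: tessellating a filled arc into a triangle fan that triangle-only hardware can draw directly. The names and parameters are illustrative, not taken from any actual driver.

    #include <cmath>
    #include <vector>

    struct Vec2 { float x, y; };

    // Turns a filled arc (a "pie slice") into a triangle fan.
    // 'segments' trades smoothness for vertex count.
    std::vector<Vec2> TessellateArc(Vec2 center, float radius,
                                    float startAngle, float endAngle,
                                    int segments)
    {
        std::vector<Vec2> fan;
        fan.push_back(center); // shared centre vertex of the fan
        for (int i = 0; i <= segments; ++i)
        {
            float t = startAngle + (endAngle - startAngle) * i / segments;
            Vec2 v = { center.x + radius * std::cos(t),
                       center.y + radius * std::sin(t) };
            fan.push_back(v);
        }
        return fan; // submit as a triangle fan, or expand into a triangle list
    }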

3D video cards only rasterize triangles (some may have dedicated quad rendering paths, but most will still split quads into two triangles). Concave polygons have to be pre-tessellated, and from what I've read that functionality is no longer hardware accelerated, though you could quite easily do it with compute shaders.
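
A rough sketch of that pre-tessellation step, assuming a simple (non-self-intersecting), counter-clockwise input polygon: naive O(n^2) ear clipping into a flat triangle list. This is only an illustration, not how any particular driver actually does it.

    #include <vector>

    struct Vec2 { float x, y; };

    // Twice the signed area of triangle (a,b,c); positive for counter-clockwise.
    static float Cross(const Vec2& a, const Vec2& b, const Vec2& c)
    {
        return (b.x - a.x) * (c.y - a.y) - (b.y - a.y) * (c.x - a.x);
    }

    // True if p lies inside (or on the edge of) counter-clockwise triangle (a,b,c).
    static bool PointInTriangle(const Vec2& p, const Vec2& a, const Vec2& b, const Vec2& c)
    {
        return Cross(a, b, p) >= 0.0f && Cross(b, c, p) >= 0.0f && Cross(c, a, p) >= 0.0f;
    }

    // Naive O(n^2) ear clipping: outputs three vertices per triangle.
    std::vector<Vec2> Triangulate(std::vector<Vec2> poly)
    {
        std::vector<Vec2> tris;
        while (poly.size() >= 3)
        {
            const size_t n = poly.size();
            bool clipped = false;
            for (size_t i = 0; i < n; ++i)
            {
                const size_t prev = (i + n - 1) % n;
                const size_t next = (i + 1) % n;

                if (Cross(poly[prev], poly[i], poly[next]) <= 0.0f)
                    continue; // reflex or degenerate corner, not an ear

                // An ear's triangle must not contain any other polygon vertex.
                bool isEar = true;
                for (size_t j = 0; j < n && isEar; ++j)
                    if (j != prev && j != i && j != next &&
                        PointInTriangle(poly[j], poly[prev], poly[i], poly[next]))
                        isEar = false;
                if (!isEar)
                    continue;

                tris.push_back(poly[prev]);
                tris.push_back(poly[i]);
                tris.push_back(poly[next]);
                poly.erase(poly.begin() + i); // clip the ear and keep going
                clipped = true;
                break;
            }
            if (!clipped)
                break; // degenerate input; give up rather than loop forever
        }
        return tris;
    }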


As for which chip it is: vendor NDAs will prevent him from answering that, I'm pretty sure. The hardware in question is both a blessing and a curse to work with, though.
