How is software rendering GPU-accelerated on Windows, macOS, and Ubuntu?


My understanding is that the Desktop Window Manager introduced in Windows Vista (and the equivalent compositors on macOS and Ubuntu) presumably works as follows:

  1. It provides each window with a buffer of per-pixel color values, and programs are responsible for filling those buffers, generally through APIs that provide UI abstractions.
  2. Take, for instance, the UI of Adobe's suite of products. It is a programmable UI whose components are rendered at a varying, user-controlled brightness. Presumably this is implemented by re-rendering the bitmap of a UI component on a brightness change and caching the result (sketched below).
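
To make that model concrete, here is a minimal C++ sketch (all the types are hypothetical, not a real OS API): the compositor hands the application a per-pixel color buffer, and the application re-rasterizes a component's bitmap only when its brightness input changes, serving the cached copy otherwise.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-pixel buffer in the compositor's model described above.
struct Bitmap {
    int width = 0, height = 0;
    std::vector<uint32_t> argb;          // one 0xAARRGGBB value per pixel
};

struct CachedComponent {
    float  renderedBrightness = -1.0f;   // brightness the cache was built with
    Bitmap cache;

    // Return the cached bitmap, re-rasterizing only on a brightness change.
    const Bitmap& bitmapFor(float brightness) {
        if (brightness != renderedBrightness) {
            rasterize(brightness);        // the expensive CPU work happens here
            renderedBrightness = brightness;
        }
        return cache;
    }

private:
    void rasterize(float brightness) {
        cache.width = 128; cache.height = 32;
        cache.argb.assign(size_t(cache.width) * cache.height, 0);
        uint32_t gray = uint32_t(255.0f * brightness);
        for (uint32_t& px : cache.argb)   // stand-in for real vector drawing
            px = 0xFF000000u | (gray << 16) | (gray << 8) | gray;
    }
};
```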

Questions:

1. Does it make sense to composite said bitmaps into the buffer provided by the DWM, or does it make more sense to write to a GPU API (OpenGL, Direct3D) directly? Presumably, if you are compositing the contents of your window as a set of bitmaps, where you send the result is your choice, and you can send it to the GPU directly if you wish.
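
If you do take the GPU route, the mechanics amount to a texture upload plus a textured quad. A sketch using legacy desktop OpenGL for brevity (assumes a current GL context; a real renderer would use buffer objects and shaders, and header/enum availability varies by platform):

```cpp
#include <GL/gl.h>
#include <cstdint>

// Upload a cached 0xAARRGGBB bitmap as a GL texture.
GLuint uploadBitmap(const uint32_t* pixels, int w, int h) {
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    // GL_BGRA + GL_UNSIGNED_BYTE matches the 0xAARRGGBB memory layout
    // on little-endian machines.
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, w, h, 0,
                 GL_BGRA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}

// Draw the texture as one screen-space quad.
void drawQuad(GLuint tex, float x, float y, float w, float h) {
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, tex);
    glBegin(GL_QUADS);
    glTexCoord2f(0, 0); glVertex2f(x,     y);
    glTexCoord2f(1, 0); glVertex2f(x + w, y);
    glTexCoord2f(1, 1); glVertex2f(x + w, y + h);
    glTexCoord2f(0, 1); glVertex2f(x,     y + h);
    glEnd();
}
```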

2. A confirmation of #1: is it in fact the case that, for a UI generated in software (e.g. from vectors, or the behavior exhibited by Adobe's UIs) by Qt and other general-purpose OS UI libraries, what is fed to the OS APIs or the GPU APIs is generally a set of cached, composited bitmaps?
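
Qt, at least, exposes this caching pattern directly: QPixmapCache is a process-wide cache of rendered pixmaps keyed by string. A sketch of serving a vector element from it (QPixmapCache, QPainter, and QPainterPath are real Qt API; the key scheme and function are my own):

```cpp
#include <QPainter>
#include <QPainterPath>
#include <QPixmap>
#include <QPixmapCache>
#include <QString>

// Rasterize a vector path once per (size, brightness) combination,
// then serve the cached pixmap on subsequent calls.
QPixmap cachedElement(const QPainterPath& path, int w, int h, float brightness) {
    const QString key = QString("elem:%1x%2:%3").arg(w).arg(h).arg(brightness);
    QPixmap pm;
    if (!QPixmapCache::find(key, &pm)) {          // cache miss: rasterize
        pm = QPixmap(w, h);
        pm.fill(Qt::transparent);
        QPainter p(&pm);
        p.setRenderHint(QPainter::Antialiasing);
        int gray = int(255 * brightness);
        p.fillPath(path, QColor(gray, gray, gray));
        QPixmapCache::insert(key, pm);
    }
    return pm;   // caller blits this with QPainter::drawPixmap(...)
}
```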

3. Essentially the point of this question: a clarification of what happens inside libraries that provide vector-graphics implementations. For UIs built from vectors, it would appear to make sense to cache the rasterized elements in spritesheets where reasonable, since they are apparently all sent to the GPU anyway. Otherwise, pixel calculations for the entire window are performed on the CPU every frame and written to the OS-provided bitmap buffer.

It would appear that if you are not caching ALL of your elements in a spritesheet, or implementing vectors on the GPU, then you must in fact render to a bitmap (the OS buffer) in software, because you need to maintain Z-order.
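
That constraint is just the painter's algorithm run on the CPU: sort the elements by Z, then alpha-blend each cached bitmap into the OS-provided buffer back to front. A minimal sketch (straight-alpha "over" blending; Bitmap mirrors the earlier sketch):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

struct Bitmap { int width = 0, height = 0; std::vector<uint32_t> argb; };
struct Element { int x, y, z; const Bitmap* bmp; };

// Classic "source over" blend of one straight-alpha pixel onto another;
// the destination (window) is assumed opaque, so the result alpha is 0xFF.
static uint32_t blendOver(uint32_t dst, uint32_t src) {
    uint32_t sa = src >> 24;
    if (sa == 255) return src;
    if (sa == 0)   return dst;
    uint32_t r = 0xFF000000u;
    for (int shift = 0; shift <= 16; shift += 8) {
        uint32_t s = (src >> shift) & 0xFF, d = (dst >> shift) & 0xFF;
        r |= ((s * sa + d * (255 - sa)) / 255) << shift;
    }
    return r;
}

// Composite all elements into the window buffer back to front,
// preserving the Z-order the paragraph above mentions.
void composite(Bitmap& window, std::vector<Element> elems) {
    std::sort(elems.begin(), elems.end(),
              [](const Element& a, const Element& b) { return a.z < b.z; });
    for (const Element& e : elems)
        for (int row = 0; row < e.bmp->height; ++row)
            for (int col = 0; col < e.bmp->width; ++col) {
                int dx = e.x + col, dy = e.y + row;
                if (dx < 0 || dy < 0 || dx >= window.width || dy >= window.height)
                    continue;
                uint32_t& dst = window.argb[size_t(dy) * window.width + dx];
                dst = blendOver(dst, e.bmp->argb[size_t(row) * e.bmp->width + col]);
            }
}
```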

4. However, perhaps it is better to render an invalidated vector to a transient texture, when necessary, during a render pass over the scene-graph hierarchy, since this is functionally equivalent to rendering into the provided bitmap buffer except for (see the sketch after this list):

  1. Allocating a texture with power-of-two (2^n) dimensions
  2. An unbatched GPU draw call
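
A sketch of what such a transient texture might look like, with both costs called out (the types are hypothetical; the GPU upload calls are noted in comments rather than performed):

```cpp
#include <cstdint>
#include <vector>

// Round a texture dimension up to the next power of two (cost 1 above:
// a 130x40 element pays for a 256x64 allocation).
uint32_t nextPow2(uint32_t v) {
    v -= 1;
    v |= v >> 1;  v |= v >> 2;  v |= v >> 4;
    v |= v >> 8;  v |= v >> 16;
    return v + 1;
}

// Hypothetical transient texture kept alongside a scene-graph node.
// On invalidation the vector is re-rasterized into it; every frame it is
// drawn as its own quad (cost 2: one unbatched draw call per element).
struct TransientTexture {
    uint32_t texWidth = 0, texHeight = 0;   // power-of-two allocation
    std::vector<uint32_t> pixels;           // CPU-side staging bitmap
    bool dirty = true;

    void invalidate() { dirty = true; }

    // Re-rasterize only when dirty, reallocating if the element has
    // outgrown the current power-of-two extents.
    void ensureUpToDate(uint32_t elemW, uint32_t elemH) {
        if (!dirty) return;
        uint32_t tw = nextPow2(elemW), th = nextPow2(elemH);
        if (tw != texWidth || th != texHeight) {
            texWidth = tw; texHeight = th;
            pixels.assign(size_t(tw) * th, 0);  // would be glTexImage2D here
        }
        // ... CPU vector rasterization into 'pixels' goes here ...
        dirty = false;                          // then a glTexSubImage2D upload
    }
};
```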

5. Please point out libraries that have explored the options above and settled on a particular implementation with regard to hardware-accelerating their UI abstractions.

