I am finally putting some effort towards my open source GUI library, which is meant to be generic but has OpenGL 3.x as the main focus for now since that's what my engine uses.
I decided to clear out a lot of the cruft it had accumulated, dropping OpenGL 1.5 support and focusing on the OpenGL 3.2+ core profile.
To keep the library generic enough that it could eventually be used with D3D, SDL, or some other graphics API, I have separated it into modules: a core, which handles all graphics-API (GAPI) independent operations, and a renderer, which performs the API-specific ones.
I keep an image buffer in system memory that represents the screen (or the drawable area of the window hosting the GAPI), and the base renderer class draws lines, rects, and so on into this buffer.
In the GAPI-specific renderer (OpenGL in this case) I keep a texture object, and each frame (for now; later I will do it only when necessary) I issue a glTexSubImage2D with the contents of the system-memory image buffer, then render a quad (a triangle strip, really) textured with it over the whole screen.
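For reference, the full-screen textured quad can be expressed as just four interleaved vertices drawn as a triangle strip. This is only a sketch under my own assumptions (clip-space coordinates, V flipped because the CPU-side image buffer is presumably top-left origin while GL samples textures bottom-up); the names are mine, not from the library:

```cpp
#include <cstddef>

// Interleaved position (clip space) + UV for a full-screen quad drawn as
// GL_TRIANGLE_STRIP: 4 vertices, no index buffer needed.
// Vertex order: bottom-left, bottom-right, top-left, top-right.
// V is flipped so row 0 of the CPU image ends up at the top of the screen
// (assumption: the image buffer is top-left origin).
static const float kQuad[] = {
    //  x      y      u     v
    -1.0f, -1.0f,  0.0f, 1.0f,
     1.0f, -1.0f,  1.0f, 1.0f,
    -1.0f,  1.0f,  0.0f, 0.0f,
     1.0f,  1.0f,  1.0f, 0.0f,
};

// Upload once with:
//   glBufferData(GL_ARRAY_BUFFER, sizeof(kQuad), kQuad, GL_STATIC_DRAW);
// and draw each frame with:
//   glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
```

Since the quad never changes, the VBO can be static and the vertex shader can pass the positions straight through without any matrix.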
So, I was wondering whether this is the way to go, or whether I should be looking at some of the newer OpenGL 3.x+ features. It would be nice to keep the image data entirely on the GPU, but I don't think there is a simple way to access that memory to change, say, just the 15 pixels' worth of a text caret. Is there?
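One thing worth noting is that glTexSubImage2D already accepts a sub-rectangle (x, y, width, height), so even with the CPU-side buffer you can upload only the region that changed, such as the caret, instead of the whole screen each frame. A minimal sketch of a dirty-rectangle tracker plus the pointer arithmetic the upload would need; the DirtyRect type and function names are hypothetical, not from any library:

```cpp
#include <algorithm>
#include <cstdint>

// Hypothetical dirty-rectangle tracker: drawing operations mark the pixels
// they touch, and once per frame only the union rectangle is uploaded with
// glTexSubImage2D instead of the whole buffer.
struct DirtyRect {
    int x0 = 0, y0 = 0, x1 = 0, y1 = 0;  // half-open [x0,x1) x [y0,y1)
    bool empty = true;

    // Grow the rectangle to cover a newly touched w*h region at (x, y).
    void add(int x, int y, int w, int h) {
        if (empty) { x0 = x; y0 = y; x1 = x + w; y1 = y + h; empty = false; return; }
        x0 = std::min(x0, x); y0 = std::min(y0, y);
        x1 = std::max(x1, x + w); y1 = std::max(y1, y + h);
    }
    int width() const  { return x1 - x0; }
    int height() const { return y1 - y0; }
};

// Byte address of the dirty rectangle's first pixel inside the RGBA8
// system-memory buffer; bufferWidth is the full image width in pixels.
inline const uint8_t* subImagePointer(const uint8_t* pixels, int bufferWidth,
                                      const DirtyRect& r) {
    return pixels + (static_cast<size_t>(r.y0) * bufferWidth + r.x0) * 4;
}

// The upload itself would then look roughly like this (GL calls as a
// sketch only; GL_UNPACK_ROW_LENGTH tells GL how long a full row of the
// source buffer is, so it can skip to the next row of the sub-rectangle):
//   glPixelStorei(GL_UNPACK_ROW_LENGTH, bufferWidth);
//   glTexSubImage2D(GL_TEXTURE_2D, 0, r.x0, r.y0, r.width(), r.height(),
//                   GL_RGBA, GL_UNSIGNED_BYTE,
//                   subImagePointer(pixels, bufferWidth, r));
//   glPixelStorei(GL_UNPACK_ROW_LENGTH, 0);
```

This keeps the "upload only when necessary" step down to a few rows of pixels in the caret case, at the cost of tracking dirty regions in the core module.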
I also considered keeping multiple textures (one per window/widget), but I don't know how optimal that would be. I guess as long as the window dimensions don't change much, it may be an improvement.