Hello,
In the past I've built a pretty simple OpenGL text rendering engine that uses the AngelCode BMFont library. I'm currently in the process of converting it to DirectX. As I'm doing my conversion, I'm looking for ways that I could improve my logic. I'm still very much a beginner, and was hoping to be pointed in the right direction to optimize my engine.
The application is a console emulator. It displays 101 characters by 36 characters, for a total of 3,636 characters on screen at once. The number of characters only changes when the user resizes the console window.
Right now, I'm just using one large vertex buffer and rendering it with DrawPrimitive on every paint. Sometimes all of the vertices change, but often only a small subset is updated because I'm only touching a few characters. For now, this does the job just fine, but I'm most likely going to be adding Direct2D support to this engine, so I'd like to minimize the work done by the bitmap font rendering part of the engine.
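For context, here's the kind of bookkeeping I've been considering: track the contiguous range of characters that changed since the last upload, then upload only that slice of the vertex buffer (e.g. via a partial Lock with D3DLOCK_NOOVERWRITE in D3D9, or Map with D3D11_MAP_WRITE_NO_OVERWRITE in D3D11). This is just a minimal sketch; the 6-vertices-per-quad layout, the 20-byte vertex size, and all names are assumptions of mine, not anything final:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <cstdint>

// Assumed layout: 6 vertices per character quad (two triangles),
// matching a non-indexed DrawPrimitive setup.
constexpr std::size_t kVerticesPerChar = 6;
constexpr std::size_t kVertexSize = 20; // e.g. float3 position + float2 texcoord

// Tracks the contiguous dirty range of characters since the last upload,
// so only that slice of the vertex buffer needs to be re-sent to the GPU.
struct DirtyRange {
    std::size_t first = SIZE_MAX; // first dirty character index
    std::size_t last  = 0;        // one past the last dirty character

    void MarkDirty(std::size_t charIndex) {
        first = std::min(first, charIndex);
        last  = std::max(last, charIndex + 1);
    }
    bool Empty() const { return first >= last; }
    void Clear() { first = SIZE_MAX; last = 0; }

    // Byte offset and size of the slice to upload with a partial
    // buffer lock/map instead of rewriting the whole buffer.
    std::size_t ByteOffset() const {
        return first * kVerticesPerChar * kVertexSize;
    }
    std::size_t ByteSize() const {
        return (last - first) * kVerticesPerChar * kVertexSize;
    }
};
```

So if characters 10 through 12 change, only 3 quads' worth of vertex data would be uploaded rather than all 3,636. Is something like this the right direction, or is there a better-established pattern?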
What should I do so that, when I only need to update a couple of characters, I don't have to send the whole vertex buffer back to the GPU?
Thoughts?
TL;DR - bitmap font rendering engine, currently using one vertex buffer. How do I optimize for the times when only a small part of the VB changes?