[Screenshot of the font renderer's output]
I got it working eventually by using CreateDIBSection to create a DIB, and then rendered onto that. One really nice thing about that is that CreateDIBSection just gives you a pointer to the raw memory representing the bits in the DIB for free. The only thing you have to do in return is call GdiFlush() before touching it, to make sure GDI drawing calls have all been executed.
Because I'm sure you all care, here's the source code for CFont::Create(). (Actually, the reason is that if this shows up in Google, having working code here helps people a lot more):
bool CFont::Create(const std::string& strFace, int nPointSize, unsigned int nFlags)
{
    // Destroy old font
    Destroy();

    // Create a memory DC
    m_hDC = CreateCompatibleDC(NULL);
    if(!m_hDC)
    {
        CLog::ErrorFormat("ERROR : CreateCompatibleDC() failed. Error code: %d\n", GetLastError());
        return false;
    }

    // Create a GDI font
    int nHeight = MulDiv(nPointSize, GetDeviceCaps(m_hDC, LOGPIXELSY), 72);
    m_hFont = CreateFont(-nHeight, 0, 0, 0, (nFlags&FLAG_Bold) ? FW_BOLD : FW_NORMAL,
        (nFlags&FLAG_Italic) ? TRUE : FALSE, (nFlags&FLAG_Underline) ? TRUE : FALSE,
        (nFlags&FLAG_Strikeout) ? TRUE : FALSE, ANSI_CHARSET, OUT_DEFAULT_PRECIS,
        CLIP_DEFAULT_PRECIS, DEFAULT_QUALITY, DEFAULT_PITCH | FF_DONTCARE, strFace.c_str());
    if(!m_hFont)
    {
        CLog::ErrorFormat("ERROR : CreateFont() failed. Error code %d\n", GetLastError());
        DeleteDC(m_hDC);
        m_hDC = NULL;
        return false;
    }

    // Calculate glyph size (next power of 2 from height)
    Assert(nHeight > 0);
    m_nGlyphSize = RoundUpToPowerOfTwo(nHeight+4); // Add some buffer space
    if(m_nGlyphSize < 32) m_nGlyphSize = 32;

    // Create a memory bitmap for rendering glyphs to
    BITMAPINFO theBmp;
    memset(&theBmp, 0, sizeof(theBmp));
    theBmp.bmiHeader.biSize = sizeof(BITMAPINFOHEADER);
    theBmp.bmiHeader.biWidth = m_nGlyphSize;
    theBmp.bmiHeader.biHeight = -m_nGlyphSize; // Negative height = top-down DIB
    theBmp.bmiHeader.biPlanes = 1;
    theBmp.bmiHeader.biBitCount = 32;
    m_hBmp = CreateDIBSection(m_hDC, &theBmp, DIB_RGB_COLORS, (void**)&m_pBits, NULL, 0);
    if(!m_hBmp)
    {
        CLog::ErrorFormat("ERROR : CreateDIBSection() failed. Error code %d\n", GetLastError());
        m_nGlyphSize = 0;
        DeleteObject(m_hFont);
        m_hFont = NULL;
        DeleteDC(m_hDC);
        m_hDC = NULL;
        return false;
    }

    // Select everything into the DC
    m_hBmpOld = (HBITMAP)SelectObject(m_hDC, m_hBmp);
    m_hFontOld = (HFONT)SelectObject(m_hDC, m_hFont);
    SetBkColor(m_hDC, RGB(0, 0, 0));
    SetTextColor(m_hDC, RGB(255, 255, 255));

    // All done
    m_strFace = strFace;
    m_nSize = nPointSize;
    m_nFlags = nFlags;
    return true;
}
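RoundUpToPowerOfTwo isn't shown here; a typical bit-twiddling version (my sketch, not necessarily what the engine actually uses) looks like this:

```cpp
#include <cassert>

// Round n up to the next power of two (returns n unchanged if it already
// is one). Classic bit-smearing trick for 32-bit values; n must be nonzero.
unsigned int RoundUpToPowerOfTwo(unsigned int n)
{
    assert(n != 0);
    --n;            // So exact powers of two map to themselves
    n |= n >> 1;    // Smear the highest set bit downwards...
    n |= n >> 2;
    n |= n >> 4;
    n |= n >> 8;
    n |= n >> 16;   // ...until every bit below it is set
    return n + 1;   // All-ones plus one is a power of two
}
```

So a 20-pixel-high font (plus the 4 pixels of buffer) lands in a 32x32 glyph cell, which is also why the minimum clamp of 32 rarely kicks in for readable point sizes.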
The bit commented out at the bottom is the code used to generate the above screenshot.
I'm not going to post the code for PreloadGlyphs, because it's pretty big. All it does is go through each wchar of the string passed in, calling FillRect then DrawTextW to draw the glyph. Then it calls GdiFlush(), and I lock the texture and start copying stuff to it. If anyone wants the full source for some reason, post here and I'll reply in a comment (assuming I check my comments, so don't bother if this is a week old), or you can find a link to my e-mail address in my profile if you want it e-mailed to you.
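The copy step is the only slightly fiddly part: because the text is drawn pure white on a pure black background, any one channel of each 32-bit DIB pixel can be used directly as the glyph's alpha. A hedged sketch of that conversion (CopyGlyphAlpha is my own name for it, and the pitches/layout are assumptions, not the engine's actual code):

```cpp
#include <cstdint>
#include <cstddef>

// Convert white-on-black 32-bit DIB pixels into an 8-bit alpha channel for
// a texture. With white text on black, every colour channel equals the
// glyph coverage, so we can just take the low byte of each pixel.
void CopyGlyphAlpha(const uint32_t* pSrc, size_t srcPitch,   // pitch in pixels
                    uint8_t* pDst, size_t dstPitch,          // pitch in bytes
                    size_t width, size_t height)
{
    for(size_t y = 0; y < height; ++y)
    {
        const uint32_t* pSrcRow = pSrc + y * srcPitch;
        uint8_t* pDstRow = pDst + y * dstPitch;
        for(size_t x = 0; x < width; ++x)
            pDstRow[x] = (uint8_t)(pSrcRow[x] & 0xFF); // One channel = coverage
    }
}
```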
So I'm happy now. Tomorrow I'll be dealing with actually drawing the text, but that's piss easy. My CFont::DrawText() call will just supply a list of SOSprite objects, one for each glyph. That gives me the advantage of being able to do things like wobble a string on a sine wave by adjusting the height of each Glyph. And it's effectively free to do in my engine.
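The sine-wave wobble, for instance, is just a per-glyph vertical offset. A quick sketch of the idea (WobbleOffsets and its parameters are my invention, not the engine's API):

```cpp
#include <cmath>
#include <vector>

// Per-glyph vertical offsets for a sine-wave text effect: each glyph's
// height is offset by a sine of its index, phase-shifted by time so the
// wave scrolls along the string. Amplitude is in pixels; wavelength is
// measured in glyphs.
std::vector<float> WobbleOffsets(size_t numGlyphs, float time,
                                 float amplitude, float wavelength)
{
    const float twoPi = 6.28318530718f;
    std::vector<float> offsets(numGlyphs);
    for(size_t i = 0; i < numGlyphs; ++i)
        offsets[i] = amplitude * std::sin(twoPi * (i / wavelength) + time);
    return offsets;
}
```

Each frame you'd add offsets[i] to the y position of the i-th glyph's sprite before submitting it.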
I've realised that I could really do with making a free list for SOSprites. I know, I know, premature optimisation and all that, but doing DS development has taught me to get things like this out of the way sooner rather than later. And since each letter of text is a sprite now, that's a lot of sprite objects. I should be able to overload the member operator new to do that, which will be nice. Another advantage of using NEW in my memory manager is that it won't get in the way here.
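The member operator new trick looks roughly like this. This is a minimal sketch, assuming SOSprite is at least pointer-sized (the real class certainly is) and ignoring thread safety; the two-float body here just stands in for the engine's actual layout:

```cpp
#include <cstddef>
#include <new>

// Per-class free list: freed objects are threaded onto a singly linked
// list and handed back out by the next allocation, so steady-state sprite
// churn (one sprite per glyph, per frame) does no real heap traffic.
class SOSprite
{
public:
    float x, y; // Stand-in members; must keep sizeof >= sizeof(void*)

    static void* operator new(size_t size)
    {
        if(s_pFreeList)                  // Reuse a previously freed node
        {
            FreeNode* pNode = s_pFreeList;
            s_pFreeList = pNode->pNext;
            return pNode;
        }
        return ::operator new(size);     // Free list empty: hit the heap
    }

    static void operator delete(void* p)
    {
        if(!p) return;
        FreeNode* pNode = static_cast<FreeNode*>(p); // Thread onto the list
        pNode->pNext = s_pFreeList;
        s_pFreeList = pNode;
    }

private:
    struct FreeNode { FreeNode* pNext; };
    static FreeNode* s_pFreeList;
};

SOSprite::FreeNode* SOSprite::s_pFreeList = nullptr;
```

One caveat of this shape: memory on the free list is never returned to the heap, which is usually exactly what you want for a fixed pool of text sprites.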
Also, once I've got full 2D support, I'll be releasing the entire engine source code for people to look at and use. It's pretty straightforward, but could do with a bit of tidying. For example, just to get a sprite on the screen you need to set a bunch of parameters, and most of them could be given default values.
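One common way to cut that boilerplate is a parameter struct whose constructor fills in sensible defaults, so callers only override what they care about. SpriteParams and all its fields are hypothetical names for illustration, not the engine's real interface:

```cpp
// Hypothetical parameter block: every field gets a usable default, so
// "just draw a sprite" needs nothing beyond a position.
struct SpriteParams
{
    float x, y;             // Screen position
    float fScale;           // Uniform scale factor
    float fRotation;        // Radians
    unsigned long dwColour; // ARGB modulation colour
    int nLayer;             // Draw order

    SpriteParams(float fX = 0.0f, float fY = 0.0f)
        : x(fX), y(fY), fScale(1.0f), fRotation(0.0f),
          dwColour(0xFFFFFFFFul), nLayer(0) {}
};
```

A caller then writes SpriteParams(100.0f, 50.0f) and tweaks only the odd field out, instead of filling in six parameters every time.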
So, once 2D is done, I'll take a step back and do a serious engine code tidy. I don't know if I'll release the GUI code with the engine, since it's not engine code, but it is useful to have. I can always supply it separately; it's completely modular (apart from its dependence on the engine, of course).
I also finally got around to posting on the DirectX private newsgroups about D3D and thread stuff, since it's been driving me insane lately (My app takes about 20 seconds to start while it loads some textures).
I've noticed that when using the debug Direct3D runtimes, 2 threads are created every time I call IDirect3DTexture9::GetSurfaceLevel(), or when I create a resource (VB, IB, and texture tested so far). My debug output is full of "The thread 'Win32 Thread' (0xd7c) has exited with code 0 (0x0)." type messages.
Now, I thought this might be some sort of performance tweak, since I've only noticed it happening since I got my dual core processor, but it only happens with the debug runtimes, and it causes a huge slowdown - where loading a 256x256 texture with a full mipmap chain through D3DX used to take a fraction of a second, it now takes around 2 seconds and spawns 18 threads - which causes a lot of jittering in my app.
Something that may be related: Jack mentioned in his GameDev.Net journal that he apparently gets a load of threads spawned when making a defensive copy of the implicit swap chain.
Anyone know anything about this?
Anyway, Sugar Rush is about to start, so I'm off to bed.