Custom font with D3DX

Started by
23 comments, last by MikeVitt 18 years, 4 months ago
What I mean by the shadows is that I blur the characters in the alpha channel so that they overlap the original characters in such a fashion that they look like shadows. Have you ever read the book titled “The Dark Side of Game Texturing”? The author describes making blood textures (or the like) using a flat color with the actual sprite stored in the alpha channel. The alpha-blended and/or alpha-tested result looks like a blood stain, or whatever else the sprite mimics.

Essentially, this is how I would like to achieve my shadows. One could call them shadows, or just as easily glows, but they are composed only in the alpha channel and not in the color (RGB) channels of the sprite. When blending using the alpha channel, parts of the background should show through wherever the blurred alpha values are. In fact, I could find an image format, or build my own, that supports multiple alpha channels used to make drop shadows or glows.

I would accomplish this by creating several alpha textures (similar to the blood textures described above) for a set of sprites or characters. Then, simply render the alpha sprite with alpha blending first, and the non-alpha sprite next. By properly configuring a few render states in Direct3D, I can change the color of the rendered alpha sprite. Using this effect, I can make glows, shadows, or whatever other overlay effects I need. Better yet, I can oscillate the alpha test reference or blend factor between two specified values to further simulate a glowing effect.
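
Here is a minimal sketch of that two-pass idea in Direct3D 9 fixed-function terms (the DX8 render states are nearly identical); the device pointer, the quad-drawing helpers, and the exact stage setup are my assumptions, not something from the original post:

#include <d3d9.h>

// Draw the blurred, alpha-only "shadow" sprite first, tinted via the texture
// factor, then draw the normal sprite on top with ordinary alpha blending.
void DrawSpriteWithShadow(IDirect3DDevice9* dev,
                          IDirect3DTexture9* shadowAlphaTex,
                          IDirect3DTexture9* spriteTex,
                          D3DCOLOR shadowColor)       // e.g. D3DCOLOR_ARGB(160, 0, 0, 0)
{
    dev->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
    dev->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
    dev->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

    // Pass 1: color comes from the texture factor, alpha from the blurred texture,
    // so the same alpha sprite can be tinted into a shadow or a colored glow.
    dev->SetRenderState(D3DRS_TEXTUREFACTOR, shadowColor);
    dev->SetTexture(0, shadowAlphaTex);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_SELECTARG1);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TFACTOR);
    dev->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_SELECTARG1);
    dev->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    // DrawQuad(dev);   // hypothetical helper that draws the sprite quad

    // Pass 2: the normal sprite, standard texture * diffuse modulate.
    dev->SetTexture(0, spriteTex);
    dev->SetTextureStageState(0, D3DTSS_COLOROP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    dev->SetTextureStageState(0, D3DTSS_ALPHAOP,   D3DTOP_MODULATE);
    dev->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    dev->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);
    // DrawQuad(dev);
}

Oscillating the alpha byte of D3DRS_TEXTUREFACTOR between two values per frame would give the pulsing glow mentioned above.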

Now, onto the actual animation:
What I need to accomplish uses lots of text effects and transitions (text & screen). When I was writing out the technical design of exactly what I wanted, I found myself wondering how this would all work together smoothly. With so many different animations running at one time, it further complicated things. I thought, “How would I show all these animations correctly? I wonder if I should use some sort of timeline or framing mechanism.” So now I find myself having problems managing the animation for everything I need. I would like the framework to be somewhat robust, in such a way that all animated objects are thought of the same way via a common interface. This would enable me to write a callback system that attaches the correct implementation to the correct “identified” animated object.

This sounds most pleasing and proper to me. I have figured out many ways to accomplish the callback system. It’s quite easy: just a list of enumerated constants caught in a switch block that assigns function pointers to the correct rendering routines. The animated object also holds other information and variables, but most are common traits shared by most objects, for example position, scale, rotation, color, and texture (using a texture matrix to achieve this, not direct access to texture coordinates).
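
Something like this is what I have in mind for the callback binding (the type names and rendering routines below are placeholders for illustration, not a finished design):

// Common interface for every animated object, plus a switch that binds the
// correct rendering routine to each "identified" object type.
struct AnimObject;
typedef void (*RenderFunc)(AnimObject& obj, float elapsed);

enum AnimType { ANIM_SPRITE, ANIM_FADE, ANIM_TEXT };

struct AnimObject
{
    AnimType   type;
    RenderFunc render;             // set by BindRenderer below
    float      x, y;               // position
    float      scaleX, scaleY;     // scale
    float      rotation;
    unsigned   color;              // ARGB
};

void RenderSprite(AnimObject&, float) { /* draw a sprite frame */ }
void RenderFade  (AnimObject&, float) { /* draw a full-screen fade */ }
void RenderText  (AnimObject&, float) { /* draw animated text */ }

void BindRenderer(AnimObject& obj)
{
    switch (obj.type)
    {
    case ANIM_SPRITE: obj.render = RenderSprite; break;
    case ANIM_FADE:   obj.render = RenderFade;   break;
    case ANIM_TEXT:   obj.render = RenderText;   break;
    }
}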

Next, I need to complete the framework with the animation components for frame-based animation across a given timeline. How would I accomplish this? This is my biggest problem and the part I’m having trouble with. Some objects may appear in a timeline that were not originally there. So an event system that stores references to these objects is used to track how many objects are actively being animated. I would like to cap them at a set maximum, held in my list-array.

Hmm… Maybe I’m actually answering my question as I write. The frames are what bother me most. For example, a fade starts and pauses, then resumes and finally ends. The start, pause, resume, and end can all be represented with frames, but how? I have just those few frames, but they stretch across a lengthy timeline, well, 4 seconds if you consider that lengthy. The actual animation frames will most likely be hard-coded. (TODO: <- Fix hard-coded mess) I would like my animations to be described by an outside source so I don’t have to recompile each time I need to test them.
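
As a sanity check on that keyframe idea, here is a rough sketch of how the 4-second fade (start, pause, resume, end) could be described as data, the kind of thing that would eventually live in an external file instead of being hard-coded (names are made up):

struct AlphaKey
{
    float time;    // seconds from the start of the animation
    float alpha;   // alpha value at this key
};

// Fade in, hold (the "pause"), then fade back out over 4 seconds.
static const AlphaKey kFadeKeys[] =
{
    { 0.0f, 0.0f },   // start
    { 1.0f, 1.0f },   // fully faded in, pause begins
    { 3.0f, 1.0f },   // pause ends, resume
    { 4.0f, 0.0f },   // end
};

// Linear interpolation between the two keys that bracket time 't'.
float SampleAlpha(const AlphaKey* keys, int count, float t)
{
    if (t <= keys[0].time)       return keys[0].alpha;
    if (t >= keys[count-1].time) return keys[count-1].alpha;
    for (int i = 1; i < count; ++i)
    {
        if (t <= keys[i].time)
        {
            float f = (t - keys[i-1].time) / (keys[i].time - keys[i-1].time);
            return keys[i-1].alpha + f * (keys[i].alpha - keys[i-1].alpha);
        }
    }
    return keys[count-1].alpha;
}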

Hmm… Maybe in my transition phase I could use a queue method that sets an animation to be processed by pushing it into my list-array. That would be the initialization. Then I simply run the necessary loop, checking for input, window messages, and the like while processing and rendering the animations. Most importantly, I need to check whether the queue is empty. If the queue is empty, all the animations have stopped, which signifies that the transition is over.
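
A bare-bones version of that loop might look like this (UpdateAndRender and the message pump are hypothetical helpers, not anything from the post):

#include <list>

struct Animation { /* timeline data, current time, etc. */ };

// Hypothetical helper: advances one animation and draws it,
// returning false once that animation has finished.
bool UpdateAndRender(Animation& a, float dt);

// One frame of the transition: step every queued animation and drop the
// finished ones. Returns true when the queue is empty, i.e. the transition is over.
bool StepTransition(std::list<Animation>& queued, float dt)
{
    // (check input and pump window messages here as well)
    for (std::list<Animation>::iterator it = queued.begin(); it != queued.end(); )
    {
        if (!UpdateAndRender(*it, dt))
            it = queued.erase(it);    // this animation has stopped
        else
            ++it;
    }
    return queued.empty();
}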

But during the actual game I won’t be waiting for those animations to end at all; I’ll just continue on without a care. This brings up another interesting topic, though: how do I switch to a transition without affecting objects currently sitting in the queue? I bet I just need another cup of coffee to think about it.
Take back the internet with the most awsome browser around, FireFox
I think I know one of my problems. I'm trying to throw everything together, but I can’t, because each type of thing I want to animate needs to be handled in a different way.

Doo!

But I can use just lists of frame indices and a time in which to render them all. Yep, that’s it for the sprites. Now onto fade and what not.
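
In other words, something as small as this per sprite animation (illustrative names only):

#include <vector>
#include <cmath>

struct SpriteAnim
{
    std::vector<int> frames;     // indices into the sprite sheet
    float            duration;   // time to play the whole list, in seconds
};

// Frame to draw at time 't', looping over the list.
int CurrentFrame(const SpriteAnim& anim, float t)
{
    float cycle = std::fmod(t, anim.duration) / anim.duration;   // 0..1
    int   index = static_cast<int>(cycle * anim.frames.size());
    if (index >= static_cast<int>(anim.frames.size()))           // guard against t landing exactly on duration
        index = static_cast<int>(anim.frames.size()) - 1;
    return anim.frames[index];
}
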
Take back the internet with the most awsome browser around, FireFox
Quote:Original post by DrunkenHyena
A bit of an aside, but what's so special about FreeType? I understand it's cross-platform, but I've heard people rave about it for projects that aren't intended to be cross-platform. So what's the deal? I scanned (briefly) the feature list and nothing jumped out at me. Is it really so much better than using the Win32 calls? If so, why? Inquiring (and lazy) minds want to know!


FreeType provides direct access not only to individual anti-aliased glyph bitmaps, rendered from a number of different font file formats, but also to the typographical information necessary to use them properly in a custom rendering engine. The Windows API, so far as I'm aware, just doesn't have the tools to easily get at the information you need to render text in a per-pixel, typographically correct manner (for example, taking into account kerning between individual letters).
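
For anyone curious, a minimal FreeType sketch of what that looks like (error handling trimmed, and the font path is just an example):

#include <ft2build.h>
#include FT_FREETYPE_H

void DumpGlyphInfo()
{
    FT_Library lib;
    FT_Face    face;
    FT_Init_FreeType(&lib);
    FT_New_Face(lib, "C:\\Windows\\Fonts\\arial.ttf", 0, &face);
    FT_Set_Pixel_Sizes(face, 0, 24);                  // 24-pixel glyphs

    // One anti-aliased glyph: an 8-bit coverage bitmap plus its metrics.
    FT_Load_Char(face, 'A', FT_LOAD_RENDER);
    FT_Bitmap& bmp     = face->glyph->bitmap;         // bmp.buffer, bmp.width, bmp.rows
    long       advance = face->glyph->advance.x >> 6; // 26.6 fixed point -> pixels

    // Kerning between a specific pair of glyphs, e.g. 'A' and 'V'.
    FT_Vector kern;
    FT_Get_Kerning(face,
                   FT_Get_Char_Index(face, 'A'),
                   FT_Get_Char_Index(face, 'V'),
                   FT_KERNING_DEFAULT, &kern);
    long kerning = kern.x >> 6;

    (void)bmp; (void)advance; (void)kerning;          // feed these into your renderer
    FT_Done_Face(face);
    FT_Done_FreeType(lib);
}
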
Quote:The Windows API, so far as I'm aware, just doesn't have the tools to easily get at the information you need to render text in a per-pixel, typographically correct manner (for example, taking into account kerning between individual letters).


Uniscribe would be the Windows API to use for getting at this kind of information. It's what D3DXFont uses to lay out text.

xyzzy
Hello,

I also use a bitmap font in my Dx8 game. I came up with a different way to generate the bitmap in order to eliminate the problems with texture coords. I use Dx8 DrawText to render to an A8R8G8B8 texture. I render my char list one at a time and store the coordinates using the DT_CALCRECT flag for the calculation. This method also allows you to generate any font you want at run-time. Just something else to consider :)
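
For what it's worth, my reading of that approach in code (DX8-era; everything apart from DrawText and DT_CALCRECT, such as the packing logic and having the A8R8G8B8 texture already set as the render target, is my own assumption, not Mike's):

#include <windows.h>
#include <d3dx8.h>

// Measure each character with DT_CALCRECT, draw it into the render-target
// texture, and remember its rectangle as that glyph's texture coordinates.
void BuildFontTexture(LPD3DXFONT font, RECT glyphRects[256], const char* charList)
{
    const int TEX_SIZE = 256;   // assumed texture size
    int penX = 0, penY = 0, rowHeight = 0;

    for (int i = 0; charList[i] != '\0'; ++i)
    {
        char ch[2] = { charList[i], 0 };

        // First call only measures: DT_CALCRECT fills 'rc' without drawing.
        RECT rc = { 0, 0, 0, 0 };
        font->DrawTextA(ch, 1, &rc, DT_CALCRECT, 0xFFFFFFFF);
        int w = rc.right - rc.left;
        int h = rc.bottom - rc.top;

        // Wrap to the next row when the glyph won't fit on this one.
        if (penX + w > TEX_SIZE) { penX = 0; penY += rowHeight; rowHeight = 0; }

        // Second call actually draws the character into the texture.
        RECT dst = { penX, penY, penX + w, penY + h };
        font->DrawTextA(ch, 1, &dst, DT_LEFT | DT_TOP, 0xFFFFFFFF);
        glyphRects[(unsigned char)charList[i]] = dst;

        penX += w;
        if (h > rowHeight) rowHeight = h;
    }
}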

Good luck.
-Mike

