
About this blog

Where pixels trip the light fantastic.

Entries in this blog

Don't you hate it when you're writing some new image loading or texture drawing code but don't have any suitable test images? I always waste lots of time looking for a "nice" image to test with, and often end up drawing something with distinct pixel values so I can pinpoint where any given image loading bug is. With that in mind I've spent a few evenings working on a proper "test card".

TV test cards are a rare sight on broadcasts these days, with most digital channels choosing to just show a blank screen when off air, but pretty much everyone in the UK will have had the famous Test Card F burnt into their retinas at some point. Test card design is rather fascinating, as all the various elements (blocks of colour, diagonal stripes, etc.) are designed to allow particular aspects of a broadcast or TV configuration to be tested and tweaked.

Most of them aren't relevant to games though, since we don't have to worry about scan amplitudes and all that mumbo jumbo. Mostly we're concerned with getting binary data off disk and accurately displaying it on the screen (dealing with file formats, endianness, and display surface formats in the process). So while Test Card F is nice and iconic, it's not really suitable for our use.

Here's a couple of my old (crude) test images:

They're not bad, but by today's standards they're a little small. The nicer Tails one is an awkward 48x48, and neither of them is much good when you're trying to debug file format, image pitch or similar issues, since the borders and colours aren't terribly helpful.

With those issues in mind, here's my new test card:

This breaks down as:
1. Well-defined and unique border pixels make it easy to see if you're displaying the whole image or if you've accidentally sliced off an edge.
2. The main circle is for judging aspect ratio, and making sure you've not accidentally stretched or squashed the image.
3. The checkered edge markings are every 8 pixels, so you can line things up easily.
4. Various pixel patterns in the corners for checking 1:1 pixel drawing.
5. Colour and greyscale bars and gradients for general colour/gamma correction and to highlight colour precision issues.
6. Ticks mark the center of each edge for alignment.
7. Tails is always cool. [grin] Replace with your own favourite character as you see fit. The image and the text make sure you're displaying it at the correct orientation and not flipping or mirroring it.
8. The empty box at the bottom is left blank for your own logo or text.
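As a rough illustration of point 3, the 8-pixel edge markings can be generated with a simple division-and-parity check. This is my own hypothetical sketch, not the actual generator used for the card:

```java
// Hypothetical sketch: the checkered edge marking alternates every
// 8 pixels. A pixel is "on" when its 8-pixel block index is even.
public class EdgeChecker
{
    public static boolean markOn(int pixelIndex)
    {
        return (pixelIndex / 8) % 2 == 0;
    }
}
```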

One nice "feature" is the very outer border, here's a close up of the top left corner:

The scanline starts with easily identifiable red, green and blue pixels (which show up as nice round non-zero, non-FF hex numbers in a debugger's memory window), similar to a text file's byte-order mark. Just after that, the desaturated colours spell out (again, when viewed in a debugger's memory view) "RedGreenBlue TestCardA 256x256". Of course, if you're actually loading from a BGR image format that'll just be a row of junk characters, so directly after that the message is repeated (this time "BlueGreenRed TestCardA 256x256"). This lets you easily check that you've correctly calculated the start position of the image data from your image file's header and that you're reading it in the right format and endianness - if you can read "BlueGreenRed" when you were trying to load an RGB image, you know you've messed something up somewhere. :)
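For illustration, here's how the embedded marker text could be sniffed from the loading side. This is a hypothetical sketch (the class and method names are mine, and real loaders would check from the debugger rather than in code), but it shows the idea:

```java
// Hypothetical sketch: scan the first scanline's raw bytes for the
// embedded marker text to confirm which channel order you just loaded.
public class ChannelOrderCheck
{
    public static String detectOrder(byte[] scanline)
    {
        String asText = new String(scanline, java.nio.charset.StandardCharsets.US_ASCII);
        if (asText.contains("RedGreenBlue"))
            return "RGB";   // bytes came through in RGB order
        if (asText.contains("BlueGreenRed"))
            return "BGR";   // you've read a BGR layout as-is
        return "unknown";   // wrong offset, wrong endianness, or not the test card
    }
}
```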

The top right has a similar terminating series of characters:

This time it's a slightly different hex pattern for the numbers so you can distinguish the start of one scanline from the next. Similarly the rest of the image has two distinct colours all the way down the edges. These are particularly useful when debugging misaligned image data or pitch issues. The bottom row contains the same encoded messages for those crazy image formats which are stored bottom up rather than top down.

So there you go. Use it, modify it, abuse it however you see fit. If I get a chance I think I'm going to produce a few variants for lower resolutions (maybe 128x128 and 64x64) plus RGBA versions for testing alpha channels. Any suggestions for improvements or tweaks are welcome.

Licensed under Creative Commons 2.0.
Sometimes you get an idea that's a little bit oddball but for some reason you can't get out of your head and just have to run with it. This has been bouncing around my head for the last couple of days and has just been translated into code:

Yes, I embedded a web server inside Rescue Squad, and as well as static content (like the images) it can serve up live render stats straight from the game engine. I'm pleasantly surprised by how easy it was - it only took about an hour to grab NanoHttpd, integrate it, and tweak it slightly to also serve a (semi-)hardcoded stats page.

I plan to extend this somewhat to make it more useful, like having multiple (cross linked) pages of stats, a logging page containing trace messages, and possibly a resource/texture/rendertarget viewer as well. Any suggestions as to what else to (ab)use http for are welcome. :)
After I upgraded to a 720p display format (rather than just 800x600) the framerate took a little bit of a dip on my slower machine. Understandable really as it's drawing quite a few more pixels - 921k rather than 480k in the worst case, ignoring overdraw. I've spent the last few days optimising the renderer to see how much of the performance I could get back.

Firstly, you've got to have some numbers to see what's working and what isn't, so I added a debug overlay which displays all kind of renderer stats:

The top four boxes show stats for the four separate sprite groups - main sprites are visible ones like the helicopters and landscape, lightmap sprites contains lights and shadows, sonar sprites are used for sonar drawing, and hud sprites are those that make up the gui and other hud elements like the health and fuel bars. The final box at the bottom shows per-frame stats such as the total number of sprites drawn and the framerate.

Most surprising is the "average batch size" for the whole frame - at only 4.1, that means I'm only drawing about four sprites per draw call, which is quite bad. (Although I call them sprites, there's actually a whole range of "drawables" in the scene - water ripples, for example, are made of RingGeometry, a ring built from many individual triangles - but it's easier to call them all sprites.)

Individual sprite images (such as a building or a person) are packed into sprite sheets at compile time. In theory that means that you can draw lots of different sprites in the same batch because they're all from the same texture. If however you're drawing a building but then need to draw a person and the person is on a different sheet, then the batch is "broken" and it's got to start a new one.
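The bookkeeping behind the "average batch size" stat can be sketched like this. The class and method names are my own invention, not engine code; it just shows how a texture switch "breaks" the batch:

```java
// Hypothetical sketch of batch bookkeeping: a texture switch ends the
// current batch and starts a new one; the stats overlay then derives the
// average batch size as sprites drawn / batches issued.
public class BatchStats
{
    private int batches = 0;
    private int sprites = 0;
    private Object currentTexture = null;

    public void draw(Object texture)
    {
        if (currentTexture == null || !currentTexture.equals(texture))
        {
            currentTexture = texture; // batch broken: a new draw call is needed
            batches++;
        }
        sprites++;
    }

    public float averageBatchSize()
    {
        return batches == 0 ? 0.0f : (float)sprites / (float)batches;
    }
}
```

Drawing A, A, B, A would issue three batches for four sprites - an average batch size of 1.33, which is why interleaved sheets hurt so much.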

To test this out I increased the sprite sheet size from 512x512 to 1024x1024 and then 2048x2048. For the main sprites (which are the ones I'm focusing on) this pushed the average batch size up from 5.3 to 5.6 and then 16.2. Obviously the texture switching was hurting my batch sizes - 16 would be a much more respectable figure. Unfortunately not everyone's graphics card can load textures that big, which is why I'd been sticking to a nice safe 512x512.

However further investigation found that the sprite sheets weren't being packed terribly efficiently - in fact packing would give up when one sprite wouldn't fit and a new sheet would be started. This would mean that most sheets were only about half full - fixing the bug means that almost all of the sheet is now used. Below you can see one of the fixed sheets, with almost all the space used up.

Along with this I split my sheets into three groups - one for the landscape sprites (the grass and coast line), one for world sprites (helicopters and people) and one for gui sprites. Since these are drawn in distinct layers it makes sense to keep them all together on the same sheets rather than randomly intermingling them.

One last tweak was to shave off a few dead pixels on some of the larger sprites - since they were 260x260 it meant that only one could fit onto a sheet and would leave lots of wasted space. Trimming them down to 250x250 fits four in a single sheet and is much more efficient.

Overall these changes pushed the average batch size for main sprites up to a much healthier 9.2, reducing the number of batches from ~280 to ~130.

Good, but there's still optimisations left to be done...
I decided that the searching aspects of the gameplay are largely ruined by showing a larger area of the map at larger resolutions, so I've ruled that out as a possible way of dealing with multiple resolutions. Instead I've decided to pinch a trick from console games - the game world will internally always be rendered to a "720p" texture, and that'll then be stretched over the full screen to upscale or downscale to the native resolution as appropriate.

I say "720p" because (similar to how TVs do things) there isn't a single fixed resolution; instead it'll always be 720 lines vertically, and the number of horizontal pixels will vary depending on the aspect ratio. So someone with a 16:9 screen will have a virtual resolution of 1280x720, whereas those on 4:3 displays will get 960x720. In windowed mode the virtual resolution always matches the physical window size, so you still get nice 1:1 graphics when viewed like this. For fullscreen, the stretching may mean you lose some sharpness, but doing it manually in-game gives much better quality than letting the user's TFT do the scaling.
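The virtual-resolution rule boils down to one line of arithmetic; here's my own sketch of it:

```java
// Sketch: always 720 vertical lines, with the horizontal size scaled
// to match the physical display's aspect ratio.
public class VirtualRes
{
    public static int virtualWidth(int physicalWidth, int physicalHeight)
    {
        return Math.round(720.0f * physicalWidth / physicalHeight);
    }
}
```

A 1280x720 or 1920x1080 display both give a 1280x720 virtual resolution, while 800x600 and 1024x768 (both 4:3) give 960x720.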

The menus are also drawn over the top with a 720p virtual resolution, but without the render-to-texture step (they're just scaled using the projection matrices). The HUD is the exception to the rule in that it's always rendered over the top at the native resolution instead of the virtual resolution. This is possible since the components are all relatively positioned according to the screen edges.

Different aspect ratios are also handled TV-style, in that anything less than 16:9 has bits of the edges chopped off. That's not a problem, as it'll just make those with 4:3 displays a little more blinkered. And to avoid having to have scalable menus or multiple menu layouts, I just need to keep the important stuff inside the center 4:3 area, which is easy enough now I've got red guidelines to mark off the areas for different aspect ratios.
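For a 16:9 virtual resolution of 1280x720, the centred 4:3 safe area works out like this (a sketch of the arithmetic, not editor code - the class name is mine):

```java
// Sketch: compute the centred 4:3 "safe" band inside a wider virtual screen.
public class SafeArea
{
    // Returns { x offset, width } of the centred 4:3 region.
    public static int[] safe43(int virtualWidth, int virtualHeight)
    {
        int safeWidth = virtualHeight * 4 / 3;      // 720 lines -> 960 wide
        int x = (virtualWidth - safeWidth) / 2;     // 1280 wide -> 160 px margin each side
        return new int[] { x, safeWidth };
    }
}
```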

I think it all works out quite nicely - the code stays (largely) simple because it's all dealing with a single virtual resolution, the whole game looks better because it's natively at somethingx720 instead of 800x600, windowed mode still looks nice and crisp with 1:1 sprites and everyone gets the game in the correct aspect ratio - even in fullscreen.

Ooo, pretty...

So I was screwing around with some random bits of polish in the game and realised that I hadn't tried fullscreen since I got my shiny new widescreen monitor a few months back. Unfortunately, since the game is always rendered at 800x600 (and therefore a 4:3 aspect ratio), it looks a bit rubbish when stretched over a 16:10 display.

An hour of hacky tinkering later and it's running at the native res and aspect ratio, and the results are rather pretty I think:


I was surprised to find that only a few things were resolution-dependent. The HUD was the most obvious, since it used hard-coded positional values, so it now uses relative offsets from the screen edges. The menus are all borked too, since they're hardcoded to 800x600 and so end up huddled in the bottom corner; I'll worry about them later.

The problem is that I think it breaks the gameplay - searching and exploring the map is a large part of the fun and challenge, and you don't get that when you can see half or a third of it on screen at a time. In fact it makes some maps laughably easy. But visually it looks so good I'm not sure I want to let it go.

Decisions decisions...
The results are up for the 2009 4k competition, with my NiGHTS 4k entry coming a rather satisfying 11th! W00t!

Unsurprisingly (and deservedly) Left 4k Dead takes the top spot, with the worryingly addictive Bridge4k second. In fact pretty much all of the top games are worth playing; there's a surprising amount of content shoehorned into some of them.
After getting back from holiday (and recovering from it), progress on Rescue Squad 2 has been nice and steady, with lots of little tweaks and additions making it into a more rounded game. Most obviously, I've drawn a handful of new sprites required for the new features, and to make the levels look more varied.

There's a few different types of boat, including the large cargo ship which you'll have to pick up cargo from in some missions. Currently they don't move but I plan to give them some basic wandering AI in the future, and you'll have to keep an eye on them to see if they get into trouble and need rescuing.

Buildings now have "needs", and will only accept cargo of certain types. Most buildings will accept food and medical cargo, but the new hospital is where you'll have to take the injured people you rescue (rather than unhelpfully ditching them on the nearest oil rig and heading home). Buildings display what they're willing to accept in little speech bubbles so you know where to take things.

I still haven't figured out a good way to do animated waves and ripples along the coast line though, which has been bugging me for ages.
Kotaku put up an entry about the 2009 4k competition closing and mentioned NiGHTS 4k as one of the ones to check out. Woot!

The judging should be finished any time now, and while I don't expect NiGHTS 4k to be in the top games (the quality bar has been very high this year) I'm interested in hearing the comments and seeing the final standings. Plus I've only played a handful of the entries properly, so I'll find out which good ones I've missed.
Since it's the new year (yes, I'm a little slow catching up with things) I've been giving the site a bit of an overhaul. I was never really happy with the old theme, as it was very dark and narrow, making it a bit awkward to read on certain monitors. It was also pretty restrictive in terms of what I could put on the individual pages, due to the side navigation bar always being required.

So I've spent the last week working on...

Continued at TriangularPixels.com.

Scripted level

Just playing around with level scripting now that my map editor is pretty much working as I want it. Jython seems to work really well as a scripting language, and is very easy to integrate. [grin]

Vimeo video.

Incidentally, if anyone knows what magic voodoo you have to do to get Vimeo videos embedded directly into a journal, that'd be great. Just pasting in the HTML it gives you seems to be interpreted as regular text. Possibly the journals are deliberately ignoring one of the tags...

LD12 Goals

So the theme voting for LD48 is up, and as usual there's a whole bunch of interesting ones but nothing that really jumps out and grabs me. And a whole bunch of others which could be quite horrible to do (like "Film Noir", which is a nice idea but would be very content heavy and hard to do well in the short time allowed). I try not to think too hard about the themes at this point because I can never guess which theme will actually be chosen, and it's more fun to leave it until the contest starts anyway.

LD12 goals continued here...
It looks like there's another Ludum Dare 48 hour game programming competition starting soon, so I'll hopefully be taking part in that. Spread the word and join in - the more the merrier!

Meanwhile, back on my main project, I'm having much trouble just trying to get a simple "chase player" behaviour working. Pathfinding (with A*) wasn't too hard, but there are lots of icky low-level details for moving a character around the environment which are creeping into my behaviour code and generally cluttering it up. And due to the highly temporary nature of behaviours, stopping and restarting a behaviour (such as repathing when the player moves far enough to invalidate the current one) tends to lead to unpleasant animation jittering as it rapidly switches between idle and running animations.

So I'm trying to pull out some of the low-level movement and animation into a "locomotion" layer.

Continued at my new journal location...
For the last few days I've taken some time off from the AI work to mess around with some graphical effects, and in particular I've been experimenting with a 2d ambient shadows effect. This is inspired by the Screen Space Ambient Occlusion (SSAO) effect which has gotten popular lately, and is largely a translation and adaptation of it into a 2d world.

To start with, here's my test scene (unrelated to the current platformer/ai work):

More images and shader code at my new journal location...
The physics behind a jump in a platformer is pretty simple and something I must have written a thousand times, but having an AI player jump and land where it wants to is considerably trickier. My initial attempt was to string a bezier curve between the start and end points and just make the character follow that - this is easy to do and very reliable (it always reaches the exact end point) but produces unconvincing motion. It really needs to use proper physics, otherwise it looks out of place.
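As a hedged sketch of the "proper physics" alternative: pick a flight time, then solve for the launch velocity that lands the jumper exactly on the target under constant gravity. This is my own illustration of the standard ballistic approach, not code from this project:

```java
// Sketch: given a displacement (dx, dy) to the landing spot, a chosen
// flight time t and a gravity constant, solve the launch velocity by
// inverting x = vx*t and y = vy*t + 0.5*g*t^2.
public class JumpSolver
{
    public static float[] launchVelocity(float dx, float dy, float t, float gravity)
    {
        float vx = dx / t;                             // constant horizontal speed
        float vy = (dy - 0.5f * gravity * t * t) / t;  // vertical speed compensating for gravity
        return new float[] { vx, vy };
    }
}
```

Unlike the bezier version, this guarantees the landing point while still tracing a genuine parabola, at the cost of having to choose a sensible flight time.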

Continued at my new journal location...
So I've been going through as much stuff on behaviour trees as I can find, to try and figure out whether they're going to be appropriate for what I'm doing and how they actually work. "Behavior Trees for Next-Gen Game AI" is very comprehensive and well worth a watch if you're interested (despite my dislike for videos - text is just so much more practical), and I think I can see how it would come together to produce interesting behaviour.

Continued at my new journal location...
Enemy behaviour in my current (as yet unnamed) game hasn't come on very far - just a single enemy with a traditional hard-coded finite state machine (done the old-school way, with a big switch statement). That was initially fine, as it let me get some of the more important low-level details into place, but now that I'm looking at adding more enemies it's not looking so hot, so something more elaborate is called for.

Continued at my new journal location...

Hello World!

My website has been neglected lately, so I've decided it's time for a makeover. As well as a fresh new look and a new url I've decided to move my journal from GameDev.net over to here. The reliability of gamedev has been pretty low recently, and I've much more space and flexibility over here for things like bigger images, RSS and embedding videos.

Continued at my new journal location...
No proper update this week, because my website literally exploded:

This evening at 4:55pm CDT in our H1 data center, electrical gear shorted, creating an explosion and fire that knocked down three walls surrounding our electrical equipment room.

Which only highlights how much I actually use it - I can't get at my SVN repository, or my forum, and I can't upload any new screenshots or builds either. [sad]

Fortunately I've got the latest of all my code locally so it doesn't actually stop me working on new stuff, but it's rather annoying. Hopefully it'll all be sorted out soon.
Firstly, I reached 1000 commits to my SVN repository, which is nice.

Following on from the previous shadowing work, I've been adding proper night-time rendering for those late-night missions. It uses most of the same rendering code, but instead of starting with an all-white lightmap and drawing darker shadows into it, it starts completely black and lights are added in as lighter areas. Ambient light is easily done by clearing to a colour other than black, so I'm using a very dark blue which gives you just about enough light to see by.
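Per colour channel, that accumulation amounts to additive blending clamped at white, starting from the ambient clear colour. A sketch of the idea (my own, not engine code):

```java
// Sketch: the night lightmap starts at a dark ambient clear colour;
// each light adds brightness on top, saturating at full white (255).
public class NightLightmap
{
    public static int addLight(int channel, int light) // 0..255 values
    {
        return Math.min(255, channel + light);
    }
}
```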

All helicopters now come equipped with a search light, which is made from a sprite for the beam attached to the helicopter, and dynamically generated circle/ring geometry for the actual lit area on the ground. Since the ring isn't just a sprite, its properties can be changed on the fly, so it'll get larger but fuzzier if you're high up, and smaller but with sharper edges if you're low down.

The lights on the buildings are "micro lights", which are small sprites with animated vertex colours to make them pulse. They're "micro" in the sense that they're small enough not to have an area, so they only have simple depth sorting applied to them. I'm planning on adding large lights (for things like flares) which will have to have a more complicated sorting/layering method to give them a proper volume.
I've never been entirely happy with the shadows in Rescue Squad - they can't be transparent since they're made from multiple sprites, which would cause ugly double-darkening where they overlap. The solution to this is to make them solid black (as seen in this screenshot). This creates its own set of problems however, as it means they'll obscure anything you're hovering over and trying to pick up. So the shadow gets placed on a separate layer behind the objects and water ripples, which is physically daft but generally looks OK.

So instead I figured it was time to do them properly, which is what I've added now. The basic idea is to render shadows into a texture, so that overlapping shadows don't double-darken but instead clamp to pure black. This texture is then drawn semi-transparently in the normal rendering, so the black/white mask turns into a softer shadow.

As always there's a bunch of complications to deal with. Firstly, combining the generated shadow/lightmap texture with the scene requires it to be projected over all the regular sprites. That's a pain to do, so instead there are two render-to-texture passes - one that creates the lightmap and one that creates the scene at full brightness. These are then combined into the framebuffer in a final pass with a fullscreen quad and standard multitexturing.
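The final combine pass is just a per-channel modulate between the two render targets; as a sketch of the maths it performs (my own illustration of standard modulate multitexturing):

```java
// Sketch: the fullscreen combine multiplies the full-brightness scene
// by the lightmap, per colour channel - fully-lit areas (255) leave the
// scene untouched, shadowed areas darken it proportionally.
public class CombinePass
{
    public static int modulate(int scene, int light) // 0..255 channels
    {
        return scene * light / 255;
    }
}
```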

The other main snag is that the shadow should be obscured where it goes behind buildings. So we also have to draw the buildings, rocks and other objects as occluders into the lightmap texture, but in pure white. To avoid needing duplicate pure-white sprites, I use EXT_secondary_color to tint the existing sprite texture on the fly. As a bonus, the secondary colour can also be used to tint the shadows the correct shade of grey (rather than black), which simplifies the final pass to just regular multitexturing.

Render-to-texture uses FBOs where possible, but will fall back to copying from the framebuffer if they're not available, so the whole thing only needs the secondary colour extension and two-texture multitexturing, which even the lousy integrated Intel chips have by now, so it'll work pretty much anywhere. [grin]
Work on the sequel to Rescue Squad continues, and over the last couple of weeks I've been moving towards having proper landscapes instead of just random building sprites dotted around the map. On the code side there have been additions to the level editor to support layers and grid snapping, and I've just managed to finish enough tiles to do a proper mockup:

There's about 16 unique tiles done so far, and I might just be lazy and flip the right hand cliffs to make the left hand cliffs (maybe with a bit of touch up so they look slightly different).

Next up is upgrading the map editor's scripting system to cope with layers and changing the Rescue Squad export script accordingly. I'm still not sure how they'll be handled in game, perhaps just as a generic landscape sprite with a drawing priority, perhaps something more specific (like distinct layers for terrain/buildings/decals/etc.).
I seem to have inadvertently made distance maps popular at the moment. As well as lonesock and NineYearCycle experimenting with them, kevglass (author of the rather cool Slick library) has added them to Slick's font tool. [grin] You can use the "Heiro" link on the right to give it a go.

In other, less exciting, news I've been adding support for layers into my little map editor (mostly so I can do proper landscapes for Rescue Squad).

It seems to be going smoothly so far, but it does raise some funny edge cases (like trying to rubber-band select a whole bunch of objects which may be on various layers). For now I'm just going to go with what seems obvious, see what the result is like, and tweak from there.
In the spirit of sharing, here's the shader source and the image preprocessing source for the alpha magnification code.

GLSL fragment shader:

uniform sampler2D textMap;

void main()
{
    vec4 baseColour = vec4(0, 0, 1, 1);
    baseColour.a = texture2D(textMap, gl_TexCoord[0].xy).a;
    float distanceFactor = baseColour.a;

    float width = fwidth(gl_TexCoord[0].x) * 25.0; // 25 is an arbitrary scale factor to get an appropriate width for antialiasing
    baseColour.a = smoothstep(0.5 - width, 0.5 + width, baseColour.a);

    // Outline constants
    const vec4 outlineColour = vec4(1, 1, 1, 1);
    const float OUTLINE_MIN_0 = 0.4;
    float OUTLINE_MIN_1 = OUTLINE_MIN_0 + width * 2.0;
    const float OUTLINE_MAX_0 = 0.5;
    float OUTLINE_MAX_1 = OUTLINE_MAX_0 + width * 2.0;

    // Outline calculation
    if (distanceFactor > OUTLINE_MIN_0 && distanceFactor < OUTLINE_MAX_1)
    {
        float outlineAlpha;
        if (distanceFactor < OUTLINE_MIN_1)
            outlineAlpha = smoothstep(OUTLINE_MIN_0, OUTLINE_MIN_1, distanceFactor);
        else
            outlineAlpha = smoothstep(OUTLINE_MAX_1, OUTLINE_MAX_0, distanceFactor);

        baseColour = mix(baseColour, outlineColour, outlineAlpha);
    }

    // Shadow / glow constants
    const vec2 GLOW_UV_OFFSET = vec2(-0.004, -0.004);
    const vec3 glowColour = vec3(0, 0, 0);

    // Shadow / glow calculation
    float glowDistance = texture2D(textMap, gl_TexCoord[0].xy + GLOW_UV_OFFSET).a;
    float glowFactor = smoothstep(0.3, 0.5, glowDistance);

    baseColour = mix(vec4(glowColour, glowFactor), baseColour, baseColour.a);

    gl_FragColor = baseColour;
}

Java image preprocessing:

package net.orangytang.evolved.tools.filters;

import java.awt.Color;
import java.awt.image.BufferedImage;
import java.io.File;
import java.io.FileFilter;
import java.util.ArrayList;

import net.orangytang.evolved.tools.FileAttributes;

import org.lwjgl.util.Rectangle;

public class DistanceFieldFilter implements ImageFilter
{
    public FileFilter getFileFilter()
    {
        return new FileFilter()
        {
            public boolean accept(File pathname)
            {
                return true;
            }
        };
    }

    public ArrayList preProcess(ArrayList loadedImages)
    {
        return loadedImages;
    }

    private static float separation(final float x1, final float y1, final float x2, final float y2)
    {
        final float dx = x1 - x2;
        final float dy = y1 - y2;
        return (float)Math.sqrt(dx*dx + dy*dy);
    }

    public BufferedImage process(BufferedImage inImage, FileAttributes attributes, Rectangle rectangle)
    {
        final int scanSize = 200; // controls the size of the effect. Larger numbers allow larger outline/shadow regions at the expense of precision and longer preprocessing times

        final int outWidth  = inImage.getWidth() / 32;
        final int outHeight = inImage.getHeight() / 32;

        BufferedImage outImage = new BufferedImage(outWidth, outHeight, BufferedImage.TYPE_4BYTE_ABGR);
        float[][] distances = new float[outImage.getWidth()][outImage.getHeight()];

        final int blockWidth  = inImage.getWidth() / outImage.getWidth();
        final int blockHeight = inImage.getHeight() / outImage.getHeight();

        System.out.println("Block size is "+blockWidth+","+blockHeight);

        for (int x=0; x<distances.length; x++)
        {
            for (int y=0; y<distances[0].length; y++)
            {
                distances[x][y] = findSignedDistance( (x * blockWidth) + (blockWidth / 2),
                                                      (y * blockHeight) + (blockHeight / 2),
                                                      inImage, scanSize, scanSize);
            }
        }

        float max = 0;
        for (int x=0; x<distances.length; x++)
        {
            for (int y=0; y<distances[0].length; y++)
            {
                final float d = distances[x][y];
                if (d != Float.MAX_VALUE && d > max)
                    max = d;
            }
        }

        float min = 0;
        for (int x=0; x<distances.length; x++)
        {
            for (int y=0; y<distances[0].length; y++)
            {
                final float d = distances[x][y];
                if (d != Float.MIN_VALUE && d < min)
                    min = d;
            }
        }

        final float range = max - min;
        final float scale = Math.max( Math.abs(min), Math.abs(max) );

        System.out.println("Max: "+max+", Min:"+min+", Range:"+range);

        // Normalise the signed distances into 0..1, keeping the
        // "no edge found" sentinels pinned at fully inside/outside
        for (int x=0; x<distances.length; x++)
        {
            for (int y=0; y<distances[0].length; y++)
            {
                float d = distances[x][y];

                if (d == Float.MAX_VALUE)
                    d = 1.0f;
                else if (d == Float.MIN_VALUE)
                    d = 0.0f;
                else
                {
                    d /= scale;
                    d /= 2;
                    d += 0.5f;
                }

                distances[x][y] = d;
            }
        }

        for (int x=0; x<distances.length; x++)
        {
            for (int y=0; y<distances[0].length; y++)
            {
                float d = distances[x][y];
                if (Float.isNaN(d)) // note: (d == Float.NaN) is always false, so use isNaN()
                    d = 0;

                // As greyscale
                // outImage.setRGB(x, y, new Color(d, d, d, 1.0f).getRGB());

                // As alpha
                outImage.setRGB(x, y, new Color(1.0f, 1.0f, 1.0f, d).getRGB());

                // As both
                // outImage.setRGB(x, y, new Color(d, d, d, d).getRGB());
            }
        }

        return outImage;
    }

    private static float findSignedDistance(final int pointX, final int pointY, BufferedImage inImage, final int scanWidth, final int scanHeight)
    {
        Color baseColour = new Color( inImage.getRGB(pointX, pointY) );
        final boolean baseIsSolid = baseColour.getRed() > 0;

        float closestDistance = Float.MAX_VALUE;
        boolean closestValid = false;

        final int startX = pointX - (scanWidth / 2);
        final int endX   = startX + scanWidth;
        final int startY = pointY - (scanHeight / 2);
        final int endY   = startY + scanHeight;

        for (int x=startX; x<endX; x++)
        {
            if (x < 0 || x >= inImage.getWidth())
                continue;

            for (int y=startY; y<endY; y++)
            {
                if (y < 0 || y >= inImage.getHeight())
                    continue;

                Color c = new Color( inImage.getRGB(x, y) );

                if (baseIsSolid)
                {
                    if (c.getRed() == 0) // crossed from solid to empty
                    {
                        final float dist = separation(pointX, pointY, x, y);
                        if (dist < closestDistance)
                        {
                            closestDistance = dist;
                            closestValid = true;
                        }
                    }
                }
                else
                {
                    if (c.getRed() > 0) // crossed from empty to solid
                    {
                        final float dist = separation(pointX, pointY, x, y);
                        if (dist < closestDistance)
                        {
                            closestDistance = dist;
                            closestValid = true;
                        }
                    }
                }
            }
        }

        if (baseIsSolid)
        {
            if (closestValid)
                return closestDistance;
            return Float.MAX_VALUE;
        }

        if (closestValid)
            return -closestDistance;
        return Float.MIN_VALUE;
    }
}

Usual disclaimers apply - this is just from a quick weekend's worth of tinkering, so it's somewhat rough and ready. Released under a "do no evil" license - do what you want as long as you're not being evil with it.

Some notes:
There's quite a lot of magic numbers floating around which would be better exposed as parameters for more flexibility. In particular, the "scanSize" in the preprocessing controls how much of the source image is scanned for each output pixel in the distance map. Bigger numbers take more time to preprocess and produce distance maps with a greater spread (meaning you can get fatter outlines and softer shadows). In practice I found that 200 worked well for a 1024x1024 input image; if you're using different input image sizes you might want to change that accordingly.
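If you want a starting point for other image sizes, a proportional rule keeps the relative spread the same. This is my own rule of thumb extrapolated from the "200 for 1024" figure above, not something from the original code:

```java
// Hedged rule of thumb: scale scanSize linearly with the input width so the
// distance spread stays proportional (200 worked well at 1024 wide).
public class ScanSize
{
    public static int scanSizeFor(int inputWidth)
    {
        return Math.max(1, 200 * inputWidth / 1024);
    }
}
```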

The shader has a lot of magic numbers, quite a few should be moved into uniform variables for better flexibility (such as fill colour, outline colour, width of outline and shadow offset). This should be trivial.

fwidth() is the built-in screen-space derivative, and is used to get consistent antialiased edges regardless of the actual scale the image is drawn at. I found this always returned 0 on my 6600 at work, so it might be better replaced by a uniform variable set to something appropriate depending on the scale of the text.

Similarly, smoothstep() is another built-in GLSL function which returns unimpressive results on my 6600, resulting in lower-quality antialiasing. I'm not sure if this is a hardware thing or just old drivers. Either way, it might be better to replace it with a custom function that just does a linear interpolation (it might be faster too).
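A linear replacement is tiny; here's the idea in Java form for testability (in GLSL it's the same expression using clamp(), without the cubic Hermite curve real smoothstep applies):

```java
// Sketch: linear substitute for smoothstep() - a straight ramp between
// the two edges, clamped to the 0..1 range.
public class LinearStep
{
    public static float linearStep(float edge0, float edge1, float x)
    {
        float t = (x - edge0) / (edge1 - edge0);
        return Math.min(1.0f, Math.max(0.0f, t));
    }
}
```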

If anyone uses this it'd be nice if you could drop me a message and let me know. I always like seeing other people's cool screenshots. [grin]
As mentioned in this post, I've been having a crack at Valve's Improved Alpha Tested Magnification technique. I've mostly just been hacking around to see what the results are, so it's all rather special-cased and hard-coded at the moment, but it's enough to see if it's worth carrying on further.

Here's the current results:

Now you're (hopefully) thinking "that's a mighty fine looking 'a'", but the really neat part is how few resources it takes to draw - a single 32x32 sprite on a single quad with a tiny ~25-line shader. [grin] Yup, even enlarged sixteen times it retains its smooth curves, even though the actual texture data is minuscule. In fact I've found it can be enlarged pretty much indefinitely and still look awesome.

The shader is a bit too complicated for my liking (two texture samples, a fwidth() screen-space derivative, three smoothstep() calls and a couple of branches), but compared to Ysaneya's 250+ line, 10+ texture-sample shader behemoths it's pretty lightweight. I suspect I'm being a little too old-skool and worrying too much about it.

The fixed-function fallback is just done with an appropriate alpha-test value, and so won't give soft drop shadows or antialiasing but the shape is the same. And outlining can be done by drawing it twice with different alpha test values.
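Drawing twice with different alpha-test values effectively classifies each texel into fill or outline by thresholding the distance. A sketch of that thresholding (the 0.4/0.5 values are borrowed from the shader's outline constants; the rest is my own illustration):

```java
// Sketch: fixed-function outlining via two alpha-test passes. A texel whose
// distance clears the fill threshold is fill; one that only clears the lower
// outline threshold is outline; anything else is empty.
public class AlphaTestOutline
{
    public static String classify(float distance)
    {
        if (distance > 0.5f) return "fill";
        if (distance > 0.4f) return "outline";
        return "empty";
    }
}
```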

At the other end of the scale, small text isn't too bad either:

That's shrunk down to half size (16x16), and still comparable in quality to a bitmap font renderer at the same size. Drop shadows still work, but at that scale outlining tends to get pixelly due to not having enough physical space to fit a proper outline into.

The Dilemma
Now I have a dilemma - do I write a proper text renderer using this and replace my current bitmap font rendering method? For people with shader-capable cards (GeForce FX and up) it's better in all areas. For shaderless people (i.e. people with those crappy integrated Intel chips) it means they'll get scalable, aliased text instead of unscalable, antialiased text. I'm not sure that's a tradeoff I'm willing to make, as much as I'd love the greater flexibility.

Is it about time to say "screw it" to people with crappy graphics cards and just go for GL 2.0 features as a baseline (FBOs, VBOs and proper GLSL shaders)? On the one hand that only goes back as far as the GeForce FX, which is five years old now - a long time in graphics hardware. On the other hand there's a lot of people with lousy integrated Intel chips (laptop people, mostly), and I'm not sure I want to shut them out. And I'd rather not try to maintain two completely different renderers for the different paths, because I've only got one pair of hands (and, fundamentally, I want to concentrate on writing games, not hardware-specific workarounds).

So what do you people aim for, hardware-wise? As high as your development machine? An arbitrary base line spec? As low as possible? Comments highly appreciated.