# Poor Signed Distance Font Quality When Drawn Small

## Recommended Posts

My SDF font looks great at large sizes, but not when I draw it at smaller sizes. My orthographic projection matrix is set up so that each unit maps to a 1x1 pixel. The text is rendered with FreeType2 into a texture atlas at 56px with a spread of 8 pixels (generated at 8x resolution and scaled down).

In the screenshot attached to this post I'm drawing at 18px. I calculate the size of the text quads by dividing the desired size (18px in the screenshot) by the size of the glyphs in the atlas (56px in this case), and scaling each glyph sprite by that factor. So 18/56 ≈ 0.32, and I multiply the rect's size vector by that when placing vertices (this obviously doesn't apply to the vertices' texture coordinates).

I made sure that all metrics stored in my SDF font files are whole numbers (rect position/size, bearing amounts, advance, etc.), but once the font is scaled, vertex positions are almost never whole numbers. I also increase the "edge" smoothstep shader parameter for smaller text, but it doesn't seem to help all that much.

##### Share on other sites

How do you generate the mip-maps for your SDF texture?

##### Share on other sites
1 hour ago, Hodgman said:

How do you generate the mip-maps for your SDF texture?

I forgot to post that. I'm doing basic mipmapping. Here's the member function that creates the OpenGL texture object and uploads the data to its memory space:

```cpp
uint32_t Font::CreateTexture(size_t width, size_t height, const void* buffer)
{
    uint32_t handle;
    glGenTextures(1, &handle);
    glBindTexture(GL_TEXTURE_2D, handle);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, (GLsizei)width, (GLsizei)height, 0, GL_RED, GL_UNSIGNED_BYTE, buffer);
    glGenerateMipmap(GL_TEXTURE_2D);

    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
    return handle;
}
```

Here's how I call it:

```cpp
texture_ = CreateTexture(atlasWidth_, atlasHeight_, (void*)((uint8_t*)buffer + offset));
```

Here's my text shader source as well:

```cpp
static const std::string TextVertSource =
    "uniform mat4 u_transMat;"
    "attribute vec2 a_pos;"
    "attribute vec2 a_coord;"
    "varying vec2 v_coord;"
    "void main()"
    "{"
    "    v_coord = a_coord;"
    "    gl_Position = u_transMat * vec4(a_pos, 0.0, 1.0);"
    "}";
```

```cpp
static const std::string TextFragSource =
    "uniform vec4 u_params;"    // fillWidth, fillEdge, strokeWidth, strokeEdge
    "uniform vec4 u_colors[2];" // fillColor, strokeColor
    "uniform sampler2D u_tex;"
    "varying vec2 v_coord;"
    "void main()"
    "{"
    "    float distance = 1.0 - texture2D(u_tex, v_coord).r;"
    "    float fillAlpha = 1.0 - smoothstep(u_params.x, u_params.x + u_params.y, distance);"
    "    float strokeAlpha = 1.0 - smoothstep(u_params.z, u_params.z + u_params.w, distance);"
    "    float a = fillAlpha + (1.0 - fillAlpha) * strokeAlpha;"
    "    vec4 color = mix(u_colors[1], u_colors[0], fillAlpha / a);"
    "    gl_FragColor = color;"
    "    gl_FragColor.a = a;"
    "}";
```

Edited by Vincent_M

##### Share on other sites

I'm not sure whether standard mipmap generation (box-filter averaging) is a good/correct choice for signed distance fields (correct me if I'm wrong; it's 5:22 am and I've been in front of the computer for more than 20 hours, so I'm not really thinking straight).

Can you show what your distance fields look like, including the higher pyramid levels (mip levels)?
