R&D Poor Signed Distance Font Quality When Drawn Small

My SDF font looks great at large sizes, but not when I draw it at smaller sizes. My orthographic projection matrix is set up so that each unit maps to a 1x1 pixel. The text is rendered from FreeType2 into a texture atlas at 56px with a spread of 8 pixels (rendered at 8x and scaled down). In the screenshot attached to this post, I'm drawing at 18px.

I calculate the size of the text quads by dividing the desired size (18px in the screenshot) by the size of the glyphs in the atlas (56px in this case) and scaling the glyph sprite by that factor. So 18/56 ≈ 0.32, and I multiply the rect's size vector by that for vertex placement (this obviously doesn't apply to the vertices' texture coordinates). I made sure that all metrics stored in my SDF font files are whole numbers (rect position/size, bearings, advance, etc.), but once the font is scaled, vertex positions are almost never whole numbers. I also increase the "edge" smoothstep shader parameter for smaller text, but it doesn't seem to help much.
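
In code, the placement math is roughly this (the Glyph struct and names here are illustrative, not my actual types):

// Illustrative sketch of the quad scaling described above.
struct Glyph
{
	float x, y, w, h;         // rect in the atlas, whole pixels
	float bearingX, bearingY; // whole pixels at the 56px atlas size
};

void PlaceGlyphQuad(const Glyph& g, float penX, float penY, float targetPx, float atlasPx)
{
	float scale = targetPx / atlasPx; // e.g. 18.0 / 56.0 ≈ 0.32

	// Vertex positions get scaled; with integer metrics and a fractional
	// scale, these almost never land on whole pixels.
	float x0 = penX + g.bearingX * scale;
	float y0 = penY - g.bearingY * scale;
	float x1 = x0 + g.w * scale;
	float y1 = y0 + g.h * scale;

	// Texture coordinates are NOT scaled; they still address the full
	// glyph rect in the atlas.
	// ... emit quad (x0,y0)-(x1,y1) with the atlas UVs ...
}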

[Attached screenshot: Screen Shot 2017-12-11 at 9.13.20 PM.png]

1 hour ago, Hodgman said:

How do you generate the mip-maps for your SDF texture?

I forgot to post that. I'm doing basic mipmapping. Here's the member function that creates the OpenGL texture object and uploads the data to it:

uint32_t Font::CreateTexture(size_t width, size_t height, const void* buffer)
{
	uint32_t handle;
	glGenTextures(1, &handle);
	glBindTexture(GL_TEXTURE_2D, handle);

	// Upload the single-channel distance field and build the full mip chain.
	glTexImage2D(GL_TEXTURE_2D, 0, GL_R8, (GLsizei)width, (GLsizei)height, 0, GL_RED, GL_UNSIGNED_BYTE, buffer);
	glGenerateMipmap(GL_TEXTURE_2D);

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
	return handle;
}

Here's how I call it:

texture_ = CreateTexture(atlasWidth_, atlasHeight_, (void*)((uint8_t*)buffer + offset));
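
One thing worth double-checking with single-channel uploads (I don't set this in the snippet above): OpenGL's default unpack alignment is 4 bytes, so an atlas whose width isn't a multiple of 4 would upload skewed. Setting the alignment explicitly before glTexImage2D rules that out:

	// Rows of a tightly packed 8-bit, one-channel atlas aren't necessarily
	// 4-byte aligned; tell GL to read them with no row padding.
	glPixelStorei(GL_UNPACK_ALIGNMENT, 1);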

Here's my text shader source as well:

static const std::string TextVertSource =
"uniform mat4 u_transMat;"
"attribute vec2 a_pos;"
"attribute vec2 a_coord;"
"varying vec2 v_coord;"
"void main()"
"{"
"	v_coord = a_coord;"
"	gl_Position = u_transMat * vec4(a_pos, 0.0, 1.0);"
"}"
;

static const std::string TextFragSource =
"uniform vec4 u_params;" // fillWidth, fillEdge, strokeWidth, strokeEdge
"uniform vec4 u_colors[2];" // fillColor, strokeColor
"uniform sampler2D u_tex;"
"varying vec2 v_coord;"
"void main()"
"{"
"	float distance = 1.0 - texture2D(u_tex, v_coord).r;"
"	float fillAlpha = 1.0 - smoothstep(u_params.x, u_params.x + u_params.y, distance);"
"	float strokeAlpha = 1.0 - smoothstep(u_params.z, u_params.z + u_params.w, distance);"
"	float a = fillAlpha + (1.0 - fillAlpha) * strokeAlpha;"
"	vec4 color = mix(u_colors[1], u_colors[0], fillAlpha / max(a, 0.0001));" // guard the divide: a is 0 outside both fill and stroke
"	gl_FragColor = color;"
"	gl_FragColor.a = a;"
"}"
;
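
For what it's worth, a variant I've seen suggested for small text (not what I'm doing above) derives the smoothstep width from screen-space derivatives instead of per-size uniforms, so the edge stays about one pixel wide at any scale. A rough fill-only sketch, assuming the glyph edge sits at 0.5 in the field (fwidth() is built in on desktop GLSL but needs GL_OES_standard_derivatives on GLES2):

static const std::string TextFragSourceDerivatives =
"#extension GL_OES_standard_derivatives : enable\n" // GLES2 only
"uniform vec4 u_colors[2];" // fillColor, strokeColor
"uniform sampler2D u_tex;"
"varying vec2 v_coord;"
"void main()"
"{"
"	float distance = 1.0 - texture2D(u_tex, v_coord).r;"
"	float w = fwidth(distance);" // how much the field changes per screen pixel
"	float fillAlpha = 1.0 - smoothstep(0.5 - w, 0.5 + w, distance);"
"	gl_FragColor = vec4(u_colors[0].rgb, u_colors[0].a * fillAlpha);"
"}"
;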

 


I'm not sure whether standard mipmap generation (box-filter averaging) is correct for signed distance fields (correct me if I'm wrong; it's 5:22 am and I've been in front of the computer for more than 20 hours, so I'm not thinking straight).

Can you show what your distance fields (including the higher pyramid levels, i.e. mip levels) look like?
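
If it helps, here's a quick sketch of one way to dump every level to a grayscale PGM file for inspection (desktop GL only, since glGetTexImage doesn't exist in GLES):

#include <cstdint>
#include <cstdio>
#include <vector>

// Dump each allocated mip level of a GL_R8 texture to sdf_mip_N.pgm.
void DumpMipLevels(uint32_t handle)
{
	glBindTexture(GL_TEXTURE_2D, handle);
	for (int level = 0; ; ++level)
	{
		GLint w = 0, h = 0;
		glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_WIDTH, &w);
		glGetTexLevelParameteriv(GL_TEXTURE_2D, level, GL_TEXTURE_HEIGHT, &h);
		if (w == 0 || h == 0)
			break; // past the last allocated level

		std::vector<uint8_t> pixels(size_t(w) * size_t(h));
		glPixelStorei(GL_PACK_ALIGNMENT, 1); // rows are tightly packed
		glGetTexImage(GL_TEXTURE_2D, level, GL_RED, GL_UNSIGNED_BYTE, pixels.data());

		char name[64];
		std::snprintf(name, sizeof(name), "sdf_mip_%d.pgm", level);
		if (FILE* f = std::fopen(name, "wb"))
		{
			std::fprintf(f, "P5\n%d %d\n255\n", w, h);
			std::fwrite(pixels.data(), 1, pixels.size(), f);
			std::fclose(f);
		}
	}
}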
