

Member Since 19 Aug 2012
Offline Last Active Apr 30 2016 01:50 AM

Topics I've Started

Signed Distance Field Text Rendering: some questions

13 February 2016 - 03:50 PM

Hello everyone,

I've just implemented signed distance field text rendering, and I have a few minor problems I can't solve by myself.


First of all, a quick overview of the problems:

  • I advance the cursor along the x-axis by the glyph's "xAdvance" value (should be self-explanatory), but the spacing between two characters looks wrong (see screenshots). How can I fix this?
  • How can I set an absolute font size, e.g. "render this font at 12 px"? In other words, how do I calculate the correct box size for a single character?
  • The visual font weight seems to depend on the background colour (see screenshots; in the upper one the font looks clearly thicker). Why does this happen (due to alpha blending?), and how can I prevent it?
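On the second point, the usual approach with a Hiero-generated atlas is to derive a scale factor from the size the font was baked at (which the .fnt "info" line records, e.g. size=72) and multiply every pixel field of a glyph by it. A minimal sketch, with a hypothetical Glyph struct and an assumed baking size:

```cpp
#include <cassert>

// Hypothetical glyph record; width/height/xAdvance are the pixel fields
// from the .fnt file.
struct Glyph { int width, height, xAdvance; };

// Scale factor mapping atlas pixels to on-screen pixels for a desired size.
// generatedPx is the size the atlas was baked at (from the .fnt "info" line).
float fontScale(float desiredPx, float generatedPx) {
    return desiredPx / generatedPx;
}

// On-screen box of a single character at the desired size.
float glyphBoxWidth(const Glyph& g, float scale)  { return g.width  * scale; }
float glyphBoxHeight(const Glyph& g, float scale) { return g.height * scale; }
```

For instance, a glyph with width=24 baked at 72 px and rendered at 12 px gets a box 4 px wide; xoffset, yoffset and xadvance would be scaled the same way.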

So, here are some screenshots:

[screenshots]

Now, I hope you can see what I mean. If not, feel free to ask and I will try to clarify.

Here is my implementation:


The signed distance field font files were created with Hiero.

It created the texture (512x512 in my case) and a .fnt text file.

The important lines of this file (which make up almost all of it) look like

char id=33    x=484  y=202  width=24   height=60   xoffset=-7   yoffset=7    xadvance=33

where x, y, width and height are non-normalised texture coordinates, and the offsets and xadvance are pixel values.
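Such a line can be pulled apart with a plain sscanf; a sketch assuming exactly the field order shown above (the Glyph struct is hypothetical):

```cpp
#include <cstdio>
#include <cassert>

// Hypothetical glyph record matching the fields of a "char" line.
struct Glyph { int id, x, y, width, height, xOffset, yOffset, xAdvance; };

// Parse one "char id=..." line; returns false if the line doesn't match.
// Literal spaces in the format string match any run of whitespace.
bool parseCharLine(const char* line, Glyph& g) {
    int n = std::sscanf(line,
        "char id=%d x=%d y=%d width=%d height=%d xoffset=%d yoffset=%d xadvance=%d",
        &g.id, &g.x, &g.y, &g.width, &g.height,
        &g.xOffset, &g.yOffset, &g.xAdvance);
    return n == 8;
}
```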


The fragment shader, which computes the visible character shape from the signed distance field texture:

#version 330 core

in vec2 _uv;

out vec4 fragColor; // gl_FragColor is not available in the core profile

uniform vec3 color;
uniform sampler2D fontAtlas;

const float width = 0.5;
const float edge = 0.1;

void main()
{
	float dist = 1.0 - texture(fontAtlas, _uv).a;
	float alpha = 1.0 - smoothstep(width, width + edge, dist);
	fragColor = vec4(color, alpha);
}

This is how I calculate the vertices for drawing:

void Font::write(const char* text, float scale, float x, float y, float z)
{
	for (const char* p = text; *p; p++)
	{
		Glyph g = m_glyphs[*p];

		float gx = x + static_cast<float>(g.xOffset) * scale;
		float gy = y - static_cast<float>(g.yOffset) * scale;
		float gw = static_cast<float>(g.width) * scale;
		float gh = static_cast<float>(g.height) * scale;

		x += g.xAdvance * scale;

		if (!gw || !gh)
			continue; // no visual box (e.g. space): advance only

		float u1 = static_cast<float>(g.x) / static_cast<float>(m_atlas.width);
		float u2 = u1 + (static_cast<float>(g.width) / static_cast<float>(m_atlas.width));
		float v1 = static_cast<float>(m_atlas.height - g.y) / static_cast<float>(m_atlas.height);
		float v2 = v1 - (static_cast<float>(g.height) / static_cast<float>(m_atlas.height));

		m_buffer.push_back(Vertex(gx	 , gy	  , z, u1, v1));
		m_buffer.push_back(Vertex(gx	 , gy - gh, z, u1, v2));
		m_buffer.push_back(Vertex(gx + gw, gy - gh, z, u2, v2));
		m_buffer.push_back(Vertex(gx	 , gy	  , z, u1, v1));
		m_buffer.push_back(Vertex(gx + gw, gy - gh, z, u2, v2));
		m_buffer.push_back(Vertex(gx + gw, gy	  , z, u2, v1));
	}
}

This is how I render all of this:

void Font::render(const glm::mat4& ortho, const glm::vec3& color /*= glm::vec3(1.0f)*/)
	if (m_buffer.size() > 0)
		m_shader.setUniform("ortho", ortho);
		m_shader.setUniform("color", color);

		glBindTexture(GL_TEXTURE_2D, m_atlas.id);
		glBindBuffer(GL_ARRAY_BUFFER, m_vbo);
		glBufferData(GL_ARRAY_BUFFER, sizeof(Vertex) * m_buffer.size(), &m_buffer[0], GL_STREAM_DRAW);

		glDrawArrays(GL_TRIANGLES, 0, static_cast<GLsizei>(m_buffer.size()));

		glBindTexture(GL_TEXTURE_2D, 0);
		glBindBuffer(GL_ARRAY_BUFFER, 0);

If you need any other code, just say so and I'll provide it.

Edit: minor changes to questions 1 and 3.

Atmospheric Scattering

24 November 2015 - 05:54 AM

Hello everyone,


There's something special I want to implement but can't quite understand: atmospheric scattering.

My goal is to achieve a sky background where I can adjust the colour gradients and other parameters.


So I googled and found a lot of interesting material, in particular Sean O'Neil's GPU Gems 2 chapter "Accurate Atmospheric Scattering".

Question 1) Why is everything I found regarding the implementation from before 2012?


Then I had a problem: the shader described there is only the SkyFromSpace shader.

That was fixed once I found the link to the download section.


Then another problem came up: almost everyone gets the same tip: visit his website and download the improved version.

Question 2) How am I supposed to visit Sean O'Neil's old website if it is now a casino site?



Does anyone have an idea how to get the source for Sean O'Neil's 'SkyFromAtmosphere' implementation in OpenGL?

Or better: a current tutorial on atmospheric scattering?

Or is there something better these days? (Is that why everything I found is from before 2012?)


Thanks in advance,


Handling an open world

01 October 2015 - 02:42 AM

Hi all,

In the last two years I have learned a lot about 3D, OpenGL, GLSL, rendering techniques like instanced rendering, framebuffers and post-processing, but also things like ECS vs. OOP, state management, different frameworks (GLFW, SDL, SFML, ...),
physics (Bullet Physics) and so on.
Previously I made little 2D games in Java and C++.

Now I'm feeling ready to dig deeper.
My current project is a 3D open world game. Graphics aren't the focus, which is why it will simply consist of low-poly objects with low-res textures and a pixelating post-processing shader, in a sci-fi setting.
The first idea is a city which feels a little bit like a living world.

Therefore my goals are cut down to these points:
- animated neon lights and general lighting (banners, advertisements, street lamps, ambient lighting, etc.)
- walking pedestrians
- flying cars on multiple height levels
- moving around in first person

I think I can definitely achieve this in a few years with my current knowledge. But there is one simple point I can't wrap my head around:

- How are multiple (hundreds of) objects (buildings) handled? I know how to render anything, but I hadn't considered that I shouldn't render everything at the same time. How can I easily determine whether something should be rendered or not, i.e. whether it is visible from the player's current position and view direction,
without hurting performance much?
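For what it's worth, the visibility test I'm asking about is usually called frustum culling: keep a bounding sphere per object and test it against the six planes of the view frustum each frame. A minimal sketch with plain structs (all names hypothetical; in a real renderer the planes would be extracted from the view-projection matrix):

```cpp
#include <cassert>

// Plane in the form n·p + d = 0, with the normal pointing into the frustum.
struct Plane  { float nx, ny, nz, d; };
// Bounding sphere of one game object (e.g. a building).
struct Sphere { float x, y, z, radius; };

// Signed distance from the sphere's centre to the plane.
static float signedDistance(const Plane& p, const Sphere& s) {
    return p.nx * s.x + p.ny * s.y + p.nz * s.z + p.d;
}

// An object is potentially visible unless its sphere lies completely
// behind any one of the six planes.
bool sphereInFrustum(const Plane planes[6], const Sphere& s) {
    for (int i = 0; i < 6; ++i)
        if (signedDistance(planes[i], s) < -s.radius)
            return false; // fully outside this plane: cull it
    return true;
}
```

Objects failing the test are simply skipped when building the draw list; this works the same whether or not positions are stored in a scene graph.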

My ideas:
I know about texture streaming, but is something like that possible for whole game objects? Like "game object streaming"?

I've read about so-called scene graphs, but I don't really see how one would help me. If each object's position is stored relative to its parent's position, in what way does that make it simpler/faster to determine whether it is in the viewport or not?

I hope it is understandable what I mean. :)

Please don't answer if you just want to tell me something like "GTA V had over 1,000 people working on it for over three years full time". Everything else is appreciated.

Issues with BulletPhysics & OpenGL

02 February 2015 - 01:40 PM

Dear community,

I'm currently programming a 3D game in pure C++ with OpenGL, GLM, Assimp, SFML and Bullet Physics.

But now I'm at a point where I'm stuck, because I can't figure out the problem or find anything on the internet that points towards a fix.
My problem is updating my mesh's model matrix with the world matrix obtained from the btMotionState of a btRigidBody, in order to render the mesh via the shader.


The shader computes the vertex position like

gl_Position = projection_mat4 * view_mat4 * model_mat4 * vec4(position, 1.0);

where projection is computed by glm::perspective(...), view by glm::lookAt(...) and model by the physics system, see below.



So far, I'm creating the whole physics world in a class called "PhysicsManager".
It's a test, so it sets up the whole Bullet physics system like this:

m_broadphase = new btDbvtBroadphase();
m_collisionConfiguration = new btDefaultCollisionConfiguration();
m_dispatcher = new btCollisionDispatcher(m_collisionConfiguration);


m_solver = new btSequentialImpulseConstraintSolver();
m_dynamicsWorld = new btDiscreteDynamicsWorld(m_dispatcher, m_broadphase, m_solver, m_collisionConfiguration);
m_dynamicsWorld->setGravity(btVector3(0.0f, -9.81f, 0.0f));

// Floor
m_groundShape = new btStaticPlaneShape(btVector3(0, 1, 0), 1);
m_groundMotionState = new btDefaultMotionState(btTransform(btQuaternion(0, 0, 0, 1), btVector3(0, 0, -1)));
btRigidBody::btRigidBodyConstructionInfo groundRigidBodyCI(0, m_groundMotionState, m_groundShape, btVector3(0, 0, 0));
m_groundRigidBody = new btRigidBody(groundRigidBodyCI);


// Model as a sphere in BulletPhysics
m_fallShape = new btSphereShape(1);
m_fallMotionState = new btDefaultMotionState(btTransform(btQuaternion(0, 0, 0, 1), btVector3(8, 50, 8)));
btScalar mass = 2;
btVector3 fallInertia(0, 0, 0);
m_fallShape->calculateLocalInertia(mass, fallInertia);
btRigidBody::btRigidBodyConstructionInfo fallRigidBodyCI(mass, m_fallMotionState, m_fallShape, fallInertia);
m_fallRigidBody = new btRigidBody(fallRigidBodyCI);


// GLDebugDrawer from the demos of the Bullet 3.X github page
glDebugDrawer = GLDebugDrawer();
glDebugDrawer.setDebugMode(btIDebugDraw::DBG_DrawAabb | btIDebugDraw::DBG_DrawWireframe | btIDebugDraw::DBG_DrawConstraints | btIDebugDraw::DBG_DrawConstraintLimits | btIDebugDraw::DBG_DrawNormals);

And I'm updating the world each frame with 

m_dynamicsWorld->stepSimulation(1.0f / 60.0f, 10);
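A note on this call: with a constant 1/60 as the first argument, the simulation advances by exactly one fixed step per call regardless of the real frame time. stepSimulation's substepping (accumulate the real frame delta, run whole fixed steps, clamp to maxSubSteps) can be sketched in plain C++ without Bullet types (names hypothetical):

```cpp
#include <cassert>

// Sketch of Bullet-style substepping: accumulate real frame time, run as many
// whole fixed steps as fit, and clamp to maxSubSteps (excess time is dropped).
int computeSubSteps(float& accumulator, float frameDt,
                    int maxSubSteps, float fixedStep = 1.0f / 60.0f) {
    accumulator += frameDt;
    int steps = static_cast<int>(accumulator / fixedStep);
    accumulator -= steps * fixedStep;           // keep only the fractional remainder
    return steps < maxSubSteps ? steps : maxSubSteps;
}
```

Passing the measured frame delta as the first argument to stepSimulation (instead of a constant 1/60) keeps the simulation in sync with wall-clock time while the internal step stays fixed.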

After that, I'm updating the mesh's model matrix with

btTransform trans;
m_fallRigidBody->getMotionState()->getWorldTransform(trans);
trans.getOpenGLMatrix(glm::value_ptr(m_model)); // writes a column-major array, matching glm's layout

where "m_model" is a glm::mat4 initialised as "m_model = glm::mat4()".



Then rendering is done via

glBindTexture(GL_TEXTURE_2D, mesh->tex);

GLuint uniform_model = glGetUniformLocation(p_shader, "model");

glUniformMatrix4fv(uniform_model, 1, GL_FALSE, glm::value_ptr(p_objects->at(i).m_model));
glDrawArrays(GL_TRIANGLES, 0, mesh->size);

glBindTexture(GL_TEXTURE_2D, 0);

So I bind everything for the 3D mesh, draw it, and unbind it.



The problem is:

Why is my rendering not aligned with the physically computed bounding boxes?
(Not just imperfectly aligned, like an origin/centre-of-mass offset, but nowhere near the collision box.)

Can anyone give me a hint or something like that?


Here's a picture of what I mean:

[screenshot]

Help with calculating skinned models loaded with Assimp

30 September 2014 - 01:36 PM

Hey dudes.


I'm using C++ with SFML + OpenGL (+ GLM) + Assimp, and I've been struggling for a few weeks with bone-animated 3D meshes and the Assimp library while working on an animation system for my first 3D game project.

Since threads on this topic are rare on the internet, I decided to ask you guys. ^^

First of all, I can load and display static models with textures assigned properly.
The shaders for rendering to a different framebuffer and for post-processing are also working fine.
But I can't get any animation working flawlessly, because there are some things I don't understand yet.

So my main question (which is also the main problem) is: how do I get one specific keyframe of the bones and read their translations/rotations/scalings?

My thoughts are:

  1. Load the vertex bone data, so each vertex knows which bones influence it and by how much (done!)
  2. Each frame:
    - Get the time since the animation started (done!)
    - Look up the "nearest" time stamp in the keyframe list (I don't want to interpolate yet, just keyframe after keyframe for now)
    - Calculate all transformations per bone for this specific keyframe in a 4x4 matrix
    - Update the transformation matrices in the shader (maybe done, can't test it yet)

Step 1 is done properly; I've been stuck on step 2 for a few weeks now.
The problem, as far as I can pin it down: how do I look up one keyframe for each bone in Assimp's aiScene, and how do I calculate the per-bone transformations?
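The "nearest time stamp" lookup from step 2 can be sketched without Assimp types; VectorKey below is a hypothetical stand-in for Assimp's aiVectorKey (an aiNodeAnim channel stores its keys the same way, sorted by time):

```cpp
#include <cassert>
#include <cstddef>

// Simplified stand-in for aiVectorKey: a time stamp (in ticks) plus a value.
struct VectorKey { double time; float x, y, z; };

// Find the index of the last key whose time stamp is <= the current animation
// time (no interpolation: plain keyframe-stepping).
std::size_t findKeyIndex(const VectorKey* keys, std::size_t numKeys, double animTime) {
    for (std::size_t i = 0; i + 1 < numKeys; ++i)
        if (animTime < keys[i + 1].time)
            return i;
    return numKeys - 1;   // past the last key: clamp to it
}
```

With Assimp the same loop would run over channel->mPositionKeys / mNumPositionKeys comparing against mTime, and likewise for the rotation and scaling keys; the three looked-up values then build the bone's local 4x4 matrix.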

Does anybody have good tips on this?

Good evening,