

Member Since 08 Apr 2011

Topics I've Started

How to architect complicated systems

19 May 2015 - 11:37 PM

I just wrote a blog post entitled "How to architect complicated systems". This stuff probably seems obvious to a lot of you, but it really wasn't obvious to me for much of my career, so maybe it will help others out there who are just getting started with programming and system design. I'd love to know what you think!

Concept for an input library - thoughts?

04 March 2015 - 06:04 AM

I'm thinking of building a small input library which would streamline the way input events are managed and make it a lot easier to observe complex input event combinations. The library would be designed to be plugged into whatever input source is desired, i.e. if you're using GLFW, or RawInput on Windows, or whatever, you would just point any events fired at this library and it would take care of the rest.

Conceptually, usage of the library would look like this:
// fires on ctrl+A, followed by ctrl+B, then ctrl+C, with a maximum of 1000ms between keypresses
auto observer = input::observe("ctrl+a,ctrl+b,ctrl+c", 0, 1000);

// fires when the mouse is moved
auto observer = input::observe("mouse:move");

// fires when the left mouse button is clicked
auto observer = input::observe("mouse:lclick");

// fires when the right mouse button is clicked while shift is down, followed by the space bar being pressed
auto observer = input::observe("shift+mouse:rclick,space");

// fires when the A key is pressed followed by a specific delay of between 500-2000ms, followed by B, then C
auto observer = input::observe("a,delay(500,2000),b,c");

observer.then([&](const auto& state) {
    // respond to input state here (the state object can be queried for any arbitrary keyboard/mouse/etc. state)
});
I haven't actually started writing any code yet, as I wanted to (a) see if anyone else has already written (and open-sourced) something similar, and (b) get some feedback on the idea. I'd be happy to open source this if others feel it might be useful to them.

Some extra thoughts:
  • The library could easily adapt to different platforms, e.g., you could add gamepad events and so forth
  • The library could be implemented as a template class, letting you define your own event object to be passed when an event is triggered. Internally, the library would give the host application a chance to attach the input state object to that templated event argument before it reaches an observer, so your event object could also carry things like the window handle where the event was fired, your game context object, your game time class, and so on.
  • Registering an input observer could provide the option to ignore input of a given type, e.g. to handle situations where a key sequence is being observed irrespective of mouse state.
  • Gestures could hypothetically be supported as a future development...
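To make the intended shape of the API a bit more concrete, here's a rough self-contained sketch of the observer plumbing. All names here (input::State, Observer::then/fire, observe) are placeholders I'm imagining — nothing is written yet, and the pattern matching itself is elided:

```cpp
#include <functional>
#include <string>
#include <utility>
#include <vector>

namespace input {

struct State {
    // in a real implementation this would be queryable for arbitrary
    // keyboard/mouse state at the moment the sequence completed
    std::vector<int> keysDown;
};

class Observer {
public:
    explicit Observer(std::string pattern) : pattern_(std::move(pattern)) {}

    // register a callback fired when the observed sequence completes;
    // returns *this so calls can be chained
    Observer& then(std::function<void(const State&)> cb) {
        callbacks_.push_back(std::move(cb));
        return *this;
    }

    // the host input source (GLFW, RawInput, ...) would invoke this
    // once the full pattern has been matched
    void fire(const State& s) const {
        for (const auto& cb : callbacks_) cb(s);
    }

    const std::string& pattern() const { return pattern_; }

private:
    std::string pattern_;
    std::vector<std::function<void(const State&)>> callbacks_;
};

// minDelayMs/maxDelayMs would bound the time allowed between events;
// pattern parsing and timing logic are omitted from this sketch
inline Observer observe(const std::string& pattern,
                        int /*minDelayMs*/ = 0, int /*maxDelayMs*/ = 0) {
    return Observer(pattern);
}

} // namespace input
```

The main design question is whether observers own their callbacks (as above) or whether the library hands back a subscription token for unregistering — the latter would matter once observers outlive the objects they capture by reference.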

Question for UE4 and Unity 4/5 experts, regarding capabilities

03 March 2015 - 08:48 PM

A question I haven't quite settled for myself, which I've been thinking about for a future game idea I have:
How well suited are UE4 and Unity (obviously I'm talking about 5, but I know there hasn't been much time to gain expertise with it yet) to highly procedural games? I'm talking about the kinds of games where almost everything is heavily procedurally generated, or modified in real time from baseline assets, with unused procedural assets being frequently unloaded as well. Assume that all of these things are generated by a server application that is not running the client engine, and streamed to clients on demand. Examples of the sorts of things I'm talking about:

* Terrain meshes, generated on a server and streamed to the client in real time, possibly containing a combination of voxels and polygon meshes (including dynamic level of detail and so forth)
* Materials and textures, generated from scratch, and/or applied to a baseline with many parameters affecting output
* Creature models and animations (again, heavily modified procedurally from base assets)
* Sound effects to a degree
* Weather, lighting, etc.

Let's not worry about the feasibility of my idea, I'm just curious whether these engines would fight me if I attempted to do the above, or if their API designs are flexible enough for this kind of thing.

Having trouble working out how to adjust 2D coordinates to OpenGL's coordinate system

25 February 2015 - 06:46 AM

I have a set of textured quads for text and user interface elements, very carefully positioned in relation to a top-left origin. OpenGL's y axis is inverted though, and I'm having a lot of trouble getting the calculations right so that the user interface elements retain their top-left relative positions when rendered.

Here's my default view and projection matrices:

auto projmat = glm::ortho<float>(0.0f, window.width, 0.0f, window.height, 0.0f, 1000.0f);
auto viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));

I'm aware that the ortho values are wrong, as they put my glyphs at the bottom of the screen, with the offsets pushing my glyphs upwards, instead of pushing them downwards from the top, which is what is supposed to happen:


If I change the projection to:

auto projmat = glm::ortho<float>(0.0f, window.width, -window.height, 0.0f, 0.0f, 1000.0f);

then my glyphs end up offset from each other correctly, but the baseline is now at the very top of the client area:


Here's my shader code:

#version 450

layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Normal;
layout(location=3) in vec2 in_TexCoord;

out VSOutput
{
	vec4 color;
	vec2 texCoord;
} vs_output;

struct InstanceData
{
	vec2 pos;
	vec2 scale;
	vec2 uvScale;
	vec2 uvOffset;
};

layout (std430) buffer instance_data
{
	InstanceData instances[];
};

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

void main(void)
{
	InstanceData instance = instances[gl_InstanceID];

	float magnification = 50.0f; // TODO: replace with uniform input

	// these are text glyphs so I need to scale the unit quad to the correct aspect ratio
	vec4 aspectScaling = vec4(instance.scale.x, instance.scale.y, 0, 0);
	vec4 scaledOriginalPosition = aspectScaling * magnification * in_Position;

	// we now translate the quad to the position specified by the current draw instance
	vec4 instancePosition = vec4(instance.pos.x, instance.pos.y, 0, 0);
	vec4 actualPosition = scaledOriginalPosition + instancePosition;
	actualPosition.w = 1; // set w to 1 because scaling would have screwed it up

	// perform the final world-view-projection transformation
	gl_Position = projectionMatrix * (viewMatrix * actualPosition);

	vs_output.texCoord = (in_TexCoord * instance.uvScale) + instance.uvOffset;
	vs_output.color = in_Color;
}

Note that the texture coordinate calculation works just fine - it's just the y axis stuff that's giving me issues.

Any assistance would be appreciated. I want something that ideally is applied on the GL side of things - this concern shouldn't bleed back into the UI layout code, which I'd prefer to keep relative to normal screen coordinates.
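Edit: one suggestion I've come across is to swap the bottom/top arguments, i.e. glm::ortho<float>(0.0f, window.width, window.height, 0.0f, 0.0f, 1000.0f), which makes y = 0 the top of the screen. Writing the resulting y mapping out by hand (a throwaway helper of my own, just to sanity-check the sign convention):

```cpp
// For glm::ortho(left = 0, right = w, bottom = h, top = 0), the standard
// orthographic mapping is y_ndc = 2*(y - bottom)/(top - bottom) - 1,
// which simplifies to 1 - 2*y/h: pixel y = 0 (top of the window) lands
// at NDC +1 (top of clip space), and y = h lands at NDC -1 (bottom).
float pixelYToNdc(float y, float windowHeight) {
    return 1.0f - 2.0f * (y / windowHeight);
}
```

If that's right, the flip stays entirely inside the projection matrix and the UI layout code can keep using ordinary top-left screen coordinates, which is exactly what I was after.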

Problems trying to assign texture to fragment shader

18 February 2015 - 09:35 PM

Stuck on this one; I've generated my first font atlas and I'm trying to render the entire thing out to a quad just to see if it looks right.

I'm trapping errors everywhere possible and have verified that my shader is compiling and linking correctly. There are no shader/program logs indicating any errors there. The font atlas is an unsigned char array, with one greyscale (0-255) byte per pixel.

The line I'm having trouble with is:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, atlas->width, atlas->height, 0, GL_RED, GL_UNSIGNED_BYTE, atlas->bitmap->data());
I'm getting "invalid operation" (GL_INVALID_OPERATION) back from this call.

I have confirmed that the width and the height being passed in are 512 each, and that the std::vector containing the data has size 262144 (which is 512*512, as expected).

My shader code is:
#version 450

in VSOutput
{
	vec4 color;
	vec2 texCoord;
} vs_output;

out vec4 out_color;

uniform sampler2D fontSampler;
uniform sampler2D fontSampler2;

void main(void)
{
	out_color = texture(fontSampler, vs_output.texCoord);
}

The function I wrote to load the texture into video memory is:
static GLuint loadTexture(std::shared_ptr<ui::FontAtlas> atlas, const std::string& samplerName, const GLuint shaderProgramId, const GLuint textureIndex = 0)
{
	GLuint textureId;
	glGenTextures(1, &textureId);
	RETURN_0_IF_GL_ERROR("Error generating texture ID");

	glActiveTexture(GL_TEXTURE0 + textureIndex);
	RETURN_0_IF_GL_ERROR("Error setting active texture");
	glBindTexture(GL_TEXTURE_2D, textureId);
	RETURN_0_IF_GL_ERROR("Error binding to texture");

	printf("%dx%d, %zu bytes\n", atlas->width, atlas->height, atlas->bitmap->size());
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, atlas->width, atlas->height, 0, GL_RED, GL_UNSIGNED_BYTE, atlas->bitmap->data());
	RETURN_0_IF_GL_ERROR("Error loading image data into the texture");

	glUniform1i(glGetUniformLocation(shaderProgramId, samplerName.c_str()), textureIndex);
	RETURN_0_IF_GL_ERROR("Error attaching texture to uniform location");

	return textureId;
}

And here's the definition of the font atlas:
class FontAtlas
{
public:
	int width;
	int height;
	std::shared_ptr<std::vector<unsigned char>> bitmap;

	FontAtlas(int width, int height) : width(width), height(height) {}
};

Finally, here are the lines of code I'm calling to generate the font atlas and assign it to a texture, so you can see the parameters I'm passing in:
int id = ui::loadFont("exo/Exo-Regular.otf");
auto atlas = ui::generateFontAtlas(id, 48, 5);
moduleData->fontTextureId = loadTexture(atlas, "fontSampler", moduleData->shaderProgramId);

Is there anything obvious that I've done wrong?