

axefrog

Member Since 08 Apr 2011
Offline Last Active Mar 29 2015 04:31 AM

Topics I've Started

Concept for an input library - thoughts?

04 March 2015 - 06:04 AM

I'm thinking of building a small input library that would streamline the way input events are managed and make it much easier to observe complex input event combinations. The library would be designed to be plugged into whatever input source is desired; whether you're using GLFW, RawInput on Windows, or something else, you would simply forward input events to this library and it would take care of the rest.

Conceptually, usage of the library would look like this:
 
// fires on ctrl+A, followed by ctrl+B, then ctrl+C, with a maximum of 1000ms between keypresses
auto observer = input::observe("ctrl+a,ctrl+b,ctrl+c", 0, 1000);

// fires when the mouse is moved
auto observer = input::observe("mouse:move");

// fires when the left mouse button is clicked
auto observer = input::observe("mouse:lclick");

// fires when the right mouse button is clicked while shift is down, followed by the space bar being pressed
auto observer = input::observe("shift+mouse:rclick,space");

// fires when the A key is pressed followed by a specific delay of between 500-2000ms, followed by B, then C
auto observer = input::observe("a,delay(500,2000),b,c");

observer.then([&](const auto& state) {
    // respond to input state here (the state object can be queried for any arbitrary keyboard/mouse/etc. state)
});

I haven't actually started writing any code yet, as I wanted to (a) see if anyone else has already written (and open-sourced) something similar, and (b) get some feedback on the idea. I'd be happy to open source this if others feel it might be useful to them.

Edit
Some extra thoughts:
  • The library could easily adapt to different platforms, e.g., you could add gamepad events and so forth
  • The library could be implemented as a template class, allowing you to define your own event object to be passed when an event is triggered. Internally, the library would give the host application an opportunity to attach the input state object to the templated event argument before it is passed to an observer, meaning your event argument could carry extra information such as the window handle where the event was triggered, and anything else (your game context object, game time class, whatever).
  • Registering an input observer could provide the option to ignore input of a given type, e.g. to handle situations where a key sequence is being observed irrespective of mouse state.
  • Gestures could hypothetically be supported as a future development...

Question for UE4 and Unity 4/5 experts, regarding capabilities

03 March 2015 - 08:48 PM

A question I haven't quite settled for myself, which I've been thinking about for a future game idea I have:
How well suited are UE4 and Unity (obviously I'm talking about 5, but I know there hasn't been a much time to gain expertise with it) to highly procedural games? I'm talking about the kinds of games where almost everything is heavily procedurally-generated, or modified in real time from baseline assets, with unused procedural assets being frequently unloaded as well. Assume that all of these things are to be generated by a server application that is not running the client engine, and streamed to clients on demand. Examples of the sorts of things I'm talking about:

* Terrain meshes, generated on a server and streamed to the client in real time, possibly containing a combination of voxels and polygon meshes (including dynamic level of detail and so forth)
* Materials and textures, generated from scratch, and/or applied to a baseline with many parameters affecting output
* Creature models and animations (again, heavily modified procedurally from base assets)
* Sound effects to a degree
* Weather, lighting, etc.

Let's not worry about the feasibility of my idea, I'm just curious whether these engines would fight me if I attempted to do the above, or if their API designs are flexible enough for this kind of thing.

Having trouble working out how to adjust 2D coordinates to OpenGL's coordinate system

25 February 2015 - 06:46 AM

I have a set of textured quads for text and user interface elements, very carefully positioned in relation to a top-left origin. OpenGL's y axis is inverted though, and I'm having a lot of trouble getting the calculations right so that the user interface elements retain their top-left relative positions when rendered.

Here's my default view and projection matrices:

auto projmat = glm::ortho<float>(0.0f, window.width, 0.0f, window.height, 0.0f, 1000.0f);
auto viewMatrix = glm::translate(glm::mat4(1.0), glm::vec3(0));

I'm aware that the ortho values are wrong, as they put my glyphs at the bottom of the screen, with the offsets pushing my glyphs upwards, instead of pushing them downwards from the top, which is what is supposed to happen:

[Screenshot: 02.25.2015-22.38.png]

If I change the projection to:

auto projmat = glm::ortho<float>(0.0f, window.width, -window.height, 0.0f, 0.0f, 1000.0f);

then my glyphs end up offset from each other correctly, but the baseline is now at the very top of the client area:

[Screenshot: 02.25.2015-22.40.png]


Here's my shader code:

#version 450

layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Normal;
layout(location=3) in vec2 in_TexCoord;

out VSOutput
{
	vec4 color;
	vec2 texCoord;

} vs_output;

struct InstanceData
{
	vec2 pos;
	vec2 scale;
	vec2 uvScale;
	vec2 uvOffset;
};

layout (std430) buffer instance_data
{
	InstanceData instances[];
};

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

void main(void)
{
	InstanceData instance = instances[gl_InstanceID];

	float magnification = 50.0f; // TO DO: replace with uniform input

	// these are text glyphs so I need to scale the unit quad to the correct aspect ratio
	vec4 aspectScaling = vec4(instance.scale.x, instance.scale.y, 0, 0);
	vec4 scaledOriginalPosition = aspectScaling * magnification * in_Position;

	// we now translate the quad to the position specified by the current draw instance
	vec4 instancePosition = vec4(instance.pos.x, instance.pos.y, 0, 0);
	vec4 actualPosition = scaledOriginalPosition + instancePosition;
	actualPosition.w = 1; // set w to 1 because scaling would have screwed it up

	// perform the final world-view-projection transformation
	vec4 pos = projectionMatrix * ((viewMatrix * actualPosition));
	gl_Position = pos;

	vs_output.texCoord = (in_TexCoord * instance.uvScale) + instance.uvOffset;
	vs_output.color = in_Color;
}

Note that the texture coordinate calculation works just fine - it's just the y axis stuff that's giving me issues.

Any assistance would be appreciated. I want something that ideally is applied on the GL side of things - this concern shouldn't bleed back into the UI layout code, which I'd prefer to keep relative to normal screen coordinates.

Problems trying to assign texture to fragment shader

18 February 2015 - 09:35 PM

Stuck on this one; I've generated my first font atlas and I'm trying to render the entire thing out to a quad just to see if it looks right.

I'm trapping errors everywhere possible and have verified that my shader is compiling and linking correctly. There are no shader/program logs indicating any errors there. The font atlas is an unsigned char array, with one greyscale (0-255) byte per pixel.

The line I'm having trouble with is:
glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, atlas->width, atlas->height, 0, GL_RED, GL_UNSIGNED_BYTE, atlas->bitmap->data());
I'm getting "invalid operation" (aka GL_INVALID_OPERATION)

I have confirmed that the width and the height being passed in are 512 each, and that the std::vector containing the data has size 262144 (which is 512*512, as expected).

My shader code is:
 
#version 450

in VSOutput
{
	vec4 color;
	vec2 texCoord;

} vs_output;

out vec4 out_color;

uniform sampler2D fontSampler;
uniform sampler2D fontSampler2;

void main(void)
{
	out_color = texture(fontSampler, vs_output.texCoord);
}

The function I wrote to load the texture into video memory is:
 
static GLuint loadTexture(std::shared_ptr<ui::FontAtlas> atlas, const std::string& samplerName, const GLuint shaderProgramId, const GLuint textureIndex = 0)
{
	GLuint textureId;
	
	glGenTextures(1, &textureId);
	RETURN_0_IF_GL_ERROR("Error generating texture ID");

	glActiveTexture(GL_TEXTURE0 + textureIndex);
	RETURN_0_IF_GL_ERROR("Error setting active texture");
	
	glBindTexture(GL_TEXTURE_2D, textureId);
	RETURN_0_IF_GL_ERROR("Error binding to texture");

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	
	printf("%dx%d, %zu bytes\n", atlas->width, atlas->height, atlas->bitmap->size());
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, atlas->width, atlas->height, 0, GL_RED, GL_UNSIGNED_BYTE, atlas->bitmap->data());
	RETURN_0_IF_GL_ERROR("Error loading image data into the texture");

	// THIS IS THE OFFENDING LINE:
	glUniform1i(glGetUniformLocation(shaderProgramId, samplerName.c_str()), textureIndex);
	RETURN_0_IF_GL_ERROR("Error attaching texture to uniform location");

	return textureId;
}

And here's the definition of the font atlas:
 
class FontAtlas
{
public:
	int width;
	int height;
	std::shared_ptr<std::vector<unsigned char>> bitmap;

	FontAtlas(int width, int height) : width(width), height(height) {}
};

Finally, here are the lines of code I'm calling to generate the font atlas and assign it to a texture, so you can see the parameters I'm passing in:
 
int id = ui::loadFont("exo/Exo-Regular.otf");
auto atlas = ui::generateFontAtlas(id, 48, 5);
moduleData->fontTextureId = loadTexture(atlas, "fontSampler", moduleData->shaderProgramId);

Is there anything obvious that I've done wrong?

Weird struct padding issue - what am I doing wrong?

14 February 2015 - 12:48 AM

I've been learning to use shader storage buffers, instancing, and multitexturing and while I'm pretty sure I've got the basics mostly figured out, I'm currently hitting a weird issue that is not making any sense to me.
 
My demo renders a bunch of cubes to the screen, each with a random scale, rotation and position. Each one is textured, and tinted a certain shade of red. I've just figured out how to have multiple textures available in the same shader pass, so I was going to have some of the cubes render with one texture, and some of them with the other. Here's the original instance structure I was using:
 
struct CubeInstance
{
    glm::vec4 rotation;
    glm::vec4 scale;
    glm::vec4 position;
};

...

// quick and dirty way to fill the array with random data

srand(clock());
for(int i = 0; i < cubesData->totalInstances; i++)
{
    float rotX = rand()%200-100;
    float rotY = rand()%200-100;
    float rotZ = rand()%200-100;
    float rotS = (rand()%190 + 10)/100.0f;
 
    float posX = (rand()%10000 - 5000)/100.0f;
    float posY = (rand()%10000 - 5000)/100.0f;
    float posZ = (rand()%10000 - 5000)/100.0f;
 
    float scale = (rand()%400 + 10.0f)/100.0f;
 
    cubesData->instances[i] = CubeInstance
    {
        glm::vec4(rotX, rotY, rotZ, rotS),
        glm::vec4(scale, scale, scale, 1),
        glm::vec4(posX, posY, posZ, 0)
    };
}

...

// here's the bit of code with which I'm attaching the instance data to my shader program:

glGenBuffers(1, &cubesData->instanceBufferId);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, cubesData->instanceBufferId);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(cubesData->instances), &cubesData->instances, GL_STREAM_DRAW);
GLuint blockIndex = glGetProgramResourceIndex(cubesData->shaderProgramId, GL_SHADER_STORAGE_BLOCK, "instance_data");
glShaderStorageBlockBinding(cubesData->shaderProgramId, blockIndex, 0);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, cubesData->instanceBufferId);

And in my shader:
 
struct InstanceData
{
    vec4 rotation;
    vec4 scale;
    vec4 position;
};

layout (std430) buffer instance_data
{
    InstanceData instances[];
};
 
...

void main(void)
{
    InstanceData instance = instances[gl_InstanceID];
...
 

All good, that works fine. Here's what it looks like at this stage:

[Screenshot: 2015-02-14_1643.png]


So then I went to introduce a fourth component to represent which texture each instance should use. Given that it starts on a 4-byte boundary, I figured it would be safe to just make it an int, in both the C++ code and the shader code. Obviously not, because I got a huge mess of screwed-up rendering on the next run. Fair enough, so I added a glm::vec3 of padding at the end, with an equivalent padding vec3 in the shader code, and now, for some really weird reason, I'm ending up with a big stack of overlaid cubes rendering right at the origin, in addition to the normal (albeit reduced-in-number) bunch of cubes scattered around the screen.

My updated code has the following changes:

struct CubeInstance
{
    glm::vec4 rotation;
    glm::vec4 scale;
    glm::vec4 position;
    int material;       // ADDED THIS
    glm::vec3 _pad0;    // ADDED THIS
};

...

// added to my random data generation:

int material = rand()%2;  // ADDED THIS

cubesData->instances[i] = CubeInstance
{
    glm::vec4(rotX, rotY, rotZ, rotS),
    glm::vec4(scale, scale, scale, 1),
    glm::vec4(posX, posY, posZ, 0),
    material,           // ADDED THIS
    glm::vec3(0)        // ADDED THIS
};

and my updated shader code:

struct InstanceData
{
	vec4 rotation;
	vec4 scale;
	vec4 position;
	int material;       // ADDED THIS
	vec3 _pad0;         // ADDED THIS
};

And just from these changes, I now have a bunch of weird giant cubes appearing in the centre of the scene.

[Screenshot: 2015-02-14_1611.png]


Any ideas what I'm doing wrong?
