axefrog

Member Since 08 Apr 2011
Offline Last Active Yesterday, 06:12 PM

Topics I've Started

Having trouble working out how to adjust 2D coordinates to OpenGL's coordinate system

25 February 2015 - 06:46 AM

I have a set of textured quads for text and user interface elements, very carefully positioned in relation to a top-left origin. OpenGL's y axis is inverted though, and I'm having a lot of trouble getting the calculations right so that the user interface elements retain their top-left relative positions when rendered.

Here's my default view and projection matrices:

auto projmat = glm::ortho<float>(0.0f, window.width, 0.0f, window.height, 0.0f, 1000.0f);
auto viewMatrix = glm::translate(glm::mat4(1.0f), glm::vec3(0.0f));

I'm aware that these ortho values are wrong: they put my glyphs at the bottom of the screen, with the offsets pushing the glyphs upwards instead of downwards from the top, which is what should happen:

[Screenshot: 02.25.2015-22.38.png]

If I change the projection to:

auto projmat = glm::ortho<float>(0.0f, window.width, -window.height, 0.0f, 0.0f, 1000.0f);

then my glyphs end up offset from each other correctly, but the baseline is now at the very top of the client area:

[Screenshot: 02.25.2015-22.40.png]


Here's my shader code:

#version 450

layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Normal;
layout(location=3) in vec2 in_TexCoord;

out VSOutput
{
	vec4 color;
	vec2 texCoord;

} vs_output;

struct InstanceData
{
	vec2 pos;
	vec2 scale;
	vec2 uvScale;
	vec2 uvOffset;
};

layout (std430) buffer instance_data
{
	InstanceData instances[];
};

uniform mat4 viewMatrix;
uniform mat4 projectionMatrix;

void main(void)
{
	InstanceData instance = instances[gl_InstanceID];

	float magnification = 50.0f; // TODO: replace with a uniform input

	// these are text glyphs so I need to scale the unit quad to the correct aspect ratio
	vec4 aspectScaling = vec4(instance.scale.x, instance.scale.y, 0, 0);
	vec4 scaledOriginalPosition = aspectScaling * magnification * in_Position;

	// we now translate the quad to the position specified by the current draw instance
	vec4 instancePosition = vec4(instance.pos.x, instance.pos.y, 0, 0);
	vec4 actualPosition = scaledOriginalPosition + instancePosition;
	actualPosition.w = 1; // set w to 1 because scaling would have screwed it up

	// perform the final world-view-projection transformation
	vec4 pos = projectionMatrix * (viewMatrix * actualPosition);
	gl_Position = pos;

	vs_output.texCoord = (in_TexCoord * instance.uvScale) + instance.uvOffset;
	vs_output.color = in_Color;
}

Note that the texture coordinate calculation works just fine - it's only the y-axis handling that's giving me trouble.

Any assistance would be appreciated. Ideally I want something applied on the GL side of things - this concern shouldn't bleed back into the UI layout code, which I'd prefer to keep in normal screen coordinates.
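
For reference, a top-left origin is commonly achieved by swapping the bottom and top arguments to glm::ortho so that y increases downwards. A minimal sketch (the near/far values here are placeholders assuming UI geometry at z = 0, so adjust them for your depth setup):

// glm::ortho(left, right, bottom, top, zNear, zFar): passing
// bottom = height and top = 0 flips the y axis so that (0,0) is the
// top-left corner and y grows downward, matching UI coordinates.
auto projmat = glm::ortho<float>(0.0f, window.width, window.height, 0.0f, -1.0f, 1.0f);

// Note: flipping y mirrors the geometry and reverses triangle winding,
// so back-face culling may need glFrontFace(GL_CW) or to be disabled.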

Problems trying to assign texture to fragment shader

18 February 2015 - 09:35 PM

Stuck on this one; I've generated my first font atlas and I'm trying to render the entire thing out to a quad just to see if it looks right.

I'm trapping errors everywhere possible and have verified that my shader is compiling and linking correctly. There are no shader/program logs indicating any errors there. The font atlas is an unsigned char array, with one greyscale (0-255) byte per pixel.

The line I'm having trouble with is:

glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, atlas->width, atlas->height, 0, GL_RED, GL_UNSIGNED_BYTE, atlas->bitmap->data());

I'm getting "invalid operation" (i.e. GL_INVALID_OPERATION).

I have confirmed that the width and the height being passed in are 512 each, and that the std::vector containing the data has size 262144 (which is 512*512, as expected).

My shader code is:
 
#version 450

in VSOutput
{
	vec4 color;
	vec2 texCoord;

} vs_output;

out vec4 out_color;

uniform sampler2D fontSampler;
uniform sampler2D fontSampler2;

void main(void)
{
	out_color = texture(fontSampler, vs_output.texCoord);
}

The function I wrote to load the texture into video memory is:
 
static GLuint loadTexture(std::shared_ptr<ui::FontAtlas> atlas, const std::string& samplerName, const GLuint shaderProgramId, const GLuint textureIndex = 0)
{
	GLuint textureId;
	
	glGenTextures(1, &textureId);
	RETURN_0_IF_GL_ERROR("Error generating texture ID");

	glActiveTexture(GL_TEXTURE0 + textureIndex);
	RETURN_0_IF_GL_ERROR("Error setting active texture");
	
	glBindTexture(GL_TEXTURE_2D, textureId);
	RETURN_0_IF_GL_ERROR("Error binding to texture");

	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_REPEAT);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);
	glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
	
	printf("%dx%d, %zu bytes\n", atlas->width, atlas->height, atlas->bitmap->size());
	glTexImage2D(GL_TEXTURE_2D, 0, GL_RED, atlas->width, atlas->height, 0, GL_RED, GL_UNSIGNED_BYTE, atlas->bitmap->data());
	RETURN_0_IF_GL_ERROR("Error loading image data into the texture");

	// THIS IS THE OFFENDING LINE:
	glUniform1i(glGetUniformLocation(shaderProgramId, samplerName.c_str()), textureIndex);
	RETURN_0_IF_GL_ERROR("Error attaching texture to uniform location");

	return textureId;
}
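
As an aside on the marked line: glUniform1i sets a uniform on the program currently bound with glUseProgram, and raises GL_INVALID_OPERATION when no program is current. A minimal sketch of that ordering, reusing names from the snippet above (an illustration of the rule, not a confirmed diagnosis of this particular failure):

glUseProgram(shaderProgramId); // glUniform* applies to the currently bound program
glUniform1i(glGetUniformLocation(shaderProgramId, samplerName.c_str()), textureIndex);

// On GL 4.1+, glProgramUniform1i targets a program explicitly instead:
glProgramUniform1i(shaderProgramId, glGetUniformLocation(shaderProgramId, samplerName.c_str()), textureIndex);

// Separately, GL_LINEAR_MIPMAP_LINEAR requires mipmap levels to exist,
// otherwise the texture is incomplete and samples as black (without
// raising an error); glGenerateMipmap(GL_TEXTURE_2D) after glTexImage2D
// covers that.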

And here's the definition of the font atlas:
 
class FontAtlas
{
public:
	int width;
	int height;
	std::shared_ptr<std::vector<unsigned char>> bitmap;

	FontAtlas(int width, int height) : width(width), height(height) {}
};

Finally, here are the lines of code I call to generate the font atlas and assign it to a texture, so you can see the parameters I'm passing in:
 
int id = ui::loadFont("exo/Exo-Regular.otf");
auto atlas = ui::generateFontAtlas(id, 48, 5);
moduleData->fontTextureId = loadTexture(atlas, "fontSampler", moduleData->shaderProgramId);

Is there anything obvious that I've done wrong?

Weird struct padding issue - what am I doing wrong?

14 February 2015 - 12:48 AM

I've been learning to use shader storage buffers, instancing and multitexturing, and while I'm pretty sure I've got the basics mostly figured out, I'm currently hitting a weird issue that makes no sense to me.
 
My demo renders a bunch of cubes to the screen, each with a random scale, rotation and position. Each one is textured and tinted a certain shade of red. I've just figured out how to have multiple textures available in the same shader pass, so I was going to have some of the cubes render with one texture and some with the other. Here's the original instance structure I was using:
 
struct CubeInstance
{
    glm::vec4 rotation;
    glm::vec4 scale;
    glm::vec4 position;
};

...

// quick and dirty way to fill the array with random data

srand(clock());
for(int i = 0; i < cubesData->totalInstances; i++)
{
    float rotX = rand()%200-100;
    float rotY = rand()%200-100;
    float rotZ = rand()%200-100;
    float rotS = (rand()%190 + 10)/100.0f;
 
    float posX = (rand()%10000 - 5000)/100.0f;
    float posY = (rand()%10000 - 5000)/100.0f;
    float posZ = (rand()%10000 - 5000)/100.0f;
 
    float scale = (rand()%400 + 10.0f)/100.0f;
 
    cubesData->instances[i] = CubeInstance
    {
        glm::vec4(rotX, rotY, rotZ, rotS),
        glm::vec4(scale, scale, scale, 1),
        glm::vec4(posX, posY, posZ, 0)
    };
}

...

// here's the bit of code with which I'm attaching the instance data to my shader program:

glGenBuffers(1, &cubesData->instanceBufferId);
glBindBuffer(GL_SHADER_STORAGE_BUFFER, cubesData->instanceBufferId);
glBufferData(GL_SHADER_STORAGE_BUFFER, sizeof(cubesData->instances), &cubesData->instances, GL_STREAM_DRAW);
GLuint blockIndex = glGetProgramResourceIndex(cubesData->shaderProgramId, GL_SHADER_STORAGE_BLOCK, "instance_data");
glShaderStorageBlockBinding(cubesData->shaderProgramId, blockIndex, 0);
glBindBufferBase(GL_SHADER_STORAGE_BUFFER, 0, cubesData->instanceBufferId);

And in my shader:
 
struct InstanceData
{
    vec4 rotation;
    vec4 scale;
    vec4 position;
};

layout (std430) buffer instance_data
{
    InstanceData instances[];
};
 
...

void main(void)
{
    InstanceData instance = instances[gl_InstanceID];
...
 

All good, that works fine. Here's what it looks like at this stage:

[Screenshot: 2015-02-14_1643.png]


So then I went to introduce a fourth component to represent which texture each instance should use. Given that it starts on a 4-byte boundary, I figured it would be safe to just make it an int in both the C++ code and the shader code. Obviously not, because the next run produced a huge mess of screwed-up rendering. Fair enough, so I added a padding glm::vec3 at the end and an equivalent padding vec3 in the shader code, and now I'm ending up with a big stack of overlaid cubes rendered right at the origin for no apparent reason, in addition to the usual (albeit reduced in number) cubes scattered around the scene.

My updated code has the following changes:

struct CubeInstance
{
    glm::vec4 rotation;
    glm::vec4 scale;
    glm::vec4 position;
    int material;       // ADDED THIS
    glm::vec3 _pad0;    // ADDED THIS
};

...

// added to my random data generation:

int material = rand()%2;  // ADDED THIS

cubesData->instances[i] = CubeInstance
{
    glm::vec4(rotX, rotY, rotZ, rotS),
    glm::vec4(scale, scale, scale, 1),
    glm::vec4(posX, posY, posZ, 0),
    material,           // ADDED THIS
    glm::vec3(0)        // ADDED THIS
};

and my updated shader code:

struct InstanceData
{
	vec4 rotation;
	vec4 scale;
	vec4 position;
	int material;       // ADDED THIS
	vec3 _pad0;         // ADDED THIS
};

And just from these changes, I now have a bunch of weird giant cubes appearing at the centre of the scene.

[Screenshot: 2015-02-14_1611.png]


Any ideas what I'm doing wrong?
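
For reference on the layout rules: std430 aligns a vec3 to 16 bytes on the GLSL side, while glm::vec3 is 12 bytes with 4-byte alignment in C++, so an int followed by a vec3 lands at different offsets in the two languages. A sketch of one layout that keeps both sides at a matching 64 bytes by padding with scalar ints (an illustration of the std430 rules, not necessarily the only fix):

// C++ side: pad the int out to a full 16-byte slot with scalar ints,
// which have identical size and alignment in C++ and std430 GLSL.
struct CubeInstance
{
    glm::vec4 rotation;          // offset  0
    glm::vec4 scale;             // offset 16
    glm::vec4 position;          // offset 32
    int material;                // offset 48
    int _pad0, _pad1, _pad2;     // offsets 52, 56, 60; total size 64
};
static_assert(sizeof(CubeInstance) == 64, "must match the GLSL std430 layout");

// GLSL side (std430) mirrors the same scalars:
// struct InstanceData
// {
//     vec4 rotation;
//     vec4 scale;
//     vec4 position;
//     int material;
//     int _pad0; int _pad1; int _pad2;
// };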

Arcsynthesis OpenGL tutorials contradict OpenGL documentation?

09 February 2015 - 03:17 AM

From the OpenGL documentation here: https://www.opengl.org/sdk/docs/man3/xhtml/glBufferData.xml

"STREAM - The data store contents will be modified once and used at most a few times."

From the Arcsynthesis tutorials here: http://www.arcsynthesis.org/gltut/positioning/Tutorial%2003.html

"GL_STREAM_DRAW tells OpenGL that you intend to set this data constantly, generally once per frame."

As I understand it, the latter is held in high regard. Why then the contradiction?
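
For context, the pattern the tutorial describes is per-frame streaming, sketched below (vbo, frameData, size and vertexCount are assumed names, not from either source). Each allocation is modified once and drawn from a few times before being replaced the next frame, which arguably fits both descriptions depending on whether you count modifications per allocation or per buffer object:

// Per-frame streaming: re-specify (orphan) the buffer's storage and
// upload this frame's data in one call, then draw from it.
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, size, frameData, GL_STREAM_DRAW);
glDrawArrays(GL_TRIANGLES, 0, vertexCount);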


Shoreline waves?

03 February 2015 - 06:06 PM

I am nowhere near the level where I'd be thinking about implementing this yet, but I'm still curious. In almost all the demos I see from game developers, water along a shoreline is clean and unbroken, simply ending at the point where the terrain rises above the water level - a very clear contrast to what you see in the real world.

Are there any good techniques people are starting to use to create nicer transitions between sea and land? A procedural approach would be of particular interest, because it would be universally applicable without having to be hand-crafted. Scenarios range from the serene in-and-out lapping of small waves against a beach right through to waves crashing violently against rocky shorelines. A slight rise and fall of the water level near the shore would also matter, to simulate waves coming in and going out convincingly.

This is something I'm very interested in implementing in my own development when I'm further along - hopefully next year at this rate. In the meantime, I'm eager to collect examples, tutorials and papers covering other people's efforts in this area.