Member Since 30 Apr 2004
Offline Last Active Jun 24 2014 10:07 AM

Topics I've Started

Calculating arbitrary point position in rectangle

20 June 2014 - 08:22 AM

I have four points in 2D space that make up a rectangle: top left, top right, bottom left and bottom right. Given the x and y of an arbitrary point, I want to know where that point lies in the rectangle's own coordinates. E.g., if the point happens to lie on the top left corner it'd be (0,0), and if it happens to lie on the bottom right corner it'd be (width,height). I don't have a rotation matrix for said rectangle, though I could build one from the points. I just thought there should be a much simpler way, but my mind is drawing a blank.
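For what it's worth, the dot-product route avoids building a full rotation matrix: project the point's offset from the top-left corner onto the top and left edge directions. A minimal sketch, where the `Vec2` helpers are illustrative rather than from any particular library:

```cpp
#include <cmath>

struct Vec2 { double x, y; };

static Vec2 sub(Vec2 a, Vec2 b) { return { a.x - b.x, a.y - b.y }; }
static double dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Returns the point's position in the rectangle's own frame:
// (0,0) at the top-left corner, (width,height) at the bottom-right.
Vec2 rectLocal(Vec2 topLeft, Vec2 topRight, Vec2 bottomLeft, Vec2 p)
{
    Vec2 right = sub(topRight, topLeft);    // top edge; its length is the width
    Vec2 down  = sub(bottomLeft, topLeft);  // left edge; its length is the height
    Vec2 d     = sub(p, topLeft);
    // Scalar projection of d onto each (non-unit) edge direction.
    return { dot(d, right) / std::sqrt(dot(right, right)),
             dot(d, down)  / std::sqrt(dot(down, down)) };
}
```

For an axis-aligned rectangle this reduces to `p - topLeft`; the projections are what make it still work once the rectangle is rotated.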


OpenGL ES 2 Texture Not Rendering

29 June 2012 - 02:30 PM

I'm porting my game framework from OpenGL 3/4 to OpenGL ES 2 but I'm having an issue where textures aren't being rendered.

In one instance I'm using simple vertex buffer objects to store the vertices and texcoords. In my OpenGL code these are bound in vertex array objects, but I understand those aren't available in ES 2. Basically, I have some initialisation code where I do this:

[source lang="cpp"]
//glGenVertexArrays(1, &m_vao); //Not used in GLES.
glGenBuffers(2, m_vbo);
//glBindVertexArray(m_vao); //Not used in GLES.
glBindBuffer(GL_ARRAY_BUFFER, m_vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(Vector2), ms_verts, GL_STATIC_DRAW);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(Vector2), ms_texCoords, GL_STATIC_DRAW);
[/source]

And then when it comes time to render I do this:

[source lang="cpp"]
int32_t* pUniformHandle = m_uniforms.Find(textureParamName);
if (pUniformHandle)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_textureHandle);
    glUniform1i(*pUniformHandle, 0);
}
//glBindVertexArray(m_vao); //Not used in GLES.
glBindBuffer(GL_ARRAY_BUFFER, m_vbo[0]);
glVertexAttribPointer(Effect::POSITION_ATTR, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(Effect::POSITION_ATTR);
glBindBuffer(GL_ARRAY_BUFFER, m_vbo[1]);
glVertexAttribPointer(Effect::TEXCOORD0_ATTR, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(Effect::TEXCOORD0_ATTR);
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);
glDisableVertexAttribArray(Effect::TEXCOORD0_ATTR);
glDisableVertexAttribArray(Effect::POSITION_ATTR);
[/source]

In OpenGL (with those "Not used in GLES" bits uncommented) this works fine but in GLES the prims are being rendered but the texture isn't being applied. For reference, my shader looks like this:

Vertex shader:

[source lang="plain"]
uniform mat4 worldMatrix;
uniform mat4 projectionMatrix;

attribute vec2 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main()
{
    vTexCoord = aTexCoord;
    mat4 worldProjMatrix = projectionMatrix * worldMatrix;
    gl_Position = worldProjMatrix * vec4(aPosition, 0.0, 1.0);
}
[/source]

Fragment shader:

[source lang="plain"]
uniform sampler2D texture;
uniform vec4 colour;

varying vec2 vTexCoord;

void main()
{
    gl_FragColor = texture2D(texture, vTexCoord) * colour;
}
[/source]

Can anyone see what I'm doing wrong? I've been scanning through the GLES book but I can't see anything obvious.

Correct order of operations when enabling/disabling Cg shaders in OpenGL

28 April 2012 - 02:28 AM

I've started writing an Effect class which uses Cg shaders in OpenGL and I'm a bit confused about the order of operations when creating and rendering using Cg.

Currently, my Effect class contains a CGprogram, a CGprofile and an array of CGparameter variables, which get populated when loading the Effect, similar to this:

m_vertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
m_vertexProgram = cgCreateProgramFromFile(g_cgContext, CG_SOURCE, fileName, m_vertexProfile, entryPointName, NULL);

CGparameter param = cgGetFirstParameter(m_vertexProgram, CG_PROGRAM);
while (param)
{
	 const char* paramName = cgGetParameterName(param);
	 m_vertexParameters[m_vertexParamNum++] = param;
	 param = cgGetNextParameter(param);
}

It's not exactly like this, and it only covers the vertex shader, but it contains the important code. Anyway, that's how I create the Effect, and then when I want to use it during a render I call Enable() and Disable() functions before and after I draw the verts etc.

void Effect::Enable()

void Effect::Disable()

I'm not sure if this is the correct way to do it, though. Is it correct to enable and disable the profile for each shader? More to the point, do I actually want a profile per shader? I'm using the same profile for each shader so surely I could just have a global one and use that?

Any advice would be much appreciated.

Bizarre camera rotation

22 April 2012 - 01:23 PM

I'm getting unexpected results when I add a rotation to my camera. I'm sure it's a really obvious bug, but I'm having a bit of a blank so I was hoping someone could help me.

In my camera class I store four vectors. One for position and then one each for left, up and forward for the rotation. Then at the end of my camera's update I create the view matrix via something like:

	 m_view = Matrix::CreateLookAt(m_position, m_position + m_forward, m_up);

Anyway, my camera has a function for adding rotation which gets called from the input component and it looks something like this:

void Camera::AddRotation(const Matrix& rRotAdd)
{
	 Matrix tempMat(m_left, m_up, m_forward);
	 tempMat = rRotAdd * tempMat;

	 m_left = tempMat.GetColumn0();
	 m_up = tempMat.GetColumn1();
	 m_forward = tempMat.GetColumn2();
}

As you can see, I create a temporary matrix from the rotation member variables so that I can add on the rotation that's passed in and then re-set the vectors from the resulting matrix.

This seems to work fine if I only update the rotation on one axis, so just the pitch or just the yaw. But if I try to rotate in both pitch and yaw I'm not seeing the results I expect: the camera appears to roll, and the effect gets worse the more pitch/yaw I apply.

I can record a video of what it looks like if the issue isn't immediately obvious.
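As an aside, the roll described here is a property of accumulating incremental axis rotations rather than a bug in the matrix code itself: pitch and yaw rotations don't commute, so stacking small increments leaves a residual roll. A standalone sketch (the `Mat3`, `rotX` and `rotY` helpers are hypothetical, not the poster's `Matrix` class) shows the "left" column picking up a vertical component after a pitch-yaw-pitch sequence, while rebuilding the matrix from accumulated angles keeps it horizontal:

```cpp
#include <array>
#include <cmath>

using Mat3 = std::array<std::array<double, 3>, 3>; // row-major 3x3

Mat3 mul(const Mat3& a, const Mat3& b)
{
    Mat3 r{};
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            for (int k = 0; k < 3; ++k)
                r[i][j] += a[i][k] * b[k][j];
    return r;
}

Mat3 rotX(double a) // pitch
{
    double c = std::cos(a), s = std::sin(a);
    return {{ {1, 0, 0}, {0, c, -s}, {0, s, c} }};
}

Mat3 rotY(double a) // yaw
{
    double c = std::cos(a), s = std::sin(a);
    return {{ {c, 0, s}, {0, 1, 0}, {-s, 0, c} }};
}

// y-component of column 0, i.e. how far the "left" basis vector has
// tilted out of the horizontal plane. Non-zero means visible roll.
double leftTilt(const Mat3& m) { return m[1][0]; }

// Incremental accumulation, as in Camera::AddRotation:
// pitch, then yaw, then pitch again.
Mat3 incremental(double step)
{
    Mat3 m = rotX(step);
    m = mul(rotY(step), m);
    m = mul(rotX(step), m);
    return m;
}

// Rebuilding the whole rotation from accumulated angles each frame.
Mat3 rebuilt(double pitch, double yaw) { return mul(rotY(yaw), rotX(pitch)); }
```

Rebuilding from stored pitch/yaw angles every frame, rather than multiplying increments onto the stored basis, avoids the roll entirely.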

Calculating view matrix from a free cam

26 February 2012 - 05:04 AM

As per tradition, every couple of years I start writing a new framework/engine, and I've just been thinking about something I've always done that might not be the best way of doing it, so I thought I'd ask for some advice.

I always have an FPS style free cam which I use for my model viewer etc. and the way I always calculate the view matrix in this cam is along these lines:

m_pitch += pitchThisFrame;
m_yaw += yawThisFrame;

Matrix pitchMat = Matrix::CreateXRotation(m_pitch);
Matrix yawMat = Matrix::CreateYRotation(m_yaw);
Matrix rotation = yawMat * pitchMat;

const Vector& left = rotation.GetColumn0();
const Vector& up = rotation.GetColumn1();
const Vector& forward = rotation.GetColumn2();

m_position += left * xTransThisFrame;
m_position += forward * zTransThisFrame;

m_view = Matrix::CreateLookAt(m_position, m_position + forward, up);

As you can see, I basically have my free cam class store a vector for the position and then pitch and yaw values for the rotation of the camera. Then I use my CreateLookAt() function to generate the view matrix from these values each frame.

This has always worked for me but I have been wondering if this is the "correct" way to do it. Perhaps it would be better to just modify the view matrix directly each frame? Though, I suppose I would have to orthonormalise regularly if I went down that route...
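On that last point, modifying the view basis incrementally would indeed need a periodic Gram-Schmidt pass to stop floating-point drift from shearing the basis vectors. A minimal sketch, assuming hypothetical `Vec3` helpers rather than any particular math library:

```cpp
#include <cmath>

struct Vec3 { double x, y, z; };

static double dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3 scale(Vec3 v, double s) { return { v.x * s, v.y * s, v.z * s }; }
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 normalize(Vec3 v) { return scale(v, 1.0 / std::sqrt(dot(v, v))); }
static Vec3 cross(Vec3 a, Vec3 b)
{
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

// Gram-Schmidt: keep forward as the reference direction, strip its
// component out of up, then rebuild left from the other two.
void orthonormalize(Vec3& left, Vec3& up, Vec3& forward)
{
    forward = normalize(forward);
    up = normalize(sub(up, scale(forward, dot(up, forward))));
    left = cross(up, forward);
}
```

Note the incoming left is discarded and rebuilt from the corrected up and forward, so only two of the three vectors actually need cleaning up.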