About redneon

  1. Yeah, that was part of the long story short. In this particular example, because of the way the pipeline works, doing that would require some refactoring that I didn't want to do. You're right, though. That is the proper way to do it. I should just bite the bullet and do the refactoring instead of being lazy :)
  2. Actually, I might be in luck. It looks like the camera will always be head on, which will make the shape always a rectangle. I imagine that makes things easier?
  3. It's a 3D game but the problem is 2D. To cut a long story short, I'm having to pass four uniforms into a pixel shader for the 2D screen-space positions of the four corners of a shape. This will end up being a rectangle if the camera is head on; otherwise it'll be a trapezoid. For each pixel in the shader I have the screen-space position and I want to know 1) whether the pixel is within those four coordinates and, if so, 2) the amount across and up the shape for texture lookup.
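The "is the pixel within those four coordinates" part of the question above can be sketched with 2D cross products, which works for both the rectangle and the trapezoid case as long as the quad is convex. This is only an illustrative sketch, not the poster's code: `Vec2`, `EdgeSide` and `InsideQuad` are made-up names, and it assumes the corners are supplied in winding order.

```cpp
struct Vec2 { float x, y; };

// 2D cross product of (b - a) and (p - a); positive when p lies to
// the left of the directed edge a -> b, negative when to the right.
static float EdgeSide(Vec2 a, Vec2 b, Vec2 p)
{
    return (b.x - a.x) * (p.y - a.y) - (b.y - a.y) * (p.x - a.x);
}

// Corners must be given in winding order (e.g. tl, tr, br, bl).
// A point is inside a convex quad when it is on the same side of
// all four edges; either winding direction is accepted.
bool InsideQuad(Vec2 tl, Vec2 tr, Vec2 br, Vec2 bl, Vec2 p)
{
    float s0 = EdgeSide(tl, tr, p);
    float s1 = EdgeSide(tr, br, p);
    float s2 = EdgeSide(br, bl, p);
    float s3 = EdgeSide(bl, tl, p);

    bool allNonNeg = s0 >= 0.0f && s1 >= 0.0f && s2 >= 0.0f && s3 >= 0.0f;
    bool allNonPos = s0 <= 0.0f && s1 <= 0.0f && s2 <= 0.0f && s3 <= 0.0f;
    return allNonNeg || allNonPos;
}
```

In a pixel shader the same idea would be four sign tests on the interpolated screen-space position, with early-out (discard or zero alpha) when the signs disagree.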
  4. Hmm. Actually, it could be a trapezoid based on the position of the camera.
  5. I have four points in 2D space that make up a rectangle: top left, top right, bottom left and bottom right. Given the x and y of an arbitrary point, I want to know where that lies in the rectangle. E.g. if the point happens to lie on the top left corner it'd be (0,0), and if it happens to lie on the bottom right corner it'd be (width,height). I don't have a rotation matrix for said rectangle, but I could make one from the points. I just thought there should be a much simpler way but my mind is having a blank.
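There is indeed a simpler way than building a rotation matrix: project the point's offset from the top-left corner onto the rectangle's two edge vectors with dot products. A minimal sketch, with hypothetical names (`Vec2`, `PointInRect` are not from the original code) and assuming a non-degenerate rectangle:

```cpp
#include <cmath>

struct Vec2 { float x, y; };

static float Dot(Vec2 a, Vec2 b) { return a.x * b.x + a.y * b.y; }

// Returns the point's position in rectangle-local units: (0,0) at the
// top-left corner, (width,height) at the bottom-right corner. Works for
// any rotation because it projects onto the rectangle's own edges.
Vec2 PointInRect(Vec2 topLeft, Vec2 topRight, Vec2 bottomLeft, Vec2 p)
{
    Vec2 across = { topRight.x - topLeft.x,   topRight.y - topLeft.y };
    Vec2 down   = { bottomLeft.x - topLeft.x, bottomLeft.y - topLeft.y };
    Vec2 d      = { p.x - topLeft.x,          p.y - topLeft.y };

    float width  = std::sqrt(Dot(across, across));
    float height = std::sqrt(Dot(down, down));

    // Project the offset onto each normalised edge direction:
    // dot(d, edge) / |edge| is the distance travelled along that edge.
    return { Dot(d, across) / width, Dot(d, down) / height };
}
```

Dividing by the squared lengths instead would give normalised (0..1) coordinates, which is usually what you want for a texture lookup.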
  6. Bloody hell. I've fixed it and, as per usual with these things, it was user error. I was passing an incorrect value into glTexImage2D for the internal format. If you look at my texture load code below you can see I'm incorrectly passing bitDepth into glTexImage2D instead of textureFormat. Doh! I've no idea how this works correctly in normal OpenGL, though. I would expect it to fail like it does in GLES. Ah well, never mind.
[source lang="cpp"]
int32_t width, height, bitDepth = 0;

//Load the image.
uint8_t* pData = ::stbi_load_from_memory(rTextureFile.GetData(), rTextureFile.GetSize(), &width, &height, &bitDepth, 0);
if (pData)
{
    m_width = width;
    m_height = height;
    m_bitDepth = bitDepth;

    uint32_t textureFormat = bitDepth == 4 ? GL_RGBA : GL_RGB;

    glGenTextures(1, &m_handle);
    glBindTexture(GL_TEXTURE_2D, m_handle);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    //The bug: this should pass textureFormat, not bitDepth.
    glTexImage2D(GL_TEXTURE_2D, 0, bitDepth, width, height, 0, textureFormat, GL_UNSIGNED_BYTE, pData);

    //No longer need the image.
    stbi_image_free(pData);

    return true;
}
return false;
[/source]
Thanks for your help, everyone.
  7. Just thinking, is there any chance that I could have set my window up incorrectly? Maybe something wrong in the EGL stuff? I don't think that this could cause this issue but I'm just clutching at straws really, as I can't see any difference between my code and the Simple_Texture2D code.
  8. I've fixed why the geometry wasn't being rendered when I don't use VBOs (I was being an idiot and hadn't unbound a buffer I was using elsewhere). So, I've got the geometry rendering both with and without VBOs, but in both instances I'm just getting black prims without the texture. I'm now trying to match my code as closely as possible to the Simple_Texture2D sample. The first step was removing the VBOs; I've also tried using their texture code too, but it hasn't made a difference. There must be something obviously different between my code and theirs though, for theirs to be working and mine not. I'm sure I'll get to the bottom of it if I keep chugging away.
  9. I just quickly tried sacking off the VBOs and using glVertexAttribPointer to send the data to the GPU each render, as in the Simple_Texture2D sample, like this:
[source lang="cpp"]
glVertexAttribPointer(Effect::POSITION_ATTR, 2, GL_FLOAT, GL_FALSE, 0, ms_verts);
glVertexAttribPointer(Effect::TEXCOORD0_ATTR, 2, GL_FLOAT, GL_FALSE, 0, ms_texCoords);

glEnableVertexAttribArray(Effect::POSITION_ATTR);
glEnableVertexAttribArray(Effect::TEXCOORD0_ATTR);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glDisableVertexAttribArray(Effect::TEXCOORD0_ATTR);
glDisableVertexAttribArray(Effect::POSITION_ATTR);
[/source]
But now even the geometry isn't being rendered. I've probably done something stupid, but I only had a quick five minutes before going to work so I thought I'd give it a try. I'll have more of a look when I get back from work.
  10. I like #5; it's a clever way of checking whether the texture coordinates are valid. Alas, mine were: black top-left, red top-right, green bottom-left, yellow bottom-right. I am setting the colour value (though in the code I posted I only showed setting the texture uniform), but just to double check I removed colour from the gl_FragColor calculation and I'm still getting a black square. I did think perhaps something isn't working in the GLES Windows SDK and that I should just port my code to the Pi and fix the issue on there, if it's still an issue. That being said, if I run the Simple_Texture2D sample from the GLES book then the texture displays correctly. The only difference I can see between my code and theirs, however, is that they use glVertexAttribPointer to send the data to the GPU each render instead of storing the data off in a VBO. Should this make a difference?
  11. [quote name='clb' timestamp='1341007126' post='4954112']To be precise, there is no such thing as 'the texture isn't being applied'. What do you mean exactly? The geometry renders properly, but all rasterized fragments come out black (i.e. the texture2D call in the shader comes out black)?[/quote] Sorry, yes, I was generalising as I wrote the post quickly, but yeah, the geometry renders correctly but black. If I change the fragment shader to output a defined colour like, say, gl_FragColor = vec4(1.0, 0.0, 0.0, 1.0); then it renders with the colour I supplied instead of black. But when I use the texture2D function I just get black prims. The particular texture I'm trying to display is 512x512, so that rules out the power-of-two thing. I've also just tried setting GL_TEXTURE_WRAP_S and GL_TEXTURE_WRAP_T to GL_CLAMP_TO_EDGE, and GL_TEXTURE_MIN_FILTER and GL_TEXTURE_MAG_FILTER to GL_LINEAR, but it hasn't made a difference. Also, not sure if it makes a difference, but I am using ATI's GLES SDK in Windows. My plan was to get it working in Windows, so I know it's working, before I move it across to my Raspberry Pi, which is my ultimate goal. I'll have more of a look tomorrow as I'm done for tonight.
  12. I'm porting my game framework from OpenGL 3/4 to OpenGL ES 2 but I'm having an issue where textures aren't being rendered. In one instance I'm using simple vertex buffer objects to store the vertices and texcoords. In my OpenGL code these are bound in vertex arrays, but I understand those aren't used in ES. Basically, I have some initialisation code where I do this:
[source lang="cpp"]
//glGenVertexArrays(1, &m_vao); //Not used in GLES.
glGenBuffers(2, m_vbo);

//glBindVertexArray(m_vao); //Not used in GLES.
glBindBuffer(GL_ARRAY_BUFFER, m_vbo[0]);
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(Vector2), ms_verts, GL_STATIC_DRAW);

glBindBuffer(GL_ARRAY_BUFFER, m_vbo[1]);
glBufferData(GL_ARRAY_BUFFER, 4 * sizeof(Vector2), ms_texCoords, GL_STATIC_DRAW);
[/source]
And then when it comes time to render I do this:
[source lang="cpp"]
int32_t* pUniformHandle = m_uniforms.Find(textureParamName);
if (pUniformHandle)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_textureHandle);
    glUniform1i(*pUniformHandle, 0);
}

//glBindVertexArray(m_vao); //Not used in GLES.
glBindBuffer(GL_ARRAY_BUFFER, m_vbo[0]);
glVertexAttribPointer(Effect::POSITION_ATTR, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(Effect::POSITION_ATTR);

glBindBuffer(GL_ARRAY_BUFFER, m_vbo[1]);
glVertexAttribPointer(Effect::TEXCOORD0_ATTR, 2, GL_FLOAT, GL_FALSE, 0, 0);
glEnableVertexAttribArray(Effect::TEXCOORD0_ATTR);

glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

glDisableVertexAttribArray(Effect::TEXCOORD0_ATTR);
glDisableVertexAttribArray(Effect::POSITION_ATTR);
[/source]
In OpenGL (with those "Not used in GLES" bits uncommented) this works fine, but in GLES the prims are being rendered without the texture being applied.
For reference, my shaders look like this:
Vertex shader:
[source lang="plain"]
uniform mat4 worldMatrix;
uniform mat4 projectionMatrix;

attribute vec2 aPosition;
attribute vec2 aTexCoord;

varying vec2 vTexCoord;

void main()
{
    vTexCoord = aTexCoord;

    mat4 worldProjMatrix = projectionMatrix * worldMatrix;
    gl_Position = worldProjMatrix * vec4(aPosition, 0.0, 1.0);
}
[/source]
Fragment shader:
[source lang="plain"]
uniform sampler2D texture;
uniform vec4 colour;

varying vec2 vTexCoord;

void main()
{
    gl_FragColor = texture2D(texture, vTexCoord) * colour;
}
[/source]
Can anyone see what I'm doing wrong? I've been scanning through the GLES book but I can't see anything obvious.
  13. I've started writing an Effect class which uses Cg shaders in OpenGL and I'm a bit confused about the order of operations when creating and rendering using Cg. Currently, my Effect class contains CGprogram, CGprofile and an array of CGparameter variables which get populated on loading the Effect, similar to this:
[CODE]
m_vertexProfile = cgGLGetLatestProfile(CG_GL_VERTEX);
cgGLSetOptimalOptions(m_vertexProfile);

m_vertexProgram = cgCreateProgramFromFile(g_cgContext, CG_SOURCE, fileName, m_vertexProfile, entryPointName, NULL);
cgGLLoadProgram(m_vertexProgram);

CGparameter param = cgGetFirstParameter(m_vertexProgram, CG_PROGRAM);
while (param)
{
    const char* paramName = cgGetParameterName(param);
    m_vertexParameters[m_vertexParamNum++] = param;
    param = cgGetNextParameter(param);
}
[/CODE]
It's not exactly like this and this is only using a vertex shader, but it contains the important code. Anyway, that's how I create the Effect, and then when I want to use it during a render I use Enable() and Disable() functions before and after I draw the verts etc.
[CODE]
void Effect::Enable()
{
    cgGLBindProgram(m_vertexProgram);
    cgGLEnableProfile(m_vertexProfile);
}

void Effect::Disable()
{
    cgGLUnbindProgram(m_vertexProfile);
    cgGLDisableProfile(m_vertexProfile);
}
[/CODE]
I'm not sure if this is the correct way to do it, though. Is it correct to enable and disable the profile for each shader? More to the point, do I actually want a profile per shader? I'm using the same profile for each shader, so surely I could just have a global one and use that? Any advice would be much appreciated.
  14. Bizarre camera rotation

    A bit more information, as I've just noticed something. If I set the camera up pointing down the Z axis and yaw it 90 degrees so it's facing down the X axis, then when I try to pitch it appears to roll instead. This leads me to believe that it is pitching correctly, but as if the camera were still pointing down the Z axis. What I'm expecting is for it to pitch up and down whilst still pointing down the X axis. So could it be an order of operations issue? It's essentially a free camera I'm after.
  15. Bizarre camera rotation

    I'm getting unexpected results when I add a rotation to my camera. I'm sure it's a really obvious bug but I'm having a bit of a blank, so I was hoping someone could help me. In my camera class I store four vectors: one for position, and then one each for left, up and forward for the rotation. Then at the end of my camera's update I create the view matrix via something like:
[CODE]
m_view = Matrix::CreateLookAt(m_position, m_position + m_forward, m_up);
[/CODE]
Anyway, my camera has a function for adding rotation, which gets called from the input component and looks something like this:
[CODE]
void Camera::AddRotation(const Matrix& rRotAdd)
{
    Matrix tempMat(m_left, m_up, m_forward);
    tempMat = rRotAdd * tempMat;

    m_left = tempMat.GetColumn0();
    m_up = tempMat.GetColumn1();
    m_forward = tempMat.GetColumn2();
}
[/CODE]
As you can see, I create a temporary matrix from the rotation member variables so that I can add on the rotation that's passed in, and then re-set the vectors from the resulting matrix. This seems to work fine if I only update the rotation on one axis, so just the pitch or just the yaw. But if I try to rotate in both pitch and yaw I'm not seeing the results I expect. The camera actually appears to roll, and the effect is worse the more pitch/yaw I put on. I can record a video of what it looks like if the issue isn't immediately obvious.
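A common way to avoid the apparent roll described in these two camera posts is to stop concatenating incremental rotation matrices (which accumulates error and applies each new rotation in the wrong frame) and instead accumulate yaw and pitch as plain angles, rebuilding the forward vector from them every update. This is a sketch of that alternative approach, not the poster's code: FreeCamera and its members are hypothetical, and a Y-up, forward-along-+Z convention is assumed.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// A free camera that stores orientation as two angles. Because the basis
// is rebuilt from yaw/pitch each time, combining the two axes can never
// introduce roll, and there is no drift from repeated matrix multiplies.
struct FreeCamera
{
    float yaw = 0.0f;    // radians, rotation about the world Y (up) axis
    float pitch = 0.0f;  // radians, rotation about the camera's local left axis

    // Forward direction derived from the angles; with yaw = pitch = 0
    // the camera looks down +Z.
    Vec3 Forward() const
    {
        return { std::cos(pitch) * std::sin(yaw),
                 std::sin(pitch),
                 std::cos(pitch) * std::cos(yaw) };
    }
};
```

Mouse/controller input then just adds deltas to `yaw` and `pitch` (clamping pitch short of straight up/down), and the existing `Matrix::CreateLookAt(m_position, m_position + forward, up)` call can be kept as-is.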