


About MrDoomMaster

  1. Problem with FBO

    You know, in every single example I've seen they use a RenderBuffer along with a Texture for the FBO. Perhaps not using the RenderBuffer is my problem? Maybe it is required?
  2. Problem with FBO

  3. Problem with FBO

    Sure, the code is below. Sorry for leaving this out, I forgot to post it with the rest of the code earlier:

    ```cpp
    template< typename TextureBinder >
    static void RenderTexturedQuad( rs::RenderSystem& renderSystem, unsigned panelWidth, unsigned panelHeight, TextureBinder textureBinder )
    {
        rs::RenderContext& context = renderSystem.GetContext();
        context.EnableTextureMapping( true );

        glDepthMask( GL_FALSE );
        glDisable( GL_LIGHTING );
        glPushMatrix();

        textureBinder(); // This will bind the texture in a user-defined way.

        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
        glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );

        glBegin( GL_QUADS );
        {
            float halfWidth = panelWidth / 2.f;
            float halfHeight = panelHeight / 2.f;

            glColor3f( 1, 1, 1 );

            // top left
            glTexCoord2f( 0, 1 );
            glVertex2f( -halfWidth, halfHeight );

            // top right
            glTexCoord2f( 1, 1 );
            glVertex2f( halfWidth, halfHeight );

            // bottom right
            glTexCoord2f( 1, 0 );
            glVertex2f( halfWidth, -halfHeight );

            // bottom left
            glTexCoord2f( 0, 0 );
            glVertex2f( -halfWidth, -halfHeight );
        }
        glEnd();

        glPopMatrix();
        glDepthMask( GL_TRUE ); // re-enable Z writes
    }
    ```
  4. Problem with FBO

    Quote: Original post by Gage64
    You don't seem to specify any texture coordinates.

    Where? When I'm rendering to my FBO's texture, I'm trying to render a red square, and to do this I use orthographic projection and wireframe mode.
  5. Problem with FBO

    Quote: Original post by HuntsMan
    Set the MIN/MAG filters to LINEAR after glTexImage2D. By default GL uses mipmapped filters and your texture doesn't have mipmaps, so the texture object is invalid.

    I added:

    ```cpp
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST );
    glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST );
    ```

    right after glTexImage2D(), and this didn't change anything. And yes, GL_NEAREST is what I want.
  6. Problem with FBO

    Hello, I'm currently following the FBO 101 guide here: http://www.gamedev.net/reference/articles/article2331.asp

    I am trying to render a square (using orthographic projection) to a texture and then render that texture to the screen, still in an orthographic projection. However, the texture is completely white when I render it. Even when I call glClear( GL_COLOR_BUFFER_BIT ) before I render anything, nothing changes. Here's the code I'm executing:

    ```cpp
    void RenderSelectionTexture::InitializeOffscreenRendering( unsigned fboWidth, unsigned fboHeight )
    {
        m_fboWidth = fboWidth;
        m_fboHeight = fboHeight;

        glGenTextures( 1, &m_fboTexture );
        glBindTexture( GL_TEXTURE_2D, m_fboTexture );
        glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, fboWidth, fboHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL );
        glBindTexture( GL_TEXTURE_2D, 0 );

        glGenFramebuffers( 1, &m_fbo );
        glBindFramebuffer( GL_FRAMEBUFFER, m_fbo );
        glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_fboTexture, 0 );
        assert( GL_FRAMEBUFFER_COMPLETE == glCheckFramebufferStatus( GL_FRAMEBUFFER ) );
        glBindFramebuffer( GL_FRAMEBUFFER, 0 );
    }

    void RenderSelectionTexture::RenderRectangle( unsigned x, unsigned y, unsigned width, unsigned height )
    {
        float realX = (int)x - m_fboWidth / 2;
        float realY = (int)y - m_fboHeight / 2;

        //glPolygonMode( GL_FRONT_AND_BACK, GL_LINE );
        glDisable( GL_LIGHTING );
        glDisable( GL_DEPTH_TEST );
        glClear( GL_COLOR_BUFFER_BIT );
        glBindTexture( GL_TEXTURE_2D, 0 );

        glBegin( GL_QUADS );
        {
            glColor3f( 1.0f, 0, 0 );
            glVertex2f( realX, realY+height );
            glVertex2f( realX, realY );
            glVertex2f( realX+width, realY );
            glVertex2f( realX+width, realY+height );
        }
        glEnd();

        GLenum err = glGetError();
        if( err != GL_NO_ERROR )
        {
            std::string errDesc( reinterpret_cast<const char*>(gluErrorString(err)) );
            int breakhere = 0;
        }
        //glPolygonMode( GL_FRONT_AND_BACK, GL_FILL );
    }

    void RenderSelectionTexture::Render( rs::RenderSystem& renderSystem )
    {
        if( m_rendering )
        {
            glBindFramebuffer( GL_FRAMEBUFFER, m_fbo );
            glPushAttrib( GL_VIEWPORT_BIT );
            glViewport( 0, 0, m_fboWidth, m_fboHeight );
            {
                RenderRectangle( m_selectionX, m_selectionY, m_selectionWidth, m_selectionHeight );
            }
            glPopAttrib();
            glBindFramebuffer( GL_FRAMEBUFFER, 0 );

            RenderTexturedQuad( renderSystem, m_panelWidth, m_panelHeight, boost::bind( &BindTexture, m_fboTexture ) );
        }
    }
    ```

    InitializeOffscreenRendering() is called once on construction, and then Render() is called once per game loop. Does anyone have any idea why this isn't working for me?

    Note that I skipped the RenderBuffer part of the guide I linked earlier; I didn't need a depth or stencil buffer. I assumed that would not prevent the FBO from working.
  7. Rotation question using quaternions

    What should the default quaternion for my model be? Should it be a 0-degree rotation about the (0,1,0) axis? And should I then concatenate the world rotation quaternion onto that one?
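For what it's worth, a 0-degree rotation about (0,1,0), or about any axis, collapses to the identity quaternion, which is the usual default orientation. A minimal sketch of the axis-angle construction (the `Quat` struct and `MakeAxisAngle` are illustrative names, not from any particular library):

```cpp
#include <cmath>

// Illustrative quaternion type; not from any particular engine.
struct Quat { float w, x, y, z; };

// Axis-angle to quaternion: w = cos(angle/2), (x,y,z) = axis * sin(angle/2).
// Assumes the axis is already normalized.
Quat MakeAxisAngle( float ax, float ay, float az, float angleRad )
{
    float half = angleRad * 0.5f;
    float s = std::sin( half );
    return Quat{ std::cos( half ), ax * s, ay * s, az * s };
}
```

With angle = 0, sin(0) = 0 wipes out the axis entirely, so any axis yields the same identity quaternion (1, 0, 0, 0) and concatenating a world rotation onto it just gives the world rotation.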
  8. Hi, I currently have a planet in my game which I plan to have a character walking on. I need to rotate this character so that his feet are always touching the ground. I have a feeling I'll need to use quaternions for this, but I'm not sure what to do mathematically. Here's what I know:

    1) I know the center of the planet (0,0,0).
    2) I know the location of the character on the surface in Cartesian coordinates.
    3) I know the "up" vector (surface normal) at the current player's location by subtracting #1 from #2.
    4) The default "up" vector for a player in world space is (0,1,0) (the spine is aligned with the Y axis).

    I also have a quaternion object I can use. Help is appreciated.
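Not the thread's answer, just one common approach: build the shortest-arc quaternion that rotates the default up (0,1,0) onto the surface normal (character position minus planet center, normalized). A hedged sketch with illustrative names, assuming both inputs are unit vectors and not exactly opposite (the 180-degree case needs a special-case axis):

```cpp
#include <cmath>

// Illustrative quaternion type; not from any particular engine.
struct Quat { float w, x, y, z; };

// Shortest-arc rotation taking unit vector `from` to unit vector `to`.
// Uses dot = cos(angle) and cross = axis * sin(angle), then the
// half-angle identity to build the quaternion without calling acos.
Quat RotationBetween( float fx, float fy, float fz,
                      float tx, float ty, float tz )
{
    float d  = fx*tx + fy*ty + fz*tz;        // dot product: cos(angle)
    float cx = fy*tz - fz*ty;                // cross product: axis * sin(angle)
    float cy = fz*tx - fx*tz;
    float cz = fx*ty - fy*tx;
    float s  = std::sqrt( (1.0f + d) * 2.0f ); // = 2 * cos(angle/2)
    return Quat{ s * 0.5f, cx / s, cy / s, cz / s };
}
```

Applied with from = (0,1,0) and to = the surface normal, this orients the spine along the local up; any facing/world rotation can then be concatenated onto it.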
  9. Cg Shader problem

    Quote: Original post by V-man
    Why don't you use a texcoord for your weights? Use TEXCOORD1
    And what about this? I think GPUs prefer working with float
    int4 indices : BLENDINDICES;

    This might be a good workaround for now, but I still need to figure out why my shader isn't working with BLENDWEIGHT. Also, the BLENDINDICES binding used to be a float4; I switched it to int4 to see whether it would work. They're indices, so it would make sense for them to be integers, not floats. If it doesn't work I'm going to switch it back to float4.
  10. Cg Shader problem

    I'm sending 4 float values in BLENDWEIGHT, so I need it to be float4. In the Cg documentation, they have Cg samples (under Improved skinning examples) that define BLENDWEIGHT as a float4.
  11. Cg Shader problem

    Hi, I'm compiling my Cg shader for the profile arbvp1, however for some reason it won't let me use BLENDWEIGHT. I'll show you the source code below:

    ```cg
    struct vertex_in
    {
        float3 position : POSITION;
        float3 normal   : NORMAL;
        float3 tangent  : TANGENT;
        float4 weights  : BLENDWEIGHT;
        int4   indices  : BLENDINDICES;
        float2 uv       : TEXCOORD0;
    };

    struct vertex_out
    {
        float4 position : POSITION;
        float2 texCoord : TEXCOORD0;
        float3 normal   : TEXCOORD1;
    };

    // the main function
    vertex_out main( vertex_in IN,
                     uniform float4x4 bones[5],
                     uniform float4x4 modelView : state.matrix.modelview[0],
                     uniform float4x4 projection : state.matrix.projection )
    {
        vertex_out OUT;

        // skin the vertex position and normal
        float4 position = float4(IN.position, 1);
        float4 normal = float4(IN.normal, 0);

        // the skinned attributes
        float3 skinnedPos = (float3)0;
        float3 skinnedNormal = (float3)0;

        // perform the skinning
        for(unsigned int i=0; i<4; ++i)
        {
            skinnedPos += (mul( position, bones[IN.indices[i]] ) * IN.weights[i]).xyz;
            skinnedNormal += (mul( normal, bones[IN.indices[i]] ) * IN.weights[i]).xyz;
        }

        skinnedPos = mul( modelView, float4(skinnedPos,1) ).xyz;

        OUT.position = mul( projection, float4(skinnedPos,1) );
        OUT.normal = mul( modelView, float4(skinnedNormal, 1) ).xyz;
        OUT.texCoord = IN.uv;

        return OUT;
    }
    ```

    This is the error I'm getting when I run it through the compiler:

        warning C7019: "weights" is too large for semantic "BLENDWEIGHT", which is size 1

    And this is the command line I'm using:

        cgc test.cg -profile arbvp1 -nocode

    Does anyone know why it isn't working? I'm new to shaders, so go easy on me.
  12. Hi. Right now in my Cg vertex shader I have two vertex attributes bound to BLENDWEIGHT and BLENDINDICES. To set the BLENDWEIGHT attribute I'm currently using glWeightfvARB(). What do I use to set BLENDINDICES?
  13. Cg Shader question

    Hi, I'm pretty new to Cg shaders and I was wondering what the overhead of cgGetNamedParameter() is. Basically, I have a small wrapper over the Cg runtime library that calls cgGetNamedParameter() each time a parameter is set, instead of caching the resulting CGparameter object. Is this inefficient? Should I consider caching the CGparameter objects in an unordered_map?

    Also, as a second question: what justifies needing to set uniform parameters again? In other words, if I set a uniform parameter only once at Cg initialization time (like the modelview projection), my geometry will not draw. However, if I set the uniform parameter once before I bind the vertex shader each frame, it works. So my question really is: what causes the uniform parameters to be "reset"? Thanks.
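As a sketch of the caching idea: look each parameter up once per name and reuse the handle. Here `Handle` stands in for `CGparameter` and the injected `lookup` callback stands in for `cgGetNamedParameter`; the class name and structure are made up for illustration, not taken from the Cg runtime.

```cpp
#include <functional>
#include <string>
#include <unordered_map>
#include <utility>

// Caches name -> handle so the (string-based) lookup runs once per name
// instead of once per set call.
template <typename Handle>
class ParameterCache
{
public:
    explicit ParameterCache( std::function<Handle(const std::string&)> lookup )
        : m_lookup( std::move(lookup) ) {}

    Handle Get( const std::string& name )
    {
        auto it = m_cache.find( name );
        if( it != m_cache.end() )
            return it->second;              // cache hit: no lookup call
        Handle h = m_lookup( name );        // cache miss: one lookup call
        m_cache.emplace( name, h );
        return h;
    }

private:
    std::function<Handle(const std::string&)> m_lookup;
    std::unordered_map<std::string, Handle> m_cache;
};
```

This assumes the handle stays valid for the program's lifetime; if the shader program is destroyed and recreated, the cache would need to be cleared.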
  14. OpenGL Object Oriented OGL Wrapper?

    Quote: Original post by Brother Bob
    Integers are mapped to floating-point values (which are the basis of all colors in OpenGL as far as the specification's description goes) such that 0 maps to 0.0 and the maximum integer value maps to 1.0. This means that the difference between GL_BYTE and GL_UNSIGNED_BYTE is that the first maps [0, 127] to [0, 1], while the second maps [0, 255] to [0, 1]. Negative values, which are allowed in GL_BYTE, all map to 0. No, signed and unsigned bytes do not have the same binary representation, and OpenGL does distinguish between them. The first is a signed value, the second is an unsigned value, and the top bit (at least in two's complement) has a different meaning. OpenGL is very consistent on these parts.

    Now that you've explained the mapping to [0, 1] I understand. It doesn't make much sense to me to have this mapping in the first place, but at least I understand the fundamental difference between GL_BYTE and GL_UNSIGNED_BYTE.

    When I said the binary representation is the same, what I meant to say is that if you have an array of bytes as signed versus unsigned, the binary representation does not change as you cast between the two in either direction. If you look at a raw set of 8 bits and the most significant bit happens to be 1, you have no way of telling whether it is signed or unsigned. Signedness, at least in regard to C++, is merely an interpretation. On two's-complement machines, signed/unsigned determines how to handle under/overflow, but nothing more.

    Just curious, though: why does OGL map [0, 127] to [0, 1]? Why not map [-127, 127] to [0, 1]? What is the purpose of such a mapping?

    P.S.: I apologize as I am getting slightly off topic from my original question, but for me this ties in a bit. Perhaps if I understand these concepts I'll avoid the need for a wrapper to control state.
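To make the two mappings concrete, here is a simplified sketch of how the same bit pattern normalizes differently depending on the declared type. It uses the plain c/max convention with color clamping; the exact formula in the legacy GL specification differs slightly, so treat this as illustration, not the spec.

```cpp
#include <cstdint>

// 0xFF is 255 when read as unsigned but -1 when read as signed, so
// GL_UNSIGNED_BYTE sees full intensity where GL_BYTE sees a negative
// value that clamps to 0 for colors.

// Maps [0, 255] to [0, 1].
float NormalizeUnsignedByte( std::uint8_t c )
{
    return c / 255.0f;
}

// Maps [-127, 127] to [-1, 1], then clamps negatives to 0 as color
// values are clamped.
float NormalizeSignedByte( std::int8_t c )
{
    float v = c / 127.0f;
    return v < 0.0f ? 0.0f : v;
}
```

This is why passing the same "char" buffer as GL_BYTE instead of GL_UNSIGNED_BYTE darkens or zeroes out every value with the top bit set.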
  15. OpenGL Object Oriented OGL Wrapper?

    While I would prefer something object oriented through C++, the main goal here is to have a system that takes care of state swapping automatically. For example, I want a texture object that automatically takes care of binding and other related state; I've run into too many issues to keep wasting time dealing with this myself. For example, the second-to-last parameter of glTexImage2D() takes one of GL_BYTE or GL_UNSIGNED_BYTE, however I find that GL_BYTE doesn't work at all, even though the data I'm passing in is an array of "char". Why would GL_UNSIGNED_BYTE work any differently? The binary representation of the data is the same in either case. The fact that OGL differentiates these in certain situations seems dumb to me, or maybe I'm just dumb because I don't understand.