Shinpou

Members

  • Content count: 45
  • Joined
  • Last visited

Community Reputation

140 Neutral

About Shinpou

  • Rank: Member
  1. Ok, the remaining issue was just a case of old drivers. Thanks for your help :)
  2. Hey sgsrules, thanks for your input. I think I found what was causing the problem, or at least part of it. Here's what I had once I collapsed my code:

        glBindFramebuffer(GL_FRAMEBUFFER, fboID_);
        glPushAttrib(GL_VIEWPORT_BIT);
        glViewport(0, 0, TState::GetSingleton().GetPreDef(ViewportWidth),
                   TState::GetSingleton().GetPreDef(ViewportHeight));
        glDisable(GL_BLEND);
        glDisable(GL_DITHER);
        glDisable(GL_TEXTURE_1D);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_TEXTURE_3D);
        glShadeModel(GL_FLAT);
        glPushMatrix();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        // Draw selection data here

        glReadPixels(x, rendererSettings_->screenHeight - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
        glPopMatrix();
        glPopAttrib();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        glEnable(GL_BLEND);
        glEnable(GL_DITHER);
        glEnable(GL_TEXTURE_1D);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_TEXTURE_3D);
        glShadeModel(GL_SMOOTH);

    Somehow, that glShadeModel call is what would break the texture rendering after one frame. Having the code in this form also made me realize that, since there is a glPushAttrib and a matching glPopAttrib, I didn't need those glEnable calls in the first place. So I purged the trailing glEnable and glShadeModel calls and changed my glPushAttrib call to this:

        glPushAttrib(GL_VIEWPORT_BIT | GL_COLOR_BUFFER_BIT | GL_ENABLE_BIT | GL_LIGHTING_BIT);

    As far as I can tell, this fixed the problem and made the NVIDIA drivers accept my code. However, I've stumbled into other issues, and it seems as though there is now another problem on some NVIDIA cards, but not all of them. My question thus is: are you sure I would need to call glReadPixels after I've called DeactivateFBO(), or did you mean something else? I've found no resources supporting this approach. Best regards, Lari
  3. Hey, OK, so my problem is that I'm using an FBO to render my selection data, and then reading it back with glReadPixels. However, when I render my scene, the first frame renders absolutely perfectly, and the second frame ends up with all textured quads fully white. Here is the code with which I activate and deactivate my FBO:

        void CFBO::ActivateFBO()
        {
            glBindFramebuffer(GL_FRAMEBUFFER, fboID_);
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, TState::GetSingleton().GetPreDef(ViewportWidth),
                       TState::GetSingleton().GetPreDef(ViewportHeight));
        }

        void CFBO::DisableFBO()
        {
            glPopAttrib();
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Some guy somewhere in the vast waves of the internets said that it's the call to glBindFramebuffer(GL_FRAMEBUFFER, 0) which is causing this. Have any of you had this same problem? Also, this issue only happens on NVIDIA cards; the code works perfectly on ATI cards. Here's how I'm generating my FBO, for reference's sake:

        GLenum errorvalue = glGetError();
        if (errorvalue != GL_NO_ERROR)
        {
            std::cout << "OpenGL errors present..." << std::endl;
        }
        GLuint width  = TState::GetSingleton().GetPreDef(ViewportWidth);
        GLuint height = TState::GetSingleton().GetPreDef(ViewportHeight);
        glGenFramebuffers(1, &fboID_);
        glGenRenderbuffers(1, &depthBufID_);
        glBindFramebuffer(GL_FRAMEBUFFER, fboID_);
        glBindRenderbuffer(GL_RENDERBUFFER, depthBufID_);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT, width, height);
        errorvalue = glGetError();
        if (errorvalue != GL_NO_ERROR)
        {
            std::cout << "Error in frame buffer creation: glRenderbufferStorage(selectionDbID_)\n";
        }
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, depthBufID_);
        errorvalue = glGetError();
        if (errorvalue != GL_NO_ERROR)
        {
            std::cout << "Error in frame buffer creation: glFramebufferRenderbuffer(depthBufID_)\n";
        }
        glGenTextures(1, &textureID_);
        glBindTexture(GL_TEXTURE_2D, textureID_);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureID_, 0);
        // Core status check to match the core FBO calls above
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (status != GL_FRAMEBUFFER_COMPLETE)
        {
            std::cout << "Frame buffer object error\n";
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

    And here's how I'm using that FBO in my selection code:

        selectionFBO_->ActivateFBO();
        glDisable(GL_BLEND);
        glDisable(GL_DITHER);
        glDisable(GL_TEXTURE_1D);
        glDisable(GL_TEXTURE_2D);
        glDisable(GL_TEXTURE_3D);
        glShadeModel(GL_FLAT);
        glPushMatrix();
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        for (unsigned int i = 0; i < selectionBuffer_.size(); i++)
        {
            AssetMetaData * CurrItem = selectionBuffer_[i];
            glMatrixMode(GL_MODELVIEW);
            // Load up the position matrix of the object
            glLoadIdentity();
            activeCamera_->Update();
            // Move the object to its position
            glTranslatef(CurrItem->Position.x, CurrItem->Position.y, -CurrItem->Depth);
            // Rotate the object to its orientation
            glRotatef(CurrItem->Orientation, 0.0f, 0.0f, 1.0f);
            glColor3ub(CurrItem->gColorID[0], CurrItem->gColorID[1], CurrItem->gColorID[2]);
            glBegin(GL_QUADS);
            // Top-left vertex (corner)
            glVertex3f(-CurrItem->Center.x, CurrItem->Size.y - CurrItem->Center.y, 0);
            // Bottom-left vertex (corner)
            glVertex3f(CurrItem->Size.x - CurrItem->Center.x, CurrItem->Size.y - CurrItem->Center.y, 0);
            // Bottom-right vertex (corner)
            glVertex3f(CurrItem->Size.x - CurrItem->Center.x, -CurrItem->Center.y, 0);
            // Top-right vertex (corner)
            glVertex3f(-CurrItem->Center.x, -CurrItem->Center.y, 0);
            glEnd();
        }
        unsigned char pixel[3];
        float x = TState::GetSingleton().GetPreDef(MouseCoordinateX);
        float y = TState::GetSingleton().GetPreDef(MouseCoordinateY);
        glReadPixels(x, rendererSettings_->screenHeight - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);
        if (!(pixel[0] == 0 && pixel[1] == 0 && pixel[2] == 0))
        {
            for (uint32 i = 0; i < selectionBuffer_.size(); ++i)
            {
                if (selectionBuffer_[i]->gColorID[0] == pixel[0] &&
                    selectionBuffer_[i]->gColorID[1] == pixel[1] &&
                    selectionBuffer_[i]->gColorID[2] == pixel[2])
                {
                    currentAssetUnderMouse_ = selectionBuffer_[i]->Asset;
                    break;
                }
            }
        }
        glPopMatrix();
        selectionFBO_->DisableFBO();
        glEnable(GL_BLEND);
        glEnable(GL_DITHER);
        glEnable(GL_TEXTURE_1D);
        glEnable(GL_TEXTURE_2D);
        glEnable(GL_TEXTURE_3D);
        glShadeModel(GL_SMOOTH);
  4. Hey, I think the problem is in your vertex shader code. I've had this same problem multiple times: I'll have code in my vertex shader that scales or shifts the texture coordinates somehow, then forget about that shift while implementing my fragment shader code, and end up with behaviour where the fragment shader is effectively applied to only half the quad. I'd check that first for errors. Here's code that should give you plain texture coordinate mapping:

        varying vec2 texCoord;

        void main(void)
        {
            // gl_MultiTexCoord0 is a vec4; take its s/t components for the vec2 varying
            texCoord = gl_MultiTexCoord0.st;
            gl_Position = ftransform();
        }
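    For completeness, here's a sketch of a matching fragment shader that just samples a texture with that varying, shown as it might be compiled from C++. The sampler name "tex" and the surrounding loading code are illustrative assumptions, not from the original thread:

        const char* fragSrc =
            "varying vec2 texCoord;\n"
            "uniform sampler2D tex;\n"
            "void main(void)\n"
            "{\n"
            "    gl_FragColor = texture2D(tex, texCoord);\n"
            "}\n";
        GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);  // compile it like any fragment shader
        glShaderSource(fs, 1, &fragSrc, NULL);
        glCompileShader(fs);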
  5. [SOLVED] FBO with glReadPixel

    Oh yes! That was exactly the issue, thank you so much. I didn't realize the FBO needed to be bound before the attachments are made. :)
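    For anyone hitting the same thing, here's a minimal sketch of the corrected creation order, reusing the names from the original question further down (and GL_RGBA8 instead of the original GL_RGBA4, since exact color matching wants a full 8 bits per channel). The key point: the framebuffer must be bound before glFramebufferRenderbuffer has anything to attach to.

        glGenFramebuffers(1, &selectionFbID_);
        glGenRenderbuffers(1, &selectionCbID_);
        glGenRenderbuffers(1, &selectionDbID_);

        glBindFramebuffer(GL_FRAMEBUFFER, selectionFbID_);  // bind FIRST

        // Color renderbuffer
        glBindRenderbuffer(GL_RENDERBUFFER, selectionCbID_);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA8,
                              rendererSettings_->screenWidth, rendererSettings_->screenHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                  GL_RENDERBUFFER, selectionCbID_);

        // Depth renderbuffer
        glBindRenderbuffer(GL_RENDERBUFFER, selectionDbID_);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT,
                              rendererSettings_->screenWidth, rendererSettings_->screenHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                  GL_RENDERBUFFER, selectionDbID_);

        glBindFramebuffer(GL_FRAMEBUFFER, 0);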
  6. [SOLVED] FBO with glReadPixel

    Yeah, I did have

        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    right before the glPopAttrib() call. Also, I changed the color format, but it still doesn't seem to be working. I'm just gonna post the whole rendering code here, with all the redundant stuff that's in it:

        if (SelectionBuffer.size() > 0)
        {
            glBindFramebuffer(GL_FRAMEBUFFER, selectionFbID_);
            glPushAttrib(GL_VIEWPORT_BIT);
            glViewport(0, 0, rendererSettings_->screenWidth, rendererSettings_->screenHeight);

            // Disable everything that might change the end-product color of a rendered quad
            CurrentAssetUnderMouse = NULL;
            glDisable(GL_TEXTURE_2D);
            glDisable(GL_BLEND);
            glDisable(GL_DITHER);
            glDisable(GL_FOG);
            glDisable(GL_LIGHTING);
            glDisable(GL_TEXTURE_1D);
            glDisable(GL_TEXTURE_2D);
            glDisable(GL_TEXTURE_3D);
            glShadeModel(GL_FLAT);
            glPushMatrix();
            for (unsigned int i = 0; i < SelectionBuffer.size(); i++)
            {
                AssetMetaData * CurrItem = SelectionBuffer[i];
                glMatrixMode(GL_MODELVIEW);
                // Load up the position matrix of the object
                glLoadIdentity();
                ActiveCamera->Update();
                // Move the object to its position
                glTranslatef(CurrItem->Position.x, CurrItem->Position.y, (GLfloat)-CurrItem->Depth);
                // Rotate the object to its orientation
                glRotatef(CurrItem->Orientation, 0.0f, 0.0f, 1.0f);
                glColor3ub(CurrItem->gColorID[0], CurrItem->gColorID[1], CurrItem->gColorID[2]);
                glBegin(GL_QUADS);
                // Top-left vertex (corner)
                glVertex3f(-CurrItem->Center.x, CurrItem->Size.y - CurrItem->Center.y, 0);
                // Bottom-left vertex (corner)
                glVertex3f(CurrItem->Size.x - CurrItem->Center.x, CurrItem->Size.y - CurrItem->Center.y, 0);
                // Bottom-right vertex (corner)
                glVertex3f(CurrItem->Size.x - CurrItem->Center.x, -CurrItem->Center.y, 0);
                // Top-right vertex (corner)
                glVertex3f(-CurrItem->Center.x, -CurrItem->Center.y, 0);
                glEnd();
            }
            unsigned char pixel[3];
            float x = X::GetSingleton().EngineState->GetPreDef(MouseCoordinateX);
            float y = X::GetSingleton().EngineState->GetPreDef(MouseCoordinateY);
            // glReadPixels has a bottom-left origin, so flip the mouse y using the screen height
            glReadPixels(x, rendererSettings_->screenHeight - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);

            std::stringstream temp;
            temp << "R: " << (int)pixel[0];
            temp << " G: " << (int)pixel[1];
            temp << " B: " << (int)pixel[2];
            temp << " X: " << x;
            temp << " Y: " << y;
            DrawText(temp.str(), 0, 90, 0.3);

            // Now our picked screen pixel color is stored in pixel[0..2],
            // so we search through our object list looking for the object that was selected
            for (unsigned int i = 0; i < SelectionBuffer.size(); ++i)
            {
                if (SelectionBuffer[i]->gColorID[0] == pixel[0] &&
                    SelectionBuffer[i]->gColorID[1] == pixel[1] &&
                    SelectionBuffer[i]->gColorID[2] == pixel[2])
                {
                    CurrentAssetUnderMouse = SelectionBuffer[i]->Asset;
                    break;
                }
            }
            glPopMatrix();
            glEnable(GL_TEXTURE_2D);
            glEnable(GL_DITHER);
            glEnable(GL_BLEND);
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            glPopAttrib();
            glBindFramebuffer(GL_FRAMEBUFFER, 0);
        }

    Am I using the FBO right?
  7. Moved the post to a new thread.
  8. Decided to put this in a new post, since it's kind of a different issue: do you guys have any samples or input on how to use FBOs with glReadPixels? I've created my FBO:

        glGenFramebuffers(1, &selectionFbID_);

    created a depth buffer and a color buffer (or that's what I think I'm doing; the tutorials regarding this are awfully vague):

        glGenRenderbuffers(1, &selectionDbID_);
        glGenRenderbuffers(1, &selectionCbID_);

    and initialized those two buffers and bound them to the FBO, afaik:

        glBindRenderbuffer(GL_RENDERBUFFER, selectionDbID_);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT,
                              rendererSettings_->screenWidth, rendererSettings_->screenHeight);
        glBindRenderbuffer(GL_RENDERBUFFER, selectionCbID_);
        glRenderbufferStorage(GL_RENDERBUFFER, GL_RGBA4,
                              rendererSettings_->screenWidth, rendererSettings_->screenHeight);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, selectionCbID_);
        glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, selectionDbID_);

    Am I missing something crucial? After this, glCheckFramebufferStatus and glGetError show it's supposedly a-ok. The above code is in my renderer class' constructor. Now, when I render my selection quads, I have this code in a member function of the renderer class:

        // <---- Function start ---->
        glBindFramebuffer(GL_FRAMEBUFFER, selectionFbID_);
        glPushAttrib(GL_VIEWPORT_BIT);
        glViewport(0, 0, rendererSettings_->screenWidth, rendererSettings_->screenHeight);

        // Huge number of gl-disables here

        for (...)
        {
            glBegin();
            // Quad rendering code that does work
            glEnd();
        }

        // Then after rendering I wanna read from my FBO:
        glReadPixels(x, viewport[3] - y, 1, 1, GL_RGB, GL_UNSIGNED_BYTE, pixel);

        // Check for color matches here

        // And finally
        glPopAttrib();
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        // <---- Function end ---->

    Is there something I haven't taken into account for this to not be working? I'm one hundred percent sure the rendering code works fine, because my selection worked fine before I tried adding the FBO rendering to it... I'm guessing glReadPixels is reading the wrong place, or that my FBO is incomplete. Any input regarding this?
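    (A small debugging sketch, not from the original thread: decode what glCheckFramebufferStatus actually returned. Note that it reports on whatever framebuffer is currently bound; with no FBO bound it checks the always-complete default framebuffer, which can make a broken FBO look "a-ok".)

        glBindFramebuffer(GL_FRAMEBUFFER, selectionFbID_);
        GLenum status = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        switch (status)
        {
            case GL_FRAMEBUFFER_COMPLETE:
                break;  // all good
            case GL_FRAMEBUFFER_INCOMPLETE_ATTACHMENT:
                std::cout << "FBO: an attachment is incomplete\n"; break;
            case GL_FRAMEBUFFER_INCOMPLETE_MISSING_ATTACHMENT:
                std::cout << "FBO: no attachments at all\n"; break;
            case GL_FRAMEBUFFER_UNSUPPORTED:
                std::cout << "FBO: unsupported format combination\n"; break;
            default:
                std::cout << "FBO: status 0x" << std::hex << status << std::dec << "\n"; break;
        }
        glBindFramebuffer(GL_FRAMEBUFFER, 0);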
  9. dynamic streaming of assets?

    I'm not an expert on the matter, so what you're gonna get from me is just some intuition on how to go about it. With a 2D game, I'd probably start by generating some sort of spatial partitioning out of quads the size of the screen (even a very simple one will do), with which I can tell easily and pretty quickly what part of my level is about to be exposed. Then all I'd do is always keep loaded the assets needed to render an area about 9 times the size of the viewport, centered under the camera; depending on your maximum camera velocity, or if the camera can be zoomed out quickly, you can make that preloaded area bigger. In fact, if you just give your resource loader class some maximum amount of memory it can take up, and make the loading logic always respect that cap no matter what, you can get the pre-emptive loading you want. (The loader starts from the center point of your screen and works outward from that point until it hits its memory limit.)

    Now, regarding the technical side, I'd say what you want is:

    1. A reference-counting class for storing your assets, so when you load your level you create an instance of every possible asset that stores its own file path and has its loading logic ready to be launched. This sorts out the problem of loading duplicates.
    2. Multi-threaded loading logic that invokes the load functions of your assets pre-emptively, on whatever policy you want.
    3. Generic memory management: if an asset has 0 references, it should be unloaded.
    4. Some common sense: every asset should count how many times it's been drawn. The higher that value, the more priority it gets when loading in, and the lower priority it gets when being taken out of memory.
    5. (Optional, and maybe unnecessary for a 2D game) Include some data in your level files about how many times a given asset appears in a level, and use that to determine what should always be in memory and what is worth streaming in and out.

    Finally, only run your asset unloading logic when the memory cap is overrun, then check your assets in this priority order:

    1. If reference count == 0, unload from memory.
    2. Find the asset with the smallest render count && biggest distance from the camera && lowest occurrence value in this level, and unload it.
    3. Repeat until you go back under your memory cap.

    That's just me bantering tho; you can find some good multi-threading in the Boost libraries, which should also have good stuff for those safe reference-counting pointers. Also check out Enginuity for some ideas on making that memory management work for you. A rough sketch of points 1 and 3 follows below. :)
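    (A rough, illustrative C++ sketch of the reference-counted, memory-capped cache from points 1 and 3 above. Every name here — Asset, AssetCache, maxBytes — is made up for the example, and the render-count/distance eviction pass is left out.)

        #include <map>
        #include <memory>
        #include <string>

        struct Asset
        {
            std::string path;
            size_t      sizeBytes   = 0;
            int         renderCount = 0;  // bumped each time the asset is drawn
            // the actual loading logic would live here
        };

        class AssetCache
        {
        public:
            explicit AssetCache(size_t maxBytes) : maxBytes_(maxBytes) {}

            std::shared_ptr<Asset> Acquire(const std::string& path)
            {
                auto it = cache_.find(path);
                if (it != cache_.end())
                    return it->second;  // already resident: no duplicate loads
                auto asset = std::make_shared<Asset>();
                asset->path = path;
                // ... load from disk (possibly on a worker thread) and fill sizeBytes ...
                usedBytes_ += asset->sizeBytes;
                cache_[path] = asset;
                EvictIfOverCap();
                return asset;
            }

        private:
            void EvictIfOverCap()
            {
                // Drop anything only the cache itself still references
                // (use_count() == 1 means zero external references).
                for (auto it = cache_.begin();
                     it != cache_.end() && usedBytes_ > maxBytes_; )
                {
                    if (it->second.use_count() == 1)
                    {
                        usedBytes_ -= it->second->sizeBytes;
                        it = cache_.erase(it);
                    }
                    else
                        ++it;
                }
                // A second pass ordered by render count / camera distance /
                // per-level occurrence would go here.
            }

            std::map<std::string, std::shared_ptr<Asset>> cache_;
            size_t maxBytes_;
            size_t usedBytes_ = 0;
        };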
  10. Hey all :) OK, so I implemented selection in my pet game project using color picking. However, I got some extremely weird behaviour when I had two quads very close to each other and slightly into the distance: when I moved my mouse to the intersection of these two quads, glReadPixels would return something like an average of their two colors. Now, I had disabled everything for the time I render the colored quads, but frankly I never found the error, until I thought: what if the driver is forcing anti-aliasing? And it indeed was (I had enabled forced 2x anti-aliasing in the driver settings), and with that off the inaccuracy is now gone. With that problem fixed, it still remains a problem that my game will not work if someone enables forced AA. The question thus is: what can I do to stop forced AA from breaking this? (One partial idea is sketched below.) Best regards, Lari
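    (A hedged suggestion rather than a guaranteed fix: explicitly turn multisampling off around the picking pass. Driver-forced AA may still override application settings on the default framebuffer, which is why rendering the picking pass into a single-sampled FBO of your own, as in the FBO threads above, is the more robust route.)

        // Keep the picking pass single-sampled so blended edge colors
        // can't corrupt the exact ID-color matching.
        glDisable(GL_MULTISAMPLE);
        // ... render the flat-colored picking quads, then glReadPixels ...
        glEnable(GL_MULTISAMPLE);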
  11. Yup, I tried doing selection with it in my pet project as well, and it was so ridiculously slow that I had to scrap the damn thing.
  12. Ze PC game scene

    And one more thing that comes to mind: with generalized cross-platform game engines, we'll lose the evolution of games you would see happen on a single platform while that platform stayed the same. Like on the PS2, at the very end of its life you would see Final Fantasy XII, and my god, you wouldn't believe your eyes that that game and those graphics were running on that platform. That's what I loved to see. I loved to be completely mind-blown by games, for it is a fact that when a mind is blown, it cannot be unblown.
  13. Ze PC game scene

    I'm not saying I liked the fact that your hardware used to run out of juice basically every cycle a new game was released; I'm more saddened by the fact that what it implies is, as you guys also said, that the whole PC game market is dying.

    Quote: No, grasshopper, the history in computing just keeps repeating itself.

    The only difference this time is that the 486/66 was a great leap in performance, and thus made a plethora of new games appear, as it was the de facto best gaming platform of its time, I s'pose. This time, though, there is no good motivation to make games for the PC. I see all the studios using game engines which can cross-compile the same work to all the platforms, and thus you get underwhelming games on the PC until a new generation of consoles is made.

    Quote: Many of these games look really grotesque compared to what's on my 10 year old Gamecube.

    Very true, and I agree it's more important to have the art down properly. However, the way I see it, the whole point of creating cutting-edge technology (game engines and such) to wring all the juice out of cutting-edge computers was to have an active community of R&D. When the game industry settles on a never-changing platform, the amount of innovation will also decrease, and I for one don't love that prospect. I'm not saying it doesn't take innovation and creativity to get all the juice out of a clearly limited platform (any console), but I am saying that using cross-compiling game engines on all the platforms will make all the games boring and monotonous, and they will show us nothing interesting in terms of innovation.
  14. OpenGL GLSL Basics

    Heya :) I'm not sure where the problem lies specifically, but here are two things you should check regarding GLSL:

    - Do check that you have glUseProgram and that you're using the right program handle for it; it should be called before glBegin for the vertices you want shaded.
    - Set your uniforms after glUseProgram and before glBegin.

    And here's just another suggestion for replacing that vertex shader code with something slightly simpler which I think does the same thing:

        void main(void)
        {
            gl_Position = ftransform();
        }

    Just do what karwosts said about debugging it part by part; when it comes to GLSL, it really comes down to doing things in the right order to get it working. A sketch of that order follows below. Hope that's of any help :)
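    (A minimal C++ sketch of that call order; programID and the uniform name "tintColor" are made-up placeholders, not from the original thread.)

        glUseProgram(programID);                                   // 1. activate the program
        GLint loc = glGetUniformLocation(programID, "tintColor");  // 2. look up the uniform
        glUniform3f(loc, 1.0f, 0.5f, 0.0f);                        // 3. set it while the program is active
        glBegin(GL_QUADS);                                         // 4. then draw
        // ... vertices ...
        glEnd();
        glUseProgram(0);                                           // back to the fixed-function pipeline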