derodo

Member
  • Content Count: 31
  • Community Reputation: 122 Neutral
  1. Hi there. I just had the same problem, and finally solved it by adding a LoadLibrary("opengl32.dll") call in my renderer constructor, and NEVER calling the corresponding FreeLibrary (not even in the destructor). It's quite weird, but it has to do with some problem in either the graphics drivers or the Windows system drivers (maybe after a Windows update). The OpenGL drivers are supposed to override some of the GDI functions, and depending on how your application triggers the DLL loading, it can happen that gdi32.dll gets loaded after opengl32.dll (or the other way around), so the overridden function pointers end up corrupted and any call through them crashes. It was a terrible headache to track down, and I finally saw the answer on stackoverflow.com just a few minutes ago while desperately searching Google for answers (I don't have the link handy now). I was having the problem on some computers (well, on just ONE computer); the app was an OpenGL ActiveX plug-in running in Internet Explorer, and it crashed when "reloading" the page containing the ActiveX control. The Firefox/Chrome version did the same (but this time on all the computers I tested). Maybe it's not the same case for you, but it worked fine for me. Now I just wonder if never calling FreeLibrary might have some other undesirable side effect... Hope this helps. A minimal sketch of the workaround is below.
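    Here is a minimal sketch of what I mean, assuming a Win32/C++ build; the Renderer class and the m_openglModule member are placeholder names, not code from the actual plug-in:

    ```cpp
    // Sketch of the workaround: pin opengl32.dll for the lifetime of the process
    // so the gdi32.dll / opengl32.dll load order can no longer invalidate the
    // overridden GDI entry points. Class and member names are hypothetical.
    #include <windows.h>

    class Renderer
    {
    public:
        Renderer()
        {
            // Force opengl32.dll into the process early and keep it there.
            m_openglModule = LoadLibraryA("opengl32.dll");
            // ...create the GL context, etc...
        }

        ~Renderer()
        {
            // Deliberately NOT calling FreeLibrary(m_openglModule):
            // unloading the module is what triggered the crash on reload.
        }

    private:
        HMODULE m_openglModule = nullptr;
    };
    ```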
  2. Hi again! I've finally found the problem. It is absolutely crazy, but the fact is that on my Radeon 9700 Pro, having alpha blending enabled while calling glClear(GL_DEPTH_BUFFER_BIT) completely ruined the GPU performance. Don't ask me why, but after I ran out of "logical" ideas, and found some demos doing the same thing flawlessly, I started to think it had something to do with a "wrong" GL state... So I decided to reset all states to their defaults, and after a little trial and error I isolated the problem. It was the damn alpha blending! Just wanted you to know :) A sketch of the fix is below. Thanks for your kind support,
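    A minimal sketch of the state fix, assuming a plain fixed-function setup; beginFrame() is just an illustrative name:

    ```cpp
    // Sketch: make sure alpha blending is off before clearing the depth buffer.
    // On the Radeon 9700 Pro described above, leaving GL_BLEND enabled while
    // clearing only the depth buffer destroyed performance.
    #include <GL/gl.h>

    void beginFrame()
    {
        glDisable(GL_BLEND);                 // the state that caused the slowdown
        glClear(GL_DEPTH_BUFFER_BIT);        // now clears at full speed

        // ...render opaque geometry...

        glEnable(GL_BLEND);                  // re-enable only where it is needed
        glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
        // ...render transparent geometry...
    }
    ```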
  3. Hi again. Still having the problem, but I've gathered some more info. I've been able to speed the application up to 22 FPS on my ATI 9700... it may seem OK, but the fact is I'm rendering NOTHING into the framebuffer, just clearing GL_DEPTH_BUFFER_BIT; I'm not even displaying its contents, I just render a couple of textured primitives to the system framebuffer to show the framerate... I know the framerate is not a good measure; I'm just displaying it to compare the performance changes. On the nVidia 7800 it shows about 2000 FPS... the same rate it shows on the ATI if I just use a plain 8-bit RGB color texture for the framebuffer instead of an FP16 one. I downloaded an HDR demo from humus.ca that does the same thing (FP16 color texture + 16-bit depth renderbuffer) and works perfectly. I've even compiled it and debugged it in VS to check that its GL calls are the same as mine (you know, framebuffer generation, renderbuffers, etc.), and they seem to be the same :( I'm running out of ideas... could there be some GL state I'm forgetting?
  4. I've tested the tricky method you propose to clear the depth buffer, but it's the same... or even worse... it keeps running fine on my nVidia card, but runs even slower on my ATI... far less than 0.5 FPS... I don't know what the hell is going on... I must be doing something completely wrong, but I can't find it... I've even removed all unnecessary lines of code, but it's always the same: complete disaster on my Radeon 9700 Pro. I'll keep investigating...
  5. I think it has something to do with the interaction between the depth buffer and the floating point color buffer, because if I change either of them the application runs smoothly again. I mean: RGB texture + depth renderbuffer = OK; RGB_FP16 texture alone = OK; depth renderbuffer alone = OK; RGB_FP16 texture + depth renderbuffer = FAIL. Maybe I should try using a texture target for the depth buffer instead of a renderbuffer (a sketch of that is below). I have also checked that using a color renderbuffer instead of a texture yields the same results. If anyone finds anything that can help...
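    A sketch of the depth-texture alternative mentioned above, using GL_EXT_framebuffer_object through GLEW; the helper name is hypothetical and a valid GL context is assumed to exist already:

    ```cpp
    // Sketch: attach a depth *texture* to the FBO instead of a depth renderbuffer.
    #include <GL/glew.h>

    GLuint createDepthTextureAttachment(GLuint fbo, int width, int height)
    {
        GLuint depthTex = 0;
        glGenTextures(1, &depthTex);
        glBindTexture(GL_TEXTURE_2D, depthTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0,
                     GL_DEPTH_COMPONENT, GL_UNSIGNED_INT, 0);

        // Attach the depth texture where the renderbuffer used to go.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                  GL_TEXTURE_2D, depthTex, 0);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        return depthTex;
    }
    ```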
  6. Hi there. This time I'm playing with the framebuffer object extension, trying to render to an off-screen floating point buffer. Everything seems to be working fine with my nVidia card (GF7800), but if I run the same application on my ATI-based PC (9700 Pro), the framerate drops drastically to less than 1 frame per second. After a bit of trial and error I found some interesting facts: 1) The best internal format for floating point targets is GL_RGB_FLOATxx_ATI (16 or 32 bits), as the "standard" GL_RGBxxF_ARB is not supported by ATI until the X800 series and above... so it is not so standard :P 2) My GF7800 does not support 32-bit floating point framebuffers if the render target is a texture; it only works if it's a renderbuffer (glCheckFramebufferStatus fails). 3) My GF7800 does not support 16-bit depth buffers (glCheckFramebufferStatus fails). 4) The ATI problem disappears if I use a plain 8-bit RGB color framebuffer. Besides that, I've traced my application and found that the problem appears only if I perform a glClear(GL_DEPTH_BUFFER_BIT) once the floating point buffer is bound and fully working (at least glCheckFramebufferStatusEXT says so). If I do not clear the depth buffer, the application runs smoothly on both ATI and nVidia, but if I just uncomment that damn line, I start getting 1 FPS or less on ATI... oh, and listen to this: the more bits the depth buffer has, the worse it runs. I mean, if I attach a 16-bit depth buffer it runs at, let's say, 1 FPS, but if I attach a 24-bit one it runs at 0.5 FPS... With the nVidia configuration it always works (as long as I use 16-bit FP color render targets and 24-bit depth targets). A sketch of the FBO setup I'm describing is below. Does anyone know what the hell is going on? I'm kinda lost now. Could it be a problem with the ATI drivers? I have googled the problem, but didn't find anything about this... I'd appreciate some comments on this. Thanks in advance,
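    For reference, a sketch of the kind of FBO setup described above (FP16 color texture plus 24-bit depth renderbuffer), using GLEW; the function name is made up, a GL context is assumed to exist, and error handling is minimal:

    ```cpp
    // Sketch: FP16 color texture (GL_RGB_FLOAT16_ATI from GL_ATI_texture_float)
    // plus a 24-bit depth renderbuffer, checked with glCheckFramebufferStatusEXT.
    #include <GL/glew.h>
    #include <cstdio>

    bool createFP16Framebuffer(int w, int h, GLuint& fbo, GLuint& colorTex, GLuint& depthRb)
    {
        // FP16 color texture attachment
        glGenTextures(1, &colorTex);
        glBindTexture(GL_TEXTURE_2D, colorTex);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGB_FLOAT16_ATI, w, h, 0, GL_RGB, GL_FLOAT, 0);

        // 24-bit depth renderbuffer attachment
        glGenRenderbuffersEXT(1, &depthRb);
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24, w, h);

        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, colorTex, 0);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                     GL_RENDERBUFFER_EXT, depthRb);

        GLenum status = glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        if (status != GL_FRAMEBUFFER_COMPLETE_EXT) {
            std::printf("FBO incomplete: 0x%04X\n", status);
            return false;
        }
        return true;
    }
    ```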
  7. Hi all. As far as I know, with GLSL you can use uniforms and varyings, and you have access to all the useful GL state variables (matrices, planes, lights, etc.), but... is there any kind of "global" variable shared by all program objects, apart from the GL state? I mean, if you want to use a variable "time" in several shaders (for doing some kind of time-dependent computation), and you define it as a "uniform float", then you need to update it once per shader that uses it, every frame. So I'm just wondering if I missed some point in the GL Shading Language spec, and there is a way to set a common variable that can be read by all shaders without having to programmatically set it for each one. Do you know what I mean? So far, the only way I can think of doing it is by hijacking an unused GL state variable (the X component of some user clip plane, for example) and writing the shaders assuming that state variable will actually contain a "time" value... Any hints? (The per-program workaround I'm trying to avoid is sketched below.)
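    The per-program workaround mentioned above, sketched with the GL 2.0 shader API via GLEW; the program list and the "time" uniform name are just placeholders:

    ```cpp
    // Sketch: loop over your program objects once per frame and set the shared
    // "time" uniform on each one, since uniforms are per-program state.
    #include <GL/glew.h>
    #include <vector>

    void updateSharedTime(const std::vector<GLuint>& programs, float timeSeconds)
    {
        for (GLuint program : programs)
        {
            GLint loc = glGetUniformLocation(program, "time");
            if (loc < 0)
                continue;                 // this shader does not use "time"

            glUseProgram(program);        // uniforms can only be set on the bound program
            glUniform1f(loc, timeSeconds);
        }
        glUseProgram(0);
    }
    ```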
  8. derodo

    Texture performance

    I guess it is as you say, hplus, as a normal scene containing a whole Doom 3 map gives me about 200 FPS (rendering all areas with a single bump-map shader without doing any culling). So maybe I'm better off just keeping on coding and forgetting about the "issue". It was just that it surprised me the first time I noticed that "strange" behaviour, by chance, while doing some tests.
  9. derodo

    VBO policy

    That's what I call a direct answer :)
  10. Mmm... I'm kinda lost now :) But I guess everything is subjective... what I do is consider the base reference system as the one formed by (1,0,0), (0,1,0), (0,0,1), so it really doesn't matter whether the actual "z away" axis is positive or negative :P as everything depends on where you make the camera look :P And as I set my default camera to stay at (0,0,0) and look towards (0,0,1), then what I "see" is that increasing z values make objects go "far away" :) But yes, you ARE right. If you don't apply any camera transformations, the objects going "away" are those going in the negative z direction. Quite a nice thought, lightbringer (that is you, isn't it? :)... now I know all those years I've been thinking the wrong way. It's never too late to re-learn things... Thanks for the reminder :) [Edited by - derodo on November 24, 2005 1:00:19 PM]
  11. You're right, gorgorath. I was just looking at the wrong hand when I wrote the post :P The only thing I'm not sure about now is the Z forward direction... I think it's along the positive axis... shouldn't a right-handed coordinate system look like this?

    Y(0,1,0)
    ^   Z(0,0,1)
    |   \
    |    \
    |     \|
    <------+ X(1,0,0)

    At least that's what I get if I render three lines from (0,0,0) to (100,0,0), (0,100,0) and (0,0,100)... (a sketch of that test is below).
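    A sketch of that axis test with fixed-function OpenGL; it assumes a GL context, camera and projection are already set up:

    ```cpp
    // Sketch: draw three colored lines from the origin along +X, +Y and +Z.
    #include <GL/gl.h>

    void drawAxes()
    {
        glBegin(GL_LINES);
            glColor3f(1.0f, 0.0f, 0.0f);            // X axis in red
            glVertex3f(0.0f, 0.0f, 0.0f);
            glVertex3f(100.0f, 0.0f, 0.0f);

            glColor3f(0.0f, 1.0f, 0.0f);            // Y axis in green
            glVertex3f(0.0f, 0.0f, 0.0f);
            glVertex3f(0.0f, 100.0f, 0.0f);

            glColor3f(0.0f, 0.0f, 1.0f);            // Z axis in blue
            glVertex3f(0.0f, 0.0f, 0.0f);
            glVertex3f(0.0f, 0.0f, 100.0f);
        glEnd();
    }
    ```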
  12. derodo

    Texture performance

    I was just about to test with another driver version, but I want to make some more tests first, as the application I'm running is not so simple and I may be doing some work behind the scenes that could be disturbing the results (it shouldn't, but just in case). When I get home today, I will write a simple, dumb test to really make sure the behaviour is as I explain here. And by the way, I inserted the glFinish() after I noticed this situation, to see if it could be of any help in my analysis. I'll let you know when I have the new results.
  13. Just one more thing: I'm not sure, but I think GL uses a default left-handed coordinate system. So if you do nothing to change it (that is, you don't set your own transformation), the standard "up" basis vector should be (0,1,0). The basis vector (1,0,0) would point "right", and (0,0,1) would point forward, away from you. That's why I wrote those numbers in the previous post, assuming (0,0,0) to be the world origin, and (0,0,500) a position just 500 units away (not 500 units up). If you don't know what your reference system is, you may think you're doing things wrong while they're actually right. I mean, if you call gluLookAt with an UP vector of (0,0,1) while using a left-handed coordinate system, the camera will be rotated -90 degrees around the Z axis, so if you just render an object against a black background, the rotations about its X axis will apparently look as if they were done about its Y axis... do you know what I mean? (There is a small gluLookAt sketch below.) If you do nothing, the default reference system is something like:

    Y (0,1,0)
    ^
    |   Z (0,0,1)
    |  /
    | /
    |/
    +-------> X (1,0,0)

    At least, that's how it works in all the programs I have made. And it really helps to know what your base reference is. Hope this helps a little bit.
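    A small gluLookAt sketch of the point above, assuming the fixed-function pipeline and an existing GL context; setupCamera() is just an illustrative name:

    ```cpp
    // Sketch: with a Y-up convention the UP vector should be (0,1,0);
    // passing (0,0,1) instead rolls the camera 90 degrees, which makes object
    // rotations look like they happen around the "wrong" axis.
    #include <GL/gl.h>
    #include <GL/glu.h>

    void setupCamera()
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // Camera at (0,0,500) looking at the origin, Y-up convention:
        gluLookAt(0.0, 0.0, 500.0,   // eye position
                  0.0, 0.0,   0.0,   // point to look at
                  0.0, 1.0,   0.0);  // up vector: (0,1,0), not (0,0,1)
    }
    ```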
  14. And what are you getting instead? If you just perform: 1) "look at", 2) rotation, 3) render, it should work fine. You should be able to see the object centered at the world origin (0,0,0) and rotating around its own center. Maybe the problem is that you are not setting the camera up right. Just try to set the camera so it points to (0,0,0) from a fairly distant point like (0,0,500). The camera will be positioned just in front of your object, at 500 units of distance, aligned with the world Y axis (that is, looking straight forward). If you want your object rotating around its center, but positioned anywhere in the world, just do: 1) "look at", 2) translation to the point where you want the object to be, 3) rotation, 4) render. No matter where you put the camera, your object will be at the position stated in step 2, rotating around its center. (A sketch of this order is below.)
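    A sketch of that order with fixed-function OpenGL; drawObject() and the angle parameter are placeholders, and a GL context is assumed:

    ```cpp
    // Sketch: camera first, then the object's translation, then its rotation,
    // then the draw call.
    #include <GL/gl.h>
    #include <GL/glu.h>

    void renderFrame(float angleDegrees)
    {
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();

        // 1) "look at": camera at (0,0,500) pointing at the world origin
        gluLookAt(0.0, 0.0, 500.0,
                  0.0, 0.0,   0.0,
                  0.0, 1.0,   0.0);

        // 2) translation to wherever you want the object to be
        glTranslatef(100.0f, 0.0f, 0.0f);

        // 3) rotation around the object's own center
        glRotatef(angleDegrees, 0.0f, 1.0f, 0.0f);

        // 4) render
        // drawObject();   // placeholder for your draw call
    }
    ```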
  15. derodo

    Render to texture, etc.

    Mmmm... I tried pbuffers too, and I was really disappointed by the poor performance they showed. Basically you need to make a render context change for each pbuffer you want to render to, and that can be a real pain as it involves a lot of GL state. Besides, unless you ask for it, pbuffers do NOT share texture objects, and if you make the GL call to share texture objects among pbuffers, the render context change is even slower. I would rather use GL_EXT_framebuffer_object (or whatever the exact extension name is), as it doesn't imply a context switch, so it *should* be faster (see the sketch below). The only problem is that it is a relatively "new" extension, so not all cards may support it.
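    A sketch of the difference, using GL_EXT_framebuffer_object via GLEW; fbo and colorTex are assumed to have been created beforehand (for example with a setup like the FBO sketch earlier on this page):

    ```cpp
    // Sketch: switching render targets with an FBO is just a bind call,
    // not a wglMakeCurrent context switch as with pbuffers.
    #include <GL/glew.h>

    void renderToTextureAndUse(GLuint fbo, GLuint colorTex)
    {
        // Redirect rendering to the off-screen target: a cheap state change.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        // ...draw the off-screen pass here...

        // Back to the window's framebuffer, again without touching the context.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

        // The texture that was just rendered to can now be sampled directly;
        // no pbuffer texture-sharing dance is needed.
        glBindTexture(GL_TEXTURE_2D, colorTex);
        // ...draw the final pass using colorTex...
    }
    ```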