
karmalaa

OpenGL Shutdown Problem


Hi guys: when my OpenGL sample application shuts down, I get an "Invalid Address specified to RtlFreeHeap(...)" error. The stack context in which it happens is some "finalization" code within G400ICD.dll, called from the exit() procedure. I really can't guess why this happens. OpenGL finalization (wglMakeCurrent(NULL, NULL), ReleaseDC(hWnd, hDC), and wglDeleteContext(hRC)) and window destruction (DestroyWindow(hWnd) and UnregisterClass()) do their job correctly. However, the problem persists. Is there something I'm missing? Hope someone can shed some light on this problem of mine...
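
For reference, the shutdown sequence I'm describing is roughly the following (a minimal sketch, not my exact code; the window class name is a placeholder):

// Unbind the rendering context from the current thread.
wglMakeCurrent(NULL, NULL);
// Give the device context back to the window.
ReleaseDC(hWnd, hDC);
// Delete the (no longer current) rendering context.
wglDeleteContext(hRC);
// Tear down the window itself.
DestroyWindow(hWnd);
UnregisterClass("MyWindowClass", hInstance);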

Guest Anonymous Poster
Well, my shutdown code is something like:

hDC = GetDC(hWnd);          // re-acquire the window's device context
wglMakeCurrent(hDC, NULL);  // make no rendering context current
wglDeleteContext(hRC);      // delete the rendering context
ReleaseDC(hWnd, hDC);       // release the device context

I'm not sure what effect (in your case) wglMakeCurrent(NULL, NULL) would have. Do other OpenGL programs function okay on your hardware?

Dear Anonymous Poster,
according to the official documentation, passing NULL as the second argument to wglMakeCurrent() causes the first one to be ignored. So wglMakeCurrent(hDC, NULL) and wglMakeCurrent(NULL, NULL) should be equivalent.

I've tried it anyway, and passing a valid DC handle as the first argument makes no difference; the application still "hangs up" upon exiting.

BTW, other people's sample code (such as NeHe's) compiles and works perfectly on my computer.






Well, what is different about your code compared to NeHe's?

Are you using any extensions or any special features?

I would try disabling all of the drawing code and as much of the initialization code as possible, and try to pinpoint where the error originates. Step through the shutdown and examine the return values.

Basically, what that error says is that something is trying to free a heap block that either never existed or (more likely) has already been freed, and is being freed again as part of the standard cleanup. Since it is being initiated by your video card driver DLL (I'm assuming you have a G400?), it might be a context issue.

Try making sure you set device context and rendering context variables to NULL after deleting and releasing them.
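
A sketch of what I mean (assuming NeHe-style global handles; the names and messages are placeholders):

// Check each return value during shutdown and clear the handles,
// so nothing can use them (or free them again) afterwards.
if (hRC != NULL)
{
    if (!wglMakeCurrent(NULL, NULL))
        MessageBox(NULL, "Release of DC and RC failed.", "Shutdown Error", MB_OK);
    if (!wglDeleteContext(hRC))
        MessageBox(NULL, "Release of rendering context failed.", "Shutdown Error", MB_OK);
    hRC = NULL;
}
if (hDC != NULL)
{
    ReleaseDC(hWnd, hDC);
    hDC = NULL;
}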

Seeya
Krippy

quote:
Original post by Krippy2k

Well, what is different about your code compared to NeHe's?

I've changed the order in which the window-class registration and fullscreen switching are done. In general, I've cleaned up the code a lot; no substantial modifications, however.

quote:
Original post by Krippy2k

Are you using any extensions or any special features?

No. Not at all.

quote:
Original post by Krippy2k

I would try disabling all of the drawing code and as much of the initialization code as possible, and try to pinpoint where the error originates. Step through the shutdown and examine the return values.

I've already checked up on it. Initialization and finalization work perfectly. It is surely a RENDERING CONTEXT issue, since not creating it (and, of course, not using or deleting it) makes the problem vanish.

quote:
Original post by Krippy2k

Try making sure you set device context and rendering context variables to NULL after deleting and releasing them.

Oh, my God... I really do not dispose of the same resources twice, and I *HOPE* the video card driver DLL does not mess with them. Just joking...

As a general practice, I always reset pointers to NULL after deleting them.




Are you using WinNT/2k and/or multiple threads?

And have you tried, just for giggles, commenting out the wglDeleteContext call to see what happens?



quote:
Original post by Krippy2k

Are you using WinNT/2k and/or multiple threads?

Yep, I'm developing both under Windows 2000 Professional (SP1) (using Visual Studio .NET Beta 1) and under Windows NT4 (SP6) (using Visual Studio 6 SP5). Strangely enough, it works on the first one but not on the second.

I'm currently NOT using any additional threads.

quote:
Original post by Krippy2k

And have you tried, just for giggles, commenting out the wglDeleteContext call to see what happens?

Yep. Nothing seems to change. The error still remains...




I'm getting the same error.

HEAP[lesson2.exe]: Invalid Address specified to RtlFreeHeap( 130000, 1b80e8 )

That's what I get when NeHe's lesson 2 program exits. This ONLY happens in the Visual C++ IDE, though; outside of it, the program runs fine. I'm using VC++ 6 on Windows NT 5 (2000). I've been having some problems with VC++ lately, though, so I'm going to reinstall it and see if that helps.

quote:
Original post by Supernova

I'm getting the same error.

HEAP[lesson2.exe]: Invalid Address specified to RtlFreeHeap( 130000, 1b80e8 )

That's what I get when NeHe's lesson 2 program exits. This ONLY happens in the Visual C++ IDE, though; outside of it, the program runs fine.

Of course it happens ONLY within the Visual C++ IDE: the error is raised by an INT 3 (breakpoint) instruction, and the NT heap manager only performs those checks when the process is started under a debugger...

quote:

I'm using VC++ 6 on Windows NT 5 (2000). I've been having some problems with VC++ lately, though, so I'm going to reinstall it and see if that helps.

I really don't think it depends on the compiler, though. Or am I wrong?


Well then there ya go

Well, if I had to make a guesstimate, I would say it is bad design in the video card drivers for NT. I have had strange issues with Matrox on NT in the past. They seem to put much effort into their 9x drivers, but not as much thought goes into the NT drivers.

What I have seen is that companies that make drivers for NT sometimes mismanage threads of their own. If you try to shut down the accelerator interface while one of those threads is still using it, you will get an access violation when the still-active thread shuts down. The wgl docs say it is an error to delete a context that is being used by another thread, but I think the real error is in trying to close that thread after the context has been deleted.
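
In other words, the safe pattern implied by the wgl docs is to unbind the context in the thread that owns it before anything deletes it. Just a sketch, with placeholder names:

// In the thread that made the context current:
wglMakeCurrent(NULL, NULL);   // the context is no longer current here
// Only after that is it safe, from whatever thread, to call:
wglDeleteContext(hRC);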

You say the NeHe tutorials work, though. But then, they don't do a whole lot and don't use much memory. I am still unclear as to the complexity of your code.

Is it just a skeleton or are you doing some good processing in between init and kill?

If the problem persists and you don't have any secrets in there, maybe you should post the full code.

If it is not a driver issue, we would be certain to find out from the code. If it is a driver issue, I'm sure Matrox would want to know about it.

And if I'm crazy, I would certainly like to know

hehe

Seeya
Krippy

quote:
Original post by Krippy2k

Well then there ya go

Well, if I had to make a guesstimate, I would say it is bad design in the video card drivers for NT. I have had strange issues with Matrox on NT in the past. They seem to put much effort into their 9x drivers, but not as much thought goes into the NT drivers.

Ugh! :|

quote:
Original post by Krippy2k

You say the NeHe tutorials work, though. But then, they don't do a whole lot and don't use much memory. I am still unclear as to the complexity of your code.

Is it just a skeleton or are you doing some good processing in between init and kill?

For now, the OpenGL core engine doesn't do very much apart from displaying a bunch of triangles nicely put together to form a landscape, plus some (basic) models...

I haven't tried coding a simple "Initialize & Kill" program yet. Surely that's the way to go to pin down the problem...
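
Something along these lines, I suppose (a minimal sketch with placeholder names, not my actual code):

#include <windows.h>

// Minimal "Initialize & Kill": create a window, create and bind a
// rendering context, do nothing, then tear everything down and exit.
int WINAPI WinMain(HINSTANCE hInst, HINSTANCE hPrev, LPSTR lpCmd, int nShow)
{
    WNDCLASS wc;
    HWND  hWnd;
    HDC   hDC;
    HGLRC hRC;
    PIXELFORMATDESCRIPTOR pfd;

    ZeroMemory(&wc, sizeof(wc));
    wc.lpfnWndProc   = DefWindowProc;
    wc.hInstance     = hInst;
    wc.lpszClassName = "InitKill";
    RegisterClass(&wc);

    hWnd = CreateWindow("InitKill", "Init & Kill", WS_OVERLAPPEDWINDOW,
                        0, 0, 320, 240, NULL, NULL, hInst, NULL);
    hDC = GetDC(hWnd);

    ZeroMemory(&pfd, sizeof(pfd));
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 24;
    pfd.cDepthBits = 16;
    SetPixelFormat(hDC, ChoosePixelFormat(hDC, &pfd), &pfd);

    hRC = wglCreateContext(hDC);
    wglMakeCurrent(hDC, hRC);

    /* ...no rendering at all... */

    wglMakeCurrent(NULL, NULL);
    wglDeleteContext(hRC);    /* <- where the crash shows up for me */
    ReleaseDC(hWnd, hDC);
    DestroyWindow(hWnd);
    UnregisterClass("InitKill", hInst);
    return 0;
}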

quote:
Original post by Krippy2k

If the problem persists and you don't have any secrets in there, maybe you should post the full code.

Just let me prepare a simple "Initialize & Kill" application based on my code; if the problem persists, I'll post the code at once (or better, I'll post the URL where to find it).

quote:
Original post by Krippy2k

If it is not a driver issue, we would be certain to find out from the code. If it is a driver issue, I'm sure Matrox would want to know about it.

Well, indeed they would...

BTW, I've debugged the application under Windows 2000 Professional SP1 using Visual Studio 6.0 SP5... the "Invalid Address" exception fires when the wglDeleteContext() call is issued, not just at the very end of the program.




All right, pals,
here is the URL for a minimal example that shows the same problem as my application. The code was not written by me; I just found it on the net and put it into a Visual C++ 6.0 project.

Compile it, test it, and please let me know your thoughts.

On my computer (a PII 350 MHz with an ASUS P2B motherboard, 128 MB of system memory, and a Matrox Mystique G200 graphics card), it crashes under the debugger just when the wglDeleteContext(...) call is issued.

I'm quite convinced the problem is due to the Matrox ICD for the G200 graphics card. Some quick searches in dedicated forums showed that many other people have encountered problems with the Matrox "interface" to OpenGL (most frequently Windows 2000 users like me).

By the way, I have installed the latest drivers myself, of course.





Well, I ran it on a 1 GHz Athlon with 256 MB and a GeForce 2 GTS Pro, and all is fine: it exits normally every time on 98SE, Me, and 2000. It also worked on a 650 MHz machine with 128 MB and a Viper V770 Ultra on Windows Me. I don't have an NT4 box here to test it on right now, but I am pretty certain it will work there too.

Damn them Matrox! Can't even spell Matrix, how they gonna make video cards? LOL jk

I would say it's pretty safe to bet the problem is with the driver, though.

I think I have a buddy with a Millennium; I will see if I can test it on his system later so I can see exactly what's happening.

Seeya
Krip

Here's what I get: a message box saying "User breakpoint called from code at 0x77fa018c", and it points to that line (address) in the disassembly, which just says "int 3". In the output window it says "HEAP[lesson2.exe]: Invalid Address specified to RtlFreeHeap( 130000, 1428d8 )". This only happens if I run the program from within the IDE, meaning if I do "Debug->Go"; if I use "Execute Program", it doesn't happen. And only for OpenGL programs.


Guys,
that's it!

Disabling "bus mastering" seems to solve all of my problems with the G200 (under Windows 2000)!

Tomorrow, at work, I'll try to solve the problems with the G400 (under Windows NT4) in a similar way... I'll let you know...

See ya soon...


Karmalaa


quote:
Original post by Supernova

Bus mastering? Is that in the BIOS or a video card setting?


You can find it in the video card settings panel. Look under the ADVANCED settings of the DISPLAY control panel, in the OPTIONS tab; you should be able to find a CHECK-BOX for enabling/disabling "Bus Mastering"...




Well, the Voodoo doesn't seem to have that setting. Oh well, at least I figured out a way to compile without getting the error. Thanks anyway.

