Multiple windows rendering different scenes

8 comments, last by Kincaid 14 years, 10 months ago
I am writing a map editor for my game in C using OpenGL under Windows. My editor uses an MDI (multi-window) interface, and I would like to support having a "Model Viewer" window open for multiple independent objects at the same time. The model viewer works perfectly when a single object is open, but opening two windows viewing different objects makes my application crash.

I looked around for information about rendering multiple OpenGL scenes in one application, but what I found was either specific to GLUT (which I do not want to use) or simply recommended putting the separate OpenGL "instances" in separate threads and assuming they would work independently without conflicting. I am already trying the latter, but to no avail.

My ultimate question is whether I need to do anything beyond creating separate OpenGL rendering contexts in separate threads to make my OpenGL "instances" work side by side. Thank you.
Hello,
I used four viewport-style child windows for my 3D editor.
It can all be done in a single thread; there is no need for multi-threading.
Basically, all you need to do is set up one viewing WNDCLASSEX that is shared by every view window, then create four GL-type windows, each with its own (inactive) rendering context.
No window should hog the RC, so each context must remain inactive until it is needed for rendering, and you must deactivate it again when done.
If you need some sample code I can post it, or you can PM me.
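A minimal sketch of the setup described above, assuming plain Win32 plus opengl32 (function names like CreateViewContext and DrawView are placeholders, not the poster's actual code):

```c
#include <windows.h>
#include <GL/gl.h>

/* Create a rendering context for one view window, but deliberately
 * do NOT make it current: it stays inactive until draw time. */
static HGLRC CreateViewContext(HWND hwnd)
{
    PIXELFORMATDESCRIPTOR pfd = {0};
    pfd.nSize      = sizeof(pfd);
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    pfd.cDepthBits = 24;

    HDC dc = GetDC(hwnd);
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);
    HGLRC rc = wglCreateContext(dc);
    ReleaseDC(hwnd, dc);
    return rc;
}

/* Activate a window's context only for the duration of its draw,
 * then deactivate so no window hogs an RC. */
static void DrawView(HWND hwnd, HGLRC rc)
{
    HDC dc = GetDC(hwnd);
    wglMakeCurrent(dc, rc);
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* ... render this view's scene ... */
    SwapBuffers(dc);
    wglMakeCurrent(NULL, NULL);
    ReleaseDC(hwnd, dc);
}
```

This fragment needs live windows to run; it only illustrates the activate-render-deactivate pattern.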
I think the easiest solution for an arbitrary number of models would be to render each model to a texture (attach the texture to an FBO) and then texture a quad with the result, placing the quad where you want your "sub-window". This way you also don't need to mess with multi-threading or try to juggle multiple contexts.
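A hedged sketch of that render-to-texture approach, using the EXT_framebuffer_object entry points that were current in 2009 (assumed to be loaded via wglGetProcAddress or a loader such as GLEW; the 512x512 size is arbitrary):

```c
/* Render-to-texture fragment: assumes a current GL context with
 * EXT_framebuffer_object available. */
GLuint tex, fbo;

glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 512, 512, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);

/* render the model into the texture */
glViewport(0, 0, 512, 512);
/* ... draw model ... */

/* back to the window's framebuffer; now draw a quad textured with
 * `tex` wherever the "sub-window" should appear */
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
```

Since this needs a current GL context, it is illustrative rather than directly runnable on its own.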
I'm working on kind of the same thing, an editor with as many viewports as the user wants (also MDI). This works great.
No, you only have to set up a rendering context for each window,
and call wglMakeCurrent(dc, rc) to switch between viewports when rendering (basically).

You can also play around with setting up multiple viewports in a single context.

Rendering whatever you like will work, whether it's the same scene in multiple ports or something else in each port.
I have x views on the scene plus a few ports with some independent stuff.

Where exactly does it crash?
Post some code; there's a lot that can cause a crash, of course.

(Multi-threading will also work fine, but all of this can be done without it, and should be at first, to avoid a LOT of debugging.)
Thank you for all the help.

My original code was based on the (apparently false) assumption that if I created exactly one render context per thread, OpenGL would automatically use the render context belonging to the thread from which a given OpenGL function was called. Instead, I just ended up with a lot of mixed-up API calls and a crashing program.

I re-coded the program to use a single thread and moved my render code into the WM_TIMER handler of the viewport window procedure (where I explicitly make the correct rendering context current before drawing). This seems to have solved the problem, and I am now able to open multiple viewport windows without any serious errors.
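The approach the OP describes might look roughly like this (a sketch, not the actual editor code; storing the HGLRC in GWLP_USERDATA is an assumption about how the per-window context could be looked up):

```c
#include <windows.h>
#include <GL/gl.h>

static LRESULT CALLBACK ViewWndProc(HWND hwnd, UINT msg, WPARAM wp, LPARAM lp)
{
    switch (msg) {
    case WM_TIMER: {
        HDC dc = GetDC(hwnd);
        /* hypothetical per-window lookup of the HGLRC, stored with
         * SetWindowLongPtr(hwnd, GWLP_USERDATA, ...) at creation time */
        HGLRC rc = (HGLRC)GetWindowLongPtr(hwnd, GWLP_USERDATA);

        wglMakeCurrent(dc, rc);        /* select THIS window's context */
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
        /* ... draw this viewport's scene ... */
        SwapBuffers(dc);
        wglMakeCurrent(NULL, NULL);

        ReleaseDC(hwnd, dc);
        return 0;
    }
    }
    return DefWindowProc(hwnd, msg, wp, lp);
}
```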

The only trouble is, whenever I open a second window, my framerate is cut exactly in half, from 60 FPS to 30 FPS, even though my system is more than capable of rendering both scenes at 60 FPS. I am guessing this happens because the system waits a full refresh cycle (1/60 of a second) before switching rendering contexts and drawing the scene in the other viewport. Is there any way (preferably without getting back into multi-threading) to keep the framerate from dropping like this? Thank you.
No.
In a single thread, rendering things twice costs twice as much.
This would be the point to turn multi-threading back on to get those FPS back.
When you do, post me the code :) because I keep postponing multi-threading in my app :D

(Edit: seeing exactly 60 makes me say "turn off v-sync".)
I turned off VSync with the "wglSwapIntervalEXT" extension, and I can now run two windows at full speed without a problem (everything runs at exactly 64 FPS). The strange thing is, if I try to run three windows, it still breaks, and even worse than before. With three windows, my CPU usage jumps from 0% to 98%, my memory usage jumps from 30MB to 400MB, and my framerate goes down to 4 FPS in all three windows. If I close one of the windows (so that only two are still open), everything instantly recovers, except the RAM, which doesn't recover until I close all OpenGL windows (but I do not need to restart the program). Any idea what the problem is now? I am using large display lists when rendering, could that be a problem? Sorry for all the questions, and thanks again.

EDIT: I just tried commenting out the "glCallList" function that actually draws the scene (leaving only the matrix code that controls the view and the ending "SwapBuffers" call), and I can now open as many windows as I want without any slowdown. The windows are blank now (obviously), but at least I've partially narrowed down the problem. I'll keep messing with it, but any input would still be appreciated. Thank you.

[Edited by - punmaster on June 8, 2009 6:51:29 PM]
That could be a lot of things. Maybe you've reached some limit where software rendering kicks in (large textures, running out of RAM...).
Maybe it's a leak that happens when you open the third window
(but I think not, since it seems to reside in the display list).
Does it differ with the size of the windows?
E.g., would making them smaller allow four to open?
Or is it exactly the number three?
Try rendering part by part to see where it goes wrong.
I just tried it with an extremely simple model (just a textured box), and I was able to open about 25 windows before the framerate dropped. It was like before in that as soon as I crossed a specific threshold, performance dropped instantly from > 60 to < 10 FPS, but it took a lot more than 3 windows to do it with the simple box. I'm now going to try ditching the display lists and rendering things directly. This isn't the first time I've had video memory issues with display lists, so I guess I'm not completely surprised.

EDIT: Well, I re-coded the draw functions to avoid using display lists, and that did nothing to help. Then I found a memory bug in my texture loading function, fixed it, and now things work much better. There is still a "slowdown wall" after a certain number of windows, like before, but the number of windows it takes is much larger now. It could just be a real-world limitation of my hardware at this point, and I wouldn't consider the current results unreasonable by any means. If anyone has any further input, feel free to share. If not, I should be able to live with it as it is. Thanks again for all the help. :)

[Edited by - punmaster on June 9, 2009 8:54:57 PM]
Quote:Original post by punmaster
I just tried it with an extremely simple model (just a textured box), and I was able to open about 25 windows before the framerate dropped. It was like before in that as soon as I crossed a specific threshold, performance dropped instantly from > 60 to < 10 FPS, but it took a lot more than 3 windows to do it with the simple box. I'm now going to try ditching the display lists and rendering things directly. This isn't the first time I've had video memory issues with display lists, so I guess I'm not completely surprised.

EDIT: Well, I re-coded the draw functions to avoid using display lists and that did nothing to help me. Then I found a memory bug in my texture loading function, fixed that, and now it works much better. There is still "slowdown wall" after a certain number of windows, like before, but the number of windows it takes is much larger now. It could just be a real-world limitation of my hardware at this point, and I wouldn't consider the current results unreasonable by any means. If anyone has any further input, feel free to share. If not, I should be able to live with it how it is. Thanks again for all the help. :)



I'm pretty sure that stems from the texture (its size or something).
Try rendering without the texture.
Is the number of textures growing, or are they being copied, as windows open? (Or the texture size itself: odd sizes could make it switch to software mode.)

This topic is closed to new replies.
