miggs

Members
  • Content count

    40
Community Reputation

260 Neutral

About miggs

  • Rank
    Member
  1. I do call wglMakeCurrent, but the thing is: I don't call it in each thread, since I only run one thread. I run both windows' computations and draw methods one after the other (neither window needs a lot of computation, so this works for me, and I'd like to keep it simple). Imagine my loop is like:

    [CODE]
    instance1 = new inst... // an instance containing its scene logic, window, and OpenGL context
    instance2 = new inst...
    while (myloopShouldBeRunning)
    {
        instance1->update()->draw(); // calls MakeCurrent, does its stuff and renders
        instance2->update()->draw();
    }
    [/CODE]

    Could the problem be that I run multiple contexts on the same thread?

    EDIT: problem solved. A background worker thread had some old, never-cleaned-up messy code that switched the context in some cases at uncontrolled times...
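    For reference, a single-threaded multi-window setup generally works as long as the right context is made current before each window's GL calls. A minimal sketch of the per-window draw, with hypothetical member names (this is Windows-specific WGL code, shown only to illustrate the pattern):

    ```cpp
    // Hypothetical sketch: one thread, two windows, two contexts.
    // Draw() makes this window's own context current first; otherwise
    // GL calls land in whichever context happened to be current last.
    void Instance::Draw()
    {
        wglMakeCurrent(m_Hdc, m_Hglrc); // bind this window's DC and context
        // ... issue GL calls for this window's scene ...
        SwapBuffers(m_Hdc);             // present this window only
    }
    ```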
  2. hi,

    My current renderer uses a number of framebuffers, renderbuffers, and render textures. I wanted to implement the possibility of rendering to multiple independent windows, so I got multiple OpenGL 3.0 contexts running, and at first it worked, without renderbuffers on the other windows. But when the second window's scene got more complex and I added renderbuffers and framebuffers, the problems began.

    [CODE]
    GLint maxBuffers;
    gl::GetIntegerv(gl::GL_MAX_DRAW_BUFFERS, &maxBuffers);
    [/CODE]

    returns 8, and the first window consumes 5-6. I tried starting my app.exe twice with only one window each, and both ran without any trouble (2 app.exe, 1 window each). But when starting only one app.exe and having it spawn two windows, each with its own context, I get GL_INVALID_VALUE on gl::GenFramebuffers, which suggests to me that both contexts' buffer counts add up.

    How can I have my HGLRCs work independently, like two separate apps, within one? Thanks in advance.
  3. It seems as if glClear(GL_DEPTH_BUFFER_BIT) does not do its job properly (probably due to some wrong setup on my side). When I render the depth into a color attachment (the commented code instead of the uncommented code) using the gl_Position.z component in the fragment shader, everything works fine: when my sun is moving and the shadow map is recalculated, it is correctly reset and redrawn, and I get correct shadows. Here is the setup code:

    [CODE]
    CheckGL(glGenFramebuffers(1, &m_Fbo));
    CheckGL(glBindFramebuffer(GL_FRAMEBUFFER, m_Fbo));
    //glActiveTexture(GL_TEXTURE0);
    CheckGL(glGenTextures(1, &m_ShadowTexture));

    // depth
    CheckGL(glBindTexture(GL_TEXTURE_2D, m_ShadowTexture));
    CheckGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT16, sMapWidth, sMapHeight, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0));
    CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST));
    CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST));
    CheckGL(glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE));
    CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE)); //GL_CLAMP_TO_BORDER
    CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE)); //GL_CLAMP_TO_BORDER
    CheckGL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_ShadowTexture, 0));

    //CheckGL(glBindTexture(GL_TEXTURE_2D, m_ShadowTexture));
    //CheckGL(glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, sMapWidth, sMapHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0));
    //CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR));
    //CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR));
    //CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE));
    //CheckGL(glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE));
    //CheckGL(glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_ShadowTexture, 0));

    // disable writing to the color buffer
    CheckGL(glDrawBuffer(GL_NONE));
    CheckGL(glReadBuffer(GL_NONE));
    CheckGL(glClearColor(1.0f, 1.0f, 1.0f, 1.0f));
    CheckGL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
    bool succeeded = OpenGLCheckFramebuffer(__FILE__, __LINE__);
    CheckGL(glBindTexture(GL_TEXTURE_2D, 0));
    CheckGL(glBindFramebuffer(GL_FRAMEBUFFER, 0));
    [/CODE]

    And this is how I use them:

    [CODE]
    CheckGL(glBindFramebuffer(GL_FRAMEBUFFER, m_Fbo));
    CheckGL(glDrawBuffer(GL_NONE));
    CheckGL(glReadBuffer(GL_NONE));
    // when using the color attachment
    //CheckGL(glDrawBuffer(GL_COLOR_ATTACHMENT0));
    CheckGL(glViewport(0, 0, sMapWidth, sMapHeight));
    CheckGL(glClearColor(1.0f, 1.0f, 1.0f, 1.0f));
    CheckGL(glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT));
    // when using GL_DEPTH_BUFFER_BIT with the depth attachment, I only see black,
    // while when clearing both color and depth I see the depth attachment without it being cleared (screenshot attached)
    CheckGL(glColorMask(0, 0, 0, 0)); // this is 1,1,1,1 if I use the color attachment
    // render scene...
    CheckGL(glBindFramebuffer(GL_FRAMEBUFFER, 0));
    CheckGL(glColorMask(1, 1, 1, 1));
    [/CODE]

    Here's what it looks like: the top two images use the depth attachment. I can't provide a video, but when the sun is moving, this is what happens (no depth clearing). The bottom two images clear and render correctly.

    [img]http://i47.tinypic.com/vxj0c6.png[/img]
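    One frequent cause of glClear(GL_DEPTH_BUFFER_BIT) seeming to do nothing is that depth writes were left disabled by an earlier pass: glClear honors glDepthMask, just as it honors glColorMask. A hedged sketch of the start of the shadow pass, assuming that is what happens here (this is illustrative code, not the poster's):

    ```cpp
    // Sketch: force depth writes back on before clearing the shadow FBO.
    glBindFramebuffer(GL_FRAMEBUFFER, m_Fbo);
    glViewport(0, 0, sMapWidth, sMapHeight);
    glDepthMask(GL_TRUE);         // glClear will NOT clear depth while this is GL_FALSE
    glEnable(GL_DEPTH_TEST);
    glClear(GL_DEPTH_BUFFER_BIT); // now actually resets the depth attachment
    // ... render shadow casters ...
    ```

    The scissor test also clips glClear, so verifying that GL_SCISSOR_TEST is disabled at this point is worth a try as well.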
  4. Thanks, that helped a lot. I've set mine to 0.35 / 1000 at the moment, and in case I need to see farther I might render in a different clipping context. One question remains, which is why my positions texture looks so weird. Is that normal? Because it doesn't look anything like the gDEBugger output.
  5. I tried repositioning my camera and setting the projection near/far from 0.1/100000 to 10/10000, and now I see the depth texture and the positions. They still don't look like they do in gDEBugger, but seeing those values, I think it's just a matter of displaying the values sent to the shader differently. Here is my engine output: [img]http://i47.tinypic.com/w7lrgh.png[/img] and this is what gDEBugger shows: [img]http://i48.tinypic.com/s113eo.png[/img] How can I change my shader or render calls to visualize the textures in a better color range? Needing the near/far planes at 10/10000 is not acceptable if I have a first-person camera running on that land/heightmap; I'd at least like them to be 0.3/3000.
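    The near/far sensitivity comes from depth-buffer values being non-linear in eye distance: almost the whole [near, far] range maps into the last few thousandths of [0, 1]. For display, the stored value can be linearized first. A small self-contained sketch of the standard perspective algebra (not the poster's code; the same expression works in a fragment shader with near/far as uniforms):

    ```cpp
    #include <cassert>
    #include <cmath>

    // Map a depth-buffer value d in [0,1] back to eye-space distance,
    // then to [0,1] linearly between near and far - convenient for display.
    float LinearizeDepth(float d, float zNear, float zFar)
    {
        float ndc  = d * 2.0f - 1.0f;  // [0,1] -> NDC [-1,1]
        float eyeZ = (2.0f * zNear * zFar) / (zFar + zNear - ndc * (zFar - zNear));
        return (eyeZ - zNear) / (zFar - zNear);  // linear in distance
    }

    int main()
    {
        // At the near plane the linearized depth is 0, at the far plane it is 1.
        assert(std::fabs(LinearizeDepth(0.0f, 0.3f, 3000.0f) - 0.0f) < 1e-4f);
        assert(std::fabs(LinearizeDepth(1.0f, 0.3f, 3000.0f) - 1.0f) < 1e-4f);
        return 0;
    }
    ```

    With this remapping the 0.3/3000 planes become usable for visualization, since the display value no longer saturates near 1.0.
    
    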
  6. hi,

    I have an FBO MRT setup that currently works for normals and color, and I tried to add the possibility of rendering the depth and position values. It seems as if my shaders render correctly into the framebuffer (gDEBugger screenshot), but when I try rendering them like I do with normals, I either get a white screen for the depth texture, or some red glitch for the positions. For the depth texture, I'm guessing it's because my nearZ and farZ values are 1 and 100000, so the depth values range between 0.9992 and 0.9995, but I don't know how I could transform them to get a decently rendered depth texture. As for the positions, I'm clueless; maybe my formats are wrong? The OpenGL context is 4.2.11733 Compatibility Profile Context.

    Here is what gDEBugger shows for the textures: [img]http://i50.tinypic.com/30mlbtl.png[/img] and this is the failed result for the positions (the depth buffer result is a white screen, and the positions look nothing like the gDEBugger output): [img]http://i45.tinypic.com/104p83c.png[/img]

    This is how I set up the FBO and textures:

    [CODE]
    bool KBuffer::Initialize(uint width, uint height)
    {
        GLint maxBuffers;
        glGetIntegerv(GL_MAX_DRAW_BUFFERS, &maxBuffers);
        if (maxBuffers < 3)
            FCThrow("MRT max buffer < 3");

        m_Width = width;
        m_Height = height;
        m_Quad = MeshProvider::CreateTexturedScreenQuad();

        // generate buffers
        glGenFramebuffers(1, &m_Fbo);
        glGenRenderbuffers(1, &m_DepthBuffer);
        glGenRenderbuffers(1, &m_ColorBuffer);
        glGenRenderbuffers(1, &m_PositionBuffer);
        glGenRenderbuffers(1, &m_NormalBuffer);

        glBindFramebuffer(GL_FRAMEBUFFER, m_Fbo);
        glBindRenderbufferEXT(GL_RENDERBUFFER, m_ColorBuffer);
        glRenderbufferStorageEXT(GL_RENDERBUFFER, GL_RGBA, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, m_ColorBuffer);

        glBindRenderbufferEXT(GL_RENDERBUFFER, m_PositionBuffer);
        glRenderbufferStorageEXT(GL_RENDERBUFFER, GL_RGBA32F, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_RENDERBUFFER, m_PositionBuffer);

        glBindRenderbufferEXT(GL_RENDERBUFFER, m_NormalBuffer);
        glRenderbufferStorageEXT(GL_RENDERBUFFER, GL_RGBA16F, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_RENDERBUFFER, m_NormalBuffer);

        glBindRenderbufferEXT(GL_RENDERBUFFER, m_DepthBuffer);
        glRenderbufferStorageEXT(GL_RENDERBUFFER, GL_DEPTH_COMPONENT32, width, height);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_RENDERBUFFER, m_DepthBuffer);

        // create the textures
        glGenTextures(eGBufferTexture::_Count, m_Textures);

        // diffuse/color - 8 bit per channel
        glBindTexture(GL_TEXTURE_2D, m_Textures[eGBufferTexture::Color]);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, m_Textures[eGBufferTexture::Color], 0);

        // position - HDR texture with 32 bit per channel
        glBindTexture(GL_TEXTURE_2D, m_Textures[eGBufferTexture::Position]);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, m_Textures[eGBufferTexture::Position], 0);

        // normal - 16 bit per channel
        glBindTexture(GL_TEXTURE_2D, m_Textures[eGBufferTexture::Normal]);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F, width, height, 0, GL_RGBA, GL_FLOAT, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, GL_TEXTURE_2D, m_Textures[eGBufferTexture::Normal], 0);

        // depth
        glBindTexture(GL_TEXTURE_2D, m_Textures[eGBufferTexture::Depth]);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_COMPARE_MODE, GL_NONE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, width, height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, m_Textures[eGBufferTexture::Depth], 0);

        bool succeeded = OpenGLCheckFramebuffer(__FILE__, __LINE__);
        glBindTexture(GL_TEXTURE_2D, 0);
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        return succeeded;
    }

    // render the completed scene position texture onto a quad:
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, m_Textures[eGBufferTexture::Position]); // Normals and Color draw correctly
    glUniform1i(colorSamplerParam, 0);
    m_Quad->Render();

    // vertex shader
    #version 330 core
    precision highp float;
    ...
    out block
    {
        vec2 TexCoord0;
    } Out;

    void main()
    {
        gl_Position = u_mMVP * vec4(a_vPosition, 1.0);
        Out.TexCoord0 = a_vTexCoord0;
    }

    // fragment shader
    uniform sampler2D u_sColor;

    void main()
    {
        FragColor = vec4(texture(u_sColor, In.TexCoord0).xyz, 1);
    }
    [/CODE]

    [b]EDIT:[/b] I thought maybe my quad was somehow rendered wrong, so I tried:

    [CODE]
    glBindFramebuffer(GL_READ_FRAMEBUFFER, m_Fbo);
    glReadBuffer(GL_COLOR_ATTACHMENT1);
    glBlitFramebuffer(0, 0, m_Width, m_Height, 0, 0, m_Width, m_Height, GL_COLOR_BUFFER_BIT, GL_LINEAR);
    glBindFramebuffer(GL_READ_FRAMEBUFFER, 0);
    [/CODE]

    but that yields the same wrong result.
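    A cheap way to visualize any texture whose interesting values occupy an awkward range (depth clustered near 1.0, world positions spanning hundreds of units) is to remap a chosen [lo, hi] window to [0, 1] before writing the output color; this is essentially what gDEBugger's auto-scaled texture view does. A sketch of the arithmetic (illustrative C++, not the poster's code; in the fragment shader the same expression would use lo/hi uniforms):

    ```cpp
    #include <algorithm>
    #include <cassert>
    #include <cmath>

    // Stretch [lo, hi] to the full [0, 1] display range; values outside are clamped.
    float RemapForDisplay(float v, float lo, float hi)
    {
        float t = (v - lo) / (hi - lo);
        return std::min(1.0f, std::max(0.0f, t));
    }

    int main()
    {
        // A depth range of [0.992, 0.995] becomes the full grey ramp.
        assert(RemapForDisplay(0.992f, 0.992f, 0.995f) == 0.0f);
        assert(std::fabs(RemapForDisplay(0.9935f, 0.992f, 0.995f) - 0.5f) < 1e-2f);
        assert(RemapForDisplay(2.0f, 0.992f, 0.995f) == 1.0f);
        return 0;
    }
    ```

    For the position texture the same idea applies per channel, e.g. remapping each axis by the scene bounds, which is why the raw texture looks saturated red when X dominates.
    
    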
  7. Ah, very interesting. I had also thought about separating meshes from models once, but I took the convenient (and slow) route and set the buffer pointers on each mesh draw call. With your approach I can bind a buffer for one kind of model once and then render it at different positions with different parameters. This brings me to another question: when you have your meshes sorted in a list, say three meshes share the same buffers and are drawn one after the other, but for some reason one of them wants to use a different shader, what would you prioritise?
    - Would you rather group meshes with the same buffer first, and then within those group the ones with the same shaders, changing shaders more often?
    - Or group by shader, and then within each shader group the meshes that share buffers?
    Thanks in advance, your answer was very helpful.
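    Both orderings can be compared cheaply by encoding the draw state into one sort key and flipping which field occupies the high bits: whatever sits high-order gets grouped first, and the usual advice is to put the more expensive switch (often the shader) there. A small sketch with assumed field widths (the ids and widths are hypothetical, not from the thread):

    ```cpp
    #include <cassert>
    #include <cstdint>

    // Pack shader and buffer ids into one key; sorting draws by this key
    // groups them by shader first (high bits), then by vertex buffer (low bits).
    // Swap the two fields to test the opposite priority.
    uint64_t MakeSortKey(uint32_t shaderId, uint32_t bufferId)
    {
        return (static_cast<uint64_t>(shaderId) << 32) | bufferId;
    }

    int main()
    {
        // Two draws with the same shader sort next to each other
        // even when their buffers differ...
        assert(MakeSortKey(1, 9) < MakeSortKey(2, 0));
        // ...and within a shader, draws sharing a buffer cluster together.
        assert(MakeSortKey(1, 0) < MakeSortKey(1, 9));
        return 0;
    }
    ```
    
    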
  8. hi (I'm using OpenGL 4.0, but this question is rather theoretical, so that's not important)

    Currently I have a structure like this (simplified):

    [CODE]
    model { // object on screen with a world matrix
        modelpart { // a part of the model with a material, e.g. arm, leg, pants, or window (if the model is a house)
            mesh {
                vertBuffer, iBuffer... etc.
            }
        }
    }
    [/CODE]

    When I render, I activate the model's shader, a normal shader for example (sometimes in render lists a shader is even kept active across multiple models). My question is: what if I wanted a different shader on each modelpart, how do you go about that? If, for example, the naked arm has a different shader than some fancy shiny metal armor on the chest, or one part has normal mapping and another uses parallax mapping? At the moment the Model->Render() method sets a ModelViewMatrix for the active shader, so each ModelPart would need a reference to the parent Model's ModelViewMatrix, activate its own shader, and set the ModelViewMatrix itself. Is this situation even realistic, or does a shader per model suffice in most cases? Or is it so extreme that instead of rendering Models, you have render lists of modelparts with the same shader? Would it help to change my model structure somehow? Thanks in advance.
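    One common arrangement, sketched below with hypothetical names (this is a design sketch, not a confirmed answer from the thread): each part may carry its own shader and falls back to the model's default when it has none, and the matrix is passed down as a plain parameter, so parts need no back reference to their parent.

    ```cpp
    #include <cassert>
    #include <string>
    #include <vector>

    // Hypothetical sketch: per-part shader override with a model-level default.
    struct Shader { std::string name; };

    struct ModelPart
    {
        const Shader* shader = nullptr; // null -> use the model's shader

        // The parent's shader is passed in, not stored; no back pointer needed.
        const Shader* Resolve(const Shader* modelShader) const
        {
            return shader ? shader : modelShader;
        }
    };

    struct Model
    {
        Shader shader{"normal"};
        std::vector<ModelPart> parts;
    };

    int main()
    {
        Model m;
        Shader metal{"metal"};
        ModelPart arm;                          // plain arm: inherits the default
        ModelPart chest; chest.shader = &metal; // armor: overrides it
        m.parts.push_back(arm);
        m.parts.push_back(chest);

        assert(m.parts[0].Resolve(&m.shader)->name == "normal");
        assert(m.parts[1].Resolve(&m.shader)->name == "metal");
        return 0;
    }
    ```

    The same Resolve step also gives a natural sort key if you later switch to render lists of parts grouped by shader.
    
    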
  9. I've implemented LuaPlus in my engine's event manager successfully and really like the flexibility I gained, but I'm still not exactly where I want to be, because I can't link my C++ classes to a Lua class. For example, I have an Actor class in C++, and I want to be able to create the same class in Lua and gain access to its members with LuaPlus, but I can't figure out how to achieve that. Is this actually built-in LuaPlus functionality, or do I have to write my own interface that exchanges data tables between C++ and Lua? My current approach would be to fire an event in the Lua script that creates a new Actor in C++ code, then transfer its id and the data I need back to Lua. When I modify the data, I send the modifications back to the C++ code again. But I actually thought there was something in LuaPlus that exposes this functionality already.
  10. out of memory!

    [quote name='john_woo' timestamp='1297679676' post='4774024'] no_pixel is around 29,5XX, it represents a counter of pixels within a specific area of an img, [/quote] Do you mean 29 thousand? In that case you are trying to allocate roughly 860 MB of memory, because you create a byte[30000][30000]. Is this really what you want?
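    The arithmetic can be checked directly: a two-dimensional byte array with side 30,000 holds 30,000 × 30,000 = 900,000,000 bytes, about 858 MiB, which a 32-bit MFC process typically cannot allocate as a single contiguous block:

    ```cpp
    #include <cassert>
    #include <cstdint>

    int main()
    {
        const std::int64_t side  = 30000;
        const std::int64_t bytes = side * side;  // one byte per element
        assert(bytes == 900000000LL);            // 9e8 bytes total
        assert(bytes / (1024 * 1024) == 858);    // ~858 MiB
        return 0;
    }
    ```
    
    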
  11. out of memory!

    [quote name='john_woo' timestamp='1297676975' post='4774008'] hello, i got a message from mfc application stating "Out of Memory", the code that cause this error is below: [attachment=1403:oom.gif] could you plz advise about it , thanks [/quote] Do you know the line where the error is thrown? Did you try debugging? What's the size of no_pixel?
  12. Thanks for the quick response. Just after I posted, I also figured out that my problem can be solved differently. But it's always good to know; maybe I'll use this in the future too. I'd like to post my solution, to help others who struggle with threading and have a hard time working out a concept. Excuse my English, it's not my native language.

    I still use boost::signals2, but now the signals don't directly call the function. I have multiple signals (event lists), for example:

    [CODE]
    class EventManager
    {
        boost::signals2::signal< void () > OnClick_LMB;
        boost::signals2::signal< void () > OnClick_RMB;
        ...
        boost::signals2::signal< void () > OnRender;
        boost::signals2::signal< void () > OnXXX;

        // and besides that, job pools for the various things that can run in parallel:
        JobList m_AnimationJobs;
        JobList m_RenderJobs;
        JobList m_PhysicsJobs;
        ...
    };
    [/CODE]

    Now I have my:

    [CODE]
    class FancyShootingGuy
    {
        // does stuff that is thread safe. the JobList Add should be wrapped
        // with a mutex i guess, but i didn't want to write it all out
        void MissileEvent()
        {
            someJobListAcessor->m_AnimationJobs.Add(&FancyShootingGuy::FireMissile, this);
            // add other functions, like the ones handling physics or CD...
        }

        void FireMissile() { ... }
    };
    [/CODE]

    I then register the event:

    [CODE]
    eventManager->OnClick_LMB.connect(boost::bind(&FancyShootingGuy::MissileEvent, this));
    [/CODE]

    Now I can add any event and invoke any function, and the processes are added to the job lists. You then have to decide for yourself how to process your job lists. Depending on your game, you can run the animation and AI job lists in parallel, then run the collision detection, and during all those processes have a thread listen for network traffic, collect it, and use it on the next loop. Hope this helps someone. I'd also be thankful for improvements to this.
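    The mutex wrapping the post glosses over can be quite small. A minimal, self-contained sketch of a lockable job list (hypothetical names, not the poster's actual classes): producers add jobs from any thread, and the owning system drains and runs them at a point it controls.

    ```cpp
    #include <cassert>
    #include <functional>
    #include <mutex>
    #include <vector>

    // Minimal thread-safe job list: Add() may be called from any thread;
    // RunAll() is called by the system that owns this list.
    class JobList
    {
    public:
        void Add(std::function<void()> job)
        {
            std::lock_guard<std::mutex> lock(m_Mutex);
            m_Jobs.push_back(std::move(job));
        }

        // Swap the queue out under the lock, then run jobs without holding it,
        // so a job that calls Add() again cannot deadlock.
        void RunAll()
        {
            std::vector<std::function<void()>> pending;
            {
                std::lock_guard<std::mutex> lock(m_Mutex);
                pending.swap(m_Jobs);
            }
            for (auto& job : pending) job();
        }

    private:
        std::mutex m_Mutex;
        std::vector<std::function<void()>> m_Jobs;
    };

    int main()
    {
        JobList jobs;
        int fired = 0;
        jobs.Add([&] { ++fired; });
        jobs.Add([&] { ++fired; });
        jobs.RunAll(); // drains both queued jobs
        assert(fired == 2);
        return 0;
    }
    ```
    
    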
  13. hi, I created a simple event manager using boost::signals2 and everything has run great up to now. But now I want to improve performance and run certain events on threads. Is there a simple way to get a list of the functions bound to a boost::signals2::signal<...> sx, so that instead of calling sx(), I can just iterate through the list in a loop and execute each function on a separate thread? If not, please suggest alternatives. Maybe a job pool? But that seems awkward to manage by popping and pushing events to it.
  14. create and compress archive

    [quote name='SiCrane' timestamp='1297381986' post='4772613'] [url="http://www.codeproject.com/KB/files/zip_utils.aspx"]This page[/url] is the first hit on google when I search for "create zip file C++". [/quote] Mine is not. My Google search results must be skewed by all my previous searches about zlib and the Boost implementation. Thanks.
  15. create and compress archive

    [quote name='KulSeran' timestamp='1297373855' post='4772561'] [quote] how do you handle compressing? do you zip by hand with 7zip or winrar or whatever you use? [/quote] zip command line. Integrate it into your build process, so you don't have to "by hand" anything. When you build your levels, it passes the folder into zip and have it drop out an archive. [/quote] I thought about that, but is there really no open library out there? If I want to ship something and implement packing functionality for certain assets, a command-line tool is not the solution :/