Revelation60

Members
  • Content count

    131

Community Reputation

122 Neutral

About Revelation60

  • Rank
    Member
  1. I am trying to find a clean way to make OpenAL remove a source once it has stopped playing. In my game, a lot of sound effects like gunshots are created, and they all use a different source. I can imagine that after a certain time, OpenAL will start to complain. I looked for callback routines, but I haven't found any. I know I can query the state of the source with alGetSourcei, but from OpenGL I have learned that such queries are slow. Is this the same with OpenAL? What is the best approach?
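     For reference, a minimal sketch of the polling approach described above, in C++; the std::vector of live source handles and the ReapStoppedSources name are illustrative, not from the post. In practice alGetSourcei(AL_SOURCE_STATE) tends to be cheap compared to an OpenGL readback, since it does not stall a GPU pipeline, but that depends on the OpenAL implementation.

     #include <AL/al.h>
     #include <cstddef>
     #include <vector>

     // Call once per frame: delete every source that has finished playing.
     void ReapStoppedSources(std::vector<ALuint>& sources)
     {
         for (std::size_t i = 0; i < sources.size(); )
         {
             ALint state = AL_PLAYING;
             alGetSourcei(sources[i], AL_SOURCE_STATE, &state);
             if (state == AL_STOPPED)
             {
                 alDeleteSources(1, &sources[i]); // frees the source handle
                 sources[i] = sources.back();     // swap-and-pop removal
                 sources.pop_back();
             }
             else
             {
                 ++i;
             }
         }
     }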
  2. The malloc/delete thing was stupid. I thought I new'ed it... Storing the buffer in the function itself is probably a good idea, thanks! For users interested in using this code: please note that the internal format I use is always GL_RGBA. If you want to use a different format, remember to change the buffer size, the format in glGetTexImage and both formats in glTexImage2D.
  3. I wrote a solution :) It involves copying the textures to a temporary buffer before the context gets destroyed.

     void Renderer::ToggleFullScreen()
     {
         Singleton<TextureManager>::Instance()->SaveToBuffer(); // save textures

         // set dimensions to desktop dimensions
         m_pSurface = SDL_SetVideoMode(m_nDesktopWidth, m_nDesktopHeight, 0, m_unFlags ^ SDL_FULLSCREEN);
         if (!m_pSurface)
             m_pSurface = SDL_SetVideoMode(0, 0, 0, m_unFlags);
         else
             m_unFlags ^= SDL_FULLSCREEN;

         InitGl(); // reload OpenGL

         Singleton<TextureManager>::Instance()->LoadFromBuffer(); // load textures
     }

     void TextureManager::SaveToBuffer()
     {
         for (int i = 0; i < m_aList.size(); i++)
         {
             glBindTexture(GL_TEXTURE_2D, m_aList[i].m_nInternalID);
             m_aList[i].m_pBuffer = (char*)malloc(m_aList[i].m_nWidth * m_aList[i].m_nHeight * 4); // RGBA -> 4 bytes
             glGetTexImage(GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_aList[i].m_pBuffer);
         }
     }

     /* Assumes SaveToBuffer was called first */
     void TextureManager::LoadFromBuffer()
     {
         for (int i = 0; i < m_aList.size(); i++)
         {
             glGenTextures(1, &m_aList[i].m_nInternalID);
             glBindTexture(GL_TEXTURE_2D, m_aList[i].m_nInternalID);
             glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST); // new texture, so set parameters
             glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
             glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP);
             glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP);
             glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, m_aList[i].m_nWidth, m_aList[i].m_nHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, m_aList[i].m_pBuffer);
             delete [] m_aList[i].m_pBuffer;
         }
     }

     FYI: all my textures are stored as GL_RGBA. I hope this code helps others as well!
  4. Then you probably store more information in your texture class than I do. I just record the width, height and internal ID. The buffer used to load the texture into memory is deleted on texture creation. It seems you store the filename, and maybe the type (GL_RGBA, GL_LUMINANCE_ALPHA, etc.). This may work if all textures are loaded from file, but my fonts are loaded from a buffer I generate when the font is read from a custom format. Of course you can't tell whether a texture is a font texture (and you don't want your font manager to know the difference), so there is no way to recreate such a texture without explicitly loading the font again, which generates a different ID. Another solution would be to not delete the buffers, but I'd hate to waste memory like that just for this.
  5. Quote: "or about switching video modes (e.g. changing resolution or switching between fullscreen and windowed)?"

     I am, but the situation is equivalent for both resizing and video mode switching.

     Quote: "The context thing is an oft-lamented issue with SDL, but the solution is pretty straightforward, and that is to recreate dynamic resources (such as textures) as needed. You said this would ruin your code's modularity, so I take it you've got things set up in a way that's not particularly conducive to dynamic reloading of resources. Unfortunately though, that may be the best (and perhaps only) solution to the problem."

     My current situation is that textures are loaded in the initialization phase of the game, by asking the texture manager to load them from file. While it is possible to add textures at a later stage, I haven't done that yet. The texture manager stores the textures in an array and uses the index as a unique identifier for each texture. A texture is a simple class that contains the OpenGL ID and other information. If I allowed reloading, I would have to clear the array and somehow tell the game to reinitialize, but only the textures. I would also have to tell that to my font manager, which stores its fonts in the texture manager. As you can see, this is not something you want to do: the texture manager should know nothing about games or fonts. My suggestion of reading the texture buffers back from OpenGL, storing them, changing the video mode and then loading them back sounds like a better solution, but it is still horrible.
  6. Hi, I am using SDL and OpenGL as the foundation of my engine. I recently found out that calling SDL_SetVideoMode trashes the GL context. Seeing that most of my engine is already done, it would be a shame to switch to another library at this stage. The implication of this bug (I cannot imagine that this is a welcome feature) is that I cannot let my users resize their screen, so they cannot go from windowed to full screen without losing all textures and states. My question is how I can deal with this problem. I can't "just" reload all textures, because that would completely ruin the modularity of my code. My texture manager stores the texture handles, so I can call glGetTexImage to copy the textures to a buffer and then reload them. This, however, is extremely slow and ugly. How did you guys solve this problem?
  7. [C++] Template inheritance

    Thanks :)
  8. I am trying to make an abstract generic vector class, but I keep getting errors.

     template <class T, unsigned int D>
     class Vector
     {
     private:
     public:
         T m_Val[D];

         Vector<T, D>()
         {
             for (int i = 0; i < D; ++i)
                 m_Val[i] = 0;
         }
         // ... etc
     };

     ...

     template <class T>
     class Vector3 : public Vector<T, 3>
     {
         T& X() { return m_Val[0]; }
     };

     What I want is that for a vector with dimension 3, I can access the elements by using x, y, z. I tried to do this with partial template specialization, but I keep getting the same error as now: ‘m_Val’ was not declared in this scope. What am I doing wrong?
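     For readers hitting the same error: m_Val lives in a dependent base class, so under two-phase name lookup it is not found from the derived template unless it is qualified. A minimal sketch of the usual fix (not from this thread), using this-> to make the name dependent:

     template <class T>
     class Vector3 : public Vector<T, 3>
     {
     public:
         // Members of the dependent base Vector<T, 3> must be reached through
         // this-> (or Vector<T, 3>::) so the compiler defers the lookup to
         // instantiation time.
         T& X() { return this->m_Val[0]; }
         T& Y() { return this->m_Val[1]; }
         T& Z() { return this->m_Val[2]; }
     };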
  9. I want to make my engine a dynamic library, but I have little experience with dynamic libraries. What would be the easiest way to expose classes and variables? Should I really prefix all classes with macros like EXPORT and IMPORT, or is there an easy way to just expose everything?
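     For illustration only: the common pattern is a single export macro in a shared header, switched by a preprocessor symbol that is defined only while building the library itself. The names MYENGINE_EXPORTS and MYENGINE_API below are hypothetical placeholders:

     // engine_api.h -- sketch of the usual export-macro pattern
     #if defined(_WIN32)
     #  if defined(MYENGINE_EXPORTS)              // defined by the DLL build only
     #    define MYENGINE_API __declspec(dllexport)
     #  else
     #    define MYENGINE_API __declspec(dllimport)
     #  endif
     #else
     #  define MYENGINE_API __attribute__((visibility("default")))
     #endif

     class MYENGINE_API Renderer { /* ... */ };   // each public class gets the macro

     On the GCC side, shared libraries export everything by default unless they are built with -fvisibility=hidden, which is why a Linux build can often get away without any annotations at all.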
  10. Spatial data structure

     Should I divide only the level geometry with the octree, or the objects too? If I do the latter, I may run into the same problems as with the BSP tree.
  11. I have built BSP trees for my level, and I see problems up ahead. Right now I draw the level mesh by mesh and check whether each mesh is visible or not. The BSP leaves don't contain meshes, but sets of triangles. So should I build a vertex index array for each subset of triangles that falls in a different mesh, or in a different material group but still in the same object? That sounds very complex. Is there a better way?
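     Not from the thread, but a sketch of what the per-leaf index arrays could look like: each leaf keeps one index batch per material/mesh group it touches, so a visible leaf is drawn as a few indexed batches rather than triangle by triangle. All type and function names here are hypothetical:

     #include <cstddef>
     #include <cstdint>
     #include <map>
     #include <vector>

     struct LeafBatch
     {
         int material;                         // material/mesh group of these triangles
         std::vector<std::uint32_t> indices;   // indices into the shared level vertex buffer
     };

     struct BspLeaf
     {
         std::vector<LeafBatch> batches;
     };

     // Build the batches once, after the BSP split has assigned triangles
     // (index triples plus a material id per triangle) to this leaf.
     void BuildLeafBatches(BspLeaf& leaf,
                           const std::vector<std::uint32_t>& triIndices, // 3 entries per triangle
                           const std::vector<int>& triMaterial)          // 1 entry per triangle
     {
         std::map<int, std::vector<std::uint32_t>> byMaterial;
         for (std::size_t t = 0; t < triMaterial.size(); ++t)
         {
             std::vector<std::uint32_t>& dst = byMaterial[triMaterial[t]];
             dst.push_back(triIndices[3 * t + 0]);
             dst.push_back(triIndices[3 * t + 1]);
             dst.push_back(triIndices[3 * t + 2]);
         }
         for (const auto& kv : byMaterial)
             leaf.batches.push_back(LeafBatch{kv.first, kv.second});
     }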
  12. PCF in GLSL

     Hi, GPU Gems has [url=http://http.developer.nvidia.com/GPUGems/gpugems_ch11.html]a nice article[/url] about shadow maps, and it states that this is a good method to reduce the aliasing:

     offset = (float)(frac(position.xy * 0.5) > 0.25);  // mod
     offset.y += offset.x;  // y ^= x in floating point
     if (offset.y > 1.1)
         offset.y = 0;
     shadowCoeff = (offset_lookup(shadowmap, sCoord, offset + float2(-1.5, 0.5)) +
                    offset_lookup(shadowmap, sCoord, offset + float2(0.5, 0.5)) +
                    offset_lookup(shadowmap, sCoord, offset + float2(-1.5, -1.5)) +
                    offset_lookup(shadowmap, sCoord, offset + float2(0.5, -1.5))) * 0.25;

     This code is Cg, though, and I am using GLSL. Can anyone convert these two lines?

     offset = (float)(frac(position.xy * 0.5) > 0.25);  // mod
     offset.y += offset.x;  // y ^= x in floating point

     I don't understand how a float can have .y and .x components. Thanks!
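     A possible GLSL translation of those two lines (not from the thread): in the Cg version offset is really a two-component vector, and the comparison and cast are applied per component, which is why .x and .y work. A sketch, assuming position.xy is the window-space position as in the article:

     // The per-component comparison becomes greaterThan(), and the resulting
     // bvec2 is converted to 0.0/1.0 by the vec2 constructor.
     vec2 offset = vec2(greaterThan(fract(position.xy * 0.5), vec2(0.25)));  // mod
     offset.y += offset.x;          // y ^= x in floating point
     if (offset.y > 1.1)
         offset.y = 0.0;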
  13. Problems with GL_EXT_stencil_two_side

     It's not visible in this snippet, but the stencil test is on, and it has to stay on after the shadow volumes are rendered, because the stencil buffer functions as a mask when I draw the scene.
  14. Hi, I'm trying to implement two-sided stencil tests for my shadows in my engine. To do that I altered some code:

     glEnable(GL_CULL_FACE);
     glEnable(GL_DEPTH_TEST);
     glDisable(GL_LIGHTING);
     glShadeModel(GL_FLAT);
     glDepthMask(0);
     glDepthFunc(GL_LESS);

     if (HasTwoSidedStencilExtension)
     {
         // Enable 2 sided stencil
         glEnable(GL_STENCIL_TEST_TWO_SIDE_EXT);
         glDisable(GL_CULL_FACE); // disable cull facing, important!

         // Increment (with wrapping) for back face depth fail
         glActiveStencilFaceEXT(GL_BACK);
         glStencilFunc(GL_ALWAYS, 0, ~0);
         glStencilMask(~0);
         glStencilOp(GL_KEEP, GL_INCR_WRAP_EXT, GL_KEEP);

         // Decrement (with wrapping) for front face depth fail
         glActiveStencilFaceEXT(GL_FRONT);
         glStencilFunc(GL_ALWAYS, 0, ~0);
         glStencilMask(~0);
         glStencilOp(GL_KEEP, GL_DECR_WRAP_EXT, GL_KEEP);

         // Draw the shadow volume
         DrawShadowsForEveryObject(GameScene, lp);

         glDisable(GL_STENCIL_TEST_TWO_SIDE_EXT);
     }
     else
     {
         glColorMask(0, 0, 0, 0);
         glStencilFunc(GL_ALWAYS, 0, ~0);

         glCullFace(GL_FRONT);
         glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
         //DrawShadow(o, f, lp, true);
         DrawShadowsForEveryObject(GameScene, lp);

         glCullFace(GL_BACK);
         glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
         //DrawShadow(o, f, lp, true);
         DrawShadowsForEveryObject(GameScene, lp);
     }

     glStencilFunc(GL_EQUAL, 0, ~0);
     glStencilOp(GL_KEEP, GL_KEEP, GL_KEEP);
     glEnable(GL_STENCIL_TEST);
     glDepthFunc(GL_LEQUAL);
     glDepthMask(GL_FALSE);
     glColorMask(1, 1, 1, 1);
     GameScene.DrawScene();

     Because GL_EXT_stencil_two_side only takes over the drawing of the shadow volume, I haven't altered any other part of the code. The above change gives incorrect results: I can see the whole shadow volume, even the parts in the open. I haven't got a clue where to look for the problem. I've tried everything. Note that my hardware does support both GL_EXT_stencil_two_side and GL_EXT_stencil_wrap. The normal z-fail routine works like a charm. Have you got any ideas?