
Neosettler

Member
  • Content count

    39
  • Joined

  • Last visited

Community Reputation

150 Neutral

About Neosettler

  • Rank
    Member
  1. Thank you for your input, guys. I have probably the most powerful video card on the market right now, an NVIDIA GTX 690, which can render millions of polygons without any trouble, but once in LINE or POINT mode it drops to 1 FPS... there is clearly something fishy with these features, even with all the recommendations suggested.
  2. Greetings,   When rendering with glPolygonMode set to GL_LINE or GL_POINT, I'm getting a performance drop compared to GL_FILL, even with GL_LINE_SMOOTH disabled. Someone in another thread mentioned disabling user clipping; that doesn't make sense to me, as I wouldn't know how to do that in the first place. Any secrets of the ancients here? (The state setup in question is sketched below.)   thx,
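    A minimal sketch of the state setup being described, assuming a working context and draw loop elsewhere (DrawScene is a hypothetical placeholder):

        glDisable(GL_LINE_SMOOTH);                 /// Smoothing off, as mentioned above.
        glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); /// Wireframe rasterization.
        DrawScene();                               /// Hypothetical scene draw.
        glPolygonMode(GL_FRONT_AND_BACK, GL_FILL); /// Restore filled rendering.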
  3. glPolygonMode slows down [solved]

    Hello Mars, excuse me for bringing this thread back from the grave, but I'm having a major performance drop while rendering in GL_LINE mode. I've tried to find information on how to disable user clipping and couldn't find anything so far... so my question is, how do you disable user clipping?
  4. Delete VBOs at run time?

    The validity check comes from:
    - In the old days, resizing a window would empty the video memory. (Not a valid argument anymore.)
    - I'm using a mixture of SFML and Qt, and for some reason that prevents me from creating buffers at initialization of my API; there is a GL context switch or some funny stuff similar to this. (I will start by investigating this.)
    - I have the option of resetting programs at run time. (I'll do re-initialization in reset() instead.)

    Uploading the data is rather tricky. Since my approach supports multi-materials, I have 2 different techniques that I would like to debate:
    - Geometries hold vertex attributes and meshes.
    - Meshes hold face ids.

    For each geometry:
    1 - One VBO for the vertex attributes and one VBO for each mesh(i).
    2 - One VBO for the vertex attributes and one VBO for all mesh(i), using buffer offsets.

    Surprisingly, number 2 gives better performance even though it has to call subData every draw. (See the sketch below.)

    To sum up: you are right, I'm in the process of a redesign and I'd love some insights.
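    A minimal sketch of technique 2, assuming a single element VBO that stores every mesh's face indices back to back (the buffer id, byte total, Mesh struct, and BindMaterial are hypothetical stand-ins for the bookkeeping):

        /// Upload all meshes' indices into one element buffer, contiguously.
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, l_elementId);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, l_totalIndexBytes, NULL, GL_STATIC_DRAW);
        GLintptr l_offset = 0;
        for (const Mesh &l_mesh : l_meshes)
        {
            glBufferSubData(GL_ELEMENT_ARRAY_BUFFER, l_offset, l_mesh.indexCount * sizeof(GLuint), l_mesh.indices);
            l_offset += l_mesh.indexCount * sizeof(GLuint);
        }
        /// Draw each mesh with its own material, reusing the same two buffers.
        l_offset = 0;
        for (const Mesh &l_mesh : l_meshes)
        {
            BindMaterial(l_mesh.materialId);
            glDrawElements(GL_TRIANGLES, l_mesh.indexCount, GL_UNSIGNED_INT, (const void *)l_offset);
            l_offset += l_mesh.indexCount * sizeof(GLuint);
        }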
  5. Delete VBOs at run time?

    My apologies, I find pasting code here very painful. I guess I tried cutting corners in my explanation, but my code looks more like this:

        void OpenGL::SetVertexBuffer(VertexBuffer *in_buffer)
        {
            UInt &l_id = in_buffer->GetArrayId();
            if (l_id == 0)
            {
                glGenBuffers(1, &l_id);
            }
            glBindBuffer(GL_ARRAY_BUFFER, l_id);
            if (in_buffer->IsUpdated())
            {
                glBufferData(GL_ARRAY_BUFFER, in_buffer->GetArraySize(), NULL, gl_BufferTypes[in_buffer->GetType()]);
            }
        }

        template <class T>
        void OpenGL::SetVertexData(ArrayBuffer<T> &in_array)
        {
            if (in_array.IsValid())
            {
                glBufferSubData(GL_ARRAY_BUFFER, in_array.GetOffset(), in_array.GetSize(), in_array.GetData());
            }
        }

        template <class T>
        void OpenGL::SetVertexAttribute(ArrayBuffer<T> &in_array)
        {
            if (in_array.IsValid())
            {
                e_VertexAttributes &l_index = in_array.GetIndex();
                glVertexAttribPointer(l_index, gl_AttributeSizes[l_index], gl_AttributeTypes[l_index],
                                      gl_AttributeNormalized[l_index], gl_AttributeStrides[l_index],
                                      (void *)in_array.GetOffset());
                glEnableVertexAttribArray(l_index);
            }
            else
            {
                glDisableVertexAttribArray(in_array.GetIndex());
            }
        }

        template <class T>
        void OpenGL::Draw(VertexBuffer *in_buffer, ArrayBuffer<T> &in_indices, UInt in_offset, const e_RenderTypes &in_type)
        {
            UInt &l_id = in_buffer->GetElementId();
            if (l_id == 0)
            {
                glGenBuffers(1, &l_id);
            }
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, l_id);
            if (in_buffer->IsUpdated())
            {
                in_buffer->SetUpdated(false);
                glBufferData(GL_ELEMENT_ARRAY_BUFFER, in_indices.GetSize(), in_indices.GetData(), gl_BufferTypes[in_buffer->GetType()]); /// Only if reset.
            }
            glDrawRangeElements(gl_RenderTypes[in_type], in_offset, in_offset + in_indices.GetSize(), in_indices.GetSize(), GL_UNSIGNED_INT, (void *)in_indices.GetOffset());
            glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
        }

    If my understanding is correct, disabling any attribute arrays attached to the buffer was my missing link before deleting it at run time:

        void OpenGL::DeleteVertexBuffer(UInt &in_id)
        {
            if (in_id != 0)
            {
                glBindBuffer(GL_ARRAY_BUFFER, in_id);
                for (UInt i = 0; i < 16; ++i)
                {
                    glDisableVertexAttribArray(i);
                }
                glBindBuffer(GL_ARRAY_BUFFER, 0);
                glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, 0);
                glDeleteBuffers(1, &in_id);
                in_id = 0;
            }
        }
  6. Delete VBOs at run time?

    Thank you for your input, Bob. I used this to delete the VBO now; the crash is gone, but I'm still not sure if this is the right way of doing it.
  7. Greetings, GL Masters. I recently ran my application with gDEBugger GL: http://www.gremedy.com/download.php I was shocked to my very core that, all these years, I had video memory leaks. After endless effort, I managed to find the source of the leaks: all I needed to do was to match every glGenBuffers with a glDeleteBuffers and my life was peachy again.

    Each VBO looks somewhat like this:

        glGenBuffers(1, &l_id1);
        glBindBuffer(GL_ARRAY_BUFFER...
        glBufferData(GL_ARRAY_BUFFER...
        glBufferSubData(GL_ARRAY_BUFFER...
        glGenBuffers(1, &l_id2);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER...
        glBufferData(GL_ELEMENT_ARRAY_BUFFER...
        glDeleteBuffers(1, &l_id1);
        glDeleteBuffers(1, &l_id2);

    The problem is, while this works fine when opening and closing the API, deleting buffers at run time makes the next draw call end with an access violation. I can't find any relevant info on how to properly delete VBOs at run time so far. Any ancestral wisdom would be very welcome. (One way to keep the gen/delete pair together is sketched below.)

    Thx
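    A minimal sketch of one way to guarantee every glGenBuffers is matched by a glDeleteBuffers, assuming C++11 and a GL context that is still current when the wrapper is destroyed (the class name is hypothetical):

        /// RAII wrapper: the buffer is created once and deleted exactly once.
        class GLBuffer
        {
        public:
            GLBuffer()  { glGenBuffers(1, &m_id); }
            ~GLBuffer() { glDeleteBuffers(1, &m_id); }      /// A zero id is silently ignored by GL.
            GLBuffer(const GLBuffer &) = delete;            /// No copies, so no double delete.
            GLBuffer &operator=(const GLBuffer &) = delete;
            GLuint Id() const { return m_id; }
        private:
            GLuint m_id = 0;
        };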
  8. Very good find with gDEBugger, Nyssa, nice tool! It turns out that my glitch was caused by not calling glFinish! Case closed. Thank you all for your inputs. Cheers!
  9. Yes indeed. Well, I was under the impression an if statement is an if statement... apparently, the portion inside a condition that is not being used has to be treated as if it were, in order for the shader to render correctly. Strangely, the bug is not constant and very hard to track. I did notice, however, that it rendered correctly when removing a few objects from my scene which use no textures. I must be missing something. ...digging into the uber shader. (One possible reading of this is sketched below.)
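    One possible reading of that observation, as a hedged sketch rather than a confirmed diagnosis: on some drivers, code inside an untaken branch still needs valid resources behind it, so a sampler referenced only in the textured path should have something bound (e.g. a 1x1 fallback texture) even for untextured objects. A hypothetical uber-shader excerpt:

        #version 330 core
        uniform int u_UseTexture;    /// Hypothetical material flag.
        uniform sampler2D u_Diffuse; /// Should have a valid binding even when the branch is "off".
        in vec2 v_UV;
        out vec4 o_Color;
        void main()
        {
            vec4 l_base = vec4(1.0);
            if (u_UseTexture > 0)
                l_base = texture(u_Diffuse, v_UV);
            o_Color = l_base;
        }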
  10. VBO and Multi-Material!?

    Oh, vertex and vertex-attribute duplication is the main thing I was trying to avoid, but at least I know there is no other solution... really? Anyhow, I'm curious to know how 3D packages like 3ds Max/Maya/Softimage recalculate the geometry's VBOs as the material ids/face clusters are being changed... very puzzling to me. Thank you for your inputs, Slicer, very appreciated.
  11. VBO and Multi-Material!?

    Hi Slicer, thank you for your input; I'd be very interested if you could elaborate on this concept. I use your B scenario, not only for textures but for all material properties. My main concern is how to pass the face indices from the newly split meshes to the VBO. As it is for now, the materials seem to be distributed to the right faces, but some vertices go to 0,0,0, stretching the whole model, or some vertices connect faces that shouldn't be connected. I wonder if I have to somehow reorder the vertices for the meshes to render properly. (See the index-remapping sketch below.)
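    A minimal sketch of the splitting step, under one assumption: each split mesh keeps the original vertex indices into the one shared vertex buffer, so no vertices are moved or reordered (the Face struct is a hypothetical input format). Stretched-to-origin vertices usually mean an index list was rebuilt against a reordered vertex array:

        #include <cstdint>
        #include <map>
        #include <vector>

        struct Face { uint32_t i0, i1, i2; int materialId; }; /// Hypothetical input face.

        /// Bucket face indices by material id; indices still reference the shared vertex array.
        std::map<int, std::vector<uint32_t>> SplitByMaterial(const std::vector<Face> &in_faces)
        {
            std::map<int, std::vector<uint32_t>> l_buckets;
            for (const Face &l_face : in_faces)
            {
                std::vector<uint32_t> &l_indices = l_buckets[l_face.materialId];
                l_indices.push_back(l_face.i0);
                l_indices.push_back(l_face.i1);
                l_indices.push_back(l_face.i2);
            }
            return l_buckets; /// One index list per material, ready for an element VBO each.
        }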
  12. Interesting, I was not aware we could use the preprocessor's #if. Is this a GLSL feature, or do you need to parse your shader and string-edit it?   I agree, but unless I'm missing something, while these seem to be valid solutions, they will make the unique-shader concept collapse, as we might alternate the skinning condition every draw.   This seems to be the most logical direction. The shader uses glBufferSubData for the vertex attributes; I'll have to investigate glMapBuffer... this is puzzling. Thanks again for your inputs, Nyssa. (A sketch of the #ifdef approach follows.)
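    For reference, the preprocessor is a standard part of GLSL (#define, #ifdef, #if and friends work directly in the shader source); a common pattern is to prepend a define from the application right after the #version line and compile one variant per feature set. A minimal sketch (SKINNING is a hypothetical define name):

        #version 330 core
        /// "#define SKINNING" is injected here by the application for skinned draws.
        in vec4 a_Position;
        #ifdef SKINNING
        in vec4 a_Weights;
        in ivec4 a_Deformers;
        uniform mat4 u_XFormMatrix[40];
        #endif
        uniform mat4 u_ModelViewProjection;
        void main()
        {
            vec4 l_position = a_Position;
        #ifdef SKINNING
            mat4 l_deformer = mat4(0.0);
            for (int i = 0; i < 4; ++i)
                l_deformer += a_Weights[i] * u_XFormMatrix[a_Deformers[i]];
            l_position = l_deformer * a_Position;
        #endif
            gl_Position = u_ModelViewProjection * l_position;
        }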
  13. Thx for your suggestion, Nyssa. Did you have in mind setting all values using glVertexAttribIPointer, or in the shader somehow? EDIT: can't assign to a varying in the shader, so I'm guessing you meant glVertexAttribIPointer. PS: I do use glDisableVertexAttribArray on a_Deformers when no skinning is needed. Still digging. (A sketch of feeding a constant value to a disabled attribute follows.)
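    One relevant detail, stated as a sketch: when a generic attribute array is disabled, the shader reads the attribute's current constant value instead, which can be set with glVertexAttribI4i for integer attributes (the location variables are assumed to come from glGetAttribLocation):

        /// With the arrays disabled, give every vertex safe constant values.
        glDisableVertexAttribArray(l_deformersLocation);
        glVertexAttribI4i(l_deformersLocation, 0, 0, 0, 0);          /// All vertices index matrix 0.
        glDisableVertexAttribArray(l_weightsLocation);
        glVertexAttrib4f(l_weightsLocation, 0.0f, 0.0f, 0.0f, 0.0f); /// And contribute nothing.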
  14. Greetings, OpenGL Masters. I've been using a unique shader for several years without problems and suddenly, between minor renderer modifications and NVIDIA driver upgrades, something went terribly wrong.

    My shader is fairly complex, but it can render any geometry with any material properties and light count. (I'm aware that this is not optimal, but it's very easy to maintain.)

    For instance, I do skinning like so:

        uniform int u_DeformerCount;
        in vec4 a_Weights;
        in ivec4 a_Deformers;
        uniform mat4 u_XFormMatrix[40]; /// Deformation matrices.

        if (u_DeformerCount > 0) /// Matrix deformations.
        {
            mat4 l_deformer;
            for (int i = 0; i < 4; ++i) /// Maximum of 4 influences per vertex.
            {
                l_deformer += a_Weights[i] * u_XFormMatrix[a_Deformers[i]];
                SNIP....
        }

    The problem: while everything works without a glitch when u_DeformerCount > 0, there seems to be data corruption with geometries that don't have the skinning condition enabled. I'm basically getting black frames frequently, like a kid playing with the light switch.

    Now, I can make the problem disappear by using u_XFormMatrix[0] instead of u_XFormMatrix[a_Deformers[i]]... I've tried everything I can think of to fix this, and I'm at the point where I could use a Jedi Master's wisdom.

    - How could a part of the shader that is not explicitly used affect its output?
    - Any known pitfalls using uniform arrays?

    PS: The major downside of a unique shader is that every uniform needs to be set/reset every draw, and I'm guessing that could be the source of my problem. (A defensive variant of the skinning block is sketched below.)
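    Two properties of that block are worth flagging, as a hedged sketch rather than a definitive fix: in GLSL, reading an uninitialized local (the += on l_deformer) yields undefined values, and indexing u_XFormMatrix out of range through a_Deformers is likewise undefined. A defensive variant of the same loop:

        if (u_DeformerCount > 0)
        {
            mat4 l_deformer = mat4(0.0); /// Explicit zero accumulator.
            for (int i = 0; i < 4; ++i)
            {
                int l_index = clamp(a_Deformers[i], 0, 39); /// Keep the index inside the array.
                l_deformer += a_Weights[i] * u_XFormMatrix[l_index];
            }
            /// ... apply l_deformer as in the elided portion above.
        }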