Neosettler

Member Since 17 Nov 2011
Offline Last Active Apr 09 2013 10:28 AM

Topics I've Started

glPolygonMode GL_LINE and GL_POINT performance drop!

16 March 2013 - 10:48 AM

Greetings,

 

When rendering with glPolygonMode GL_LINE or GL_POINT, I'm getting a performance drop compared to GL_FILL, even with GL_LINE_SMOOTH disabled. Someone in another thread suggested disabling user clipping; that doesn't make sense to me, as I wouldn't know how to do that in the first place. Any secrets of the ancients here?

 

thx,


Delete VBOs at run time?

26 January 2013 - 04:34 PM

Greetings GL Masters,

I recently ran my application through gDEBugger GL: http://www.gremedy.com/download.php

I was shocked to my very core that, all these years, I had video memory leaks. After endless effort, I managed to find the source of the leaks. All I needed to do was match every glGenBuffers with a glDeleteBuffers, and my life was peachy again.

 

Each VBO setup looks somewhat like this:

glGenBuffers(1, &l_id1);
glBindBuffer(GL_ARRAY_BUFFER...
glBufferData(GL_ARRAY_BUFFER...
glBufferSubData(GL_ARRAY_BUFFER...

glGenBuffers(1, &l_id2);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER...
glBufferData(GL_ELEMENT_ARRAY_BUFFER...

glDeleteBuffers(1, &l_id1);
glDeleteBuffers(1, &l_id2);
 
The problem is, while this works fine when opening and closing the API, deleting buffers at run time makes the next draw call end with an access violation.

I can't find any relevant info on how to properly delete VBOs at run time so far. Any wisdom of the ancestors would be very welcome.
 
Thx


GLSL: Unique shader and Data Corruption!

12 January 2013 - 02:32 PM

Greetings, OpenGL Masters,

 

I've been using a unique shader for several years without problems, and suddenly, between minor renderer modifications and NVidia driver upgrades, something went terribly wrong.

 

My shader is fairly complex, but it can render any geometry with any material properties and light count. (I'm aware that this is not optimal, but it's very easy to maintain.)

 

For instance, I do skinning like so:

 

uniform int u_DeformerCount;
uniform mat4 u_XFormMatrix[40]; /// Deformation matrices.

in vec4 a_Weights;
in ivec4 a_Deformers;

if (u_DeformerCount > 0) /// Matrix deformations.
{
    mat4 l_deformer;

    for (int i = 0; i < 4; ++i) /// Maximum of 4 influences per vertex.
    {
        l_deformer += a_Weights[i] * u_XFormMatrix[a_Deformers[i]];

SNIP....
}

 

The problem:
While everything works without a glitch when u_DeformerCount > 0, there seems to be data corruption with geometries that don't have the skinning condition enabled. I'm basically getting black frames frequently, like a kid playing with a light switch.

 

Now, I can make the problem disappear by using u_XFormMatrix[0] instead of u_XFormMatrix[a_Deformers[i]]... I've tried everything I could think of to fix this, and I'm at the point where I could use a Jedi Master's wisdom.

 

- How could a part of the shader that is not used explicitly affect its output?

- Any known pitfall using uniform arrays?

 

PS: The major downside of a unique shader is that every uniform needs to be set/reset every draw, and I'm guessing that could be the source of my problem.
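One thing worth trying, purely an assumption on my part rather than a confirmed fix: l_deformer is read (via +=) before it is ever written, and on meshes that never feed a_Deformers real data the attribute may hold stale garbage that indexes past u_XFormMatrix[40], either of which is undefined behavior in GLSL. That would explain why hard-coding u_XFormMatrix[0] hides the problem. A defensive version of the loop:

```glsl
uniform int u_DeformerCount;
uniform mat4 u_XFormMatrix[40]; /// Deformation matrices.

in vec4 a_Weights;
in ivec4 a_Deformers;

if (u_DeformerCount > 0) /// Matrix deformations.
{
    mat4 l_deformer = mat4(0.0); /// Initialize: reading an uninitialized local is undefined.

    for (int i = 0; i < 4; ++i) /// Maximum of 4 influences per vertex.
    {
        int l_index = clamp(a_Deformers[i], 0, 39); /// Never index past the array.
        l_deformer += a_Weights[i] * u_XFormMatrix[l_index];
    }
}
```

If the black frames stop with the clamp in place, the root cause is stale a_Deformers data on non-skinned geometry rather than the uniform array itself.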


VBO and Multi-Material!?

12 January 2013 - 01:51 PM

Greetings everyone,

I’ve been looking around and found many threads on this topic, but none of them seems to tackle my concern specifically. So I’ll try to explain my approach to drawing geometries with multiple materials.

For simplicity’s sake, let’s focus on only one geometry.

Geometry:
-Meshes Array
-Vertices
-Vertices Attributes
-VBO

Mesh:
-Face Indices
-Material

This concept at first seemed ideal for sharing the vertices between meshes, but consider a geometry composed of 4 quads:

---------
| A | B |
----+----
| C | D |
---------

- VBO is aligned ABCD
- Mesh1: Material1: Face Indices AD
- Mesh2: Material2: Face Indices BC

The problem:
Chances are that Mesh2 will be rendered correctly because its vertices are packed one after the other in the VBO, but Mesh1 might read wrong data, since the face indices for A and D are not contiguous in the VBO.

The goal:
To share the vertices, vertex attributes, and VBO between meshes, avoiding geometry reconstruction and using the face indices to render whatever part of the geometry, leaving the vertex attributes undisturbed from their original structure (presumably imported from a CG package).

Now, could this be done, or am I dreaming of a fantasy world?


OpenGL 4.x and OS X Mountain Lion

15 December 2012 - 12:55 PM

Greetings,

I’m looking for a solution to load GL extensions manually on Mac OS X without the need for a library like GLEW.

On Windows I use:
#include <windows.h>
#include <glcorearb.h>
LoadLibraryA("opengl32.dll");
GetProcAddress
…and it works beautifully.

On Linux I use:
#include <GL/glx.h>
#include <glcorearb.h>
install libX11-dev
link to X11 lib
glXGetProcAddress
…and the world is a wonderful place.

Now, AFAIK, GLX doesn’t support contexts above version 2.1 on OS X, and I’m trying to find a solution but I’m not sure which direction to take. I’m already digging into XQuartz, but I can’t find any relevant example of how to create a 4.x context so far.

To help in my decision making, here are a few questions that could clear things up a bit:

1 - What is the most up to date technique to use OpenGL 4.x on Linux and OS X?
2 - Is there a common way to use OpenGL 4.x between Linux and OS X?
3 - With distribution in mind, is there a way to gather GL dependencies within the API root folder? Think of this as running a GL API on a vanilla OS without the need of installing anything.

Any help would be greatly appreciated.
