
Member Since 10 Apr 2007
Offline Last Active Apr 08 2016 03:15 PM

Topics I've Started

Texture Deletion Race Condition

12 May 2014 - 11:46 AM

I built a small demo using the Kinect for PC that simply reconstructs 3D position and color. Time was short, so the simplest solution was just to take the raw client-side data and copy it into a new texture every frame.
The way this worked in the main loop was as follows:

--Grab new data from the Kinect (client-side bytes)

--Allocate new OpenGL textures and glTexImage2D the new data into them

--glFlush and glFinish all the things (just in case)

--Draw the frame using the new textures (vertex shader sorcery)

--glFlush and glFinish all the things (just in case)

--Flip buffers

--glFlush and glFinish all the things (just in case) (again)

--Delete the new textures (and delete the client-side data)

The above was my debugging attempt, and it's obviously redundant with all the flushes/finishes. Even with this code, however, roughly every 15 frames I'd hit an error where a texture essentially didn't exist: it would just render black.
My workaround at the time was to queue the last 10 frames' textures for deletion (so each frame it would delete the texture used 10 frames earlier). This "fixed" the problem. However, I want to know why the original issue occurred. Some thoughts/notes:

--I want to say it's not because the client-side data is deleted before being copied into the texture: the client-side data was deleted at the end of the frame, after the object had been drawn, and the flushes everywhere plus the buffer flip should rule out a pipelining issue. Still, I can't think of what else it could be.

--It's not compiler reordering breaking it; the problem still occurred with optimizations disabled.

--The draw-the-frame step actually draws different views into three separate windows that share an OpenGL context. The issue only ever seemed to affect one window at a time, which I found odd.

--The issue was not data from the Kinect being broken. This is evidenced by, among other things, queueing for deletion fixing the problem. Also, the Kinect copies its data into a client-side array which is reused, but this array is copied into a freshly-allocated client-side array associated with each texture. When the new OpenGL texture is created, the data is pulled from this newly-allocated array, which persists for the length of the frame and until the texture is deleted.

--My drivers are up to date; the GPU is an NVIDIA GeForce 580M.

I feel like it's some driver-related pipelining issue related to texture upload, but I'm not completely sure. Ideas?
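For what it's worth, the 10-frame deletion queue can be sketched as a small ring buffer. This is a hedged illustration of the pattern, not the original code: plain unsigned ints stand in for GLuint texture names, and the injected deleteFn stands in for glDeleteTextures so the sketch is self-contained.

```cpp
#include <array>
#include <cstddef>
#include <functional>
#include <utility>

// Deferred texture deletion: instead of deleting a texture the same frame it
// was used, hold it for DELAY frames so the GPU has long since finished with
// it.  "unsigned" stands in for GLuint (name 0 means "no texture"), and
// deleteFn_ stands in for a call to glDeleteTextures.
class DeferredDeleter {
public:
    static constexpr std::size_t DELAY = 10;

    explicit DeferredDeleter(std::function<void(unsigned)> deleteFn)
        : deleteFn_(std::move(deleteFn)) {}

    // Call once per frame with the texture used this frame; deletes the
    // texture that was enqueued DELAY frames ago (if any).
    void endFrame(unsigned usedThisFrame) {
        unsigned old = ring_[head_];
        if (old != 0) deleteFn_(old);
        ring_[head_] = usedThisFrame;
        head_ = (head_ + 1) % DELAY;
    }

private:
    std::function<void(unsigned)> deleteFn_;
    std::array<unsigned, DELAY> ring_{};  // zero-initialized: 0 = empty slot
    std::size_t head_ = 0;
};
```

The principled version of the same idea waits on a GLsync fence (glFenceSync/glClientWaitSync, from ARB_sync) instead of counting frames, so the delete happens exactly when the GPU is done rather than "probably 10 frames later".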

Shadow Volume Generation Precision

28 April 2014 - 12:50 AM


So I have reimplemented my old shadow volumes algorithm--mainly for completeness. This implementation uses a geometry shader to generate depth-fail (read: two-ends capped) geometry.

All works well, except that for some glancing angles, the geometry is extruded out from where it shouldn't. This results in artifacts. This seems to happen particularly where a triangle's edge that should be shadowed actually thinks it's a silhouette edge.

(Screenshots: a simple example where all seems well; the same scene rotated so that an artifact develops; and a closeup of such an artifact.)
The dark face at the bottom should be completely shadowed, and therefore should not generate an edge along the bottom-left. It generates one anyway, screwing up the render.

The normal used to compare against the light direction is generated from the cross product of the geometry itself, so it's not an interpolation issue. Also, the artifacts seem to happen only in a narrow range of glancing angles; the same face will correct itself if turned further.
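To make the failure mode concrete, here is a minimal sketch of the usual facing/silhouette test, assuming the formulation described above (a face normal from the triangle's own cross product, compared against the light direction); all names are illustrative, not from the actual implementation. At glancing angles the dot product hovers near zero, so rounding can flip one triangle's classification while its neighbor's stays put, marking an interior edge as a silhouette.

```cpp
// Minimal geometry helpers; names are illustrative.
struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x};
}
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Facing test: normal from the triangle's own geometry (cross product),
// compared against the light direction.  At glancing angles d is tiny and its
// sign is at the mercy of floating-point rounding -- exactly where a shadowed
// face can be misclassified as light-facing.
bool facesLight(Vec3 a, Vec3 b, Vec3 c, Vec3 lightDir, float eps = 0.0f) {
    Vec3 n = cross(sub(b, a), sub(c, a));
    float d = dot(n, lightDir);
    return d > eps;  // eps > 0 biases borderline faces toward "back-facing"
}

// An edge shared by two triangles is a silhouette edge iff exactly one of the
// two triangles faces the light.
bool isSilhouetteEdge(bool frontA, bool frontB) { return frontA != frontB; }
```

An epsilon only moves the borderline; what matters more for depth-fail correctness is that every face gets exactly one consistent classification (identical arithmetic, identical vertex order per face), so the extruded volume stays closed even when a near-degenerate face lands on the "wrong" side.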

My question is: is this a common thing? The only cause I can think of is a precision issue, but the error is really quite large. I realize I haven't given much information about the implementation, but has anyone seen this sort of thing with volume extrusion before?


SEGFAULT with OpenGL ES glDrawArrays

02 January 2014 - 05:24 AM


I'm trying to draw simple things with OpenGL ES on a Raspberry Pi.

I created a context through the X system and the GLX_EXT_create_context_es2_profile extension. This resulted in what I hope is an OpenGL ES 2.0 context (the functions I used to test it gave weird values), but it seems to work for some things (e.g., it gave salient feedback on a shader, which it successfully compiled).

The code is:
GLint loc_vert = program->get_location_attribute("vertex");
printf("Loc vert=%d, vertex size=%d\n", loc_vert, (int)sizeof(Vertex));

// loc_vert should be checked against -1 before use (it's 0 here, so it's valid)
glEnableVertexAttribArray(loc_vert);
glVertexAttribPointer(loc_vert, 4, GL_FLOAT, GL_FALSE, sizeof(Vertex), vertices);

printf("Begin glDrawArrays=%p with %d,%d,%d\n", (void*)glDrawArrays, mode, 0, num_vertices);
glDrawArrays(mode, 0, num_vertices);
printf("End glDrawArrays=%p\n", (void*)glDrawArrays);

The "vertices" array is a static array of four vertices.

The output is:

Loc vert=0, vertex size=68
Begin glDrawArrays=<some ptr> with 1,0,2

I checked against the documentation and my previous code, and this should be valid (it also runs fine on two other systems).

I took the liberty of disassembling glDrawArrays:

Dump of assembler code for function glDrawArrays:
0xb6e63500 <+0>: ldr r3, [pc, #32] ; 0xb6e63528 <glDrawArrays+40>
0xb6e63504 <+4>: push {r4, lr}
0xb6e63508 <+8>: mov r4, r0
0xb6e6350c <+12>: ldr r3, [pc, r3]
0xb6e63510 <+16>: bl 0xb6e6f110
0xb6e63514 <+20>: ldr r3, [r0, r3]
0xb6e63518 <+24>: mov r0, r4
0xb6e6351c <+28>: ldr r3, [r3, #1240] ; 0x4d8
0xb6e63520 <+32>: blx r3
0xb6e63524 <+36>: pop {r4, pc}
0xb6e63528 <+40>: andeq r8, r1, r12, asr sp
End of assembler dump.

It segfaults on "0xb6e63510 <+16>", which is the bl instruction (a branch-and-link, i.e. a function call, presumably into the driver's dispatch lookup).


GLEW Linker Error/Check Extension Support at Compile Time?

27 June 2013 - 03:59 AM



I am trying to compile and run some of my new code on my old laptop. My current code was developed on a GL 4-capable machine, and uses several key extensions that are not supported on the older device.


When building on the newer machine, everything works perfectly. When building on the older machine, despite a nearly identical build chain, the build fails with linker errors: some GLEW functions are reported as undefined. I am linking against the library, in the same place as in the working build, and, incidentally, last in the linker order. The undefined functions include __glewUniform1D and __glewBindImageTextureEXT. Such functions come from extensions (in those cases, GL_ARB_gpu_shader_fp64 and GL_ARB_shader_image_load_store, respectively)--notably, extensions that the older laptop doesn't list as supported.


I have an extremely fuzzy conception of how GLEW actually works, but since other GLEW functions (e.g. glewInit) are not reported as undefined, I draw the conclusion that perhaps those extensions' entry points don't exist in the older library and so can't be linked against. Though, since the GLEW library versions are the same (1.5) on both machines and I can't think of another mechanism, I can't be sure.


However, this also gives rise to another idea: since the extensions aren't supported anyway, it would be convenient to strip the code that won't work at compile time. But since GPU detection can, I expect, only be done at runtime, I don't see how that could work.


So: why the linker errors? And, can extensions be checked at compile-time?
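On the second question: GLEW itself only answers extension availability at runtime (after glewInit(), via the GLEW_ARB_gpu_shader_fp64-style macros or glewIsSupported()), so any compile-time removal has to come from a flag you define yourself per build machine. A sketch of that pattern, with a hypothetical MYAPP_HAVE_FP64 flag and strings standing in for the actual GL calls so it stands alone:

```cpp
#include <string>

// Compile-time gating comes from your own build-system flag (for example,
// passing -DMYAPP_HAVE_FP64 only on machines whose drivers support the
// extension).  The runtime parameter models the post-glewInit() check that
// GLEW provides; the GL calls are replaced by strings here so the sketch is
// self-contained.  MYAPP_HAVE_FP64 is a made-up name, not part of GLEW.
std::string chooseUniformPath(bool runtimeHasFp64) {
#ifdef MYAPP_HAVE_FP64
    // This branch (the code using glUniform1d) only compiles into builds
    // configured for the extension...
    if (runtimeHasFp64)  // ...and still re-checks the driver at runtime.
        return "double-precision path (glUniform1d)";
#endif
    (void)runtimeHasFp64;
    return "float fallback path (glUniform1f)";
}
```

With the flag undefined, the double-precision branch never even reaches the compiler, which is the closest thing to a "compile-time extension check" available: the decision is yours per build target, not GLEW's.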




*Want* Multiple Base Objects

31 May 2013 - 07:59 PM



I've come across an . . . unusual programming situation.


I have an FBO2D base class, which implements a 2D framebuffer object, and I wanted to implement an FBOCube class derived from it, implementing a cubemap framebuffer object. The point is that an FBOCube is essentially six FBO2Ds, but it still kinda makes sense to derive it. In effect, I want six FBO2D parents for a single object. The language is C++.


I realize this is awful, and I'm redesigning it, but I thought it was an interesting case. I'm pretty sure it's not possible in C++ (you can't list the same class twice as a direct base). I'm curious: what does the programming community think?
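For the record, C++ does reject repeating the same class as a direct base, so the "six FBO2D parents" idea naturally becomes composition instead. A minimal sketch with hypothetical members (the real classes presumably wrap GL framebuffer state; only the class names come from the post):

```cpp
#include <array>
#include <cstddef>

// Stand-in for the post's 2D framebuffer object; members are illustrative.
class FBO2D {
public:
    explicit FBO2D(int size = 0) : size_(size) {}
    int size() const { return size_; }
private:
    int size_;
};

// "class FBOCube : public FBO2D, public FBO2D, ..." is ill-formed, so the
// cubemap FBO owns one FBO2D per face and forwards to them instead.
class FBOCube {
public:
    explicit FBOCube(int faceSize) {
        for (FBO2D& f : faces_) f = FBO2D(faceSize);
    }
    FBO2D&       face(std::size_t i)       { return faces_[i]; }
    const FBO2D& face(std::size_t i) const { return faces_[i]; }
    static constexpr std::size_t faceCount() { return 6; }
private:
    std::array<FBO2D, 6> faces_;  // order: +X, -X, +Y, -Y, +Z, -Z
};
```

This keeps "is essentially six FBO2Ds" literal (the object really contains six of them) while sidestepping the inheritance question entirely; shared per-face behavior lives in FBO2D and FBOCube just routes to the right face.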