Advice on streaming data from CPU to GPU

Started by vergauth28
3 comments, last by vergauth28 15 years, 3 months ago
Hi, I'm fairly new to OpenGL programming, and new to this forum. This is my first real use of OpenGL as a programmer, so it will be a good project to learn a lot :) I'd like to present my problem here and ask if people can give me some starting information and suggest some possible/efficient ways to do this. I'd like to stick to pure OpenGL with minimum pixel shader 3.0 support.

I have several threads which generate data on the CPU. These are small structs which contain an X and Y image coordinate, a 3x floating-point XYZ colour, a floating-point alpha value, an integer normalization factor, and a buffer ID. I'll refer to this struct for now as a pixelpacket. These are generated at approx. 100,000 pixelpackets per second and need to be fed into the GPU, where a fragment shader will use them to splat/reconstruct them onto an HDR image texture using a Gaussian filter. I'll call this HDR image texture a target for now. I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket. The targets need to be mixed/scaled together and tonemapped via a fragment shader for display.

I'd like to have everything work properly in realtime. The target mixing/tonemapping is pretty straightforward, however I'm a bit unsure as to what would be the best way of implementing the streaming of the packets and the splatting on the GPU. Please advise :) Thanks!
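For reference, the pixelpacket I'm describing could be laid out roughly like this (the field names and exact C types are just for illustration, nothing is final):

/* Rough sketch of one pixelpacket as described above. */
struct PixelPacket
{
    float x, y;            /* X and Y image coordinate        */
    float color[3];        /* XYZ colour                      */
    float alpha;           /* alpha value                     */
    int   normalization;   /* integer normalization factor    */
    int   bufferId;        /* which target to splat into      */
};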
GL supports rendering certain types of primitives: GL_POINTS, GL_LINES, GL_TRIANGLES, and some others which are basically GL_TRIANGLES for all practical purposes.

Take your pick.

For sending your vertices/color and whatever, you would fill a VBO buffer.
http://www.opengl.org/wiki/index.php/VBO
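Something along these lines, for example (untested sketch; it assumes the pixelpacket struct from the first post and that the vertex attribute pointers have already been set up to match its layout):

/* Sketch: upload the latest batch of pixelpackets into a streaming VBO
   and draw one GL_POINTS vertex per packet. */
void drawPacketBatch(GLuint vbo, const struct PixelPacket *packets, int packetCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    /* Orphan the old storage, then upload the new batch; GL_STREAM_DRAW
       hints that the contents are replaced every frame. */
    glBufferData(GL_ARRAY_BUFFER, packetCount * sizeof(struct PixelPacket),
                 NULL, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0,
                    packetCount * sizeof(struct PixelPacket), packets);

    /* glVertexAttribPointer / glVertexPointer calls for the packet fields
       go here, matching however the struct is actually laid out. */
    glDrawArrays(GL_POINTS, 0, packetCount);
}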

Quote:I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket.


You could do MRT (multiple render targets) but you have to render to all your buffers at the same time.
Or you could just render all your points for buffer 1, and if the bufferID isn't the one you want, you move the vertex offscreen in the vertex shader.
Then proceed to buffer 2.
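Roughly like this in the vertex shader (just a sketch; the attribute and uniform names are made up):

// Vertices whose bufferID does not match the target currently being
// filled are pushed outside the clip volume so they get discarded.
attribute float bufferID;       // per-packet buffer ID
uniform float   currentTarget;  // ID of the target being rendered

void main()
{
    if (abs(bufferID - currentTarget) > 0.5)
        gl_Position = vec4(2.0, 2.0, 2.0, 1.0);   // outside clip space
    else
        gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
}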
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);
Quote:Original post by V-man
GL supports rendering certain types of primitives: GL_POINTS, GL_LINES, GL_TRIANGLES, and some others which are basically GL_TRIANGLES for all practical purposes. [...]


Hi,

Thanks for your reply, but I don't think drawing primitives would suit my application...

I need practical information on how to efficiently transport the data, then have it available in a fragment shader for splatting onto a framebuffer or equiv...

The splatting is basically a fragment operation, which splats the pixels onto the image using a custom filter kernel.
All this needs to stay 'custom' as it involves XYZ colour etc...

Basically, I need to build an efficient FIFO stream of these packets, from a thread in my CPU executable, all the way to data available in a fragment shader.

Thanks
Quote:I need practical information on how to efficiently transport the data, then have it available in a fragment shader for splatting onto a framebuffer or equiv...


I am not entirely certain what you are trying to do, but if I understand correctly, you would have to upload your data as an array of uniforms to the fragment shader (FS).
You would store the XY coordinates in that array.
You render a fullscreen quad.
In the FS, you sample the HDR texture based on the XY coordinates of the fragment, and depending on how close the fragment is to one of those XY coordinates in the array, you apply a filter.
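As a rough sketch of that idea (the array size is limited by the fragment-shader uniform budget, and the names, sigma and the Gaussian weighting are just assumptions for illustration):

// Accumulate a Gaussian-weighted contribution from every uploaded
// packet whose centre is near this fragment.
#define MAX_PACKETS 64

uniform vec2  packetPos[MAX_PACKETS];    // XY image coordinates
uniform vec4  packetColor[MAX_PACKETS];  // XYZ colour + alpha
uniform int   packetCount;               // packets uploaded this pass
uniform float sigma;                     // Gaussian filter width

void main()
{
    vec4 sum = vec4(0.0);
    for (int i = 0; i < packetCount; ++i)
    {
        float d = distance(gl_FragCoord.xy, packetPos[i]);
        float w = exp(-(d * d) / (2.0 * sigma * sigma));
        sum += w * packetColor[i];
    }
    gl_FragColor = sum;   // written into the floating-point target
}

With additive blending enabled on the target, successive batches of packets simply accumulate.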

It very much depends on what you are trying to achieve.
I suggest studying the graphics pipeline and getting to know shaders (GLSL or Cg).
Get to know FBO
http://www.opengl.org/wiki/index.php/GL_EXT_framebuffer_object
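For instance, one HDR target could be set up roughly like this (untested; assumes EXT_framebuffer_object plus ARB_texture_float, and the names are placeholders):

/* Sketch: create a floating-point texture and attach it to an FBO so
   the splatting pass can render into it. */
GLuint createHdrTarget(int width, int height, GLuint *fboOut)
{
    GLuint fbo, target;

    glGenTextures(1, &target);
    glBindTexture(GL_TEXTURE_2D, target);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                 GL_RGBA, GL_FLOAT, NULL);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, target, 0);

    /* Always check completeness before rendering into it. */
    if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT)
        return 0;   /* incomplete: handle the error */

    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    *fboOut = fbo;
    return target;
}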

Get to know PBO
http://www.opengl.org/documentation/specs/
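And a rough PBO sketch for getting the packet data to the GPU without stalling (untested; packPacketsIntoTexels() is a hypothetical helper that packs the packet fields into RGBA float texels, and the one-row texture layout is an assumption):

/* Hypothetical helper: packs packet fields into RGBA float texels. */
void packPacketsIntoTexels(float *dst, const struct PixelPacket *src, int texelCount);

/* Sketch: stream packet data through a pixel buffer object, then source a
   texture update from it so the transfer can overlap with CPU work. */
void uploadPacketsViaPbo(GLuint pbo, GLuint packetTexture,
                         const struct PixelPacket *packets, int texelCount)
{
    float *ptr;

    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, pbo);
    glBufferData(GL_PIXEL_UNPACK_BUFFER, texelCount * 4 * sizeof(float),
                 NULL, GL_STREAM_DRAW);

    ptr = (float *)glMapBuffer(GL_PIXEL_UNPACK_BUFFER, GL_WRITE_ONLY);
    packPacketsIntoTexels(ptr, packets, texelCount);   /* hypothetical */
    glUnmapBuffer(GL_PIXEL_UNPACK_BUFFER);

    /* A NULL-offset pointer makes glTexSubImage2D read from the bound PBO. */
    glBindTexture(GL_TEXTURE_2D, packetTexture);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, texelCount, 1,
                    GL_RGBA, GL_FLOAT, (const GLvoid *)0);
    glBindBuffer(GL_PIXEL_UNPACK_BUFFER, 0);
}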
Hi,

Thanks for that, I've started learning about FBOs...
I'll ask more questions if I have any along the way.

Thanks!

This topic is closed to new replies.
