vergauth28

OpenGL advice on streaming data from CPU to GPU


Recommended Posts

Hi, I'm fairly new to OpenGL programming and new to this forum. This is my first real use of OpenGL as a programmer, so it will be a good project to learn a lot :) I'd like to present my problem here and ask if people can give me some starting information and suggest some possible/efficient ways to do this. I'd like to stick to pure OpenGL with a minimum of pixel shader 3.0 support.

I have several threads which generate data on the CPU. These are small structs which contain an X and Y image coordinate, a 3x floating-point XYZ colour, a floating-point alpha value, an integer normalization factor, and a buffer ID. I'll refer to this struct for now as a pixelpacket. These are generated at approximately 100,000 pixelpackets per second and need to be fed into the GPU, where a fragment shader will use them to splat/reconstruct them onto an HDR image texture using a Gaussian filter. I'll call this HDR image texture a target for now. I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket. The targets need to be mixed/scaled together and tonemapped via a fragment shader for display.

I'd like to have everything work properly in realtime. The target mixing/tonemapping is pretty straightforward, however I'm a bit unsure as to what would be the best way of implementing the streaming of the packets and the splatting on the GPU. Please advise :) Thanks!
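For reference, a minimal sketch of what such a pixelpacket might look like as a C++ struct (the field names and exact types are assumptions read off the description above, not code from the actual project):

```cpp
// Hypothetical layout of one "pixelpacket" as described above;
// names and types are assumptions, not code from the real project.
struct PixelPacket
{
    int   x, y;              // target image coordinates
    float cieX, cieY, cieZ;  // 3x floating-point XYZ colour
    float alpha;             // floating-point alpha value
    int   normalization;     // integer normalization factor
    int   bufferId;          // which HDR target this packet belongs to
};
```

At roughly 32 bytes per packet, 100,000 packets per second is on the order of 3 MB/s, which is a very modest upload rate; the harder part is batching and synchronising the transfers, as discussed in the replies below.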

GL supports rendering certain types of primitives: GL_POINTS, GL_LINES, GL_TRIANGLES, and some others which are basically GL_TRIANGLES for all practical purposes.

Take your pick.

For sending your vertices, color, and whatever else, you would fill a VBO.
http://www.opengl.org/wiki/index.php/VBO
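A minimal sketch of that upload path, assuming the packets are drawn as GL_POINTS and that only the XY positions are uploaded here (the function name and GLEW usage are assumptions; a real version would interleave colour, alpha, and the buffer ID as extra attributes):

```cpp
#include <GL/glew.h>
#include <vector>

// Hypothetical: upload one batch of packet positions and draw them as points.
// GL_STREAM_DRAW hints to the driver that the contents change every frame.
void drawPacketBatch(GLuint vbo, const std::vector<float>& xyPairs)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER,
                 xyPairs.size() * sizeof(float),
                 xyPairs.data(),
                 GL_STREAM_DRAW);

    glEnableClientState(GL_VERTEX_ARRAY);
    glVertexPointer(2, GL_FLOAT, 0, (const void*)0);   // tightly packed x, y

    glDrawArrays(GL_POINTS, 0, (GLsizei)(xyPairs.size() / 2));

    glDisableClientState(GL_VERTEX_ARRAY);
    glBindBuffer(GL_ARRAY_BUFFER, 0);
}
```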

Quote:
I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket.


You could do MRT (multiple render targets), but you have to render to all your buffers at the same time.
Or you could just render all your points for buffer 1, and if the bufferID isn't what you want, move the vertices off-screen in the vertex shader.
Then proceed to buffer 2.
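As a reading of that second approach, here is a hypothetical GLSL vertex shader, embedded as a C++ string constant (the uniform and attribute names are assumptions), that pushes points with the wrong ID outside the clip volume so they are culled:

```cpp
// Hypothetical vertex shader for per-target passes: a uniform selects which
// buffer ID is being rendered, and points with any other ID are moved outside
// clip space. IDs are assumed to be small exact integers stored as floats.
const char* kSelectByBufferIdVS = R"(
    uniform float currentBufferId;   // ID of the target rendered this pass
    attribute float bufferId;        // per-point ID streamed with the vertex data

    void main()
    {
        if (bufferId == currentBufferId)
            gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
        else
            gl_Position = vec4(2.0, 2.0, 2.0, 1.0);  // outside clip volume -> culled
    }
)";
```

Each target would then get its own draw pass with currentBufferId changed between passes, at the cost of re-processing every point once per target.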

Quote:
Original post by V-man
GL supports rendering certain types of primitives: GL_POINTS, GL_LINES, GL_TRIANGLES, and some others which are basically GL_TRIANGLES for all practical purposes.

Take your pick.

For sending your vertices, color, and whatever else, you would fill a VBO.
http://www.opengl.org/wiki/index.php/VBO

Quote:
I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket.


You could do MRT (multiple render targets), but you have to render to all your buffers at the same time.
Or you could just render all your points for buffer 1, and if the bufferID isn't what you want, move the vertices off-screen in the vertex shader.
Then proceed to buffer 2.


Hi,

Thanks for your reply,
but I don't think drawing primitives would suit my application...

I need practical information on how to efficiently transport the data and then have it available in a fragment shader for splatting onto a framebuffer or equivalent...

The splatting is basically a fragment operation, which splats the pixels onto the image using a custom filter kernel.
All of this needs to stay 'custom' as it involves XYZ colour etc...

Basically, I need to build an efficient FIFO stream of these packets, from a thread in my CPU executable, all the way to data available in a fragment shader.

Thanks

Quote:
I need practical information on how to efficiently transport the data, then have it available in a fragment shader for splatting onto a framebuffer or equiv...


I am not entirely certain what you are trying to do, but if I understand correctly, you would have to upload your data as an array of uniforms to the fragment shader (FS).
You would store the XY coordinates in that array.
You render a fullscreen quad.
In the FS, you sample the HDR texture based on the XY coordinates of the fragment, and depending on how close the fragment is to one of those XY coordinates in the array, you apply a filter.
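To make that concrete, a hypothetical fragment shader for the uniform-array idea, again embedded as a C++ string (the array size, uniform names, and Gaussian weighting are all assumptions; SM3.0 uniform limits mean a real implementation would split the packets into small batches per fullscreen pass):

```cpp
// Hypothetical fragment shader: each fragment of a fullscreen quad checks its
// distance to every splat centre and accumulates a Gaussian-weighted
// contribution. Additive blending into the HDR target accumulates batches.
const char* kSplatFS = R"(
    #define MAX_SPLATS 64

    uniform int   splatCount;
    uniform vec2  splatPos[MAX_SPLATS];    // image-space XY of each packet
    uniform vec4  splatColor[MAX_SPLATS];  // XYZ colour + alpha of each packet
    uniform float sigma;                   // Gaussian kernel width in pixels

    void main()
    {
        vec4 sum = vec4(0.0);
        for (int i = 0; i < MAX_SPLATS; ++i)
        {
            if (i < splatCount)
            {
                vec2  d = gl_FragCoord.xy - splatPos[i];
                float w = exp(-dot(d, d) / (2.0 * sigma * sigma));
                sum    += w * splatColor[i];
            }
        }
        gl_FragColor = sum;
    }
)";
```

One thing to note: since every fragment walks the whole array, this only stays realtime for small batches; splatting point primitives with blending, as suggested earlier in the thread, tends to scale better with the packet rate.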

It very much depends on what you are trying to achieve.
I suggest studying the graphics pipeline and getting to know shaders (GLSL or Cg).
Get to know FBO
http://www.opengl.org/wiki/index.php/GL_EXT_framebuffer_object
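For reference, a minimal sketch of creating one floating-point HDR target with EXT_framebuffer_object (the GL_RGBA16F_ARB format and the helper's shape are assumptions):

```cpp
#include <GL/glew.h>

// Hypothetical: create one HDR render target (a float texture attached to an
// FBO). GL_RGBA16F_ARB needs the float-texture extensions that SM3.0-class
// hardware generally exposes.
GLuint createHdrTarget(int width, int height, GLuint* outTexture)
{
    GLuint tex = 0, fbo = 0;

    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB,
                 width, height, 0, GL_RGBA, GL_FLOAT, 0);

    glGenFramebuffersEXT(1, &fbo);
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
    glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                              GL_TEXTURE_2D, tex, 0);
    // (A real version would check glCheckFramebufferStatusEXT here.)
    glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);

    *outTexture = tex;
    return fbo;
}
```

Each buffer ID would get its own target like this; binding the FBO directs the splatting passes into it, and the texture is then sampled during the mix/tonemap pass.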

Get to know PBO
http://www.opengl.org/documentation/specs/
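A buffer object is mapped and filled the same way whether it is used as a PBO or a VBO, and that mapping step is essentially the FIFO path the original poster asked about. A hedged sketch of the per-frame streaming step, reusing the PixelPacket struct sketched under the first post (the orphan-then-map pattern is one common way to avoid stalling on the previous frame's draw):

```cpp
#include <GL/glew.h>
#include <cstring>
#include <vector>

// Hypothetical per-frame streaming step: drain whatever packets the worker
// threads queued since the last frame into a GL buffer. Orphaning the buffer
// with a NULL glBufferData first lets the driver hand back fresh storage
// instead of waiting on the previous frame's draw.
// PixelPacket is the struct sketched under the first post.
size_t streamPackets(GLuint buffer, const std::vector<PixelPacket>& pending)
{
    const size_t bytes = pending.size() * sizeof(PixelPacket);

    glBindBuffer(GL_ARRAY_BUFFER, buffer);
    glBufferData(GL_ARRAY_BUFFER, bytes, 0, GL_STREAM_DRAW);   // orphan

    if (void* dst = glMapBuffer(GL_ARRAY_BUFFER, GL_WRITE_ONLY))
    {
        std::memcpy(dst, pending.data(), bytes);
        glUnmapBuffer(GL_ARRAY_BUFFER);
    }
    glBindBuffer(GL_ARRAY_BUFFER, 0);

    return pending.size();   // number of packets ready to draw this frame
}
```

The drawing side then walks this buffer with vertex attribute pointers, as in the VBO example earlier in the thread.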
