OpenGL advise on streaming data from CPU-GPU

This topic is 3290 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Hi, I'm fairly new to OpenGL programming, and new to this forum. This is my first real project as an OpenGL programmer, so it should be a good way to learn a lot :) I'd like to present my problem here and ask for some starting information and some possible/efficient ways to do this. I'd like to stick to pure OpenGL with minimum pixel shader 3.0 support.

I have several threads which generate data on the CPU. The data comes as small structs, each containing an X and Y image coordinate, a 3-component floating-point XYZ colour, a floating-point alpha value, an integer normalization factor, and a buffer ID. I'll refer to this struct as a pixelpacket. These are generated at approximately 100,000 pixelpackets per second and need to be fed to the GPU, where a fragment shader will splat/reconstruct them onto an HDR image texture using a gaussian filter. I'll call this HDR image texture a target. I can have multiple targets, and each pixelpacket must be splatted to the correct target using its buffer ID. The targets then need to be mixed/scaled together and tonemapped via a fragment shader for display.

I'd like everything to work properly in realtime. The target mixing/tonemapping is pretty straightforward, but I'm a bit unsure what would be the best way to implement the streaming of the packets and the splatting on the GPU. Please advise :) Thanks!

GL supports rendering certain types of primitives: GL_POINTS, GL_LINES, GL_TRIANGLES, and some others which are basically GL_TRIANGLES for all practical purposes.

Take your pick.

For sending your vertices/color and whatever, you would fill a VBO buffer.
http://www.opengl.org/wiki/index.php/VBO
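Assuming GL_POINTS are used, the packets could be flattened into an interleaved array before upload. A minimal sketch (the struct layout and names are my assumptions based on the original post):

```c
#include <stddef.h>

/* Hypothetical packet layout from the original post (names assumed). */
typedef struct { float x, y, r, g, b, alpha; int buffer_id; } pixelpacket;

/* Interleaved vertex as the VBO sees it: 2 floats position, then
 * 4 floats colour + alpha, i.e. 6 floats per point. */
enum { FLOATS_PER_POINT = 6 };

/* Pack n packets into an interleaved array; the result would then be
 * uploaded with something like:
 *   glBufferData(GL_ARRAY_BUFFER,
 *                n * FLOATS_PER_POINT * sizeof(float),
 *                out, GL_STREAM_DRAW);
 * Returns the number of floats written. */
size_t pack_points(const pixelpacket *in, size_t n, float *out) {
    for (size_t i = 0; i < n; ++i) {
        out[i * 6 + 0] = in[i].x;
        out[i * 6 + 1] = in[i].y;
        out[i * 6 + 2] = in[i].r;
        out[i * 6 + 3] = in[i].g;
        out[i * 6 + 4] = in[i].b;
        out[i * 6 + 5] = in[i].alpha;
    }
    return n * FLOATS_PER_POINT;
}
```

GL_STREAM_DRAW is the usual usage hint for a buffer that is refilled every frame like this.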

Quote:
I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket.


You could use MRT (multiple render targets), but then you have to render to all your buffers at the same time.
Or you could just render all your points for buffer 1 and, if the bufferID isn't the one you want, move the vertices offscreen in the vertex shader.
Then proceed to buffer 2.
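That second approach's routing logic, sketched in plain C for clarity; in a real implementation the comparison would live in the vertex shader, with the buffer ID passed as a per-vertex attribute and the current pass as a uniform (all names here are mine):

```c
typedef struct { float x, y; } vec2;

/* Decide where a point ends up for the current pass: points whose
 * buffer_id matches keep their position; everything else is pushed
 * outside clip space (|x| > 1) so the rasterizer discards it. */
vec2 route_point(vec2 pos, int packet_buffer_id, int current_pass_id) {
    if (packet_buffer_id != current_pass_id) {
        vec2 offscreen = { 2.0f, 2.0f };
        return offscreen;
    }
    return pos;
}
```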

Quote:
Original post by V-man
GL supports rendering certain types of primitives: GL_POINTS, GL_LINES, GL_TRIANGLES, and some others which are basically GL_TRIANGLES for all practical purposes.

Take your pick.

For sending your vertices/color and whatever, you would fill a VBO buffer.
http://www.opengl.org/wiki/index.php/VBO

Quote:
I can have multiple targets, and need to splat a pixelpacket to the correct target using the buffer ID in the pixelpacket.


You could use MRT (multiple render targets), but then you have to render to all your buffers at the same time.
Or you could just render all your points for buffer 1 and, if the bufferID isn't the one you want, move the vertices offscreen in the vertex shader.
Then proceed to buffer 2.


Hi,

Thanks for your reply, but I don't think drawing primitives would suit my application...

I need practical information on how to efficiently transport the data and then have it available in a fragment shader for splatting onto a framebuffer or equivalent...

The splatting is basically a fragment operation, which splats the pixels onto the image using a custom filter kernel.
All of this needs to stay 'custom' as it involves XYZ colour etc...

Basically, I need to build an efficient FIFO stream of these packets, from a thread in my CPU executable all the way to data available in a fragment shader.
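On the CPU side, a minimal single-producer/single-consumer ring buffer for handing packets from a worker thread to the rendering thread could look like this (a sketch: field names are assumed, and a real version needs atomics or a mutex around head/tail):

```c
#include <stddef.h>

typedef struct { float x, y; int buffer_id; } pixelpacket;

#define FIFO_CAP 1024 /* must be a power of two for the index mask */

typedef struct {
    pixelpacket slots[FIFO_CAP];
    size_t head; /* total packets ever pushed */
    size_t tail; /* total packets ever popped */
} packet_fifo;

/* Returns 0 if the FIFO is full, 1 on success. */
int fifo_push(packet_fifo *f, pixelpacket p) {
    if (f->head - f->tail == FIFO_CAP) return 0;
    f->slots[f->head & (FIFO_CAP - 1)] = p;
    f->head++;
    return 1;
}

/* Returns 0 if the FIFO is empty, 1 on success. */
int fifo_pop(packet_fifo *f, pixelpacket *out) {
    if (f->head == f->tail) return 0;
    *out = f->slots[f->tail & (FIFO_CAP - 1)];
    f->tail++;
    return 1;
}
```

Each frame, the render thread would drain the FIFO, pack the packets into a VBO (or a texture/uniform array), and issue the splat pass.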

Thanks

Quote:
I need practical information on how to efficiently transport the data, then have it available in a fragment shader for splatting onto a framebuffer or equiv...


I am not entirely certain what you are trying to do, but if I understand correctly, you would have to upload your data as an array of uniforms to the fragment shader (FS).
You would store the XY coordinates in that array.
You render a fullscreen quad.
In the FS, you sample the HDR texture based on the XY coordinates of the fragment, and based on how close the fragment is to one of those XY coordinates in the array, you apply a filter.

It very much depends on what you are trying to achieve.
I suggest studying the graphics pipeline and getting to know shaders (GLSL or Cg).
Get to know FBOs:
http://www.opengl.org/wiki/index.php/GL_EXT_framebuffer_object

Get to know PBO
http://www.opengl.org/documentation/specs/

Hi,

Thanks for that, I've started learning about FBOs...
I'll ask more questions if I have any along the way.

Thanks!


