

Member Since 16 Mar 2012
Offline Last Active May 10 2015 12:22 PM

Topics I've Started

3D Texture upload fails

08 May 2015 - 03:30 PM

Hello guys,
I am developing a direct volume renderer and I'm having trouble with the 3D texture upload. I have a CT scan with 421 slices; each slice is 512x512 with a single channel containing an unsigned short value. This is how I upload the data at the moment:

glBindTexture(GL_TEXTURE_3D, _volumeTextureID);

glTexImage3D(GL_TEXTURE_3D, 0, GL_R16F, 512, 512, 421, 0, GL_RED, GL_UNSIGNED_SHORT, _volumeData);

The problem shows up in the fragment shader: the values sampled from volumeTexture do not span the range [0, 1]. I do raycasting through a cube that contains my 3D texture. As you can see in the for loop, I check whether the sampled value of volumeTexture is at least 0.1 and, if so, set the output color to red. Unfortunately I don't see anything red in the cube, so all the values must be below 0.1; when I test against thresholds below 0.1, I can already see the lung, spine, etc.

#version 430

out vec4 color;

uniform sampler2D backTex;
uniform sampler2D frontTex;
uniform sampler3D volumeTexture;

smooth in vec4 position;

uniform int width;
uniform int height;
uniform uint samplingSteps;

void main(void){
  vec2 texCoor = vec2((gl_FragCoord.x-0.5)/width, (gl_FragCoord.y-0.5)/height);
  vec3 front = texture(frontTex, texCoor).xyz;
  vec3 back = texture(backTex, texCoor).xyz;
  float length = length(back - front);
  vec3 dir = normalize(back - front);
  vec3 step = dir * 1/samplingSteps;
  vec4 pos = vec4(front, 0);
  float volValue = 0;
  vec4 src = vec4(0);
  vec4 finalColor = vec4(1);
  float max = 0;
  for(int i = 0; i < samplingSteps*length; i++){
    volValue = texture(volumeTexture, pos.xyz).r;
    if(volValue >= 0.1){
      finalColor = vec4(0.5, 0, 0, 0.5);
    }
    pos.xyz += step;
    if(pos.x > 1 || pos.y > 1 || pos.z > 1){
      break;
    }
  }
  color = finalColor;
}

What is happening here? Why do all the values in my volume texture end up in [0, 0.1] instead of spanning [0, 1]?
Best regards,

Raytracing via compute shader

14 March 2013 - 03:15 PM

I am trying to do some raytracing on the GPU via the compute shader in OpenGL, and I came across some very strange behaviour.

For every pixel on the screen I launch one compute shader invocation, and this is what the compute shader looks like:



#version 430

struct Camera{
    vec4    pos, dir, up, xAxis;
    float   focalLength;
    float   pW, pH;
};

struct Sphere{
    vec4    position;
    float   radius;
};

struct Ray{
    vec3    origin;
    vec3    dir;
};

uniform Camera      camera;
uniform uint        width;
uniform uint        height;

uniform image2D outputTexture;

float hitSphere(Ray r, Sphere s){
    float s_ov = dot(r.origin, r.dir);
    float s_mv = dot(s.position.xyz, r.dir);
    float s_mm = dot(s.position.xyz, s.position.xyz);
    float s_mo = dot(s.position.xyz, r.origin);
    float s_oo = dot(r.origin, r.origin);
    float d = s_ov*s_ov-2.0f*s_ov*s_mv+s_mv*s_mv-s_mm+2.0f*s_mo*s_oo+s.radius*s.radius;
    if(d < 0){
        return -1.0f;
    } else if(d == 0){
        return (s_mv-s_ov);
    } else {
        float t1 = 0, t2 = 0;
        t1 = s_mv-s_ov;
        t2 = (t1-sqrt(d));
        t1 = (t1+sqrt(d));
        return t1>t2? t2 : t1;
    }
}

Ray initRay(uint x, uint y, Camera cam){
    Ray ray;
    ray.origin = cam.pos.xyz;
    ray.dir = cam.dir.xyz * cam.focalLength + vec3(1, 0, 0)*(float(x-(width/2)))*cam.pW
                              + cam.up.xyz * (float(y-(height/2))*cam.pH);
    ray.dir = normalize(ray.dir);
    return ray;
}

layout (local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
void main(){
    uint x = gl_GlobalInvocationID.x;
    uint y = gl_GlobalInvocationID.y;
    if(x < 1024 && y < 768){
        float t = 0.0f;

        Ray r = initRay(x, y, camera);
        Sphere sp = {vec4(0.0f, 0.0f, 20.0f, 0.0f), 2.0f};

        t = hitSphere(r, sp);
        if(t <= -0.001f){
            imageStore(outputTexture, ivec2(x, y), vec4(0.0, 0.0, 1.0, 1.0));
        } else {
            imageStore(outputTexture, ivec2(x, y), vec4(0.0, 1.0, 0.0, 1.0));
        }
    }
}
Rendering on the GPU yields the following broken image:


Rendering on the CPU with the same algorithm yields this image:


I can't figure out the problem, since I simply copied the hitSphere() and initRay() functions from my CPU raytracer into the compute shader. At first I thought I hadn't dispatched enough work groups, but then the background wouldn't be blue, so that can't be the case. This is how I dispatch my compute shader:

#define WORK_GROUP_SIZE 16
//width = 1024, height = 768
void OpenGLRaytracer::renderScene(int width, int height){
    glDispatchCompute(width/WORK_GROUP_SIZE, height/WORK_GROUP_SIZE, 1);
}


Then I changed the position of the sphere in x direction to the right:


In y direction to the top:


And in both directions (right and top):


When I move the position far enough in both directions to the left and the bottom, the sphere disappears entirely. It seems that the calculations on the GPU only work correctly in one quarter of the image (top right) and yield incorrect results in the other three quarters.


I am totally clueless at the moment and don't even know how to start fixing this.

Camera for Raytracing

13 March 2013 - 10:48 AM



I am working on a raytracer at the moment and have come across some issues with the camera model. I just can't seem to get the calculation of my rays' direction vectors right.


Let's say we are given an image resolution of resX x resY, a camera position pos, the up vector, the direction vector dir (the direction in which the camera is looking), and a horizontal and vertical field of view fovX and fovY. With these values I can calculate the focal length and thus the width of my pixels.



But when I do the same for the height, I get a different focal length.



There must be something I am missing, because the focal length determines the distance between the camera position and the image plane and should therefore be unique.

But let's suppose I have one unique focal length; then calculating the direction of a ray for the screen coordinates (x, y) should be done with this formula.


I adjust the direction vector so that it always points through the center of a pixel.


Unfortunately, applying this to my raytracer yields no results at all.

Texture as background for framebuffer

12 October 2012 - 02:53 PM


I am curious whether there is an easy way to use a texture as the background of the framebuffer that my 3D scene is rendered into.
To be more specific:

I want to take the frames from my webcam and draw my 3D scene (which consists of particles in black space) on top of them. I have experimented with framebuffer objects and render-to-texture techniques. At the moment I create a framebuffer object and attach two textures to it: I render my 3D scene into one texture and copy the webcam frame into the other. I thought I could do something with glBlitFramebuffer(), but unfortunately it just copies one texture onto the other.

I thought I could somehow work with stencil buffers, because I just need to punch out the black space of my 3D particle scene and draw it over my webcam frame, but I couldn't find any helpful resources on this topic so far.

Thanks for any help in advance! :)

Compute Shader Invocations

04 October 2012 - 09:58 AM


I am new to OpenGL and currently working on a particle system that makes use of the compute shader. I've got two questions. The first is about the compute shader itself. I create the particles and store them in a shader storage buffer so I can access their positions in the compute shader. Now I want to create one thread per particle which computes its new position, so I dispatch a one-dimensional work group:
#define WORK_GROUP_SIZE 128
glDispatchCompute((_numParticles/WORK_GROUP_SIZE), 1, 1);
Compute shader:
#version 430
struct particle{
    vec4 currentPos;
    vec4 oldPos;
};

layout(std430, binding=0) buffer particles{
    struct particle p[];
};

layout (local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main(){
    uint gid = gl_GlobalInvocationID.x;

    p[gid].currentPos.x += 100;
}

But somehow not all particles are affected. I am doing this the same way it was done in this example but it doesn't work.

When I want to render 128,000 particles, the code above dispatches 128,000/128 = 1,000 one-dimensional work groups, each of size 128. Doesn't that create 128 * 1,000 = 128,000 threads executing the compute shader above, so that all particles are affected? Each thread would have a different ID in gl_GlobalInvocationID.x because all work groups are one-dimensional. Am I missing something?

My other question relates to glDrawArrays().
The vertex shader receives all the vertices from the shader storage buffer and passes them through to the geometry shader, where I emit 4 vertices to create a quad onto which I map my texture in the fragment shader. The structure stored in the shader storage buffer for every particle looks like this:
struct Particle{
    glm::vec4 _currPosition;
    glm::vec4 _prevPosition;
};
When I draw the scene I do the following:
glBindBuffer(GL_ARRAY_BUFFER, BufferID);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(glm::vec4), 0);
glDrawArrays(GL_POINTS, 0, _numParticles*2);
glBindBuffer(GL_ARRAY_BUFFER, 0);

Somehow, when I just call glDrawArrays(GL_POINTS, 0, _numParticles), not all particles are rendered. Why does this happen?
I suspect the number of vec4 vectors in the particle struct is the reason, but I am not sure. Could somebody explain it, please?