StanLee

Member Since 16 Mar 2012
Offline Last Active Mar 15 2013 06:22 AM

Topics I've Started

Raytracing via compute shader

14 March 2013 - 03:15 PM

I am trying to do some raytracing on the GPU via the compute shader in OpenGL and I came across a very strange behaviour.

For every pixel on the screen I launch a compute shader invocation, and this is what the compute shader looks like:

#version 430

struct Camera{
    vec4    pos, dir, up, xAxis ;
    float   focalLength;
    float   pW, pH;
};

struct Sphere{
    vec4    position;
    float   radius;
};

struct Ray{
    vec3    origin;
    vec3    dir;
};

uniform Camera      camera;
uniform uint        width;
uniform uint        height;

writeonly uniform image2D outputTexture;

float hitSphere(Ray r, Sphere s){
    
    float s_ov = dot(r.origin, r.dir);
    float s_mv = dot(s.position.xyz, r.dir);
    float s_mm = dot(s.position.xyz, s.position.xyz);
    float s_mo = dot(s.position.xyz, r.origin);
    float s_oo = dot(r.origin, r.origin);
    
    float d = s_ov*s_ov-2.0f*s_ov*s_mv+s_mv*s_mv-s_mm+2.0f*s_mo*s_oo+s.radius*s.radius;
    
    if(d < 0){
        return -1.0f;
    } else if(d == 0){
        return (s_mv-s_ov);
    } else {
        float t1 = 0, t2 = 0;
        t1 = s_mv-s_ov;
        
        t2 = (t1-sqrt(d));
        t1 = (t1+sqrt(d));
        
        return t1>t2? t2 : t1 ;
    }
}

Ray initRay(uint x, uint y, Camera cam){
    Ray ray;
    ray.origin = cam.pos.xyz;
    
    ray.dir = cam.dir.xyz * cam.focalLength + vec3(1, 0, 0)*( float(x-(width/2)))*cam.pW
                              + cam.up.xyz * (float(y-(height/2))*cam.pH);
                              
    ray.dir = normalize(ray.dir);
                              
    return ray;
}

layout (local_size_x = 16, local_size_y = 16, local_size_z = 1) in;
void main(){
    uint x = gl_GlobalInvocationID.x;
    uint y = gl_GlobalInvocationID.y;
    
    if(x < 1024 && y < 768){
        float t = 0.0f;

        Ray r = initRay(x, y, camera);
        
        Sphere sp ={vec4(0.0f, 0.0f, 20.0f, 0.0f), 2.0f};

        t = hitSphere(r, sp);
        
        if(t <= -0.001f){
            imageStore(outputTexture, ivec2(x, y), vec4(0.0, 0.0, 1.0, 1.0));
        } else {
            imageStore(outputTexture, ivec2(x, y), vec4(0.0, 1.0, 0.0, 1.0));
        }
        
    }
}
 

Rendering on the GPU yields the following broken image:

quarter.png

Rendering on the CPU with the same algorithm yields this image:

normal.png

I can't figure out the problem, since I just copied and pasted the "hitSphere()" and "initRay()" functions into my compute shader. First I thought I hadn't dispatched enough work groups, but then the background wouldn't be blue, so that can't be the case. This is how I dispatch my compute shader:

#define WORK_GROUP_SIZE 16
//width = 1024, height = 768
void OpenGLRaytracer::renderScene(int width, int height){
    glUseProgram(_progID);

    glDispatchCompute(width/WORK_GROUP_SIZE, height/WORK_GROUP_SIZE,1);

    glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);
}
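
Just to double-check the coverage, here is the dispatch arithmetic spelled out (a quick sanity check; it only restates the 16x16 local size and the if(x < 1024 && y < 768) guard from the shader above):

#include <cstdio>

int main(){
    const int workGroupSize = 16;              // local_size_x/y in the shader
    const int groupsX = 1024 / workGroupSize;  // 64 work groups in x
    const int groupsY = 768  / workGroupSize;  // 48 work groups in y
    // 64*16 x 48*16 = 1024 x 768 invocations, i.e. exactly one per pixel.
    printf("%d x %d groups -> %d x %d invocations\n",
           groupsX, groupsY, groupsX * workGroupSize, groupsY * workGroupSize);
    return 0;
}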

Then I changed the position of the sphere in x direction to the right:

half1.png

In y direction to the top:

half2.png

And in both directions (right and top):

full.png

When I change the position far enough in both directions to the left and to the bottom, the sphere disappears entirely. It seems that all the calculations on the GPU only work correctly in one quarter of the image (top right) and yield incorrect results in the other three quarters.

 

I am totally clueless at the moment and don't even know how to start fixing this.


Camera for Raytracing

13 March 2013 - 10:48 AM

Hello,

 

I am working on a raytracer at the moment and have come across some issues with the camera model. I just can't seem to get the calculation of the direction vectors of my rays right.

 

Let's say we are given an image resolution of resX x resY, the position pos of the camera, the up vector, the direction vector dir (the direction in which the camera is looking), and a horizontal and vertical field of view, fovX and fovY. With the help of these values I can calculate the focal length and thus the width of my pixels.

fovx.jpg

 

But when I do the same for the height, I get a different focal length.

fovy.jpg
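
Written out (assuming the standard pinhole relations, with f the focal length and pW, pH the width and height of a pixel on the image plane), the two calculations are roughly:

tan(fovX/2) = (resX/2 * pW) / f   =>   f = (resX/2 * pW) / tan(fovX/2)
tan(fovY/2) = (resY/2 * pH) / f   =>   f = (resY/2 * pH) / tan(fovY/2)

and the two values for f only agree when pW, pH, fovX and fovY are chosen consistently with each other.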

 

There is something I must be missing because the focal length determines the distance between my camera position and the image plane and thus must be unique.

But let's suppose I have one unique focal length; then the direction of a ray for the screen coordinates (x, y) should be calculated with this formula.

formel.jpg

I adjust the direction vector so that it always points through the center of a pixel.
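
In code, my understanding of the formula is roughly the following (a sketch using glm; xAxis is the camera's right vector, i.e. cross(dir, up), and the +0.5 offsets move the ray to the pixel center):

#include <glm/glm.hpp>

// Sketch of the ray direction for pixel (x, y). dir, up and xAxis are assumed to be
// normalized and mutually orthogonal, f is the focal length, pW/pH the pixel size.
glm::vec3 rayDirection(int x, int y, int resX, int resY,
                       const glm::vec3& dir, const glm::vec3& up, const glm::vec3& xAxis,
                       float f, float pW, float pH)
{
    glm::vec3 d = dir * f
                + xAxis * ((float(x) - resX / 2.0f + 0.5f) * pW)   // +0.5: pixel center
                + up    * ((float(y) - resY / 2.0f + 0.5f) * pH);
    return glm::normalize(d);
}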

 

Unfortunately, applying this to my raytracer yields no results at all. :(


Texture as background for framebuffer

12 October 2012 - 02:53 PM

Hello,

I am curious whether it is possible, in an easy way, to set a texture as the background of the framebuffer that the 3D scene is rendered into.
To be more specific:

I want to take the frames of my webcam and draw my 3D scene (which consists of particles in a black space) on top of them. I experimented with framebuffer objects and render-to-texture techniques. At the moment I create a framebuffer object and attach two textures to it. On one texture I render my 3D scene, while I copy the frame of my webcam to the other texture. I thought I could do something with glBlitFramebuffer(), but unfortunately it just copies one texture to the other.
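
For reference, the setup described above looks roughly like this (a sketch; the function name and parameters are placeholders):

// One FBO with two color attachments: one for the 3D scene, one for the webcam frame.
GLuint createCompositingFBO(int width, int height, const void* webcamPixels,
                            GLuint& sceneTex, GLuint& webcamTex)
{
    glGenTextures(1, &sceneTex);
    glBindTexture(GL_TEXTURE_2D, sceneTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, 0);

    glGenTextures(1, &webcamTex);
    glBindTexture(GL_TEXTURE_2D, webcamTex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, webcamPixels);

    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, sceneTex, 0);  // 3D scene is rendered here
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, GL_TEXTURE_2D, webcamTex, 0); // webcam frame is copied here
    return fbo;
}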

I thought I could somehow work with stencil buffers, because I just need to punch out the black space of my 3D particle scene and draw it onto my webcam frame, but I couldn't find any helpful resources on this topic so far.

Thanks for any help in advance! :)

Compute Shader Invocations

04 October 2012 - 09:58 AM

Hello,

I am new to OpenGL and currently working on a particle system which makes use of the compute shader. I've got two questions. The first is about the compute shader itself. I create the particles and store them in a shader storage buffer so I can access their positions in the compute shader. Now I want to create a thread for every particle, which computes its new position. So I dispatch a one-dimensional set of work groups.
#define WORK_GROUP_SIZE 128
_shaderManager->useProgram("computeProg");
glDispatchCompute((_numParticles/WORK_GROUP_SIZE), 1, 1);
glMemoryBarrier(GL_SHADER_STORAGE_BARRIER_BIT);
Compute shader:
#version 430
struct particle{
    vec4 currentPos;
    vec4 oldPos;
};

layout(std430, binding=0) buffer particles{
    particle p[];
};

layout (local_size_x = 128, local_size_y = 1, local_size_z = 1) in;
void main(){
    uint gid = gl_GlobalInvocationID.x;

    p[gid].currentPos.x += 100;
}


But somehow not all particles are affected. I am doing this the same way it was done in this example but it doesn't work.
http://education.sig...eShader_6pp.pdf

When I want to render 128,000 particles, the code above would dispatch 128,000/128 = 1,000 one-dimensional work groups, each of size 128. Doesn't it thus create 128*1,000 = 128,000 threads which execute the code in the compute shader above, so that all particles are affected? Each thread would have a different ID in gl_GlobalInvocationID.x because all work groups are one-dimensional. Am I missing something?
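
Just to spell the arithmetic out (a quick check of the invocation count, nothing more):

#include <cstdio>

int main(){
    const int workGroupSize = 128;                           // matches local_size_x in the shader
    const int numParticles  = 128000;
    const int numGroups     = numParticles / workGroupSize;  // 1000 work groups
    const int invocations   = numGroups * workGroupSize;     // 128000 invocations
    // gl_GlobalInvocationID.x = gl_WorkGroupID.x * 128 + gl_LocalInvocationID.x,
    // so the IDs 0 .. 127999 should each occur exactly once.
    printf("%d groups -> %d invocations\n", numGroups, invocations);
    return 0;
}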

My other question relates to glDrawArrays().
The vertex shader receives all the vertices from the shader storage buffer and passes them through to the geometry shader, where I emit 4 vertices to create a quad on which I map my texture in the fragment shader. The structure which is stored in the shader storage buffer for every particle looks like this:
struct Particle{
    glm::vec4 _currPosition;
    glm::vec4 _prevPosition;
};
When I draw the scene I do the following:
glBindBuffer(GL_ARRAY_BUFFER, BufferID);
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, sizeof(glm::vec4), 0);
glEnableVertexAttribArray(0);
glEnableClientState(GL_VERTEX_ARRAY);
glDrawArrays(GL_POINTS, 0, _numParticles*2);
glDisableClientState(GL_VERTEX_ARRAY);
glBindBuffer(GL_ARRAY_BUFFER, 0);

Somehow when I just call glDrawArrays(GL_POINTS, 0, _numParticles) not all particles are rendered. Why does this happen?
I would guess that the number of vec4 vectors in the particle struct is the reason, but I am not sure. Could somebody explain it, please?
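
To make that suspicion concrete, this is the size relationship I have in mind (just a compile-time check; the stride passed to glVertexAttribPointer above is sizeof(glm::vec4)):

#include <glm/glm.hpp>

struct Particle{
    glm::vec4 _currPosition;
    glm::vec4 _prevPosition;
};

// Each particle occupies two vec4s (32 bytes), i.e. twice the stride used above.
static_assert(sizeof(Particle) == 2 * sizeof(glm::vec4), "two vec4s per particle");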

Regards,
StanLee

Reverse projection

28 March 2012 - 02:08 PM

Hello!

I am stuck on a math problem at the moment. I want to project 2D screen coordinates onto a plane in 3D view space which is parallel to the view plane and lies at a particular distance from it.
Let's suppose I have a 2D point P2(x, y) on my view plane. My resolution is 800x600, so my x ranges from 0 to 800 and my y from 0 to 600. I define a distance dz that the plane in 3D view space should have from the view plane. Now I want to determine the 3D point P3(x', y', z') in view space on that specific plane at distance dz from my view plane.

A projection matrix is given:
XMMATRIX projMatrix = XMMatrixPerspectiveFovLH(XM_PIDIV4, (float)(screenWidth) / (float)(screenHeight), 0.1f, 1000.0f);

The camera position vector camPos, the lookTo-vector and the up-vector are also given.

How do I determine P3 now? I tried to calculate the different coordinates like this:
z' = camPos + lookTo*dz
x' = z' + cross(lookTo, up)*x
y' = z' + up*y

The problem is that, due to perspective projection, the plane in view space is bigger than the view plane itself, and it gets bigger the larger the distance dz between them is. Thus, when there is a big distance between those planes, the projected region on the view plane is very small, and when I move my cursor to the top right corner to draw something, for example, it gets drawn somewhere in the middle of the screen.

I have to scale the x and y values somehow but I don't know how. Is there maybe another approach to solve this?
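
Would the scaling have to look something like this? (A sketch, untested; it assumes the symmetric frustum from the XMMatrixPerspectiveFovLH call above, i.e. a vertical FOV of pi/4 and an 800/600 aspect ratio, and uses glm just for the vector math.)

#include <glm/glm.hpp>
#include <cmath>

// Map a screen point to the plane at distance dz in front of the camera.
// Assumes screen y grows downwards and lookTo/up are normalized and orthogonal.
glm::vec3 screenToPlane(float x, float y, float dz,
                        const glm::vec3& camPos, const glm::vec3& lookTo, const glm::vec3& up)
{
    const float fovY   = 3.14159265f * 0.25f;      // XM_PIDIV4
    const float aspect = 800.0f / 600.0f;

    // Half extents of the visible rectangle on the plane at distance dz.
    const float halfH = dz * std::tan(fovY * 0.5f);
    const float halfW = halfH * aspect;

    // Screen coordinates mapped to [-1, 1].
    const float sx = (x / 800.0f) * 2.0f - 1.0f;
    const float sy = 1.0f - (y / 600.0f) * 2.0f;   // flip because screen y points down

    const glm::vec3 right = glm::cross(lookTo, up); // may need the opposite order, depending on handedness

    return camPos + lookTo * dz + right * (sx * halfW) + up * (sy * halfH);
}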
