serious_learner07

OpenGL Ray Tracing doubts


Recommended Posts

I have just started learning ray tracing and have a few doubts:
1. How many rays are cast from the camera position?
2. In one of the books, it is mentioned that for ray tracing we need to determine the incoming direct light at the point of intersection by casting rays to the light sources.
3. Can I implement ray tracing using OpenGL/Direct3D? If so, how can I implement it?
4. Finally, is ray tracing a recursive algorithm?
Thanks in advance.

1. At least one ray per pixel of the image you wish to render. Additional rays are typically used to implement anti-aliasing or effects such as depth of field or motion blur.
2. That's right: for each primary hit point P you cast shadow rays to the light sources and use them to determine whether light reaches P or not. Only if the light source is not occluded do you calculate its contribution to the color of P.
3. Ray tracing has been implemented on the GPU in pixel shaders. That requires you to encode primitives and the acceleration structure in textures. Google should give you plenty of results.
4. If you compute reflections/refractions/etc. through secondary rays, ray tracing is indeed recursive. For reflections you trace a ray starting at the point you're currently shading, along the reflected direction vector, and factor the color of the hit object into the shading (see the sketch below).
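For point 4, here is a minimal sketch of the reflected-direction computation; the Vec3 type and the reflect helper are illustrative assumptions rather than code from any particular library:

struct Vec3 { float x, y, z; };

// Reflect the incoming direction d about the surface normal n
// (both assumed to be unit length): r = d - 2 (d . n) n
Vec3 reflect(Vec3 d, Vec3 n)
{
    float k = 2.0f * (d.x * n.x + d.y * n.y + d.z * n.z);
    return { d.x - k * n.x, d.y - k * n.y, d.z - k * n.z };
}

The secondary ray then starts at (or slightly above) the hit point and travels along reflect(d, n); the color returned by tracing it is typically scaled by the surface's reflectivity before being added to the local shading.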

Quote:
Original post by serious_learner07
1. How many rays are cast from the camera position?

At least one ray is fired for each pixel in the rendered image. If more than one ray is used per pixel, a technique called "oversampling" is applied, but I would suggest postponing oversampling until the basics are clear. For the basic case, construct the rays so that they emanate from the camera's position and pass through the positions on the camera's viewing plane that correspond to the pixels of the output image.
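A minimal sketch of that construction, assuming a pinhole camera at the origin looking down the -z axis; the Vec3/Ray types and the makePrimaryRay name are illustrative assumptions:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// One primary ray for pixel (px, py) of a width x height image, for a
// pinhole camera at the origin looking down -z. fovRadians is the
// vertical field of view.
Ray makePrimaryRay(int px, int py, int width, int height, float fovRadians)
{
    float aspect = float(width) / float(height);
    float scale  = std::tan(fovRadians * 0.5f);   // half-height of the view plane at z = -1

    // Map the pixel center to the view plane (y flipped so row 0 is the top).
    float x = ((px + 0.5f) / width * 2.0f - 1.0f) * aspect * scale;
    float y = (1.0f - (py + 0.5f) / height * 2.0f) * scale;

    Vec3 dir = { x, y, -1.0f };
    float len = std::sqrt(dir.x * dir.x + dir.y * dir.y + dir.z * dir.z);
    return Ray{ { 0.0f, 0.0f, 0.0f }, { dir.x / len, dir.y / len, dir.z / len } };
}

Looping px over [0, width) and py over [0, height) then gives exactly one primary ray per output pixel.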

Quote:
Original post by serious_learner07
2. In one of the books, it is mentioned that for ray tracing we need to determine incoming direct light at the point of intersection by casting rays to the light sources.

Physically, light is emitted from the light sources, hits object surfaces, and is reflected, transmitted, or absorbed there. The portion of light that finally arrives at the camera (or the eye, if you want) defines what is seen. However, implementing a ray tracer that way would be an enormous waste of computation, so ray tracing goes the other way around: it starts at the camera; if a hit with a surface is detected, it looks at how that point is illuminated. One part of the illumination is directly incoming light. To determine that amount of light, check whether the hit point is directly visible from any light source. That can be done by firing a ray from the point to each light source and checking whether or not the ray intersects any surface on its way.
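A minimal sketch of that visibility test; the intersectScene interface and the small offset along the normal are illustrative assumptions, not part of any specific API:

#include <cmath>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

// Assumed scene interface (hypothetical): returns true if the ray hits any
// surface and writes the distance to the closest hit into tHit.
bool intersectScene(const Ray& ray, float& tHit);

// True if 'point' receives direct light from a point light at 'lightPos',
// i.e. the shadow ray toward the light is not blocked by any surface.
bool lightIsVisible(Vec3 point, Vec3 normal, Vec3 lightPos)
{
    Vec3 toLight = { lightPos.x - point.x, lightPos.y - point.y, lightPos.z - point.z };
    float dist = std::sqrt(toLight.x * toLight.x + toLight.y * toLight.y + toLight.z * toLight.z);
    Vec3 dir = { toLight.x / dist, toLight.y / dist, toLight.z / dist };

    // Offset the origin slightly along the normal so the shadow ray does not
    // re-hit the surface it starts on ("shadow acne").
    Ray shadowRay = { { point.x + 1e-4f * normal.x,
                        point.y + 1e-4f * normal.y,
                        point.z + 1e-4f * normal.z },
                      dir };

    float tHit;
    return !intersectScene(shadowRay, tHit) || tHit >= dist;
}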

Quote:
Original post by serious_learner07
3. Can I implement ray tracing using OpenGL/Direct3D? If so, how can I implement it?

OpenGL and D3D by themselves are scanline rasterizers; they don't work like a ray tracer. "Misusing" the GPU as a kind of co-processor is possible, but that is not directly related to OpenGL/D3D.

Quote:
Original post by serious_learner07
4. Finally, is ray tracing a recursive algorithm?

Not necessarily, but usually. You can ray-trace a scene and determine each pixel of the output image from just the result of the first hit. The very first ray tracers were made that way, but then you can take only ambient illumination into account. Okay, ways exist to fake some direct illumination, and indirect illumination can be faked too, but IMHO that veils the fact that this illumination was already computed earlier. Also, transparency (at least if refraction is included) is done with recursive rays. You can tell from my "no ... yes ... but ..." that there are several exceptions, but ray tracers with branching rays are recursive by nature, and hence well suited for a recursive implementation.


Thanks for the post. I am really grateful to you people. To further sharpen my knowledge, I have the following questions.

Quote:
Original post by Stereo

2. That's right: for each primary hit point P you cast shadow rays to the light sources and use them to determine whether light reaches P or not. Only if the light source is not occluded do you calculate its contribution to the color of P.

What is a shadow ray? How do we generate the shadow of an object illuminated by the light source?

Also, should our ray tracing application know the number of lights in the scene in advance? How should we do this?

Quote:
Original post by Stereo
4. If you compute reflections/refractions/etc. through secondary rays, ray tracing is indeed recursive. For reflections you trace a ray starting at the point you're currently shading, along the reflected direction vector, and factor the color of the hit object into the shading.


In this case, how will we terminate the recursive calls?

Quote:
What is a shadow ray? How do we generate the shadow of an object illuminated by the light source?


A shadow ray is a ray whose origin is at the hit point of the primary (or reflected/refracted) ray and whose direction points toward one of the lights (you go through all of them in a loop). If the shadow ray hits something before reaching the light, the point is in shadow; if not, it is illuminated by that light.

Quote:
Also, should our ray tracing application know the number of lights in the scene in advance? How should we do this?


Yes, our application has to know the number of lights in the scene. How should we do this? We simply keep a count (or a list) of the lights, adding to it each time we create a light.
Then, when a ray hits some primitive, we just loop through the lights and do the shading calculations.
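A minimal sketch of that bookkeeping (all names here are illustrative assumptions): keep the lights in a container on the scene and loop over it when shading a hit point:

#include <vector>

struct Vec3  { float x, y, z; };
struct Light { Vec3 position; Vec3 color; };

struct Scene {
    std::vector<Light> lights;   // the light count is simply lights.size()
    // ... primitives, materials, etc.
};

// Assumed helpers (hypothetical): the shadow-ray visibility test sketched
// earlier in the thread, and a local shading model (e.g. Lambert/Phong)
// evaluated for a single light.
bool lightIsVisible(Vec3 point, Vec3 normal, Vec3 lightPos);
Vec3 shadeOneLight(Vec3 point, Vec3 normal, const Light& light);

Vec3 directLighting(const Scene& scene, Vec3 point, Vec3 normal)
{
    Vec3 result = { 0.0f, 0.0f, 0.0f };
    for (const Light& light : scene.lights) {
        if (lightIsVisible(point, normal, light.position)) {
            Vec3 c = shadeOneLight(point, normal, light);
            result.x += c.x;
            result.y += c.y;
            result.z += c.z;
        }
    }
    return result;
}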

Quote:
In this case, how will we terminate the recursive calls?


We terminate when the recursion depth reaches some value that we define (I personally use a maximum of 5; even smaller is acceptable). Imagine this: your first reflection bounce might cover, say, 1/4 of the total screen pixels; on the second bounce only about 1/4 of those pixels land on reflecting objects again, i.e. 1/16 of the total screen pixels; going one bounce further gives roughly 1/64, and even that is a very small number of pixels.
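In code this is usually just a depth parameter checked at the top of the trace function; a minimal sketch, with the Ray/Scene types left as assumed placeholders:

struct Vec3  { float x, y, z; };
struct Ray;     // assumed ray type
struct Scene;   // assumed scene type

const int MAX_DEPTH = 5;   // the cutoff discussed above

Vec3 trace(const Scene& scene, const Ray& ray, int depth)
{
    if (depth > MAX_DEPTH)
        return Vec3{ 0.0f, 0.0f, 0.0f };   // contribute black, stop recursing

    // ... intersect the scene and shade the hit point here ...
    // For a reflective surface, recurse one level deeper, e.g.:
    //     Vec3 reflected = trace(scene, reflectedRay, depth + 1);
    // Primary rays start with depth = 0.
    return Vec3{ 0.0f, 0.0f, 0.0f };       // placeholder for the shaded color
}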

Quote:
In this case, how will we terminate the recursive calls?


Usually you set a max depth value (a number between 4 and 8 is enough most of the time), and when you reach that depth you stop. For example, a sequence can be the following:

Depth = 5

- You shoot the first ray from the camera.
- First hit. The surface features specular reflection. You shoot a second-generation ray.
- Second hit. The ray bounced on a transparent surface. Trace the transmission ray.
- Third hit, on the backside of the transmissive object. Light exits the material and continues its trip with a fourth-generation ray.
- Fourth hit, on another mirror. Trace the fifth ray.
- Yet another mirror. But now the depth is 5 and you cannot go further. You set the contribution of the last ray to black.

As you can see, a typical scene usually won't have all those mirrors/glasses, so 5 is more than enough.

In addition, unless your materials are 100% reflective or transmissive, the new ray's contribution is scaled at each bounce. So if the first mirror had 50% reflectivity and the two transmissive surfaces had 50% transparency each, only about 12.5% of the fourth ray's contribution would be taken into account. You may use this value to set a threshold.

This should work most of the time. However, if the fourth ray hits a powerful light (or a surface lit by a powerful light), it may still contribute substantially to the final pixel color, and using that threshold trick may then lead to quality loss.
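A minimal sketch of such a combined test; the names and the 0.01 cutoff are illustrative assumptions, not values from this thread:

// Stop recursing when either the depth limit is reached or the accumulated
// attenuation ('throughput') of the ray has become negligible.
bool shouldTerminate(int depth, float throughput)
{
    const int   maxDepth      = 5;      // 4-8 is usually enough
    const float minThroughput = 0.01f;  // the threshold discussed above

    return depth >= maxDepth || throughput < minThroughput;
}

// When spawning a secondary ray, the caller scales the throughput, e.g.
//     float childThroughput = throughput * material.reflectivity;
// so a 50% mirror followed by two 50% transparent surfaces leaves
// 0.5 * 0.5 * 0.5 = 0.125 of the original weight.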

Quote:
Original post by serious_learner07
4. Finally, is ray tracing a recursive algorithm?


You can do it in iterative form just for the sake of it. I have an example of such an iterative form in my online tutorial:

Raytracer in c++, iterative.

Of course it's not really optimized, as I stated many times in the article; you can probably make it perform better (without necessarily resorting to recursive function calls).

If you had to implement a ray tracer in a language/platform that does not allow recursive function calls, you would, for example, have to manage a pseudo-stack by hand (not necessarily first-in-first-out, by the way).
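A minimal sketch of that pseudo-stack idea (this is not the code from the linked tutorial; the task structure, the weights, and the shadeAndSpawn helper are illustrative assumptions):

#include <vector>

struct Vec3 { float x, y, z; };
struct Ray  { Vec3 origin, dir; };

struct RayTask {
    Ray   ray;
    int   depth;    // generation of the ray
    float weight;   // how much it contributes to the final pixel color
};

// Assumed helper (hypothetical): intersects and shades one ray; if the hit
// surface is reflective/refractive it pushes the spawned secondary rays onto
// 'tasks', with their weights scaled by the parent's weight and the material.
Vec3 shadeAndSpawn(const RayTask& task, std::vector<RayTask>& tasks);

Vec3 tracePixelIterative(const Ray& primaryRay)
{
    Vec3 color = { 0.0f, 0.0f, 0.0f };
    std::vector<RayTask> tasks;                 // the hand-managed pseudo-stack
    tasks.push_back({ primaryRay, 0, 1.0f });

    while (!tasks.empty()) {
        RayTask task = tasks.back();
        tasks.pop_back();
        if (task.depth > 5) continue;           // same depth cutoff as before

        Vec3 c = shadeAndSpawn(task, tasks);
        color.x += task.weight * c.x;
        color.y += task.weight * c.y;
        color.z += task.weight * c.z;
    }
    return color;
}

Whether tasks are taken from the back or the front only changes the traversal order, not the final pixel color, which is why the processing order does not have to follow any particular queue/stack discipline.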

LeGreg
