OpenGL Fragment Shader problems
smitty1276

I'm working on a project that involves implementing a simple raytracer in a fragment shader. If I cast just one ray per fragment, everything works fine. The trouble starts when I try to uniformly sample an area: for each fragment, I want to uniformly sample the surface of a lens with 100 rays. The code looks roughly like this...
Ray r;
vec4 result = vec4(0.0, 0.0, 0.0, 0.0);
for (int i = 0; i < 10; i++)
{
    for (int j = 0; j < 10; j++)
    {
        r.o = fragPosition;

        // calc the direction of the ray as a function of i and j...
        r.d = ...calc...

        r = traceThroughLensStack(r);

        // Trace the ray from the surface of the outer lens into the scene
        vec4 col = traceIntoScene(r);

        // If the ray makes it through the lens system,
        // add some contribution to this fragment
        result = result + r.d;
    }
}

This seems simple enough... I cast a hundred rays, but I reuse the same variables for all of them. GLSL doesn't like it, though: it complains about "too many temporaries", and only after churning away for quite a while, assuming it doesn't lock up my machine entirely. Does anyone know why it does this? Is it possible to avoid? Everything works fine if I remove the for-loops and cast only one ray. (Unfortunately, the $60 OpenGL Shading Language book doesn't address the ramifications of loops, or lots of other things; it focuses instead on basic syntax and examples that can be found in a zillion places already.)

Well, that vec4 actually is needed to accumulate the radiant power reaching the fragment on each pass. It's a situation like...

result += traceRay(r);

...so it shouldn't matter too much. I'm thinking that the compiler "unrolled" the entire loop and created new variables for the whole body on every iteration. If that's the case, I'm somewhat annoyed.
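To illustrate what I'm picturing, here's a rough sketch of what a naive full unroll of my loop might emit (hypothetical temporaries, just to show the shape of it)...

// Illustration only: the first two iterations written out the way a
// naive unroll might emit them, with fresh temporaries per iteration.
r.o = fragPosition;
r.d = vec3(0.0);                      // direction for i = 0, j = 0
Ray  r0   = traceThroughLensStack(r);
vec4 col0 = traceIntoScene(r0);
result = result + col0;

r.o = fragPosition;
r.d = vec3(0.0);                      // direction for i = 0, j = 1
Ray  r1   = traceThroughLensStack(r);
vec4 col1 = traceIntoScene(r1);
result = result + col1;
// ...and 98 more copies, each burning its own temporary registers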

That does seem rather bizarre. Maybe you should look into writing this shader in ASM? It would give you more control over when and where variables are created. It does seem rather slack if the compiler created a ton of variables when it unrolled the loop.

Which hardware are you targeting, and which version of the drivers?

Unless you're targeting SM 3.0 hardware (Radeon X1600+, NVIDIA 6600+), long fragment shader loops won't work.

I would still declare 'col' at the top -- not to mention, you don't actually do anything with it in the code you've posted :-)
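For instance, here's roughly what I mean -- a minimal compilable sketch, with stub functions standing in for your lens/scene code and a made-up direction calculation, just to show the structure:

struct Ray { vec3 o; vec3 d; };

// Stubs standing in for the actual lens and scene tracing functions.
Ray  traceThroughLensStack(Ray r) { return r; }
vec4 traceIntoScene(Ray r)        { return vec4(r.d, 1.0); }

varying vec3 fragPosition;

void main()
{
    Ray  r;
    vec4 col;                       // declared once, outside both loops
    vec4 result = vec4(0.0);

    for (int i = 0; i < 10; i++)
    {
        for (int j = 0; j < 10; j++)
        {
            r.o = fragPosition;
            // placeholder direction as a function of i and j
            r.d = normalize(vec3(float(i), float(j), 10.0));

            r   = traceThroughLensStack(r);
            col = traceIntoScene(r);
            result += col;          // accumulate this sample's contribution
        }
    }

    gl_FragColor = result / 100.0;  // average the 100 samples
}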

hplus, the code I posted was just a rough sketch to get the gist across. That said, the example should read "result += col;", and you're right, I could try moving that declaration outside of the loop.

The hardware I'm developing on is a GeForce 6600GT, though I was planning on giving it a run on a 7800 at work. I wish I had time to do it in ASM, but it's a fairly long program, and I don't know the assembly well enough to do it in the time available.

So am I correct in thinking that it *shouldn't* use any more temp registers than a single pass through the loop? I'm also going to try it on an ATI card at work, so I'll see if that makes a difference.

Thanks for the input, guys.


The shader compilers are pretty good optimizing compilers; I can't see why one would do something as wasteful as creating tons of copies of the same variables.

Well, I'm able to do up to four iterations of the loop. Any more than that and it craps out on me.

I have a new problem now, though. I added one small new function, and now the compiler reports an error: "Too many instructions".

I was led to believe that there was virtually no limit on program length in pixel shader model 3.0. Is that wrong, or is the GF6600GT not sufficient for it?

"With Pixel Shader 3.0 that instruction count has been increased to 65,535 and in the GeForce 6800Ultra the instruction count is unlimited."

Hmmm... well, it complains about "too many instructions" at only a few thousand. It must be unrolling the loop (which I thought newer shader models would NOT do), and it must be compiling to an older profile.

Is there a way to tell GLSL to compile to the most current shader version?

Place #version 110 at the top of the shader file to explicitly tell the compiler which version of GLSL to compile for (i.e., GLSL 1.10).

As an aside, if you are using recursive calls, be aware that recursion is not defined behaviour in GLSL and will cause all kinds of trouble.
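For example, as the very first line of the shader source (only comments and whitespace may precede the directive):

#version 110
// ...the rest of the shader follows the version directive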

Oh! Thanks, I'll try that when I get home. Is the GLSL version tied to the shader model used, or is that purely a hardware issue? In other words, is it possible that my problem could be solved by compiling to a specific version, or is it already targeting my hardware?

Sorry for all of the questions. :-)

I honestly can't tell you what the behaviour is, though the NVIDIA developer site may have information directly related to this.

However, I suspect the compiler comes bundled with your driver, so it may very well be optimized for your hardware. I've always assumed that's why shaders are compiled at runtime (for now).

Pre-compiled shaders won't be supported until the next version of OpenGL, according to material from the GDC 06 conference.

Can you use the Cg runtime to load shaders written in GLSL but compiled with cgc? I noticed that NVIDIA uses the cgc compiler to compile GLSL anyway.
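I was picturing something like this on the command line -- the flags are from memory and the file names are made up, so treat it as a sketch:

cgc -oglsl -profile fp40 -o lens.fp lens.glsl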

If they use the same shader/uniform data input model, I can't see why not.

As far as I recall, the driver I have (77.77) uses the 3DLabs compiler.
