FreOzgur

Member
  • Content count

    13

Community Reputation

1098 Excellent

About FreOzgur

  • Rank
    Member
  1. Problems Found in Appleseed Source Code

      (Not talking about *this* project..) This is true; however, this kind of null check might be necessary if they are using compiler flags that disable exceptions (e.g. -fno-exceptions). Some game engines choose to disable RTTI and exceptions because they implement their own mechanisms and don't want to spend resources on the built-in ones.
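      For illustration only (this is not code from Appleseed), here is a minimal sketch of what such a null check looks like when exceptions are off and the non-throwing form of new is used; the Texture/CreateTexture names are made up:

      #include <new>      // std::nothrow
      #include <cstdio>

      struct Texture { unsigned char pixels[4096]; };

      Texture* CreateTexture()
      {
          // With -fno-exceptions there is no std::bad_alloc to catch, so the
          // non-throwing form of new is used and the result must be checked.
          Texture* tex = new (std::nothrow) Texture();
          if (tex == nullptr)
          {
              std::fprintf(stderr, "Texture allocation failed\n");
              return nullptr;   // the caller is expected to handle the null result
          }
          return tex;
      }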
  2. Does anyone know what the minimum NVIDIA graphics card model is that supports bindless textures? I have an NVIDIA 820M in my laptop; it supports OpenGL 4.5 Core, and yet it seems like it does not support bindless textures.

      glxinfo | grep "bindless" only returns the following two:
      - GL_NV_bindless_multi_draw_indirect
      - GL_NV_bindless_multi_draw_indirect_count

      I also checked using GLEW_NV_bindless_texture and GLEW_ARB_bindless_texture; they both returned false.

      It's quite surprising that my card supports most of the "modern" extensions but not bindless textures.
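      As a side note, one way to double-check at runtime, independently of the GLEW_* flags, is to walk the extension list reported by the driver. A minimal sketch, assuming a GL context is already current and glewInit() has run:

      #include <GL/glew.h>
      #include <cstdio>
      #include <cstring>

      // Returns true if the driver advertises the given extension string.
      static bool HasExtension(const char* name)
      {
          GLint count = 0;
          glGetIntegerv(GL_NUM_EXTENSIONS, &count);
          for (GLint i = 0; i < count; ++i)
          {
              const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
              if (ext != nullptr && std::strcmp(ext, name) == 0)
                  return true;
          }
          return false;
      }

      // Usage:
      //   std::printf("ARB_bindless_texture: %d\n", HasExtension("GL_ARB_bindless_texture"));
      //   std::printf("NV_bindless_texture:  %d\n", HasExtension("GL_NV_bindless_texture"));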
  3. Woah, thanks for the information! Yeah, I guess giving me no error about that was a bug.. I added the 'flat' qualifier to the 'uint' member in both shaders, and it worked as intended.
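      For reference, the change is just marking the integer member with the flat interpolation qualifier in both stages; a minimal sketch of the interface block, using the member names from the earlier posts:

      // Vertex shader
      out InterfBlock
      {
          vec4 WorldPos;
          vec3 WorldNormal;
          vec2 TexCoord[2];
          vec3 CamDir;
          flat uint DrawID;   // integer outputs cannot be interpolated, so 'flat' is required
      } OUT;

      // Fragment shader
      in InterfBlock
      {
          vec4 WorldPos;
          vec3 WorldNormal;
          vec2 TexCoord[2];
          vec3 CamDir;
          flat uint DrawID;   // must match the vertex-shader declaration
      } IN;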
  4. Update! Wow.. It seems like everything works perfectly if I simply use "float" instead of "uint" in the interface block. I just changed it to:

      // Vertex Shader
      out InterfBlock
      {
          vec4 WorldPos;
          vec3 WorldNormal;
          vec2 TexCoord[2];
          vec3 CamDir;
          float DrawID; // Change here
      } OUT;

      // Fragment Shader
      in InterfBlock
      {
          vec4 WorldPos;
          vec3 WorldNormal;
          vec2 TexCoord[2];
          vec3 CamDir;
          float DrawID; // Change here
      } IN;

      and then simply cast my DrawIDs to float:

      // Vertex shader
      OUT.DrawID = float(DrawIDs);

      // Fragment shader
      vec3 color = texture2DArray(colorTexture, vec3(IN.TexCoord[0], IN.DrawID)).xyz;

      Now everything works perfectly.. But what is the problem with "uint"??? Was that a bug?? C'mon drivers..
  5. Thanks for the reply! My buffer class is just a wrapper around OpenGL. I am filling the buffer like this:

      // My own buffer class. Just a simple wrapper, tested, working..
      unsigned int* buffer = (unsigned int*)drawids->MapRange(0, sizeof(unsigned int) * 6, BufferLock_Write);
      for (int i = 0; i < 6; ++i)
          buffer[i] = i;
      drawids->Unmap();

      And then I simply enable the attribute and set the pointer (I already use glVertexAttribIPointer()):

      glBindBuffer(GL_ARRAY_BUFFER, drawids->GetHandlerGL());
      glEnableVertexAttribArray(8);
      glVertexAttribIPointer(8, 1, GL_UNSIGNED_INT, 0, BUFFER_OFFSET(0));
      glVertexAttribDivisor(8, 1);

      Actually my attribute already works, I know it, because when I only use it in the vertex shader to fetch each instance's matrix (there are only two for testing), they successfully get their matrices. The problem is just that I cannot pass it to the fragment shader. If I don't try to pass it to the fragment shader, it works as intended.

      And by "it stops working", I mean it literally just stops working. No GLSL error is generated; the GLSL compile seems to fail, but no information is produced by glGetShaderInfoLog(), which returns an empty string with size 1. I know it failed since glGetShaderiv() with GL_COMPILE_STATUS returns GL_FALSE. Actual code for checking compile errors, if you're interested:

      static void TestCompileError(String file, GLuint handler)
      {
          GLint result = 0;
          glGetShaderiv(handler, GL_COMPILE_STATUS, &result);
          if (result == GL_FALSE)
          {
              GLint length;
              glGetShaderiv(handler, GL_INFO_LOG_LENGTH, &length);
              std::vector<char> buffer;
              buffer.resize(length);
              GLsizei final;
              glGetShaderInfoLog(handler, length, &final, &buffer[0]);
              std::string message(&buffer[0], length);
              std::cout << file.c_str() << ": " << length << ": " << message.c_str() << std::endl;
          }
      }

      I hope it is not a driver bug.
  6. Hello friends! I have a strange behaviour in my GLSL code.. I am trying to pass a simple 'uint' variable between stages (from the vertex shader to the fragment shader), but it seems like I'm doing something wrong. I have an interface block like this:

      // Vertex Shader
      out InterfBlock
      {
          vec4 WorldPos;
          vec3 WorldNormal;
          vec2 TexCoord[2];
          vec3 CamDir;
          uint DrawID;
      } OUT;

      // Fragment Shader
      in InterfBlock
      {
          vec4 WorldPos;
          vec3 WorldNormal;
          vec2 TexCoord[2];
          vec3 CamDir;
          uint DrawID; // This line is the problem; it works if I remove it.
      } IN;

      When I run my program, it simply generates no error. (There is already an error checker that uses glGetShaderInfoLog(), and it works when I simply forget to put a semicolon, so I know it works.) My GpuProgram class also reports the number of uniforms in my shaders, but it says 0 (zero), even though there are a few uniforms. The strange thing is that the shader works perfectly when I simply remove the "uint DrawID;" from the FRAGMENT SHADER (and it also shows the correct number of uniforms if I remove it). It doesn't matter if I keep it in the vertex shader; it works. But when I add the line "uint DrawID;" to the interface block of the fragment shader, it stops working. Any ideas?

      Edit:
      NVIDIA Driver Version: 331.113
      OS: Linux-x86_64
      OpenGL version from glxinfo: 3.3.0 NVIDIA 331.113
  7. O(pow(N,12))

    I'm truly amazed!
  8. Hello guys, I have a problem with using glMultiDrawElements. In a typical drawing loop, I would do something like this (pseudo-code):

      foreach obj in objects
      {
          SetOtherStuffs();
          SetModelMatrix(...);
          glDrawElements(...);
      }

      As you can see, I can set the model matrix of every model. But with glMultiDrawElements, how can I achieve something like this? How can I give a different matrix to every object? Is there a shader counter for draw calls (so I can grab their matrices from something like a uniform array)? If yes, how will this counter be increased if some of the calls are instanced when I use glMultiDrawElementsIndirect()?
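      One approach, which the other posts in this thread end up using, is to feed a per-draw index through an instanced integer vertex attribute and use it to look up that object's matrix. A minimal vertex-shader sketch, assuming the matrices are uploaded to a uniform array (the array size and names other than DrawIDs are illustrative):

      #version 330 core

      layout(location = 0) in vec3 Position;
      layout(location = 8) in uint DrawIDs;    // instanced integer attribute, advanced once per draw

      uniform mat4 ModelMatrices[64];          // one model matrix per object
      uniform mat4 ViewProjection;

      void main()
      {
          mat4 model = ModelMatrices[DrawIDs]; // pick this object's matrix by its draw index
          gl_Position = ViewProjection * (model * vec4(Position, 1.0));
      }

      For glMultiDraw*Indirect there is also the gl_DrawID / gl_DrawIDARB built-in from ARB_shader_draw_parameters (core in OpenGL 4.6), which increments once per draw command rather than per instance, but that extension is not available on every driver.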
  9. Sleep time ideas. Per-Pixel Fluorescent?

    Per-Pixel Fluorescent Light

    This was just a sleep-time idea. I have never seen, or never noticed, real-time per-pixel fluorescent lamps; most games just use point lights under this kind of lamp. Actually the idea is pretty simple. Our fluorescent lamp is represented as a direction and a length, and the math is even simpler: just find the nearest distance between a point and a line (the line being our fluorescent lamp's direction). It is just a simple projection using a dot product: project the point onto the fluorescent line.

    Black dot: fluorescent middle point (LightPos)
    Green: fluorescent direction with length (lightDirection and maxLen)
    Red: projected length (projectionLen)
    Blue: projected point (actualLightPos)
    Yellow: pixel position (PixelPos)

    lightDirection must be normalized!!

    float projectionLen = dot(PixelPos.xyz - LightPos, lightDirection);

    But our fluorescent direction is not a line but a line segment, so if the projection is beyond the maximum length, we should limit it:

    projectionLen = clamp(projectionLen, -maxLen, maxLen);

    maxLen is the length from the middle to either end; if our total length is 4 units, then maxLen is 2. Now we know how far the light position is from the center of the fluorescent. Calculating the actual position is not hard:

    vec3 actualLightPos = LightPos + (lightDirection * projectionLen);

    Just multiply projectionLen by the light direction, then add it to the fluorescent light position. Now we have the desired light position, and from here on everything is the same as for a point light.

    Results?? Here are a few screenshots with simple artwork and a much more detailed one.

    With Attenuation:

    Detailed textures:

    And Specular:
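    Put together, the whole thing fits in a few lines of fragment-shader code. A minimal sketch of the closest-point computation followed by ordinary point-light shading; WorldNormal, LightColor and the attenuation constants are illustrative and not part of the original post:

    // Closest point on the fluorescent tube's segment to the shaded pixel.
    // lightDirection must be normalized; maxLen is half the tube's length.
    float projectionLen = dot(PixelPos.xyz - LightPos, lightDirection);
    projectionLen       = clamp(projectionLen, -maxLen, maxLen);
    vec3 actualLightPos = LightPos + lightDirection * projectionLen;

    // From here on it is a regular point light placed at actualLightPos.
    vec3  toLight = actualLightPos - PixelPos.xyz;
    float dist    = length(toLight);
    vec3  L       = toLight / dist;
    float diffuse = max(dot(normalize(WorldNormal), L), 0.0);
    float atten   = 1.0 / (1.0 + 0.35 * dist + 0.44 * dist * dist);
    vec3  color   = LightColor * diffuse * atten;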
  10. Fluorescent

  11. Grounded Pointers

    These should be shown in schools. Good work, thanks :)
  12. Intersection Math & Algorithms - Learn to Derive

    BTW, in the AABB - Ray algorithm, shouldn't it be:

    float4 tnear = f4min(tmin, tmax);
    float4 tfar  = f4max(tmin, tmax);

    ??
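    For context, the per-component min/max swap matters because a negative ray-direction component flips which slab plane is hit first. A minimal scalar sketch of the slab test in plain C++ (types and names are illustrative, not the article's SIMD code):

    #include <algorithm>
    #include <cfloat>

    struct Vec3 { float x, y, z; };

    // Ray vs. axis-aligned box using the slab method.
    // origin/invDir: ray origin and 1/direction (precomputed); bmin/bmax: box corners.
    bool RayAabb(const Vec3& origin, const Vec3& invDir, const Vec3& bmin, const Vec3& bmax)
    {
        float tnear = -FLT_MAX;
        float tfar  =  FLT_MAX;

        const float o[3]  = { origin.x, origin.y, origin.z };
        const float id[3] = { invDir.x, invDir.y, invDir.z };
        const float lo[3] = { bmin.x, bmin.y, bmin.z };
        const float hi[3] = { bmax.x, bmax.y, bmax.z };

        for (int axis = 0; axis < 3; ++axis)
        {
            float t0 = (lo[axis] - o[axis]) * id[axis];
            float t1 = (hi[axis] - o[axis]) * id[axis];

            // If the direction component is negative, t0 > t1; taking the
            // per-component min/max is exactly the f4min/f4max step asked about above.
            tnear = std::max(tnear, std::min(t0, t1));
            tfar  = std::min(tfar,  std::max(t0, t1));
        }

        return tnear <= tfar && tfar >= 0.0f;
    }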