About xantier

  1. I am developing cross-platform: Linux, Mac, Windows. Open source is not obligatory, but it should NOT be commercial.
  2. OpenAL seems deprecated, but I would like to use it since its API closely resembles OpenGL's. Should I use OpenAL, or do you recommend another open-source audio library?
  3. I am about to create a material system. What does a generic material description actually contain? Should it also include the shaders?
  4. I got most of the points, but I am still stuck on applying multiple effects. It seems as if something is wrong: I either have to switch shader states or create one general shader that includes every possible effect (parallax mapping, skeletal animation, bump mapping, ambient occlusion, fog, reflection, directional and spot lights, etc.). That is what pushed me to implement deferred rendering, because it let me separate effects into layers. Every example I see on the internet implements these effects individually, but I have never seen one that mixes them into something meaningful.
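One common alternative to switching shader states per effect is an "uber shader": a single GLSL source in which each effect sits behind an #ifdef, and the application builds a #define preamble per material before compiling. A minimal sketch of the preamble builder — the FX_* flag names are invented for illustration, not taken from any engine:

```c
#include <assert.h>
#include <string.h>

/* Bit flags for the effects a material enables; names are illustrative. */
enum {
    FX_FOG       = 1 << 0,
    FX_NORMALMAP = 1 << 1,
    FX_SKINNING  = 1 << 2,
};

/* Build the #define preamble that selects code paths inside one
 * "uber shader".  The GLSL itself wraps each feature in
 * #ifdef FX_... / #endif blocks, so one source file covers every
 * combination without a separate program per effect mix. */
void build_defines(unsigned flags, char *out, size_t cap)
{
    out[0] = '\0';
    if (flags & FX_FOG)       strncat(out, "#define FX_FOG\n",       cap - strlen(out) - 1);
    if (flags & FX_NORMALMAP) strncat(out, "#define FX_NORMALMAP\n", cap - strlen(out) - 1);
    if (flags & FX_SKINNING)  strncat(out, "#define FX_SKINNING\n",  cap - strlen(out) - 1);
}
```

The preamble is then prepended to the shader source (e.g. as the second string passed to glShaderSource), and programs are cached per flag combination actually used.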
  5. Ahhh, I think I get the idea now: you draw the bounding volume of an object with the color and depth masks off, the GPU checks whether it would be visible and reports the result in the next frame, and if it is not visible, you just don't draw the object. But it wouldn't be a pixel-perfect solution, right? What if I have a tree model? As you can guess, its bounding volume would also fill empty space, which would cause unnecessary occlusion.
  6. I use a pre-processor (mcpp) to handle this. In my case, I implement methods in certain libraries (simple text files containing common functions), which are included in my shader files. A small script generates the final OpenGL shaders using mcpp. There are many ways to do it.

     When using an occlusion query, you render an object and check afterwards whether OpenGL actually rendered at least a few pixels of it, or whether every single pixel is occluded by others. Stencil operations work on a special buffer (the stencil buffer) on a per-pixel basis. The difference is that occlusion queries give feedback to the application ("OK, check whether the object is visible"), whereas stencil is more or less a masking tool without direct feedback to the application. Stencil buffers are used when rendering multiple passes, e.g. in a first pass mark all pixels that represent a mirror surface (set the stencil value to 1), then in a second pass render the mirrored scene only to the marked pixels (stencil check).

     About shaders: is that what modern engines do these days? And occlusion queries: do they still make you process invisible pixels in the fragment shader, or do they prevent them from being unnecessarily processed?
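The quoted workflow — common GLSL functions kept in library files and spliced into each shader by mcpp before compilation — can be illustrated with a toy splice. The function name, the marker string, and the snippet below are all made up for illustration; a real setup would simply run mcpp over the file and pass the result to glShaderSource:

```c
#include <stdio.h>
#include <string.h>

/* Toy stand-in for what mcpp does: splice a named snippet into a
 * shader source wherever an include-style marker appears.  A real
 * pipeline shells out to mcpp instead of doing this by hand. */
void expand_include(const char *src, const char *marker,
                    const char *snippet, char *out, size_t cap)
{
    const char *hit = strstr(src, marker);
    if (!hit) { snprintf(out, cap, "%s", src); return; }
    snprintf(out, cap, "%.*s%s%s",
             (int)(hit - src), src, snippet, hit + strlen(marker));
}
```

The payoff is exactly the "inheritance" asked about later in this thread: a fog function lives in one library file, and every material's shader includes it instead of duplicating the code.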
  7. Since I gave up on deferred rendering because of performance and compatibility problems (plus transparency, AA, independent materials, etc.), I went back to forward rendering. The problem is that in deferred rendering I could apply as many post-processing effects as I wanted, since I kept normals, depth, and color as textures. Now I can have different materials for different objects, but since there is no OOP in shaders, I can't make them inherit, for example, a fog shader, or apply global effects to them. Each object would need duplicated code to achieve this. Or I don't know how to do it yet.

     One more question about occlusion culling: what is the difference between an occlusion query and stenciling with a Z test? When should I use which one?
  8. I am storing the depth to recreate the position from the texture coordinates, the camera, and the depth. So the depth attachment and storing depth in a float component are totally independent? And the precision is determined by the format of the attachment, right? What do you recommend instead of GL_RGBA32F?
  9. Hello, is it wise to keep normal + depth (x, y, z, depth) in one texture, or does depth need so much precision that it requires more than one component?

     This is how the normal + depth texture is created:

     glBindTexture(GL_TEXTURE_2D, textures[GBUFFER_TEXTURE_TYPE_NORMALS_DEPTH]);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
     glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
     glTexParameterf(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
     glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0, GL_RGBA, GL_FLOAT, NULL);
     glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + GBUFFER_TEXTURE_TYPE_NORMALS_DEPTH, GL_TEXTURE_2D, textures[GBUFFER_TEXTURE_TYPE_NORMALS_DEPTH], 0);

     And this is my depth attachment:

     glGenRenderbuffers(1, &depthTexture);
     glBindRenderbuffer(GL_RENDERBUFFER, depthTexture);
     glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
     glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthTexture);
  10. Guys, I have found the problem. I was passing the center of the light as a uniform and multiplying it with gl_ModelViewMatrix to get it into eye space. That's correct up to that point, but since I draw the light sphere with glTranslate and glScale calls, my modelview matrix corrupts the lightpos uniform. I don't know how to fix this yet, but if I use really small values to avoid the scale, the lighting works.
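One way out of this (a sketch of the usual fix, not necessarily what the poster ended up doing) is to transform the light center by the view matrix alone on the CPU and upload the eye-space result, so the sphere's glTranslate/glScale never touch it. All that takes is a column-major matrix-vector multiply:

```c
/* Column-major 4x4 * vec4, the same memory layout OpenGL uses. */
void mat4_mul_vec4(const float m[16], const float v[4], float out[4])
{
    for (int r = 0; r < 4; ++r)
        out[r] = m[0*4+r]*v[0] + m[1*4+r]*v[1]
               + m[2*4+r]*v[2] + m[3*4+r]*v[3];
}
```

With the camera's view matrix in `view`, `mat4_mul_vec4(view, lightWorldPos, lightEyePos)` gives a value that is safe to upload once, before any per-light model transforms are pushed onto the modelview stack.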
  11. Is there a GLSL debugger? I still couldn't find the error, and it doesn't work. :/
  12. I think I am about to solve this. As I posted, setting the front face to GL_CW was causing everything to disappear. That was because I applied the same rule when drawing the full-screen quad, so I solved that part by disabling culling for that final step. So far so good, but the question remains: why does every mesh I load have clockwise vertex order? Actually, this is no longer a problem, since I've solved the issue.
  13. Please, someone help me. I am totally desperate right now. My faces are inverted and I have tried almost every possible combination of perspective settings. Just a hint, please.
  14. All of them are glm:: matrices, like glm::perspective, etc. The view matrix is built from glm::quats, but I use fixed-function calls for the model matrix, like glTranslate and glScale, and read gl_ModelViewMatrix in GLSL. If it is really about that, I can change things; if not, I will have to stop developing my engine, because this problem blocks me from advancing further.
  15. glEnable(GL_CULL_FACE);
      glCullFace(GL_BACK);

      I have been trying to implement deferred rendering for two weeks, but all of the meshes in my test program are culled the wrong way around. The code above should be the correct way to render a model; every example I have looked at draws objects like that. But here is the result when I use GL_BACK as the cull face: And this is GL_FRONT: GL_FRONT shows the result I expected from GL_BACK. The strange thing is that when I set glFrontFace to GL_CW, everything disappears, so this problem is not about winding. I spent all day searching for information about this. The only thing I found concerned depth buffers, but I can't see any problem in their creation. Because of this problem, I can't correctly do the stencil pass for spot-light rendering. I enable GL_DEPTH_TEST in the geometry pass. This is what I attach to my FBO as the depth buffer:

      glGenRenderbuffers(1, &depthTexture);
      glBindRenderbuffer(GL_RENDERBUFFER, depthTexture);
      glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH24_STENCIL8, width, height);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_STENCIL_ATTACHMENT, GL_RENDERBUFFER, depthTexture);

      The main question is: what could cause reversed backface culling?
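When culling looks inverted, one cheap sanity check (independent of the GL state) is to project a known triangle by hand and compute its screen-space winding on the CPU, then compare that with what glFrontFace expects. A sketch of the winding test:

```c
/* Signed area of a 2D (screen-space) triangle: positive means
 * counter-clockwise, which glFrontFace(GL_CCW) -- the default --
 * treats as front-facing.  Useful for checking exported mesh
 * winding, or whether a projection flips handedness. */
float signed_area(float ax, float ay, float bx, float by,
                  float cx, float cy)
{
    return 0.5f * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}
```

If a triangle that is CCW in model space comes out with negative area after the full modelview-projection transform, something in the pipeline (an odd number of negative scales, or a mismatched projection convention) has flipped the handedness, and culling will appear reversed even though the winding setup looks correct.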