Fnord42

Member

  • Content Count: 33
  • Joined
  • Last visited
  • Community Reputation: 102 Neutral

About Fnord42

  • Rank: Member
  1. Thank you very much. The last 2 posts have opened my eyes ;)
  2. Thanks for your reply. I knew that light can consist of different frequency combinations which physiologically create the same impression of color, but I don't see how this is a reason to use different frequencies for the simulation of specular reflections than for the simulation of diffuse reflections, like it's done for example in the OpenGL fixed-function pipeline's implementation of the Blinn-Phong illumination model. Why would someone simulating illumination choose to use one set of incoming frequencies to calculate diffuse reflection with the diffuse material constants, and another set of incoming frequencies to calculate specular reflection with the specular material constants? Shouldn't the simulated light emit only one set of frequencies which interacts with both material properties? (A small sketch of what I mean follows below.)
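    To make the question concrete, here's a minimal per-channel sketch of the fixed-function-style evaluation I mean (the function name and signature are my own, just for illustration): the light contributes a separate diffuse color and a separate specular color, each multiplied with its own material constant.

        #include <cmath>

        // Per-channel Blinn-Phong term in the style of the fixed-function pipeline.
        // Note the two separate light colors (lightDiffuse, lightSpecular); physically
        // there would be only one incoming light color, which is exactly my question.
        float blinnPhong(float lightDiffuse, float lightSpecular,
                         float matDiffuse,   float matSpecular,
                         float NdotL, float NdotH, float shininess)
        {
            float diffuse  = matDiffuse  * lightDiffuse  * std::fmax(NdotL, 0.0f);
            float specular = matSpecular * lightSpecular * std::pow(std::fmax(NdotH, 0.0f), shininess);
            return diffuse + specular;
        }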
  3. I can't imagine another reason. But I also can't imagine a good artistic example where someone might use different colors for specular and diffuse ;) Thanks for your reply. In retrospect my post may not have been as clear as it could have been ;) I wasn't wondering about the use of specular lighting. I was wondering why there are both a specular and a diffuse color for an incoming light ray, while in physics there is only one. But thanks for your reply anyway.
  4. Hello everyone, can somebody tell me why there is, for example in the Phong illumination model, a separate specular light component? It makes sense to me that there are distinct material constants, because as I understand it, the specularly reflected rays get reflected directly by the surface, while the diffusely reflected rays go a bit inside the material and scatter multiple times before they get out again. But why a separate specular light component? Aren't the rays which a light source emits the same for both diffuse and specular reflection? What could be the use of a distinct specular light color? Thanks for your time!
  5. The z-buffer depth is non-linear in the actual depth of the fragment, so there's more precision for near objects and less precision for objects further away. I hope someone will correct me if I'm wrong, but I think the z-buffer depth with a perspective projection gets calculated as follows:

        z_buffer_value = a + b / z

    where:

        a = zFar / (zFar - zNear)
        b = zFar * zNear / (zNear - zFar)
        z = distance from the eye to the object

    This happens inside the vertex shader through multiplication of a vector with the projection matrix and the w-divide of the vector afterwards (v /= v.w). My perspective projection matrix for example looks like this:

        hFov  0     0     0
        0     vFov  0     0
        0     0     p1    p2
        0     0     -1    0

        vFov = 1.0 / tan(vFovDegree * (PI / 360.0));
        hFov = vFov / ratio;
        p1 = (farPlaneDistance + nearPlaneDistance) / (nearPlaneDistance - farPlaneDistance);
        p2 = (2.0 * farPlaneDistance * nearPlaneDistance) / (nearPlaneDistance - farPlaneDistance);

    I've never used gl_FragCoord so I can't answer that part, but you can try to compare both variants in your fragment shader. For example, in the vertex shader pass along the projected, z-divided vertex, and check in the fragment shader how much its interpolation differs from gl_FragCoord.z.
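    If it helps, here's a small self-contained sketch of how I'd fill such a matrix in code (row-major 4x4 array; the function name and parameters are just illustrative, not a drop-in library function):

        #include <cmath>

        // Builds the perspective projection matrix described above (row-major layout;
        // transpose it, or pass GL_TRUE to glUniformMatrix4fv, when uploading to OpenGL).
        void buildPerspective(float m[16], float vFovDegree, float ratio,
                              float zNear, float zFar)
        {
            const float PI = 3.14159265358979f;
            float vFov = 1.0f / std::tan(vFovDegree * (PI / 360.0f));
            float hFov = vFov / ratio;
            float p1 = (zFar + zNear) / (zNear - zFar);
            float p2 = (2.0f * zFar * zNear) / (zNear - zFar);

            for (int i = 0; i < 16; i++)
                m[i] = 0.0f;
            m[0]  = hFov;  // x-scale
            m[5]  = vFov;  // y-scale
            m[10] = p1;    // depth remapping
            m[11] = p2;
            m[14] = -1.0f; // -z ends up in w for the perspective divide
        }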
  6. Hi, I think it's not necessary to allocate memory for your renderbuffer, because you've already allocated memory for the texture. With both glFramebufferRenderbuffer and glFramebufferTexture2D you probably override the binding of the texture to GL_DEPTH_ATTACHMENT with the binding of the renderbuffer, which probably leads to rendering into the renderbuffer instead of the texture. This is how I create the framebuffer for my shadowmap:

        glGenFramebuffers(1, &m_shadowMapFboId);
        glGenTextures(1, &m_shadowMapTextureId);

        // Allocate GPU-memory for the depth-texture.
        glBindTexture(GL_TEXTURE_2D, m_shadowMapTextureId);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT32, m_shadowMapSize, m_shadowMapSize, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);

        // Bind the texture to the framebuffer's depth-attachment.
        glBindFramebuffer(GL_FRAMEBUFFER, m_shadowMapFboId);
        glFramebufferTexture(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, m_shadowMapTextureId, 0);

        // Tell the framebuffer we won't provide any color-attachments.
        glDrawBuffer(GL_NONE); // For depth-only renderings; if you also need frag-color, don't use this.

        if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) { ... }

    How do the shaders look which you use to read/draw from/to the depth-texture? While reading from the depth-texture you only need to use the first "color"-coordinate, afaik because you've allocated a float-texture with glTexImage2D. So even if you use GL_DEPTH_COMPONENT32 you don't need to reassemble the 8-bit color channels and can just access the 32-bit float using the first color-index:

        float depth = texture(texDepth, vPos2D).x;

    Hope I could help.
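    P.S. In case it's useful, here's a minimal sketch of how such a depth-only FBO is used each frame (identifiers taken from my snippet above; the actual draw calls are up to you):

        // Render the shadowmap: bind the FBO, match the viewport to the texture,
        // clear the old depth and draw the casters with a depth-only shader.
        glBindFramebuffer(GL_FRAMEBUFFER, m_shadowMapFboId);
        glViewport(0, 0, m_shadowMapSize, m_shadowMapSize);
        glClear(GL_DEPTH_BUFFER_BIT);
        // ... draw the shadow casters here ...
        glBindFramebuffer(GL_FRAMEBUFFER, 0); // back to the default framebuffer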
  7. I think moving glBufferData of static vertex-data into the draw-loop isn't what you want to do, because then you reupload the same data every frame. I think the problem was that the calls to gl*Pointer for both buffers have overwritten each other. It should be enough to call glBindBuffer and the corresponding gl*Pointer calls in the render-loop. A common way to store this state is using VertexArrayObjects (VAOs). Those store the OpenGL state of one or more buffers and which attribute-locations are connected to them through the gl*Pointer calls. For example you could do something like this:

        GLuint board_vao;
        glGenVertexArrays(1, &board_vao);
        glGenBuffers(1, &board_VBO);

        glBindVertexArray(board_vao);
        glBindBuffer(GL_ARRAY_BUFFER, board_VBO);
        glBufferData(GL_ARRAY_BUFFER, num_board_vertices * sizeof(vertex), board_vertices, GL_STATIC_DRAW);
        glVertexPointer(3, GL_FLOAT, sizeof(vertex), BUFFER_OFFSET(0));
        glNormalPointer(GL_FLOAT, sizeof(vertex), BUFFER_OFFSET(12));
        glTexCoordPointer(2, GL_FLOAT, sizeof(vertex), BUFFER_OFFSET(24));

    If you do the same for your other VBOs whose vertices you want to draw independently, then you can simply draw them like this:

        glBindVertexArray(board_vao); // remembers which VBOs are bound and which attribs have been connected via gl*Pointer.
        board_shaders->bind();
        // Set texture-state ...
        glDrawArrays(GL_TRIANGLES, 0, num_board_vertices);

        glBindVertexArray(piece_vao);
        piece_shaders->bind();
        // Set texture-state ...
        glDrawArrays(GL_TRIANGLES, 0, num_piece_vertices);
        ...
  8. Fnord42

    Turntable camera functionality

    edit: I'm sorry if I confused you, I'm not used to the immediate mode. edit2: Could you tell a bit more about how you want the camera and mode to behave? I'm not familiar with Blender's view. What do you mean by rotating around the center of the screen? What you are doing now is moving the object along the scene's axes, then rotating the object around the scene's origin, and then moving and rotating the scene's origin to match the camera's position and direction (see the sketch below).
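    In code, my reading of that transform order would be something like this (fixed-function style; the variable names are illustrative, not from your code):

        // OpenGL applies the last-specified transform to the vertices first,
        // so this reads bottom-up: 1) move the object along the scene's axes,
        // 2) rotate it around the scene's origin, 3) transform the scene
        // against the camera's position and direction.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        glRotatef(-cameraPitch, 1.0f, 0.0f, 0.0f); // 3) inverse camera rotation
        glRotatef(-cameraYaw,   0.0f, 1.0f, 0.0f);
        glTranslatef(-camX, -camY, -camZ);         //    inverse camera position
        glRotatef(objAngle, 0.0f, 1.0f, 0.0f);     // 2) rotate around the scene's origin
        glTranslatef(objX, objY, objZ);            // 1) move along the scene's axes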
  9. Hi there, I'm currently trying to implement MJP's deferred "MSAA" in OpenGL, where I need a multisampled depth-only geometry-prepass and use the results in a post-processing CFAA-shader. For that I've created an FBO with a multisampled texture. My problem is that all the depth-samples seem to be equal if I query them in the post-processing shader. The strange thing is, if I profile the time needed for an 8x multisampled and a non-multisampled depth-only pass, they both need the same time. I don't know how efficient hardware-MSAA is, but I can't really believe that my ATI HD 5700 is that parallel on MSAA. I use the proprietary ATI drivers on Linux and glxinfo says I have OpenGL 4.2. This is how I create the FBO and texture:

        glEnable(GL_MULTISAMPLE);

        glGenTextures(1, &m_msaaDepthTexId);
        glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_msaaDepthTexId);
        glTexImage2DMultisample(GL_TEXTURE_2D_MULTISAMPLE, m_msaaSamples, GL_DEPTH_COMPONENT32, width, height, GL_TRUE);

        glGenFramebuffers(1, &m_msaaFboId);
        glBindFramebuffer(GL_FRAMEBUFFER, m_msaaFboId);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D_MULTISAMPLE, m_msaaDepthTexId, 0);
        glDrawBuffer(GL_NONE);
        glReadBuffer(GL_NONE);

    In the prepass-shader I don't explicitly assign a depth value. In the postprocessing-shader I access the multisampled depth-texture like this:

        uniform sampler2DMS texDepthMS;
        ...
        void main()
        {
            ivec2 sampleLocation = ivec2(texCoords * frameSize);
            float msDepth;
            for (int i = 0; i < 4; i++)
                msDepth = texelFetch(texDepthMS, sampleLocation, i).x;
        }

    But the msDepth-values are all independent of i and equal to the non-MS depth-value. I set the texture uniform like this:

        glActiveTexture(GL_TEXTURE1);
        glEnable(GL_TEXTURE_2D_MULTISAMPLE);
        glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_msaaDepthTexId);
        glUniform1i(m_msaaShader->getUniformLocation("texDepthMS"), 1);

    Has anyone had similar problems, or does anyone know what might be wrong? Greets and thanks for your time!
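    edit: One sanity check I might try (just an idea, not a confirmed fix): query the texture to verify the driver actually created it with the requested sample count, since a silently reduced count would explain both the equal samples and the equal timing:

        GLint actualSamples = 0;
        glBindTexture(GL_TEXTURE_2D_MULTISAMPLE, m_msaaDepthTexId);
        glGetTexLevelParameteriv(GL_TEXTURE_2D_MULTISAMPLE, 0, GL_TEXTURE_SAMPLES, &actualSamples);
        // If actualSamples != m_msaaSamples, the driver downgraded the texture.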
  10. Hey, thanks for your replies! I've tried it with my shadowmap's PCF calculation, where in the current implementation I have an array that stores the offsets for each sample. If I set the sample-count too high I get an instant freeze, so I don't think it's the GPU temperature. Yeah, I meant some kind of memory overflow of the GPU-memory. I don't know how the GPU works in detail, but it seemed reasonable to me to assume that each core has a stack and shared memory. I think you're right on the virtual machines, because I think they use the host-system's GPU-drivers to simulate parts of the ones on the guest-system (not sure about that). edit: just remembered that the last time I checked, a Windows guest on a Linux host had only highly experimental 3D support, so at least VirtualBox isn't an option. I've read about that while searching for similar topics. So it might be possible to implement something similar on Linux. If someone has found/implemented such a thing, please let me know.
  11. Hi there, I just unpleasantly noticed that complex shaders can cause a system freeze. I guess a stack overflow is the thing to blame here. I'm using Linux with proprietary ATI drivers. Is there a possibility to run programs in a memory-safe mode? Would a virtual machine provide more safety against CPU- and especially GPU-memory abuses? Or are there more lightweight solutions for memory-safe development? Greets and thanks for your time.
  12. Abused the GLSL texture()-function... I did

        float shadowFactor = texture(texShadowCubemap, vec4(wsLightDir, 1), wsDepth - 0.0005);

    instead of

        float shadowFactor = texture(texShadowCubemap, vec4(wsLightDir, wsDepth - 0.0005));

    Now it works with linear filtering. (For a samplerCubeShadow the depth-reference belongs in the fourth component of the coordinate vector; the optional third argument is an LOD bias, which is what I was accidentally passing.)
  13. Hi there, I'm currently trying to optimize my shadowmapping shaders. I'm using a deferred renderer with shadow-cubemaps, and my system is Linux with proprietary drivers for my ATI Radeon HD 5700. What I'm trying to do is use a samplerCubeShadow in my lighting-shader to get an (at least a bit) smoothed shadow. As I've read here, a texture-lookup with a shadow-sampler should return the shadow-factor as a result. I've also read there that if used with a texture with linear filtering enabled, the behaviour of a shadow-sampler texture-lookup is implementation-dependent, but should always return a value in the range [0, 1] which is proportional to the number of samples in the shadow texture that pass the comparison. Now my results are a bit different. If I make a texture-lookup on my samplerCubeShadow without linear filtering activated, I get the depth of the shadow-map as a result. If I make a texture-lookup with linear filtering, I get a constant 0 or 1 (depending on the GL_TEXTURE_COMPARE_FUNC). The depth-values of the shadow-cubemap and the depth-values I compare them to both look fine. This is how I initialize the shadow-texture:

        glBindTexture(GL_TEXTURE_CUBE_MAP, m_shadowCubemapTextureId);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_WRAP_R, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_MODE, GL_COMPARE_R_TO_TEXTURE);
        glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_COMPARE_FUNC, GL_LEQUAL);
        for (char face = 0; face < 6; face++)
        {
            glTexImage2D(GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, 0, GL_DEPTH_COMPONENT32, size, size, 0, GL_DEPTH_COMPONENT, GL_FLOAT, 0);
        }

    The shaders that create my shadowmap calculate their own depth for a worldspace direction as follows:

        // ===== Vertexshader =====
        vec3 lightToPos = wsPosition.xyz - wsLightPosition;
        vDepth = dot(lightToPos.xyz, lightToPos.xyz) / squaredLightRadius;
        gl_Position = lightVPMatrix * wsPosition;

        // ===== Fragmentshader =====
        gl_FragDepth = vDepth;

    The light-projection-matrix in lightVPMatrix goes from z=1 to z=lightRadius. The shaders that draw the lighting of a light with its shadowmap calculate the depth of the current position in the same way, just in the fragmentshader because of the deferred rendering, and compare that depth with the one of the shadowmap:

        // ===== Fragmentshader =====
        uniform sampler2D texDepth;
        uniform samplerCubeShadow texShadowCubemap;
        ...
        varying out vec4 frag;

        void main(void)
        {
            // Reconstruct viewspace-position from depth
            vec3 vsPosition = ...
            vec3 wsLightDir = invNormal * (vsPosition - vsLightPosition);
            float wsDepth = dot(wsLightDir, wsLightDir) / squaredLightRadius;
            float shadowFactor = texture(texShadowCubemap, vec4(wsLightDir, 1), wsDepth - 0.0005);
            // frag = vec4(vec3(shadowFactor), 1); return;
            ...
        }

    Does somebody know what might be the problem? Thanks for your time!
  14. Fnord42

    OpenGL camera

    [quote][quote]Hi there, nice name for asking that question :3 (my camera explanation, quoted in full; see the post below)[/quote]
    If he is using old OpenGL he could far more easily use gluLookAt. (Which makes implementing a camera trivial.) http://www.dei.isep....uLookAt.3G.html has a description of how the function works as well, if you wish to replicate it using a modern OpenGL version.[/quote]

    Nice addition, completely forgot to mention gluLookAt. But it doesn't always make camera-implementation more trivial. If he goes for first-person mouse-look for example, it would be easier to just add the difference of the mouse-position to the rotation around the y-axis.
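    For completeness, a minimal gluLookAt example (the concrete values are just placeholders):

        #include <GL/glu.h>

        // Place the camera at (0, 5, 10), look at the origin, keep +Y as up.
        // gluLookAt builds the equivalent rotate+translate view-matrix for you.
        glMatrixMode(GL_MODELVIEW);
        glLoadIdentity();
        gluLookAt(0.0, 5.0, 10.0,  // eye position
                  0.0, 0.0, 0.0,   // point to look at
                  0.0, 1.0, 0.0);  // up direction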
  15. Fnord42

    OpenGL camera

    Hi there, nice name for asking that question :3 In general a camera is realized via a view-matrix. This is a 4x4 matrix which moves and rotates your scene against the camera. So if your virtual camera rotates right, the view-matrix needs to rotate the vertices of your scene to the left. If your virtual camera moves forward, the view-matrix needs to move the vertices of your scene backwards. I would highly recommend reading about the basics of 3D maths and computer-graphics matrices. Have a look at the red book's chapter about viewing. If you're using the deprecated fixed-function pipeline (glBegin(), glVertex(), ... instead of shaders) you can change the view-matrix with the glTranslate- and glRotate-functions. Those functions change the matrix like you would expect it from a camera.

        // Choose the Model-View-Matrix
        glMatrixMode(GL_MODELVIEW);
        // Rotate the camera 90 degrees around its x-axis
        glRotatef(90.0f, 1.0f, 0.0f, 0.0f);
        // Move the camera-position along its new x-axis (after rotation)
        glTranslatef(10.0f, 0.0f, 0.0f);

    If you want to use shaders, you need an external matrix class or a 16-float array representing that matrix, which you then load into GPU-memory.
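    A minimal sketch of that upload step, assuming a mat4 uniform named "viewMatrix" in your shader (the names here are illustrative):

        // view is a 16-float array in column-major order (OpenGL's default layout).
        GLfloat view[16] = { /* ... your view-matrix ... */ };

        glUseProgram(shaderProgram);
        GLint loc = glGetUniformLocation(shaderProgram, "viewMatrix");
        glUniformMatrix4fv(loc, 1, GL_FALSE, view);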