
_Seth_

Member

  • Content Count: 10
  • Community Reputation: 122 Neutral
  • Rank: Member
  1. Yeah, I noticed the memory leak! A few people tried it out and it works fine for them, but I still get weird values. I have an ATI Radeon 9800 Pro... could it be the card? I have the latest drivers...
  2. Could anyone post a link to some source that works correctly and gets the right values from the depth buffer? I'll compare it with my code and maybe find what's wrong. Thank you. Jeff
  3. It's pretty simple. Camera init:

         glMatrixMode(GL_PROJECTION);    // select the projection matrix
         glLoadIdentity();               // reset the projection matrix
         gluPerspective(45.0f, 1.0f, 1.0f, 7.0f);
         glGetFloatv(GL_PROJECTION_MATRIX, lightprojection);

         glMatrixMode(GL_MODELVIEW);
         glLoadIdentity();
         gluLookAt(lightPosition.x, lightPosition.y, lightPosition.z,
                   0.0f, 0.0f, -4.0f,
                   0.0f, 1.0f, 0.0f);
         glGetFloatv(GL_MODELVIEW_MATRIX, lightmodelview);  // was lightprojection, which
                                                            // overwrote the projection matrix

     Display:

         void Display(void)
         {
             GLfloat *depthBuff = new GLfloat[RESX * RESY];

             glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
             glViewport(0, 0, RESX, RESY);

             // Draw the scene
             glColor3f(1, 0, 0);
             glBegin(GL_QUADS);
                 glVertex3f(-0.6, -0.6, -3.0);
                 glVertex3f( 0.6, -0.6, -3.0);
                 glVertex3f( 0.6,  0.6, -3.0);
                 glVertex3f(-0.6,  0.6, -3.0);
             glEnd();

             glReadBuffer(GL_BACK);
             glPixelStorei(GL_PACK_ALIGNMENT, 1);
             glReadPixels(0, 0, RESX, RESY, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuff);

             for (int y = 0; y < RESY; y++)
                 for (int x = 0; x < RESX; x++)
                     reference << x << " " << y << " "
                               << depthBuff[x + RESX * y] << std::endl;  // row stride is RESX, not RESY

             glFinish();
             glutSwapBuffers();
             glutPostRedisplay();

             delete[] depthBuff;
         }

     Like you said, it gives a weird depth buffer reading, and I wonder why?! Jeff

     [Edited by - _Seth_ on November 1, 2004 11:33:44 AM]
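A side note on the readings above: the values glReadPixels returns for GL_DEPTH_COMPONENT are window-space depths in [0,1], and with a perspective projection they are strongly nonlinear in eye-space distance, so a raw dump can look "weird" even when it is correct. A minimal sketch of converting them back to eye-space distance, assuming the gluPerspective(45.0f, 1.0f, 1.0f, 7.0f) call above and the default glDepthRange(0, 1):

    // Convert a window-space depth d (0..1, as read back with
    // GL_DEPTH_COMPONENT) to eye-space distance, for a standard
    // perspective projection with planes zNear and zFar.
    float LinearizeDepth(float d, float zNear, float zFar)
    {
        float zNdc = 2.0f * d - 1.0f;               // window [0,1] -> NDC [-1,1]
        return (2.0f * zNear * zFar) /
               (zFar + zNear - zNdc * (zFar - zNear));
    }

With zNear = 1 and zFar = 7 as above, a fragment at eye-space distance 3 should give LinearizeDepth(d, 1.0f, 7.0f) close to 3, while background pixels read back as exactly the cleared depth of 1.0.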
  4. Sure. This image is rendered with the information contained in the depthBuff array, with the camera unchanged: http://www.jflevesque.com/image.jpg In this one, I have zoomed out and moved the camera a little. Notice the black pixels that follow the original viewing volume (it's a bit hard to tell from just that image, though): http://www.jflevesque.com/image2.jpg And here's the depthBuff array dump, in the format "pixel x, pixel y, recorded depth"; the image is 300x300: http://www.jflevesque.com/reference.zip Jeff
  5. Hi guys, I'm trying to read the depth buffer and place that information in an array, like this:

         GLfloat *depthBuff = new GLfloat[RESX * RESY];
         glReadPixels(0, 0, RESX, RESY, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuff);

     If I process that array and place a GL_POINTS vertex at every hit glReadPixels recorded, I recover the image I read (I use gluUnProject). The problem is, when I zoom the camera out, I find that glReadPixels has detected "valid" pixels where there were none. It always happens at the very boundaries of the viewing frustum and on the near and far planes. I decided to write the depthBuff array to a file and found that while most pixel depths are correctly recorded as 0 or 1 (nothing there), a few are given values where there was nothing in view. I'm using:

         glClearDepth(1.0f);
         glDepthFunc(GL_LESS);
         glEnable(GL_DEPTH_TEST);

     and I have made my near and far planes very close to each other to get the most depth precision. Is that normal behavior? What would you suggest? Thanks, Jeff
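For reference, here is one way to wire up the read-back-and-unproject loop described above, as a sketch rather than a drop-in fix: a pixel where nothing was drawn keeps the cleared depth of 1.0 (given glClearDepth(1.0f)), so skipping values at or above 1.0 filters out the empty pixels before unprojecting.

    #include <GL/gl.h>
    #include <GL/glu.h>

    void UnprojectDepthBuffer(int resX, int resY)
    {
        GLdouble model[16], proj[16];
        GLint view[4];
        glGetDoublev(GL_MODELVIEW_MATRIX, model);
        glGetDoublev(GL_PROJECTION_MATRIX, proj);
        glGetIntegerv(GL_VIEWPORT, view);

        GLfloat *depthBuff = new GLfloat[resX * resY];
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, resX, resY, GL_DEPTH_COMPONENT, GL_FLOAT, depthBuff);

        for (int y = 0; y < resY; ++y)
            for (int x = 0; x < resX; ++x)
            {
                GLfloat d = depthBuff[x + resX * y];
                if (d >= 1.0f)                   // cleared depth: nothing drawn here
                    continue;
                GLdouble wx, wy, wz;             // world-space position of the pixel
                gluUnProject(x + 0.5, y + 0.5, d, model, proj, view, &wx, &wy, &wz);
                // ... use (wx, wy, wz), e.g. emit a GL_POINTS vertex
            }
        delete[] depthBuff;
    }

Depth precision near the frustum boundaries is a usual suspect for stray "valid" pixels like those described above; testing against a small epsilon (d >= 1.0f - 1e-6f) instead of exactly 1.0 is a common workaround.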
  6. > I mean, when the camera moves, there would be pixels that were visible the first time, but hidden the second, and vice versa.

     Ohh, you're absolutely right here. That phenomenon is called "holes and gaps". "Warp" is the word used when we're talking about image warping; I don't know where it comes from. Star Trek, maybe :) Thanks for the depth buffer tip! I hadn't thought about that! Jeff
  7. Thanks Oddbot! I'll check this out and see if it's applicable.

     mikeman: Yeah, you're right, it's a bit confusing; I didn't know how to put it into words. Next time I'll try to make more sense in my subject line, at least! :P Here's a second try: given a reference image rendered from a camera's point of view, and a second image of the scene rendered with the same camera after it has been moved a little (the movement is known and is in world coordinates), I want to warp every pixel of the second image into a central reference frame (the view given by the reference camera). It's a bit tricky, because objects close to the camera "move" much more than objects that are farther away; that's what perspective is all about. Here's an equation I found that applies to stereo rendering:

         dp = -dv / z

     where dp is the vector disparity (change) in pixels, dv is the known vector difference in viewing positions, and z is measured from the viewpoint into the scene (it's the same for both views). That makes sense, because the amount of movement dv is modulated by z: if z is big (the pixel comes from a very distant object), dp will be small; if z is small, we get a change of many pixels. The problem is that dp is in pixels while dv is in world units, and so is z (I think), so something is missing from that equation.

     Quick question: with glReadPixels, how do you know whether a pixel you read is valid and contains information about the scene, or is just "a big nothing"?

     Thanks all! :) Jeff
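On the units: one plausible fix, assuming a pinhole camera model (this is an inference, not something established in the thread), is that the missing factor is the focal length expressed in pixels. For a vertical field of view fov and an image H pixels tall, f = (H/2) / tan(fov/2), and the disparity becomes dp = -dv * f / z, which is now in pixels. As for the quick question: with glClearDepth(1.0f), a depth read of exactly 1.0 means nothing was drawn at that pixel, so that works as an "empty pixel" test (see the sketch after post 5 above).

    #include <cmath>

    // Pixel disparity for a sideways camera shift, pinhole model (assumed).
    // dv:       camera displacement in world units
    // z:        eye-space depth of the point, in the same world units
    // fovYDeg:  vertical field of view in degrees
    // heightPx: image height in pixels
    float PixelDisparity(float dv, float z, float fovYDeg, int heightPx)
    {
        const float kPi = 3.14159265f;
        float fPx = (heightPx * 0.5f) / std::tan(fovYDeg * kPi / 360.0f); // focal length in pixels
        return -dv * fPx / z;    // near objects (small z) shift by more pixels
    }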
  8. I see what you mean! And I guess it would work for that example, but if you have a complete scene where every pixel is probably drawn, it would be much harder to work around. What I'm trying to do is warp an image rendered from a given point of view into a reference point of view. Thanks for the reply :P Jeff
  9. Hi! I have a small problem; I hope someone can help me. I'd like to know the number of pixels a polygon has moved. Here's the picture: I have a camera with a 45-degree field of view, znear 0.8, zfar 4.0. I place my camera and draw my reference image with, say, a flat quad in it. I read back the pixels and the depth buffer, then clear the scene. After that, I move my camera position a little to the right, say 0.4 in x. I also move the lookat by 0.4 in x, so the line of view is parallel to the reference camera's. I draw the scene again and read back the pixels and the depth buffer. What I need to know is how many pixels the quad has moved from the reference image to the second image. I tried a few things and got poor results. If you're wondering, it's for soft shadows, where you take a few samples of a light source. Thanks, Jeff
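Plugging the numbers from this post into the disparity relation from post 7 above (dp = -dv * f / z, with f the focal length in pixels under a pinhole assumption) gives a rough sanity check; the quad's depth and the viewport size are assumed here for illustration:

    // 45 deg fov, 300x300 viewport (assumed):
    //   f  = 150 / tan(22.5 deg) ~= 362 pixels
    // quad at eye-space depth z = 3 (assumed), camera shift dv = 0.4:
    //   dp ~= 0.4 * 362 / 3 ~= 48 pixels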
  10. Hello guys, I'd like to implement this technique for a project: http://graphics.stanford.edu/papers/shadows/ (the first one, called the layered attenuation map). First, has anyone seen that technique implemented somewhere: an example, a bit of code? My main question is about the second part of the algorithm. Once you have built your attenuation map from renders taken from the light's point of view, you switch back to the camera's point of view (a bit like shadow mapping) and render the whole scene with texturing and lighting. After that, for each pixel of the image, you check whether that pixel is in your attenuation map and, if so, modulate its color by a factor. How do I get a pixel's information from a rendered view, change its color, and write it back to produce the final image, and so on for each pixel? In OpenGL, of course. Thank you! Jeff
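On the last question: in fixed-function OpenGL of that era, one straightforward (if slow) approach is to read the color buffer back with glReadPixels, modulate each pixel in software, and write the whole block back with glDrawPixels. This is a sketch of that idea, not the paper's implementation; GetAttenuation() is a hypothetical lookup into the attenuation map, and resX/resY stand in for the viewport size.

    #include <GL/gl.h>

    extern float GetAttenuation(int x, int y);   // hypothetical: 1 = fully lit, 0 = fully shadowed

    void ModulateByAttenuationMap(int resX, int resY)
    {
        GLubyte *pixels = new GLubyte[resX * resY * 3];
        glPixelStorei(GL_PACK_ALIGNMENT, 1);
        glReadPixels(0, 0, resX, resY, GL_RGB, GL_UNSIGNED_BYTE, pixels);

        for (int y = 0; y < resY; ++y)
            for (int x = 0; x < resX; ++x)
            {
                float a = GetAttenuation(x, y);
                GLubyte *p = pixels + 3 * (x + resX * y);
                p[0] = (GLubyte)(p[0] * a);      // scale R, G, B by the
                p[1] = (GLubyte)(p[1] * a);      // attenuation factor
                p[2] = (GLubyte)(p[2] * a);
            }

        glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
        glWindowPos2i(0, 0);                     // raster position at the lower-left corner
        glDrawPixels(resX, resY, GL_RGB, GL_UNSIGNED_BYTE, pixels);
        delete[] pixels;
    }

If the read-back proves too slow, a common alternative is to store the attenuation factors in a texture and modulate during rendering with multiplicative blending, but the read/modify/write version above maps most directly onto the question as asked.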