OpenGL: Covering the OpenGL viewport with a quad in perspective projection?

Hi,

EDIT:
I'm causing confusion with unneeded details; the simplified question I'm asking is:

Given a perspective projection, with known FOV, aspect ratio, etc, how can I find the corners of the view frustum at a given distance from the eye coordinate?

I'm sure I'm just missing something silly, but it's driving me crazy! As far as I can tell, in order to generate my corners, all I should need to do is use simple trig, i.e.:

frustumHalfWidth  = tan(viewportHorzFOVRads / 2.0) * distanceFromCamera
frustumHalfHeight = tan(viewportVertFOVRads / 2.0) * distanceFromCamera
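
In code, the intent is roughly this (a minimal sketch of the idea, not my actual implementation; the names are just placeholders):

#include <cmath>

struct Vec3 { float x, y, z; };

// Eye-space corners of the frustum cross-section at distance 'dist' from the eye,
// assuming a symmetric frustum looking down -Z (gluPerspective-style).
void frustumCorners(float horzFOVRads, float vertFOVRads, float dist, Vec3 out[4])
{
    const float halfW = std::tan(horzFOVRads * 0.5f) * dist;
    const float halfH = std::tan(vertFOVRads * 0.5f) * dist;
    out[0] = { -halfW, -halfH, -dist };  // bottom-left
    out[1] = {  halfW, -halfH, -dist };  // bottom-right
    out[2] = {  halfW,  halfH, -dist };  // top-right
    out[3] = { -halfW,  halfH, -dist };  // top-left
}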

In my tests, if I render a quad at the specified distanceFromCamera using the calculated frustum half-widths/heights, it always appears 'too big' (or too close to the camera, depending on how you look at the problem). I need to move the camera away from the quad along the z axis to get it to 'shrink' to the viewport frustum size.

Can anyone explain why the math above wouldn't generate quad corners that fall on the horizontal and vertical edges of the viewport frustum?

Edited by Grumple

Looks all right, but it's taken very much out of context and it's impossible to say if it's used correctly.

But I also want to ask: is the reason you're doing this that it's some kind of assignment and you have to show that you understand projections, or are you under the misconception that you can't have more than one projection matrix?

With as little context as you actually provided, is it not a solution to draw this quad with an orthographic projection, and the rest with a perspective projection? After all, it seems like all you're after is getting the quad to fit to the screen.

This is a personal project that involves displaying imagery captured with a real-life camera at the correct perspective in OpenGL. Ultimately I don't really need or want it to perfectly cover the viewport, but I'm using this test case to make sure I have a good grasp of the projection.

My theory for this experiment is that if I have a camera with a 90-degree horizontal field of view, I should be able to create a GL viewport with the same field of view and aspect ratio, then perfectly cover that viewport with an image from the camera in my OpenGL scene.

The real goal is to ensure I can make my OpenGL scene match the real-world camera conditions at the time an image was taken.

If your camera image is mapped to the entire screen, then the projection setup you use when drawing the image is entirely irrelevant and will have no relation to the camera capturing the image. You need to set up the matrix correctly for other things you draw, of course, and that involves matching the FOV, orientation, aspect ratio and such. But for the captured image itself, this process is entirely irrelevant.

Yeah, the image stuff is certainly not critical to this specific problem; I was just giving some background information.

Maybe I should reword my problem to avoid confusion:

Given a perspective projection, with known FOV, aspect ratio, etc, how can I find the corners of the view frustum at a given distance from the eye coordinate?

The formulas you gave in your first post will do just that, given that you have calculated the field of view correctly based on the aspect ratio:

hfov = atan(tan(vfov/2)*aspect)*2;
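
In C++ that is just (a trivial sketch; aspect here means width divided by height):

#include <cmath>

// Horizontal FOV derived from the vertical FOV and the aspect ratio (width / height).
float horizontalFOV(float vfovRads, float aspect)
{
    return 2.0f * std::atan(std::tan(vfovRads * 0.5f) * aspect);
}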

You can convert points from clipping space to camera space by multiplying vertices in clipping space by the inverse of the projection matrix. Clipping space is the view space between camera space and the screen; the x, y, and z of all of the points lie between -1 and 1, where x and y represent the screen position and z represents the depth. So the vertex (-1, 1, -1) transformed to camera space represents the top-left corner sitting on the near clipping plane, and moving z from -1 to 1 moves the transformed point from the near clipping plane to the far clipping plane.

To transform from clipping space to camera space, you first take the inverse of the perspective matrix; I will refer to this matrix as P⁻¹. For each vertex you add a fourth component, w, and set its value to 1, so (-1, 1, -1) becomes (-1, 1, -1, 1). You multiply the resulting 4-dimensional vector by P⁻¹, then divide the entire vector by its own w value. The resulting vector will be in camera space. You can then multiply the point by the camera's world matrix to convert it to world space.

That is more or less what gluUnProject does; if you want to go this route, I would recommend just using gluUnProject.

The four points you would have to transform using this method to get a quad covering the whole screen are (-1, -1, z), (-1, 1, z), (1, 1, z) and (1, -1, z), where z = (zFar + zNear - 2 * zFar * zNear / desiredZ) / (zFar - zNear), which maps desiredZ = zNear to -1 and desiredZ = zFar to 1. desiredZ is how far away you want the plane, and zNear and zFar are the near and far clipping plane distances for the camera.

That will give you the four corners of the quad that will cover the screen sitting at desiredZ away from the camera.
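
If you happen to have GLM available (an assumption on my part; gluUnProject does the equivalent internally), the whole thing boils down to something like this:

#include <glm/glm.hpp>

// Transform an NDC point back to camera (eye) space: multiply by the inverse
// projection matrix, then divide by the resulting w.
glm::vec3 ndcToEye(const glm::mat4& proj, const glm::vec3& ndc)
{
    glm::vec4 eye = glm::inverse(proj) * glm::vec4(ndc, 1.0f);
    return glm::vec3(eye) / eye.w;
}

// NDC depth corresponding to an eye-space distance of desiredZ in front of the
// camera, for a standard perspective projection with planes zNear and zFar.
float ndcDepth(float zNear, float zFar, float desiredZ)
{
    return (zFar + zNear - 2.0f * zFar * zNear / desiredZ) / (zFar - zNear);
}

Calling ndcToEye for (-1, -1, z), (-1, 1, z), (1, 1, z) and (1, -1, z), with z = ndcDepth(zNear, zFar, desiredZ), gives the four camera-space corners.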

Although I would just recommend you set both the projection and modelview matrices to the identity matrix and then render the quad (-1, -1, z), (-1, 1, z), (1, 1, z), (1, -1, z), where z is the same as described above. The reason I would do this is that if you calculate the quad in world space, you are essentially transforming geometry from clipping space to world space and then back again. By using the identity for the projection and modelview matrices you don't actually do any transformation, because the geometry is already pre-transformed.
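
In old fixed-function GL that approach is only a few lines (a sketch, assuming a compatibility context and that z has been computed as above):

glMatrixMode(GL_PROJECTION);
glPushMatrix();
glLoadIdentity();
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glLoadIdentity();

glBegin(GL_QUADS);  // vertices are already in clip/NDC coordinates
glVertex3f(-1.0f, -1.0f, z);
glVertex3f( 1.0f, -1.0f, z);
glVertex3f( 1.0f,  1.0f, z);
glVertex3f(-1.0f,  1.0f, z);
glEnd();

glMatrixMode(GL_MODELVIEW);
glPopMatrix();
glMatrixMode(GL_PROJECTION);
glPopMatrix();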

Thanks to both of you for your help. Having had Brother Bob confirm my general approach was something that 'should work', I took a closer look at some of my core functionality.

It turns out the implementation I was using to generate a perspective projection matrix was wrong. It had always worked for me, but I'd never tested it to this kind of accuracy. After reviewing a tutorial on proper setup and making some changes to my gluPerspective() replacement, I'm getting the intended results when I render my 'full screen' quad.
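
In case it helps anyone else hitting the same thing, the matrix gluPerspective() builds is the standard one below (my own sketch of it, column-major as OpenGL expects, not my exact replacement code):

#include <cmath>

// Column-major equivalent of gluPerspective(fovy, aspect, zNear, zFar), with fovy in radians.
void perspectiveMatrix(float fovyRads, float aspect, float zNear, float zFar, float m[16])
{
    const float f = 1.0f / std::tan(fovyRads * 0.5f);
    for (int i = 0; i < 16; ++i) m[i] = 0.0f;
    m[0]  = f / aspect;
    m[5]  = f;
    m[10] = (zFar + zNear) / (zNear - zFar);
    m[11] = -1.0f;
    m[14] = (2.0f * zFar * zNear) / (zNear - zFar);
}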

HappyCoder, that is also a very useful trick that I hadn't thought of.

Cheers!
