First of all, the main problem I'm working on actually requires perspective in its final implementation, so I can't just use an orthographic projection and map the quad to the viewport dimensions. In this experiment I'm trying to prove my understanding of the perspective projection (and failing). Here is my problem:
I've set up a standard 16:9 perspective projection in OpenGL, and set up a view matrix aligned down the z-axis, with the 'eye position' at the world origin.
I then create a quad at a given depth along the z axis, facing the camera. I use my horizontal FOV and aspect ratio to generate the quad width/height such that it should perfectly cover the viewport at the distance from camera I've specified.
In my tests the quad is created with the right aspect ratio, but it always appears 'too big' (or too close to the camera, depending on how you look at the problem). I have to move the camera away from the quad along the z-axis to get it to 'shrink' down to viewport size.
I'm sure I'm just missing something silly but it is driving me crazy! As far as I can tell, in order to generate my quad dimensions, all I should need is simple trig, i.e.:

quadHalfWidth = tan(m_ImageHorzFOVRads / 2.0) * quadDistanceFromCamera
quadHalfHeight = tan(m_ImageVertFOVRads / 2.0) * quadDistanceFromCamera
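As a sanity check on the trig above (a minimal sketch with hypothetical numbers, not my actual code): one subtlety is that the vertical FOV is not simply horzFOV / aspect, because it's the half-angle *tangents* that scale with the aspect ratio, not the angles themselves.

```python
import math

# Hypothetical values: 16:9 aspect, 90-degree horizontal FOV, quad 10 units away.
aspect = 16.0 / 9.0
horz_fov = math.radians(90.0)
dist = 10.0

quad_half_width = math.tan(horz_fov / 2.0) * dist   # tan(45 deg) * 10 = 10.0

# Note: vert_fov != horz_fov / aspect; divide the tangent, then take atan.
vert_fov = 2.0 * math.atan(math.tan(horz_fov / 2.0) / aspect)
quad_half_height = math.tan(vert_fov / 2.0) * dist  # == quad_half_width / aspect

print(quad_half_width, quad_half_height)            # 10.0  5.625
```

If the vertical FOV were instead computed as horz_fov / aspect (about 50.6 degrees here rather than the correct ~58.7), the quad's aspect ratio would be off, so that doesn't explain a uniformly 'too big' quad, but it's worth ruling out.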
I generate the quad corners by combining my quad centre (0, 0, distance from camera), with +/- halfwidth and halfheight offsets appropriately.
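For what it's worth, I can verify the geometry in isolation by pushing a corner through the relevant terms of a gluPerspective-style matrix (a sketch with assumed numbers; note that gluPerspective takes the *vertical* FOV, and clip-space w equals the eye-space distance for a camera looking down -z). With matching FOV conventions the corner lands exactly on the frustum edge, NDC (1, 1):

```python
import math

# Assumed setup: 16:9 aspect, 90-degree horizontal FOV, quad 10 units in front
# of a camera at the origin looking down -z.
aspect = 16.0 / 9.0
horz_fov = math.radians(90.0)
vert_fov = 2.0 * math.atan(math.tan(horz_fov / 2.0) / aspect)
dist = 10.0

half_w = math.tan(horz_fov / 2.0) * dist
half_h = math.tan(vert_fov / 2.0) * dist
corner = (half_w, half_h, -dist)        # top-right corner of the quad

# The only projection-matrix terms that matter for x/y here:
#   clip.x = (f / aspect) * eye.x,  clip.y = f * eye.y,  clip.w = -eye.z
f = 1.0 / math.tan(vert_fov / 2.0)
w = -corner[2]                          # = dist

ndc_x = (f / aspect) * corner[0] / w
ndc_y = f * corner[1] / w
print(ndc_x, ndc_y)                     # 1.0 1.0 -> corner on the frustum edge
```

So the trig itself checks out when the FOV fed to the quad-sizing code and the FOV baked into the projection matrix agree in both axis and units, which makes me suspect a mismatch somewhere (e.g. horizontal vs vertical, or degrees vs radians) rather than the formula.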
Can anyone explain why the math above wouldn't generate quad corners that fall on the horizontal and vertical edges of the viewport frustum?