lens distortion?

6 comments, last by someusername 18 years, 4 months ago
Hi everyone, I've got a 'camera matching' problem that's been bugging me all day. Do you know if the major rendering applications apply any kind of effect to simulate real-lens distortion in their images?

I've got a bitmap produced with such a program. I know the exact camera position, target position, lens mm, and FOV, and I took into account all the width/height proportions that could influence anything, but when I try to simulate the same camera in a Direct3D scene (using the aforementioned bitmap as background) the objects don't quite align as they should. And it's not just scale/orientation issues; the perspective is visibly different in different portions of the image. I've used reference objects in the bitmap and their counterparts in the D3D scene, and they're noticeably out of alignment, especially close to the camera. I'm pretty sure I've got all the camera/viewport properties right.

The only thing I can think of is the possibility that the rendering app applied some kind of lens distortion to the image. Have you heard of anything like that? And if yes, what kind?

I hope I posted this in the right place... Thanks in advance
There's a fairly high probability that there's some lens distortion occurring in the image. If it's a high-end program using fancy ray tracing and the like, then it's not too unlikely that intrinsic camera parameters (such as radial and tangential distortions of the lens) are modelled into the ray projections.

If this is the case, then you can model them to some degree by doing a post-processing pass on your render to distort it in the same way (but this will require Shader Model 3.0), and you'll also have to move your camera position and FoV to emulate the effects of the lens. (I'll go into more detail on this later if it turns out this is the case.)

Or for a cheaper solution, you could inverse distort the background image and render as normal :)

... OR, the program may not be adding a lens effect at all and it may be a completely different problem - it's hard to say. Try providing some screenshots and I'll see if I can tell what's going on.
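
If it is radial distortion, the usual Brown model is a decent starting point. Here's a minimal sketch in C++; the k1 and k2 coefficients are hypothetical and would have to be estimated from the image:

struct Vec2 { float x, y; };

// Distort a point given in normalized image coordinates, where (0,0) is the
// principal point (image centre). This is the standard radial-only Brown model.
Vec2 distort(Vec2 p, float k1, float k2)
{
    float r2 = p.x * p.x + p.y * p.y;            // squared radius from centre
    float scale = 1.0f + k1 * r2 + k2 * r2 * r2; // radial scaling factor
    return { p.x * scale, p.y * scale };
}

// Approximate inverse (for the "inverse distort the background" option).
// There is no closed form, but fixed-point iteration converges quickly
// for mild distortion.
Vec2 undistort(Vec2 p, float k1, float k2)
{
    Vec2 u = p;
    for (int i = 0; i < 10; ++i) {
        float r2 = u.x * u.x + u.y * u.y;
        float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
        u.x = p.x / scale;
        u.y = p.y / scale;
    }
    return u;
}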
Quote:Original post by DarkImp
If it's a high-end program using fancy ray-tracing and things [...]

yes, that kind of program...

The problem is that I can't tell for sure whether there is distortion or not, and of what kind. I know there are plugins out there that can correct the image, but I think they're aimed at major distortions like those in panoramic lenses, etc.
I got the scene and the bitmap to align perfectly at focal distances above 1~2 meters, but closer to the lens it just won't match! I was hoping for pixel-perfect alignment, because some of the objects in the bitmap would leave z-imprints in the z-buffer in order to occlude some 3D objects in my scene and look like they're part of it. I guess I can kiss that idea goodbye...

Btw, has anyone ever come across that infamous 'camera matching' algorithm? You're supposed to assign 3D positions to a number of points in your bitmap, and it constructs a camera to match your bitmap's projection. It uses iterative methods.
Google doesn't seem to be much help with this one...

Thanks anyway, though



Perhaps it would be easier to do it vice-versa?

Align the camera in-game as closely as you can, take a screenshot and then mimic the scene in the 3d program...?

Sounds like a long (and slow) way, but I guess you don't need this super-special-deluxe lens effect in every scene, do you? :)

Does the same problem exist if you take a more normal FOV, e.g. 45 degrees or something?

Also, do you really need to get closer than 2 meters? :)

Hope you get it fixed!

Peace!!
"Game Maker For Life, probably never professional thou." =)
Oh, and btw...

Does your image contain z-values? If so, do the characters etc. really need to be 100% correct in perspective?
I remember the old "Alone in the Dark" games; they were far from correct, and that didn't affect gameplay at all. :)
"Game Maker For Life, probably never professional thou." =)
If it's only the closer objects that are wrong, then you've probably got the camera position wrong. Remember that a lens can alter the FoV of an image and hence push the focal point backwards in space. In many cases the focal point in world space is well outside the bounds of the camera (which is how zoom works).

I'm guessing you'll need to work out the internal focal point of the application camera and alter your render camera's position and FoV accordingly.
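
As a sketch of the focal-length/FoV relationship (assuming the common 36 mm-wide film back; the render app may use a different aperture):

#include <cmath>

// FOV implied by a focal length and film-back width, in radians.
double fovFromFocalLength(double focalLengthMM, double filmBackMM)
{
    return 2.0 * std::atan(filmBackMM / (2.0 * focalLengthMM));
}
// e.g. a 35mm lens on a 36mm film back:
// 2 * atan(36 / (2 * 35)) = 0.95 rad, about 54.4 degrees horizontal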

I'd do a diagram to explain this better but I've nowhere to host it. If it makes no sense then say so, and I'll find a way :)

Hope this helps.
You can solve for the projection matrix. You have to unproject the points back to the stage immediately after the projection, prior to the perspective division. You have 16 unknowns, so you need 16 equations. The product of two 4x4 matrices producing a 4x4 matrix gives you those 16 equations, and you can solve them a column at a time.

The biggest problem is that the data isn't exact; it's rounded to the nearest pixel, which is pretty substantial rounding. Selecting from four billion values with floats can be a problem, and here you're selecting from perhaps 2k at best. So you're going to have to use least squares, which means solving A^T A x = A^T b rather than Ax = b. The latter may not have a solution, but the former will. Symbolically you would need just four points; I believe a non-degenerate triangle plus a non-coplanar fourth point would do it. Your data isn't very accurate, though.
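
A minimal sketch of that normal-equations solve, assuming D3D's row-vector convention (p = v * P), with `in` holding the points before projection (w = 1) and `out` the same points immediately after projection, before the divide; the names are illustrative:

#include <array>
#include <cmath>
#include <cstddef>
#include <utility>
#include <vector>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<std::array<double, 4>, 4>;

// Solve the 4x4 system M x = b by Gaussian elimination with partial pivoting.
Vec4 solve4(Mat4 M, Vec4 b)
{
    for (int i = 0; i < 4; ++i) {
        int pivot = i;
        for (int r = i + 1; r < 4; ++r)
            if (std::fabs(M[r][i]) > std::fabs(M[pivot][i])) pivot = r;
        std::swap(M[i], M[pivot]);
        std::swap(b[i], b[pivot]);
        for (int r = i + 1; r < 4; ++r) {
            double f = M[r][i] / M[i][i];
            for (int c = i; c < 4; ++c) M[r][c] -= f * M[i][c];
            b[r] -= f * b[i];
        }
    }
    Vec4 x{};
    for (int i = 3; i >= 0; --i) {
        double s = b[i];
        for (int c = i + 1; c < 4; ++c) s -= M[i][c] * x[c];
        x[i] = s / M[i][i];
    }
    return x;
}

// Least-squares fit of P from correspondences: for each column of P,
// solve A^T A x = A^T b, where the rows of A are the input points.
Mat4 fitProjection(const std::vector<Vec4>& in, const std::vector<Vec4>& out)
{
    Mat4 AtA{}; // A^T A, shared by all four columns
    for (const Vec4& v : in)
        for (int r = 0; r < 4; ++r)
            for (int c = 0; c < 4; ++c) AtA[r][c] += v[r] * v[c];

    Mat4 P{};
    for (int col = 0; col < 4; ++col) {
        Vec4 Atb{}; // A^T b for this column's right-hand side
        for (std::size_t i = 0; i < in.size(); ++i)
            for (int r = 0; r < 4; ++r) Atb[r] += in[i][r] * out[i][col];
        Vec4 x = solve4(AtA, Atb);
        for (int r = 0; r < 4; ++r) P[r][col] = x[r];
    }
    return P;
}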

Use four points and those four points might be matched exactly, but the rest may actually be worse. I suppose with a lot of analysis it would be possible to say how many points, and in what positions, would be optimal, but I don't feel up to it. I would suggest just experimenting. More points should tend to be more accurate, so you might start with a massive number of points to get an idea of what the right answer is, then experiment with how many points, and what distribution, gets you reasonably close.

A measure of the closeness is the product of the 1-norm and the infinity norm of the difference of the two matrices. That's an upper bound for the square of the 2-norm, which tells you how much a vector transformed by one matrix could differ from the same vector transformed by the other.
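
A sketch of that measure in code, reusing the Mat4 type from the sketch above; the 1-norm is the maximum absolute column sum and the infinity norm the maximum absolute row sum:

#include <algorithm>
#include <cmath>

double closeness(const Mat4& A, const Mat4& B)
{
    double norm1 = 0.0, normInf = 0.0;
    for (int i = 0; i < 4; ++i) {
        double colSum = 0.0, rowSum = 0.0;
        for (int j = 0; j < 4; ++j) {
            colSum += std::fabs(A[j][i] - B[j][i]); // column i of A - B
            rowSum += std::fabs(A[i][j] - B[i][j]); // row i of A - B
        }
        norm1   = std::max(norm1, colSum);
        normInf = std::max(normInf, rowSum);
    }
    return norm1 * normInf; // upper bound on the squared 2-norm of A - B
}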

An alternative approach: once you have a reasonable estimate of the actual matrix, experiment and see if you can guess how it is built. You know the parameters that went into building it, and you know it is a perspective projection matrix. That leaves room for variation, but there is a limit to how those parameters could be used. If you can guess how the matrices are defined, then you can just translate between them and not have to hassle with solving anything in the actual application. That can get rather frustrating, though, since it is pretty much trial and error.
Keys to success: Ability, ambition and opportunity.
Sorry, I didn't get a chance to check this yesterday... I'll try to address each one.

@Rasmadrak
I'm not quite sure what you mean by taking a screenshot and trying to replicate it in the 3d app... Can you elaborate a little more on that please?
The bitmap was created with a 35mm lens; its FOVs are: horizontal 54°, vertical 44°, diagonal 66°.
And yes, I need pixel-perfect alignment, because I have 3D geometry in Direct3D for objects which only exist in the bitmap, and since I know their exact positions/orientations I can render them directly into the empty z-buffer before drawing my scene (turning off D3DRS_COLORWRITEENABLE) to force the objects in my 3D scene to be occluded where that is needed.
My image does not contain a z channel, although I could produce one in no time. I wasn't sure I could blit to the z-buffer like to any regular D3D surface, so I didn't opt for that.
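
For what it's worth, that depth-only pass would look something like this in D3D9 (a sketch; device is assumed to be a valid IDirect3DDevice9*, and DrawOccluderGeometry/DrawScene are hypothetical helpers):

// Draw the matchmoved stand-in geometry into the z-buffer only, so it
// occludes the real 3D objects without overwriting the background bitmap.
device->SetRenderState(D3DRS_COLORWRITEENABLE, 0); // depth writes only
DrawOccluderGeometry(device);

// Restore colour writes, then draw the visible scene as normal.
device->SetRenderState(D3DRS_COLORWRITEENABLE,
    D3DCOLORWRITEENABLE_RED | D3DCOLORWRITEENABLE_GREEN |
    D3DCOLORWRITEENABLE_BLUE | D3DCOLORWRITEENABLE_ALPHA);
DrawScene(device);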

@DarkImp
Quote:
If it's only the closer objects that are wrong then you've probably got the camera position wrong.

I didn't mean that my scenes align at a focal depth of 2 miles! (lol)
The entire scene is no more than 4 meters deep in camera-local z. You're absolutely right, though; it didn't even cross my mind that the point of view is pushed back behind the lens by 35mm, so as to take that into account. But I had to compensate for a lot more than that in D3D's camera to get it to look at least natural (not pixel perfect), so I guess it's not just this.
I looked everywhere; I'm pretty sure I can't get the 3D app to give me the view/projection matrices.

@LiBudyWizer
I agree that this is the most straightforward/accurate way. I spent an entire day trying to match a camera! I could have solved that system by hand in an entire day! (lol)
I have the mathematical background to do what you suggest, and I must admit this solution has been in my mind ever since this came up. I've worked with camera-matching utilities, and from that experience I learned that the algorithm they use is based on iterative solution (since you define a max number of iterations), and they give an estimate of the error (if they can't get the residual norm smaller than the desired precision). This definitely implies numerical solution of a system of equations.
The catch, though, and the reason I haven't tried this yet, is that you need to be certain there is nothing else applied after the projection matrix... I don't even know if the 3D app tweaks my view matrix to begin with! (which it probably does, as DarkImp suggested) If it tweaks the view matrix and also applies other effects after the projection, I would still be trying to get that numerical system to converge!
These algorithms work with 5 points, of which at least 2 must be linearly independent. And the system should be a lot smaller than 16x16, because there are only 5 non-zero entries in my projection matrix, and one of those is not an independent variable. So I guess this is the only thing left to try...
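
If it helps the guessing, D3D's own D3DXMatrixPerspectiveFovLH builds exactly that shape (row-vector convention): five non-zero entries, with the fixed 1 in the third row being the dependent one. With yScale = cot(fovY/2) and xScale = yScale/aspect:

// | xScale   0        0               0 |
// | 0        yScale   0               0 |
// | 0        0        zf/(zf-zn)      1 |
// | 0        0       -zn*zf/(zf-zn)   0 |

The render app may of course build its projection differently, but this is the layout to compare against.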

Anyway, thanks for the input, I really appreciate it.
I'll keep you posted

