Foreword
I believe in sharing ideas and knowledge whenever I can. This, I believe, is probably the only way to learn effectively and efficiently. Presenting an idea or an example is similar to passing your ideas through other people's heads and letting them do the thinking for you; more often than not, the feedback and comments are much more valuable than the original material presented. What better way to improve yourself? Unless you have a "million dollar" research budget to hire top-notch research staff, you are probably better off sharing and learning on open ground. But sometimes this is not possible due to company policies, lack of time and many other silly reasons. I'm trying my best, so if you have any comments or ideas that you would want to share, please drop me a mail here.
If you believe I have unknowingly infringed on the copyrights of other individuals or organizations in the data files or materials, please let me know!
I'm just a coder; I am not Sweeney, Carmack or God. Hence I tend to make mistakes, and I tend to make a lot of them. If there is any misinformation in what I have written, please let me know by email here and I'll be glad to update the documents in question.
Disclaimer
The following information is provided "as is", with ABSOLUTELY NO warranties. Any sample source code, compiled binaries or data files are tested and developed in the test environment described. They are not guaranteed to work on all machines.
Target Audience
Beginner to Intermediate. The method discussed here is completely API-agnostic, but the example is coded with MS DirectX 8.1. A basic understanding of vector math, render targets and world coordinate systems will be helpful in following this paper. The sample program makes use of DirectX's common files framework, so if you know the framework, the source code should be very easy to understand. But if you are just beginning graphics programming, don't fret; I did not have much of a head start either, so do give it a shot.
Tools
DX8.1 SDK, VisualStudio.net, MilkShape3D, AMD Duron 800MHz, GeForce4 Ti4600, some free textures and lots of time.
Sample Apps
Yes, there is a sample app. All source, models, data and textures are either free from the web or my own scrappy creations. You can get the sample app here. You are free to do anything with the source and use it anywhere you wish. But I would also appreciate it very much if you could drop me a mail if you use any of the materials here in any project or website; that's all I ask for. Thanks.
Author
Hun Yen Kwoon
http://geocities.com/codeman_net/
ykhun@PacketOfMilk.com
Introduction
This discussion of the VCP reflection technique comes with a sample program, which I implemented to demonstrate the method discussed. I will not refer much to the source code of the sample program (maybe only some important functions or notes); instead, I will concentrate on the theory behind the whole VCP technique. The reason is that the sample source is not really long and has plenty of comments thrown in, so you should be able to follow it easily after reading this paper. Source code is supposed to be self-explanatory, at least to coders.
I'm actually typing this introduction last, having just come out of a debugging session where I found a special case in the calculation of the rotation matrix that I had not taken care of previously. The model for the sample program, which I constructed in MilkShape3D, has a brick wall that I wanted to paste the mirror onto. This brick wall somehow happens to face the exact direction that triggers the special case I had not handled. The point here is, if I had not set out to write this paper in the first place, I would never have found out about this "slip of mind" case; well, maybe not never, but probably not as soon as I would like to. As Michael Abrash so pragmatically put it in this article: learn now, pay forward! Let's begin!
I will be describing the technique in DirectX's convention, using the left-hand coordinate system. For OpenGL, which uses the right-hand coordinate system, appropriate reminders will be given at different points throughout this paper. The virtual camera position is one of the many environment-mapping techniques employed in 3D graphics. I call it Virtual Camera Position (VCP) because I really couldn't come up with any other meaningful name, and there seems to be no standard naming convention for this kind of thing. Someone pointed out to me that this form of reflection generation could be termed "portal rendering", but there are probably dozens of books with a dozen different names for it. I could call it Virtual Eye Point, but what's wrong with Virtual Camera Position? :-) This technique is perhaps the most intuitive for doing reflections in 3D. Check out any junior school physics textbook and chances are you will come across the following figure:
Figure 1: Light Ray Reflection Over Mirror Plane
For any reflective surface in nature, the incident angle of a light ray is equal to the angle made by the reflected ray. This is the hard and fast law of reflection. Notice that we are talking about a single point on the reflective surface; the incident and reflection angles vary across the entirety of the surface.
The idea is simple: what the mirror shows is what is seen in the direction of the reflected ray. This means that we simply need to render the scene in the direction of the reflected ray from the virtual camera position (hence the name) in order to get the correct reflection. This turns out to be a "pixel perfect" method of doing mirror reflections, as long as the texture size used is equal to or above the pixel width and height of the rendered mirror. For example, if you are rendering at a screen resolution of 800x600 and choose to use a 256x256 texture for the reflection, the reflection quality will deteriorate when the camera gets really close to the mirror. The deterioration is due to the mirror covering a large portion of the 800x600 screen area with only a 256x256 texture.
Motivation
I recently read a description of an Image Of The Day (here) posted at flipcode. It's a nice demonstration of how mirrors can be done in a game scene. The technique, in brief, involves attaching a camera to a mirror plane and rendering from that camera position behind the mirror. This approach has several limitations, such as a no-objects-behind-mirror constraint and inaccurate reflections at most positions (it's meant to simulate reflections, not accurately produce them). This wasn't the motivation for researching robust 3D reflection, since I was already actively working on 3D reflections at that time, but that IOTD's method can be seen as a toned-down version of the Virtual Camera Position technique I am describing here. The real motivation was actually the mirror reflections I saw in the Doom3 video clip shown during QuakeCon 2002. Let's get to the real stuff!
Simple Light Ray
Referring back to Figure 1 on basic incident and reflection angles when a ray of light hits a reflective surface, we can derive the following 2D representation of the VCP projection mechanism:
Figure 2: Projection of Scene from VCP
For every frame rendered, the mirror can show a physically correct and accurate reflection simply by rendering the reflection from the corresponding VCP. The VCP changes whenever the camera translates in world space. The main thing to be done to render the reflection correctly from the VCP is to get the correct projection and view matrices (sorry OpenGL people, Direct3D has a view matrix!). This concept borders on simplicity and perfect physical sense, but probably not on ease of implementation, unless you have a good understanding of 3D vector spaces and how projection matrices are calculated (I don't!).
VCP Mirror Theory
The following is the list of steps to implement a VCP reflection properly:
1. Set up a render target, e.g. a texture
2. Calculate the reflection matrix
3. Compute the VCP from the reflection matrix
4. Compute the perpendicular distance of the camera from the mirroring plane in world space
5. Compute a rotation matrix to rotate the mirror plane so that it aligns with the x-y plane (i.e. becomes parallel with the x-y plane)
6. Use the previous rotation matrix to rotate the VCP by the same amount
7. Translate the mirror vertices by the negated x, y and z values of the rotated VCP
8. Compute the resultant min/max x and y coordinates of the mirror vertices
9. Set up the projection matrix using the results from step 4 and step 8
10. Point the virtual camera from the VCP at the world space position of the real camera, and compute the view matrix
11. Render the scene using the virtual camera's position, projection and view matrix
The reflection matrix in step 2 can easily be calculated using the D3DXMatrixReflect function in Direct3D. This matrix is really handy, as you can get the VCP simply by multiplying it with the current camera world space position. The multiplication has the effect of flipping the point at the current camera position over to the VCP on the other side of the mirroring plane. The reflection matrix simply inverts the coordinate system across the mirroring plane; it is a variation of the common identity matrix. So if the y-z plane were the mirroring plane, the reflection matrix would have a diagonal of {-1, 1, 1, 1}, which negates the x-axis appropriately. The following is the reflection matrix for a mirroring plane on the y-z plane:
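As an illustration, the effect of multiplying by the reflection matrix can be sketched without the SDK: reflecting a point P across a plane with unit normal n and plane equation n·X + d = 0 gives P' = P - 2(n·P + d)n. A minimal sketch in plain C++ (no D3DX; all names here are my own):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// Reflect point p across the plane n.X + d = 0 (n must be unit length).
// This is what multiplying p by the reflection matrix amounts to, so it
// turns the real camera's world space position into the VCP.
Vec3 reflectPoint(const Vec3& p, const Vec3& n, float d) {
    float k = 2.0f * (dot(n, p) + d);
    return Vec3{ p.x - k * n.x, p.y - k * n.y, p.z - k * n.z };
}
```

For a mirror lying on the y-z plane (normal {1, 0, 0}, d = 0), a camera at {3, 2, 5} maps to the VCP {-3, 2, 5}; only the component perpendicular to the mirror is flipped.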
Step 4 computes the perpendicular distance of the camera from the mirroring plane in world space. We need a vector from one of the mirror vertices to the camera position; let's call it a. We also need a vector representing the normal of the mirroring plane; let's call it n. The normal n can be calculated easily from the cross product of any two of the mirror's edges. A vector projection of a onto n will then yield the perpendicular distance we need.
Figure 3: Vector Projection to Find Perpendicular Distance
Figure 3 shows the vector projection used to find the perpendicular distance. In mathematical notation, the projection is:
d = a · (n / |n|)
This is just the dot product of vector a with the normalized n. Alternatively, the VCP calculated in step 3 can be used in place of the camera position when calculating a, but you would need to be careful with the sign of the resultant scalar distance. If the distance is negative, the camera is behind the mirror, and steps 5 to 11 can be skipped.
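In code, step 4 is a one-liner once a and n are available. A sketch under the same plain-C++ assumptions as before (helper names are mine):

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 cross(const Vec3& a, const Vec3& b) {
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return Vec3{ v.x / len, v.y / len, v.z / len };
}

// Perpendicular distance of the camera from the mirror plane: project
// a (mirror vertex -> camera) onto the unit plane normal n, where n is
// built from two edges of the mirror via a cross product.
float mirrorDistance(const Vec3& mirrorVertex, const Vec3& camera,
                     const Vec3& edge1, const Vec3& edge2) {
    Vec3 n = normalize(cross(edge1, edge2));
    Vec3 a{ camera.x - mirrorVertex.x,
            camera.y - mirrorVertex.y,
            camera.z - mirrorVertex.z };
    return dot(a, n);   // negative => camera is behind the mirror
}
```

For a mirror in the y-z plane with a vertex at the origin and edges along +y and +z, a camera at {4, 1, 0} yields a distance of 4, as expected.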
Steps 5 to 7 convert the mirror and the virtual camera (at the VCP) from world space to camera space (eye space in OpenGL jargon). In camera space, the virtual camera should sit at the origin.
Step 5 involves standard vector dot product arithmetic and some vertex rotations. We need to rotate the mirror vertices twice: once about the x-axis and once about the y-axis. The reason for doing this is to align the mirror's vertices with the x-y plane. The angle to rotate about the x-axis comes from the dot product of the mirror's up vector with the y-axis. The angle to rotate about the y-axis comes from the dot product of the mirror's "Left-To-Right" vector with the x-axis; the "Left-To-Right" vector can be computed from the lower left and lower right mirror vertices. The two rotations are combined into a single rotation matrix so that we can simply multiply the mirror vertices with it to effect the combined rotation.
We have to be very careful when computing the rotation about the y-axis. We need to ensure that, after rotation, the virtual camera faces the +ve z direction; if you are using a right-hand coordinate system, as in OpenGL, then it's the -ve z direction. This is a must, because the projection matrix calculation in step 9 will not work if the virtual camera does not face the +ve z direction. In fact, all projection matrix calculations for a left-hand coordinate system assume that the camera is looking in the +ve z direction! Again, the assumption for a right-hand coordinate system is the -ve z direction. We can make sure that our rotated virtual camera faces the +ve z direction by testing the z coordinate of the mirror's normal after it has been rotated by the rotation matrix computed earlier; the mirror's normal is the same as the direction of the virtual camera. We multiply it with the computed rotation matrix and check the rotated normal. The sign of the z coordinate of the rotated normal tells you whether the virtual camera will face the +ve or -ve z direction. If the z coordinate of the rotated normal is -ve (facing the -ve z direction), we need to re-compute the rotation matrix by negating the angle of rotation about the y-axis. The following figure sums up this paragraph, I hope.
Figure 4: Ensuring the Virtual Camera Faces +ve z Direction After Rotation
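The acos of a dot product loses the sign of the angle, which is exactly the special case above. A sketch of the sign test in plain C++ (rotation about the y-axis only, using the same row-vector convention as D3DXMatrixRotationY; function names are mine):

```cpp
#include <cassert>
#include <cmath>

// z component of a vector (x, 0, z) after rotating by 'angle' about the
// y-axis (row-vector convention, as D3DXMatrixRotationY uses).
float rotatedZ(float x, float z, float angle) {
    return -x * std::sin(angle) + z * std::cos(angle);
}

// Given the unsigned angle recovered from a dot product (via acos), pick
// the sign so that the rotated mirror normal faces the +ve z direction.
float fixYRotation(float angle, float normalX, float normalZ) {
    if (rotatedZ(normalX, normalZ, angle) < 0.0f)
        angle = -angle;   // re-compute with the negated angle
    return angle;
}
```

For a mirror normal pointing down +x, acos gives +90 degrees, but rotating by that angle would leave the normal facing -z; the test flips the angle to -90 degrees so the normal ends up facing +z.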
Step 6 uses the previous rotation matrix to rotate the virtual camera at the VCP by the same amount as the mirror vertices. Remember that the rotated virtual camera must face +ve z for a left-hand coordinate system (DirectX) and -ve z for a right-hand coordinate system (OpenGL).
Step 7 involves negating the x, y and z components of the rotated VCP and then translating the mirror vertices (already parallel with the x-y plane) by the negated values. This has the effect of translating the mirror vertices by exactly the amount needed to move the VCP to the origin. The conversion from world space to camera space is now complete, and we can start computing the projection matrix. Figure 5 shows the mirror vertices being rotated into alignment with the x-y plane and then translated (into camera space).
Figure 5: Transforming Mirror From World Space to Camera Space
Step 8 simply grabs the min/max x and y coordinates from the mirror vertices in camera space (i.e. after the rotation and translation).
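Steps 7 and 8 reduce to a single loop over the four mirror vertices. A sketch in plain C++ (Vec3 and all names are mine):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>

struct Vec3 { float x, y, z; };

struct Extents { float minX, maxX, minY, maxY; };

// Step 7: shift the mirror vertices so the rotated VCP sits at the origin,
// then step 8: grab the min/max x and y of the vertices in camera space.
Extents mirrorExtents(Vec3 verts[], std::size_t count, const Vec3& rotatedVcp) {
    Extents e{ 0, 0, 0, 0 };
    for (std::size_t i = 0; i < count; ++i) {
        verts[i].x -= rotatedVcp.x;   // translate by the negated VCP values
        verts[i].y -= rotatedVcp.y;
        verts[i].z -= rotatedVcp.z;
        if (i == 0) {
            e = Extents{ verts[i].x, verts[i].x, verts[i].y, verts[i].y };
        } else {
            e.minX = std::min(e.minX, verts[i].x);
            e.maxX = std::max(e.maxX, verts[i].x);
            e.minY = std::min(e.minY, verts[i].y);
            e.maxY = std::max(e.maxY, verts[i].y);
        }
    }
    return e;
}
```

The four resulting extents are exactly the left/right/bottom/top values the off-center projection in step 9 needs.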
Step 9 is where we formulate a matrix representing the projection we want in camera space. The values needed are the min/max x and y values, the near clip and the far clip. The near clip is the perpendicular distance calculated in step 4. The far clip depends on how far, or how "deep", you want the reflection to be; normally this should be a large value, or at least match the far clip of any custom view frustum. You can compute the projection matrix from the above ingredients using the standard formulas, or you can simply use the D3DXMatrixPerspectiveOffCenterLH function in DirectX. Figure 6 below gives a graphical depiction of such a projection in camera space.
Figure 6: Projection in Camera Space
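For the curious, the standard formula can be sketched in plain C++; to the best of my knowledge this is the same matrix D3DXMatrixPerspectiveOffCenterLH builds (row-major, row-vector convention as Direct3D uses):

```cpp
#include <cassert>
#include <cmath>

// Left-handed off-center perspective projection, laid out row-major for
// the Direct3D row-vector convention (v' = v * M).
// l/r/b/t are the mirror extents from step 8; zn is the distance from step 4.
void perspectiveOffCenterLH(float m[4][4], float l, float r,
                            float b, float t, float zn, float zf) {
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            m[i][j] = 0.0f;
    m[0][0] = 2.0f * zn / (r - l);
    m[1][1] = 2.0f * zn / (t - b);
    m[2][0] = (l + r) / (l - r);     // zero when the frustum is symmetric
    m[2][1] = (t + b) / (b - t);
    m[2][2] = zf / (zf - zn);
    m[2][3] = 1.0f;
    m[3][2] = zn * zf / (zn - zf);
}
```

Note that a symmetric frustum (l = -r, b = -t) zeroes the third-row skew terms, which recovers the familiar centered perspective projection; the off-center terms are what make the mirror projection work.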
Step 10 computes the view matrix for the virtual camera. The purpose of the view matrix in DirectX is to encode the orientation and position of the viewer; our viewer in this case is the virtual camera at the VCP. To compute the correct view matrix for the virtual camera, simply "point" the virtual camera at the original camera in world space. Take a look at Figure 7 below, which shows a top view of the off-center projection and the view of the virtual camera.
Figure 7: Top View of Projection and View of Virtual Camera
Again, you have the option of applying the full formulas to compute the view matrix, but it can be done easily with the D3DXMatrixLookAtLH function, which again is meant for a left-hand coordinate system; use the D3DXMatrixLookAtRH function if you are using a right-hand coordinate system. A point to note here is that the up vector required by D3DXMatrixLookAtLH should be the world up vector, which is usually the y-axis. Do not use the up vector of the camera: when the camera rotates, the reflection in the mirror should not rotate with it, hence the use of the world up vector, which is fixed at all times.
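For reference, the look-at construction can also be sketched in plain C++; I believe this matches what D3DXMatrixLookAtLH produces. For the VCP mirror, eye is the VCP, at is the real camera's world position, and up is the world up vector:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

Vec3 cross(const Vec3& a, const Vec3& b) {
    return Vec3{ a.y * b.z - a.z * b.y,
                 a.z * b.x - a.x * b.z,
                 a.x * b.y - a.y * b.x };
}

Vec3 normalize(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return Vec3{ v.x / len, v.y / len, v.z / len };
}

// Left-handed look-at view matrix, row-major as Direct3D lays it out.
// 'up' should be the world up vector (usually the y-axis), as noted above.
void lookAtLH(float m[4][4], const Vec3& eye, const Vec3& at, const Vec3& up) {
    Vec3 zaxis = normalize(Vec3{ at.x - eye.x, at.y - eye.y, at.z - eye.z });
    Vec3 xaxis = normalize(cross(up, zaxis));
    Vec3 yaxis = cross(zaxis, xaxis);
    float rows[4][4] = {
        { xaxis.x, yaxis.x, zaxis.x, 0.0f },
        { xaxis.y, yaxis.y, zaxis.y, 0.0f },
        { xaxis.z, yaxis.z, zaxis.z, 0.0f },
        { -dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1.0f }
    };
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            m[i][j] = rows[i][j];
}
```

A quick sanity check: a viewer at the origin looking down +z with a y-axis up vector should produce the identity matrix.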
Lastly, set the world and view matrices (e.g. pDevice->SetTransform(D3DTS_WORLD, &mat) in DirectX) and render the scene to the texture surface we prepared in step 1. You have just completed the 1st rendering pass! The 2nd pass is just drawing the entire scene, including the mirror, which can now be textured with the nice reflection texture you rendered previously. :-)
The following are some screen shots of incorrect and correct reflections generated using the VCP technique. The reasons for the inaccuracies will be explained later.
Figure 8: Incorrect Reflection Generated Using VCP
http://images.gamedev.net/features/programming/vcp/image011.jpg
Figure 9: Obvious Inaccuracies in Reflection Generated Using VCP
Figure 8 and Figure 9 show two screenshots taken during the implementation of the VCP sample. Figure 8 seems to show a correct reflection but is actually wrong: with the mirror standing perpendicular to the floor and its bottom touching it, the orientation and mismatch of the floor textures give a good hint that the reflection is wrong. Figure 9 shows a much more obvious capture of the same implementation. The floor reflection is totally wrong and the scene appears slightly skewed towards the bottom of the mirror. The errors were traced to an incorrect calculation of the projection matrix. A wrong projection matrix easily causes an incorrect reflection, as shown in Figure 10 below, and such an error is not easy to trace if the projection is only marginally off.
http://images.gamedev.net/features/programming/vcp/image012.gif
Figure 10: Incorrect Off Center Projection Calculation
Now let's take a look at some screen captures of correct reflections generated by the sample.
http://images.gamedev.net/features/programming/vcp/image013.jpg
Figure 11: Correct Perspective VCP Reflection
Notice the correct reflection of the floor immediately in front of the mirror. Since the mirror's bottom edge touches the floor, the reflection should be as depicted in Figure 11 above instead of the one in Figure 9. The next screen capture shows another accurate reflection generated from an elevated viewing position.
http://images.gamedev.net/features/programming/vcp/image014.jpg
Figure 12: Correct VCP Reflection on Floor Textures
Ok! That's all for VCP reflections. Fire up your favorite development environment or editor and start coding!
Reference
I believe in sharing ideas and knowledge whenever I can. This, I believe is probably the only way to learn effectively and efficiently. Presenting an idea or an example is similar to passing your ideas through people's head and letting them do the thinking for you; more often than not, the feedback and comments are much more valuable then the original material presented. What better way to improve yourself? Unless you have "million dollar" research budgets to get top-notch research staff, you are probably better off sharing and learning on open grounds. But sometimes this is not possible due to company policies, lack of time and many other silly reasons. I'm trying my best so if you have any comments or ideas that you would want to share, please drop me a mail here.
If you believe I had unknowingly trespassed on copyrights of other individuals/organizations in the data files or materials, please let me know!!!
I'm just a coder; I am not Sweeney, Carmack or God. Hence I tend to make mistakes and I tend to make a lot of them. If there is any misinformation that I have written, please let me know by email here and I'll be glad to update the documents in question.
Disclaimer
The following information is provided "as is", with ABSOLUTELY NO warranties. Any sample source code, compiled binaries or data files are tested and developed in the test environment described. They are not guaranteed to work on all machines.
Target Audience
Beginner to Intermediate. The method discussed for this topic is totally universal, but the example is coded in MS DirectX8.1. A basic understanding of basic vector math, render targets and world coordinate systems would be helpful in understanding this paper. The sample program makes use of DirectX's common files framework, so if you know the framework, the source code should be very easy to understand. But if you are just beginning to do graphics programming, don't fret, I did not have much of a head start either, so do give it a shot.
Tools
DX8.1 SDK, VisualStudio.net, MilkShape3D, AMD Duron 800Mhz, GeForce4 Ti4600, some free textures and lots of time.
Sample Apps
Yes, there is a sample app. All source, models, data or textures are either free from the web or my own scrappy creations. You can get the sample app here. You are free to do anything with the source and use it anywhere you wish. But I would also appreciate it very much if you could drop me a mail if you use any of the materials here in any project or websites, that's all I ask for. Thanks.
Author
Hun Yen Kwoon
http://geocities.com/codeman_net/
ykhun@PacketOfMilk.com
Introduction
This discussion on the VCP reflection technique comes with a sample program, which I implemented to demonstrate the method discussed. I will not refer much to the source code of the sample program (maybe only some important functions or notes), instead I will concentrate on the theory behind this whole VCP thingy. The reason for doing this is that the sample source is not really long and it had quite a load of comments thrown in, you should be able to follow it easily after reading this paper. Source code is supposed to be self-explanatory, at least to coders.
I'm actually typing this introduction last and had actually just come out of a debugging session where I found a special case in the calculation of the rotation matrix, which I did not take care of previously. The model for the sample program, which I constructed in MilkShape3D, had a brick wall that I wanted to paste the mirror onto. This brick wall somehow happens to face the exact direction that triggers the special case I did not take care of. The point here is, if I did not set out to write this paper in the first place, I would never have found out about this "slip of mind" case, well maybe not never, but probably not as soon as I would like to. As Michael Abrash of Id Software had so pragmatically put it in this article: Learn now pay forward! Let's begin!
I will be describing the technique in DirectX's convention using the Left Hand coordinate system. For OpenGL, which uses the Right Hand coordinate system, appropriate reminders will be given at different point throughout this paper. The virtual camera position is one of the many environment-mapping techniques employed in 3D graphics. I call it Virtual Camera Position (VCP) because I really couldn't come up with any other meaningful names and there seems to be no standard naming conventions for this kind of things. Someone pointed out to me that this form of reflection generation technique could be term under "portal rendering", but there are probably dozens of books with a dozen different names to call. I could call it Virtual Eye Point, but what's wrong with Virtual Camera Position? :-) This technique is perhaps the most intuitive for doing reflections in 3D. Check out any junior school physics textbook and chances are, you would come across the following figure:
Figure 1: Light Ray Reflection Over Mirror Plane
For any reflective surfaces in nature, the incident angle of a light ray will be equal to the reflection angle made by the reflection light ray. This is the hard and fast law of reflection. Noticed that we are talking about a single point on the reflective surface. The incident angle and reflection angle varies across the entirety of the reflective surface.
The idea is simple: What the mirror shows is similar to what is seen in the direction of the reflected ray. This means that we simply need to render the scene in the direction of the reflected ray from the virtual camera position (hence the name) in order to get the correct reflection. This turns out to be a "pixel perfect" method of doing mirror reflections as long as the texture size used is equal to or above the pixel width and height of the rendered mirror. For example, if you are rendering at a screen resolution of 800x600 and choose to use a 256x256 texture for the reflection, the reflection quality would deteriorate when the camera goes really close to the mirror. The deterioration is due the mirror covering a large portion of the 800x600 screen area with only a 256x256 resolution texture.
Motivation
I recently read a description of an Image Of The Day (here) posted at flipcode. It's a nice demonstration of how mirrors could be done in a game scene. The technique, in brief terms, talks about attaching a camera to a mirror plane and rendering from that camera position behind the mirror. This approach has several limitations such as a no-objects-behind-mirror constraint and inaccurate reflections at most positions (Its meant to simulate reflection, not accurately produce them). This wasn't the motivation for doing a research on robust 3D reflection since I was already actively working on 3D reflections at that time. But that IOTD's method can be seen as a toned down method of the Virtual Camera Position that I am describing here. By the way, the real motivation was the mirror reflections I saw in the Doom3 video clip showed during QuakeCon 2002. Let's get to the real stuff!
Simple Light Ray
Referring back to Figure 1 on basic incident and reflection angle representation when a ray of light hits a reflective surface, we could get to the following 2D representation of a VCP projection mechanism:
Figure 2: Projection of Scene from VCP
For every frame rendered, the mirror could show a physically correct and accurate reflection simply by rendering the reflection from the corresponding VCP. The VCP changes whenever the camera translates in world space. The main thing to be done to render the reflection correctly from the VCP is to get the correct projection and view matrix (Sorry OpenGL people, Direct3D have a view matrix!). This concept borders on the verge of simplicity and perfect physical sense, but probably not on the ease of implementation frontier, not unless you have a good understanding on 3D vector space and how projection matrices are calculated (I don't!).
VCP Mirror Theory
The following are the list of steps to be done to implement such a VCP reflection properly:
- Set up a render target. E.g. a texture
- Calculate the reflection matrix
- Compute the VCP from the reflection matrix
- Compute the perpendicular distance of the camera from the mirroring plane in world space
- Compute rotation matrix to rotate the mirror plane so that it aligns with the x-y plane (e.g. parallel with x-y plane)
- Use previous rotation matrix to rotate VCP by the same amount
- Translate mirror vertices by the magnitude of the VCP –x, –y and –z values
- Compute resultant min/max x and y mirror vertices coordinates
- Set up projection matrix using the results from step 4 and step 8
- Point virtual camera from original VCP to world space coordinate of the real camera, and compute the view matrix
- Render the scene using the virtual camera's position, projection and view matrix
The reflection matrix in step 2 can be easily calculated using the D3DXMatrixReflect function in Direct3D. This matrix is really handy as you could easily get the VCP by multiplying it with the current camera world space position. The multiplication has the same effect of flipping the point at the current camera position to the VCP in world space. The reflection matrix simply inverts the coordinate system on the other side of the mirroring plane. It's a variation of the common identity matrix. So if the y-z plane were the mirroring plane, then the reflection matrix would have a diagonal of {1, -1, 1}, which inverts the y-axis appropriately. The following is the reflection matrix for a mirroring plane on the y-z plane:
Step 4 computes the perpendicular distance of the camera from the mirroring plane in world space. We need a vector from one of the mirror vertex to the camera position lets call it a. We also need another vector representing the normal of the mirroring plane lets call it n. Normal n could be calculated easily by the cross product of any of the mirror's two edges. A vector projection of vectors a on n will then yield the perpendicular distance we need.
Figure 3: Vector Projection to Find Perpendicular Distance
Figure 3 shows the vector projection to find the perpendicular distance. In mathematical notation, the vector projection would be as follows:
This is just the dot product of vector a with normalized n. Alternatively, the VCP calculated in step 3 could be used to replace the camera position when calculating a but you would need to be careful with the signage of the resultant scalar distance. If the distance is negative, the camera is behind the mirror; hence steps 5 to 11 can be skipped.
Steps 5 to 7 are meant for converting the mirror and the virtual camera (at VCP) from world space to camera space (eye space for OpenGL jargon). In camera space, the virtual camera, should sit on the origin. Step 5 involves standard vector dot product arithmetic and some vertex rotations. We need to rotate the mirror vertices twice: once about the x-axis and once about the y-axis. The reason for doing this is to align the mirror's vertices with the x-y plane. The angle to rotate about the x-axis comes from the dot product of the mirror up vector and the y-axis. The angle to rotate about the y-axis comes from the dot product of the mirror's "Left-To-Right" vector with the x-axis. The "Left-To-Right" vector can be computed using the lower left mirror vertex and the lower right mirror vertex. The two rotations are combined into a rotation matrix so that we can just multiply the mirror vertices with it to effect the combined rotation. We have to be very careful when it comes to computing the rotation about the y-axis. We need to ensure that after rotation, the virtual camera would be facing the +ve z direction; if you are using a Right-Hand coordinate system as in OpenGL then it's the -ve z direction. This is a must because the projection matrix calculation in step 9 would not work if the virtual camera does not face the +ve z direction. In fact, all projection matrix calculation for Left Hand coordinate system assumes that the camera is looking at the +ve z direction! Again the assumption for Right Hand coordinate system is –ve z direction. We can make sure that our virtual camera, when rotated, faces the +ve z direction by testing the z coordinate value of the mirror's normal after it had been rotated by the rotation matrix calculated earlier on. The mirror's normal is the same as the direction of the virtual camera. We multiply it with the computed rotation matrix and check the rotated normal. 
The signage of the z coordinate of the rotated normal tells you whether the virtual camera will be facing +ve or –ve z direction. If the z coordinate of the rotated normal is –ve (facing –ve z direction), we would need to re-compute the rotation matrix by negating the angle to rotate about the y-axis. The following figure sums up this paragraph, I hoped.
Figure 4: Ensuring the Virtual Camera Faces +ve z Direction After Rotation
Step 6 uses the previous rotation matrix to rotate the virtual camera at VCP by the same amount as the mirror vertices. Remember that the rotated virtual camera must face +ve z for Left Hand coordinate system (DirectX) and –ve z for Right Hand coordinate system (OpenGL).
Step 7 involves negating the x, y and z components of the rotated VCP and then translating the mirror vertices (already parallel with the x-y plane) using the negated values. This has the effect of translating the mirror vertices by the same magnitude needed to move the VCP to the origin. The conversion from world space to camera space is now complete and we can start computing the projection matrix. Figure 5 shows the mirror vertices being rotated for alignment with the x-y plane and then translated (to camera space).
Figure 5: Transforming Mirror From World Space to Camera Space
Step 8 basically grabs the min/max x and y coordinates from the mirror vertices in camera space (e.g. after the rotation and translation).
Step 9 is where we formulate a matrix to represent the projection that we want in camera space. The values needed are the min/max x and y values, the near clip and the far clip. The near clip is the perpendicular distance calculated in step 4. The far clip would depend on how far, or how "deep", you want the reflection to be. Normally this should be a large value or at least matching the far clip value of any custom view frustum. You could compute the projection matrix with the above ingredients using formulas or you could simply use the D3DXMatrixPerspectiveOffCenterLH function in DirectX. Figure 6 below shows the graphical depiction of such a projection in camera space.
Figure 6: Projection in Camera Space
Step 10 computes the view matrix for the virtual camera. The purpose of the view matrix in DirectX is to encode the orientation and position of the viewer. Our viewer in this case is the virtual camera at the VCP. To compute the correct view matrix for the virtual camera, simply "point" the virtual camera at the original camera in world space. Take a look at Figure 7 below, which shows the top view of the off center projection and the view of the virtual camera.
Figure 7: Top View of Projection and View of Virtual Camera
Again, you have the option of applying the full formulas to compute the view matrix, but it can be done easily with the D3DXMatrixLookAtLH function, which is again meant for the left-hand coordinate system; use the D3DXMatrixLookAtRH function if you are using the right-hand coordinate system. A point to note here is that the up vector required by the D3DXMatrixLookAtLH function should be the world up vector, which is usually the y-axis. Do not use the camera's own up vector: when the camera rotates, the reflection in the mirror should not rotate with it, it should stay fixed. Hence the use of the world up vector, which stays fixed at all times.
Lastly, set the world and view matrices (e.g. pDevice->SetTransform(D3DTS_WORLD, mat) in DirectX) and render the scene to the texture surface we prepared in step 1. You have just completed the 1st rendering pass! The 2nd pass simply draws the entire scene, including the mirror, which can now be textured with the nice reflection texture you rendered previously. :-)
The following are some screen shots of incorrect and correct reflections generated using the VCP technique. The reasons for the inaccuracies will be explained later.
Figure 8: Incorrect Reflection Generated Using VCP
Figure 9: Obvious Inaccuracies in Reflection Generated Using VCP
Figure 8 and Figure 9 show two screenshots taken during the implementation of the VCP sample. Figure 8 seems to be a correct reflection but is actually wrong. With the mirror standing perpendicular to the floor and its bottom edge touching it, the orientation and mismatching of the floor textures give a good hint that the reflection is wrong. Figure 9 shows a much more obvious screen capture of the same implementation. The floor reflection is totally wrong and the scene appears a little skewed towards the bottom of the mirror. The errors were traced to the incorrect calculation of the projection matrix. A wrong projection matrix will easily cause an incorrect reflection, as shown in Figure 10 below, and such an error is not easy to trace if the projection is only marginally off.
Figure 10: Incorrect Off Center Projection Calculation
Now let's take a look at some screen captures of correct reflections generated in the sample.
Figure 11: Correct Perspective VCP Reflection
Notice the correct reflection of the floor immediately in front of the mirror. Since the mirror's bottom edge is touching the floor, the reflection should be as depicted in Figure 11 above instead of the one in Figure 9. The next screen capture shows another accurate reflection generated from an elevated viewing position.
Figure 12: Correct VCP Reflection on Floor Textures
Ok! That's all for VCP reflections. Fire up your favorite development environment or editor and start coding!
References
- DirectX 8.1 SDK Documentation
- Beginning Direct3D Game Programming, Wolfgang F. Engel, Amir Geva and Andre LaMothe, Premier Press, 2001, ISBN 0-7615-3191-2
- Direct3D ShaderX: Vertex and Pixel Shader Tips and Tricks, Wolfgang F. Engel, Wordware Publishing Inc, 2002
- Real-Time Rendering, 1st Edition, Tomas Moller and Eric Haines, A K Peters Ltd, 1999, ISBN 1-56881-101-2
- Flipcode IOTD, 1st September 2002: http://www.flipcode.com/cgi-bin/msg.cgi?showThread=09-01-2002&forum=iotd&id=-1
- Robsite MilkShape 3D tutorials: http://www.robsite.de/tutorials.php?tut=milkshape
- 3D Total website for the textures: http://newsite.3dtotal.com/
- Philip Taylor, "Exploring D3DX Part 2: Textures", Microsoft Corp: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dndrive/html/directx08202002.asp
- Texture Mapping brief: http://www.bol.ucla.edu/~dremba/3dg_texture_mapping.htm