Okay, so I want to make a small DirectX project that moves the camera automatically.
It's a big thing to look forward to completing, but it shouldn't be too hard if I get some help...
Okay, so first, I initialized Direct3D in Code::Blocks. I'm using the Windows API and C++, and everything is set up; 3D is enabled...
I want to draw a cube (I know how) that is hidden from the camera's immediate viewing perspective when the program runs (I somewhat know how to do that). What I want is for the camera to automatically position itself behind the cube, as if it were rendering from the sky and curving down behind the cube, the way a camera would in a game.
I KNOW I shouldn't ask this if I have almost no idea how to do it, but it's all very simple initialization and such; the only problem is that I don't know how to control and move the camera around much at all...
So tips would help me, even small ones...
Again, I want the camera to start above the cube and automatically sweep down and curve behind it, in plain 3D space (no textures or anything, just the cube and the camera, that's all).
Anyone have any idea what DirectX headers/includes I may need, or what kind of logic I should use to perform this task?
Slightly confusing question.....?
Treat the camera as any other object with a matrix transform. When you want to get the view from the camera, invert the camera matrix and use that as the "view" transform for the rest of the pipeline.
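To make "invert the camera matrix" concrete: a rigid camera transform is just a rotation plus a position, and its inverse has a simple closed form. Here's a minimal plain-C++ sketch of that idea (not DirectX code; in the D3DX utility library you could instead call something like D3DXMatrixInverse on a full 4x4 matrix):

```cpp
#include <cassert>
#include <cmath>

// Sketch: a rigid camera transform stored as a 3x3 rotation R plus a
// position t. Its inverse -- the "view" matrix -- is R transposed with
// translation -R^T * t, because R^-1 == R^T for a pure rotation.
struct Rigid {
    float r[3][3]; // rotation (orthonormal)
    float t[3];    // position in world space
};

Rigid invertRigid(const Rigid& m) {
    Rigid inv;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            inv.r[i][j] = m.r[j][i];        // transpose the rotation
    for (int i = 0; i < 3; ++i) {           // translation becomes -R^T * t
        inv.t[i] = 0.0f;
        for (int j = 0; j < 3; ++j)
            inv.t[i] -= inv.r[i][j] * m.t[j];
    }
    return inv;
}
```

So a camera sitting at (1, 2, 3) with no rotation produces a view matrix that translates everything by (-1, -2, -3), which is exactly "moving the world the opposite way the camera moved".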
Um... I don't know how or where to invert the views, or how the views or matrices are adjusted... I get what you're saying, but I barely have a clear idea of it...
Elaborate a pinch further??
Step one: brush up on your linear algebra, specifically the part about affine transformations. Now, sorry I can't be DirectX-specific (I didn't take note of the forum, and I'm an OpenGL guy). The standard bare-bones vertex shader will include a line like:
out_vertex = mat_perspective * mat_view * mat_model * in_vertex;
Let's break that down, noting that IIRC DirectX has a few utility functions to create these matrices for you. You'd specify mat_perspective as either an orthographic projection (XNA's Matrix.CreateOrthographic) or a perspective projection (XNA's Matrix.CreatePerspective). You then create a view matrix, and this is where your camera comes in. You create a look-at matrix (XNA's Matrix.CreateLookAt) for your camera, positioning it at some location and pointing it at your target. The view matrix is then the matrix inverse (XNA's Matrix.Invert) of the camera's matrix. The mat_model matrix describes where the cube is (this can also be created as a look-at matrix).
You then set all those matrices before rendering your cube (whose vertices should be centered at the origin, as the model matrix will move it into the world).
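To make the look-at step concrete, here's a sketch of roughly what a look-at helper (Matrix.CreateLookAt in XNA, or D3DXMatrixLookAtLH in the D3DX utility library) computes internally, assuming DirectX's left-handed convention. Plain self-contained C++, no DirectX types:

```cpp
#include <cassert>
#include <cmath>

// Sketch of a look-at basis construction (left-handed, as in DirectX).
// A view matrix packs these three axes as rows and adds -dot(axis, eye)
// translation terms; this shows just the basis part.
struct Vec3 { float x, y, z; };

Vec3 sub(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3 cross(Vec3 a, Vec3 b) {
    return {a.y * b.z - a.z * b.y,
            a.z * b.x - a.x * b.z,
            a.x * b.y - a.y * b.x};
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return {v.x / len, v.y / len, v.z / len};
}

// Builds the camera's orthonormal basis from eye, target, and an up hint.
void lookAtBasis(Vec3 eye, Vec3 target, Vec3 up,
                 Vec3& xaxis, Vec3& yaxis, Vec3& zaxis) {
    zaxis = normalize(sub(target, eye));  // forward: from eye toward target
    xaxis = normalize(cross(up, zaxis));  // right (left-handed order)
    yaxis = cross(zaxis, xaxis);          // recomputed true up
}
```

For a camera at (0, 0, -5) looking at the origin, this yields forward (0, 0, 1), right (1, 0, 0), up (0, 1, 0): the standard axes, which is a good sanity check.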
That's undoubtedly very complex, very hard, and not specific at all... Tough to really understand... Also, linear algebra has no direct bearing on adjusting a camera, simply because each API requires different functions, and I don't really know how to call them, what to do when I call them, or how to get them to actually do anything...
But thanks for the help (even though I still have no real idea how to do any of what I wanted).
Also, linear algebra can be understood too many different ways; I'd have to read countless resources to know exactly how it applies, and how to turn that into 100% useful working code for a project like this.
So, hence, linear algebra is seemingly not the issue here.
When DirectX draws a vertex to the screen, it transforms it using one or more matrices. Usually, one of these matrices will be the "view" matrix (in any API), which contains the inverse transformation of the camera.
A camera transformation matrix generally contains a 3x3 rotation part (specifying which direction it's looking in) and a 3x1 position vector (specifying where it is in space).
To get a camera to rotate around a cube, you would take the cube's translation/rotation matrix, an offset translation matrix (containing the distance you want the camera to be from the cube) and a rotation matrix (containing the angle you want to look at the cube from), and then multiply them all together in the right order. Then you'd take the inverse of the result and use it as your view matrix.
Alternatively you'd use a helper function, like the above-mentioned "CreateLookAt", to do these steps for you.
All of this matrix stuff is "linear algebra", so yes, if you want to use matrices to transform vertices onto a screen, it's the issue.
This use of linear algebra to create a view matrix is the same in every low-level graphics API.
Alternatively, if you don't want to learn about linear algebra, you could use something higher-level than DirectX, where you're given a "Camera" object instead of a "Matrix" object....
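The "sweep from overhead down behind the cube" motion the original question asks for can be sketched as exactly this kind of offset-plus-rotation: interpolate an angle from straight above (pi/2) down to level behind the cube (0), place the camera on that arc each frame, then aim a look-at at the cube and invert it for the view matrix. The distance and axis choice below are assumptions for illustration, not anything from the thread:

```cpp
#include <cassert>
#include <cmath>

// Sketch of an orbiting camera path: t = 0 puts the camera directly
// above the cube, t = 1 puts it level behind the cube (along -Z, an
// assumed "behind" direction). Each frame you'd feed the result into a
// look-at helper targeting the cube, then invert for the view matrix.
struct Pos { float x, y, z; };

Pos orbitCamera(Pos cube, float dist, float t /* 0..1 along the sweep */) {
    const float kPi = 3.14159265f;
    float angle = (1.0f - t) * (kPi * 0.5f); // pi/2 = overhead, 0 = behind
    Pos cam;
    cam.x = cube.x;
    cam.y = cube.y + dist * std::sin(angle); // height above the cube
    cam.z = cube.z - dist * std::cos(angle); // distance behind it (-Z)
    return cam;
}
```

Advancing t a little each frame (e.g. by elapsed time) gives the smooth curve-down-and-behind motion; the camera stays at a constant distance `dist` from the cube the whole way.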
Okay, so why didn't you just tell me linear algebra was that perfectly necessary in the first place?
For starters, it wasn't at all clear that a lack of linear algebra knowledge was your stumbling block. Most DirectX/OpenGL tutorials cover some of the basics of how the graphics pipeline uses matrices to move stuff around.
I suggest either jumping into something like Unity that will take care of a lot of the lower-level code, or starting with some more text-based games to get a good grasp of other topics you've posted about, like AI.
That's insulting to tell me to go back to console applications, seriously.
Also, depressing...
I don't think it's insulting. You've either got to do the tough grind to get the basics of this stuff down, or use something else where the hard work has already been done for you, or side-step the issue entirely by selecting an option where it's not required. All are valid choices.
The D3D SDK has some introductory material on this stuff by the way; may be worth a read. Any primer on 3D graphics should also cover it.
This topic is closed to new replies.