

Suspense

Member Since 30 Sep 2007
Offline · Last Active Mar 29 2014 10:50 AM

Topics I've Started

camera problem

06 May 2013 - 04:37 PM

Hi everyone, I'm working on a 3D globe on Android, and it works great except for a camera problem I haven't been able to figure out yet.  I'm rotating the camera around the globe in a timer:

 

double time = (System.currentTimeMillis() & 0x0000ffff) / 1000.0;  // seconds; the 16-bit mask makes this wrap roughly every 65 seconds
camera.eye = new Vector3D(Math.sin(time), Math.cos(time), 0);       // eye on a unit circle in the XY plane around the globe

 

I expect the camera to orbit at a constant distance from the globe.  Instead, it seems to wobble back and forth a bit.  Also, there's some clipping that comes in and out on either side of the globe as it rotates.  Check this video:

http://www.filedropper.com/globe

 

I'm hoping that someone has seen this kind of newbie mistake before and can give me some idea where my problem is.  Any ideas?
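
For reference, here's a stripped-down sketch of what I think the whole update should be doing. Vector3D is my math class; ORBIT_RADIUS and the lookAt call are placeholders for the real camera API, not my actual code:

// Simplified orbit update (sketch only; ORBIT_RADIUS and lookAt are placeholders).
static final double ORBIT_RADIUS = 3.0; // distance from the globe's center

void onTimerTick() {
    // seconds; the 16-bit mask makes this wrap roughly every 65 seconds
    double time = (System.currentTimeMillis() & 0x0000ffff) / 1000.0;

    // keep the eye on a circle of constant radius in the XY plane
    camera.eye = new Vector3D(ORBIT_RADIUS * Math.sin(time),
                              ORBIT_RADIUS * Math.cos(time), 0);

    // always aim at the globe's center with a fixed up axis
    camera.lookAt(new Vector3D(0, 0, 0), new Vector3D(0, 0, 1));
}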


need to find corner points of view area

23 May 2012 - 04:47 PM

I'm interfacing with a video camera mounted on a plane. The camera can tell me its GPS position (latitude, longitude, altitude) and the GPS position of its view area's center point (called Line of Sight). I also know the camera's horizontal field of view and aspect ratio. I can also acquire the plane's heading, pitch and roll, and the heading and pitch angles of the camera. The library I'm using (like Google Earth in Java) can convert between GPS positions and Cartesian coordinates used by OpenGL and can also do a ray cast against Earth geometry and return the point of collision.

I want to find the four corners of the camera's viewing area as GPS positions so I can draw the camera's view on the globe. As noted above, once I have the Cartesian coordinates I can convert them to GPS. Here's what I have so far (a rough code sketch follows the list).
  • Convert the camera position and line-of-sight point to Cartesian coordinates.
  • Calculate the camera's direction vector: losPosition - cameraPosition.
  • For each corner, rotate the direction vector by half the horizontal FOV left or right and half the vertical FOV (derived from the aspect ratio) up or down.
  • Cast a ray from the camera point along the rotated direction into the Earth geometry.
  • Convert the collision point back into a GPS position.
When I draw the resulting bounding box, it's in the right area on the globe, but the shape is totally wrong. My guess is that I need to rotate the rays around the plane's up/forward/right axes instead of the world axes. Am I on the right track? If so, is there an easy way to find those axes? Can you think of an easier, or even just different, approach to this problem?
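
For concreteness, here's a rough sketch of those steps. Vec3, Gps, toCartesian, toGps, rotateAroundAxis, and intersectEarth are stand-ins for the actual types and library calls, so treat every name as a placeholder:

// Corner-ray approach, sketched with placeholder names.
Vec3 camPos = toCartesian(cameraGps);                  // camera GPS -> Cartesian
Vec3 losPos = toCartesian(lineOfSightGps);             // line-of-sight center -> Cartesian
Vec3 dir    = losPos.subtract(camPos).normalize();     // camera direction vector

double halfH = Math.toRadians(horizontalFov) / 2.0;
double halfV = Math.atan(Math.tan(halfH) / aspectRatio); // vertical half-angle from the aspect ratio

// These are the axes I rotate around today; my suspicion is they should be the
// plane/camera right and up axes rather than anything world-aligned.
Vec3 up    = new Vec3(0, 0, 1);
Vec3 right = dir.cross(up).normalize();

Gps[] corners = new Gps[4];
int i = 0;
for (int sx : new int[] { -1, 1 }) {
    for (int sy : new int[] { -1, 1 }) {
        Vec3 ray = rotateAroundAxis(dir, up, sx * halfH);     // yaw by half the horizontal FOV
        ray      = rotateAroundAxis(ray, right, sy * halfV);  // pitch by half the vertical FOV
        Vec3 hit = intersectEarth(camPos, ray);               // ray cast against the globe
        corners[i++] = toGps(hit);                            // Cartesian -> GPS
    }
}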

how to rotate camera to new orientation?

18 September 2011 - 05:14 PM

Hi all, I've been playing with a new idea that I had and I need help with the math, as I only have a basic understanding of linear algebra. I have a first-person camera in a hollow cube. I can adjust the gravity so that a wall becomes the new floor, and the camera falls appropriately. I can orient the camera to any side of the cube and the camera rotates correctly with mouse movements. Now I want a smooth transition from one orientation to another, and I haven't quite figured it out. For example, if the current orientation is (0, 1, 0) = up, how do I get a smooth camera rotation when it changes to (-1, 0, 0) = up? Is there a general solution to this problem, or will I have to handle special cases?

Currently I accumulate the mouse's X and Y movements in yaw and pitch variables. Every frame I use yaw and pitch to build a quaternion which is then used to transform the camera's up and forward vectors for the view matrix. Is this sufficient for what I'm trying to accomplish or should I be doing something different?
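
Here's roughly what that per-frame code looks like (simplified sketch; Quaternion, Vector3D, and Matrix are stand-ins for my math classes, not their real APIs):

// Per-frame camera update (simplified sketch; all types are placeholders).
yaw   += mouseDeltaX * sensitivity;
pitch += mouseDeltaY * sensitivity;

// build an orientation from the accumulated yaw/pitch, relative to the current up axis
Quaternion q = Quaternion.fromAxisAngle(currentUp, yaw)
        .multiply(Quaternion.fromAxisAngle(currentRight, pitch));

// rotate the camera's reference vectors for the current gravity orientation
Vector3D forward = q.rotate(baseForward);
Vector3D up      = q.rotate(currentUp);

viewMatrix = Matrix.lookAt(eyePosition, eyePosition.add(forward), up);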

Here's an example I found of what I'm trying to accomplish; the relevant part starts at about 1:20.

[SOLVED] Effects and shaders in MDX

23 September 2008 - 02:02 AM

I'm trying to get started with shaders using C# and DirectX 9. I have a shader that's based on a tutorial (I've tried several shaders, actually). I finally got it to run without errors, but now it's not drawing anything. Here's the code:
			string errors = string.Empty;
			try
			{
				string constants = "WORLDVIEWPROJ";
				Effect effect = Effect.FromFile(d3dDevice, @"shader1.fx", null,
					constants, ShaderFlags.None, null, out errors);
				effect.Technique = "simple";
				int passes = effect.Begin(0);

				for (int i = 0; i < passes; i++)
				{
					// world matrix built from the model's rotation angles
					Matrix mat = Matrix.Multiply(Matrix.RotationX(model.RotX),
						Matrix.RotationY(model.RotY));
					mat = Matrix.Multiply(mat, Matrix.RotationZ(model.RotZ));
					// the shader expects a full world-view-projection matrix,
					// so combine with the device's view and projection transforms
					mat = mat * d3dDevice.Transform.View * d3dDevice.Transform.Projection;
					// the handle string must match the variable name in the .fx file
					effect.SetValue("worldViewProj", mat);

					effect.BeginPass(i);

//					d3dDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, model.FirstVert,
//						model.FirstIndex, model.NumVerts, model.FirstIndex, model.NumPolys);
					d3dDevice.DrawPrimitives(PrimitiveType.TriangleList, model.FirstVert, model.NumPolys);

					effect.EndPass();
				}

				effect.End();
				return true;
			}
			catch (Direct3DXException exc)
			{
				// surface the D3DX error string carried by the exception
				string str = exc.ErrorString;
				MessageBox.Show(str);
				return false;
			}


And here's the effect file:
float4x4 worldViewProj : WORLDVIEWPROJ; //our world view projection matrix

//application to vertex structure
struct a2v
{ 
    float4 position : POSITION0;
};

//vertex to pixel processing structure
struct v2p
{
    float4 position : POSITION0;
};

// pixel shader output
struct outcolor
{
	float4 color : COLOR;
};

//VERTEX SHADER
void vs( in a2v IN, out v2p OUT ) 
{
    //transforming our position from object space to screen space.
    OUT.position = mul(IN.position, worldViewProj);
}

// PIXEL SHADER
void ps(out outcolor mycolor)
{
	// needs the float4 constructor; a plain parenthesized list is the comma
	// operator and collapses to a single scalar
	mycolor.color = float4(0.5, 0.0, 0.0, 1.0);
}

technique simple
{
    pass p0
    {
        vertexshader = compile vs_2_0 vs();
        pixelshader = compile ps_2_0 ps();
    }
}


The effect file compiles with no errors in fxc. This has been a very frustrating experience because the DirectX documentation is a nightmare to interpret. I basically used the example code for the Effect class (which took forever to find), but the code in the docs is missing a parameter for Effect.FromFile! At first I was getting an INVALIDCALL exception even though the errors string was empty. I played around with the parameters until I finally got it to run through. Is it just the shader that's wrong? [Edited by - Spencer Bowers on September 23, 2008 11:10:28 AM]

[SOLVED] weird colors, loading .x files (MDX in C#)

09 September 2008 - 12:18 PM

I'm writing a .x file loader; the files are exported from Milkshape 3D. Now that it's done, all my models come out with strange colors and bad texture placement. While trying to figure out where the weirdness is coming from, I changed my CustomVertex format to PositionOnly and the models still get rendered in random colors. The only format that seems to work as expected is PositionColoredTextured. If I try to use normals, everything goes screwy, including the texture coordinates. So my question is: is there a certain render state or something I'm not aware of that I have to set in order to use normals? Or is there something I have to do with the normals themselves? [Edited by - Spencer Bowers on September 14, 2008 10:43:00 AM]
