
Mort

  1. I found the solution to the problem myself. It turns out that to use hardware instancing, you also need to include a bit of High Level Shader Language (HLSL) code that is sent to the GPU. This was the piece of information I failed to extract from the DirectX sample and other articles on hardware instancing. Fortunately, I found a great set of articles describing how to use HLSL on this website: [url="http://www.riemers.net/eng/Tutorials/DirectX/Csharp/series3.php"]http://www.riemers.net/eng/Tutorials/DirectX/Csharp/series3.php[/url]. The HLSL tutorial there doesn't cover instancing, but it provides the background needed to get started with HLSL and to use it to perform a simple render. That information, combined with the Instancing sample from the DirectX SDK, allowed me to create a simple shader that worked with the code I previously posted. The reason the missing piece of HLSL eluded me was that even though I had no HLSL code myself, the default pipeline used by DirectX would still render a single instance of the model provided.
To allow others to learn from my mistakes, here is the final bit of code I needed to get instancing to work:
[code]
#region Set up vertex declaration
int vertexSize = 24;
int instanceSize = 64;
VertexElement []vertexElementsArray = new VertexElement[]
{
    new VertexElement(0,  0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position, 0),
    new VertexElement(0, 12, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Normal, 0),
    new VertexElement(1,  0, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
    new VertexElement(1, 16, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 1),
    new VertexElement(1, 32, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 2),
    new VertexElement(1, 48, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 3),
};

// Create a vertex declaration based on the vertex elements.
device.VertexDeclaration = new VertexDeclaration(device, vertexElementsArray);
#endregion

#region Set up vertex buffer for vertices
Vector []vectors = new Vector[]
{
    new Vector(-1, 0, 0),
    new Vector( 0, 1, 0),
    new Vector( 1, 0, 0)
};
Vector normal = new Vector(0, 0, -1);

VertexBuffer vertexBuffer = new VertexBuffer(device, vertexSize*vectors.Length, Usage.None, VertexFormat.None, Direct3D.ResourcePool);
DataStream vertexBufferStream = vertexBuffer.Lock(0, vertexSize*vectors.Length, LockFlags.None);

// Copy the vertex data to the Vertex Buffer memory block.
for(int nCount=0; nCount<vectors.Length; nCount++)
{
    vertexBufferStream.Write<float>(vectors[nCount].X);
    vertexBufferStream.Write<float>(vectors[nCount].Y);
    vertexBufferStream.Write<float>(vectors[nCount].Z);

    vertexBufferStream.Write<float>(normal.X);
    vertexBufferStream.Write<float>(normal.Y);
    vertexBufferStream.Write<float>(normal.Z);
}

// Unlock the Vertex Buffer again, to allow rendering of the Vertex Buffer data.
vertexBuffer.Unlock();
#endregion

#region Set up vertex buffer for instances
int numberOfObjects = 10;

// Create the interleaved Vertex Buffer.
VertexBuffer instanceBuffer = new VertexBuffer(device, instanceSize*numberOfObjects, Usage.None, VertexFormat.None, Direct3D.ResourcePool);
DataStream instanceBufferStream = instanceBuffer.Lock(0, instanceSize*numberOfObjects, LockFlags.None);

// Create identity matrix.
Math3D.Matrix matrix = new Math3D.Matrix();

// Copy the matrix data to the Vertex Buffer memory block.
for(int count=0; count<numberOfObjects; count++)
{
    // Translate the matrix along the X axis.
    matrix._41 = (float) count;
    matrix._42 = (float) 0;
    matrix._43 = (float) 10;

    instanceBufferStream.Write<float>(matrix[0, 0]);
    instanceBufferStream.Write<float>(matrix[0, 1]);
    instanceBufferStream.Write<float>(matrix[0, 2]);
    instanceBufferStream.Write<float>(matrix[0, 3]);
    instanceBufferStream.Write<float>(matrix[1, 0]);
    instanceBufferStream.Write<float>(matrix[1, 1]);
    instanceBufferStream.Write<float>(matrix[1, 2]);
    instanceBufferStream.Write<float>(matrix[1, 3]);
    instanceBufferStream.Write<float>(matrix[2, 0]);
    instanceBufferStream.Write<float>(matrix[2, 1]);
    instanceBufferStream.Write<float>(matrix[2, 2]);
    instanceBufferStream.Write<float>(matrix[2, 3]);
    instanceBufferStream.Write<float>(matrix[3, 0]);
    instanceBufferStream.Write<float>(matrix[3, 1]);
    instanceBufferStream.Write<float>(matrix[3, 2]);
    instanceBufferStream.Write<float>(matrix[3, 3]);
}

// Unlock the Vertex Buffer again, to allow rendering of the Vertex Buffer data.
instanceBuffer.Unlock();
#endregion

#region Set up index buffer
int numberOfSurfaces = 1;
IndexBuffer indexBuffer = new IndexBuffer(device, numberOfSurfaces*sizeof(uint)*3, Usage.None, Direct3D.ResourcePool, false);

// Lock the buffer, so that we can access the data.
DataStream indexBufferStream = indexBuffer.Lock(0, numberOfSurfaces*sizeof(uint)*3, LockFlags.None);
indexBufferStream.Write<uint>(0);
indexBufferStream.Write<uint>(1);
indexBufferStream.Write<uint>(2);

// Unlock the stream again, committing all changes.
indexBuffer.Unlock();
device.Indices = indexBuffer;
#endregion

// Read a text file from an included resource, called "Instancing.fx".
System.Reflection.Assembly assembly = System.Reflection.Assembly.GetExecutingAssembly();
System.IO.Stream stream = assembly.GetManifestResourceStream("Instancing.fx");
byte[] instancingEffects = new byte[stream.Length];
string compilationErrors;

// Multiply the projection and view matrix here to provide a view-projection matrix.
// Here the view is an identity matrix, so only the projection matrix is used.
SlimDX.Matrix viewProjectionMatrix = MyProjectionMatrix;

Effect effect = null;
stream.Read(instancingEffects, 0, instancingEffects.Length);
try
{
    // Compile the bit of HLSL.
    effect = Effect.FromMemory(device, instancingEffects, null, null, null, ShaderFlags.None, null, out compilationErrors);
}
catch(Exception exception)
{
    string message = exception.ToString();
}

effect.Technique = "Instance0Textures";
effect.SetValue("xViewProjection", viewProjectionMatrix);

#region Render the scene
device.SetStreamSource(0, vertexBuffer, 0, vertexSize);
device.SetStreamSource(1, instanceBuffer, 0, instanceSize);

// Specify how many times the vertex stream source and the instance stream source should be rendered.
device.SetStreamSourceFrequency(0, 10, StreamSource.IndexedData);
device.SetStreamSourceFrequency(1, 1, StreamSource.InstanceData);

int numberOfPasses = effect.Begin(0);
for(int pass=0; pass<numberOfPasses; pass++)
{
    effect.BeginPass(pass);
    device.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 3, 0, 1);
    effect.EndPass();
}
effect.End();

// Reset the stream source frequency to its default values, before exiting.
device.ResetStreamSourceFrequency(0);
device.ResetStreamSourceFrequency(1);
#endregion
[/code]
And here is the bit of HLSL that made it work (the "Instancing.fx" text file, included in the project):
[code]
float4x4 xViewProjection;

void InstancingWith0Textures(float4 position : POSITION,
                             float4 tex0 : TEXCOORD0,
                             float4 tex1 : TEXCOORD1,
                             float4 tex2 : TEXCOORD2,
                             float4 tex3 : TEXCOORD3,
                             out float4 transformedPosition : POSITION)
{
    // Use the values from the 4 texture coordinates to compose a transformation matrix.
    float4x4 transformation = {tex0, tex1, tex2, tex3};

    // Transform the vertex into world coordinates.
    transformedPosition = mul(position, transformation);

    // Transform the vertex from world coordinates into screen coordinates.
    transformedPosition = mul(transformedPosition, xViewProjection);
}

technique Instance0Textures
{
    pass Pass0
    {
        VertexShader = compile vs_2_0 InstancingWith0Textures();
        PixelShader = NULL;
    }
}
[/code]
  2. Yes, I tried using PIX. It showed me the scene as it was being rendered, but it didn't tell me why the scene wasn't rendering more than one instance of the model.
  3. The code I posted should contain everything necessary to draw one polygon and render it 10 times using instancing. I have cut away textures, lighting and more complex polygons, since they don't serve to demonstrate the problem. The code that is omitted sets up the device (DrawingDevice) and handles windows, matrices, models, vectors and a lot of other stuff that shouldn't be relevant to the problem. I'm able to render my models successfully without instancing, and everything works fine when doing so. My problem is that when I try to improve performance by using instancing, instead of rendering each vertex buffer individually, I don't get the result I expect, and the transformation matrix I tried to apply through instancing seems to be ignored.
  4. I'm having a problem getting instancing to work in my graphics engine. It seems that no matter what I do, I can only get DirectX to render a single instance of my model, and I'm not able to apply a transformation matrix to that one instance. I had a look at the DirectX SDK's instancing sample and documentation, but unfortunately that sample isn't particularly well written (it stores instancing parameters as a color, plus 4 bytes for the position and rotation of the instance). Even when trying to duplicate the results of that sample, I can't get more than one instance. I've tried to include all my relevant code, in cut-down form, to keep it as simple as possible to read. Can anyone please tell me what I'm doing wrong here?
[code]
#region Set up vertex declaration
int vertexSize = 24;
int instanceSize = 64;
VertexElement []vertexElementsArray = new VertexElement[]
{
    new VertexElement(0,  0, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Position, 0),
    new VertexElement(0, 12, DeclarationType.Float3, DeclarationMethod.Default, DeclarationUsage.Normal, 0),
    new VertexElement(1,  0, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 0),
    new VertexElement(1, 16, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 1),
    new VertexElement(1, 32, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 2),
    new VertexElement(1, 48, DeclarationType.Float4, DeclarationMethod.Default, DeclarationUsage.TextureCoordinate, 3),
};

// Create a vertex declaration based on the vertex elements.
DrawingDevice.VertexDeclaration = new VertexDeclaration(DrawingDevice, vertexElementsArray);
#endregion

#region Set up vertex buffer for vertices
Vector []vectors = new Vector[]
{
    new Vector(-1, 0, 0),
    new Vector( 0, 1, 0),
    new Vector( 1, 0, 0)
};
Vector normal = new Vector(0, 0, -1);

VertexBuffer vertexBuffer = new VertexBuffer(DrawingDevice, vertexSize*vectors.Length, Usage.None, VertexFormat.None, Pool.Managed);
DataStream vertexBufferStream = vertexBuffer.Lock(0, vertexSize*vectors.Length, LockFlags.None);

// Copy the vertex data to the Vertex Buffer memory block.
for(int nCount=0; nCount<vectors.Length; nCount++)
{
    vertexBufferStream.Write<float>(vectors[nCount].X);
    vertexBufferStream.Write<float>(vectors[nCount].Y);
    vertexBufferStream.Write<float>(vectors[nCount].Z);

    vertexBufferStream.Write<float>(normal.X);
    vertexBufferStream.Write<float>(normal.Y);
    vertexBufferStream.Write<float>(normal.Z);
}

// Unlock the Vertex Buffer again, to allow rendering of the Vertex Buffer data.
vertexBuffer.Unlock();
#endregion

#region Set up vertex buffer for instances
int numberOfObjects = 10;

// Create the interleaved Vertex Buffer.
VertexBuffer instanceBuffer = new VertexBuffer(DrawingDevice, instanceSize*numberOfObjects, Usage.None, VertexFormat.None, Pool.Managed);
DataStream instanceBufferStream = instanceBuffer.Lock(0, instanceSize*numberOfObjects, LockFlags.None);

// Create identity matrix.
Math3D.Matrix matrix = new Math3D.Matrix();

// Copy the matrix data to the Vertex Buffer memory block.
for(int count=0; count<numberOfObjects; count++)
{
    // Translate the matrix along the X axis.
    matrix._41 = count;

    instanceBufferStream.Write<float>(matrix[0, 0]);
    instanceBufferStream.Write<float>(matrix[0, 1]);
    instanceBufferStream.Write<float>(matrix[0, 2]);
    instanceBufferStream.Write<float>(matrix[0, 3]);
    instanceBufferStream.Write<float>(matrix[1, 0]);
    instanceBufferStream.Write<float>(matrix[1, 1]);
    instanceBufferStream.Write<float>(matrix[1, 2]);
    instanceBufferStream.Write<float>(matrix[1, 3]);
    instanceBufferStream.Write<float>(matrix[2, 0]);
    instanceBufferStream.Write<float>(matrix[2, 1]);
    instanceBufferStream.Write<float>(matrix[2, 2]);
    instanceBufferStream.Write<float>(matrix[2, 3]);
    instanceBufferStream.Write<float>(matrix[3, 0]);
    instanceBufferStream.Write<float>(matrix[3, 1]);
    instanceBufferStream.Write<float>(matrix[3, 2]);
    instanceBufferStream.Write<float>(matrix[3, 3]);
}

// Unlock the Vertex Buffer again, to allow rendering of the Vertex Buffer data.
instanceBuffer.Unlock();
#endregion

#region Set up index buffer
int numberOfSurfaces = 1;
IndexBuffer indexBuffer = new IndexBuffer(DrawingDevice, numberOfSurfaces*sizeof(uint)*3, Usage.None, Pool.Default, false);

// Lock the buffer, so that we can access the data.
DataStream indexBufferStream = indexBuffer.Lock(0, numberOfSurfaces*sizeof(uint)*3, LockFlags.None);
indexBufferStream.Write<uint>(0);
indexBufferStream.Write<uint>(1);
indexBufferStream.Write<uint>(2);

// Unlock the stream again, committing all changes.
indexBuffer.Unlock();
Device.Indices = indexBuffer;
#endregion

#region Render the scene
DrawingDevice.SetStreamSource(0, vertexBuffer, 0, vertexSize);
DrawingDevice.SetStreamSource(1, instanceBuffer, 0, instanceSize);

// Specify how many times the vertex stream source and the instance stream source should be rendered.
DrawingDevice.SetStreamSourceFrequency(0, 10, StreamSource.IndexedData);
DrawingDevice.SetStreamSourceFrequency(1, 1, StreamSource.InstanceData);

DrawingDevice.DrawIndexedPrimitives(PrimitiveType.TriangleList, 0, 0, 3, 0, 1);

// Reset the stream source frequency to its default values, before exiting.
DrawingDevice.ResetStreamSourceFrequency(0);
DrawingDevice.ResetStreamSourceFrequency(1);
#endregion
[/code]
  5. If you want the camera (or any other object that uses a matrix for position and orientation) to follow the same position and direction as another object, you need to multiply the matrices together:
[code]
MatrixObject = MatrixObjectTranslation * MatrixObjectRotation * MatrixObjectScaling // Contains the position, rotation and scaling of the object
MatrixCameraOffset = MatrixCameraOffsetTranslation                                  // Contains the distance from the object to the camera
MatrixCamera = MatrixObject * MatrixCameraOffset                                    // Contains the position of the camera
[/code]
  6. I recently started using PhysX to add better physics simulations to my graphics engine. However, I'm experiencing some strange problems when rotating an actor by calling MyNxActor.setGlobalOrientationQuat(). The call appears to work fine with certain quaternions, but with other quaternions the result becomes corrupted. I will show a snippet of my code first:
[code]
WCHAR sBuffer[200];

// I read some quaternion values from a 3D object in my graphics engine and apply the values to an NxQuat PhysX quaternion.
NxQuat Quaternion;
Quaternion.setXYZW(m_pEngineObject->QuaternionX, m_pEngineObject->QuaternionY, m_pEngineObject->QuaternionZ, m_pEngineObject->QuaternionAngle);
swprintf_s(sBuffer, 200, L"1) X: %2.3f Y: %2.3f Z: %2.3f W: %2.3f\n", Quaternion.x, Quaternion.y, Quaternion.z, Quaternion.w);
OutputDebugString(sBuffer);

// The quaternion is applied to my NxActor (m_pPhysicsActor is of type NxActor*).
m_pPhysicsActor->setGlobalOrientationQuat(Quaternion);

// I then retrieve the newly applied quaternion again, to check whether it still has the same value.
NxQuat RetrievedQuaternion = m_pPhysicsActor->getGlobalOrientationQuat();
swprintf_s(sBuffer, 200, L"2) X: %2.3f Y: %2.3f Z: %2.3f W: %2.3f\n", RetrievedQuaternion.x, RetrievedQuaternion.y, RetrievedQuaternion.z, RetrievedQuaternion.w);
OutputDebugString(sBuffer);
[/code]
The code has been enriched with some debug statements that let me monitor the quaternion values I put into the NxActor and the values I retrieve from it again. The debug output below shows those values. Notice how they appear correct to begin with and then suddenly become incorrect. If you are sharp at math, you may see that between each iteration the actor is being rotated 1 degree around the X axis. The error begins occurring after the actor has been rotated to -90 degrees around the X axis (the second sample below shows -90 degrees). Debug output values:
[code]
1) X: -0.694 Y: 0.000 Z: 0.000 W: 0.720
2) X: -0.694 Y: 0.000 Z: 0.000 W: 0.720
3) X: -0.694 Y: 0.000 Z: 0.000 W: 0.720
1) X: -0.700 Y: 0.000 Z: 0.000 W: 0.714
2) X: -0.700 Y: 0.000 Z: 0.000 W: 0.714
3) X: -0.700 Y: 0.000 Z: 0.000 W: 0.714
1) X: -0.706 Y: 0.000 Z: 0.000 W: 0.708
2) X: -0.706 Y: 0.000 Z: 0.000 W: 0.708
3) X: -0.704 Y: 0.000 Z: 0.000 W: 0.709
1) X: -0.710 Y: 0.000 Z: 0.000 W: 0.703
2) X: -0.710 Y: 0.000 Z: 0.000 W: 0.704
3) X: -0.707 Y: 0.000 Z: 0.000 W: 0.706
1) X: -0.713 Y: 0.000 Z: 0.000 W: 0.700
2) X: -0.711 Y: 0.000 Z: 0.000 W: 0.702
3) X: -0.704 Y: 0.000 Z: 0.000 W: 0.707
1) X: -0.710 Y: 0.000 Z: 0.000 W: 0.701
2) X: -0.706 Y: 0.000 Z: 0.000 W: 0.705
3) X: -0.688 Y: 0.000 Z: 0.000 W: 0.718
1) X: -0.694 Y: 0.000 Z: 0.000 W: 0.712
2) X: -0.687 Y: 0.000 Z: 0.000 W: 0.720
3) X: -0.649 Y: 0.000 Z: 0.000 W: 0.749
1) X: -0.655 Y: 0.000 Z: 0.000 W: 0.743
2) X: -0.644 Y: 0.000 Z: 0.000 W: 0.755
3) X: -0.576 Y: 0.000 Z: 0.000 W: 0.805
1) X: -0.583 Y: 0.000 Z: 0.000 W: 0.799
2) X: -0.574 Y: 0.000 Z: 0.000 W: 0.812
3) X: -0.477 Y: 0.000 Z: 0.000 W: 0.872
[/code]
You may have noticed that although the code above only shows debug lines 1 and 2, debug line 3 appears too. Line 3 shows the actor's quaternion value after running a scene simulation. The actor itself is an NxBox without a body and should not be affected by the scene simulation. Can anyone tell me if I am doing anything wrong, or if setGlobalOrientationQuat() does indeed contain an error?
  7. I created a 3D engine some years ago and have been improving on it on and off in the years since. A while ago I ran into a problem that made me question whether my engine design was flawed. My engine contains a collection of 3D objects, and each object contains 3 matrices: a translation, a rotation, and a scaling matrix. The complete transformation is always calculated in the order translation * rotation * scaling. Because each 3D object always uses these 3 matrices, it limits my flexibility in calculating new matrices. For instance, it would not be possible to insert another matrix into the transformation calculation if I needed to. I have tried replacing my 3 matrices with one combined transformation matrix, but with only one matrix I am no longer able to retrieve reliable position, angle, and scaling parameters from my objects, because those parameters have been combined into one matrix (for example, rotation and scaling both affect the _11, _22 and _33 positions of the 4x4 matrix). This is where I would like to draw on your experience: how many matrices do you maintain for each 3D element in your 3D engine, and why?
  8. Actually, I took the inverse translation, inverse rotation and inverse scaling matrices and applied them to the child. The child then combines these in the order Scaling * Rotation * Translation * Parent (the Parent matrix consists of its own Scaling * Rotation * Translation too).
  9. That was the same formula I tried to use, but it is not giving me the correct result. When calculating the combined matrix, the rotation and scaling are identical to the new child matrix (B in your sample), but the translation coordinates are different. I think it's because the translation part of the new child matrix is being translated in the wrong direction, due to the rotation part of the parent matrix (A in your sample).
  10. I've built a graphics engine where each object has its own 4x4 transformation matrix (translation, rotation and scaling). Objects can be grouped together in a parent/child relationship, so that moving an arm will also move its hand along. This is done using matrix stacks to multiply each child matrix with its parent matrix, producing a combined matrix. The problem is that when I attach a child object to its parent, I want the combined matrix to end up equal to the child matrix itself. This maintains the position, rotation and scaling of the child object, so those parameters do not change when the child is attached to its parent. To do this, I need to derive a new matrix for the child object such that the new child matrix multiplied with the parent matrix equals the old child matrix. Can anyone tell me how to derive such a matrix?
  11. Ahh great, thanks a lot, jyk. I found the source code I was interested in at geometrictools.com, in a class called Matrix3, and it's written in a language even I understand (C++ code, as opposed to math) :). I haven't had time to try it out myself yet, but it sure looks promising!
  12. I'm not sure if I should post this question here or in the DirectX forum, but I'll try putting it here, and if I get flamed I'll move it to the DirectX forum ;). I'm looking for C++ source code to convert either a quaternion or a 4x4 matrix into XYZ Euler angles. When googling, the closest I was able to find was either a mathematical description of how it could be done (unfortunately I don't speak math, so that made little sense to me) or a Java sample (which did not work for certain angles). I'm sure I'm far from the first person to need this, and all my attempts to solve the equations myself using sine/cosine relations have failed (it works fine when I'm only rotating one of the axes, but that's not quite enough :)). Can anyone direct me to C++ source code for doing the conversion?
  13. Thanks for your help solving this problem. I went with the method that Christer Ericson provided, mainly because it takes into account that the ray has a starting position but no stopping position (sorry Christer, I don't know how to express that in correct mathematical terms). I tried to understand what happens in the formula, but math is not my strong suit, so after trying to understand every part of it for some hours, I ended up just using the code sample you provided (thanks for providing that too).
  14. I have been trying to find a formula for the distance from a point to a vector, but have been out of luck. There are several formula examples for the distance from a point to a line, but what I have is a vector starting position, a vector direction, and a point in 3D space. Can anyone point me to a formula for this calculation?