

BlackJoker

Member Since 28 Feb 2013

#5192720 Corruption of the view when far from the world center

Posted by BlackJoker on 13 November 2014 - 02:54 PM

Hello to all.

I have a strange issue that I believe is related to the view matrix of my camera class.

When the camera is far from the center of the world, the view becomes corrupted. "Far" here means starting from roughly 1.2×10^5 to 10^6 units.

I have already checked my world matrix; it is correct, because if I move the model back to the origin it renders correctly again. The view also corrupts if I don't move the model at all and move only the camera.

The results are in the screenshots attached to this post.

Here is my Camera class code. Maybe someone willing to help with this issue can spot the error, because I don't see it (at least for now).

enum CamType
    {
        Free = 0,
        FirstPerson = 1,
        ThirdPerson = 2,
        ThirdPersonAlt = 3
    }

    struct ViewMatrixDecomposeData
    {
        public Vector3 scale;
        public Quaternion rotation;
        public Vector3 translation;
    }

    class Camera
    {
        public Vector3 position;
        private Vector3 lookAt;
        private Vector3 center;
        private float radius;
        private Vector3 baseUp;
        private float fovX;
        private float fovY;
        private float aspectRatio;
        private float zNear;
        private float zFar;
        private Vector3 xAxis;
        private Vector3 yAxis;
        private Vector3 zAxis;
        private Quaternion qRotation;
        private CamType cameraType;
        private ViewMatrixDecomposeData viewMatrixDecomposeData;

        private Matrix viewMatrix;
        private Matrix projectionMatrix;

        public float ZNear
        {
            get { return zNear; }
            set { zNear = value; }
        }

        public float ZFar
        {
            get { return zFar; }
            set { zFar = value; }
        }

        public Matrix WorldMatrix { get; set; }

        public Matrix ViewMatrix
        {
            get { return viewMatrix; }
            set { viewMatrix = value; }
        }

        public Matrix ProjectionMatrix
        {
            get { return projectionMatrix; }
            set { projectionMatrix = value; }
        }

        public CamType CameraType
        {
            get
            {
                return cameraType;
            }

            set
            {
                if (cameraType != value)
                {
                    if (((cameraType == CamType.ThirdPerson) || (cameraType == CamType.ThirdPersonAlt)) && (value == CamType.Free))
                    {
                        // this is for the case of switching from a 3rd person camera to a Free camera
                        GetViewMatrixRotation();

                        qRotation = viewMatrixDecomposeData.rotation;
                    }

                    cameraType = value;
                }
            }
        }

        public Camera(Vector3 _position, Vector3 _lookAt, Vector3 _up)
        {
            // free camera constructor
            SetFreeCamera(_position, _lookAt, _up);
        }

        public Camera(Vector3 _position, Quaternion _objectRotation, float _face_distance)
        {
            // 1st person camera constructor
            SetFirstPersonCamera(_position, _objectRotation, _face_distance);
        }

        public Camera(Vector3 _center, Quaternion _objectRotation, Vector3 _initialRelRotation, float _radius, Vector3 _lookAt, bool isAlternative)
        {
            // 3rd person camera constructor
            SetThirdPersonCamera(_center, _objectRotation, _initialRelRotation, _radius, _lookAt, isAlternative);
        }

        public void BuildPerspectiveForFovX(float _fovX, float _aspect, float _zNear, float _zFar)
        {
            fovX = _fovX;
            aspectRatio = _aspect;
            zNear = _zNear;
            zFar = _zFar;

            float e = 1.0f/(float) Math.Tan(MathUtil.DegreesToRadians(fovX/2.0f));
            float aspectInv = 1.0f/aspectRatio;
            fovY = 2.0f*(float) Math.Atan(aspectInv/e);
            float xScale = 1.0f/(float) Math.Tan(0.5f*fovY);
            float yScale = xScale/aspectInv;

            Matrix temp = ProjectionMatrix;

            temp.M11 = xScale;
            temp.M21 = 0.0f;
            temp.M31 = 0.0f;
            temp.M41 = 0.0f;

            temp.M12 = 0.0f;
            temp.M22 = yScale;
            temp.M32 = 0.0f;
            temp.M42 = 0.0f;

            temp.M13 = 0.0f;
            temp.M23 = 0.0f;
            temp.M33 = zFar/(zFar - zNear);
            temp.M43 = -zNear*zFar/(zFar - zNear);

            temp.M14 = 0.0f;
            temp.M24 = 0.0f;
            temp.M34 = 1.0f;
            temp.M44 = 0.0f;

            ProjectionMatrix = temp;
            //ProjectionMatrix = Matrix.PerspectiveFovLH(_fovX, _aspect, zNear, zFar);
        }

        public Matrix BuildPerspectiveForFovX2 (float _fovX, float _aspect, float _zNear, float _zFar)
        {
            fovX = _fovX;
            aspectRatio = _aspect;
            zNear = _zNear;
            zFar = _zFar;

            float e = 1.0f / (float)Math.Tan(MathUtil.DegreesToRadians(fovX / 2.0f));
            float aspectInv = 1.0f / aspectRatio;
            fovY = 2.0f * (float)Math.Atan(aspectInv / e);
            float xScale = 1.0f / (float)Math.Tan(0.5f * fovY);
            float yScale = xScale / aspectInv;

            Matrix temp = new Matrix();

            temp.M11 = xScale;
            temp.M21 = 0.0f;
            temp.M31 = 0.0f;
            temp.M41 = 0.0f;

            temp.M12 = 0.0f;
            temp.M22 = yScale;
            temp.M32 = 0.0f;
            temp.M42 = 0.0f;

            temp.M13 = 0.0f;
            temp.M23 = 0.0f;
            temp.M33 = zFar / (zFar - zNear);
            temp.M43 = -zNear * zFar / (zFar - zNear);

            temp.M14 = 0.0f;
            temp.M24 = 0.0f;
            temp.M34 = 1.0f;
            temp.M44 = 0.0f;

            return temp;
        }

        public void BuildPerspectiveForFovY(float _fovY, float _aspect, float _zNear, float _zFar)
        {
            ZNear = _zNear;
            ZFar = _zFar;
            ProjectionMatrix = Matrix.PerspectiveFovLH(_fovY, _aspect, zNear, zFar);
        }

        private void GetViewMatrixRotation()
        {
            ViewMatrix.Decompose(out viewMatrixDecomposeData.scale, out viewMatrixDecomposeData.rotation, out viewMatrixDecomposeData.translation);
            viewMatrixDecomposeData.rotation = Quaternion.Normalize(viewMatrixDecomposeData.rotation);
        }

        private void SetAxisFromViewMatrix()
        {
            xAxis = new Vector3(ViewMatrix.M11, ViewMatrix.M21, ViewMatrix.M31);
            yAxis = new Vector3(ViewMatrix.M12, ViewMatrix.M22, ViewMatrix.M32);
            zAxis = new Vector3(ViewMatrix.M13, ViewMatrix.M23, ViewMatrix.M33);
        }

        private void UpdateViewMatrix()
        {
            qRotation = Quaternion.Normalize(qRotation);

            if ((CameraType == CamType.ThirdPerson) || (CameraType == CamType.ThirdPersonAlt))
            {
                ViewMatrix = Matrix.Translation(Vector3.Negate(center))*Matrix.RotationQuaternion(qRotation);

                position = center - new Vector3(ViewMatrix.M13, ViewMatrix.M23, ViewMatrix.M33)*radius;

                ViewMatrix = Matrix.LookAtLH(position, lookAt,
                    new Vector3(ViewMatrix.M12, ViewMatrix.M22, ViewMatrix.M32));
            }
            else
            {
                ViewMatrix = Matrix.Translation(Vector3.Negate(position))*Matrix.RotationQuaternion(qRotation);

                if (CameraType == CamType.FirstPerson)
                {
                    ViewMatrix *= Matrix.Translation(new Vector3(0, 0, -radius));
                }
            }
        }

        public void RotateX(float _degree_angle)
        {
            if ((cameraType == CamType.ThirdPerson) || (cameraType == CamType.ThirdPersonAlt))
            {
                _degree_angle = -_degree_angle;
            }

            qRotation = Quaternion.Multiply(Quaternion.RotationAxis(Vector3.UnitX, MathUtil.DegreesToRadians(_degree_angle)), qRotation);
            UpdateViewMatrix();
        }

        public void RotateY(float _degree_angle)
        {
            if ((cameraType == CamType.ThirdPerson) || (cameraType == CamType.ThirdPersonAlt))
            {
                _degree_angle = -_degree_angle;
            }

            if (cameraType == CamType.ThirdPersonAlt)
            {
                qRotation = Quaternion.Multiply(qRotation, Quaternion.RotationAxis(baseUp, MathUtil.DegreesToRadians(_degree_angle)));
            }
            else
            {
                qRotation = Quaternion.Multiply(Quaternion.RotationAxis(Vector3.UnitY, MathUtil.DegreesToRadians(_degree_angle)), qRotation);
            }
            UpdateViewMatrix();
        }

        public void RotateZ(float _degree_angle)
        {
            qRotation = Quaternion.Multiply(Quaternion.RotationAxis(Vector3.UnitZ, MathUtil.DegreesToRadians(_degree_angle)), qRotation);
            UpdateViewMatrix();
        }

        public void MoveRelX(float _rel_x)
        {
            if (cameraType == CamType.Free)
            {
                SetAxisFromViewMatrix();

                position += xAxis * _rel_x;

                UpdateViewMatrix();
            }
        }

        public void MoveRelY(float _rel_y)
        {
            if (cameraType == CamType.Free)
            {
                SetAxisFromViewMatrix();

                position += yAxis * _rel_y;

                UpdateViewMatrix();
            }
        }

        public void MoveRelZ(float _rel_z)
        {
            if ((cameraType == CamType.ThirdPerson) || (cameraType == CamType.ThirdPersonAlt))
            {
                radius -= _rel_z;

                UpdateViewMatrix();
            }
            else
            {
                SetAxisFromViewMatrix();

                position += zAxis * _rel_z;

                UpdateViewMatrix();
            }
        }

        public void SetFreePosition(Vector3 _position)
        {
            position = _position;

            UpdateViewMatrix();
        }

        public void SetFreeLookAt(Vector3 _lookAt)
        {
            lookAt = _lookAt;

            Vector3 up = new Vector3(ViewMatrix.M12, ViewMatrix.M22, ViewMatrix.M32);

            ViewMatrix = Matrix.LookAtLH(position, lookAt, up);

            GetViewMatrixRotation();

            qRotation = viewMatrixDecomposeData.rotation;

            UpdateViewMatrix();
        }

        public void SetFreeCamera(Vector3 _position, Vector3 _lookAt, Vector3 _up)
        {
            cameraType = CamType.Free;    
            
            position = _position;
            lookAt = _lookAt;
            
            ViewMatrix = Matrix.LookAtLH(position, lookAt, _up);
            
            GetViewMatrixRotation();

            qRotation = viewMatrixDecomposeData.rotation;

            UpdateViewMatrix();
        }

        // SetRadius is both for 1st and 3rd person camera
        public void SetRadius(float _radius)
        {
            radius = _radius;

            UpdateViewMatrix();
        }

        public void SetFirstPersonPositionRotation(Vector3 _position, Quaternion _objectRotation)
        {
            position = _position;
            qRotation = _objectRotation;

            UpdateViewMatrix();
        }

        public void SetFirstPersonCamera(Vector3 _position , Quaternion _objectRotation, float _face_distance)
        {
            cameraType = CamType.FirstPerson;

            position = _position;
            radius = _face_distance;
            qRotation = _objectRotation;

            UpdateViewMatrix();
        }
        
        public void SetThirdPersonCenterLookAt(Vector3 _center, Vector3 _lookAt)
        {
            center = _center;
            lookAt = _lookAt;

            UpdateViewMatrix();
        }

        public void SetThirdPersonCamera(Vector3 _center, Quaternion _objectRotation, Vector3 _initialRelRotation, float _radius, Vector3 _lookAt, bool isAlternative)
        {
            if (isAlternative == true)
            {
                cameraType = CamType.ThirdPersonAlt;
            }
            else
            {
                cameraType = CamType.ThirdPerson;    
            }
            
            center = _center;
            radius = _radius;
            lookAt = _lookAt;

            qRotation = _objectRotation;

            baseUp = Matrix.RotationQuaternion(_objectRotation).Up;

            qRotation = Quaternion.Multiply(Quaternion.RotationAxis(Vector3.UnitX, MathUtil.DegreesToRadians(_initialRelRotation.X)), qRotation);
            qRotation = Quaternion.Multiply(qRotation, Quaternion.RotationAxis(baseUp, MathUtil.DegreesToRadians(_initialRelRotation.Y)));
            qRotation = Quaternion.Multiply(Quaternion.RotationAxis(Vector3.UnitZ, MathUtil.DegreesToRadians(_initialRelRotation.Z)), qRotation);

            UpdateViewMatrix();
        }
    }

Attached Thumbnails

  • Not corruptrd.png
  • Corrupted.png



#5183667 Calculating smooth normals

Posted by BlackJoker on 29 September 2014 - 12:36 AM


One way (I'm sure there are others) is to average the un-normalized crossproducts before normalizing the result. The crossproduct is the area of the triangle and, therefore, weights the normals proportional to the areas of the triangles that border it. I.e., they will "lean" toward the larger triangles. If your posted method (revised as suggested) doesn't look right to you, you might try that. BTW, I'm not familiar with 3ds so can't help you with regard to what/why 3ds does.

 

Do you mean something like this:

Vector3[] normals = new Vector3[geometryData.Positions.Count];
Int32[] count = new Int32[geometryData.Positions.Count];

for (int i = 0; i < geometryData.IndexBuffer.Count; i += 3)
{
    // Two edge vectors of the current triangle
    Vector3 v0 = geometryData.Positions[geometryData.IndexBuffer[i + 1]].Position - geometryData.Positions[geometryData.IndexBuffer[i]].Position;
    Vector3 v1 = geometryData.Positions[geometryData.IndexBuffer[i + 2]].Position - geometryData.Positions[geometryData.IndexBuffer[i]].Position;

    // Un-normalized cross product: its length is proportional to the triangle's area
    Vector3 normal = Vector3.Cross(v0, v1);

    count[geometryData.IndexBuffer[i]]++;
    count[geometryData.IndexBuffer[i + 1]]++;
    count[geometryData.IndexBuffer[i + 2]]++;

    normals[geometryData.IndexBuffer[i]] += normal;
    normals[geometryData.IndexBuffer[i + 1]] += normal;
    normals[geometryData.IndexBuffer[i + 2]] += normal;
}

for (int i = 0; i < normals.Length; i++)
{
    normals[i] = Vector3.Normalize(normals[i] / count[i]);
}

I created an additional array to store the number of faces contributing to each vertex. Then I divide each summed normal by that count and normalize only once at the end. Is that what you meant?




#5179705 Enable multisampling for WinRT (Windows 8.1 + SharpDX)

Posted by BlackJoker on 11 September 2014 - 04:03 PM

Well, I solved my issue for SharpDX. If anyone runs into the same problem, do the following:

Create all the needed resources when the Device is created: the MSAA depth buffer, the depth-stencil view, and the RenderTarget2D, and set them all on the device. Then, in the Draw method, you need to set the render targets each time and resolve the subresource after drawing. Here is my code:

//Initialization

DepthStencilStateDescription depthStencilDescription = new DepthStencilStateDescription();
depthStencilDescription.IsDepthEnabled = true;
depthStencilDescription.DepthWriteMask = DepthWriteMask.All;
depthStencilDescription.DepthComparison = Comparison.Less;

depthStencilDescription.IsStencilEnabled = true;
depthStencilDescription.StencilReadMask = 0xFF;
depthStencilDescription.StencilWriteMask = 0xFF;

// Stencil operations if the pixel is front-facing
depthStencilDescription.FrontFace.FailOperation = StencilOperation.Keep;
depthStencilDescription.FrontFace.DepthFailOperation = StencilOperation.Increment;
depthStencilDescription.FrontFace.PassOperation = StencilOperation.Keep;
depthStencilDescription.FrontFace.Comparison = Comparison.Always;

// Stencil operations if the pixel is back-facing
depthStencilDescription.BackFace.FailOperation = StencilOperation.Keep;
depthStencilDescription.BackFace.DepthFailOperation = StencilOperation.Decrement;
depthStencilDescription.BackFace.PassOperation = StencilOperation.Keep;
depthStencilDescription.BackFace.Comparison = Comparison.Always;

DepthStencilViewDescription depthStencilViewDescription = new DepthStencilViewDescription();
depthStencilViewDescription.Format = Format.D32_Float;
depthStencilViewDescription.Dimension = DepthStencilViewDimension.Texture2DMultisampled;
depthStencilViewDescription.Flags = DepthStencilViewFlags.None;
depthStencilViewDescription.Texture2DMS = new DepthStencilViewDescription.Texture2DMultisampledResource();

TextureDescription rendertoTextureDescription = new TextureDescription();
rendertoTextureDescription.Width = (int)SwapChainPanel.ActualWidth;
rendertoTextureDescription.Height = (int)SwapChainPanel.ActualHeight;
rendertoTextureDescription.MipLevels = 1;
rendertoTextureDescription.ArraySize = 1;
rendertoTextureDescription.Format = Format.R8G8B8A8_UNorm;
rendertoTextureDescription.SampleDescription.Count = 8;
rendertoTextureDescription.SampleDescription.Quality = 0;
rendertoTextureDescription.Usage = ResourceUsage.Default;
rendertoTextureDescription.BindFlags = BindFlags.RenderTarget | BindFlags.ShaderResource;
rendertoTextureDescription.OptionFlags = ResourceOptionFlags.Shared;
rendertoTextureDescription.CpuAccessFlags = CpuAccessFlags.None;

backbuffertexture = ToDispose(Texture2D.New(GraphicsDevice, rendertoTextureDescription));

rt = ToDispose(RenderTarget2D.New(GraphicsDevice, (SharpDX.Direct3D11.Texture2D)backbuffertexture));

depthStencilState = ToDispose(SharpDX.Toolkit.Graphics.DepthStencilState.New(GraphicsDevice, depthStencilDescription));

depthStencilBuffer = ToDispose(DepthStencilBuffer.New(GraphicsDevice, (int)SwapChainPanel.ActualWidth, (int)SwapChainPanel.ActualHeight, MSAALevel.X8, DepthFormat.Depth32));
depthStencilView = ToDispose(new DepthStencilView(GraphicsDevice, GraphicsDevice.DepthStencilBuffer, depthStencilViewDescription));

device = GraphicsDevice;
GraphicsDevice.SetRenderTargets(depthStencilBuffer, rt);
GraphicsDevice.SetViewports(new ViewportF(0, 0, (int)SwapChainPanel.ActualWidth, (int)SwapChainPanel.ActualHeight));

...
//Drawing

DepthStencilView oldDepth;
RenderTargetView[] oldTargets = GraphicsDevice.GetRenderTargets(out oldDepth);
GraphicsDevice.SetRenderTargets(depthStencilBuffer, rt);
GraphicsDevice.Clear(Color.CornflowerBlue);

// Draw content
...

device.ResolveSubresource(rt, 0, GraphicsDevice.BackBuffer, 0, GraphicsDevice.BackBuffer.Format);

If you want to use SpriteBatch, you need to set the old render targets back after resolving. So, do:

GraphicsDevice.SetRenderTargets(oldDepth, oldTargets);

spriteBatch.Begin();
spriteBatch.Draw(rt, new RectangleF((int)SwapChainPanel.ActualWidth - 500, (int)SwapChainPanel.ActualHeight - 350, 500, 350), new Rectangle(0, 0, 500, 350), Color.White, 0.0f, new Vector2(100, 100), SpriteEffects.None, 0.0f);
spriteBatch.End();

That's all :)




#5175285 Best way to render text with DirectX11

Posted by BlackJoker on 21 August 2014 - 08:55 AM

I can confirm that Windows 7 supports D2D interop with D3D11 now, because I implemented this feature in my engine more than a year ago.




#5174982 How to get rid of camera z-axis rotation

Posted by BlackJoker on 20 August 2014 - 06:31 AM

Thanks, but side and front are not used in XMMatrixLookAtLH(...) or D3DXMatrixLookAtLH(...). Where and when do I need to apply them?




#5173132 Using SharpDX.Toolkit.Effect classes as shader replacement

Posted by BlackJoker on 12 August 2014 - 12:48 PM

Thanks for the real help, everyone *sarcasm*

I found this site http://rbwhitaker.wikidot.com/hlsl-tutorials useful, as it describes the Effect API and shaders, which are almost identical to the SharpDX Effect API.




#5166928 Change GraphicsDeviceManager's GraphicsDevice in SharpDX

Posted by BlackJoker on 15 July 2014 - 01:41 AM

The problem is solved. You need to subscribe to the PreparingDeviceSettings event, where you can override various settings, including the adapter, via its event arguments.




#5124673 SwapChain resize questions

Posted by BlackJoker on 18 January 2014 - 10:57 AM

I have a few questions regarding swap chain resizing.

First question: I implemented resizing just like in the MSDN article, but it doesn't seem to work entirely correctly. After resizing, the back of my model becomes visible from the front. You can see this in the screenshots:

before_res.png
after_res.png
result.png

The first screenshot is before resizing, the second is after resizing, and the third shows how it displays from another angle.

It looks like the stencil operation is not working. Here is my swap chain resizing code:

void D3D11::ResizeSwapChain()
{
	if (swapChain)
	{
		DeinitializeD2D11();
		RECT rc;
		GetClientRect(hwnd, &rc);
		UINT width = rc.right - rc.left;
		UINT height = rc.bottom - rc.top;

		d3d11_DeviceContext->OMSetRenderTargets(0, 0, 0);

		// Release all outstanding references to the swap chain's buffers.
		d3d11_RenderTargetView->Release();

		HRESULT hr;
		// Preserve the existing buffer count and format.
		// Automatically choose the width and height to match the client rect for HWNDs.
		hr = swapChain->ResizeBuffers(0, 0, 0, DXGI_FORMAT_UNKNOWN, 0);


		// Perform error handling here!

		// Get buffer and create a render-target-view.
		ID3D11Texture2D* pBuffer;
		hr = swapChain->GetBuffer(0, __uuidof(ID3D11Texture2D),
			(void**)&pBuffer);
		// Perform error handling here!

		hr = d3d11_Device->CreateRenderTargetView(pBuffer, NULL,
			&d3d11_RenderTargetView);
		// Perform error handling here!
		pBuffer->Release();

		d3d11_DeviceContext->OMSetRenderTargets(1, &d3d11_RenderTargetView, NULL);

		// Set up the viewport.
		D3D11_VIEWPORT viewPort;

		viewPort.Width = (float)width;
		viewPort.Height = (float)height;
		viewPort.MinDepth = 0.0f;
		viewPort.MaxDepth = 1.0f;
		viewPort.TopLeftX = 0.0f;
		viewPort.TopLeftY = 0.0f;
		d3d11_DeviceContext->RSSetViewports(1, &viewPort);

		InitializeD2D11();
	}
}

And the second question: if I created the swap chain with two or more buffers, is this code still correct, or do I need to make some changes to resize all the buffers correctly?




#5118726 How to make tile texture

Posted by BlackJoker on 22 December 2013 - 09:37 AM

Is this the only way to do this?




#5061702 Best way to render complex model

Posted by BlackJoker on 14 May 2013 - 12:58 AM

Hello,

I have what I think is an interesting question, and I don't know the answer yet.

I want to hold in memory and render models that consist of a large number of different meshes (for example, a big spaceship). It has many decks and various objects inside and outside. Of course, I want to be able to add and remove any item to/from this ship at any time.

Each mesh has its own matrix, so when the ship starts moving I have to multiply all of the matrices to move all the objects correctly. But if I have many matrices, I may not be able to finish in time, because multiplying all of them can take a long time, and in that case I will get lag.

If I merge all the meshes in 3ds Max, I lose the ability to move separate objects.

So, my question is: what is an efficient rendering algorithm for models that consist of a huge number of other objects? I don't want a monolithic ship or any other monolithic object; I want to be able to remove any part at any time.

If anyone has faced the same problem, please share which approach you chose for rendering your meshes!

 




#5043911 Loaded scene form COLLADA is inverted in DirectX

Posted by BlackJoker on 17 March 2013 - 04:19 AM

Hello,

I use COLLADA to load mesh data into my engine, using my own converter. I know that each mesh matrix from COLLADA should be transposed for use in DirectX, and I do that, but I ran into the following problem: a scene or single object that I load into my engine after conversion to a binary file is inverted along the Z and Y axes. During the transposition I multiply one value of each mesh matrix by -1, and after that the position along the Y axis is correct, but not along the Z axis. I don't understand how to transform the mesh matrices so that all the meshes in the scene display correctly. For more details, please see the attached screenshots from 3ds Max and my engine.







outputMatrixValues[12] = (float)matrixValues[3];
outputMatrixValues[13] = (float)matrixValues[7];
outputMatrixValues[14] = (float)matrixValues[11] * (-1);
outputMatrixValues[15] = (float)matrixValues[15];

 

 

On the screenshot you can see that the scene in 3ds Max is in the front position, so the positions of both cameras (in my engine and in 3ds Max) match. I am completely confused by this problem. Please help.

Attached Thumbnails

  • comparison.png



#5042070 Artefacts using big model

Posted by BlackJoker on 11 March 2013 - 04:05 PM

Hm, it looks like that was the reason. I set

screenDepth = 60000.0f;
screenNear = 1.0f;

and everything seems OK now. But then I don't understand how to render big/huge maps, for example hundreds of thousands of kilometers of space. Because of the limited screen depth it cannot be rendered. What should I do in that case?



