
Member Since 28 Nov 2010
Offline Last Active Jan 28 2015 12:59 AM

Topics I've Started

Mesh Picking Accuracy Problem

04 December 2012 - 07:57 PM


I'm having some accuracy issues with my mesh picking code. The problem I'm getting is that sometimes a mesh behind another mesh is selected instead of the front-most mesh.

Just some quick background on how my meshes are created and rendered. My application essentially renders a bunch of beams and columns in a 3D model (a CAD-type application). The beams and columns are my meshes. Each mesh is created at the world origin with a standard length of 1000mm. I store a transformation matrix for each beam mesh (which I call the world transform matrix) that is passed to the shader. The shader uses the world transform matrix to scale, rotate and translate the mesh into the correct world position.

Since my meshes' vertices are not defined at their correct world positions (remember that each mesh is created at the origin with a standard length), I had to do something a bit different for my picking calculations. I essentially multiply my beam world transform matrix by the camera's world-view-projection (WVP) matrix, then use this combined matrix in my Vector3.Unproject calls to determine the ray position and direction.

For the most part, my code works really well. However, there are a few beams that are not selected correctly. I have uploaded some images showing the problem: the picking working as it should can be found here (correctIMG), and the incorrect behaviour can be found here (incorrectIMG). A beam that has been highlighted/selected is drawn in white, and the white cross-hair denotes my mouse cursor location. My picking code is below (VB.Net).

Any help would be great. Thanks.
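For what it's worth, the screen-to-NDC mapping that Vector3.Unproject applies to the two ray endpoints can be sketched as below (Python, hypothetical helper names). The multiplication by the inverse of the supplied matrix is skipped here by assuming an identity WVP, so clip space and world space coincide; this is a sketch of the mapping only, not the full method:

```python
# Hedged sketch of the screen-to-NDC mapping behind Vector3.Unproject.
# Assumption: identity WVP, so no inverse-matrix multiply is needed and
# the unprojected points come out in world space directly.

def screen_to_ndc(px, py, pz, vp_x, vp_y, vp_w, vp_h, min_z, max_z):
    """Map a screen-space point into normalized device coordinates."""
    x = (px - vp_x) / vp_w * 2.0 - 1.0
    y = -((py - vp_y) / vp_h * 2.0 - 1.0)  # screen y grows downward
    z = (pz - min_z) / (max_z - min_z)
    return (x, y, z)

def pick_ray(mouse_x, mouse_y, vp_w, vp_h):
    """Build a normalized picking ray from the near (z=0) and far (z=1) points."""
    near = screen_to_ndc(mouse_x, mouse_y, 0.0, 0, 0, vp_w, vp_h, 0.0, 1.0)
    far = screen_to_ndc(mouse_x, mouse_y, 1.0, 0, 0, vp_w, vp_h, 0.0, 1.0)
    d = tuple(f - n for f, n in zip(far, near))
    length = sum(c * c for c in d) ** 0.5
    return near, tuple(c / length for c in d)

# e.g. the centre of an 800x600 viewport yields a ray straight down +z
origin, direction = pick_ray(400, 300, 800, 600)
```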

Private Sub HighlightBeam(ByVal e As Point)
		If D3Ddev Is Nothing Then
			Exit Sub
		End If

		'local variables
		Dim beam As elemental.Beam = Nothing
		Dim prop As elemental.BeamProperty = Nothing
		Dim selectedBeamID As Integer = 0
		Dim IsHit As Boolean = False
		Dim hitSuccess As Boolean = False
		Dim WVP As Matrix = Nothing
		Dim vp As Viewport = Me.Model.Camera.Viewport
		Dim octNode As elemental.OctNode = Nothing
		Dim beamMesh As Mesh = Nothing
		Dim hitDistance As Single = 0.0F
		Dim prevHitDistance As Single = Single.MaxValue

		'perform hit tests for all objects contained in the model bounding box
		For Each octNodeID In Me.Model.Octree.SelectedOctNodeList
			'select the current octnode
			octNode = Me.Model.Octree.OctNodeTable(octNodeID)
			'do nothing if the octnode contains no beams
			If octNode.BeamIDList Is Nothing OrElse octNode.BeamIDList.Count = 0 Then
				Continue For
			End If
			'loop through beams contained in this octnode
			For Each beamID In octNode.BeamIDList
				'retrieve beam
				beam = Me.Model.BeamTable(beamID)
				prop = Me.Model.BeamPropertyTable(beam.BeamPropertyID)
				'perform no action if beam is hidden
				If Not beam.IsVisible Then
					Continue For
				End If
				'get the beam mesh template
				Select Case prop.SectionType
					Case elemental.BeamSectionType.ISection
						beamMesh = Me.Model.GetISectionMesh(prop.ID, 1)
					Case elemental.BeamSectionType.CircularHollow
						beamMesh = Me.Model.GetCHSMesh(prop.ID, Me.Model.Settings.Beams.CircleFacets, Me.Model.Settings.Beams.CurveSegments)
					Case elemental.BeamSectionType.LipChannel
						beamMesh = Me.Model.GetLipChannelMesh(prop.ID, 1)
					Case elemental.BeamSectionType.LSection
						beamMesh = Me.Model.GetLSectionMesh(prop.ID, 1)
					Case elemental.BeamSectionType.CircularSolid
						beamMesh = Me.Model.GetCircularSolidMesh(prop.ID, Me.Model.Settings.Beams.CircleFacets, Me.Model.Settings.Beams.CurveSegments)
				End Select

				'ray endpoints on the near and far planes (screen space)
				Dim rayPos As New Vector3(e.X, e.Y, 0.0F)
				Dim rayDir As New Vector3(e.X, e.Y, 1.0F)

				'world-view-projection transform
				WVP = beam.WorldTransform * Me.Model.Camera.WVP

				'ray data
				rayPos = Vector3.Unproject(rayPos, vp.X, vp.Y, vp.Width, vp.Height, vp.MinZ, vp.MaxZ, WVP)
				rayDir = Vector3.Unproject(rayDir, vp.X, vp.Y, vp.Width, vp.Height, vp.MinZ, vp.MaxZ, WVP)
				rayDir = Vector3.Subtract(rayDir, rayPos)
				rayDir = Vector3.Normalize(rayDir)

				'hit test
				IsHit = beamMesh.Intersects(New Ray(rayPos, rayDir), hitDistance)
				'on hit occurrence, record the selected beam ID
				If IsHit Then

					'record that a hit was observed
					hitSuccess = True

					'if beam is closer than the previous hit success, select this beam instead
					If hitDistance < prevHitDistance Then

						'track hit distance for all selected members to determine which one is in front of the other
						prevHitDistance = hitDistance
						selectedBeamID = beam.ID
					End If
				End If

				'dispose unmanaged resources

			Next beamID

		Next octNodeID

		'on hit success, update selected beams
		If hitSuccess Then

			'unhighlight the previously highlighted beam
			If Highlight_LastHighlightedBeamID <> 0 Then
				Me.Model.BeamTable(Highlight_LastHighlightedBeamID).IsHighlighted = False
			End If

			'highlight the newly selected beam and record its ID
			Highlight_LastHighlightedBeamID = selectedBeamID
			Me.Model.BeamTable(selectedBeamID).IsHighlighted = True
		End If
	End Sub

World Space to Screen Space Conversions

04 December 2012 - 12:38 AM


I'm trying to work with transformed quads, that is to say, quads that have their vertices defined in screen coordinates. Basically, I'm trying to shift those quads to different points on the screen. The trouble is, those points are defined in world coordinates.

I'm attempting to write a shader that accepts transformed vertices (from a 2D quad) and shifts them to a point on the screen (let's call it 'p') which is defined in world coordinates. Point 'p' is passed to the shader along with the quad vertex.

So basically, I need to convert my point 'p' to screen coordinates in the shader, and then translate my quad coordinates (defined in screen space) to 'p'.

I have a base quad with the vertex coordinates (0,1,0,1), (0,-1,0,1), (1,-1,0,1), (1,1,0,1). I then scale and shift the quad in the shader as appropriate. For this example, I scaled the x component of the vertices by 100 and the y component by 2, so you end up with a quad 100 pixels wide by 4 pixels high. The quad is built around the screen origin (top-left corner), so all I have to do is shift all the vertices in the quad by 'p' to place the quad in the right spot.

I can get my quads to move to roughly the right positions relative to each other, but not to the correct positions on the screen. It seems the magnitude of the translations is off (only guessing).
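Numerically, the two conversions involved can be sketched like this (Python, hypothetical helper names; an identity WVP is assumed so the perspective divide is the only step from world space to NDC):

```python
# Hedged sketch of the two conversions the shader has to agree on.
# Assumption: identity WVP, so world and clip space coincide here.

def world_to_ndc(clip_xyz, w):
    """Perspective divide: clip-space point -> normalized device coords."""
    return tuple(c / w for c in clip_xyz)

def pixel_offset_to_ndc(dx_px, dy_px, viewport_w, viewport_h):
    """Pixel-sized offsets -> NDC offsets (x spans 2 units, y is flipped)."""
    return (dx_px / viewport_w * 2.0, dy_px / viewport_h * -2.0)

# a 100x4 pixel quad on an 800x600 viewport spans 0.25 NDC units in x
dx, dy = pixel_offset_to_ndc(100, 4, 800, 600)
```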

The main shader code is posted below. Any help would be much appreciated.

struct VSOutput
{
	float4 Pos_ws : TEXCOORD0;
	float4 Pos_ps : POSITION0;
	float4 Color  : COLOR0;
};

VSOutput ComputeVSOutput2DShift(float4 position, float4 col, float4 p)
{
	VSOutput vout;

	// convert point to clip coordinates
	float4 pCLIP = mul(p, WVP);

	// convert point to normalized device coordinates
	float3 pNDC = pCLIP.xyz / pCLIP.w;

	// adjust height
	position.y *= 2;

	// adjust width
	position.x *= 100;

	// convert vertex from pixels to NDC
	position.xy /= ViewportSize;
	position.xy *= float2(2, -2);
	position.xy -= float2(1, -1);

	// translate vertex
	position.x += pNDC.x;
	position.y += pNDC.y;

	// set output data
	vout.Pos_ws = p;
	vout.Pos_ps = position;
	vout.Color = col;
	return vout;
}

Hardware Instancing - Optimization Query

15 November 2012 - 07:08 PM

Hi all,

I am writing a structural CAD/modelling type of application that utilizes hardware instancing extensively for rendering 3D models.

Basically, there are typically thousands of beams/girders to draw each frame. These beams generally comprise standard structural sections, but for the purpose of this post, let's say all my beam sections are I-sections (for the non-structurally-savvy folks, just picture a steel beam shaped like the capital letter 'I').

Now at first glance, hardware instancing may seem a relatively straightforward choice when you are dealing with thousands of meshes that are geometrically very similar. As such, that's the approach I adopted. My application performs fairly well for the most part, but when I'm dealing with large models that have hundreds of different sections, I run into performance issues.

Basically, lots of different types of I-section exist. Each section has differences in flange width and thickness, as well as web depth and thickness, so I am having to create a new mesh for each different type of I-section, each frame. The reason I do it each frame is that I am concerned about the memory cost of storing hundreds of meshes, not to mention having to recreate them every time the graphics device needs to be reset. Having said that, I have a feeling that's the way I'm going to have to go, unless someone more knowledgeable than me can help me out with an alternative solution. Which brings me to my question...

Can you locally scale different components of a mesh? When I create my mesh, I'm basically retrieving the cross-section geometry data from a database and then creating the mesh using that. The mesh has a standard length of 1 metre. When it comes to rendering the meshes, I use a world transform to 'stretch' the mesh to the right length. If I can somehow do something similar on a local scale, I could adjust things like flange thickness and width without having to create a new mesh for each type of I-section. According to PIX, all my performance issues stem from the constant locking and unlocking of buffers when I'm creating my meshes each frame, which is very understandable!
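To illustrate the parametric idea, here is a hedged sketch (Python, hypothetical function and point ordering) of generating an I-section cross-section outline from its four parameters rather than storing a separate mesh per size:

```python
# Hypothetical sketch: build the 12-corner outline of an I-section
# cross-section from its four parameters, so geometry can be regenerated
# (or scaled per component) instead of baking each size into a mesh.

def i_section_outline(depth, flange_width, flange_t, web_t):
    """Return the 12 (y, z) corners of an I-section, centroid at the origin."""
    hd, hw, ht = depth / 2.0, flange_width / 2.0, web_t / 2.0
    return [
        (-hw, hd), (hw, hd),                           # top flange, top edge
        (hw, hd - flange_t), (ht, hd - flange_t),      # top flange underside
        (ht, -hd + flange_t), (hw, -hd + flange_t),    # down the web
        (hw, -hd), (-hw, -hd),                         # bottom flange, bottom edge
        (-hw, -hd + flange_t), (-ht, -hd + flange_t),  # back up the web
        (-ht, hd - flange_t), (-hw, hd - flange_t),    # top flange underside
    ]

# e.g. a 300 deep x 150 wide section with 10mm flanges and a 6mm web
pts = i_section_outline(300.0, 150.0, 10.0, 6.0)
```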

Can anyone suggest a more efficient way to do what I want?

Thanks in advance.


Creating a custom vertex for use in Vertex Shader

14 March 2012 - 09:10 PM


Basically, I'm trying to create a custom vertex and use it in my vertex shader. The vertex is simply a standard position-colored vertex, but I want to add an integer value as well. I can't seem to figure out what vertex format and vertex semantic to use for this integer value. I've tried using the point-size and blend-indices formats/semantics, but the shader doesn't seem to recognize any value I set for Scale.

How can I use an integer value in my custom vertex?

Here's an example of my vertex structure as defined in my program (VB.Net):

<System.Runtime.InteropServices.StructLayout(Runtime.InteropServices.LayoutKind.Sequential)> _
Public Structure PositionColoredScale
	Public Position As Vector3
	Public Color As Integer
	Public Scale As Integer
	Public Shared ReadOnly Format As VertexFormat = VertexFormat.Position Or VertexFormat.Diffuse
	Public Shared ReadOnly Property SizeInBytes() As Integer
		Get
			Return System.Runtime.InteropServices.Marshal.SizeOf(GetType(PositionColoredScale))
		End Get
	End Property
End Structure

Here's the structure in my shader:

struct VSInputVcScl
{
	float4 Position : POSITION;
	float4 Color    : COLOR;
	uint Scale      : BLENDINDICES0;
};
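For reference, the byte layout the VB structure implies can be sketched as below (Python; assumes LayoutKind.Sequential packs these fields with no padding, and the semantic names are just labels for the offsets a matching vertex declaration would need):

```python
# Sketch of the assumed byte layout: Vector3 position, int color, int scale,
# packed sequentially with no padding. Offsets are per-semantic byte starts.
import struct

VERTEX_FMT = "<3f i i"  # Vector3 Position, Integer Color, Integer Scale

stride = struct.calcsize(VERTEX_FMT)            # bytes per vertex
offsets = {
    "POSITION": 0,                              # Vector3 at the front
    "COLOR": struct.calcsize("<3f"),            # after 12 bytes of position
    "BLENDINDICES0": struct.calcsize("<3f i"),  # after position + color
}
```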

Thanks guys.

D3DX.GetVectors Function

04 March 2012 - 11:51 PM


I'm trying to extract position and normal vectors from a standard cube mesh. I can extract the position vectors okay, as they form the first 3 floats in the sequence, but I'm having trouble trying to retrieve the normal vectors. Can anyone tell me how to do this?

I would appreciate a VB.Net example if possible. Cheers.

Here's some example code:

Private Sub ExtractVectors()

	'local variables
	Dim cube As Mesh = Mesh.CreateBox(gf.Graphics.Device, 100, 100, 100)
	Dim stream As DataStream = Nothing
	Dim positions() As Vector3 = Nothing
	Dim normals() As Vector3 = Nothing

	stream = cube.VertexBuffer.Lock(0, 0, LockFlags.None)

	'retrieve position vectors (first 12 bytes of each vertex)
	positions = SlimDX.Direct3D9.D3DX.GetVectors(stream, cube.VertexCount, cube.BytesPerVertex)

	'retrieve normal vectors by skipping the 12-byte position at the start
	'of each vertex (assumes GetVectors reads from the stream's current position)
	stream.Position = 12
	normals = SlimDX.Direct3D9.D3DX.GetVectors(stream, cube.VertexCount, cube.BytesPerVertex)

	'close datastream
	cube.VertexBuffer.Unlock()

	'dispose unmanaged resources
	stream.Dispose()
	cube.Dispose()
End Sub
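The stride-and-offset idea behind this extraction can be sketched language-agnostically (Python, hypothetical helper; the 12-byte normal offset assumes the usual position+normal layout of a generated box mesh):

```python
# Sketch of pulling interleaved attributes out of a vertex buffer:
# positions sit at byte offset 0 of each vertex and, assuming a
# position+normal layout, normals sit at offset 12 within each stride.
import struct

def get_vectors(buf, count, stride, offset):
    """Unpack one float3 per vertex, `offset` bytes into each stride."""
    return [struct.unpack_from("<3f", buf, i * stride + offset)
            for i in range(count)]

# two fake 24-byte vertices: position (x, y, z) followed by normal (nx, ny, nz)
raw = struct.pack("<6f", 1, 2, 3, 0, 1, 0) + struct.pack("<6f", 4, 5, 6, 0, 0, 1)
positions = get_vectors(raw, 2, 24, 0)
normals = get_vectors(raw, 2, 24, 12)
```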