
About porters

  1. Mesh Picking Accuracy Problem

Found the problem, although I don't really understand why it was a problem. Basically I deleted the line rayDir = Vector3.Normalize(rayDir), so I no longer normalize the ray direction at all. Now it works perfectly. Go figure.
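A plausible reason removing the normalize helped: the ray is unprojected with each beam's own WVP, so Mesh.Intersects reports the hit distance in that beam's local units. Leaving the direction as the unnormalized near-to-far vector makes the reported distance a fraction of the near-far span, which is comparable across beams regardless of their local scale; a normalized direction gives distances in per-beam units that are not comparable. A small Python sketch of that scale-invariance (the numbers are made up; the only assumption is that the intersection distance is measured in multiples of the direction vector's length):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def scale(v, s):
    return tuple(x * s for x in v)

def ray_param(origin, direction, point):
    """t such that origin + t*direction == point (point assumed on the ray)."""
    return dot(sub(point, origin), direction) / dot(direction, direction)

def ray_distance(origin, direction, point):
    """Distance along the *normalized* direction, i.e. in world/local units."""
    n = math.sqrt(dot(direction, direction))
    return dot(sub(point, origin), direction) / n

# near/far points of a picking ray, and a hit 25% of the way along it
near, far = (0.0, 0.0, 0.0), (0.0, 0.0, 1000.0)
hit = (0.0, 0.0, 250.0)
d = sub(far, near)

# t along the unnormalized near-to-far vector: a fraction, scale-free
t = ray_param(near, d, hit)                                             # 0.25
t_scaled = ray_param(scale(near, 0.5), scale(d, 0.5), scale(hit, 0.5))  # 0.25

# distance along the normalized direction: changes with the space's scale,
# so it is not comparable across meshes with different local scales
dist = ray_distance(near, d, hit)                                             # 250.0
dist_scaled = ray_distance(scale(near, 0.5), scale(d, 0.5), scale(hit, 0.5))  # 125.0
```

Here the same hit yields t = 0.25 in both spaces, while the normalized-direction distance halves when the space is scaled by 0.5 — which would make the front-most comparison across differently scaled beams unreliable.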
  2. Hi, I'm having some accuracy issues with my mesh picking code. The problem I'm getting is that sometimes a mesh behind another mesh will be selected instead of the front-most mesh. Some quick background on how my meshes are created and rendered: my application renders a bunch of beams and columns in a 3D model (a CAD-type application). The beams and columns are my meshes. Each mesh is created at the world origin with a standard length of 1000mm. I store a transformation matrix for each beam mesh (which I call the world transform matrix) that is passed to the shader. The shader uses the world transform matrix to scale, rotate and translate the mesh into the correct world position. Since my meshes' vertices are not defined at their correct world positions (remember that each mesh is created at the origin with a standard length), I had to do something a bit different for my picking calculations. I essentially multiplied my beam world transform matrix by the camera world-view-projection (WVP) matrix. I then use this combined matrix in my Vector3.Unproject method to determine the ray position and direction. For the most part, my code works really well. However, there are a few beams that are not selected correctly. I have uploaded some images showing the problem. The image showing the picking working as it should can be found here ([url="http://img209.imageshack.us/img209/5971/correct.png"]correctIMG[/url]), and the image showing the incorrect behaviour can be found here ([url="http://img18.imageshack.us/img18/2397/incorrectm.png"]incorrectIMG[/url]). A beam that has been highlighted/selected is drawn in white. The white cross-hair denotes my mouse cursor location. Also see my picking code below (VB.Net). Any help would be great. Thanks.
[CODE]
Private Sub HighlightBeam(ByVal e As Point)
    If D3Ddev Is Nothing Then
        Return
    End If

    'local variables
    Dim beam As elemental.Beam = Nothing
    Dim prop As elemental.BeamProperty = Nothing
    Dim selectedBeamID As Integer = 0
    Dim IsHit As Boolean = False
    Dim hitSuccess As Boolean = False
    Dim WVP As Matrix = Nothing
    Dim vp As Viewport = Me.Model.Camera.Viewport
    Dim octNode As elemental.OctNode = Nothing
    Dim beamMesh As Mesh = Nothing
    Dim hitDistance As Single = 0.0F
    Dim prevHitDistance As Single = Single.MaxValue

    'perform hit tests for all objects contained in the model bounding box
    For Each octNodeID In Me.Model.Octree.SelectedOctNodeList
        'select the current octnode
        octNode = Me.Model.Octree.OctNodeTable(octNodeID)

        'do nothing if the octnode contains no beams
        If octNode.BeamIDList Is Nothing OrElse octNode.BeamIDList.Count = 0 Then
            Continue For
        End If

        'loop through beams contained in this octnode
        For Each beamID In octNode.BeamIDList
            'retrieve beam
            beam = Me.Model.BeamTable(beamID)
            prop = Me.Model.BeamPropertyTable(beam.BeamPropertyID)

            'perform no action if beam is hidden
            If Not beam.IsVisible Then
                Continue For
            End If

            'get the beam mesh template
            If prop.SectionType = elemental.BeamSectionType.ISection Then
                beamMesh = Me.Model.GetISectionMesh(prop.ID, 1)
            ElseIf prop.SectionType = elemental.BeamSectionType.CircularHollow Then
                beamMesh = Me.Model.GetCHSMesh(prop.ID, Me.Model.Settings.Beams.CircleFacets, Me.Model.Settings.Beams.CurveSegments)
            ElseIf prop.SectionType = elemental.BeamSectionType.LipChannel Then
                beamMesh = Me.Model.GetLipChannelMesh(prop.ID, 1)
            ElseIf prop.SectionType = elemental.BeamSectionType.LSection Then
                beamMesh = Me.Model.GetLSectionMesh(prop.ID, 1)
            ElseIf prop.SectionType = elemental.BeamSectionType.CircularSolid Then
                beamMesh = Me.Model.GetCircularSolidMesh(prop.ID, Me.Model.Settings.Beams.CircleFacets, Me.Model.Settings.Beams.CurveSegments)
            Else
                'unknown section type: no mesh was retrieved, so nothing to dispose
                Return
            End If

            'position and direction of the ray
            Dim rayPos As New Vector3(e.X, e.Y, 0.0F)
            Dim rayDir As New Vector3(e.X, e.Y, 1.0F)

            'world-view-projection transform
            WVP = beam.WorldTransform * Me.Model.Camera.WVP

            'ray data
            rayPos = Vector3.Unproject(rayPos, vp.X, vp.Y, vp.Width, vp.Height, vp.MinZ, vp.MaxZ, WVP)
            rayDir = Vector3.Unproject(rayDir, vp.X, vp.Y, vp.Width, vp.Height, vp.MinZ, vp.MaxZ, WVP)
            rayDir = Vector3.Subtract(rayDir, rayPos)
            rayDir = Vector3.Normalize(rayDir)

            'hit test
            IsHit = beamMesh.Intersects(New Ray(rayPos, rayDir), hitDistance)

            'on hit occurrence, record the selected beam ID
            If IsHit Then
                'record that a hit was observed
                hitSuccess = True

                'if this beam is closer than the previous hit, select it instead
                If hitDistance < prevHitDistance Then
                    'track hit distance for all hit members to determine the front-most one
                    prevHitDistance = hitDistance
                    selectedBeamID = beam.ID
                End If
            End If

            'dispose unmanaged resources
            beamMesh.Dispose()
        Next beamID
    Next octNodeID

    'on hit success, update selected beams
    If hitSuccess Then
        'unhighlight the last highlighted beam
        If Highlight_LastHighlightedBeamID <> 0 Then
            Me.Model.BeamTable(Highlight_LastHighlightedBeamID).IsHighlighted = False
        End If

        'update selected beam table
        If Not Me.Model.BeamTable(selectedBeamID).IsHighlighted Then
            Highlight_LastHighlightedBeamID = selectedBeamID
            Me.Model.BeamTable(selectedBeamID).IsHighlighted = True
        End If
    Else
        'unhighlight the last highlighted beam
        If Highlight_LastHighlightedBeamID <> 0 Then
            Me.Model.BeamTable(Highlight_LastHighlightedBeamID).IsHighlighted = False
            Highlight_LastHighlightedBeamID = 0
        End If
    End If
End Sub
[/CODE]
  3. Well, to anyone who was interested, I figured out what the problem was. I just needed to make a minor adjustment to my vertex translation code (see below). The problem was that I had not taken into account that the origin of the NDC system is the centre of the viewport. Since the NDC system ranges from -1 to 1, I just had to add/subtract one in the vertex translation code.
[CODE]
// translate vertex
position.x += spNDC.x + 1;
position.y += spNDC.y - 1;
[/CODE]
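For reference, the pixel-to-NDC mapping the shader performs can be sketched like this (a Python sketch with hypothetical viewport numbers); it shows why a quad built around the pixel origin lands at NDC (-1, 1) rather than (0, 0), and therefore needs the +1/-1 correction before adding the target point's NDC coordinates:

```python
def pixel_to_ndc(x, y, viewport_w, viewport_h):
    """Map pixel coordinates (origin top-left, y down) to NDC
    (origin at the viewport centre, y up, both axes in [-1, 1]).
    Mirrors the shader's /= ViewportSize, *= (2, -2), -= (1, -1) steps."""
    nx = x / viewport_w * 2.0 - 1.0
    ny = y / viewport_h * -2.0 + 1.0
    return nx, ny

# the pixel origin (top-left) maps to NDC (-1, 1), not (0, 0)
top_left = pixel_to_ndc(0, 0, 800, 600)        # (-1.0, 1.0)
centre = pixel_to_ndc(400, 300, 800, 600)      # (0.0, 0.0)
bottom_right = pixel_to_ndc(800, 600, 800, 600)  # (1.0, -1.0)
```

So after converting the quad's pixel-space vertices to NDC, adding (+1, -1) re-anchors them at the NDC origin, and adding spNDC then places the quad at the unprojected point.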
  4. Hi, I'm trying to work with transformed quads, that is to say, quads that have their vertices defined in screen coordinates. Basically I'm trying to shift those quads to different points on the screen. The trouble is, those points are defined in world coordinates. I'm attempting to write a shader that will accept transformed vertices (from a 2D quad) and shift those vertices to a point on the screen (let's call the point 'p') which is defined in world coordinates. Point 'p' is passed to the shader along with the quad vertex. So basically, I need to convert my point 'p' to screen coordinates in the shader, and then translate my quad coordinates (defined in screen space) to 'p'. I have a base quad with the vertex coordinates (0,1,0,1), (0,-1,0,1), (1,-1,0,1), (1,1,0,1). I then scale and shift the quad in the shader as appropriate. For this example, I scaled the x component of the vertices by 100 and the y component by 2, so you basically end up with a quad 4 pixels high by 100 pixels wide. The quad is built around the screen origin (top-left corner), so all I have to do is shift all the vertices in the quad by 'p' to place the quad in the right spot. I can get my quads to move around to roughly the right positions relative to each other, but not to the correct positions on the screen. It seems the magnitude of the translations is not right (only guessing). The main shader code is posted below. Any help would be much appreciated.
[CODE]
struct VSOutput
{
    float4 Pos_ws;
    float4 Pos_ps;
    float4 Color;
};

VSOutput ComputeVSOutput2DShift(float4 position, float4 col, float4 p)
{
    VSOutput vout;

    // convert point to clip coordinates
    float4 pCLIP = mul(p, WVP);

    // convert point to screen space
    float3 pNDC = pCLIP.xyz / pCLIP.w;

    // adjust height
    position.y *= 2;

    // adjust width
    position.x *= 100;

    // convert vertex to screen space
    position.xy /= ViewportSize;
    position.xy *= float2(2, -2);
    position.xy -= float2(1, -1);

    // translate vertex
    position.x += pNDC.x;
    position.y += pNDC.y;

    // set output data
    vout.Pos_ps = position;
    vout.Color = col;

    return vout;
}
[/CODE]
  5. Thanks guys. Yeah, I had a think about it last night and it seems logical to just have a base beam mesh, pass the actual cross-section geometry parameters to the shader, and let the shader do the work. That saves me having to create a mesh for each beam size, and also from recreating the meshes each frame. @kuana - yes, I realize that's where the performance problems are stemming from, as I mentioned in my earlier post. Thanks for the response.
  6. I'm not very familiar with skinning; I don't use it in my application. I think I know what you're saying though. I am thinking of an integer vertex attribute that can be 0, 1 or 2: 0 indicating that all vertices are affected by scaling, 1 indicating that only the relevant web vertices are affected by scaling, and 2 indicating that only the relevant flange vertices are affected by scaling. I would then have to pass two matrices as part of my instance data: one being the 'global' scaling matrix, and the other being the 'local' scaling matrix. The shader will apply the global scaling matrix to all vertices flagged as 0. I can then make the local transformation matrix contain only a scaling in the x and y directions, so my shader will apply the x scaling component to my vertices flagged as 1 (web), and the y scaling component to my vertices flagged as 2 (flange). Is that essentially what you're saying? I've attached a pic below showing the scaling operations. I'm a bit reluctant to post code, but I don't think the code will tell you anything anyway - I'm just passing a transformation matrix to my shader at the moment, on a per-instance basis. [img]http://imageshack.us/a/img96/6867/isectionscaling.png[/img]
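A minimal CPU-side sketch of that flag-based scaling (Python; the flag values and parameter names just mirror the scheme described above, they are not actual shader code):

```python
def scale_vertices(verts, flags, global_scale, web_x, flange_y):
    """Apply a global per-axis scale to every vertex, then an extra local
    scale to flagged vertices: flag 1 -> web (x), flag 2 -> flange (y).
    Flag 0 vertices get the global scale only. Vertices are (x, y, z)."""
    out = []
    for (x, y, z), f in zip(verts, flags):
        # global scaling matrix, applied to all vertices
        x, y, z = x * global_scale[0], y * global_scale[1], z * global_scale[2]
        if f == 1:
            x *= web_x        # local x scale: adjust web thickness
        elif f == 2:
            y *= flange_y     # local y scale: adjust flange thickness
        out.append((x, y, z))
    return out

# three unit vertices, one per flag type
verts = [(1.0, 1.0, 1.0)] * 3
scaled = scale_vertices(verts, [0, 1, 2], (2.0, 3.0, 1.0), 0.5, 0.25)
```

In a shader the branch would instead be driven by the per-vertex flag attribute, with the two scales supplied as per-instance data, but the arithmetic per vertex is the same.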
  7. Thanks for the reply, Jason. I think I understand what you're saying, although it may be hard to implement in my case. Basically, every vertex has to be scaled. Every type of I-section has a different depth, width, flange thickness, and web thickness. Scaling the height, width and length of the mesh can be done easily with a scaling matrix; it's more the flange and web thicknesses that I am struggling with. So really I would have to do something like this: 1. Scale the mesh with a scaling matrix to give the correct height, width and length. 2. Flag all vertices save for a few to not be affected by further scaling. 3. Apply an additional vertical scale to adjust the flange thickness. 4. Flag all vertices save for a few to not be affected by further scaling. 5. Apply an additional horizontal scaling matrix to adjust the web thickness. Is this possible with your suggestion?
  8. Hi all, I am writing a structural CAD/modelling type of application that utilizes hardware instancing extensively for rendering 3D models. There are typically thousands of beams/girders to draw each frame. These beams are generally comprised of standard structural sections, but for the purpose of this post, let's say that all my beam sections are I-sections (for the non-structural-savvy folks, just picture a steel beam shaped like the capital letter 'I'). Now at first glance, hardware instancing may seem a relatively straightforward choice when you are dealing with thousands of meshes that are geometrically very similar, and that's the approach I adopted. My application performs fairly well for the most part, but when I'm dealing with large models that have hundreds of different sections, I run into performance issues, because there are lots of different types of I-sections. Each section has differences in flange width and thickness, as well as web depth and thickness. I am having to create a new mesh for each different type of I-section, each frame. The reason I am doing it each frame is that I am concerned about the memory cost of storing hundreds of meshes, not to mention having to recreate them every time the graphics device needs to be reset. Having said that, I have a feeling that's the way I'm going to have to go, unless someone more knowledgeable than me can help me out with an alternative solution. Which brings me to my question... Can you locally scale different components of a mesh? When I create my mesh, I'm basically retrieving the cross-section geometry data from a database and then creating the mesh from that. The mesh has a standard length of 1 metre. When it comes to rendering the meshes, I use a world transform to 'stretch' the mesh to the right length. If I could somehow do something similar on a local scale, I could adjust things like flange thickness, width, etc. without having to create a new mesh for each type of I-section. According to PIX, all my performance issues are stemming from the constant locking and unlocking of buffers when I'm creating my meshes each frame, which is very understandable! Can anyone suggest a more efficient way to do what I want? Thanks in advance. Aaron.
  9. I could do that, but I don't see how that solves my problem. I'm after the correct semantic/vertex format to use for passing an arbitrary integer (or float) value to the shader. I want the GPU to ignore this value; it will only be used by me in some calculations.
  10. Hi, basically I'm trying to create a custom vertex and use it in my vertex shader. The vertex is simply a standard position-colored vertex, but I want to add an integer value as well. I can't seem to figure out what vertex format and vertex semantic to use for this integer value. I've tried using the point-size and blend-indices formats/semantics, but the shader doesn't seem to recognize any value I set for Scale. How can I use an integer value in my custom vertex? Here's an example of my vertex structure as defined in my program (VB.Net):
[CODE]
<System.Runtime.InteropServices.StructLayout(Runtime.InteropServices.LayoutKind.Sequential)> _
Public Structure PositionColoredScale
    Public Position As Vector3
    Public Color As Integer
    Public Scale As Integer

    Public Shared ReadOnly Format As VertexFormat = VertexFormat.Position Or VertexFormat.Diffuse Or VertexFormat.None

    Public Shared ReadOnly Property SizeInBytes() As Integer
        Get
            Return System.Runtime.InteropServices.Marshal.SizeOf(GetType(PositionColoredScale))
        End Get
    End Property
End Structure
[/CODE]
Here's the structure in my shader:
[CODE]
struct VSInputVcScl
{
    float4 Position : POSITION;
    float4 Color : COLOR;
    uint Scale : BLENDINDICES0;
};
[/CODE]
Thanks guys.
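For what it's worth, the byte layout that Sequential structure implies can be checked like this (a Python sketch using struct; the idea that a D3D9 vertex declaration element for Scale would need offset 16 is my assumption, not something from the post):

```python
import struct

# Byte layout implied by the Sequential structure above:
#   Position : 3 x float32 -> offset 0,  12 bytes
#   Color    : int32       -> offset 12,  4 bytes
#   Scale    : int32       -> offset 16,  4 bytes
fmt = "<3f i i"                          # little-endian, no padding
stride = struct.calcsize(fmt)            # 20-byte vertex stride
scale_offset = struct.calcsize("<3f i")  # 16: offset a declaration element for Scale would use

# round-trip one vertex to confirm the layout
packed = struct.pack(fmt, 1.0, 2.0, 3.0, 255, 7)
x, y, z, color, scale = struct.unpack(fmt, packed)
```

The stride matching SizeInBytes (20) is a quick sanity check that there is no hidden padding between the fields.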
  11. Hi, I'm trying to extract position and normal vectors from a standard cube mesh. I can extract the position vectors okay, as they form the first 3 floats in the sequence, but I'm having trouble trying to retrieve the normal vectors. Can anyone tell me how to do this? I would appreciate a VB.Net example if possible. Cheers. Here's some example code:
[CODE]
Private Sub ExtractVectors()
    'local variables
    Dim cube As Mesh = Mesh.CreateBox(gf.Graphics.Device, 100, 100, 100)
    Dim stream As DataStream = Nothing
    Dim positions() As Vector3 = Nothing
    Dim normals() As Vector3 = Nothing

    'datastream
    stream = cube.VertexBuffer.Lock(0, 0, LockFlags.None)

    'retrieve position vectors
    positions = SlimDX.Direct3D9.D3DX.GetVectors(stream, cube.VertexCount, cube.BytesPerVertex)

    'retrieve normal vectors
    '????

    'close datastream
    cube.VertexBuffer.Unlock()

    'dispose unmanaged resources
    stream.Dispose()
    cube.Dispose()
End Sub
[/CODE]
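To illustrate the general idea, here is a language-neutral sketch (Python) of reading interleaved vertices from raw vertex-buffer bytes. It assumes the mesh stores a float3 position at offset 0 followed by a float3 normal at offset 12 - an assumption, so check the mesh's vertex format/declaration to confirm the normal's actual offset:

```python
import struct

def read_positions_and_normals(buf, vertex_count, stride, normal_offset=12):
    """Read interleaved position (offset 0) and normal (offset normal_offset)
    vectors from raw vertex-buffer bytes. Assumes a Position|Normal layout."""
    positions, normals = [], []
    for i in range(vertex_count):
        base = i * stride                # start of this vertex within the buffer
        positions.append(struct.unpack_from("<3f", buf, base))
        normals.append(struct.unpack_from("<3f", buf, base + normal_offset))
    return positions, normals

# two fake 24-byte vertices: position followed by normal
buf = struct.pack("<6f", 1.0, 2.0, 3.0, 0.0, 0.0, 1.0) \
    + struct.pack("<6f", 4.0, 5.0, 6.0, 0.0, 1.0, 0.0)
positions, normals = read_positions_and_normals(buf, 2, 24)
```

The same offset-plus-stride walk applies to a locked DataStream: seek to (vertex index * BytesPerVertex) + the normal's offset and read three floats.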
  12. OK guys, problem solved. I performed a clean install of the latest Nvidia drivers for my card and it's running fine now. Perhaps there were remnants of the old driver causing problems. Thanks for the help guys. Much appreciated.
  13. Thanks for the suggestions guys. I'll have to wait a few more hours before I'm home to try some of these solutions, but I can answer a few questions now. @mhagain - I'll check the power-saving settings when I'm home. I only get this problem specifically when I'm using hardware instancing; otherwise the framerate is fine, or perhaps what I'm drawing outside of the instancing cases isn't intensive enough for me to notice an issue. I have already tried specifically creating the device with HARDWARE_VERTEXPROCESSING, as I thought that might be an issue as well. I can confirm that the device is successfully created with these flags, so I can only assume this isn't the problem. @Adam_42: 1. I've tried completely disabling antialiasing to no effect. When you say "Make sure you are not using a debug build", what do you mean exactly? Do you mean make sure I have set the DirectX Control Panel settings to use the retail version rather than the debug version? If so, I can confirm I have set this to the retail version. 2. I can't swap the graphics card back, as the reason I upgraded my hardware is that the GTX470 died and needed to be replaced. I shouldn't need to do this anyway because, as I mentioned, my application runs fine on my work PC with the exact same code. 3. I'll download PerfHUD when I'm home and see what that tells me. The only things I can think of that may be causing problems are: 1. Something to do with Windows or VS2010 settings (I performed a fresh install of both, so any previous settings I had would have been wiped). 2. DirectX9 or graphics card settings. 3. The graphics card struggling with DirectX9 / Shader Model 3.0 for some reason?
  14. Hi Sneezy, the drivers are already up to date. I should mention that the same code works fine on my work PC, which runs an NVIDIA Quadro FX 1800. It is only my home PC (the one I upgraded) where I am experiencing this issue. I had noticed that I was getting a lot of warnings from Direct3D9. This is the warning I am getting: "Direct3D9: (Warn): Ignoring redundant SetSamplerState Sampler: 0, State:##". This is occurring numerous times per frame. I did notice this before, so I set DirectX to retail mode, which helped a lot on my work PC, but has no effect on my home PC. I'm curious about this warning. Is it referring to shader variables that are being set? I don't understand why it would have such a dramatic effect on my home PC, but no effect on my work PC, if this is the problem at all.
  15. Hi all, I recently upgraded my PC, which included a new SSD, CPU and graphics card (Nvidia GTX570). I also did a fresh install of Windows 7 and Visual Studio 2010. After I upgraded my PC, I continued working on a SlimDX-based application in VS2010. I've noticed that the FPS of my rendered scenes is horrific now (around 10 FPS), whereas the exact same code prior to the upgrade was running at around 90 FPS. My previous graphics card was a GTX470. The FPS seems to drop dramatically when rendering polys using hardware instancing. I'm using shader model 3. Any idea what is going on? Any help would be much appreciated. Regards, Aaron