About fkhan
  1. I can't figure out why the app is having trouble locking the vertex buffer. Here is my code snippet:

[code]
// is this necessary?
if (vertexBuffer != null)
{
    if (!vertexBuffer.Disposed)
    {
        vertexBuffer.Dispose();
    }
}

vertexBuffer = new VertexBuffer(Device,
    VertexPositionNormalTextureColorTangent.SizeInBytes * vertexCount,
    Usage.WriteOnly, SlimDX.Direct3D9.VertexFormat.None, Pool.Default);

[color="#FF0000"]DataStream dataStream = vertexBuffer.Lock(0, 0, LockFlags.None);[/color] // exception is raised while attempting to lock the vertex buffer

foreach (List<VertexPositionNormalTextureColor> vlist in verticesNmTxCollst)
{
    foreach (VertexPositionNormalTextureColor vertex in vlist)
    {
        dataStream.Write<VertexPositionNormalTextureColor>(vertex);
    }
}
vertexBuffer.Unlock();
[/code]

I have plenty of system memory available, and this happens randomly when loading models. There are no D3D leaks or warnings showing.
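[i]Editor's aside: the size passed to the VertexBuffer constructor must exactly match what the sequential DataStream.Write calls produce. The byte arithmetic can be sanity-checked outside of SlimDX; this is a hedged Python sketch where the vertex layout (15 float32 fields, 60 bytes) is an assumption for illustration, not the actual SizeInBytes of the SlimDX struct:[/i]

```python
import struct

# Hypothetical vertex layout: position (3 floats), normal (3 floats),
# texcoord (2 floats), diffuse color (4 floats), tangent (3 floats).
# The real SlimDX vertex struct's SizeInBytes may differ.
FLOATS_PER_VERTEX = 3 + 3 + 2 + 4 + 3
SIZE_IN_BYTES = FLOATS_PER_VERTEX * 4  # 4 bytes per float32

def pack_vertices(vertices):
    """Pack per-vertex float tuples into one byte buffer, mimicking
    sequential DataStream.Write calls into a locked vertex buffer."""
    out = bytearray()
    for v in vertices:
        assert len(v) == FLOATS_PER_VERTEX, "vertex has wrong field count"
        out += struct.pack("<%df" % FLOATS_PER_VERTEX, *v)
    return bytes(out)

verts = [tuple(float(i) for i in range(FLOATS_PER_VERTEX))] * 3
buf = pack_vertices(verts)
# the buffer size must equal stride * vertexCount, or the Lock/Write
# sequence will overrun the allocation
print(len(buf) == SIZE_IN_BYTES * 3)  # True
```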
  2. Sure enough, the solution was to simply create my HWND handle from the MainWindow class. Now that's what I call two birds with one stone! No more out-of-video-memory exception and no more multithreaded warnings.
  3. I found the cause of the out-of-video-memory exception: I am creating my device with CreateFlags.Multithreaded. My viewport is actually a control that is initialized inside a WPF app. If I remove the Multithreaded flag, I start getting: Direct3D9: (WARN) : Device that was created without D3DCREATE_MULTITHREADED is being used by a thread other than the creation thread. I recall reading somewhere that the device must be created by the parent window. Going to try creating the D3D device from my WPF app next...
  4. [quote name='unbird' timestamp='1305020608' post='4808897'] [quote]Can I delete/clear the vertexbuffer before each call?[/quote] Yes. And I think that's actually your leak problem. Explicitly call Dispose() on your vertexBuffer. DirectX resources won't get freed by the garbage collector automatically, you have to do it manually (in the current SlimDX version always with a Dispose() call). To further reduce payload you could consider using different types for your vertex elements (e.g. do you really need a full Color4 for the diffuse ?). I agree with mhagain: I wonder if your LOH approach is helping here, especially since the resources are finally bound either by the DX runtime or by the driver/graphics card memory. Never played with LOH problems, but this is CLR stuff as far as I understand. Still: Would be interesting to know if you get any benefit from that approach. [/quote]

Well, one reason for the LOH approach is that I was getting out-of-memory exceptions while iterating through the large VB. Managed (32-bit) apps hit a memory cap of roughly 1.3 GB, after which they crash. But the concern is the video memory leak. I am calling Dispose after each model load, like this:

[code]
public void ResetScene()
{
    Device.VertexDeclaration = null;
    if (sceneModels != null)
    {
        foreach (ColladaModel cm in sceneModels)
        {
            if (cm.DiffuseTexture != null) cm.DiffuseTexture.Dispose();
            if (cm.NormalTexture != null) cm.NormalTexture.Dispose();
            if (cm.SpecularTexture != null) cm.SpecularTexture.Dispose();
            if (cm.ReflectionTexture != null) cm.ReflectionTexture.Dispose();
            if (!cm.VertexBuffer.Disposed) cm.VertexBuffer.Dispose();
            if (!cm.VertexDeclaration.Disposed) cm.VertexDeclaration.Dispose();
        }
        sceneModels.Clear();
        Device.SetStreamSource(0, null, 0, 0);
    }
}
[/code]

The vertex buffer should be freed, I would assume, but somehow it keeps using up video memory until it runs out... I can't get rid of Color4, since I want transparency and reflection.
  5. I get an out-of-video-memory D3D exception when calling:

[code]
vertexBuffer = new VertexBuffer(Device,
    VertexPositionNormalTextureColorTangent.SizeInBytes * vertexCount,
    Usage.WriteOnly, SlimDX.Direct3D9.VertexFormat.None, Pool.Default);
[/code]

Can I delete/clear the vertex buffer before each call? I have about 2 GB of video memory according to DxDiag. I have D3D debug output turned on with break on memory leaks, but I don't see any messages in the output window.

Also, in the code below, I am trying to avoid creating arrays on the large object heap so as to avoid fragmentation, but I am having second thoughts about my approach. I calculate the maximum array size based on the LOH limit of 85,000 bytes (I'm using 80,000 to be safe), then calculate the capacity of each array and create new ones when the maximum size is reached:

[code]
int sizeNmTxTn = maxloh / VertexPositionNormalTextureColorTangent.SizeInBytes;
[/code]

I check whether the size of the list is greater than or equal to the maximum allowed, to keep the arrays off the LOH, and add new ones as necessary:

[code]
if (verticesNmTxTn.Count >= sizeNmTxTn)
{
    verticesNmTxTn = new List<VertexPositionNormalTextureColorTangent>(sizeNmTxTn);
    verticesNmTxTxlst.Add(verticesNmTxTn);
}
[/code]

What do you guys think of this approach? For managed D3D apps (SlimDX), it is a pain to have to break arrays down into smaller ones to keep them off the LOH. Are there better approaches to creating large vertex buffers without creating arrays on the LOH?
[code]
public void LoadVerticesNmTxTn()
{
    Geometry geometry = mesh.Geometry;
    Matrix meshRootTransform = getModelTransformationMatrix();
    List<Vertex> vertexlst = geometry.Vertices;
    vertexCount = vertexlst.Count;

    int maxloh = 80000;
    int sizeNmTxTn = maxloh / VertexPositionNormalTextureColorTangent.SizeInBytes;

    List<List<VertexPositionNormalTextureColorTangent>> verticesNmTxTxlst =
        new List<List<VertexPositionNormalTextureColorTangent>>();
    verticesNmTxTn = new List<VertexPositionNormalTextureColorTangent>(sizeNmTxTn);
    verticesNmTxTxlst.Add(verticesNmTxTn);

    foreach (Vertex vertexData in vertexlst)
    {
        float x = vertexData.Position.X;
        float y = vertexData.Position.Y;
        float z = vertexData.Position.Z;
        Vector3 position = new Vector3(x, y, z);
        position = Vector3.TransformCoordinate(position, meshRootTransform);

        float nX = vertexData.Normal.X;
        float nY = vertexData.Normal.Y;
        float nZ = vertexData.Normal.Z;
        Vector3 normalPosition = new Vector3(nX, nY, nZ);
        //normalPosition = Vector3.TransformCoordinate(normalPosition, meshRootTransform);

        float u = vertexData.TextureCoordinate.U;
        float v = -(vertexData.TextureCoordinate.V - 1);
        float w = vertexData.TextureCoordinate.W;
        Vector2 textureCoordinate = new Vector2(u, v);

        float tX = vertexData.Tangent.X;
        float tY = vertexData.Tangent.Y;
        float tZ = vertexData.Tangent.Z;
        Vector3 tangentPosition = new Vector3(tX, tY, tZ);

        float R = diffuseColor.X;
        float G = diffuseColor.Y;
        float B = diffuseColor.Z;
        float A = diffuseColor.W;
        Color4 diffuse = new Color4(A, R, G, B);

        // start a new sub-list before the current one would spill onto the LOH
        if (verticesNmTxTn.Count >= sizeNmTxTn)
        {
            verticesNmTxTn = new List<VertexPositionNormalTextureColorTangent>(sizeNmTxTn);
            verticesNmTxTxlst.Add(verticesNmTxTn);
        }

        verticesNmTxTn.Add(new VertexPositionNormalTextureColorTangent(
            position, normalPosition, textureCoordinate, diffuse, tangentPosition));
    }

    if (VertexPositionNormalTextureColorTangent.SizeInBytes * vertexCount > Device.AvailableTextureMemory)
    {
        MessageBox.Show("Not enough texture memory available!");
    }

    vertexBuffer = new VertexBuffer(Device,
        VertexPositionNormalTextureColorTangent.SizeInBytes * vertexCount,
        Usage.WriteOnly, SlimDX.Direct3D9.VertexFormat.None, Pool.Default);

    DataStream dataStream = vertexBuffer.Lock(0, 0, LockFlags.None);
    foreach (List<VertexPositionNormalTextureColorTangent> vlist in verticesNmTxTxlst)
    {
        foreach (VertexPositionNormalTextureColorTangent vertex in vlist)
        {
            dataStream.Write<VertexPositionNormalTextureColorTangent>(vertex);
        }
    }
    vertexBuffer.Unlock();
}
[/code]
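[i]Editor's aside: the sub-list capacity arithmetic in the post above can be checked in isolation. This is a hedged Python sketch of the same LOH-avoidance chunking idea; the 60-byte stride is an illustrative assumption, not the real SizeInBytes of the SlimDX vertex struct:[/i]

```python
# Sketch of the LOH-avoidance chunking described above. Arrays of 85,000
# bytes or more land on the .NET large object heap; the post uses an
# 80,000-byte safety margin. The stride here is assumed for illustration.
MAX_LOH = 80_000   # safety margin, mirroring maxloh in the C# code
STRIDE = 60        # assumed bytes per vertex

def chunk_vertices(vertices, max_bytes=MAX_LOH, stride=STRIDE):
    """Split a vertex sequence into sub-lists whose backing arrays
    stay under the LOH threshold."""
    capacity = max_bytes // stride          # vertices per sub-list
    chunks = [[]]
    for v in vertices:
        if len(chunks[-1]) >= capacity:     # start a new sub-list when full
            chunks.append([])
        chunks[-1].append(v)
    return chunks

chunks = chunk_vertices(range(5000))
# every chunk's backing array stays below the safety margin
print(all(len(c) * STRIDE <= MAX_LOH for c in chunks))  # True
```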
  6. How can we load large objects into a vertex buffer without using the large object heap? I have a Collada model of ~80 MB. 3ds Max takes about 550 MB to load and display the model, whereas my Collada viewer takes about 550 MB to load and parse the model and then another 500 MB to display it. I have tried avoiding the large object heap altogether by tracking my array sizes when parsing Collada files: whenever an array gets close to 85,000 bytes, I allocate a new one. Should this not prevent objects from being thrown onto the LOH? Another thing I am considering is using the fast XmlTextReader class instead of XmlDocument; XmlDocument has additional functionality that I never use, like writing to the doc. The final challenge is that once I have created the necessary objects (geometries, materials, effects, etc.) I have to put the vertices in the vertex buffer, once again making sure the array being sent to the VB is not greater than 85,000 bytes. My initial approach was to use a 2D array and then create chunks of 85,000. Is there a better approach? This will get almost unmanageable when dealing with animations.
  7. Well, part of the problem was my device back buffer format. It was set to X8R8G8B8, which does not support transparency (thanks to the debug output for pointing that out). I have created a transparent background for use with the D3DImage control that works with WPF. I render my models on top of the background and then expose a property to retrieve the rendered surface for use in my D3DImage control. Works great, but sometimes it flickers immediately after loading and shows a white background. I'd like to know if there are better approaches to achieving transparency in WPF using D3DImage.
  8. I am trying to render a transparent background by setting the clear color to transparent, like this: Device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.Transparent, 1.0f, 0); But the background in the WPF app shows up white. Is there a simpler way to do this in WPF?
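[i]Editor's aside: this connects to the back buffer finding in the previous post. Color.Transparent is ARGB(0, 255, 255, 255), i.e. fully transparent white, so a render target format that discards alpha (such as X8R8G8B8) keeps only the white RGB channels. A small Python sketch of the packing (illustrative, not SlimDX):[/i]

```python
def pack_argb(a, r, g, b):
    """Pack 8-bit ARGB channels into a 32-bit color, high byte = alpha."""
    return (a << 24) | (r << 16) | (g << 8) | b

def drop_alpha(argb):
    """Mimic an X8R8G8B8 target: the alpha byte is ignored."""
    return argb & 0x00FFFFFF

transparent = pack_argb(0, 255, 255, 255)  # Color.Transparent
# with alpha discarded, only opaque white remains
print(hex(drop_alpha(transparent)))  # 0xffffff
```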
  9. I have been writing a Collada model viewer using a managed library from Mogware. The library handles all the parsing of XML nodes and transformation matrices. Almost every model loads just fine, but certain models, when loaded, have their meshes offset from their original positions. One way to fix this is to export the Collada model (.dae) to .obj, re-import it in Max, and then re-export to .dae. This positions every mesh's axis at the origin and breaks every node out of its group. After exporting to .obj and re-importing to .dae, I noticed the actual vertex positions had changed from the original .dae. Is the Collada Max exporter plugin to blame here? I am looking for a programmatic solution rather than simply manipulating model formats.
  10. [SlimDX] Shaders

    Can you please post your shader code?
  11. In Collada, there is a model transform that defines the model position in the world. After loading the vertex info, I was transforming the vertices to their respective positions in the world. I can't think why I decided to transform the normals as well. This resulted in a silhouette model.
  12. I got it! and the reason is so silly that I won't bother going into it.
  13. Any opinions? I have tried all that I can think of...
  14. Quote: Original post by Erik Rufelt
So do you use something like EyePosition - pin.PosWorldr then? Perhaps I'm misunderstanding your code, but pin.Position will be in projected screen-space, so whether you use view or world space you need to pass another position to the pixel shader apart from the projected position.

I'll repost my shader code now that I've switched to world space.

[code]
struct VSInputNmVc
{
    float4 Position : POSITION;
    float3 Normal   : NORMAL;
    float4 Diffuse  : COLOR0;
};

struct PixelLightingVSOutputVc
{
    float4 Position   : POSITION;
    float3 PositionWS : TEXCOORD0;
    float3 NormalWS   : TEXCOORD1;
    float4 Diffuse    : COLOR0;
    float3 Reflect    : TEXCOORD2;
};

PixelLightingVSOutputVc VSBasicPixelLightingNmVc(VSInputNmVc vin)
{
    PixelLightingVSOutputVc output;

    // Transform vertex position into projection space
    output.Position = mul(mul(mul(vin.Position, World), View), Projection);
    output.Diffuse = vin.Diffuse;

    float3 wNormal = mul(vin.Normal, World);
    float3 PosWorldr = mul(vin.Position, World);
    float3 ViewDirection = PosWorldr - EyePosition;

    output.NormalWS = wNormal;
    output.PositionWS = PosWorldr;
    output.Reflect = reflect(normalize(ViewDirection), normalize(wNormal));
    return output;
}

float4 PSBasicPixelLightingVc(PixelLightingVSOutputVc pin) : COLOR
{
    float3 posToEye = pin.PositionWS - EyePosition;
    float3 N = normalize(pin.NormalWS);
    float3 E = normalize(posToEye);

    ColorPair lightResult = ComputePerPixelLights(E, N);

    float4 diffuseColor = pin.Diffuse;
    float4 reflectMat = ReflectionColor;

    // AmbientColor is the ambience of the model
    float4 ambient = float4(AmbientColor * AmbientLightColor, 1);
    float4 diffuse = diffuseColor * reflectMat * float4(lightResult.Diffuse, 1);
    float4 color = ambient + diffuse + float4(lightResult.Specular, 0);
    return color;
}

// helper method to compute per-pixel lighting
ColorPair ComputePerPixelLights(float3 E, float3 N)
{
    ColorPair result;
    result.Diffuse = AmbientLightColor;
    result.Specular = 0;

    // Using the Blinn-Phong illumination model

    // Light0
    float3 L = -DirLight0Direction;
    float3 H = normalize(E + L);
    float dt = max(0, dot(L, N)); // Lambert factor
    result.Diffuse += DirLight0DiffuseColor * dt;
    if (dt != 0)
        result.Specular += DirLight0SpecularColor * pow(max(0, dot(H, N)), SpecularPower);

    // Light1
    L = -DirLight1Direction;
    H = normalize(E + L);
    dt = max(0, dot(L, N));
    result.Diffuse += DirLight1DiffuseColor * dt;
    if (dt != 0)
        result.Specular += DirLight1SpecularColor * pow(max(0, dot(H, N)), SpecularPower);

    // Light2
    L = -DirLight2Direction;
    H = normalize(E + L);
    dt = max(0, dot(L, N));
    result.Diffuse += DirLight2DiffuseColor * dt;
    if (dt != 0)
        result.Specular += DirLight2SpecularColor * pow(max(0, dot(H, N)), SpecularPower);

    // SpotLight0
    L = -SpotLightDirection;
    H = normalize(E + L);
    dt = max(0, dot(L, N));
    //dt = acos(dot(L, N));
    result.Diffuse += SpotLightDiffuseColor * dt;
    if (dt > 0)
    {
        float spotEffect = dot(SpotLightDirection, L);
        if (spotEffect > cos(Theta)) // dot product must exceed the cosine of the inner cone
        {
            spotEffect = pow(spotEffect, FallOff);
            float att = spotEffect / (SpotLightAttenuation0 + SpotLightAttenuation1 * Range + SpotLightAttenuation2 * Range * Range);
            result.Diffuse += att * (SpotLightDiffuseColor * spotEffect);
        }
        result.Specular += SpotLightSpecularColor * pow(max(0, dot(H, N)), SpecularPower);
    }

    result.Diffuse += EmissiveColor;
    result.Specular *= SpecularColor;
    return result;
}
[/code]

Hopefully it's not too much code to go through. ScreenShot
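[i]Editor's aside: each directional light in the shader above reduces to two dot products. A minimal Python sketch of one light's Blinn-Phong contribution, mirroring the per-light block (H = normalize(E + L), Lambert-gated specular); the vectors and specular power are illustrative values:[/i]

```python
import math

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def blinn_phong_light(E, L, N, specular_power):
    """Diffuse (Lambert) and specular factors for one directional light,
    mirroring the per-light block in the HLSL above. E, L, N are unit
    vectors as the shader uses them."""
    H = normalize(tuple(e + l for e, l in zip(E, L)))   # half vector
    dt = max(0.0, dot(L, N))                            # Lambert factor
    # specular is gated on the Lambert term, as in the shader's `if (dt != 0)`
    spec = pow(max(0.0, dot(H, N)), specular_power) if dt != 0 else 0.0
    return dt, spec

# light, view, and normal all aligned: full diffuse, peak specular
dt, spec = blinn_phong_light((0.0, 0.0, 1.0), (0.0, 0.0, 1.0),
                             (0.0, 0.0, 1.0), 16.0)
print(dt, spec)  # 1.0 1.0
```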