Posted 19 December 2012 - 05:41 AM
Posted 19 December 2012 - 06:48 AM
// Vertex shader: the sign of this dot product tells which side faces the eye
output.faceShear = dot(input.normal, gEyePosition - input.position);

// Pixel shader: pick a color per side
if (input.faceShear >= 0.0)
    return float4(input.colorA, 1);
else
    return float4(input.colorB, 1);
Posted 19 December 2012 - 12:52 PM
Posted 19 December 2012 - 03:11 PM
Posted 20 December 2012 - 07:20 AM
For shader model 3 there's also the VFACE semantic, automatically available in the pixel shader (just define a float face : VFACE parameter in your shader function).
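A minimal sketch of that (HLSL, SM3; the sign convention depends on the current cull mode/winding, and the two colors here are just placeholders):

```hlsl
// SM3 pixel shader: VFACE is positive for front-facing and negative for
// back-facing triangles (relative to the current cull/winding setup).
float4 PS(float face : VFACE) : COLOR0
{
    return (face >= 0.0f) ? float4(1, 0, 0, 1)   // front side: red
                          : float4(0, 0, 1, 1);  // back side: blue
}
```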
Or are you using the fixed function pipeline?
@Gavin: Hmmm, does this really work? The vertex normals don't necessarily coincide with the face normals.
Posted 20 December 2012 - 07:40 AM
Posted 20 December 2012 - 08:31 AM
If you're not using shaders then neither of our suggestions is going to be of any use. Instead you will have to provide two triangles, one for each side, and turn back-face culling on so that they don't interfere with each other when rendering.
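A sketch of that fixed-function setup (SlimDX-style calls; `vertexBuffer`, `vertexSize` and `triangleCount` are hypothetical names for your own data):

```csharp
// The buffer holds each face twice with opposite winding, each copy
// pre-colored for its side; back-face culling then hides the copy that
// faces away from the camera.
device.SetRenderState(RenderState.CullMode, Cull.Counterclockwise);
device.SetStreamSource(0, vertexBuffer, 0, vertexSize);
device.DrawPrimitives(PrimitiveType.TriangleList, 0, triangleCount);
```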
Posted 20 December 2012 - 09:35 AM
Posted 20 December 2012 - 06:27 PM
Edited by BornToCode, 20 December 2012 - 06:29 PM.
Posted 21 December 2012 - 07:03 AM
device.SetRenderState(RenderState.AlphaBlendEnable, true); // blending must also be switched on
device.SetRenderState(RenderState.SourceBlend, Blend.SourceAlpha);
device.SetRenderState(RenderState.DestinationBlend, Blend.InverseSourceAlpha);
device.SetRenderState(RenderState.BlendOperation, BlendOperation.Add);
Edited by unbird, 21 December 2012 - 07:08 AM.
Posted 21 December 2012 - 09:50 AM
Posted 21 December 2012 - 06:58 PM
Posted 24 December 2012 - 08:14 AM
That is quite big. Let us look at the numbers:
Your current vertex size is 20 bytes (a 4-float position at 4 bytes per float, plus 4 bytes for the color).
10 million vertices.
This alone gives you approx. 200 MB of vertex data.
With a naive setup (adding quads/tris as they come) this number scales by a factor of 6 or 3 respectively, so you've probably already blown the GPU memory limit of an average consumer card (or the limit the driver gives you). As you say: one draw call is impossible. And true, duplicating vertices further (or adding additional vertex data) is undesirable.
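The arithmetic above, spelled out (C#-style; the numbers are the ones from the post):

```csharp
// Back-of-the-envelope memory budget
const int vertexSize  = 4 * sizeof(float) + 4;    // 16 B position + 4 B color = 20 B
const int vertexCount = 10000000;                 // 10 million vertices
long baseData  = (long)vertexSize * vertexCount;  // ~200 MB of raw vertex data
long naiveQuad = baseData * 6;                    // quads as unindexed tri lists: ~1.2 GB
long naiveTri  = baseData * 3;                    // plain unindexed triangles: ~600 MB
```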
Sure, you can now massage the API to make it work, but with these numbers I'm inclined to suggest a non-realtime approach (you could still use D3D for rendering, though). Have you tried a GDI setup (with low render quality)?
Anyway, here are some suggestions for D3D optimizations.
But always remember: no API trick (or even using a "faster" API like D3D) is a silver bullet. You can set up a complex system that reduces the payload, say, and the thing still renders slower.
- No need to use a Vector4 for position if your w is always 1. The D3D runtime will expand a Vector3 to w = 1 automatically (D3DDECLTYPE_FLOAT3).
- Reduce the precision of your vertex elements. Half-floats/shorts for positions? A scalar for the color? With shaders you can even do sophisticated packing (custom formats).
- Why rely on big memory or insist on few draw calls? Split your mesh into parts the API/GPU can cope with one at a time. You can still get good performance that way. It's a sort of streaming.
- Indexing, as Dynamo_Maestro suggested. To get familiar with it, start with something really simple (a triangle, a quad, two adjacent quads, a grid, a sphere...).
- Shaders: As mentioned, greater flexibility.
- Instancing. This is a way to render "the same" geometry multiple times with different position/color/whatever. In your case this would be the quad for the geometry, and the position and color for the instances. Consider this advanced D3D API; it needs SM 3.
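As a concrete starting point for the indexing suggestion: a quad needs only 4 unique vertices plus 6 indices, instead of 6 duplicated vertices. A minimal sketch (C#-style arrays; the counter-clockwise winding here is an assumed convention):

```csharp
// 4 unique corners of a unit quad
Vector3[] vertices = {
    new Vector3(0, 0, 0), new Vector3(1, 0, 0),
    new Vector3(1, 1, 0), new Vector3(0, 1, 0)
};
// 6 indices forming two triangles that share the 0-2 edge
short[] indices = { 0, 1, 2,   0, 2, 3 };
```

The savings grow with shared vertices: an N×N grid of quads needs (N+1)² vertices instead of 6·N².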
Do some research and choose wisely which path you want to go down, because I think you have a challenging problem.