Member Since 04 Jan 2010

#5297857 D3D11 - Pretransformed vertices? (GUI)

Posted by unbird on 24 June 2016 - 08:09 AM

Yup, changing the viewport requires adjusting the transformations accordingly, as Hodgman shows.


You could use scissors (ID3D11DeviceContext::RSSetScissorRects) for axis aligned clipping and keep the viewport as is. Probably the simplest to implement.


For completeness: There are other ways to clip:

- Look into SV_ClipDistance semantic (VS output)

- Similarly: Manually clip in the pixel shader with clip/discard

- For rectangles you could "clip" in the vertex shader (rect intersection calculation, adjusting tex coords, too) (*)

- Hint: SV_Position as input to the pixel shader delivers pixel coordinates (with a 0.5 offset); this can come in handy.
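For the pixel-shader route, a minimal sketch, assuming the clip rectangle arrives in pixel coordinates through a hypothetical constant buffer (all names made up):

```hlsl
cbuffer ClipRect : register(b0)   // hypothetical constant buffer
{
    float4 gClipRect;             // (minX, minY, maxX, maxY) in pixels
};

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    // SV_Position.xy carries pixel coordinates (with the 0.5 center offset)
    if (any(pos.xy < gClipRect.xy) || any(pos.xy > gClipRect.zw))
        discard;                  // same effect as clip(-1)
    return float4(uv, 0, 1);      // placeholder shading
}
```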


Edit: (*) It can even be a good idea to do this CPU-side: instead of lots of pipeline state changes, one can batch everything.

#5295816 Question about HLSL matrix multiplication

Posted by unbird on 09 June 2016 - 10:17 AM

Because that is not a matrix multiplication but a component-wise one (a.k.a. modulate). A matrix multiplication is:
matrix vp = mul(gProjMatrix, gViewMatrix);

Edit: When you can, use a sequence of matrix-vector multiplications instead of matrix-matrix. E.g. in this case the instruction count difference is quite big (10 vs. 22).

(Even better: Provide the product with the constant buffer already :wink: )
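To illustrate the difference (using the matrix names from the snippet above; posW is an assumed world-space position):

```hlsl
// A matrix-matrix product per vertex, then a matrix-vector product:
float4 posA = mul(mul(gProjMatrix, gViewMatrix), posW);

// Two matrix-vector products: typically far fewer instructions
float4 posB = mul(gProjMatrix, mul(gViewMatrix, posW));
```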

#5292384 When you realize how dumb a bug is...

Posted by unbird on 18 May 2016 - 05:45 PM

The picture below is supposed to be a voxelized sphere.


Problem: operator << has lower precedence than +. I should have used | or parentheses.
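A sketch of the kind of trap this is (hypothetical packing code, not the original):

```hlsl
uint i = 3, j = 5;
uint bad  = 1 << i + j;     // parses as 1 << (i + j), not (1 << i) + j
uint good = (1 << i) | j;   // parentheses (or |) give the intended value
```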


Now I feel really dumb. :wacko:

Attached thumbnail: VoxelsGoneBad.jpg

#5273478 Inverting normals when rendering backfaces

Posted by unbird on 31 January 2016 - 09:44 AM

Use the SV_IsFrontFace system-value semantic:

float4 PS(float3 normal : NORMAL, bool face : SV_IsFrontFace) : SV_Target
{
    if (!face) normal = -normal; // flip the normal for backfaces
    // ... lighting as usual
}

#5272850 Alpha mask dynamically rotated triangle

Posted by unbird on 27 January 2016 - 10:53 AM

You want separate tex-coords for the mask, either provided through the vertices already, transformed, or reconstructed (as Servant has shown). But maybe you don't need those triangles at all: if you look at it as a post-process (it sounds like you're after some sort of vignetting), you just draw a fullscreen quad/triangle and do all the magic in the fragment shader.
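A rough sketch of such a vignetting post-process pixel shader (texture and sampler bindings, falloff formula, and names are all assumptions):

```hlsl
Texture2D    gScene   : register(t0);   // assumed scene color SRV
SamplerState gSampler : register(s0);

float4 PS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float3 color = gScene.Sample(gSampler, uv).rgb;
    float  dist  = length(uv - 0.5);              // 0 at center, ~0.7 in corners
    float  vig   = saturate(1 - dist * dist * 2); // simple quadratic falloff
    return float4(color * vig, 1);
}
```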


Here is an example I wrote for D3D11, hopefully providing some food for thought.

#5272847 directional shadow map problem

Posted by unbird on 27 January 2016 - 10:32 AM

This artifact is known as Peter Panning.

#5272251 Texture Rendering

Posted by unbird on 22 January 2016 - 08:47 AM

You should learn how to find stuff yourself :wink:. I mentioned the transliteration for SharpDX. That was the first Google hit:

SharpDX for Rastertek tutorials

Also, the DirectX subforum here has a couple of links that should help you.

Edit: Also, gamedev.net member Eric Richards has transliterated Luna's source code (also available online) to SlimDX here

#5272233 Detecting Boundaries in triangles with adjacency

Posted by unbird on 22 January 2016 - 04:23 AM

In D3D11 an index of -1 has a special meaning (see here, last paragraph "Generating Multiple Strips"). IIRC in OpenGL you can define that value arbitrarily. I wonder if you even get something in the geometry shader then, since it operates on primitives. If that is the case, try another number, e.g. -2.


You can grab the index in the vertex shader with SV_VertexID and pass it along. Maybe that helps.
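A sketch of passing the index along (the structure and user semantic names are made up):

```hlsl
struct VSOut
{
    float4 pos : SV_Position;
    uint   id  : VERTEXID;   // user semantic carrying the vertex index
};

VSOut VS(float3 posL : POSITION, uint vid : SV_VertexID)
{
    VSOut o;
    o.pos = float4(posL, 1); // transform omitted for brevity
    o.id  = vid;             // the geometry shader can now inspect the indices
    return o;
}
```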

#5272162 Texture Rendering

Posted by unbird on 21 January 2016 - 10:53 AM

Flags and other properties can only be set at creation of resources/textures. The same applies to views. That's how the API works: you decide what you need and create (usually) everything at app start.

When you say "usage staging" do you mean I set usage flags to = staging or something like that?

Yes, though in this case it's an enum; you can't combine the values. This is why you need the copy operation: you can't read back a usage=default resource directly. Again, this is how the API/driver/hardware works; one has to get used to it.

Also, is there a more efficient way to just skip the target texture or backbuffer or swap chain or whatever it's using to put it on the screen and just tell it right off the bat to ignore the screen and use a Texture2D as the output instead? It seems like this would be quicker to process and a bit less messy.

Bacterius answered this already, and correctly. You don't need a swap chain; that's only if you want to render to a window. You create a Texture2D and a RenderTargetView thereof and you're ready to go.

I get the impression you're quite confused about all of this. I recommend going through the Rastertek tutorials (there are even SharpDX transliterations around, IIRC) and/or buying F. Luna's book. Though the latter is C++, I consider it the best book for D3D11 beginners.

#5272147 Texture Rendering

Posted by unbird on 21 January 2016 - 08:27 AM

It's not either, it's both. These are flags, so certain combinations are valid and RenderTarget combined with ShaderResource is a quite common one.


For readback you need an additional resource with usage staging (and CPU read access); only these can actually be mapped and read back. After your rendering, use context.CopyResource from your render target texture to the staging texture. Then you can map.

#5272008 PCF in cubemap

Posted by unbird on 20 January 2016 - 08:20 AM

That's shadow acne, a typical artifact. Your bias is too small. Increasing it has other downsides though: welcome to the pain of shadow mapping :wink:


Edit: Hmmm, are you not rendering front-face culled to your shadow map? It looks like self-shadowing of that plane.

#5271702 Drawing depth buffer in pixel shader

Posted by unbird on 18 January 2016 - 09:12 AM

Check the D3D debug layer output and/or a graphics debugger to see if your resources are actually bound correctly. Other bugs aside, it's usually a read/write hazard: one has to explicitly unbind resources before binding them elsewhere, otherwise the API will nullify such an attempt (but report it in the debug layer).


PS: The debug layer is enabled with D3D11_CREATE_DEVICE_DEBUG at device creation.

#5271695 Drawing depth buffer in pixel shader

Posted by unbird on 18 January 2016 - 07:55 AM

That's the normal behavior of a visualized depth buffer. The values are distributed hyperbolically, so more values are near 1 (white). If you want a better visualization, transform them back to linear depth.
PS: Save yourself some typing: return float4(input.pos.z, input.pos.z, input.pos.z, input.pos.z) is equivalent to return input.pos.zzzz :wink:


Edit: Waaaaait, is that SV_Position? I don't even know what z means here; I never checked myself and the docs are enigmatic. But the effect you're describing sounds like the depth value. It could be worse though: if it's e.g. view-space z, grey might be even rarer (z can go higher than 1 and will be clamped to 1).
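For the linearization mentioned above, a sketch assuming a standard D3D projection with the near/far planes supplied in a hypothetical constant buffer:

```hlsl
Texture2D<float> gDepthTex : register(t0); // assumed depth SRV

cbuffer Camera : register(b0)
{
    float gNear;   // hypothetical names
    float gFar;
};

// Map a hyperbolic depth-buffer value z in [0,1] back to view-space depth
float LinearizeDepth(float z)
{
    return gNear * gFar / (gFar - z * (gFar - gNear));
}

float4 PS(float4 pos : SV_Position) : SV_Target
{
    float z   = gDepthTex.Load(int3(pos.xy, 0));
    float d01 = (LinearizeDepth(z) - gNear) / (gFar - gNear); // remap to [0,1]
    return d01.xxxx;   // greyscale visualization
}
```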

#5271094 Vertex to cube using geometry shader

Posted by unbird on 14 January 2016 - 12:28 PM

I'd actually advise against a geometry shader here and use hardware instancing. Not only is it simpler and allows for other geometry without changing the shaders, it will also likely be much faster (Edit: Wrong! See MJP's link below; instancing can turn out badly for low vertex counts). Also, with a geometry shader one can only emit triangle strips and do no indexed drawing (at least not without going through other hoops like stream-out).


PS: I'm still curious how you pulled that off with only 14 vertices :wink:

#5269005 Shader Technique Alternative for D3D11 ?

Posted by unbird on 03 January 2016 - 10:00 AM

More up to date than the old SDK source: fx11.codeplex.com.

Edit: Source has been moved to github.