
Jason Z

Member Since 04 Feb 2004
Offline Last Active Sep 03 2016 05:28 AM

#5309289 Noise Recalculation for Ambient Occlusion

Posted on 03 September 2016 - 05:28 AM

I don't think there is much benefit to recalculating the random texture every frame.  If anything, you could calculate a single random rotation once per frame and apply it to your sampled random vectors, which gives you a similar effect.
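To make the idea concrete, here is a minimal C++ sketch of the per-frame rotation.  In a real renderer the rotation would typically happen in the SSAO shader with the angle supplied through a constant buffer; the names here are hypothetical and only the math is shown:

```cpp
#include <cassert>
#include <cmath>

struct Vec2 { float x, y; };

// Rotate a sampled random vector by a per-frame angle instead of
// regenerating the whole noise texture.  'angle' stands in for a single
// random value you would compute once per frame.
Vec2 RotateVector(Vec2 v, float angle)
{
    float c = std::cos(angle);
    float s = std::sin(angle);
    return { v.x * c - v.y * s, v.x * s + v.y * c };
}
```

Because every sampled vector gets the same per-frame rotation, the noise pattern still changes from frame to frame without any texture updates.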


However, if you don't see any artifacts and you are happy with how it looks, then why would you be thinking of updating the texture every frame?

#5305209 Resetting States In A Data-Driven Renderer

Posted on 10 August 2016 - 07:24 PM

I never reset states.


Resetting states at the end of a rendering stage implies that the next rendering stage can make assumptions about what the states are on entry.  That seems very dangerous.  Instead, I have each rendering stage set all required states on entry.  If needed, I can easily add state filtering to this, so that only states which actually change are set; this kind of setup also behaves properly with deferred contexts in D3D11.

This!  I wrote a small blog post a while back about state monitoring, and you can check it out in Hieroglyph if you want to see a sample implementation.  Don't let your draw calls make any assumptions (it may even be a good idea to do testing where you intentionally set weirdo states...) and you will be happier when you start using deferred contexts and/or multithreaded draw call submission.
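For illustration, a minimal sketch of what such a state filter can look like.  The names are hypothetical, and a real implementation (like the one in Hieroglyph) would wrap the actual device context calls instead of the placeholder comment:

```cpp
#include <cassert>

// State filtering sketch: remember the last value set and skip redundant
// sets.  T stands in for a state handle (e.g. a rasterizer state pointer);
// a plain comparable type keeps the idea self-contained.
template <typename T>
class StateCache
{
public:
    // Returns true if the state actually had to be (re)applied.
    bool Set(const T& value)
    {
        if (m_valid && value == m_current)
            return false;           // redundant set: filtered out
        m_current = value;
        m_valid = true;
        // ... issue the real API call here ...
        return true;
    }

    // Invalidate on entry to a rendering stage, so nothing is assumed
    // about whatever states the previous stage left behind.
    void Invalidate() { m_valid = false; }

private:
    T    m_current{};
    bool m_valid = false;
};
```

Invalidating on stage entry gives you the "set everything on entry" behavior, while the cache still filters out the sets that would be redundant.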

#5305204 Compiling Hlsl - Shaders In Vs 2013

Posted on 10 August 2016 - 06:59 PM

You could create stub files (one for VS and one for PS) that simply include the combined shader.  That would allow you to set the MSBuild properties for each of the stub files accordingly, and would allow you to have control over the naming of each of the compiled output blobs. 
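As a sketch, the two stub files could look something like this (file and entry point names are hypothetical):

```hlsl
// VSStub.hlsl -- compiled as a vertex shader (entry point VSMain)
#include "CombinedShader.hlsl"
```

```hlsl
// PSStub.hlsl -- compiled as a pixel shader (entry point PSMain)
#include "CombinedShader.hlsl"
```

In Visual Studio you would then set the HLSL Compiler properties (Shader Type, Entrypoint Name, Object File Name) independently on each stub, which gives you separate compiled blobs from the one shared source file.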


I'm actually investigating doing this with Hieroglyph in the near future, so if there are other solutions I would be happy to hear them as well!

#5303871 Directx And Multi-Platform Games

Posted on 03 August 2016 - 04:50 PM

I'm not familiar with the Xenko Game Engine, but engines usually just include two different renderers - a DirectX-based one for Windows, and an OpenGL-based one for Mac & Linux.  When you build for each platform, the appropriate renderer is linked in, and the right one is used automatically.
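A rough sketch of that pattern in C++ (all names are hypothetical; a real renderer interface would be much larger than one method):

```cpp
#include <memory>
#include <string>

// Both backends implement the same interface; the build configuration
// decides which one gets compiled in and handed to the rest of the engine.
struct IRenderer
{
    virtual ~IRenderer() = default;
    virtual std::string Name() const = 0;
};

struct D3DRenderer : IRenderer
{
    std::string Name() const override { return "Direct3D"; }
};

struct GLRenderer : IRenderer
{
    std::string Name() const override { return "OpenGL"; }
};

// Factory selecting the backend at build time via the platform macro.
std::unique_ptr<IRenderer> CreateRenderer()
{
#if defined(_WIN32)
    return std::make_unique<D3DRenderer>();
#else
    return std::make_unique<GLRenderer>();
#endif
}
```

Game code only ever talks to `IRenderer`, so it never needs to know which backend it got.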

#5303198 Physically Based Rendering In Directx

Posted on 30 July 2016 - 09:30 AM

I think you might have a hard time finding someone who can explain how to implement irradiance calculations (or approximations thereof) to a 3 year old.  If you are serious about implementing your own renderer, then you need to understand how it works.  Have you read any books on the topic?  Are you willing to put in the effort to implement and debug the system?


If not, then why not just use something like UE4 or Unity?  If so, then start digging in to the resources available - there is lots of info on YouTube with explanations, so try to go as far as you can on your own before asking someone to explain the whole thing to you.  If you get stuck on a specific piece, then there are lots of people here to help!

#5300821 you all say oculus rift but why not google glass?

Posted on 14 July 2016 - 08:41 PM

Google has been investing in Magic Leap, so I guess they will eventually be using that AR technology.  With that said, there is lots of room for many different types of interaction models - not just headsets.  You can use a smartphone for AR relatively easily, just like Pokémon Go...

#5298473 How to get patch id in domain shader.

Posted on 28 June 2016 - 05:38 PM

That's right, but each vertex would only contain a single index into your structured buffer, so even if you had to repeat a vertex, it would be relatively low cost.  If you have a way to generate the desired indices from a sequential value, you could even use an empty vertex type and just generate the vertices on the fly in the vertex shader.  That would be super low cost, and you can easily expand the vertex data as needed throughout the pipeline (i.e. in the VS, HS, and DS).
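A hypothetical HLSL sketch of the "empty vertex" approach, using SV_VertexID to pull the data on the fly (buffer and struct names are made up; no vertex buffer is bound at all):

```hlsl
struct VSOutput
{
    float3 position : POSITION;
};

// All control point data lives in a shader-visible buffer.
StructuredBuffer<float3> ControlPoints : register(t0);

VSOutput VSMain(uint vertexID : SV_VertexID)
{
    VSOutput output;

    // Map the sequential vertex ID to the desired control point index
    // however your data requires -- identity mapping shown here.
    output.position = ControlPoints[vertexID];

    return output;
}
```

With this you can issue a plain Draw(vertexCount, 0) with no input layout, and repeated control points cost only an extra buffer read.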

#5298104 How to get patch id in domain shader.

Posted on 26 June 2016 - 07:40 AM

Thank you @MJP. That makes sense. And actually the documentation says that, though I had to read it several times to understand.


@Matias is it still true if I have a pass-through vertex shader?


And maybe you can help me with my scenario. I have a big buffer with all control points, and I have several index buffers for patches. Some patches should be drawn multiple times with different transformations, so if I want to draw everything in one call I have to duplicate indices. Right now I'm drawing each patch in its own draw call, and for patches that need to be drawn multiple times I'm using instancing. Here's the problem - for each new instance, SV_PrimitiveID resets to 0. SV_InstanceID is not available in the hull/domain shaders, and I have to pass it from the vertex shader, unnecessarily duplicating data. It looks like instancing doesn't work very well with tessellation.


Have you considered putting your control point data into a resource that can be read by a shader (i.e. constant buffer or a structured buffer)?  That would allow you to have a very small vertex format (like a single integer offset) and you can just update your vertex buffer to indicate which set of control points you want each instance to use, and then utilize one of the basic draw calls instead of real instancing.  That should keep your primitive ID sequential, while still offering the reuse of most of your data without bloating.
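As a sketch, the vertex format could shrink to a single integer offset into a structured buffer holding all of the control point data (every name here is hypothetical):

```hlsl
struct ControlPoint
{
    float3 position;
};

// One big buffer with every patch's control points.
StructuredBuffer<ControlPoint> AllControlPoints : register(t0);

struct VSInput
{
    uint offset : CONTROLPOINT_OFFSET;  // tiny per-vertex payload
};

float3 VSMain(VSInput input) : POSITION
{
    // A plain (non-instanced) draw call keeps SV_PrimitiveID sequential;
    // the offset selects which control point this vertex represents.
    return AllControlPoints[input.offset].position;
}
```

Updating the small offset buffer per "instance set" is cheap compared to duplicating the full control point data, and you keep the sequential primitive IDs that instancing breaks.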


Also, maybe I missed it, but what are you using the primitive ID for?  Is a unique value within the domain/hull shader needed?

#5289403 Use Buffer or Texture, PS or CS for GPU Image Processing?

Posted on 30 April 2016 - 05:20 AM

I haven't done a direct comparison myself, but you have already stated that it depends on the filter size.  You also mentioned that the PS has access to some texture filtering instructions that aren't available to the CS - but will you actually make use of filtering operations?  It sounds like you already know quite a bit about the differences between the two shader types, so you just need to apply that knowledge to your specific needs and see which one fits better.


By the way, there is a separable bilateral filter implementation available in my Hieroglyph 3 engine in case you want to start out there.  I would be interested to hear what choice you make on this topic!

#5286403 DirectX 11 Volume Rendering Advice Needed

Posted on 11 April 2016 - 07:14 PM

For #1, you are clipping the geometry that goes behind the near clipping plane of the view frustum.  If you want to keep that from happening, you can modify your vertex shader so that a vertex's Z component is clamped to 0 if it is negative after the transformation has been applied.  This effectively pushes those vertices onto the near plane, and should keep your cube from being cut.
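A hypothetical vertex shader fragment showing the clamp (the matrix and struct names are assumptions, not code from your project):

```hlsl
// Transform to clip space as usual.
float4 position = mul(float4(input.position, 1.0f), WorldViewProjMatrix);

// Vertices that end up in front of the near plane would normally be
// clipped away; clamp them onto the near plane instead.
if (position.z < 0.0f)
    position.z = 0.0f;

output.position = position;
```

Note this deliberately distorts triangles that straddle the near plane, which is usually acceptable for a fullscreen-ish volume cube but worth keeping in mind.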


I don't really understand the issue in #2 - can you clarify that a bit more? 

#5284355 Hololens Development Tools

Posted on 30 March 2016 - 04:26 PM

In case you didn't catch the live stream, the Hololens development tools (including the emulator) are now available: https://www.microsoft.com/microsoft-hololens/en-us/developers


I'm in the process of installing everything, but it would be interesting to hear any impressions you guys have on the tools!

#5277282 How to enable supersampling in DirectX 11?

Posted on 21 February 2016 - 07:29 AM

@Steven: You have to have a multi-sampled resource to render into.  Once you have that, the actual rendering may or may not take advantage of the system value semantic that you mentioned (it's up to you to decide if you need to access individual subsamples, or if you just let the rasterizer take care of that for you).  After all rendering has been done, you then have to resolve your multisampled resource to a normal one for presentation to a window.
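In D3D11 terms, the overall flow looks roughly like this.  This is only a sketch, not a complete program: `device`, `context`, `backBuffer`, `width`, and `height` are assumed to exist, error handling is omitted, and a matching multi-sampled depth buffer would also be needed:

```cpp
// Create a 4x multi-sampled render target.
D3D11_TEXTURE2D_DESC desc = {};
desc.Width              = width;
desc.Height             = height;
desc.MipLevels          = 1;
desc.ArraySize          = 1;
desc.Format             = DXGI_FORMAT_R8G8B8A8_UNORM;
desc.SampleDesc.Count   = 4;        // the multi-sampled part
desc.SampleDesc.Quality = 0;
desc.Usage              = D3D11_USAGE_DEFAULT;
desc.BindFlags          = D3D11_BIND_RENDER_TARGET;

ID3D11Texture2D* msaaTarget = nullptr;
device->CreateTexture2D(&desc, nullptr, &msaaTarget);

// ... create a render target view of msaaTarget, bind it, draw the scene ...

// Resolve the multi-sampled resource down to the single-sampled back
// buffer before presenting.
context->ResolveSubresource(backBuffer, 0, msaaTarget, 0,
                            DXGI_FORMAT_R8G8B8A8_UNORM);
```

The resolve is where the subsamples get averaged; if you never read individual subsamples in a shader, this is the only MSAA-specific step besides the resource creation.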

#5260383 d3d12: which debug tools to use?

Posted on 03 November 2015 - 03:44 PM

Have you checked out the DXCAP tool?  That seems to be an extremely efficient way to capture log files, which you can then debug later.  General information can be found here:

#5246677 UpdateSubresource on StructuredBuffer

Posted on 15 August 2015 - 06:48 AM

To follow on MJP's great advice, have you tried using the performance tools in the latest versions of Visual Studio?  They can show you a pretty good representation of the parallelism between the CPU and GPU, and will likely give you some insight into where the time in your overall frame is going.

#5246674 handling DepthStencil / Blend / Rasterizer State dilemma...

Posted on 15 August 2015 - 06:40 PM

That is more or less correct.  I have a structure that holds the references to the states (RenderEffect), and a material can reference a number of different RenderEffects for different situations.  The higher level rendering pass is actually controlled in a separate object called a SceneRenderTask.  This object is the one that sets up the pipeline outputs (i.e. render and depth targets) and provides whatever special logic is needed for that particular rendering pass.  In your example of a mirror, the stencil rendering would be done in one pass and the reflected scene would be a second SceneRenderTask. 


If you are interested in seeing how it works more closely, the whole engine is available as open source: Hieroglyph 3