Jason Z

Community Reputation

6434 Excellent

About Jason Z

  • Rank
    DirectX MVP
  1. I haven't had much of a chance to do the integration on the C++ side, although I have been doing some work with Unity to understand the capabilities of the device.  Hopefully my free time situation will improve soon, so that might change...
  2. Slight side note: you never *have* to set member pointers to nullptr in a destructor, because once the destructor finishes, that memory is invalid and should not be accessed by anything!  It can still be a useful habit, though: if you mistakenly reference the object's former memory location, a cleared pointer makes it obvious that the object was already destructed.  You can even write a distinctive value into the pointer, which makes this situation easier to identify.
  3. I don't think there is much benefit to recalculating the random texture every frame.  If anything, you could calculate a random vector once per frame and use it to rotate your sampled random vectors, which will give you a similar effect.  However, if you don't see any artifacts and you are happy with how it looks, then why would you be thinking of updating the texture every frame?
  4.   It is just an empty file with an include statement.  I haven't used it myself, but it should be a suitable solution for what you are trying to do...
  5. This!  I wrote a small blog post a while back about state monitoring, and you can check it out in Hieroglyph if you want to see a sample implementation.  Don't let your draw calls make any assumptions (it may even be a good idea to do testing where you intentionally set weirdo states...) and you will be happier when you start using deferred contexts and/or multithreaded draw call submission.
  6. You could create stub files (one for VS and one for PS) that simply include the combined shader.  That would allow you to set the MSBuild properties for each of the stub files accordingly, and would give you control over the naming of each of the compiled output blobs.  I'm actually investigating doing this with Hieroglyph in the near future, so if there are other solutions I would be happy to hear them as well!
  7. I'm not familiar with the Xenko Game Engine, but usually they just include two different renderers - a DirectX based one for Windows, and an OpenGL based one for Mac & Linux.  When they build for each platform, the appropriate renderer is linked against and it simply uses the right one automatically.
  8. I think you might have a hard time finding someone who can explain how to implement irradiance calculations (or approximations thereof) to a 3 year old.  If you are serious about implementing your own renderer, then you need to understand how it works.  Have you read any books on the topic?  Are you willing to put in the effort to implement and debug the system?   If not, then why not just use something like UE4 or Unity?  If so, then start digging in to the resources available - there is lots of info on YouTube with explanations, so try to go as far as you can on your own before asking someone to explain the whole thing to you.  If you get stuck on a specific piece, then there are lots of people here to help!
  9. Google has been investing in Magic Leap, so I guess they will be using their AR technology sooner or later.  With that said, there is lots of room for many different types of interaction models - not just headsets.  You can use a smartphone for AR relatively easily, just like Pokémon Go...
  10. That's right, but each vertex would only contain a single index into your structured buffer.  So even if you had to repeat a vertex, it would be relatively low cost.  If you have a way to generate the desired indices when given a sequential value, you could always use an empty vertex type and just generate the vertices on the fly in the vertex shader.  That would be super low cost, and you can easily expand the vertex data as needed throughout the pipeline (i.e. in the VS, DS, and HS).
  11.   Have you considered putting your control point data into a resource that can be read by a shader (i.e. constant buffer or a structured buffer)?  That would allow you to have a very small vertex format (like a single integer offset) and you can just update your vertex buffer to indicate which set of control points you want each instance to use, and then utilize one of the basic draw calls instead of real instancing.  That should keep your primitive ID sequential, while still offering the reuse of most of your data without bloating.   Also, maybe I missed it, but what are you using the primitive ID for?  Is a unique value within the domain/hull shader needed?
  12. To the OP: Do you understand what the code is supposed to be doing?  If you are trying to create a ray and intersect it with a bounding box, are you sure that both the ray and the box are in the same coordinate space (i.e. view space, world space, projection space, clip space, etc...)?   Try to map out what you are actually trying to accomplish, and break the overall task into smaller pieces that you can verify step by step.  If you try to do the whole task at once, then it is very difficult to debug.  On the other hand, with smaller tasks, you can more easily verify that each of them is doing what you expect.  I find that this often forces me to understand the algorithm I am implementing far better as well.
  13. I haven't done a direct comparison myself, but you have already stated that it depends on the filter size.  You also mentioned that the PS has access to some texture filtering instructions that aren't available to the CS - but will you make use of filtering operations?  It sounds like you already know quite a bit about the difference between the two shaders, so you just need to apply that to your specific needs and see which one is needed.   By the way, there is a separable bilateral filter implementation available in my Hieroglyph 3 engine in case you want to start out there.  I would be interested to hear what choice you make on this topic!
  14. DX11: I wouldn't use blending, but rather just look up in the cube map where the refraction points to.  That can of course be combined with partial reflection as well, but I wouldn't use alpha blending!
  15. For #1, you are clipping the geometry that goes behind the near clipping plane of the view frustum.  If you want to keep that from happening, you can modify your vertex shader so that any vertex whose Z component is negative after the transformation has been applied gets its Z set to 0.  This effectively pushes those vertices back onto the near plane, and should keep your cube from being cut.  I don't really understand the issue in #2 - can you clarify that a bit more?