
Jason Z

Member Since 04 Feb 2004
Offline Last Active Sep 14 2014 06:40 AM

#5160957 Why does this code snippet not work when i take it out of function scope

Posted by Jason Z on 16 June 2014 - 07:24 PM


1. If you need an array of a small number of chars, I would default to putting them on the stack, i.e. "unsigned char buf[4]", instead of using new. Your sizeof would work if you did that.

You could also use std::array for a fixed size array.  That makes it much harder to forget to release the dynamic memory, and sizing should work properly there too.  But I agree, for small stuff like this, stack allocation would probably be fine.




#5160544 What to do now?

Posted by Jason Z on 14 June 2014 - 01:56 PM


Thanks for the advice, but I didn't have experience with the DirectX graphics API when I started the project. I was learning and making the game at the same time, so that is why it took that long.

 

You shouldn't have to justify yourself - if you want to take 5 months doing graphics, then that is perfectly fine.  Maybe you only get to work on it once in a while, maybe you had an interruption, or maybe you are just learning how to do it.  It doesn't really matter - what does matter is that you stay motivated.  If you are looking to learn about sound and network programming, then take another few months and learn about them.  Don't judge your progress based only on the game - if you are doing this to learn, then judge yourself by how much you have learned in the project.

 

I like to keep a hand written journal of the work that I am doing, and document some successes and some failures too.  It helps to keep me realistic, and to see how much I have done over the past month or so.  Give it a try and see if it works for you too.




#5160480 [DX11] Why we need sRGB back buffer

Posted by Jason Z on 14 June 2014 - 06:08 AM

Have you read the article "The Importance of Being Linear"?  It does a pretty good job of explaining why you need gamma correction, including the situations when you should use it and when you shouldn't.  I applaud the OP's willingness to experiment, but in this case it seems like you don't quite get the high-level concept just yet - so please read through that article and develop a mathematical understanding of why this is done; then the correct operation will be quite clear.




#5159399 C++ resources

Posted by Jason Z on 09 June 2014 - 07:43 PM

You can always check out some of the resources on isocpp.org, the official home of the ISO C++ standards committee.  Many of the famous C++ authors have content there, and lots of modern code and examples get posted there.




#5158962 Vertex Shader vs Pixel Shader - Where to do processing?

Posted by Jason Z on 07 June 2014 - 03:01 PM

1. Like the others already said, these will not give you the same mathematical result. 

2. This depends on lots of factors.  In general, you should try to pull calculations towards the beginning of the pipeline for the reason you mentioned.  However, if your rendered object appears small on the screen, it is quite possible to have more vertices than pixels, so there is a lot of variation there.

3. I think this one is also a maybe.  If you are loading textures in the pixel shader, it is quite possible that a normalization can be nearly hidden due to the latency of the memory access.  In addition, if the pixel shader is not your bottleneck, then making this change shouldn't produce any speed changes at all - it all depends on your system, what else you are doing in the scene, and how much of the system's resources you are using!

 

In short, there are no hard answers to these questions - as always, it depends!




#5157774 [DirectX] Particle Systems - Compute Shader

Posted by Jason Z on 03 June 2014 - 04:41 AM

Looks pretty good - how about a description of your rendering setup?  Are you using append/consume buffers?  Are the particles living forever or are they dynamically created and destroyed?




#5157699 Passing and getting const refs (XMFLOATxxx)

Posted by Jason Z on 02 June 2014 - 07:38 PM

Have you tried using a static code analyzer?  There is one available in Visual Studio, which can most likely catch cases where you try to return a reference to a temporary object.  Other than using something like this to check your code, you pretty much just have to make sure you don't return a reference unless the data is a member or (gasp) a global variable.

 

The arguments should be passed by constant reference if you don't plan to modify them.  Like you said, the return value should be a value and not a reference, unless you intend for the returned data to be modified outside of the class.




#5157179 Screen-space shadowing

Posted by Jason Z on 31 May 2014 - 12:11 PM

Why not voxels? The idea is not so crazy anymore and is certainly used for real-time GI in games (e.g. Crysis 3).

 

It may sound crazy due to the memory requirements. However, for shadow mapping you just need 1 bit.

At 1 bit per voxel, a 1024x1024x1024 volume needs only 128 MB, which suddenly starts feeling appealing.

 

Perhaps the biggest block right now is that there is no way to fill this voxel with occlusion data in real-time.

The most efficient way I see would be regular rasterization but where the shader (or the rasterizer) decides on the fly which layer from the 3D texture the pixel should be rendered to, based on its interpolated depth (quantized). However I'm not aware of any API or GPU that has this capability. This would be highly parallel.

 

Geometry shaders allow selecting which render target a triangle is rendered to, but there is no way to select which render target a pixel is rendered to (this could be fixed function; it doesn't necessarily need shaders).

 

You could use a variant of the KinectFusion algorithm to build a volumetric representation of the scene.  The basic idea is to get a depth image (or a depth buffer in the rendering case) and then you find the camera location relative to your volume representation.  Then for each pixel of the depth image you trace through the volume, updating each voxel as you go with the distance information you have from the depth image.  The volume representation is the signed distance from a surface at each voxel.  For the next frame, the volume representation is used to find out where the Kinect moved to and the process is repeated.  The distances are updated over a time constant to eliminate the noise from the sensor and to allow for moving objects.

 

This is a little bit of a heavy algorithm to do in addition to all of the other stuff you do to render a scene, but there are key parts of the algorithm that wouldn't be needed anymore.  For example, you don't need to solve for the camera location, but instead you already have it.  That signed distance voxel representation could easily be modified and/or used to calculate occlusion.  That might be worth investigating further to see if it could be used in realtime...




#5157175 Is reference counting the best and proper way to deal with ID3D11Resource ob...

Posted by Jason Z on 31 May 2014 - 11:47 AM

Yeah, if you are sticking to Windows only, then WRL::ComPtr is a good solution.  If you plan to go cross-platform, then you will need a wrapper around your resource pointer anyway (to abstract the API-specific parts), so you can just use plain smart pointers for the wrapper object.  If your wrapper object acquires the resource pointer in its constructor and releases it in its destructor, then the smart pointer will take care of the reference counting and simply delete the wrapper when there are no more references to it (which in turn releases the resource pointer).

 

This is more or less what WRL::ComPtr does, with some additional goodies added in for querying interfaces and things like that.  If you need those, you can always add them to the wrapper class.  In fact, if you make your wrapper class a class template, then you can easily reuse it for any COM based object that you want to use...




#5157042 Is reference counting the best and proper way to deal with ID3D11Resource ob...

Posted by Jason Z on 30 May 2014 - 04:01 PM

In general, I would recommend against rolling your own smart pointers.  There are subtle issues with the way these objects interact with the STL that will give you debugging nightmares.  If you choose to use a smart pointer, use the standard ones - they are reliable and very well tested, and there is no reason to think you will do better on your own.

 

Regarding the management of resource lifetimes, I find that using the COM reference counting system is more than sufficient.  Most of the time your engine/game will know in advance what needs to be loaded for a particular level, so non-automated resource handling should be perfectly fine.  If you absolutely must use automatic resource lifetime management, then you should use the WRL::ComPtr which manages the object lifetime with the COM reference counting system.  This gives you the benefits of automatic control, without having to create your own system to support it!




#5156865 Graphics Layers for different for Directx11 and Directx12

Posted by Jason Z on 29 May 2014 - 07:48 PM


To be honest, I don't think you should be worrying about this right now. D3D12 is at least a year away, and we don't yet know what the interface is going to be like, so trying to pre-empt an unreleased API version doesn't seem like good sense. It would be better to focus on doing a good job with D3D11 instead.

I would echo this caution, and I'll actually be the dissenting voice in this discussion and say that you shouldn't try to abstract away the differences between D3D11 and D3D12.  D3D12 is supposed to be a superset of 11, except with more direct control over low-level details so that you can squeeze the last few drops of performance out of your hardware.  If you try to make a common abstraction between 11 & 12, then you will end up muting the benefits of 12 without really gaining anything.  Assuming that D3D11 will be available anywhere that D3D12 is, there isn't any benefit to supporting both on a common abstraction.

 

For pro studios, a common abstraction makes sense because it allows running a game on multiple platforms and supporting multiple APIs.  But if you can't gain anything from the abstraction (due to D3D11 and D3D12 being available together everywhere), then this system doesn't make much sense.  Instead, I would suggest that you write your D3D11 renderer now, keep note of any pain points you encounter, and then design a new renderer in about a year when D3D12 comes out, incorporating what you learned the first time around.




#5156611 Tweening Questions

Posted by Jason Z on 28 May 2014 - 08:24 PM

One other point to consider is that tweening requires an enormous amount of vertex memory, since you are essentially creating a keyframe that includes every vertex.  If you make a 10 frame animation, then you have 10x the number of vertices to store!  Skeletal animation drastically reduces the amount of required memory, and trades it for a little bit more runtime computation - but it is still very much worth it.

 

If you aren't at all familiar with skinning, then check out the old Cg Tutorial chapter on it, which is where I first learned about it: http://http.developer.nvidia.com/CgTutorial/cg_tutorial_chapter06.html




#5151950 Does swapchain->Present always stretch to the target?

Posted by Jason Z on 06 May 2014 - 07:44 PM

Have you set the viewport to the area of the client window that you want to render into?  That should let you choose exactly where you are rendering - unless I am misinterpreting what you are trying to do...  If so, can you post an image to show what isn't working properly?




#5151864 Direct3D 12 Staging Resources

Posted by Jason Z on 06 May 2014 - 12:20 PM

Over the years since Direct3D 11 was released, I would venture to guess that one of the most common problems new developers have run into is the requirement to have a staging resource when you want both CPU and GPU access to a resource.  The requirement is counter-intuitive, and if your resources are really big (e.g. 3D textures) it is even more of a problem, since you have to keep a huge resource around just for copying data back and forth.  The alternative is to temporarily create a resource just for the transfer event and then release it, but that goes against recommended practices.

 

So this post is an open request to the Direct3D 12 developers (of which I know at least one of them is lurking around... Max!).  Please allow the control of staging properties of a resource with the resource barrier objects.  If we can change the CPU / GPU access properties with a barrier transition, then it will let the developer have easier control over his/her resources and reduce the number of API calls needed to copy data back to the CPU.  This should theoretically improve performance (fewer API calls) and it lets the algorithm implementer explicitly show what he is trying to achieve.

 

This functionality may already be possible in the current state of the API (I haven't seen any more than the BUILD 2014 talk) but if it isn't, please consider adding this!

 

If there are other topics like this that the general community sees as relevant or important to change for D3D12, please post those ideas so that the feedback gets to the right people!




#5151825 D3D11 and multiplication order in the GPU

Posted by Jason Z on 06 May 2014 - 09:44 AM

It is just a convention that they use (HLSL defaults to column-major matrix packing), which can be modified with a compilation flag for your shader, or with the row_major keyword.  See the details here.

 

I would also recommend getting very familiar with the matrix classes you are using, and having a solid understanding of how they map to traditional matrix math operations.  Different libraries may implement operations differently, so you can't assume that a given multiplication order will behave the same way everywhere...

 

Spend a day or two running some experiments in a unit-testing framework (VS comes with a simple one out of the box) that you can refer back to, and see how the matrix operations work and what order you need to apply them in - it will be well worth doing, both now and in the future when you run into a special case!





