You can use your current view matrix functions, and just modify their input to an appropriate new model. For example, if you want the camera view to rotate around an object, your look at point would be the center of the object. Then the location for your 'look from' point can be calculated with spherical coordinates each frame. This lets you easily increment the values of the spherical coordinates, which then get translated into a position.
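The spherical-coordinate approach can be sketched like this (a minimal illustration, not from any particular engine - the names and angle conventions are assumptions):

```cpp
#include <cmath>

// Convert spherical coordinates (radius, theta, phi) around a target point
// into a Cartesian 'look from' position. theta is the azimuth angle around
// the Y axis, phi is the elevation above the XZ plane.
struct Vec3 { float x, y, z; };

Vec3 OrbitPosition(const Vec3& target, float radius, float theta, float phi)
{
    Vec3 eye;
    eye.x = target.x + radius * std::cos(phi) * std::sin(theta);
    eye.y = target.y + radius * std::sin(phi);
    eye.z = target.z + radius * std::cos(phi) * std::cos(theta);
    return eye;
}
```

Each frame you would increment theta (and/or phi) by some speed times the frame delta, then feed the resulting eye point and the object center straight into your existing look-at function.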
Like Buckeye said, there are lots of ways to do it. If you provide a little more detail about what you are currently using for your view matrix generation, we can probably give you a more specific answer.
I don't think it is possible to monitor all memory allocations and their locations in video memory. On modern architectures GPU memory is virtualized, meaning that more than one process will have GPU memory allocated at any given time (including Windows itself). I believe the switch to DX10+ will help you in this situation, since DX9 did not use a virtualized memory model.
What you can do is to monitor your own memory allocations. Anywhere that you allocate an object that resides in video memory, you can track it and keep a record of how much memory you have consumed. Then this can be compared with the physical video memory of the system, which can be acquired through DXGI interfaces.
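As a rough sketch of that bookkeeping (the class and method names here are made up for illustration):

```cpp
#include <cstddef>

// Track your own video-memory usage: call Add() whenever you create a
// resource that lives in video memory (texture, buffer, etc.) and Remove()
// when you release it. The running total can then be compared against the
// adapter's DedicatedVideoMemory reported in DXGI_ADAPTER_DESC.
class VideoMemoryTracker
{
public:
    void Add(std::size_t bytes)    { m_used += bytes; }
    void Remove(std::size_t bytes) { m_used -= bytes; }
    std::size_t Used() const       { return m_used; }

private:
    std::size_t m_used = 0;
};
```

The physical memory side comes from IDXGIAdapter::GetDesc, which fills in a DXGI_ADAPTER_DESC containing the DedicatedVideoMemory field.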
May I ask how you found out that the crash was caused by allocating too much memory?
That's what I meant. As soon as Hodgman mentioned STALKER and that raw D3D stuff was out of the picture, nobody seems to dare to reply.
I don't have any problem with the STALKER engine - I just don't know how to implement what you are asking without access to the API... Is it really necessary to use the STALKER engine, or could you upgrade to something more open?
I'm not sure about OpenGL, but in D3D11 you have to render to one slice of a 3D texture at a time. If you want to fill the 3D texture with data, I would suggest using compute shaders rather than directly rendering...
You can always create your vertex buffer as a structured buffer or byte address buffer, and then use SV_VertexID to manually unpack your data in the shader. However, it's almost certainly not going to save you any performance.
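To make the unpacking arithmetic concrete, here is the same fetch written in C++ rather than HLSL - in the shader you would do the equivalent with ByteAddressBuffer loads indexed by SV_VertexID. The vertex layout (float3 position + float2 texcoord, 20 bytes per vertex) is just an assumption for illustration:

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Given a raw byte buffer of interleaved vertices and a vertex index
// (SV_VertexID in HLSL), compute the byte offset of each attribute and
// load it manually. Assumed layout: 12 bytes of position, 8 of texcoord.
struct Vertex { float pos[3]; float uv[2]; };

Vertex FetchVertex(const std::vector<uint8_t>& buffer, uint32_t vertexID)
{
    const uint32_t stride = 20; // sizeof(float) * 5
    Vertex v;
    std::memcpy(v.pos, buffer.data() + vertexID * stride, 12);
    std::memcpy(v.uv,  buffer.data() + vertexID * stride + 12, 8);
    return v;
}
```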
It won't save performance for sure, but it opens lots of doors for you. Indirect draw calls are much easier to do with a structured buffer as the data storage system for example.
If you are using a vector, you will get a contiguous block of memory used for your objects which may be helpful for cache performance and things like that. If you want to find a particular entity, then I would suggest using an index into the vector instead of by string name. You could easily create a helper function to convert to / from index and name, which will ease the process.
I wouldn't see any particular need to use a map in this case, since you don't really need to store the items by a unique ID or key.
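A small sketch of that setup (the class is hypothetical - the point is that hot loops iterate the contiguous vector directly, and the string lookup is only the slow path):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

struct Entity { std::string name; /* components, etc. */ };

// Entities live contiguously in a vector for cache-friendly iteration; a
// side map handles the occasional name -> index conversion.
class EntityList
{
public:
    std::size_t Add(Entity e)
    {
        std::size_t index = m_entities.size();
        m_nameToIndex[e.name] = index;
        m_entities.push_back(std::move(e));
        return index;
    }

    // Returns the index for a name, or Size() if the name is not found.
    std::size_t IndexOf(const std::string& name) const
    {
        auto it = m_nameToIndex.find(name);
        return (it != m_nameToIndex.end()) ? it->second : m_entities.size();
    }

    Entity& operator[](std::size_t i) { return m_entities[i]; }
    std::size_t Size() const { return m_entities.size(); }

private:
    std::vector<Entity> m_entities; // contiguous storage
    std::unordered_map<std::string, std::size_t> m_nameToIndex;
};
```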
I wasn't aware that the Pro version was free - do you have a link showing where that is stated? We have been using the Express editions because of the licensing cost, but of course if the Pro SKU is available then I would upgrade to that. I would be really surprised if that free version isn't a time-limited demo - otherwise why would they even have the Express SKU???
So now you have the root cause of the issue, right? So no further need for debugging there?
I can’t use D3D11_CREATE_DEVICE_DEBUG without the Windows SDK 8 installed, something I fear doing considering apparently some things have to be done through Visual Studio 2012 (which I do not have).
I guess I'm going off topic here, but I exclusively use VS2012 Express for Windows Desktop at work for my visualization tools. Is there a substantive reason you don't want to upgrade? My applications are targeting Windows 7+, so if you are using D3D11 then that should be ok for you too.
When I made the switch off of the DXSDK, it was essentially to remove my D3DX dependencies and find an alternative way to load textures. If you are working in a cross-platform engine, I would assume you have both of those topics covered already, so why not upgrade? Each VS2012 update provides performance and bug fixes for the toolset and libraries, so there is a reason to upgrade. Plus newer C++11/14 features are being added in each release, which is also a nice bonus...
This is just a crazy first shot, but is the exception handled somewhere other than your code? I have had cases where the first chance exception occurs, only to be handled in one of the underlying libraries (i.e. the exception is used as a normal part of one of the sub-systems). If you set VS to not break on first chance exceptions then you would never even know it occurred.
Before digging into the nitty gritty details, have you tried that out?
Again, GLSL does not make a distinction between column and row vectors in the code itself, but the order of operations is explicit, of course (the multiplication order is effectively equivalent to choosing a majorness).
The difference in the order is whether you multiply the vector first and then have every subsequent matrix multiply a vector (reducing the number of operations, since a vector is only 4x1), or concatenate all the matrices in order and only multiply the vector at the end.
That only saves operations if you are doing only one or two transformations. If you are processing lots of vectors, then the matrix concatenation is clearly more efficient.
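A back-of-the-envelope multiply count makes the trade-off clear (using 16 multiplies per 4x4 matrix-vector product and 64 per 4x4 matrix-matrix product; the function names are just for illustration):

```cpp
#include <cstdint>

// Cost of pushing every vector through every matrix individually.
uint64_t PerVectorChain(uint64_t numMatrices, uint64_t numVectors)
{
    return numVectors * numMatrices * 16;
}

// Cost of concatenating the matrices once, then doing a single
// matrix-vector product per vector.
uint64_t ConcatenateFirst(uint64_t numMatrices, uint64_t numVectors)
{
    return (numMatrices - 1) * 64 + numVectors * 16;
}
```

For 3 matrices and a single vector the chained form wins (48 vs 144 multiplies), but for 3 matrices and 1000 vectors concatenation wins easily (16,128 vs 48,000).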
Regarding your original question, are you having issues with only one or two shaders, but all of the others are working as you expected (using the same matrix multiplication order)?
You could do more or less the same thing as the constant buffer approach, but instead use an SRV to hold the data. That allows you to have a variable sized resource that has a much larger potential size than a CB, and you can still access the data in a simple way. You will have to test it out to see if there is a performance delta for your particular situation though, since you will be trading device memory accesses for interpolants. It may or may not be a good trade off...
How do I get the reflection interface for the default constant buffer?
If you compile your shader with FXC, and then check the output listing for the name of the buffer you should be able to use that name to reflect the constant buffer. Regarding the performance, I think there is a penalty for using the dynamic linkage (although I've never heard hard numbers on this before). Most of the systems I have heard of just generate the appropriate shader code and compile the needed variants accordingly.
Is that something that isn't a usable solution for you?
Have you seen the visual editor in UE4? I think it targets the same or similar functionality that you are implementing. That should tell you that there is at least some demand for such a feature - most likely to allow designers to get in on behavior modification and things like that.
Would you ever see yourself using such a system? Are you planning to make a product or to provide an open source library? I would say that if you are learning while you are building the system, then keep on going!