
phantom

Member Since 15 Dec 2001

#5269390 Criticism of C++

Posted by phantom on 05 January 2016 - 06:10 AM

Yes, because the deprecation of some library features is the same as a 'rethink of the language'...

And as the level of conversation here has apparently descended to calling things 'fail' and bitter sarcasm, I'm out; nothing productive will come of this.


#5269374 Criticism of C++

Posted by phantom on 05 January 2016 - 05:00 AM

I seriously think someone should re-think C++ from scratch instead of driving away into another language.


A 'rethink from scratch' would just give you another language.

Anything which breaks backwards compatibility is a new language.

End of story.

And people have tried; D, which was brought up earlier, is this very idea incarnate, yet it has failed to catch on.

A 'rethink of C++' is no longer C++.


#5269065 Criticism of C++

Posted by phantom on 03 January 2016 - 03:33 PM

Which is why I'm having big expectations for D (being developed by, basically, two people) once it matures a bit.


Yep, probably around the time of the Year Of The Linux Desktop...
(D is already 14 years old and has had two people working on it since 2006... if you look into the distance you can see the boat it missed sailing off, with all the languages which have come along since partying on it...)


#5268117 Enable MSAA in DirectX 11

Posted by phantom on 27 December 2015 - 08:24 AM

On point 2 I think you've misread it; the data pointer must be null when creating a multisampled texture, as there is no way to upload multisample data to the GPU. Multisample textures are only useful as render targets (and for sampling from once they have been rendered to) and thus cannot be immutable.
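To make that concrete, here is a minimal sketch of creating a multisampled render target in D3D11; the dimensions and format are placeholder values:

    // Creating a 4x MSAA render target; note the null initial-data
    // pointer and the non-immutable usage, as described above.
    D3D11_TEXTURE2D_DESC desc = {};
    desc.Width            = 1280;                       // placeholder size
    desc.Height           = 720;
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R8G8B8A8_UNORM;
    desc.SampleDesc.Count = 4;                          // multisampled
    desc.Usage            = D3D11_USAGE_DEFAULT;        // IMMUTABLE is not allowed here
    desc.BindFlags        = D3D11_BIND_RENDER_TARGET | D3D11_BIND_SHADER_RESOURCE;

    ID3D11Texture2D* msaaTarget = nullptr;
    HRESULT hr = device->CreateTexture2D(&desc, nullptr /* no initial data */, &msaaTarget);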


#5267904 Unity vs Unreal 4 approach to entity-component pattern

Posted by phantom on 25 December 2015 - 06:42 AM

but Unreal's approach smells a bit like a holdover from the days of over-using inheritance, which makes me a little wary.


While it shouldn't make you 'wary' as such, you are correct that the larger classes are a holdover from the pre-UE4 days.
While it does give you easy canned classes to work with, it does present problems: some of those classes don't derive from the right part of the inheritance tree, so you couldn't, for example, treat an Actor as a SceneComponent in an interchangeable way. That leads to complications in both the code and the usage pattern, and the designer ends up having to remember special rules.

I believe there is an effort afoot to convert things like StaticMeshRenderer and the like to proper component systems so that problem goes away (there are a number of classes which share this workflow/looks-like-a-component-but-isn't problem), but I've no idea how far along that work is.


#5267660 Compiling and running on different OpenGL versions

Posted by phantom on 23 December 2015 - 09:56 AM

Yes, I've seen drivers in the wild which will return a pointer to a function that just prints 'not supported' to the log; not just Qualcomm either, this was an NV driver a few years back.

Basically, Android remains the place your dreams go to die. :)
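Which is why it pays to cross-check the extension string rather than trusting a non-null function pointer. A minimal sketch, assuming an EGL/GLES 2.0 context; the extension and entry point named here are just examples:

    #include <cstring>
    #include <EGL/egl.h>
    #include <GLES2/gl2.h>

    // True only if the driver actually advertises the extension; a
    // non-null pointer from eglGetProcAddress alone proves nothing.
    static bool HasGLExtension(const char* name)
    {
        const char* all = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        return all && std::strstr(all, name) != nullptr;
    }

    static void* LoadVAOBindProc()
    {
        if (!HasGLExtension("GL_OES_vertex_array_object"))
            return nullptr; // the pointer might exist but be a useless stub
        return reinterpret_cast<void*>(eglGetProcAddress("glBindVertexArrayOES"));
    }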


#5261214 How hard it is to live from games as indie developer?

Posted by phantom on 09 November 2015 - 03:20 PM

heh, make money in the mobile market... heh... good one!


#5253730 Dealing with D3DX____ functions being deprecated.

Posted by phantom on 23 September 2015 - 04:41 PM

Thanks, great explanation. I was totally not aware of that sample page. In view of all that I'm going to go straight to 12; I've got a feeling that the industry will transition to that much faster than other builds considering the advantages. Thanks, cheers.


I would just like to ask this question: why are you doing what you are doing?

If it is for a project that you plan to finish and release, then I would transition to 11 and work with that for a while. D3D11 isn't going away, and while it might have performance issues for the AAA guys, if you are building your own stuff it might be worth sticking with D3D11.

If your goal is to get a job in the industry then D3D12 makes more sense, and you might as well skip 11 as the APIs aren't remotely compatible.

One of the key things going from 10 or 11 to 12 is that it will probably require a rethink and a rebuild of many things to get the best from it; trying to treat D3D12 the same as D3D10, data- and work-wise, isn't going to net you a great deal. Heck, I would argue that unless you are multi-threading your whole rendering system the whole way through, you should stick to 11 - you get pretty well-tuned drivers which will spread the workload for you.

D3D12 will shine when you can go wide and when you understand how the hardware works (and thus why things are as they are) - it is more hands-on.

That's not to discourage you from moving across, of course; just think about why you are doing it - new and shiny isn't the best for every project, and you might be served well enough by D3D11, where things are tuned, stable and much better documented.
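For what 'going wide' looks like in practice, here is a rough sketch of per-thread command list recording; the helper RecordDrawsForSlice and the pre-created lists and queue are all hypothetical names, not anything from the original post:

    #include <d3d12.h>
    #include <thread>
    #include <vector>

    // One command allocator + list per worker thread, recorded in
    // parallel, then submitted together from a single thread.
    void RecordFrameInParallel(std::vector<ID3D12GraphicsCommandList*>& lists,
                               ID3D12CommandQueue* commandQueue)
    {
        std::vector<std::thread> workers;
        for (size_t i = 0; i < lists.size(); ++i)
        {
            workers.emplace_back([&, i]
            {
                RecordDrawsForSlice(lists[i], i); // hypothetical: records this thread's slice of draws
                lists[i]->Close();                // a list must be closed before execution
            });
        }
        for (auto& w : workers) w.join();

        // One thread submits the lot, in order.
        commandQueue->ExecuteCommandLists(static_cast<UINT>(lists.size()),
            reinterpret_cast<ID3D12CommandList* const*>(lists.data()));
    }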


#5253374 "Xbox One actually has two graphical queues"

Posted by phantom on 21 September 2015 - 06:40 PM

Close, but the layering is a bit more complicated than that.

An ACE is a higher-level manager which splits work across compute units; internally each CU can schedule and control up to 40 wavefronts of work (4 x 10 'program counters', if you will), dispatching instructions and switching between work as required. The details are covered in AMD's presentations, but basically, from each group of 10 program counters, the CU can dispatch up to 4 instructions across the SIMD, scalar, vector memory, scalar memory and program flow control units, which is the 'hyper-threading' part.

(Each CU can handle 40 programs of work, each of which consists of 64 threads; multiply that up by the CU count and you get the amount of 'in flight' work the GPU can handle.)
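As a rough worked example (the 12-CU count here is the commonly quoted Xbox One figure, used purely as an assumption):

    // Back-of-the-envelope 'in flight' maths for a GCN GPU.
    constexpr int kComputeUnits    = 12;  // assumed: Xbox One's CU count
    constexpr int kWavesPerCU      = 40;  // 4 SIMDs x 10 program counters
    constexpr int kThreadsPerWave  = 64;
    constexpr int kThreadsInFlight = kComputeUnits * kWavesPerCU * kThreadsPerWave;  // 30,720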

The ACE, which feeds the CUs, handles work generation and dispatch, along with work dependency tracking. From a CPU point of view it is more like the kernel scheduler, working out what needs to be dispatched to each core (although instead of just assigning work it's more a case of "I need these resources, can anyone handle it?" for each piece of work, with the ability to suspend work (and, IIRC, pull the state back) when more important work needs to run on a CU).

The number of ACEs varies across hardware: at least 2, currently a maximum of 8.


#5252076 [D3D12] Enabling the depth test in d3d12

Posted by phantom on 13 September 2015 - 02:53 PM

It isn't that simple; in fact, that abstraction is a bit of a lie ;)

In D3D12, for something like that, you need to have a Pipeline State Object set up which contains details of everything needed for a draw operation; these docs show how it is set up.
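A minimal sketch of the relevant fields, assuming the rest of the PSO description (shaders, root signature and so on) is filled in elsewhere:

    D3D12_GRAPHICS_PIPELINE_STATE_DESC psoDesc = {};
    // ... shaders, root signature, rasterizer state, render target formats ...
    psoDesc.DepthStencilState.DepthEnable    = TRUE;
    psoDesc.DepthStencilState.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ALL;
    psoDesc.DepthStencilState.DepthFunc      = D3D12_COMPARISON_FUNC_LESS;
    psoDesc.DSVFormat = DXGI_FORMAT_D32_FLOAT; // must match the bound depth buffer

    ID3D12PipelineState* pso = nullptr;
    HRESULT hr = device->CreateGraphicsPipelineState(&psoDesc, IID_PPV_ARGS(&pso));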


#5251948 [D3D12] Barriers and Fences

Posted by phantom on 12 September 2015 - 05:20 PM

Resource barriers - add commands to convert a resource (or resources) from one state to another (such as from a render target to a texture you can sample); they prevent further command execution until the GPU has finished any work needed to convert the resources as requested.
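A minimal sketch of such a transition (assuming an open command list, and a hypothetical renderTarget resource):

    // Transition a resource from render target to something a pixel
    // shader can sample from.
    D3D12_RESOURCE_BARRIER barrier = {};
    barrier.Type  = D3D12_RESOURCE_BARRIER_TYPE_TRANSITION;
    barrier.Flags = D3D12_RESOURCE_BARRIER_FLAG_NONE;
    barrier.Transition.pResource   = renderTarget; // hypothetical ID3D12Resource*
    barrier.Transition.Subresource = D3D12_RESOURCE_BARRIER_ALL_SUBRESOURCES;
    barrier.Transition.StateBefore = D3D12_RESOURCE_STATE_RENDER_TARGET;
    barrier.Transition.StateAfter  = D3D12_RESOURCE_STATE_PIXEL_SHADER_RESOURCE;
    commandList->ResourceBarrier(1, &barrier);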

Fences - a marker in a command stream. They allow you to know when the GPU, or CPU, has finished some piece of work so the two can be synchronised.

Example: a GPU command queue/list contains a fence which can be set by the CPU; this prevents the GPU from executing further commands until the CPU signals it is done. So, for example, if you have a command which reads from a buffer on the GPU and the CPU needs to fill that buffer first, you'd insert a wait on a fence into the GPU commands, telling the GPU to hold off until the CPU has signalled that the copy has completed.

Going the other way, you can use a fence the GPU has set to know how far the GPU has got in its execution. A good example of this is adding a fence command at the end of the frame, so the CPU knows when the GPU is done with the last frame's worth of data/buffers.
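A sketch of that end-of-frame pattern; the fence, fenceEvent and queue are assumed to have been created already, and lastSubmittedFenceValue is a hypothetical counter:

    // GPU side: ask the queue to set the fence once everything
    // submitted so far has executed.
    const UINT64 frameFence = ++lastSubmittedFenceValue;
    commandQueue->Signal(fence, frameFence);

    // CPU side, later, before reusing this frame's buffers:
    if (fence->GetCompletedValue() < frameFence)
    {
        fence->SetEventOnCompletion(frameFence, fenceEvent);
        WaitForSingleObject(fenceEvent, INFINITE); // block until the GPU catches up
    }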


#5250157 What is latest version of DXSDK

Posted by phantom on 01 September 2015 - 01:12 PM

The DirectX SDK is now part of the Windows SDK - if you have that installed (which should be the case with VS2015) then you have the latest version.


#5247595 [D3D12] SetDescriptorHeaps

Posted by phantom on 19 August 2015 - 03:17 AM

Conceptually, that makes sense to me. The confusing part is that Set*RootDescriptorTable already takes a GPU descriptor handle, which defines the GPU pointer to the heap (is that correct?). Is there not enough information in the D3D12_GPU_DESCRIPTOR_HANDLE to identify the heap? I suppose I could see it as a way to simplify the backend by requiring the user to specify the exact set of buffers instead of gathering a list from descriptor handles (which would be more expensive). Secondly, can I provide the heaps in arbitrary order? Do they have to match up somehow with the way I've organized the root parameters?


I suspect it is done that way to let the driver decide what to do.
Given a heap base address and two pointers into that heap, you can store the latter as offsets from the base, potentially taking up less register space than a full-blown reference. A look at the GCN docs would probably give a good reason for this, at least for that bit of hardware.

As for the order: it seems not to matter.
I only did a simple test on this (the D3D12DynamicIndexing example; swapped the order of the heaps @ Ln93 in FrameResource.cpp), but it worked fine, so I'm willing to assume this holds true for all resources... until I find a situation which breaks it ;)
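For reference, the test amounted to something like this (heap variables assumed created elsewhere); the root-descriptor-table handles still do the actual addressing, whatever the heap order:

    // Order swapped relative to the sample; still works.
    ID3D12DescriptorHeap* heaps[] = { samplerHeap, cbvSrvUavHeap };
    commandList->SetDescriptorHeaps(_countof(heaps), heaps);
    commandList->SetGraphicsRootDescriptorTable(
        0, cbvSrvUavHeap->GetGPUDescriptorHandleForHeapStart());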


#5247353 Vulkan is Next-Gen OpenGL

Posted by phantom on 18 August 2015 - 04:28 AM

This might be of some interest to people, though I've only just started looking at it myself: the Siggraph course 'An Overview of Next Gen Graphics APIs'.

I bring it up because it mentions Vulkan, not that I've got to that slide deck yet :D


#5247128 DX12 SkyBox

Posted by phantom on 17 August 2015 - 10:26 AM

There is a better way to do things in general: don't draw the sky first.

You just end up wasting fill rate on overdraw, shading pixels which will never be seen.
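One way to draw the sky last, sketched for D3D12 and assuming a standard depth range where 1.0 is the far plane:

    // Depth state for a sky drawn after all opaque geometry: it only
    // fills pixels nothing else has touched, and never writes depth.
    D3D12_DEPTH_STENCIL_DESC skyDepth = {};
    skyDepth.DepthEnable    = TRUE;
    skyDepth.DepthWriteMask = D3D12_DEPTH_WRITE_MASK_ZERO;
    skyDepth.DepthFunc      = D3D12_COMPARISON_FUNC_LESS_EQUAL; // passes where depth is still 1.0

    // In the vertex shader, pin the sky to the far plane so the
    // LESS_EQUAL test works:  output.pos = output.pos.xyww;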



