

#5245788 D3D 12: Using fence objects

Posted by phantom on 11 August 2015 - 12:09 PM

1. There is no direct link between the two. Links are 'created' simply by calling 'signal' after 'execute command list' on a queue.

2. 'From the GPU side' simply means doing it in the GPU's time frame. So calling 'signal' will insert a command into the command stream which, when the GPU executes it, sets the fence to a value. This would happen inside the command processor of the GPU.

3. The command queue wait stalls the GPU's command processor until the fence is signalled. The Win32 API function stalls the CPU until the fence is signalled.

From a practical standpoint:
- CommandQueue::Wait() causes the GPU's command processor to wait for the fence to be signalled. Let's say you have a command list which is running a compute shader, and a graphics command list which is going to do some graphics commands that depend on the output of that compute work. You can submit both lists to separate queues and have the graphics queue wait on the fence from the compute queue before executing the graphics commands. Without this the two workloads could execute at the same time if the GPU in question has separate graphics and compute queue hardware (Maxwell 2 and GCN are both examples of this).

A second example would be doing a texture upload via a copy queue; you'd want to make sure the copy was complete before allowing any work which depended on it to reference it, so again you'd put a 'signal' in the copy queue and 'wait' on it in the graphics queue.

- Win32 wait would be used when you want to cause a CPU thread to sleep until the GPU has done some work. A simple example of this is waiting for all of a scene to be drawn before submitting the next batch of work.

A good example of all of this is the ExecuteIndirect example in the DirectX Graphics GitHub examples; https://github.com/Microsoft/DirectX-Graphics-Samples
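To make the queue-wait versus CPU-wait distinction concrete, here is a plain-Python analogy of fence semantics: a monotonic counter where `wait(v)` blocks until some `signal(v')` with `v' >= v` has happened. This is NOT the D3D12 API; the `Fence` class and the two "queue" threads are made up purely to illustrate the ordering guarantee (the real calls would be `ID3D12CommandQueue::Signal`/`Wait` and, on the CPU side, `ID3D12Fence::SetEventOnCompletion` plus `WaitForSingleObject`).

```python
import threading

class Fence:
    """Toy monotonic fence: wait(v) blocks until signal(v') with v' >= v."""
    def __init__(self):
        self._value = 0
        self._cond = threading.Condition()

    def signal(self, value):
        # Analogous to Signal() queued after ExecuteCommandLists:
        # raises the fence value and wakes any waiters.
        with self._cond:
            self._value = max(self._value, value)
            self._cond.notify_all()

    def wait(self, value):
        # Analogous to CommandQueue::Wait(): stall until the value is reached.
        with self._cond:
            self._cond.wait_for(lambda: self._value >= value)

order = []
fence = Fence()

def compute_queue():
    order.append("compute work")   # the work the graphics side depends on
    fence.signal(1)                # like ID3D12CommandQueue::Signal(fence, 1)

def graphics_queue():
    fence.wait(1)                  # like ID3D12CommandQueue::Wait(fence, 1)
    order.append("graphics work")  # only runs once the compute side is done

g = threading.Thread(target=graphics_queue)
c = threading.Thread(target=compute_queue)
g.start(); c.start()
g.join(); c.join()
print(order)  # ['compute work', 'graphics work']
```

However the two threads get scheduled, the wait guarantees the graphics work observes the completed compute work, which is exactly the cross-queue ordering the post describes.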

#5245456 Backface culling in geometry shader?

Posted by phantom on 10 August 2015 - 07:37 AM

What are you trying to do which makes you think you need to use the GS?
Generally there is likely to be a way of doing it which will perform better.

#5245447 Backface culling in geometry shader?

Posted by phantom on 10 August 2015 - 07:06 AM

You don't improve the performance by adding an extra stage, and certainly not the geometry shader stage which is a known performance sinkhole.

Turn on back-face culling in the API and let the hardware do its thing.
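For reference, the test the hardware does when culling amounts to checking the triangle's screen-space winding via a signed area. A rough Python sketch (the helper names are made up for illustration; in a real renderer you'd just set the cull mode in the rasterizer state and never write this yourself):

```python
def signed_area(p0, p1, p2):
    # 2D cross product of the triangle's edge vectors;
    # the sign tells you the winding order on screen.
    return (p1[0] - p0[0]) * (p2[1] - p0[1]) - (p1[1] - p0[1]) * (p2[0] - p0[0])

def is_back_facing(p0, p1, p2, front_ccw=True):
    # With counter-clockwise fronts, a negative area means the
    # triangle is wound clockwise on screen, i.e. facing away.
    a = signed_area(p0, p1, p2)
    return a < 0 if front_ccw else a > 0

tri = [(0, 0), (1, 0), (0, 1)]              # counter-clockwise winding
print(is_back_facing(*tri))                 # False: front facing
print(is_back_facing(*reversed(tri)))       # True: reversed winding faces away
```

The fixed-function rasterizer does this per triangle before any pixel work is launched, which is why duplicating it in a geometry shader only adds cost.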

#5245400 [D3D12] Multi-threading: Command Queue, Allocator, List

Posted by phantom on 10 August 2015 - 02:43 AM

You are correct on the queue front.

AMD GCN hardware has 1 gfx queue, at least 2 compute pipes (on GCN 1.0; the 290X I have at home has 8) and, iirc, 2 DMA engines. The 'compute pipes' are referred to as Async Compute Engines, or ACEs, and each can handle multiple command queues and keep more than one job in flight.

NV is a bit more complex; before Maxwell 2 you basically couldn't have a gfx pipe and a compute pipe active at once. Maxwell 2 removes this restriction, giving you 1 gfx pipe and 31 compute pipes. However, NV aren't forthcoming with details, so it is unknown how those pipes match up to queues.

Intel don't have any specialist hardware and, due to how their GPUs are designed, show little improvement with D3D12 in the first place. Still worth using, however, as it will still reduce CPU overhead.

(Queue = memory address we are reading commands from, pipe = hardware consuming said queue.)

And yes, multiple queues allow work to be dealt with independently in an optimal fashion.
For example, if you had copy, gfx and compute work which were all independent, you could put it all into the graphics queue, BUT it would take time for the graphics command processor to chew through all of that and distribute it. You also have the serial nature of pushing each command into a single queue to execute it.

By contrast, by using a separate queue for each piece of work the GPU can dispatch them at the same time, as each queue will be directed at the correct bit of hardware. Front-end pressure on the graphics command processor drops by two thirds as, instead of dealing with 3 streams of commands, it is now dealing with 1, and the hardware can be utilised more fully, faster. (You can also set up and dispatch each piece of work independently on the CPU side, so a win there too.)

This is, of course, a simple example, but when you start throwing loads of copies and more gfx and compute work into the mix you can see the win.

How you split things up is, of course, up to you and finding the right balance is key.

#5245208 [D3D12] Multidraw, Resource binding

Posted by phantom on 09 August 2015 - 03:21 AM

HOWEVER, as you aren't wandering off into user mode driver town every time you do a draw call, because you are recording them into a client side buffer, this isn't a bad thing.

MultiDraw, and its ilk, are pretty much there as an optimisation and would allow the driver to hit some faster paths because it knows, between draw calls, that you aren't changing state.

Useful in a client-server setup.
Useful in a high overhead setup.
Less useful in a thin API.

In fact, with the ExecuteIndirect functionality of D3D12 (which gives you the ability to change some root signature constants, and indeed vertex and index buffer locations, based on a buffer input) you can do more things with better functionality. (One thread writes a command list with N indirect calls, another thread (or the GPU!) writes a buffer with draw call information, a bit of fence sync magic, and bam! loads of draws!)

#5244803 Vulkan is Next-Gen OpenGL

Posted by phantom on 06 August 2015 - 06:33 AM

I'm pretty sure it's a whole new system; although, as with most things Vulkan, the details are thin on the ground ;)

Quickly looking over a GDC presentation (clicky) it seems like you create an instance of a Vulkan interface and go from there.

The back buffer looks to be a special texture which can be drawn to the screen; if you are familiar with DX's DXGI stuff it seems to be a bit like that.

As a side note, having recently started getting up to speed with DX12 and now having looked back at that presentation I would say if you want to start getting a handle on this stuff then go and look at/play with DX12.
While the API is a little different the overall shape is the same; buffers, pipelines, barriers.. it all looks very much alike in application, so even if you don't plan to stick with DX12 for the long haul its still a good starting point right now.

#5244662 Vulkan is Next-Gen OpenGL

Posted by phantom on 05 August 2015 - 09:10 AM

They only said that it's coming this year. I'm also curious about who will support it. PS4, Apple etc. or will they insist on their proprietary solutions?

Apple might, but they're already doing Metal on iOS/OSX now, and they've always been slow on the OpenGL front, so I wouldn't hold my breath.

Sony already have the best API of the current offerings... They could make a Vulkan wrapper around it, to make porting easier for PC developers... But in that situation, Vulkan would be a wasteful high-level API blocking you from low-level efficiencies.
I doubt many developers would prefer that.

Yeah, I honestly don't see buy in from anyone who already has a low overhead API happening.

Vulkan is likely to cover Windows, Linux (and related) and, as pointed out above, Android - although, like OpenGL/OpenGL|ES before it, I wouldn't expect 'one code path to rule them all'.

Future landscape is likely to be;
MS platforms; DX12/DX11(legacy)
iOS/OSX; Metal
Linux; Vulkan/OGL
Android; Vulkan/OGL|ES

I'm intrigued, in a vague way, to see what happens with WebGL. It remains behind the times on features with everyone else in the GL ecosystem (iirc WebGL2 is still very much WIP) but exists in a world where Vulkan-like low level APIs don't really fit.

Although it's still very much a matter of 'when it happens...'; the other day the official VulkanAPI twitter feed posted a picture saying Vulkan was 'forged by industry', to which I replied along the lines of "forged by industry and AWOL. Less marketing BS, more specs/libs/drivers." because, for all the pretty slides, right now all we have to look at is marketing and one alpha Android driver.

(By contrast I've had a look over some D3D12 docs/examples, got a handle on the whole 'root signature' stuff and I'm making plans to write some D3D12 code on my Win10 system probably this weekend...)

#5239746 54x50 tiles map searching for neighbours - takes extremely long

Posted by phantom on 11 July 2015 - 09:55 AM

looks like this is your problem right here. you're storing a 2D array as a 1D array, then trying to do adjacency tests.  doable, but god awful ugly, slow, and complex compared to a 2D array.
store in a 2d array such as my2Darray[x][y].  adjacent squares to the square at x,y are then: x-1,y;   x+1,y;   x,y-1;   x,y+1;  x-1,y-1;   etc.

Errm... wut?

A 1D array is perfectly suited to 2D data; the lookup is a simple case of id = y * width + x, which is practically the same lookup as you are doing for a 2D array.

Depending on array size, allocation strategy and walking method, the 2D array could end up performing worse if it is implemented, under the hood, as T**, as you then have a double indirection to whatever bit of memory you are after.

TLDR; using a 1D array to store 2D data is perfectly sane and easy to work with.
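The flat-storage scheme can be sketched in a few lines; a minimal Python illustration (the map dimensions come from the thread title, the helper names are made up):

```python
WIDTH, HEIGHT = 54, 50

def index(x, y):
    # The same lookup a 2D array does under the hood:
    # row offset plus column.
    return y * WIDTH + x

def neighbours(x, y):
    # Indices of all 8 adjacent tiles, clipped at the map edges.
    result = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dx == 0 and dy == 0:
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < WIDTH and 0 <= ny < HEIGHT:
                result.append(index(nx, ny))
    return result

tiles = [0] * (WIDTH * HEIGHT)   # one flat allocation for the 54x50 map
tiles[index(3, 2)] = 42
print(tiles[3 + 2 * WIDTH])      # 42: same element, indexed by hand
print(len(neighbours(0, 0)))     # 3: a corner tile has three neighbours
```

One contiguous allocation, trivial adjacency, and no per-row pointer chasing, which is exactly the point being made above.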

#5239729 Help in creating a game engine.

Posted by phantom on 11 July 2015 - 07:13 AM

You need to concretely define your requirements.


The term 'game engine' is simply too broad; a few years back, as part of my degree, I wrote in a week or two a system which interfaced Lua with Java (a course requirement on the latter) and could play sounds, draw sprites, do basic collision and run a 'game loop'.

That is technically an 'engine' - a simple, 2D engine, but with a change of graphics and a change of Lua logic it could drive another game type.

#5239573 Why Do People Use DirectX?

Posted by phantom on 10 July 2015 - 01:12 PM

Vulkan on the other hand promises to addresses all those issues (dealing with legacy/slow paths, inconsistent API between desktop and mobile).

Indeed; however, as optimistic as I've tried to be about Vulkan, I can't help but note that DX12 has been around for some time for people to try and has its release in 19 days when Win10 drops - Vulkan doesn't even have a public spec, the last presentation still has 'TBD' on a few things, and it was still in flux as of a few weeks ago...

#5239572 Why Do People Use DirectX?

Posted by phantom on 10 July 2015 - 01:09 PM

Well, OpenGL is available for all platforms, including web browsers and there are still OpenGL games out there that run as well as ones on DirectX.

Except it isn't.

OpenGL is only on Windows, Linux and OSX. But OSX only supports up to 4.something and, last I checked, was lacking compute shaders... it is also going to Metal, so future support is in doubt. DX on Windows has better tools and drivers, so is a major win...
... also DX is used on the Xbox line of consoles, so for any big developer using it is a no-brainer. OSX and Linux continue to be non-events in the PC gaming world.

WebGL is not OpenGL. It is also a mess. In my professional opinion the whole platform is a write-off but, that aside, WebGL continues to be a mess of platforms which don't do the same thing and lack tools.

OpenGL|ES on mobile phones is not OpenGL. It is also a mess. iOS is going Metal anyway so that removes one platform and Android is a clusterfuck of broken drivers and extension hell far beyond whatever was seen on Windows.

Beyond that no console uses OpenGL either, despite what people often like to claim - as mentioned the Xbox uses DX, the PlayStation uses its own low level library, as does the WiiU and other devices.

So 'OpenGL' currently only exists on three platforms, of which one has a better solution in DX11.
You'll not be reusing any of that code on any other target.

The 'OpenGL is on everything!' line is a nice myth, but it is just that... a myth... it has no real basis in fact.

#5235905 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 20 June 2015 - 01:39 PM

Will be means the future. Microsoft constantly obsoletes its own APIs as well, so really it is a poor decision for anyone who is not part of a big company to bother with using it.
You will be able to use vulkan on windows systems, so really there will be no point to DirectX any more.

Everyone obsoletes APIs; however, D3D sticks around - DX9 was here for a long time, DX11 was announced 7 years ago, and DX12 I would expect to have a pretty long lifetime as well, as the primary Windows graphics interface.

Yes, you will be able to use Vulkan, just no one knows when.
D3D12 will be here in just over a month.
Vulkan is still MIA when it comes to anything public and was last reported as 'in flux'.

So, you can start developing with an API which works (D3D11), an API which has docs, beta drivers, good performance reports (D3D12) or you can wait around for Vulkan which is being developed by the same group who mismanaged OpenGL for over a decade.

Personally I'd go with D3D because, say what you like about MS, they get shit done, and with the last 3 graphics APIs they have done well (D3D10 was a good, if doomed, API) and by all accounts D3D12 is another sound result.
Khronos and the ARB (who went on to form the Khronos group for graphics), on the other hand, have historically not managed things very well and are showing worrying signs of lapsing back into the old habits which plagued OpenGL.

#5235903 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 20 June 2015 - 01:29 PM

Slight thread clean-up; let's keep it on topic, not call other people names or drag other unrelated topics into this, thanks.

I might start handing out warnings to people otherwise...

#5235631 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 19 June 2015 - 02:16 AM

You can use it with GL too; it's only a container format and the data payload is the same whatever graphics API you are using.

Yes true, the down side is that outside of windows you'll need a third party library to load dds files or its a case of roll your own. It's a feature rich format which I wouldn't fancy creating a parser for myself. Best to stick to windows for that IMHO and leverage that particular advantage...

Yeah, but if you are using OpenGL then chances are you shouldn't be using the D3DX libs anyway, even on Windows, if only for consistency's sake - and, as others have pointed out, there are a few libs out there which already read .dds structured files (also the header isn't that hard to parse for the common stuff).

Ultimately, however, I'd tend towards a custom header in front of the payload with just the details you need in it, which can include a custom 'type' ID, so once you have parsed the DDS header during the data pipeline you can translate the various blobs into a simpler, game-ready format. However, that is overkill when starting out, thus why I didn't originally suggest it.

#5235568 Is OpenGL enough or should I also support DirectX?

Posted by phantom on 18 June 2015 - 04:06 PM

Not just that, but in DirectX there is the dds format which can hold texture arrays, texture cubes, mipmaps, and all sorts of other stuff in one convenient container format. For directx definitely investigate it...

You can use it with GL too; it's only a container format and the data payload is the same whatever graphics API you are using.

Really, .dds should be your first stop when loading game-ready textures; it can handle everything your graphics card can, after all.
PNG and TGA are source images and shouldn't be anywhere near your final game.
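And the header really isn't hard to parse: a DDS file starts with a 4-byte 'DDS ' magic followed by a 124-byte header whose first fields are size, flags, height, width, pitch/linear size, depth and mip count, all little-endian uint32s. A minimal Python sketch of pulling the basics out (a real loader also has to interpret the pixel-format block, the caps flags and the optional DX10 extension header; the function name here is made up):

```python
import struct

DDS_MAGIC = 0x20534444  # the bytes 'DDS ' read as a little-endian uint32

def parse_dds_header(data):
    """Extract basic facts from the fixed 128-byte DDS preamble."""
    # magic, then the first 7 DWORDs of DDS_HEADER:
    # dwSize, dwFlags, dwHeight, dwWidth, dwPitchOrLinearSize,
    # dwDepth, dwMipMapCount.
    magic, size, flags, height, width, pitch, depth, mips = \
        struct.unpack_from("<8I", data, 0)
    if magic != DDS_MAGIC or size != 124:
        raise ValueError("not a DDS file")
    return {"width": width, "height": height, "mip_count": mips}

# Build a synthetic header to demonstrate; a real one would come from disk.
header = struct.pack("<8I", DDS_MAGIC, 124, 0, 256, 512, 0, 0, 10)
header += b"\x00" * (128 - len(header))
print(parse_dds_header(header))  # {'width': 512, 'height': 256, 'mip_count': 10}
```

That covers the common 2D-texture case; the payload after the header is then already in a GPU-ready format, which is the whole appeal of the container.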