phantom - Member since 15 Dec 2001
Posted by phantom on 17 August 2015 - 10:26 AM
You just end up wasting overdraw on pixels which will never be seen.
Posted by phantom on 17 August 2015 - 03:12 AM
1a) There are also hardware assumptions being made. No one has said what hardware Vulkan will cover on the PC; I suspect it will be much the same as DX12. So you'll probably need to keep your existing DX11/OpenGL renderer around to cover older hardware.
2) You assume Win10 doesn't have a decent uptake rate. Before release it was sitting at 2.3% of the Windows market, 0.6% behind 32bit XP, from data provided by the Steam Hardware Survey. As I said this was before release, so only testers.
In short: DX12 and Vulkan will likely fill the same gap hardware-wise, so you'll still need the old renderer, and that tends to be DX11.
The other factor forgotten here is that DX12 also covers the Xbox One so the API hits two targets.
Vulkan will get you Win and Linux, but the former will likely be covered by DX11/12 due to 12 hitting first, so it'll come down to how much Linux support people want to throw out.
By not coming out first, Vulkan has lost mindshare, interest and, arguably, technical relevance. Much like later GL versions, which after years of lagging behind finally offered functions DX11 didn't have, any extra functionality is unlikely to be adopted once DX12-focused systems have bedded in.
Of course there might well not be any real difference in ability, everyone is targeting the same hardware after all and the APIs reflect that hardware pretty well, so aside from platform specific extensions/features I'm not sure what Vulkan could offer that DX12 wouldn't already cover.
All of which, of course, is speculation; months on from GDC we simply don't know anything beyond the API looking very much like DX12's.
Posted by phantom on 16 August 2015 - 09:29 AM
2) Yes and no.
Generally it's accepted to mean the first bit: draw calls are generated across multiple threads and queued as work by a single thread (or task) to ensure correct ordering.
That said if you could keep your dependencies in order then there is nothing stopping you queuing work from multiple threads, although I'd have to check the thread safety of the various command queues to see what locks/protection you might need.
However, your 'render to various textures' point brings up a second part: the GPU is itself highly threaded, so even if you have one thread pushing execute commands, the GPU can have multiple commands in flight at once (dependencies allowing). So regardless of what method you use to queue work to the device, it can be doing multiple things at the same time.
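A minimal sketch of that 'record on many threads, submit from one' pattern, with plain vectors of strings standing in for recorded command lists - no real graphics API here, the names are purely illustrative:

```cpp
#include <string>
#include <thread>
#include <vector>

// A stand-in for a recorded command list.
using CommandList = std::vector<std::string>;

// Each worker thread records its own command list in parallel.
// Ordering only matters at submission time, which a single thread
// handles afterwards by walking the lists in a fixed order.
inline std::vector<CommandList> record_in_parallel(int workers) {
    std::vector<CommandList> lists(workers);
    std::vector<std::thread> threads;
    for (int i = 0; i < workers; ++i) {
        threads.emplace_back([&lists, i] {
            lists[i].push_back("draw batch " + std::to_string(i));
        });
    }
    for (auto& t : threads) t.join();
    return lists; // one thread now submits these, in order
}
```

The point is that recording is embarrassingly parallel; only the final submission loop needs to care about dependency order.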
Posted by phantom on 11 August 2015 - 12:09 PM
2. 'From the GPU side' simply means doing it in the GPU's time frame. So calling 'signal' will insert a command into the command stream to have the GPU execute which sets the signal to a value. This would happen inside the command processor of the GPU.
3. The command queue wait stalls the GPU's command processor until the fence is signalled. The Win32 API function stalls the CPU until the fence is signalled.
From a practical standpoint:
- CommandQueue::Wait() causes the GPU's command processor to wait for the fence to be signalled. Let's say you have a command list which is running a compute shader, and a graphics command list which is going to do some graphics commands depending on the output of that compute work. You can submit both lists to separate queues and have the graphics queue wait on the fence from the compute queue before executing the graphics commands. Without this, the two workloads could execute at the same time if the GPU in question has separate graphics and compute queue hardware (Maxwell 2 and GCN are both examples of this).
A second example would be doing a texture upload via a copy queue; you'd want to make sure the copy was complete before allowing any work which depended on it to reference it, so again you'd put a 'signal' in the copy queue and 'wait' on it in the graphics queue.
- Win32 wait would be used when you want to cause a CPU thread to sleep until the GPU has done some work. A simple example of this is waiting for all of a scene to be drawn before submitting the next batch of work.
A good example of all of this is the ExecuteIndirect example in the DirectX Graphics GitHub examples; https://github.com/Microsoft/DirectX-Graphics-Samples
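The fence behaviour described above - a counter that releases waiters once it reaches a target value - can be sketched on the CPU with standard threading primitives. This is only an analogy for how an ID3D12Fence behaves, not real D3D12 code:

```cpp
#include <condition_variable>
#include <cstdint>
#include <mutex>

// CPU-side analogy for a D3D12-style fence: Signal() publishes the
// completed value, Wait() blocks until it reaches a target.
class Fence {
    std::mutex m;
    std::condition_variable cv;
    uint64_t completed = 0;
public:
    void Signal(uint64_t value) {           // what the GPU does when the
        std::lock_guard<std::mutex> lk(m);  // queued signal command runs
        completed = value;
        cv.notify_all();
    }
    void Wait(uint64_t value) {             // what CommandQueue::Wait or the
        std::unique_lock<std::mutex> lk(m); // Win32 event wait boils down to
        cv.wait(lk, [&] { return completed >= value; });
    }
    uint64_t GetCompletedValue() {
        std::lock_guard<std::mutex> lk(m);
        return completed;
    }
};
```

In the compute-then-graphics example, the compute queue would Signal() the fence after its work and the graphics queue would Wait() on it before executing the dependent commands; the Win32-event path performs the same wait from a CPU thread instead.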
Posted by phantom on 10 August 2015 - 07:37 AM
Generally there is likely to be a way of doing it which will perform better.
Posted by phantom on 10 August 2015 - 07:06 AM
Turn on back face culling on the API and let the hardware do its thing.
Posted by phantom on 10 August 2015 - 02:43 AM
AMD GCN hardware has 1 gfx queue, at least 2 compute pipes (on GCN 1.0; the 290X I have at home has 8) and, iirc, 2 DMA engines. The 'compute pipes' are referred to as Async Compute Engines, or ACEs, and each can handle multiple command queues and keep more than one job in flight.
NV is a bit more complex; before Maxwell 2 you basically couldn't have a gfx pipe and a compute pipe active at once. Maxwell 2 removes this restriction, giving you 1 gfx pipe and 31 compute pipes. However, NV aren't forthcoming with details, so it is unknown how those pipes match up to queues.
Intel don't have any speciality hardware here and, due to how the GPU is designed, show little improvement with D3D12 in the first place. Still worth using, however, as it will still reduce CPU overhead.
(Queue = memory address we are reading commands from, pipe = hardware consuming said queue.)
And yes, multiple queues allow work to be dealt with independently in an optimal fashion.
For example, if you had copy, gfx and compute work which were all independent, you could put it all into the graphics queue, BUT it would take time for the graphics command processor to chew through all that and distribute it. You also have the serial nature of pushing each command into a single queue to execute it.
By contrast, by using a separate queue for each piece of work, the GPU can dispatch them at the same time, as each queue will be directed at the correct bit of hardware. Front-end pressure on the graphics command processor drops by two thirds, as instead of dealing with 3 commands it is now dealing with 1, and the hardware can be utilised more fully, faster. (You can also set up and dispatch each piece of work independently on the CPU side, so a win there too.)
This is, of course, a simple example, but when you start throwing loads of copies and more gfx and compute work into the mix you can see the win.
How you split things up is, of course, up to you and finding the right balance is key.
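The front-end pressure point can be sketched with a toy router: mixed work pushed through one queue all lands on the single graphics command processor, while per-engine queues spread it out. The engine names here are just labels for illustration:

```cpp
#include <map>
#include <string>
#include <vector>

// Route commands either through one shared queue or per-engine queues,
// counting how many each "command processor" has to consume.
inline std::map<std::string, int> front_end_load(
    const std::vector<std::string>& work, bool per_engine_queues) {
    std::map<std::string, int> consumed;
    for (const auto& kind : work) {
        // With one queue the graphics command processor sees everything;
        // with per-engine queues each engine only sees its own kind.
        consumed[per_engine_queues ? kind : "gfx"]++;
    }
    return consumed;
}
```

With one copy, one gfx and one compute command, the single-queue path leaves the gfx processor consuming all 3, while the per-engine path leaves each processor consuming 1 - the two-thirds drop described above.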
Posted by phantom on 09 August 2015 - 03:21 AM
MultiDraw, and its ilk, are pretty much there as an optimisation and would allow the driver to hit some faster paths because it knows, between draw calls, that you aren't changing state.
Useful in a client-server setup.
Useful in a high overhead setup.
Less useful in a thin API.
In fact, with the ExecuteIndirect functionality of D3D12 (which gives you the ability to change some root signature constants, and indeed vertex and index buffer locations, based on a buffer input) you can do more things, with better functionality. (One thread writes a command list with N indirect calls, one thread (or the GPU!) writes a buffer with draw call information, a bit of fence sync magic, and bam! loads of draws!)
Posted by phantom on 06 August 2015 - 06:33 AM
Quickly looking over a GDC presentation (clicky) it seems like you create an instance of a Vulkan interface and go from there.
The back buffer looks to be a special texture which can be drawn to the screen; if you are familiar with DX's DXGI stuff it seems to be a bit like that.
As a side note, having recently started getting up to speed with DX12 and now having looked back at that presentation I would say if you want to start getting a handle on this stuff then go and look at/play with DX12.
While the API is a little different the overall shape is the same; buffers, pipelines, barriers.. it all looks very much alike in application, so even if you don't plan to stick with DX12 for the long haul its still a good starting point right now.
Posted by phantom on 05 August 2015 - 09:10 AM
Apple might, but they're already doing Metal on iOS/OSX now, and they've always been slow on the OpenGL front, so I wouldn't hold my breath.
They only said that it's coming this year. I'm also curious about who will support it. PS4, Apple etc. or will they insist on their proprietary solutions?
Sony already have the best API of the current offerings... They could make a Vulkan wrapper around it, to make porting easier for PC developers... But in that situation, Vulkan would be a wasteful high level API that's blocking you from low-level efficiencies
I doubt many developers would prefer that.
Yeah, I honestly don't see buy in from anyone who already has a low overhead API happening.
Vulkan is likely to cover Windows, Linux (and related) and, as pointed out above, Android - although, like OpenGL/OpenGL|ES before it, I wouldn't expect 'one code path to rule them all'.
Future landscape is likely to be:
- MS platforms: DX12/DX11 (legacy)
I'm intrigued, in a vague way, to see what happens with WebGL. It remains behind everyone else in the GL ecosystem on features (iirc WebGL2 is still very much WIP) but exists in a world where Vulkan-like low-level APIs don't really fit.
Although it's still very much a matter of 'when it happens...'; the other day the official VulkanAPI twitter feed posted a picture saying Vulkan was 'forged by industry', to which I replied along the lines of "forged by industry and AWOL. Less marketing BS, more specs/libs/drivers." because, for all the pretty slides, right now all we have to look at is marketing and one alpha Android driver.
(By contrast I've had a look over some D3D12 docs/examples, got a handle on the whole 'root signature' stuff and I'm making plans to write some D3D12 code on my Win10 system probably this weekend...)
Posted by phantom on 11 July 2015 - 09:55 AM
looks like this is your problem right here. you're storing a 2D array as a 1D array, then trying to do adjacency tests. doable, but god awful ugly, slow, and complex compared to a 2D array.
store in a 2d array such as my2Darray[x][y]. adjacent squares to the square at x,y are then: x-1,y; x+1,y; x,y-1; x,y+1; x-1,y-1; etc.
A 1D array is perfectly suited for 2D data; the lookup is a simple case of id = y * width + x; which is practically the same lookup as you are doing for a 2D array.
Depending on array size, allocation strategy and walking method, the 2D array could end up performing worse if it is implemented, under the hood, as T**, as you then have a double indirection to whatever bit of memory holds each row.
TLDR; using a 1D array to store 2D data is perfectly sane and easy to work with.
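A minimal sketch of the flat-index approach, including the adjacency lookups from the quoted post; Grid and neighbours4 are made-up names for illustration:

```cpp
#include <vector>

// Flat storage for a 2D grid: element (x, y) lives at y * width + x.
struct Grid {
    int width, height;
    std::vector<int> cells;

    Grid(int w, int h) : width(w), height(h), cells(w * h, 0) {}

    int& at(int x, int y) { return cells[y * width + x]; }

    // Values of the 4-connected neighbours of (x, y), skipping
    // anything that falls off the edge of the grid.
    std::vector<int> neighbours4(int x, int y) {
        std::vector<int> out;
        const int dx[] = {-1, 1, 0, 0};
        const int dy[] = {0, 0, -1, 1};
        for (int i = 0; i < 4; ++i) {
            int nx = x + dx[i], ny = y + dy[i];
            if (nx >= 0 && nx < width && ny >= 0 && ny < height)
                out.push_back(at(nx, ny));
        }
        return out;
    }
};
```

The bounds check doubles as the edge handling you'd otherwise need special cases for in a hand-rolled 2D version.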
Posted by phantom on 11 July 2015 - 07:13 AM
You need to concretely define your requirements.
The term 'game engine' is simply too broad; a few years back, as part of my degree, I wrote in a week or two a system which interfaced Lua with Java (the latter being a course requirement) and which could play sounds, draw sprites, do basic collision and run a 'game loop'.
That is technically an 'engine' - a simple, 2D engine, but with a change of graphics and a change of Lua logic it could drive another game type.
Posted by phantom on 10 July 2015 - 01:12 PM
Vulkan on the other hand promises to address all those issues (dealing with legacy/slow paths, inconsistent API between desktop and mobile).
Indeed; however, as optimistic as I've tried to be about Vulkan, I can't help but note that DX12 has been around for some time for people to try and has its release in 19 days when Win10 drops - Vulkan doesn't even have a public spec, the last presentation still has 'TBD' on a few things, and it was still in flux as of a few weeks ago...
Posted by phantom on 10 July 2015 - 01:09 PM
Well, OpenGL is available for all platforms, including web browsers and there are still OpenGL games out there that run as well as ones on DirectX.
Except it isn't.
OpenGL is only on Windows, Linux and OSX. But OSX only supports up to 4.something and, last I checked, was lacking compute shaders... also it is going to Metal, so future support is in doubt. DX on Windows has better tools and drivers, so it is a major win...
... also, DX is used on the Xbox line of consoles, so for any big developer using it is a no-brainer. OSX and Linux continue to be non-events in the PC gaming world.
WebGL is not OpenGL. It is also a mess. In my professional opinion the whole platform is a write-off, but that aside, WebGL continues to be a mess of platforms which don't do the same thing and lack tools.
OpenGL|ES on mobile phones is not OpenGL. It is also a mess. iOS is going Metal anyway so that removes one platform and Android is a clusterfuck of broken drivers and extension hell far beyond whatever was seen on Windows.
Beyond that no console uses OpenGL either, despite what people often like to claim - as mentioned the Xbox uses DX, the PlayStation uses its own low level library, as does the WiiU and other devices.
So 'OpenGL' only exists currently on three platforms, of which one has a better solution in DX11.
You'll not be reusing any of that code on any other target.
The 'OpenGL is on everything!' is a nice myth but it is just that... a myth... it has no real basis in fact.
Posted by phantom on 20 June 2015 - 01:39 PM
Will be means the future. Microsoft constantly obsoletes its own APIs as well, so really it is a poor decision for anyone who is not part of a big company to bother with using it.
You will be able to use vulkan on windows systems, so really there will be no point to DirectX any more.
Everyone obsoletes APIs; however, D3D sticks around - DX9 was here for a long time, DX11 was announced 7 years ago, and DX12 I would expect to have a pretty long lifetime as well, as the primary Windows graphics interface.
Yes, you will be able to use Vulkan, just no one knows when.
D3D12 will be here in just over a month.
Vulkan is still MIA when it comes to anything public and was last reported as 'in flux'.
So, you can start developing with an API which works (D3D11), an API which has docs, beta drivers, good performance reports (D3D12) or you can wait around for Vulkan which is being developed by the same group who mismanaged OpenGL for over a decade.
Personally I'd go with D3D, because, say what you like about MS, they get shit done, and with the last 3 graphics APIs they have done well (D3D10 was a good, if doomed, API) and by all accounts D3D12 is another sound result.
Khronos and the ARB (who went on to form the Khronos group for graphics), on the other hand, have historically not managed things very well and are showing worrying signs of lapsing back into the old habits which plagued OpenGL.