Vulkan is Next-Gen OpenGL


#481 away2903   Members   


Posted 14 April 2016 - 08:48 PM

Android N Developer Preview 2 includes Vulkan support now.

http://android-developers.blogspot.jp/2016/04/android-n-developer-preview-2-out-today.html



#482 FGFS   Members   


Posted 18 July 2016 - 04:23 AM

http://www.eurogamer.net/articles/digitalfoundry-2016-doom-vulkan-patch-shows-game-changing-performance-gains

 

I'll try the Doom demo with Vulkan now. I'm mostly on Ubuntu, but I'll check it out.

 



#483 Ed Welch   Members   


Posted 19 July 2016 - 08:24 AM

Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.



#484 samoth   Members   


Posted 19 July 2016 - 10:06 AM

Ed Welch said:
> Async compute is looking to be a killer feature for DX12 / Vulkan titles. The benchmarks indicate that Pascal's implementation of async compute isn't half as good as AMD's.

Ah, one should be careful with such a blanket statement (unless you have some more detailed information?). Benchmarks are rarely unbiased and never fair, and interpreting the results can be tricky. I'm not trying to defend nVidia here, and you might quite possibly even be right, but based solely on the linked results, this is a bit of a hasty conclusion.

From what one can see in that video, one possible interpretation is "Yay, AMD is so awesome, nVidia sucks". However, another possible interpretation might be "AMD sucks less with Vulkan than with OpenGL". Or, worded differently: AMD's OpenGL implementation is poor; with Vulkan, they're up to par.

Consider the numbers. The R9 Fury is meant to attack the GTX 980 (which, by the way, is Maxwell, not Pascal). With Vulkan, it has more or less the same FPS, give or take one frame. It's still way slower than the Pascal GPUs, but comparing against those wouldn't be fair since they're much bigger beasts, so let's skip that. With OpenGL, however, it is some 30+ percent slower than the competing Maxwell card.

All tested GPUs gain from using Vulkan, but for the nVidia ones it's in the 6-8% range while for the AMDs it's in the 30-40% range. I think there are really two ways of interpreting that result.

#485 Ed Welch   Members   


Posted 19 July 2016 - 10:56 AM

 

samoth said:
> Ah, one should be careful with such a blanket statement... [full quote of #484, including the original claim, snipped]

Not really, because in all games that have async compute, AMD gains much more than Pascal, including DirectX titles. Check this out if you don't believe me:

http://wccftech.com/nvidia-geforce-gtx-1080-dx12-benchmarks/



#486 Hodgman   Moderators   


Posted 19 July 2016 - 06:41 PM

Does Doom have a configuration option to disable async compute? That would let you measure the baseline GL->Vulkan gain, and then the async gain separately.

We all knew from the existing data, though, that GCN was built for async and Nvidia is only now being pressured into it by their marketing department.

#487 Mona2000   Members   


Posted 21 July 2016 - 10:23 AM

 

 

samoth said:
> Ah, one should be careful with such a blanket statement... [full quote of #484 snipped]

Ed Welch said:
> Not really, because in all games that have async compute, AMD gains much more than Pascal, including DirectX titles. Check this out if you don't believe me:
> http://wccftech.com/nvidia-geforce-gtx-1080-dx12-benchmarks/

 

 

That benchmark shows that samoth is correct. Async Compute seems to be improving performance by 2-5% for AMD, which is far from the massive improvement in Doom. The "AMD is awful at OpenGL" theory seems even more likely now.



#488 mhagain   Members   


Posted 21 July 2016 - 10:54 AM

There's an instructive comparison here: https://www.youtube.com/watch?v=ZCHmV3c7H1Q

 

At about the 4:35 mark (link) it compares CPU frame times between OpenGL and Vulkan on both AMD and NVIDIA and across multiple generations of hardware.  The take-home I got from that was that yes, AMD does have significantly higher gains than NVIDIA, but those gains just serve to make both vendors more level.  In other words, AMD's OpenGL CPU frame time was significantly worse than NVIDIA's to begin with.

 

So for example, the R9 Fury X went from 16.2 ms to 10.3 ms, whereas the GTX 1080 went from 10.7 ms to 10.0 ms, all of which supports the "AMD's OpenGL implementation is poor; with Vulkan they're up to par" reading.
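
In percentage terms, reading off the same numbers: the Fury X's CPU frame time dropped by (16.2 - 10.3) / 16.2 ≈ 36%, while the GTX 1080's dropped by (10.7 - 10.0) / 10.7 ≈ 6.5%, right in line with the 30-40% versus 6-8% split noted earlier in the thread.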


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#489 Servant of the Lord   Members   


Posted 21 July 2016 - 11:06 AM

It'd be downright embarrassing if AMD wasn't up to par with Vulkan, since they pretty much designed it (or at least its earlier incarnation, Mantle).


It's perfectly fine to abbreviate my username to 'Servant' or 'SotL' rather than copy+pasting it all the time.
All glory be to the Man at the right hand... On David's throne the King will reign, and the Government will rest upon His shoulders. All the earth will see the salvation of God.
Of Stranger Flames - [indie turn-based rpg set in a para-historical French colony] | Indie RPG development journal | [Fly with me on Twitter]

#490 phantom   Members   


Posted 21 July 2016 - 12:03 PM

The simple truth is that for quite some time now NV have had better drivers than AMD, both in DX11 and OpenGL, when it comes to performance.
They did a lot of work and took advantage of the fact that you could push work to other threads, using that time to optimise the crap out of things.

Vulkan and DX12, however, have different architectures which don't allow NV to burn CPU time to get their performance up; they also show that when you get the driver out of the way, in the manner these new APIs allow, AMD's hardware can suddenly stretch its legs a bit and get on par with NV.

That is where the majority of the gains come from.

As for async: firstly, on NV, forget it pre-Pascal. If they were going to enable it to work sanely on those cards they would have done so by now; I suspect the front end simply doesn't play nice.

After that, it seems like a gain, BUT it depends on what work is going on, and how much of a win it is will vary per GPU.

I suspect the reason you see slightly bigger gains on AMD is down to the front-end design: the 'gfx' command queue processor can only keep so many work units in flight; let's pretend that number is 10. So once 10 work units have been dispatched into the ALU array, it'll stop processing until there is spare space - I think it can only retire in order, so even if WU1 finishes before WU0 it still won't have any spare dispatch space. However, AMD also have their ACEs, which can keep more work units in flight; even if you only have access to one of those, that's 2 more WUs in flight in the array (iirc), so you can do more work if you have the space.

NV, on the other hand, seem to have a more unified front end (they are piss poor at telling people things, so there's a bit of guesswork involved), which I suspect means they can already keep more work in flight, so the same amount of spare time might not be there to take advantage of.

This is all guesswork, however; the main takeaway is that async can be a win, but it very much depends on the work going on within the GPU and on the GPU it is being run on.
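
For concreteness, here is roughly what "async compute" looks like from the application side in Vulkan: you look for a compute-only queue family and submit compute work to it alongside the graphics queue. A minimal sketch, not taken from any particular engine (real setup still needs the VkDeviceQueueCreateInfo plumbing on top):

#include <vulkan/vulkan.h>
#include <cstdint>
#include <vector>

// Returns the index of a compute-only queue family (compute bit set, graphics
// bit clear), or UINT32_MAX if the device doesn't expose one. Submitting
// compute work to such a queue, in parallel with the graphics queue, is what
// "async compute" means from the application's point of view.
uint32_t FindAsyncComputeFamily(VkPhysicalDevice gpu)
{
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());

    for (uint32_t i = 0; i < count; ++i)
    {
        const VkQueueFlags flags = families[i].queueFlags;
        if ((flags & VK_QUEUE_COMPUTE_BIT) && !(flags & VK_QUEUE_GRAPHICS_BIT))
            return i;  // dedicated compute family, e.g. mapped to AMD's ACEs
    }
    return UINT32_MAX;  // no dedicated family; compute shares the gfx front end
}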

The big wins, however, are the API changes, which get the driver out of the way, let you use less CPU time setting things up per thread, AND let you set up across threads.

#491 Hodgman   Moderators   


Posted 21 July 2016 - 02:11 PM

Yeah it's both.

AMD's drivers have traditionally been slower (on the CPU) than NV's. Especially in GL, where the driver is an inconceivable amount of code (for comparison, NV's GL driver likely dwarfs Unreal Engine's code base).

A lot of driver work is now engine work, letting AMD catch up on CPU performance by handing half of their responsibilities to smart engine devs, who can now use design instead of heuristics :)

Resource barriers also give engine devs some opportunity to micro-optimize a bit of GPU time, which was previously the job of magic driver heuristics -- and NV's heuristic magic was likely smarter than AMD's.
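
For instance, this is the sort of thing that now sits in engine code rather than behind driver heuristics: a barrier transitioning a render target so a later pass can sample it. A minimal sketch, with illustrative stage/access flags, a command buffer cmd, and a hypothetical renderTarget handle; the right flags depend entirely on the actual workload:

// Transition: finished rendering to an image, about to sample it in a shader.
VkImageMemoryBarrier barrier = {};
barrier.sType               = VK_STRUCTURE_TYPE_IMAGE_MEMORY_BARRIER;
barrier.srcAccessMask       = VK_ACCESS_COLOR_ATTACHMENT_WRITE_BIT;
barrier.dstAccessMask       = VK_ACCESS_SHADER_READ_BIT;
barrier.oldLayout           = VK_IMAGE_LAYOUT_COLOR_ATTACHMENT_OPTIMAL;
barrier.newLayout           = VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL;
barrier.srcQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.dstQueueFamilyIndex = VK_QUEUE_FAMILY_IGNORED;
barrier.image               = renderTarget;  // hypothetical VkImage handle
barrier.subresourceRange    = { VK_IMAGE_ASPECT_COLOR_BIT, 0, 1, 0, 1 };

vkCmdPipelineBarrier(cmd,
    VK_PIPELINE_STAGE_COLOR_ATTACHMENT_OUTPUT_BIT,  // producer stage
    VK_PIPELINE_STAGE_FRAGMENT_SHADER_BIT,          // consumer stage
    0, 0, nullptr, 0, nullptr, 1, &barrier);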

AMD were the original Vulkan architects (and probably had disproportionate input into DX12 as well - one of the benefits of winning the console war), so both APIs fit their HW architecture perfectly (a closer API fit than NV's).

AMD actually can do async compute right (again: perfect API/HW fit), allowing modest gains in certain situations (5-30%), which could mean as much as 5 ms of extra GPU time per frame :o

#492 mhagain   Members   


Posted 22 July 2016 - 01:13 PM

Axel Gneiting from id Software is porting Quake to Vulkan: https://twitter.com/axelgneiting/status/755988244408381443

Code: https://github.com/Novum/vkQuake

 

This is cool; it's going to be a really nice working example and we'll be able to see exactly how each block of modern Vulkan code relates to the original cruddy old legacy OpenGL 1.1 code.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#493 samoth   Members   


Posted 23 July 2016 - 04:51 AM

phantom said:
> As for async: firstly, on NV, forget it pre-Pascal. If they were going to enable it to work sanely on those cards they would have done so by now.

It's not so much limited to async compute; it seems to be about compute and transfers in combination altogether.

I can confirm that from my immediate experience using Blender (which on NV uses CUDA) on a desktop computer with a Maxwell GPU, compared to my notebook, which only has the Skylake CPU's integrated graphics chip.

Yeah, Blender on a notebook, with the 3D viewport set to "render" - what a fucked up idea, this cannot possibly work. Guess what: it works way better than on the desktop computer with the dedicated GPU.

Looking at the performance counters shown by Process Explorer, it turns out the Maxwell has two shader units busy on average (two!), whereas the cheap integrated Intel GPU has them all 95-100% busy. So I guess that if the GPU does not spend its time doing compute, the time must go into doing DMA and switching engines between "transfer" and "compute". Otherwise I couldn't explain why the GPU usage is so darn low.

Now, since this is not an entirely unknown issue, I would seriously expect Pascal to schedule/pipeline that kind of mode switching much better, or even do it seamlessly in parallel (I think I read something about them adding an additional DMA controller at one point too, though I believe that was already for Maxwell -- which would seem to mean that on yet older generations it's still worse?).

#494 Nightwing52   Members   


Posted 18 October 2016 - 08:19 PM

Although it's true that Vulkan performs much faster than OpenGL, I don't think OpenGL is going anywhere anytime soon. Engines such as Unreal and Unity will use Vulkan under the hood, but nobody in their right mind will code in Vulkan directly (have you seen this "simple" Hello Triangle?). Still, I think it'll be interesting to see what happens. Who knows, maybe we'll all end up coding in Vulkan: computers will continue to get faster, but so will our ambitions, so Vulkan seems like the future of graphics programming. OpenGL, though, is here to stay due to its "simplicity" (at least compared to Vulkan). :D



#495 Hodgman   Moderators   


Posted 18 October 2016 - 08:48 PM

Modern OpenGL is also ridiculously complex and requires pages of code to render a triangle using the recommended "fast path". No one should be programming in GL either, except a small number of people within engine development teams :D
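
For a taste, even just the buffer-setup fragment of that "fast path" in GL 4.5 (direct state access plus persistent mapping) looks like the sketch below; the size value is a placeholder, and shaders, vertex layout, and draw submission are all still to come:

// Assumes a GL 4.5 context and function loader are already in place.
GLuint buf = 0;
glCreateBuffers(1, &buf);  // DSA: no bind-to-edit

const GLbitfield mapFlags = GL_MAP_WRITE_BIT | GL_MAP_PERSISTENT_BIT |
                            GL_MAP_COHERENT_BIT;
const GLsizeiptr size = 64 * 1024;  // placeholder size

// Immutable storage, mapped once up front and written into every frame without
// re-mapping -- this is the part that replaces glBufferData/glMapBuffer.
glNamedBufferStorage(buf, size, nullptr, mapFlags);
void* ptr = glMapNamedBufferRange(buf, 0, size, mapFlags);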



#496 mhagain   Members   


Posted 19 October 2016 - 12:55 AM

Hodgman said:
> Modern OpenGL is also ridiculously complex and requires pages of code to render a triangle using the recommended "fast path". No one should be programming in GL either, except a small number of people within engine development teams :D

 

This.

 

OpenGL's reputation for "simplicity", I suspect, probably stems from John Carmack's comparison with D3D3 dating back to 1996 or so. Nobody in their right mind would write production-quality, performance-critical OpenGL code in that style any more either.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#497 LorenzoGatti   Members   


Posted 19 October 2016 - 02:45 PM

Nightwing52 said:
> nobody in their right mind will code in Vulkan (have you seen this "simple" Hello Triangle?)

This isn't a triangle example; it's a simple rendering engine for triangles with an RGB colour at each vertex, which has an easy-to-find hardcoded triangle in the middle as an example of the example:
// Setup vertices
std::vector<Vertex> vertexBuffer =
{
    { {  1.0f,  1.0f, 0.0f }, { 1.0f, 0.0f, 0.0f } },
    { { -1.0f,  1.0f, 0.0f }, { 0.0f, 1.0f, 0.0f } },
    { {  0.0f, -1.0f, 0.0f }, { 0.0f, 0.0f, 1.0f } }
};
uint32_t vertexBufferSize = static_cast<uint32_t>(vertexBuffer.size()) * sizeof(Vertex);

// Setup indices
std::vector<uint32_t> indexBuffer = { 0, 1, 2 };
In example code, the priority is calling the API properly, not good abstractions (which would be confusing).
 
EDIT: quoting loses links. The "triangle example" is https://github.com/SaschaWillems/Vulkan/blob/master/triangle/triangle.cpp from the well-known Vulkan examples repository by Sascha Willems.

Edited by LorenzoGatti, 20 October 2016 - 12:48 AM.

Omae Wa Mou Shindeiru


#498 samoth   Members   


Posted 20 October 2016 - 04:29 AM

Nightwing52 said:
> nobody in their right mind will code in Vulkan (have you seen this "simple" Hello Triangle?)

LorenzoGatti said:
> This isn't a triangle example; it's a simple rendering engine for triangles... [full quote of #497, including the code, snipped]
I don't think that's what these quotes (both about Vulkan and about modern GL) are meant to say, or how they should be read.

Sure, the draw-triangle code is concise, straightforward, and clear (in both GL4 and Vulkan), but it takes about 100 lines of code just to get a context which can do anything at all set up in GL, and about three times as much in Vulkan. Plus, it takes something like half a page of code to do what you would wish were no more than "CreateTexture()" or the like.
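
To give an idea of that half page, creating a single sampled texture in raw Vulkan goes something like the sketch below. Error handling is omitted, device/width/height are assumed to be in scope, and findDeviceLocalMemoryType() stands in for an app-side helper you have to write yourself:

VkImageCreateInfo imageInfo = {};
imageInfo.sType         = VK_STRUCTURE_TYPE_IMAGE_CREATE_INFO;
imageInfo.imageType     = VK_IMAGE_TYPE_2D;
imageInfo.format        = VK_FORMAT_R8G8B8A8_UNORM;
imageInfo.extent        = { width, height, 1 };
imageInfo.mipLevels     = 1;
imageInfo.arrayLayers   = 1;
imageInfo.samples       = VK_SAMPLE_COUNT_1_BIT;
imageInfo.tiling        = VK_IMAGE_TILING_OPTIMAL;
imageInfo.usage         = VK_IMAGE_USAGE_SAMPLED_BIT | VK_IMAGE_USAGE_TRANSFER_DST_BIT;
imageInfo.initialLayout = VK_IMAGE_LAYOUT_UNDEFINED;

VkImage image;
vkCreateImage(device, &imageInfo, nullptr, &image);

VkMemoryRequirements memReq;
vkGetImageMemoryRequirements(device, image, &memReq);

VkMemoryAllocateInfo allocInfo = {};
allocInfo.sType           = VK_STRUCTURE_TYPE_MEMORY_ALLOCATE_INFO;
allocInfo.allocationSize  = memReq.size;
allocInfo.memoryTypeIndex = findDeviceLocalMemoryType(memReq.memoryTypeBits);  // hypothetical helper

VkDeviceMemory memory;
vkAllocateMemory(device, &allocInfo, nullptr, &memory);
vkBindImageMemory(device, image, memory, 0);

// ...and you still need a VkImageView, a staging upload, and a layout
// transition before a shader can actually sample from it.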

Hence the assumption "no sane person will...". Almost everybody, almost all the time, will want to use an intermediate library which does the Vulkan heavy lifting.

#499 _Silence_   Members   


Posted 20 October 2016 - 05:46 AM

samoth said:
> Hence the assumption "no sane person will...". Almost everybody, almost all the time, will want to use an intermediate library which does the Vulkan heavy lifting.

 

This looks to be a good assumption... Once there is a really good low-level library, we can foresee new higher-level libraries built on top of it that provide easier means. But then we might end up with a plethora of libraries, most likely not compatible with each other... each with its own good and bad points.

 

It's also possible that future OpenGL releases, and even Direct3D ones, will be implemented on top of Vulkan... But maybe I'm wrong here.



#500 mhagain   Members   


Posted 20 October 2016 - 07:24 AM

samoth said:
> ...it takes about 100 lines of code just to get a context which can do anything at all set up in GL, and about three times as much in Vulkan. [rest of quote snipped]


The thing is, creating a context is an absolutely standard task, and it's also something you will typically write code for once and only once, then reuse in subsequent projects (or, alternatively, grab somebody else's code off the internet).


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.