Vulkan is Next-Gen OpenGL


#41 L. Spiro   Members   


Posted 04 March 2015 - 10:31 PM

Hopefully Vulkan could also be used to write an opengl implementation on top of it.

That might be fun as a pet project but otherwise I don’t see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long(’s Peak).

 

It may be a fun and novel project to make a Nintendo Entertainment System® emulator, but since OpenGL still exists and will continue to exist and be maintained there’s no novelty in making an OpenGL API rewrite using Vulkan.

 

 

L. Spiro



#42 Boreal   Members   


Posted 04 March 2015 - 10:32 PM

This will be quite interesting as it could potentially mean that a Crossfire/SLI system can accumulate VRAM instead of having to keep each card in a similar state.


"So there you have it, ladies and gentlemen: the only API I’ve ever used that requires both elevated privileges and a dedicated user thread just to copy a block of structures from the kernel to the user." - Casey Muratori

 

boreal.aggydaggy.com


#43 Ashaman73   Members   


Posted 05 March 2015 - 12:30 AM


Ashaman73, on 04 Mar 2015 - 07:01 AM, said:

Matias Goldberg, on 04 Mar 2015 - 06:03 AM, said:

THIS. A lot of people don't seem to get these are very low level APIs with a focus on raw memory manipulation and baking of objects/commands that are needed very frequently. You destroyed a texture while it was still in use?

Come on, times have changed. Current game engines use multithreading, and multithreading is one of the best ways to kill your game project, yet people still manage to code games :)

It's not really the same. Multithreading problems can be debugged and there's a lot of literature and tools to understand them.
It's much harder to debug a problem that locks up your entire system every time you try to analyze it.


Ashaman73, on 04 Mar 2015 - 06:48 AM, said:

I'm currently at the state of handling many things by buffers in the application itself, and that with OGL2.1 (allocate buffers, manage double/triple buffering yourself, handle buffer syncing yourself, etc.). Most likely I use only a few % of the API at all. I think that a modern OGL architecture (AZDO, using buffers everywhere including UBOs, etc.) will be close to what you could expect from Vulkan, and that if they expose some Vulkan features as extensions (command buffers), then switching over to Vulkan will not be a pain in the ass.

If you're already doing AZDO with explicit synchronization, then you will find these new APIs pleasing indeed. However, there are breaking changes, like how textures are loaded and bound. Since there's no hazard tracking, you can't issue a draw call that uses a texture until it is actually in GPU memory. Drivers were also handling residency for you, but now that they don't, out-of-GPU-memory errors can be much more common unless you write your own residency solution.
Then, in the case of D3D12, there are PSOs, which fortunately you should already be emulating for forward compatibility.

Indeed, professional developers won't have many problems; whatever annoyance they may have is obliterated by the happiness from the performance gains. I'm talking from a rookie perspective.

I'm still not seeing a lot of issues here. Multithreading debugging is as hard as visual debugging. Most rookie coders will have a hard time using a profiler or debugger anyway, and reproducing a multithreading issue that only occurs in a fast environment (release mode) and can't easily be reproduced in a slower environment (debug mode) will drive many rookies crazy.

So my point is just that the new APIs, much like multithreading, will not be suited to beginners, but neither are they extremely difficult.
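The manual buffer management quoted above (allocate a buffer, do double/triple buffering yourself, handle the sync yourself) can be sketched in a few lines. This is a toy model in Python, not real GL or Vulkan code; `threading.Event` stands in for a GPU fence:

```python
import threading

class FencedRing:
    """Toy model of manual double/triple buffering: the CPU writes into one
    region per frame and must wait on that region's fence before reusing it,
    in case the GPU is still reading from it."""

    def __init__(self, regions=3):
        # One fence per region; None means "never submitted, free to use".
        self.fences = [None] * regions
        self.frame = 0

    def begin_frame(self):
        slot = self.frame % len(self.fences)
        fence = self.fences[slot]
        if fence is not None:
            fence.wait()  # stalls only if the GPU is several frames behind
        return slot

    def end_frame(self, gpu_done_fence):
        # Remember the fence the "GPU" will signal when done with this region.
        self.fences[self.frame % len(self.fences)] = gpu_done_fence
        self.frame += 1
```

With three regions the CPU almost never stalls; with one region this degenerates into a full CPU/GPU sync every frame, which is exactly the behavior the old driver-managed path hid from you.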


Ashaman

 

Gnoblins: Website - Facebook - Twitter - Youtube - Steam Greenlit - IndieDB - Gamedev Log


#44 Ashaman73   Members   


Posted 05 March 2015 - 12:35 AM


That might be fun as a pet project but otherwise I don’t see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long(’s Peak).

Yet it already happened, more or less. As the shader languages came up, most IHVs removed the fixed-function pipeline and replaced it with internal shaders. This could work for OpenGL too: everything gets compiled to the intermediate language and delegated to the Vulkan driver. Why not?

 

It would be interesting to know whether only the shader code will use the intermediate language, or all commands.


Ashaman

 

Gnoblins: Website - Facebook - Twitter - Youtube - Steam Greenlit - IndieDB - Gamedev Log


#45 Hodgman   Moderators   


Posted 05 March 2015 - 01:02 AM



It's much harder to debug a problem that locks up your entire system every time you try to analyze it.
Hopefully their validation layer is good enough to solve this issue.

I expect that when running in validation mode, every command in every command buffer will be sanity checked, so that it's impossible to crash the GPU -- it will just refuse to submit the buffer to the queue rather than crash. This would also include checking that every page of every pointer-range that you've supplied is actually mapped.

 

Now graphics corruption due to bad synchronisation... that's a different kettle of fish! :D
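The validation behaviour described above can be modeled as a wrapper that checks every command before anything reaches the queue. All names below are hypothetical, not real Vulkan entry points; it just illustrates "refuse the whole submission instead of crashing the GPU":

```python
class ValidationQueue:
    """Toy model of a validation layer: every command in a buffer is
    sanity-checked, and the whole submission is refused (rather than
    letting a bad pointer crash the GPU) if any check fails."""

    def __init__(self, mapped_ranges):
        # (start, length) ranges the application has actually mapped.
        self.mapped = mapped_ranges
        self.submitted = []

    def _pointer_ok(self, start, length):
        return any(s <= start and start + length <= s + l
                   for s, l in self.mapped)

    def submit(self, command_buffer):
        for cmd in command_buffer:
            if cmd["op"] == "copy" and not self._pointer_ok(cmd["src"], cmd["len"]):
                return False  # refuse the buffer; nothing reaches the queue
        self.submitted.append(command_buffer)
        return True
```

A shipping build would skip these checks entirely, which is where the performance of the new APIs comes from.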


That might be fun as a pet project but otherwise I don’t see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long(’s Peak).

NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead. A reliable, open-source GL->Vulkan layer would be very handy for them :)

#46 phantom   Members   


Posted 05 March 2015 - 04:04 AM

If Vulkan doesn't support this, I'll be quite surprised.


Well, it is based on Mantle, and Mantle had that, so I'm hoping they have left that ability intact. The ImgTech blog example code has a 'graphicsQueue' variable in it, which implies there are separate queues which can be created, so I'm hoping this means the preservation of per-device queues and separate graphics and compute queues, even if the memory-transfer one has gone away (although I'd prefer they kept all 3, I could live with just the first chunk).
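The separate-queues idea can be illustrated with a small selection routine. Here `families` is a hypothetical list of capability sets, loosely modeled on how devices advertise queue families; none of this is a real API:

```python
def pick_queues(families):
    """Pick one queue family per role, preferring dedicated compute and
    transfer families when they exist (the Mantle-style split discussed
    above). Assumes at least one family supports graphics."""
    graphics = next(i for i, caps in enumerate(families) if "graphics" in caps)
    # A compute-only family beats reusing the graphics family.
    compute = next((i for i, caps in enumerate(families)
                    if "compute" in caps and "graphics" not in caps), graphics)
    # A pure transfer family suggests a dedicated DMA engine.
    transfer = next((i for i, caps in enumerate(families)
                     if caps == {"transfer"}), graphics)
    return graphics, compute, transfer
```

On hardware without dedicated families, everything simply falls back to the graphics queue, so the code path stays the same either way.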

#47 samoth   Members   


Posted 05 March 2015 - 05:34 AM

 



That might be fun as a pet project but otherwise I don’t see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long(’s Peak).

NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead. A reliable, open-source GL->Vulkan layer would be very handy for them :)

 

It even makes a lot of sense for nVidia. This is a rather high-level thing (compared to a "real" OpenGL implementation) that you write once and never touch again afterwards, and presto: you have backwards compatibility for every card you sold during the last 10 years, with no weird quirks and very little room for card-driver-combo-specific bugs. Plus, every customer can trivially use old OpenGL programs on every new card you sell in the future.

 

That's an immense advantage if you ask me. If nothing else, it's great for marketing.

 

There exist games that ask for OpenGL 3 or 4, and people will be playing them for another 10 years (fewer people every year, but there are people who still want to play DX9 games nowadays, so why not).

 

Customers who don't want to shell out another few tens of thousands for new versions of their CAD software come to mind. They'd probably stick with their hardware (which is totally sufficient as it is, if you're being honest!) for another few years rather than have to update both the hardware and the software. So an IHV interested in selling hardware is somewhat forced to provide OpenGL too, just so the old software keeps working. Now you can let Vulkan do the heavy lifting, and the unmodified OpenGL layer will run on your new cards.


Edited by samoth, 05 March 2015 - 05:38 AM.


#48 swiftcoder   Senior Moderators   


Posted 05 March 2015 - 06:22 AM


NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead.

That definitely was the case once, but I don't think I've had real trouble with an AMD or Intel driver in the past 5 years...


Tristam MacDonald - Software Engineer @ Amazon - [swiftcoding] [GitHub]


#49 FGFS   Members   


Posted 05 March 2015 - 06:28 AM

https://www.youtube.com/watch?v=KdnRI0nquKc

 

I've read somewhere that Intel also already has a Vulkan demo, but I forgot the link.

 

Personally, Vulkan might save me the next PC upgrade. I usually get a new PC every 5 years or so to keep up with games' demands, foremost X-Plane and FSX.

Now I should be able to keep my PC some years longer. :)

Curious about the GDC Valve news coming today.



#50 Hodgman   Moderators   


Posted 05 March 2015 - 06:30 AM

NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead.

That definitely was the case once, but I don't think I've had real trouble with an AMD or Intel driver in the past 5 years...
Performance-wise, NV still has a huge edge.
I don't imagine NV supporting an open-source GL implementation, as it would mean giving up this advantage.

#51 Alessio1989   Members   


Posted 05 March 2015 - 06:31 AM

 


That might be fun as a pet project but otherwise I don’t see the point in subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long(’s Peak).

NVidia has solid GL drivers... but AMD/Intel could probably save themselves a lot of time and money if they could just completely scrap their own GL drivers and just make Vulkan drivers instead. A reliable, open-source GL->Vulkan layer would be very handy for them :)

 

 

Didn't Microsoft do something similar for old and legacy D3D versions every couple of Windows releases? (i.e. current Windows versions wrap D3D8 and older all together)


Edited by Alessio1989, 05 March 2015 - 06:32 AM.

"Recursion is the first step towards madness." - "Skeggǫld, Skálmǫld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

#52 Matias Goldberg   Members   


Posted 05 March 2015 - 08:36 AM

subjecting yourself to the tortures that OpenGL driver writers had to endure for so long (and still will unless they got promoted).
The OpenGL API is significantly flawed, which is specifically why these kinds of major upgrades have been requested for so long(’s Peak).

Agreed.
 

That might be fun as a pet project but otherwise I don’t see the point(...)

IMO the point is that instead of having one GL implementation per vendor, we could have just one running on top of Vulkan. So if it doesn't work on my machine due to an implementation bug, I can at least be 90% certain it won't work on your machine either.
In principle it's no different from ANGLE, which translates GL calls and shaders into DX9.
However, ANGLE is limited to ES2/WebGL-like functionality, and DX9 is a high-level API with high overhead; running on top of Vulkan could deliver very acceptable performance and support the latest GL functionality.
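As a sketch of what such a translation layer does (illustrative names only, not real GL or Vulkan entry points): GL's stateful, immediate-mode calls get buffered, and the accumulated state is baked into one explicit, self-contained command at draw time, much as ANGLE rewrites GL onto DX9:

```python
class GLOnTop:
    """Toy model of a GL implementation layered on a lower-level API:
    stateful GL calls only mutate shadow state; a draw call snapshots
    that state into one explicit command (the 'command buffer')."""

    def __init__(self):
        self.state = {}
        self.commands = []  # stands in for a lower-level command buffer

    def gl_bind_texture(self, unit, tex):
        self.state[("texture", unit)] = tex

    def gl_use_program(self, prog):
        self.state["program"] = prog

    def gl_draw_arrays(self, first, count):
        # Bake current state plus the draw into one self-contained command.
        self.commands.append({"state": dict(self.state),
                              "draw": (first, count)})
```

The hard part a real layer faces, which this sketch omits entirely, is deciding when baked objects (pipelines, descriptor sets) can be reused rather than rebuilt on every draw.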


Edited by Matias Goldberg, 05 March 2015 - 08:37 AM.


#53 samoth   Members   


Posted 05 March 2015 - 11:04 AM

IMO the point is that instead of having one GL implementation per vendor; we could have just one running on top of Vulkan. So if it doesn't work in my machine due to an implementation bug, I can at least be 90% certain it won't work in your machine either.
And the logical consequence would be to crowdsource the maintenance by making the common OpenGL layer open source.

#54 TheChubu   Members   


Posted 05 March 2015 - 12:12 PM


Vulkan presentation is starting right now; here is some live blogging from it: https://steamdb.info/blog/vulkan-the-future-graphics/

 

It won't be streamed apparently, but they said videos will be available later.

 

EDIT: Live tweeting https://twitter.com/scottwasson

 

EDIT2:  This one is taking pics of the slides and all http://www.roadtovr.com/valve-presents-glnext-the-future-of-high-performance-graphics-live-blog-10am-pst/


Edited by TheChubu, 05 March 2015 - 12:56 PM.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#55 Hodgman   Moderators   


Posted 05 March 2015 - 11:15 PM

If Vulkan doesn't support this, I'll be quite surprised.


Well, it is based on Mantle and Mantle had that so I'm hoping they would have left that ability intact; the ImgTech blog example code has a 'graphicsQueue' variable in it which implies there are separate queues which can be made so I'm hoping this means the preservation of per-device queues and separate graphics and compute queues even if the memory transfer one has gone away (although I'd prefer if they kept all 3 but I could live with just the first chunk).

The Intel D3D12 GDC presentation mentioned all 3 queue types -- graphics/compute, compute-only and "copy" :D
 
So explicit DMA-engine control and asynchronous compute exist as D3D12 features, so it should be a pretty safe gamble that they will exist as Mantle 2.0 (Vulkan) features too.

[edit]
They also noted that Intel GPUs don't support async compute, so there's no benefit to using compute-only queues on their devices, and mentioned that their DMA unit is horribly slow, so you're often better off using the graphics queue for copy operations as well :D
So when using these new features we'll definitely have to make some smart decisions, based on the kind of hardware we're running on, about whether to use these extra queues or not -- it sounds like trying to use them on existing Intel chipsets might actually just hurt performance... I guess that's one of the prices we pay for getting a low-level API!
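That hardware-dependent choice might look like this in practice; the capability flags here are made up for illustration, not real device properties:

```python
def queues_to_use(device):
    """Pick which queues to submit to, per the caveat above: on hardware
    without async compute, or with a slow DMA engine (like the Intel parts
    discussed), fall back to the graphics queue for everything."""
    compute_q = "compute" if device.get("async_compute") else "graphics"
    copy_q = "copy" if device.get("fast_dma") else "graphics"
    return compute_q, copy_q
```

On a device reporting neither capability, both roles collapse onto the graphics queue, which is exactly the "don't use those queues" advice from the slide.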

[edit2] Promit posted the slide while I was writing my edit :)

Edited by Hodgman, 05 March 2015 - 11:28 PM.


#56 Promit   Senior Moderators   


Posted 05 March 2015 - 11:21 PM


One of the GDC D3D12 presentations mentioned all 3 queue types -- graphics/compute, compute-only and "copy" 

Followed by Intel's comment: "By the way, we don't have simultaneous compute and graphics even in Broadwell so please don't use those queues. Thanks!"

[Slide photo: B_X6zFsUIAAKzSh.jpg (image credit Scott Wasson)]


Edited by Promit, 05 March 2015 - 11:23 PM.

SlimDX | Shark Eaters for iOS | Ventspace Blog | Twitter | Proud supporter of diversity and inclusiveness in game development

#57 Promit   Senior Moderators   


Posted 06 March 2015 - 02:06 AM


Alright, so today's Vulkan slides are now up:

https://www.khronos.org/developers/library/2015-gdc

Here's the main slideset (PDF): https://www.khronos.org/assets/uploads/developers/library/2015-gdc/Khronos-Vulkan-GDC-Mar15.pdf

Explicit multi-GPU is a go, including heterogeneous multi-vendor.


Edited by Promit, 06 March 2015 - 02:06 AM.

SlimDX | Shark Eaters for iOS | Ventspace Blog | Twitter | Proud supporter of diversity and inclusiveness in game development

#58 Alessio1989   Members   


Posted 06 March 2015 - 06:10 AM

 


One of the GDC D3D12 presentations mentioned all 3 queue types -- graphics/compute, compute-only and "copy" 

Followed by Intel's comment: "By the way, we don't have simultaneous compute and graphics even in Broadwell so please don't use those queues. Thanks!"

[Slide photo: B_X6zFsUIAAKzSh.jpg (image credit Scott Wasson)]

 

 

[Slide photo: B_X9jiXUcAAFY3B.jpg (image credit Scott Wasson)]

 
 
These two pics explain a lot about Intel hw ^_^

"Recursion is the first step towards madness." - "Skeggǫld, Skálmǫld, Skildir ro Klofnir!"
Direct3D 12 quick reference: https://github.com/alessiot89/D3D12QuickRef/

#59 TheChubu   Members   


Posted 06 March 2015 - 07:33 AM

Here's the main slideset (PDF): https://www.khronos.org/assets/uploads/developers/library/2015-gdc/Khronos-Vulkan-GDC-Mar15.pdf

Sweet. I'm not quite getting the whole descriptor sets thingy. On the other hand, pipeline state compiling and the render pass object look really nice.

 

EDIT: Here is the recorded presentation

 

 

Says in the description they'll be uploading it in better quality in a few days, but hand in hand with the slides it's easy to follow (except for the GFXBench guy).


Edited by TheChubu, 06 March 2015 - 08:05 AM.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#60 agleed   Members   


Posted 06 March 2015 - 09:18 AM

I like the question one guy asked about SPIR-V -> CPU output. Imagine writing GLSL/HLSL/whatever and being able to decide whether it's executed on the GPU or the CPU. Since it's based on LLVM, the backend for it should already be there, shouldn't it? So you could replicate a lot of your pipeline on the CPU with no or very few extra lines of code (just reuse what is already there in shader code), and offload work to the CPU if you need to. Even better, no requirement for external SIMD tech like ISPC or, worse, writing SIMD code manually (I know lots of people do it because it's necessary, but you have to admit it doesn't make the code any easier to understand).
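As a sketch of that idea, with plain Python standing in for a hypothetical SPIR-V -> LLVM -> CPU path (an assumption about how such a backend would be wired up, not anything announced): write the kernel once, and let the dispatcher decide where it runs:

```python
def saxpy_kernel(a, x, y):
    """The 'shader' written once: y[i] = a * x[i] + y[i]."""
    return [a * xi + yi for xi, yi in zip(x, y)]

def dispatch(kernel, *args, backend="gpu", gpu_available=False):
    """Toy dispatcher: a real implementation would feed the same SPIR-V
    through an LLVM CPU backend; here the 'CPU backend' is just the Python
    interpreter, and the GPU path transparently falls back when absent."""
    if backend == "gpu" and not gpu_available:
        backend = "cpu"  # offload transparently; the same kernel code runs
    return kernel(*args)
```

The point of the question in the talk is exactly this: one kernel source, two execution targets, no hand-written SIMD duplicate to keep in sync.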


Edited by agleed, 06 March 2015 - 11:08 AM.