NumberXaero

Vulkan is Next-Gen OpenGL

Recommended Posts

_the_phantom_    11250
Yeah, as I said in the other thread, it seems sane... a Khronos take on the Mantle API.

Interested to see the complete model; do we get separate command queues for graphics and compute? (Based on the ImgTec blog this looks to be the case!) How does it deal with multi-GPU machines? What about upload/download control?

But on the face of it things look sane... which I still find confusing...

Edited by phantom
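For reference, the answer in the Vulkan API as it eventually shipped is yes: each queue family advertises capability flags, so compute-only (and transfer-only) families can exist alongside graphics families. A minimal sketch, assuming an already-enumerated VkPhysicalDevice handle (called gpu here purely for illustration):

#include <vulkan/vulkan.h>
#include <vector>
#include <cstdio>

// List the queue families a device exposes and which capabilities each carries.
void listQueueFamilies(VkPhysicalDevice gpu) {
    uint32_t count = 0;
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, nullptr);
    std::vector<VkQueueFamilyProperties> families(count);
    vkGetPhysicalDeviceQueueFamilyProperties(gpu, &count, families.data());
    for (uint32_t i = 0; i < count; ++i) {
        bool graphics = (families[i].queueFlags & VK_QUEUE_GRAPHICS_BIT) != 0;
        bool compute  = (families[i].queueFlags & VK_QUEUE_COMPUTE_BIT)  != 0;
        bool transfer = (families[i].queueFlags & VK_QUEUE_TRANSFER_BIT) != 0;
        std::printf("family %u: graphics=%d compute=%d transfer=%d queues=%u\n",
                    i, (int)graphics, (int)compute, (int)transfer, families[i].queueCount);
    }
}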

Klutzershy    1681
Will work on any platform that supports OpenGL ES 3.1 and up

 

Now THAT is exciting news.

 

EDIT: Maybe I spoke too soon; AMD doesn't support it then? Weird.

Edited by Boreal Games

_the_phantom_    11250
ES 3.1 is just a target hardware level; AMD simply doesn't have an ES 3.1 driver, and ES is used as the baseline because of mobile.

AMD's hardware will support this; likely anything which can support GL 3.3 (which is roughly where ES 3.1 sits) can support this API, which means basically all desktop hardware in the wild today (driver allowing).
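For anyone wanting to check where a given machine sits, a minimal sketch of querying the context version and driver at runtime (assumes a GL function loader such as glad or GLEW, and an already-current 3.x+ context):

#include <glad/glad.h> // assumption: glad (or another loader) provides the 3.x entry points
#include <cstdio>

// Print what version and driver the current context actually exposes.
void printContextInfo() {
    GLint major = 0, minor = 0;
    glGetIntegerv(GL_MAJOR_VERSION, &major); // available since GL 3.0
    glGetIntegerv(GL_MINOR_VERSION, &minor);
    std::printf("GL %d.%d - %s (%s)\n", major, minor,
                reinterpret_cast<const char*>(glGetString(GL_RENDERER)),
                reinterpret_cast<const char*>(glGetString(GL_VENDOR)));
}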

TheChubu    9446


AMD's hardware will support this; likely anything which can support GL 3.3 (which is roughly where ES 3.1 sits) can support this API, which means basically all desktop hardware in the wild today (driver allowing).
I can only hope. OpenGL 3.3 support in AMD hardware goes back to the HD 2xxx series, but they stopped supporting newer extensions on that hardware after OpenGL 4.2, whereas nVidia stopped at 4.4. That's why something really useful like ARB_direct_state_access is only available on OpenGL 4.5 hardware, and why something simple like ARB_vertex_attrib_binding works on an nVidia GeForce 8800 GT from 2007 but doesn't work on an ATI Radeon HD 4870 from 2009.
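To make concrete what that extension buys (and what drivers lacking it miss out on), here is a small sketch contrasting the classic attribute setup with ARB_vertex_attrib_binding; the loader and the VAO/VBO handles are assumed to already exist:

#include <glad/glad.h> // assumption: a loader exposing GL 4.3 / ARB_vertex_attrib_binding

// Set up attribute 0 as a vec3 position, shown both the classic way and
// with vertex_attrib_binding, which splits the format from the buffer binding.
void setupAttrib0(GLuint vao, GLuint vbo) {
    glBindVertexArray(vao);

    // Classic path: format and source buffer are welded together in one call.
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(0);

    // vertex_attrib_binding path: declare the format once per VAO...
    glVertexAttribFormat(0, 3, GL_FLOAT, GL_FALSE, 0); // attrib 0, vec3, relative offset 0
    glVertexAttribBinding(0, 0);                       // attrib 0 reads from binding point 0
    glEnableVertexAttribArray(0);
    // ...and rebind source buffers cheaply without touching the format.
    glBindVertexBuffer(0, vbo, 0, 3 * sizeof(float));  // binding 0, offset, stride
}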

 

I'm not seeing them supporting Vulkan on anything pre-HD 5xxx; maybe they'll go as far as not supporting Vulkan on anything pre-GCN.

Ashaman73    13715

If the driver support is there too, then D3D12 is dead in the water for me

Microsoft is not listed as a supporter (they haven't supported OpenGL for years now, right?), and if Vulkan holds its promises and Mantle is the preferred console API, why would someone want to use D3D12? To support Win10 games? Given Microsoft's history of losing interest in projects (from DirectPlay/DirectSound through several DirectX versions up to XNA), who wants to risk using an API which might be unsupported in 1-2 years?

Edited by Ashaman73

mrhyperpenguin    468

Looks like Khronos finally got their API redesign, pretty good timing too.

 

But in order for this to take off there need to be good drivers. Not sure if Apple will keep up-to-date drivers (given their past poor OpenGL support and their push for Metal), and not sure how Microsoft will cooperate (DX12 still not available). So it's mostly up to the GPU manufacturers (NV, AMD, Intel, PowerVR) to maintain good drivers. From [0], it looks like there's still a lot of work to be done.

 

[0] - www.youtube.com/watch?v=KdnRI0nquKc

Washu    7829

why would someone want to use D3D12?

 
The reason to prefer D3D has always been more robust drivers and better tools on an API that hits ~95% of the target market. That's it. Preference for one vendor or one platform comes nowhere into it, nor do any malicious backdoor shenanigans. The D3D driver model was simply a better driver model, and once the emotional aspect of the original API war burned out it became obvious to everyone that this was what made those advantages possible. But that's also the hurdle that Vulkan now has to get over (and Khronos are making the right kind of noises about this, which is encouraging).
 
Right now Vulkan seems to have a head-start, and if we can get a spec, sample apps, some functional tools and reasonable drivers from all 3 desktop vendors by SIGGRAPH, it should eat D3D 12.
 
On the other hand if Khronos stall or if the vendors fail to deliver then D3D 12 will have a chance to jump back ahead.
 
Either way, the next year is going to be interesting.


Robust tools are a huge winning point there. OpenGL could target 95% of the market as well, but the lack of decent vendor-agnostic tools made it a real pain in the keister to use. Using an API where you can actually debug what is going on (PIX, for example) is really important. Unfortunately, OpenGL never really had that capability and you were stuck with whatever tools the various GPU vendors provided, which, frankly, all suck in their own unique ways.
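The closest thing core GL itself offers is the KHR_debug message callback (core in 4.3); it's nowhere near a frame debugger like PIX, but for reference a minimal sketch (assuming a loader and a debug-capable context) looks like this:

#include <glad/glad.h> // assumption: a loader exposing GL 4.3 / KHR_debug
#include <cstdio>

// Driver-supplied error/performance messages are delivered to this callback.
static void APIENTRY onGlMessage(GLenum source, GLenum type, GLuint id,
                                 GLenum severity, GLsizei length,
                                 const GLchar* message, const void* userParam) {
    std::fprintf(stderr, "[GL] type=0x%x severity=0x%x: %s\n", type, severity, message);
}

void enableGlDebugOutput() {
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS); // report on the thread making the offending call
    glDebugMessageCallback(onGlMessage, nullptr);
}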

Vulkan certainly looks interesting, but I also recall the LAST TIME we were promised something great OpenGL-wise. You might remember how that ended. So for the moment my current mood is: pessimistic without further evidence.

Edited by Washu

Promit    13246

 


Why would MS have to do anything?
They don't support OpenGL, Mantle, OpenCL and CUDA and yet they all work just fine... this is no different.

 

Not on tablet/phone hardware they don't. Nor is there VS support without wacky plugins from IHVs. Still, I don't know if I dare to dream that the new standards-attentive MS will actually boost Vulkan to first-class support.

Edited by Promit

swiftcoder    18426

Not on tablet/phone hardware they don't.

I'm not sure that Microsoft-based phones and tablets represent a credible enough install-base to be worried about.

 

More interesting will be to see if Apple lets this in the door on iOS - without that chunk of the mobile market, you'll be stuck supporting Vulkan and Metal for the foreseeable future.

cozzie    5029
Any idea how Vulkan will be dividing support across lots of different devices?
Meaning that you don't have a disadvantage when developing for current-gen consoles/PC versus a GUI for a washing machine.
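For what it's worth, the API as it eventually shipped doesn't carve hardware into fixed feature levels; every physical device reports its own limits and feature bits, and the application queries them. A minimal sketch, again assuming an already-enumerated VkPhysicalDevice handle (called gpu here):

#include <vulkan/vulkan.h>
#include <cstdio>

// Print a few of the per-device capabilities an application would branch on.
void printDeviceCaps(VkPhysicalDevice gpu) {
    VkPhysicalDeviceProperties props{};
    VkPhysicalDeviceFeatures features{};
    vkGetPhysicalDeviceProperties(gpu, &props);
    vkGetPhysicalDeviceFeatures(gpu, &features);

    std::printf("%s: maxImageDimension2D=%u geometryShader=%d tessellation=%d\n",
                props.deviceName,
                props.limits.maxImageDimension2D,
                (int)features.geometryShader,
                (int)features.tessellationShader);
}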

Promit    13246

AMD published a new release: http://community.amd.com/community/amd-blogs/amd-gaming/blog/2015/03/03/one-of-mantles-futures-vulkan

The main point being that Vulkan is essentially an iterated, cross-platform version of Mantle. I like that AMD was willing to describe their own press release from hours earlier as cryptic.

mrhyperpenguin    468

 

and not sure how Microsoft will cooperate (DX12 still not available).


Why would MS have to do anything?
They don't support OpenGL, Mantle, OpenCL and CUDA and yet they all work just fine... this is no different.

 

 

I was trying to bring up the fact that some people believe Microsoft sabotaged the OpenGL implementation on Windows to increase DirectX adoption, and to ask whether Microsoft will allow Vulkan and Mantle to be first-class citizens alongside DX12 (if that's even possible), and whether Microsoft will keep up their open-source-friendly ways (like Promit mentioned).

 

IIRC, Apple has to explicitly allow support for new APIs because they write their own drivers. So "it just works" isn't always possible.

