Metal API .... wait, what?

It's not really a coding horror. AMD has introduced Mantle to fill this niche, DirectX 12 is pitched in the same direction, and console GPUs have been programmed this way for decades.

Most AAA game studios and engine developers already target at least 3-5 separate graphics APIs (DirectX 9, OpenGL, Xbox 360, PS3, and Wii), with the leaders targeting many more (DirectX 11, OpenGL ES for Android/iOS, Xbox One and PS4). So adding another platform to their rendering backend is not a huge deal, especially if it gives them a competitive advantage.

And the advantages are pretty big - not only do you get a drastic increase in the number of draw calls which can be performed in a frame due to lower setup overhead, but explicit management of memory gives you a degree of control completely unobtainable through traditional OpenGL/DirectX drivers.


I for one am very happy to see Metal -- much more so than I care about Mantle or D3D 12. The GL ES API is a ridiculous amount of overhead.



Too bad they had to make it an Objective-C API.

 

Would it be better if it used their new Swift (supposedly Obj-C without the C) language? Because the only thing better than Objective-C is a new language based on Objective-C.


I can't imagine that OpenGL will just go away.  It is used by so many other things besides games.  I figure that all the effort Valve has put into Linux and OpenGL will force it to stay relevant.

 

My big question is what to start learning now to stay relevant?  Are there books on this new way of doing graphics, or is it still too "bleeding edge"?


My big question is what to start learning now to stay relevant?  Are there books on this new way of doing graphics, or is it still too "bleeding edge"?

 

I don't think so. Collectively there's a lot of experience with this style of graphics programming among console developers, but unfortunately nobody can share anything publicly due to NDAs.

Probably the biggest issues with people looking to learn will be the manual memory management and synchronization. These things open the door to a lot of performance gains and memory savings, but getting it right can be very difficult. 

 

Metal's C++ based shader language is an interesting idea. Generics/templates might be fun.

 

Yeah, it's pretty interesting. There have definitely been times when I wished I could template a function so that I didn't have to write four versions of it to handle the four float vector types. Using Clang also seems like a good idea, so that they can leverage that technology instead of subjecting their developers to years of working with a buggy compiler. However, if I were them, I would be careful to make sure that it's not too difficult to convert from HLSL or GLSL. Does anyone know if they support swizzles?

Yeah, they support swizzles and a lot of the other stuff you'd expect from a shading language:
https://developer.apple.com/library/prerelease/ios/documentation/Metal/Reference/MetalShadingLanguageGuide/Introduction/Introduction.html


Regarding syncing, I only had a very quick skim of the docs yesterday, but I thought I saw material on how to tell whether a command buffer has executed yet, in order to know whether it's safe for the CPU to read produced data or overwrite consumed data.


We are now in a fun situation where three APIs (D3D12, Mantle and Metal) look largely the same, and then there's OpenGL. While this won't "kill" OpenGL, the general feeling outside of those with a vested interest in it is that the other three are going to murder it on CPU performance, thanks to lower overhead, explicit control, and the ability to set up work across multiple threads.

It'll be interesting to see what reply, if any, Khronos has to this direction of development, because aside from the N-APIs problem, the shape of the thing is what devs have been asking for (and, on consoles, using) for a while now.

 

This is why I just want something like this from OpenGL, at least on the driver overhead front, and if possible (hardware guys, make it so!) with memory control. One API to rule them, One API to run them, One API to render them all, and in the code bind them (or bindless, if that's your thing).

 

But that's Khronos, at least I got a Lord of the Rings reference out of them.

Edited by Frenetic Pony



Using Clang also seems like a good idea, so that they can leverage that technology instead of subjecting their developers to years of working with a buggy compiler. However, if I were them, I would be careful to make sure that it's not too difficult to convert from HLSL or GLSL. Does anyone know if they support swizzles?

Keep in mind that Apple's GLSL compiler has been running through Clang for some time now.

 


Where are their versioning/extension/feature query APIs? They don't seem to have any in place.

I'd imagine they'll just do the same thing they do for API releases: additive only, placing new features behind #if METAL_SDK_MIN_VERSION >= 1.1 guards.

  • What's with the 16-bytes-of-color-data-per-sample restriction on framebuffers? A single 4xFloat32 target already consumes all of that, and it's not possible to add further render targets after that. That is very low and limits a lot of deferred rendering techniques, which I'd think the latest-gen hardware is perfectly capable of.

 

I would assume that they're just exposing the limitations of the on-chip framebuffer used by PowerVR GPUs.


They mention a 4xMRT limit, so 4 x 4x32bit follows from that anyway, seeing as 4x32bit is the fattest texture format.

But yes, a "feature level" API would be nice, so you can ask the device what kind of limits it actually has.

 

BTW, I wouldn't dare use 4 x 4x32bit on the next-gen consoles! That's way too much bandwidth -- at 1080p, that's over 120MiB of framebuffer!

(or if you assume a perfectly optimal only 1 write followed by 1 read per pixel, then at 60Hz that's 14.8GiB/s of bandwidth)

3 x 4x8bit is a pretty standard setup for deferred rendering, and even that's uncomfortable from a bandwidth point of view...

 

Keep in mind that on Xbone, you want to keep your framebuffers under ~30MiB! 3 x 4x8bit plus a depth buffer just barely fits in 32MiB of ESRAM.



Will it be available from C/C++ code? The examples are all with Obj-C, but perhaps they will provide a C header for accessing from C code?

 

Since you can write C/C++ code in Obj-C files, it should be pretty easy to integrate an Obj-C graphics API into an existing C/C++ engine: just implement your backend in a .m/.mm file instead of a .c/.cpp file.

 

That fact also makes a C version of the API a bit unnecessary; it would just be a more or less 1:1 wrapper with little or no gain.


They mention a 4xMRT limit, so 4 x 4x32bit follows from that anyway, seeing as 4x32bit is the fattest texture format.

But yes, a "feature level" API would be nice, so you can ask the device what kind of limits it actually has.

[...]

 

Oh, perhaps I read that wrong. The docs state "A framebuffer can store up to 16 bytes of color data per sample." rather than, e.g., "A framebuffer can store up to 16 bytes of color data per sample in each color attachment point.", so I understood the total to be accumulated over all attachments in use. If the limit is 16 bytes per attachment, then that's of course adequate for most uses; however, stating such a limit is odd, since I don't know of any pixel format where a pixel takes up more than 16 bytes anyway. I guess some exotic RGBAxFloat64Bit format would hit it, but I haven't even heard of such a thing in D3D/GL land.


Will it be available from C/C++ code? The examples are all with Obj-C, but perhaps they will provide a C header for accessing from C code?

 

Since you can write C/C++ code in Obj-C files, it should be pretty easy to integrate an Obj-C graphics API into an existing C/C++ engine: just implement your backend in a .m/.mm file instead of a .c/.cpp file.

 

That fact also makes a C version of the API a bit unnecessary; it would just be a more or less 1:1 wrapper with little or no gain.

 

This is basically saying "since you can access the API in Objective-C, there's no point in providing a C/C++ API"? I was pondering exactly whether it was possible to avoid having to write Objective-C to access the API.


This is basically saying "since you can access the API in Objective-C, there's no point in providing a C/C++ API"? I was pondering exactly whether it was possible to avoid having to write Objective-C to access the API.

 

Fair point. I guess I'm so used to writing in multiple languages in one project I don't think of it as a problem...

 

What I was trying to say was that the Obj-C API would have to be wrapped in a C API anyway. Either Apple does it, knowing nothing about the rest of your system, so they have to do a 1:1 mapping which you then build your abstractions on top of; or you write the wrapper yourself and can implement the higher-level abstractions directly, shaving one layer of code off your system (with the same number of levels to maintain). And the Obj-C code would not spread through your project; it would stay confined to one or a few files.

 

Also, mixing Obj-C and C++ is actually very easy and has minimal overhead. Though you would have to write some Obj-C code, of course.

Edited by Olof Hedman



This is basically saying "since you can access the API in Objective-C, there's no point in providing a C/C++ API"? I was pondering exactly whether it was possible to avoid having to write Objective-C to access the API.

It's a proprietary platform, with all the framework APIs exposed pretty much only to Objective-C (and now Swift).

 

You already have to write Objective-C just to create a window and an OpenGL context, so I don't think Apple has much impetus to provide C APIs for their newer stuff.


Metal's C++ based shader language is an interesting idea. Generics/templates might be fun.

I don't think the shader language is C++-based. I've never seen a double square bracket in C++; it looks more like Objective-C to me.


I don't think the shader language is C++-based. I've never seen a double square bracket in C++; it looks more like Objective-C to me.

"The Metal shading language is based on the C++11 Specification (a.k.a., the ISO/IEC JTC1/SC22/WG21 N3290 Language Specification) with specific extensions and restrictions."
 -- from the Metal Shading Language Guide.
