It's all about DirectX and OpenGL?


20 replies to this topic

#1 krinosx   Members   -  Reputation: 565


Posted 03 December 2013 - 07:39 AM

Hi guys,

 

I don't even know if this is the right place to post my question, but here goes.

 

I was wondering whether all platforms support DirectX and/or OpenGL.

 

Let me try to clarify my doubts:

I know Windows supports DirectX (and also OpenGL), as do Xbox and Windows Phone (maybe).

 

As far as I know, other platforms like Mac OS X, iOS (iPhone, iPad, etc.), Android, and Linux support OpenGL (or OpenGL ES on mobile).

 

But what about the PlayStation 3 and PlayStation 4? PlayStation Vita? Wii, GameCube, Nintendo 3DS, and so on?

 

What are their drawing APIs? Do they support some kind of OpenGL? Are there other drawing APIs for consoles, or some proprietary API for each platform?

 

I know most engines will convert your code to platform-native code. With Unity3D, for example, you write your game in Unity and it compiles your code and generates all the native code (or something like that) for iPhone, Xbox, PS, etc... but I want to know at a low level: what do Unity-like engines use to convert code to something platform-specific?

 

And if you use something like a 'PlayStation SDK' (I don't even know if it exists), how do you develop your graphics?

 

Well, thanks in advance for any replies.

 

PS: Sorry about my English, I am not a native speaker/writer.

 




#2 Zaoshi Kaba   Crossbones+   -  Reputation: 4645


Posted 03 December 2013 - 07:51 AM

The PlayStation 3 provides an SDK which you have to use to make games for it. Chances are it's neither DirectX nor OpenGL (I don't have it, so I don't know); it's probably the same situation with the PS4, PS Vita and other consoles. They use more or less the same GPU architecture, so the same theory applies: vertex buffers, index buffers, shaders, etc.


Edited by Zaoshi Kaba, 03 December 2013 - 07:51 AM.


#3 krinosx   Members   -  Reputation: 565


Posted 03 December 2013 - 08:11 AM

Thanks for your reply Zaoshi!!

 

So, it's a good idea to write a 'wrapper' layer over the graphics API if you want to write a cross-platform game engine... right? :)

Could someone who uses the PlayStation SDK and/or some 'Nintendo SDK' share some knowledge?

Thanks!! :D



#4 ATEFred   Members   -  Reputation: 1132


Posted 03 December 2013 - 09:20 AM

Thanks for your reply Zaoshi!!

 

So, it's a good idea to write a 'wrapper' layer over the graphics API if you want to write a cross-platform game engine... right? :)

Could someone who uses the PlayStation SDK and/or some 'Nintendo SDK' share some knowledge?

Thanks!! :D

 

Yeah, you want to have your own wrapper around the different graphics APIs (libgcm for PS3, libgnm for PS4, DX11, DX for Xbox, OGL, etc.).
Coming up with the right level of abstraction can take some time, and you need to learn the differences between the APIs pretty well to get it right, but overall it's not too difficult.
 

As Zaoshi mentioned, the constructs are pretty similar across all major graphics APIs. Some things are still a touch different, such as constant management (GLES2 vs DX11/OGL3+ vs consoles), and console graphics APIs also usually expose a lot more than is typically available on PC through DX and OGL.
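
For illustration only, a minimal sketch of the kind of wrapper interface people mean here (all names are hypothetical; each backend would be implemented on top of D3D11, OpenGL, libgcm/libgnm, and so on):

#include <cstdint>
#include <cstddef>

// Hypothetical opaque handles; each platform backend maps them to its own objects.
struct BufferHandle { uint32_t id; };
struct ShaderHandle { uint32_t id; };

// One interface, one implementation per platform (e.g. D3D11Device, GLDevice, GcmDevice).
class IRenderDevice {
public:
    virtual ~IRenderDevice() = default;

    virtual BufferHandle createVertexBuffer(const void* data, size_t bytes) = 0;
    virtual BufferHandle createIndexBuffer (const void* data, size_t bytes) = 0;
    virtual ShaderHandle createShader      (const void* bytecode, size_t bytes) = 0;

    virtual void bindShader      (ShaderHandle shader) = 0;
    virtual void bindVertexBuffer(BufferHandle buffer) = 0;
    virtual void drawIndexed     (BufferHandle indexBuffer, uint32_t indexCount) = 0;
    virtual void present() = 0;
};

Game and engine code only ever talks to the interface; which concrete backend gets instantiated is decided per platform at build or startup time.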



#5 pcmaster   Members   -  Reputation: 685


Posted 03 December 2013 - 09:52 AM

The "western" console uses an enhanced version of their known graphics API and the "oriental" one uses its own completely new API, nobody can disclose more, I'm afraid:D Dunno about the other "oriental" console. However the concepts are really the same, also the shading languages are extremely similar, so you can wrap it and port it quite easily.



#6 krinosx   Members   -  Reputation: 565


Posted 03 December 2013 - 10:33 AM

Hmm, interesting... with the names given by ATEFred I was able to research something ('googled it') and found some references to libGCM for PS3...
 
It is a derivation of OpenGL ES 1.0, as far as I was able to find... so it's an 'OpenGL-like' library... with its differences, I imagine, but not something completely new...
 
Some time ago I read something about programming graphics for old consoles (Atari, Master System, etc.) and it was a very... hmm, how can I say... primitive way to develop. OpenGL and DirectX were a big evolution, and I think they are the 'top of mind' technology for now...
 
I am a bit curious about what cannot be 'disclosed' for now, as pcmaster says... but I think the 'insiders' (the pro game developers, the guys who work in the industry) may know of some new platform coming... I hope someday it becomes public and/or I can find my place in the game industry :)
 
Thanks guys for the information given!!
 
much appreciated!

Edited by krinosx, 03 December 2013 - 10:34 AM.


#7 cozzie   Members   -  Reputation: 1771


Posted 03 December 2013 - 01:44 PM

I would guess the Xbox One uses some sort of DirectX 11 API (not sure, just based on the hardware used and Microsoft being the manufacturer :)). I believe it was similar with the Xbox 360, which used XNA.

#8 Krypt0n   Crossbones+   -  Reputation: 2684


Posted 03 December 2013 - 02:45 PM

http://en.wikipedia.org/wiki/PSGL



#9 Hodgman   Moderators   -  Reputation: 31947


Posted 03 December 2013 - 03:03 PM

Hmm, interesting... with the names given by ATEFred I was able to research something ('googled it') and found some references to libGCM for PS3...
It is a derivation of OpenGL ES 1.0, as far as I was able to find... so it's an 'OpenGL-like' library... with its differences, I imagine, but not something completely new...

As linked above, you're talking about PSGL, not GCM. GCM is the native API. PSGL is a wrapper around GCM, which makes it look more like a GL-like API.
 

I am a bit curious about what cannot be 'disclosed' for now, as pcmaster says...

Whenever you're working with private SDKs, you have to sign a non-disclosure agreement, which basically means that in exchange for access to these tools, you have to treat them as being top-secret. Breaching the secrecy agreements will get your company's licence revoked, and/or yourself fired.



#10 Ravyne   GDNet+   -  Reputation: 8194


Posted 03 December 2013 - 03:22 PM

In general, console developers have much lower-level access to graphics programming constructs than on less-specialized platforms like PCs, tablets, or smartphones. So, rather than having an API which gives you nice calls for drawing collections of vertices and which takes care of graphics memory management and threading in its own special way (in conjunction with the OS, drivers, and hardware), you get to do things like building raw command lists and DMA'ing them to the GPU, managing graphics memory on your own, and making sure all your threads play nicely together. It's this free-wheeling quality, together with less abstraction overhead, that allows game consoles to perform several times better than a PC with identical hardware.
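
To make that concrete, here is a purely hypothetical sketch of the kind of bookkeeping that becomes your job on such a platform (none of these names come from a real SDK; a real one gives you the actual memory regions and the actual kick mechanism):

#include <cstdint>
#include <cstddef>

// A command ring carved out of GPU-visible memory that you manage yourself.
constexpr size_t kRingWords = 4096;
static uint32_t  commandRing[kRingWords];   // raw command words the GPU will fetch
static size_t    writeIndex = 0;            // CPU write position
static volatile size_t gpuReadLimit = 0;    // stand-in for the register the GPU polls

void pushCommand(uint32_t word) {
    commandRing[writeIndex % kRingWords] = word;  // append one packet word, wrapping the ring
    ++writeIndex;
}

void kickGpu() {
    // On real hardware this is a register write or DMA kick; here it just
    // publishes how far the GPU is allowed to consume.
    gpuReadLimit = writeIndex;
}

int main() {
    pushCommand(0xDEADBEEF);  // pretend this is an encoded draw packet
    kickGpu();
}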

 

AMD's upcoming Mantle API is supposed to bring many of these abilities to the PC space, although Mantle will necessarily have to play well with the OS/driver model and pay a higher abstraction penalty than consoles do (though lower than D3D or OpenGL). I expect the Mantle programming model to fall somewhere just past the midpoint between PC APIs and console-style APIs, and the performance benefit to fall at about the same point. I also expect that such a model is much more difficult to grok than D3D or OpenGL for the average graphics programmer, and that a naive graphics programmer will be able to shoot himself in the foot far more often than with D3D or OpenGL, so performance wins will in no way be guaranteed.



#11 MJP   Moderators   -  Reputation: 11790


Posted 03 December 2013 - 03:51 PM

I'm not going to go into the specifics of the graphics APIs on Vita, PS3 or PS4, since I'm a registered developer for all of those. But you can probably find some info if you look around enough.

The Xbox 360 is actually pretty interesting in terms of the APIs it provides. The API superficially resembles D3D9, and in fact you can treat it that way and directly compile D3D9 code meant for a PC and it will work. However, unlike the PC version, it also exposes new functionality and additional data structures that control the low-level workings of the GPU. There are actually some Gamefest presentations you can find that go into some of the details, if you're interested. As for the XB1, I have no particular insight into what's being used on that console, but I would guess they're taking a similar approach to allow for easy porting of D3D11-based engines.


Edited by MJP, 03 December 2013 - 03:54 PM.


#12 krinosx   Members   -  Reputation: 565


Posted 03 December 2013 - 04:07 PM

Thanks guys for the information shared!!

 

 


As linked above, you're talking about PSGL, not GCM. GCM is the native API. PSGL is a wrapper around GCM, which makes it look more like a GL-like API.

 

 

Hmm, thanks for the clarification! So, if I got it right... libGCM is the lowest level used on the PS3... PSGL was derived from OpenGL ES and calls the libGCM functions to deal with the PS3 hardware...

 

 

Whenever you're working with private SDKs, you have to sign a non-disclosure agreement, which basically means that in exchange for access to these tools, you have to treat them as being top-secret. Breaching the secrecy agreements will get your company's licence revoked, and/or yourself fired.

 

Hmm, I can imagine... so, as I said, I hope someday I can find a place in the game industry and be able to work with these tools...

 

 


AMD's upcoming Mantle API is supposed to bring many of these abilities to the PC-space

 

Never heard of it... I will google it and see what I can find... :D Thanks for the info! (I will shoot myself in the foot plenty, for sure!)

 

 

 


There's actually some Gamefest presentations you can find that go into some of the details, if you're interested

I will take a look at https://www.microsoftgamefest.com/pastconferences.htm and see what I can find! Thanks a lot!


Edited by krinosx, 03 December 2013 - 04:08 PM.


#13 Krypt0n   Crossbones+   -  Reputation: 2684


Posted 04 December 2013 - 02:39 AM

another nice read:

http://www.psdevwiki.com/ps3/RSX



#14 jHaskell   Members   -  Reputation: 1109


Posted 05 December 2013 - 02:09 PM


allows game consoles to perform several times better than a PC with identical hardware.

 

Are you really claiming that, with identical hardware, a console will have 3x better performance than a PC?



#15 Hodgman   Moderators   -  Reputation: 31947


Posted 05 December 2013 - 02:28 PM

Try playing GTA4/other modern console game on a high-end PC from 2006 (or a 3GHz PowerPC Mac with a GeForce7) and find out ;-)

#16 ATEFred   Members   -  Reputation: 1132


Posted 05 December 2013 - 04:06 PM

Try playing GTA4/other modern console game on a high-end PC from 2006 (or a 3GHz PowerPC Mac with a GeForce7) and find out ;-)

To be fair, that is not only down to the APIs giving you lower-level access to the HW (though that plays a part, of course). A big part is optimizing your game (engine, assets, the lot) for just one fixed setup.



#17 Hodgman   Moderators   -  Reputation: 31947


Posted 05 December 2013 - 05:01 PM

Yeah, ";-)" was because that was the cheeky answer.

To be less cheeky:

 

It is true, though, that it's very hard to play modern games on a PC that's as old as the consoles are. Every time I see a new game come out, I'm truly amazed at the technical skill that's enabled these teams to produce such amazing results on such crappy, crappy old hardware.

If they've made a PS3 version, though, then in theory it should be capable of running on a 2006 PC (maybe not at 30Hz, though)... but in practice, most devs simply don't support PCs with graphics cards that old any more, because they're too slow and aren't capable enough. Plus, CPUs that old are pretty terrible compared to the CPU power available in the PS3 (provided that the code is written specifically for the PS3, and not just ported directly from PC).

 

 

Less API abstraction isn't something that you can easily sum up as giving you x times more performance. Sure, there's less overhead in a lot of things: when calling a D3D or GL function, there are quite a few layers of calls and/or double indirections (virtual/COM/function-pointer, etc.) that occur, which are themselves micro-inefficiencies. Over enough calls, these micro-inefficiencies might add up to a few milliseconds, compared to the direct access that's afforded by the consoles' lack of abstraction and direct hardware access.

There are also plenty of other micro-inefficiencies, like having an unpredictable OS and an unpredictable number of background applications/services stealing an unpredictable amount of CPU/GPU time away from your game.

 

A bigger factor is that having a fixed hardware spec, and much less abstracted access to that hardware, enables a different class of algorithms than in D3D. For example, when managing your own resources, you could create an R8G8 texture and an R16F texture which are allocated in the exact same memory area. This is called memory aliasing; in D3D11 it's sort of represented by having different "views" of a resource. In D3D9, though, it's impossible to implement resource aliasing at all.

Simply having this ability might allow a developer to halve the memory requirements of their render-target pool (and they might then spend the saved memory on increasing texture resolution on their models), or it might allow them to achieve a 4x improvement in the throughput of their post-processing effects, etc...

That is, individual algorithms might be implementable in ways that are otherwise impossible, and these individual algorithms could be anywhere from having the same performance to an order of magnitude better performance...

Because the boosts vary from system to system, it's impossible to quantify in the general case. You can only look at specific cases and compare specific optimized implementations.
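
As a purely illustrative sketch of the aliasing idea (on a real console you'd be placing these in GPU-visible memory with the platform's own allocator and respecting its alignment rules, not using malloc):

#include <cstdint>
#include <cstdlib>
#include <algorithm>

int main() {
    // Two transient render targets whose lifetimes never overlap within a frame:
    // a full-res R8G8 velocity buffer used early, and a half-res R16F bloom
    // buffer used late, sharing one allocation.
    const size_t width = 1280, height = 720;
    const size_t rg8Bytes  = width * height * 2;              // R8G8: 2 bytes per texel
    const size_t r16fBytes = (width / 2) * (height / 2) * 2;  // half-res R16F: 2 bytes per texel

    void* pool = std::malloc(std::max(rg8Bytes, r16fBytes));  // one block serves both

    uint8_t*  velocityRT = static_cast<uint8_t*>(pool);   // interpreted as R8G8 early in the frame
    uint16_t* bloomRT    = static_cast<uint16_t*>(pool);  // reused as R16F once velocity is consumed

    // ... render velocity, consume it, then it's safe to overwrite the memory with bloom ...
    (void)velocityRT;
    (void)bloomRT;

    std::free(pool);
    return 0;
}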

 

There are all sorts of tricks, of endless kinds, that console folks can pull off that aren't applicable to the generic desktop world.

e.g. if you've got a fixed GPU that you can talk to directly, then instead of calling graphics API functions at all, you can pre-compute the stream of packets of bytes that you would be sending to this hardware device and you can create a big buffer containing these bytes ahead of time, in a tool... then at runtime you can load that file of bytes, and start streaming them through to the GPU directly. It will behave as if you were calling all the right API functions, but with virtually zero CPU usage... That's only applicable if your rendering commands are static, so in one situation this might give a 100x saving, whereas in another situation it gives no savings.
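
Very roughly, and with an entirely made-up submission function (a real console SDK has its own mechanism for handing the GPU a command buffer), the runtime side of that idea might look like:

#include <cstdint>
#include <cstdio>
#include <vector>

// Hypothetical stand-in: on real hardware this would point the GPU's command
// fetch at the buffer (a register write / DMA kick), not something you write yourself.
void gpuSubmit(const uint32_t* commands, size_t wordCount) {
    (void)commands;
    (void)wordCount;
}

int main() {
    // Load the command stream that a tool baked offline.
    std::FILE* f = std::fopen("static_scene.cmds", "rb");
    if (!f) return 1;
    std::fseek(f, 0, SEEK_END);
    long bytes = std::ftell(f);
    std::fseek(f, 0, SEEK_SET);

    std::vector<uint32_t> cmds(static_cast<size_t>(bytes) / sizeof(uint32_t));
    std::fread(cmds.data(), 1, static_cast<size_t>(bytes), f);
    std::fclose(f);

    // Each frame: no per-draw CPU work at all, just hand the same bytes to the GPU.
    gpuSubmit(cmds.data(), cmds.size());
    return 0;
}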


Edited by Hodgman, 05 December 2013 - 05:17 PM.


#18 EarthBanana   Members   -  Reputation: 996


Posted 05 December 2013 - 10:45 PM

e.g. if you've got a fixed GPU that you can talk to directly, then instead of calling graphics API functions at all, you can pre-compute the stream of packets of bytes that you would be sending to this hardware device and you can create a big buffer containing these bytes ahead of time, in a tool... then at runtime you can load that file of bytes, and start streaming them through to the GPU directly. It will behave as if you were calling all the right API functions, but with virtually zero CPU usage... That's only applicable if your rendering commands are static, so in one situation this might give a 100x saving, whereas in another situation it gives no savings.

 

Honestly, I'm glad I don't ever work on projects where stuff like this is necessary. I watched a keynote with John Carmack talking about the texture strategy they used to get Rage to run well on the consoles. He talked a lot about how he hoped the graphics card companies would release drivers giving closer access to the hardware on PCs, basically saying that it's so much easier to optimize code for the consoles because of how close the API is to the hardware.

 

I personally do not enjoy this type of programming at all, though. It's interesting to read about, but I hate the idea of writing code to correctly swap memory in and out of here, and to make sure the data is being sent as fast as possible over there... gosh, what a headache that sounds like.



#19 mhagain   Crossbones+   -  Reputation: 8285


Posted 06 December 2013 - 02:00 AM

So, if I got it right... libGCM is the lowest level used on the PS3... PSGL was derived from OpenGL ES and calls the libGCM functions to deal with the PS3 hardware...

 

Almost, but not quite. GCM is the lowest level, but it's also available for direct use. So as a developer you don't have to use PSGL; you can use GCM itself and completely bypass the GL layer.

 

The "OpenGL Everywhere" people can frequently be seen claiming that using OpenGL allows you to target the PS3, but that's not actually true as nobody who wants performance will actually use PSGL - it's just too slow.  Instead, developers will use GCM itself.


It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#20 Krypt0n   Crossbones+   -  Reputation: 2684


Posted 06 December 2013 - 05:43 AM



 


e.g. if you've got a fixed GPU that you can talk to directly, then instead of calling graphics API functions at all, you can pre-compute the stream of packets of bytes that you would be sending to this hardware device and you can create a big buffer containing these bytes ahead of time, in a tool... then at runtime you can load that file of bytes, and start streaming them through to the GPU directly. It will behave as if you were calling all the right API functions, but with virtually zero CPU usage... That's only applicable if your rendering commands are static, so in one situation this might give a 100x saving, whereas in another situation it gives no savings.

 

Honestly, I'm glad I don't ever work on projects where stuff like this is necessary. I watched a keynote with John Carmack talking about the texture strategy they used to get Rage to run well on the consoles. He talked a lot about how he hoped the graphics card companies would release drivers giving closer access to the hardware on PCs, basically saying that it's so much easier to optimize code for the consoles because of how close the API is to the hardware.

 

I personally do not enjoy this type of programming at all, though. It's interesting to read about, but I hate the idea of writing code to correctly swap memory in and out of here, and to make sure the data is being sent as fast as possible over there... gosh, what a headache that sounds like.

 

APIs usually convert simple hardware calls into complex layers of abstract API calls. It always sounds like 'high-level APIs' make life easier, but in reality it's the opposite. Hardware can do so much more, and so much faster, if you talk to it directly, and it's way simpler. E.g. the now-arriving 'hipster' marketing buzz about hUMA and unified architectures and whatnot: that's simply how things already are if you work directly on the hardware. If you want, you can have just one memory allocator for the whole thing. Allocating a texture is as simple as:



uint32_t* myTexture = new uint32_t[width * height];  // one 32-bit texel per pixel, from your own allocator

and there it is (OK, in reality you have to allocate with some alignment etc., but I think you get my point here).

you want to fill it with data?



memcpy(myTexture, pTextureFromStreaming, width * height * sizeof(uint32_t));  // copy straight into the GPU-visible allocation

You do HDR tonemapping and you want to read back the downsampled average tone, and you don't care if it's from the previous frame or even frame n-2, as you don't want to stall on this call (e.g. if someone has 4x SLI, you don't want to stall even on the n-2 frame, as you'd effectively kill the SLI parallelization):



vec4 Tone = myHDRTexture[0];  // read the 1x1 average directly; no map/lock round trip

And there are tons of non-obvious things. E.g. a drawcall on PC goes through several security layers before your driver is even called. The driver then has to figure out what states you've changed and what memory areas were touched that should be synced to the particular rendering device, and finally it has to queue up the work and eventually add some synchronization primitives, as you might want to lock some buffer that is midway through the whole big command buffer it created.

On a console, a drawcall at the lowest level is simply:



myCommandBuffer[currentIndex++] = DrawCommand;  // append the draw packet to the command buffer
GPUcurrentIndex = currentIndex;                 // publish how far the GPU may read

That's why an old PS2 can push more drawcalls than your super-high-end PC. On consoles, nobody is really drawcall-count limited, simply because the hardware consumes the command buffer fast enough that you become limited elsewhere, unless you do ridiculous stuff like making a drawcall per triangle. On PC, and especially on phones (iOS/Android), you are frequently limited by drawcalls. A 333MHz PSP can draw more objects than your latest 2GHz quad-core cellphone, which has a GPU close to X360/PS3 performance.

 

APIs make sense for keeping stuff compatible, but I somehow doubt they make anything easier. They have, in a lot of cases, ridiculous limitations, and a lot of the time people just try to work around those. E.g. we had register limits for shaders which made it necessary to introduce pixel shader 2.0a and 2.0b, which essentially was one version for ATI and one for NVIDIA, as they could not hack around the API limitation in the drivers the way they so frequently do in other cases.

It's no different nowadays: modern hardware can use 'bindless' resources, which means you can just set pointers to textures etc. and use those. NVIDIA supports some extensions which, in combination, allow you to draw the whole scene with very few drawcalls. But it's an extension; it will take time until maybe DirectX supports it, and in reality it's again just a workaround for the APIs. On console you won't need that kind of multi-draw, as you can simply get to the HW limit by pushing individual drawcalls.

 

 

And if someone doesn't like just sending drawcalls, then I suggest that person shouldn't bother fiddling around with OGL/D3D; it's so much easier to get some engine that deals with that for you. You can still modify all aspects, but you don't have to, and at that point you wouldn't care what the engine does underneath. Actually, you'd maybe want it to be directly on the hardware anyway; otherwise you build a level, it's very low-poly, yet it becomes slow, and you get told (even as an artist): "Well, you cannot have more than 2500 visible objects on the screen, it's slow. Yes, I know you have all those tiny grass pieces that should render in a millisecond, but those are 2k drawcalls. Go and combine them, but don't make them too big, we don't want to render all the invisible grass either..." Have fun with that instead of building another fun map.





