krinosx

OpenGL
Is it all about DirectX and OpenGL?


Hi guys,

I don't even know if this is the right place to post my question, but here goes.

I was wondering whether all platforms support DirectX and/or OpenGL.

Let me try to clarify my doubts:

I know Windows supports DirectX (and also OpenGL), as do the Xbox and Windows Phone (maybe).

As far as I know, the other platforms (Mac OS X, iOS on iPhone/iPad, Android, Linux, etc.) support OpenGL, or OpenGL ES on mobile.

But what about the PlayStation 3 and PlayStation 4? The PlayStation Vita? The Wii, GameCube, Nintendo 3DS, and so on?

What are their drawing APIs? Do they support some kind of OpenGL? Are there other drawing APIs for consoles, maybe a proprietary API for each platform?

I know most engines convert your code to platform-native code. With Unity3D, for example, you write your game in Unity and it compiles your code and generates the native code (or something like that) for iPhone, Xbox, PlayStation, etc. But I want to know what happens at a low level: what do Unity-like engines use to target each specific platform?

And if you use something like a 'PlayStation SDK' (I don't even know if it exists), how do you develop your graphics?

Well, thanks in advance for any replies.

ps: Sorry about my English, I am not a native speaker/writer.

 


The PlayStation 3 provides an SDK which you have to use to make games for it. Chances are it's neither DirectX nor OpenGL (I don't have one, so I don't know), and it's probably the same situation with the PS4, PS Vita, and other consoles. They use more or less the same GPU architecture, so the same theory applies: vertex buffers, index buffers, shaders, etc.


Thanks for your reply, Zaoshi!!

So it's a good idea to write a 'wrapper' layer over the graphics API if you want to write a cross-platform game engine, right? :)

Can someone who uses the PlayStation SDK and/or some 'Nintendo SDK' share some knowledge??

Thanks!! :D



Yeah, you want your own wrapper around the different graphics APIs (libgcm for PS3, libgnm for PS4, D3D11, the Xbox flavour of DirectX, OpenGL, etc.). Coming up with the right level of abstraction can take some time, and you need to learn the differences between the APIs pretty well to get it right, but overall it's not too difficult.

As Zaoshi mentioned, the constructs are pretty similar across all the major graphics APIs. Some things are still a touch different, such as constant management (GLES2 vs. D3D11/OGL3+ vs. consoles), and console graphics APIs usually expose a lot more than is typically available on PC through DirectX and OpenGL.
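Something like this, roughly. A minimal sketch only; every name in it (IRenderDevice, the platform macros, the method set) is made up for illustration, since the real console APIs are under NDA and look different:

#include <cstddef>
#include <cstdint>

// Hypothetical engine-side description of a buffer; not from any real SDK.
struct BufferDesc { std::size_t sizeBytes; const void* initialData; };
using BufferHandle = std::uint32_t;

// The engine codes against this interface only; each platform supplies one
// implementation, selected at build time.
class IRenderDevice {
public:
    virtual ~IRenderDevice() = default;
    virtual BufferHandle createVertexBuffer(const BufferDesc& desc) = 0;
    virtual void setVertexBuffer(BufferHandle vb) = 0;
    virtual void draw(std::uint32_t vertexCount, std::uint32_t firstVertex) = 0;
};

// Per-platform backends implement the same interface, e.g. (sketch only):
//   class D3D11RenderDevice : public IRenderDevice { ...wraps ID3D11DeviceContext calls... };
//   class GcmRenderDevice   : public IRenderDevice { ...builds libgcm command buffers... };
// The build picks exactly one backend per platform.

The hard part is choosing the method set so it maps cheaply onto every backend, rather than mirroring any single API.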


The "western" console uses an enhanced version of their known graphics API and the "oriental" one uses its own completely new API, nobody can disclose more, I'm afraid:D Dunno about the other "oriental" console. However the concepts are really the same, also the shading languages are extremely similar, so you can wrap it and port it quite easily.

Hmm, interesting... with the names ATEFred gave I was able to do some research ('googled it') and found some references to LibGCM for the PS3...

From what I could find, it's a derivation of OpenGL ES 1.0... so it's an 'OpenGL-like' library, with its differences I imagine, but not something completely new...

Some time ago I read about graphics programming on old consoles (Atari, Master System, etc.), and it was a very, hmm, how can I say it, primitive way to develop... OpenGL and DirectX were a big evolution, and I think they are the 'top of mind' technology for now...

I'm a bit curious about what cannot be 'disclosed' for now, as pcmaster says... but I imagine the 'insiders' (the pro game developers, the guys who work in the industry) may know about some new platform coming... I hope it becomes public someday, and/or that I can find my place in the game industry :)

Thanks guys for the information given!!

Much appreciated!
I would guess the Xbox One uses some sort of Direct3D 11 API (not sure, just going by the hardware used and Microsoft being the manufacturer :)). I believe it was similar with the Xbox 360, which used a Direct3D 9 variant (with XNA available on top for managed development).

Thanks for the information shared, guys!!

> As linked above, you're talking about PSGL, not GCM. GCM is the native API. PSGL is a wrapper around GCM that gives it a more GL-like interface.

Hmm, thanks for the clarification! So, if I got it right... libGCM is the lowest level used on the PS3, and PSGL was derived from OpenGL ES and calls the libGCM 'methods/functions/calls' to deal with the PS3 hardware...

> Whenever you're working with private SDKs, you have to sign a non-disclosure agreement, which basically means that in exchange for access to these tools, you have to treat them as top-secret. Breaching the secrecy agreements will get your company's licence revoked, and/or yourself fired.

Hmm, I can imagine... so, as I said, I hope someday I can find a place in the game industry and get to work with these tools...

> AMD's upcoming Mantle API is supposed to bring many of these abilities to the PC space.

Never heard of it... I will google it and see what I can find :) Thanks for the info! (I will shoot myself in the foot a few times for sure!)

> There's actually some Gamefest presentations you can find that go into some of the details, if you're interested.

I will take a look at https://www.microsoftgamefest.com/pastconferences.htm and see what I can find! Thanks a lot!


> allows game consoles to perform several times better than would a PC with identical hardware.

Are you really claiming that, with identical hardware, a console will have 3x better performance than a PC?


Try playing GTA4 or another modern console game on a high-end PC from 2006 (or on a 3GHz PowerPC Mac with a GeForce 7) and find out ;-)

To be fair, that is not only down to the APIs giving you lower-level access to the hardware (though that plays a part, of course). A big part is optimizing your game (engine, assets, the lot) for just one fixed setup.

E.g. if you've got a fixed GPU that you can talk to directly, then instead of calling graphics API functions at all, you can pre-compute the stream of packets of bytes that you would be sending to the hardware device, and build one big buffer containing those bytes ahead of time, in a tool. Then at runtime you load that file of bytes and start streaming them straight through to the GPU. It behaves as if you were calling all the right API functions, but with virtually zero CPU usage. That's only applicable if your rendering commands are static, so in one situation this might give a 100x saving, whereas in another it gives no saving at all. A sketch of the idea follows.
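In code, the shape of it is something like this. The packet values and the submitToGpu callback are made-up placeholders (real packet formats are hardware-specific and NDA'd); the point is just the offline-bake/runtime-replay split:

#include <cstdint>
#include <cstdio>
#include <vector>

// Offline tool: encode the draw commands once and write the raw bytes to disk.
void bakeCommands(const char* path) {
    std::vector<std::uint32_t> packets;
    packets.push_back(0x00000010u); // e.g. a "set vertex buffer" packet header (made up)
    packets.push_back(0x00000042u); // e.g. a "draw N vertices" packet (made up)
    if (FILE* f = std::fopen(path, "wb")) {
        std::fwrite(packets.data(), sizeof(std::uint32_t), packets.size(), f);
        std::fclose(f);
    }
}

// Runtime: load the bytes and hand them to the GPU unchanged. No per-draw API
// calls happen here, so the CPU cost is just the I/O plus one submission.
void replayCommands(const char* path,
                    void (*submitToGpu)(const void*, std::size_t)) {
    std::vector<std::uint32_t> packets;
    if (FILE* f = std::fopen(path, "rb")) {
        std::fseek(f, 0, SEEK_END);
        packets.resize(static_cast<std::size_t>(std::ftell(f)) / sizeof(std::uint32_t));
        std::fseek(f, 0, SEEK_SET);
        std::fread(packets.data(), sizeof(std::uint32_t), packets.size(), f);
        std::fclose(f);
    }
    submitToGpu(packets.data(), packets.size() * sizeof(std::uint32_t));
}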

 

Honestly, I'm glad I don't ever work on projects where stuff like this is necessary. I watched a keynote where John Carmack talked about the texture strategy they used to get Rage to run well on the consoles. He talked a lot about how he hoped the graphics card companies would release drivers giving closer access to the hardware on PC, basically saying that it's so much easier to optimize code for the consoles because of how close the API is to the hardware.

I personally do not enjoy this type of programming at all, though. It's interesting to read about, but I hate the idea of writing code to correctly swap memory in and out of here, and to make sure the data is being sent as fast as possible there... gosh, what a headache that sounds like.

> So, if I got it right... libGCM is the lowest level used on the PS3... PSGL was derived from OpenGL ES and calls the libGCM 'methods/functions/calls' to deal with the PS3 hardware...

Almost, but not quite. GCM is the lowest level, but it's also available for direct use. So as a developer you don't have to use PSGL; you can use GCM itself and completely bypass the GL layer.

The "OpenGL everywhere" people can frequently be seen claiming that OpenGL lets you target the PS3, but that's not actually true, because nobody who wants performance will actually use PSGL; it's just too slow. Instead, developers use GCM itself.



 



High-level APIs wrap simple hardware operations in complex layers of abstraction. It always sounds like high-level APIs make life easier, but in reality it's often the opposite: the hardware can do so much more, so much faster, if you talk to it directly, and it's way simpler. E.g. the current 'hipster' marketing buzz around hUMA and unified memory architectures just describes what you already have when you work directly on the hardware. If you want, you can have a single memory allocator for the whole machine. Allocating a texture is as simple as

myTexture = new uint32_t[width * height]; // one plain allocation, visible to CPU and GPU alike

and there it is (OK, in reality you have to allocate with some alignment etc., but I think you get my point).
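To make that parenthetical concrete, here is a sketch of what 'with some alignment' might look like in portable C++17. The 128-byte figure is an arbitrary stand-in; the real alignment requirement comes from the platform's docs:

#include <cstddef>
#include <cstdint>
#include <cstdlib>

// std::aligned_alloc requires the size to be a multiple of the alignment,
// so round the byte count up first. 128 is an arbitrary example value.
uint32_t* allocTexture(std::size_t width, std::size_t height) {
    std::size_t bytes = width * height * sizeof(uint32_t);
    bytes = (bytes + 127) & ~static_cast<std::size_t>(127);
    return static_cast<uint32_t*>(std::aligned_alloc(128, bytes));
}

void freeTexture(uint32_t* p) { std::free(p); }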

You want to fill it with data?



memcpy(myTexture, pTextureFromStreaming, width * height * sizeof(uint32_t)); // a straight memcpy into GPU-visible memory

You do HDR tone mapping and want to read back the downsampled average tone, and you don't care if it comes from the previous frame or even frame n-2, because you don't want to stall on the read (e.g. if someone runs 4x SLI, stalling even on the n-2 frame would effectively kill the SLI parallelization):



vec4 Tone = myHDRTexture[0]; // just read the texel: no Map/Lock, no driver synchronization

And there are tons of non-obvious things. E.g. a draw call on PC goes through several security layers before your driver is even called. The driver then has to figure out what states you've changed and what memory areas you've touched that must be synced to that particular rendering device, and finally it has to queue up the work and eventually insert synchronization primitives, because you might want to lock some buffer that is referenced midway through the big command buffer it created.

On a console, a draw call at the lowest level is simply



myCommandBuffer[currentIndex++] = DrawCommand; // append the packet...
GPUcurrentIndex = currentIndex;                // ...and publish the new end of the buffer to the GPU

That's why an old PS2 can push more draw calls than your super-high-end PC. On consoles nobody is really limited by draw-call count, simply because the hardware consumes the command buffer fast enough that you become limited elsewhere first, unless you do something ridiculous like one draw call per triangle. On PC, and especially on phones (iOS/Android), you are frequently draw-call limited: a 333MHz PSP can draw more objects than your latest 2GHz quad-core phone, whose GPU is close to X360/PS3 performance.

 

APIs make sense for keeping things compatible, but I somewhat doubt they make anything easier. In a lot of cases they have ridiculous limitations, and much of the time people just work around them. E.g. the register limits for shaders made it necessary to introduce pixel shader 2.0a and 2.0b, essentially one version for ATI and one for NVIDIA, because the vendors could not hack around that API limitation in their drivers the way they so frequently do in other cases.

It's no different nowadays. Modern hardware can use 'bindless resources', which means you can just set pointers to textures etc. and use them. NVIDIA supports some extensions which, in combination, let you draw the whole scene with very few draw calls (a sketch is below). But it's an extension; it will take time until, maybe, DirectX supports it, and in reality it's again just a workaround for the APIs. On a console you don't need that kind of multi-draw, because you can reach the hardware limit just by pushing individual draw calls.
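For the curious, this is roughly what the bindless idea looks like through the ARB_bindless_texture extension on the OpenGL side. A minimal sketch assuming a GL 4.x context with the extension available; the SSBO usage and variable names are just for illustration:

#include <GL/glew.h>
#include <vector>

// Get a 64-bit handle per texture, make each resident, and ship the handles
// to the shader in a buffer instead of rebinding textures between draws.
void uploadTextureHandles(const std::vector<GLuint>& textures, GLuint ssbo) {
    std::vector<GLuint64> handles;
    handles.reserve(textures.size());
    for (GLuint tex : textures) {
        GLuint64 h = glGetTextureHandleARB(tex);   // a 64-bit "pointer" to the texture
        glMakeTextureHandleResidentARB(h);         // promise the driver it stays usable
        handles.push_back(h);
    }
    glBindBuffer(GL_SHADER_STORAGE_BUFFER, ssbo);
    glBufferData(GL_SHADER_STORAGE_BUFFER,
                 handles.size() * sizeof(GLuint64), handles.data(),
                 GL_STATIC_DRAW);
}

// In GLSL (with #extension GL_ARB_bindless_texture : require) the shader can
// then declare a buffer of sampler2D and sample any of them without a single
// glBindTexture between draws.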

 

 

And if someone doesn't enjoy just sending draw calls, then I'd suggest that person shouldn't fiddle around with OGL/D3D at all; it's so much easier to pick up an engine that deals with all of that for you. You can still modify every aspect, but you don't have to, and at that point you wouldn't care what the engine does underneath. Actually, even then you'd want it to sit directly on the hardware: otherwise you build a level that is very low-poly, yet it runs slowly, and you get told (even as an artist) "well, you cannot have more than 2500 visible objects on the screen, it's slow. Yes, I know you have all those tiny grass pieces that should render in a millisecond, but those are 2k draw calls; go combine them, but don't make the batches too big, we don't want to render all the invisible grass either..." Have fun with that instead of building another fun map.


 


> nobody who wants performance will actually use PSGL - it's just too slow. Instead, developers will use GCM itself.

This is not precisely true. Rage used PSGL to manage states etc., but GCM to build command buffers. So while pure PSGL is probably a bad idea, there is some precedent for putting it into production.


