IgnatusZul

OpenGL
Universal OpenGL Version

20 posts in this topic

I've seen quite a few projects start like this. "Which is the best OGL for compatibility?" And they land on OpenGL 2.1.

 

It's true that pretty much anything you grab will support OGL 2.1 (grab a can off the street and it probably supports 2.1). The thing is, I've seen projects like this go on for years. OGL 2.1 was a good choice, compatibility-wise, three or four years ago. But today? By the time their game hits the street? Not so much.

 

So I'd pick something from OpenGL 3.0 upwards.
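Whatever version you settle on, it's worth detecting at runtime what you actually got instead of assuming. A minimal sketch in C++ (assumes a context is already current and your platform's GL headers/loader are set up; the spec guarantees the version string begins with "major.minor"):

    #include <GL/gl.h>
    #include <cstdio>

    // Parse the context's actual GL version so you can pick a codepath
    // or show a clear error. Returns false if parsing fails (e.g. no
    // current context).
    bool getGLVersion(int *major, int *minor)
    {
        const char *ver = (const char *)glGetString(GL_VERSION);
        if (!ver)
            return false;
        return sscanf(ver, "%d.%d", major, minor) == 2;
    }

If major comes back as 3 or higher you take the modern path; otherwise fall back, or show a clear "please update your drivers" message rather than crashing on a missing entry point.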


Also, many of them feel that updating drivers is a difficult thing to do (too technical for them) and are afraid that they will mess up their system.

Honestly, this has mainly to do with one of the most common recommendations for installing drivers: boot into safe mode, uninstall the old driver, then install the new one. You don't actually have to do that, but it isn't hard to see where the fear comes from. Besides, people assume that if it already works as-is it's probably fine, not realizing that old drivers may be leaving features unused (e.g. the drivers bundled with the GeForce 7 expose OpenGL 2.0, but the newest drivers provide OpenGL 2.1).

 

 

In reality, there are a lot more games that use DirectX than use OpenGL (easily a 10-to-1 ratio). So Intel/AMD/NVIDIA have not had much incentive to keep the quality of their OpenGL drivers on par with that of their DirectX drivers. But the quality of OpenGL drivers has greatly improved in the past few years.

It's a chicken-and-egg situation: if nobody uses OpenGL there's no incentive to improve its support, which in turn means nobody wants to use it, and so on in a feedback loop. I think id Software is pretty much the only reason it didn't die completely. At least OpenGL 3 seems to have gotten all the vendors back on board, apparently because it had enough of a reputation that lack of support would look stupid (maybe the backlash when Vista was rumored to lack OpenGL support was a hint, even though that turned out to be false).

 

 

The bad news is that a lot of people are still using (or stuck with) older, buggy OpenGL drivers.

I wouldn't expect those to care about gaming anyway ^^; (or to have something that supports anything newer than 1.1 for that very reason...)

 

EDIT: Basically, if you care about people with old systems (especially people in e.g. developing countries, where hardware can be quite expensive), OpenGL 2 may be a good compromise. If you can expect reasonably decent hardware, OpenGL 3 is better. OpenGL 4 is best treated as optional for now unless you really need the most powerful hardware (i.e. support it if you want, but don't assume it'll be very common).

 

If somebody is stuck with OpenGL 1, that's most likely the kind of person you wouldn't want to bother targeting anyway... either their hardware is so weak it will slow down without much effort, or they're the kind of person who'd rather stick to browser games (if they play games at all).

Edited by Sik_the_hedgehog

The bad news is that a lot of people are still using (or stuck with) older, buggy OpenGL drivers.

I wouldn't expect those to care about gaming anyway ^^; (or to have something that supports anything newer than 1.1 for that very reason...)

 

You would be surprised. Check out the Steam or Bethesda forums for Rage: there were an awful lot of so-called "hardcore gamers" who had issues because they never bothered updating their drivers, not to mention plenty more who had issues because they were randomly copying individual DLL files all over their systems without any clear knowledge of what they were doing. (It's also a good example of how poor driver support can mess up a game.)


Many thanks for the feedback everyone.

Turns out this is the ugly part of game dev. Hopefully pumping up the system requirements, together with some proper error handling, will make people aware of what they need.

I'm targeting people with decent computers: something that can render 3D graphics with post-processing at a playable framerate. I really REALLY want to avoid the old pipeline; it just seems dirty. Do any newer AAA games even use the old pipeline these days?

 

For example, I'm interested to know which versions of OGL Valve uses for its games on Mac.

 

And I'll probably just end up going with 3.2; it seems to be the better choice.


Theoretically you could also program in a relatively modern style, with VBOs and shaders, even on 2.1, if you accept a few quirks and don't need all the new features.

If you can accept people with weak onboard chips not getting to play then 3.x should be fine.
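For illustration, here's roughly what that "modern style on 2.1" mentioned above looks like: VBOs plus GLSL 1.20 shaders, no fixed-function state. A minimal sketch, assuming an extension loader such as GLEW has pulled in the 2.x entry points (error checking trimmed for brevity):

    #include <GL/glew.h>

    // Upload a triangle into a VBO (core since GL 1.5, so fine on 2.1).
    GLuint makeTriangleVBO()
    {
        static const GLfloat verts[] = { -0.5f, -0.5f,
                                          0.5f, -0.5f,
                                          0.0f,  0.5f };
        GLuint vbo = 0;
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
        return vbo;
    }

    // Build a GLSL 1.20 program, the shading language GL 2.1 ships with.
    GLuint makeProgram()
    {
        const char *vs =
            "#version 120\n"
            "attribute vec2 pos;\n"
            "void main() { gl_Position = vec4(pos, 0.0, 1.0); }\n";
        const char *fs =
            "#version 120\n"
            "void main() { gl_FragColor = vec4(1.0); }\n";
        GLuint v = glCreateShader(GL_VERTEX_SHADER);
        glShaderSource(v, 1, &vs, 0);
        glCompileShader(v);
        GLuint f = glCreateShader(GL_FRAGMENT_SHADER);
        glShaderSource(f, 1, &fs, 0);
        glCompileShader(f);
        GLuint prog = glCreateProgram();
        glAttachShader(prog, v);
        glAttachShader(prog, f);
        glLinkProgram(prog);
        return prog;
    }

At draw time you bind the VBO, point an attribute at it with glVertexAttribPointer, and call glDrawArrays; there are no VAOs in 2.1, which is one of the quirks you'd be accepting.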


I'm targeting people with decent computers: something that can render 3D graphics with post-processing at a playable framerate...

 

In that case go for 3.x - it's all achievable with earlier versions for sure, but you'll have a much nicer time using 3.x.

 

One project I was involved in until maybe this time last year (where I'd initially thought I was being brought in just to optimize the renderer), one of the leads was absolutely insistent on the "what about older hardware?" line, yet was also pushing very heavily for lots of post-processing, lots of complex geometry, lots of real-time dynamic lighting, etc. I ended up with an insane mixture of core GL1.4 with a software wrapper around VBOs, ARB assembly programs, glCopyTexSubImage2D, multiple codepaths for everything, and an edifice so fragile that I was terrified of even bugfixing it (the fact that it was built on an originally GL1.1 codebase, fairly crankily and inflexibly maintained up to that point, didn't help). It was a nightmare; I walked out one day without saying a word and just didn't come back.

 

It's just not worth going down that route - you'll only burn yourself out. So either dial back the ambitions and use an earlier version, or else keep the ambitions and use the most recent version that's reasonable and sane. But don't try to mix the two.

Edited by mhagain

Hi,

 

I have what I believe is a relevant question here, which I'm actually handling in a job project: making a 2D game with jMonkey that can run through OpenGL on WinXP or higher.

 

OpenGL 2.1, which my jMonkey installation has, is my heavy favorite for WinXP-or-higher compatibility. I don't need any advanced OpenGL features. Am I on the right track?

 

Where can I get information on what version of OpenGL ships with WinXP, Vista, Win7, and Win8? (Really I'm only interested in WinXP, to pin down the minimum OpenGL requirements.)

 


Edited by 3Ddreamer

All versions of Windows ship with OpenGL 1.1 (plus a small handful of extensions), but this is a software-emulated OpenGL. The key thing here is that OpenGL is not an OS component, so it doesn't really make sense to talk about "what version of OpenGL ships with Windows": OpenGL is implemented in your 3D card's driver, so it's shipped by the 3D hardware vendor.
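Which is also why it's worth detecting that software implementation at startup: it identifies itself through the vendor and renderer strings, so you can tell the user to install vendor drivers instead of mysteriously failing later. A small sketch (assumes a context is current; the strings shown are what Microsoft's implementation actually reports):

    #include <GL/gl.h>
    #include <cstring>

    // Detect Windows' built-in software GL 1.1 ("GDI Generic") so the
    // game can ask for real drivers instead of crashing on a missing
    // GL 2.x+ entry point.
    bool isSoftwareGL()
    {
        const char *vendor   = (const char *)glGetString(GL_VENDOR);
        const char *renderer = (const char *)glGetString(GL_RENDERER);
        return vendor && renderer &&
               strcmp(vendor, "Microsoft Corporation") == 0 &&
               strcmp(renderer, "GDI Generic") == 0;
    }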


I think the sweet spot is 3.3, but since the Mac only supports 3.2 it leaves Macs out. 3.3 is akin to 4.0 but targeted at DX10-class cards, so you get the modern API on legacy hardware too.
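For what it's worth, here's a sketch of requesting exactly that 3.2 core context. GLFW is used purely as an example windowing library (an assumption, nobody in the thread mentioned it); the equivalent WGL/GLX/CGL attributes exist too. Note that OS X only hands out 3.2 core if you also request a forward-compatible context:

    #include <GLFW/glfw3.h>

    // Call after glfwInit(). Asks for a 3.2 core profile context,
    // which is the highest version OS X exposes.
    GLFWwindow *createWindow32Core()
    {
        glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
        glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 2);
        glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
        // Required on OS X for any core context above 2.1.
        glfwWindowHint(GLFW_OPENGL_FORWARD_COMPAT, GL_TRUE);
        return glfwCreateWindow(800, 600, "GL 3.2 core", 0, 0);
    }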


Well, Macs are about 1/4 of the target market according to my research. So it seems that implementing up to 3.2, plus a notification message telling the user to update their OpenGL drivers if needed, will be in order.

 

I have no idea yet how to jump from the default 2.1 to 3.2 with jMonkey, but I'm sure the community there has the method, likely handled at the tool level (keeping the development software updated for OpenGL 3.2).

 

Thanks!


For example, you can have a card that supports multiple render targets, with a maximum of one. You can have a card that supports vertex texture fetch with zero fetches.

ATI pulled that one on D3D9.0c as well: to be D3D9.0c compliant you have to support MRT and VTF, so ATI returned true for those caps, but then also returned 1 for the maximum MRT count, and returned false for every texture format when queried as to whether it was VTF-capable...
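Which is exactly why, under D3D9, the cap bit alone was never enough: you had to read the actual limits and probe formats too. A sketch of that double-check (the d3d9/device objects are assumed to be created already; R32F is used as the probe format since it was the common VTF format):

    #include <d3d9.h>

    // The cap bit says "supported"; the limits say whether it's usable.
    bool supportsUsableMRTandVTF(IDirect3D9 *d3d9, IDirect3DDevice9 *device)
    {
        D3DCAPS9 caps;
        device->GetDeviceCaps(&caps);

        // "Supports MRT" means nothing if the max count is still 1.
        bool mrt = caps.NumSimultaneousRTs > 1;

        // VTF is only real if at least one texture format can actually
        // be fetched from the vertex shader.
        bool vtf = SUCCEEDED(d3d9->CheckDeviceFormat(
            D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
            D3DUSAGE_QUERY_VERTEXTEXTURE, D3DRTYPE_TEXTURE, D3DFMT_R32F));

        return mrt && vtf;
    }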


For example, you can have a card that supports multiple render targets, with a maximum of one. You can have a card that supports vertex texture fetch with zero fetches.

ATI pulled that one on D3D9.0c as well: to be D3D9.0c compliant you have to support MRT and VTF, so ATI returned true for those caps, but then also returned 1 for the maximum MRT count, and returned false for every texture format when queried as to whether it was VTF-capable...

 

GL_ARB_occlusion_query allows the query counter bits to be 0. What's worse is that this was a deliberate decision by the ARB, made to allow vendors that don't support occlusion queries to claim GL1.5 support; see http://www.opengl.org/archives/about/arb/meeting_notes/notes/meeting_note_2003-06-10.html for more info on that one.
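So before trusting occlusion queries, even on a driver advertising GL 1.5, you have to read the counter size yourself. A minimal sketch, assuming the GL 1.5 / ARB_occlusion_query entry points are loaded (e.g. via GLEW):

    #include <GL/glew.h>

    // A conforming driver may report 0 counter bits: queries "exist"
    // but every result would be zero, so treat that as unsupported.
    bool occlusionQueriesUsable()
    {
        GLint bits = 0;
        glGetQueryiv(GL_SAMPLES_PASSED, GL_QUERY_COUNTER_BITS, &bits);
        return bits > 0;
    }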


I remember reading an nVidia employee's response on OpenGL.org to a poster's annoyance that the noise function was always returning 0. The response was (and I'm paraphrasing) "the specs state to return a number in the range [0,1], therefore returning 0 conforms to the spec".


Sorry if this is somewhat off-topic, but the three posts above mine (especially GeneralQuery's mention of NVIDIA) remind me of the time I was porting some code in an older version of the NVSDK (5.21) from Direct3D 8 to Direct3D 9. I was particularly interested in the bump refraction demo submitted from Japan, which used a proprietary texture format, "NVHS". I never found anywhere in NVIDIA's documentation saying this format was only supported on the GeForce 3 and 4 Ti series GPUs, so I was getting upset that I couldn't get the feature to work on my 8400 GS M. I assumed it was just a problem with my drivers, but to be sure I asked some other people to verify whether it worked on their machines. It turns out that when checking the device caps, the driver claims the texture format is supported on all NVIDIA cards, but creating a texture with that format will always fail unless your GPU is from the NV2x series.

 

I tried to warn NVIDIA about this driver bug, but to no avail. It's not too relevant now, since nobody used that format (Q8W8V8U8 was more compatible anyway) and DirectX 9 is slowly but surely dying either way.
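The moral for anyone hitting similar vendor quirks: don't stop at the caps query; actually try creating the resource and handle failure. A small illustrative sketch (the device is assumed to exist, and the format parameter stands in for any vendor-specific format):

    #include <d3d9.h>

    // Caps can lie; a tiny trial allocation is the honest test.
    bool formatReallyWorks(IDirect3DDevice9 *device, D3DFORMAT fmt)
    {
        IDirect3DTexture9 *tex = 0;
        HRESULT hr = device->CreateTexture(64, 64, 1, 0, fmt,
                                           D3DPOOL_DEFAULT, &tex, 0);
        if (FAILED(hr))
            return false;   // caps said yes, driver said no
        tex->Release();
        return true;
    }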


It may be veering off-topic, but all of this does serve to highlight one key point relevant to the original post: vendor shenanigans are widespread, and no matter which API (or which version of an API) you choose, you still have to tread a little carefully.

Edited by mhagain

