OpenGL is slightly schizophrenic (yes, I know this is not the real definition of "schizophrenic") in this regard.
Early OpenGL was designed around some very specific hardware of its time (hardware that no longer exists) and was closely coupled to the capabilities of that hardware.
Mid-period OpenGL became a fairly abstract virtual machine which didn't really reflect what the hardware of its time was capable of doing or how it worked.
Modern OpenGL is getting back to being more tightly coupled to the way hardware works.
So, early OpenGL is the simple, clean, elegant API that the likes of John Carmack raved about back in 1996. In truth there weren't that many exciting hardware capabilities back then anyway: you could put on some textures, do some limited blending, and that was about all. The entire per-vertex pipeline was typically run in software, so it didn't hurt so much to have to respecify your geometry each frame. The cool kids with expensive hardware got to cache some stuff with display lists, but graphics memory was so tight that you couldn't do too much of this.
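As a historical aside, that display-list caching looked roughly like this. This is a sketch of the long-removed legacy API, not modern usage, and it assumes a live GL 1.x context to actually run:

```c
/* Legacy (OpenGL 1.x) display-list sketch -- these entry points were
   removed from core profiles long ago.  Assumes a current GL context. */
GLuint list = glGenLists(1);

glNewList(list, GL_COMPILE);        /* record the commands once... */
glBegin(GL_TRIANGLES);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glVertex3f( 0.0f,  1.0f, 0.0f);
glEnd();
glEndList();

/* ...then replay them cheaply each frame; with luck the driver kept
   them resident in (scarce) graphics memory. */
glCallList(list);
```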
Mid-period OpenGL went from about 1998 to 2008. It started small, just adding multitexture, but some bad design decisions were made at the time (and were carried forward into later features). The API eventually became bloated and overloaded: esoteric features, multiple different ways of doing the same thing, and interminable delays getting important features into the core API, resulting in massive fragmentation as it had to be propped up with vendor extensions. Meanwhile those initial bad decisions came back to bite as the requirements of those actually using the API (as opposed to thinking abstractly about it) became more complex and demanding.
Modern OpenGL runs from 2008 to the present day. Good, new, important features are coming in, new extensions address much of the mid-period mess, and the deprecation mechanism addresses the "multiple different ways of doing the same thing" mess (even if it doesn't quite work out in practice, at least the existence of core contexts clues you in on the API paths you should be using).
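Concretely, you opt in to that cleaned-up path by asking for a core profile at context creation. A sketch using GLFW 3 (my choice here, not the only option; any toolkit that exposes context attributes works the same way, and this needs a window system to actually run):

```c
/* Sketch: requesting a core-profile context with GLFW 3 (assumed here).
   In the resulting context the deprecated entry points simply don't
   exist, so the API surface itself tells you which paths to use. */
glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
GLFWwindow *win = glfwCreateWindow(800, 600, "core", NULL, NULL);
/* glBegin/glEnd, display lists, the matrix stack, etc. are all gone. */
```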
If you're using early or modern OpenGL you may find it too low-level. You just get basic (as in fundamental, not simple) capabilities and it's up to you to put them together in a way that actually does something.
If you're using mid-period OpenGL you may find (parts of) it too high-level. This is the dreaded "switches and dials" state machine with multiple selectors, hugely complex drivers, messy interacting states, an API model that tries (and ultimately fails) to model 4 or 5 different (sometimes wildly different) generations of hardware, and guess what - it's still up to you to put all of this together in a way that actually does something.
If you consider that "put them together in a way that actually does something" is too low-level, then you've ultimately just chosen the wrong tool for what it is you want to do. No graphics API is going to give you the higher-level capabilities you'd need; you're much better off with a pre-built framework or even with somebody else's game engine. Graphics APIs need to exist because higher-level functionality must be built on lower-level functionality, and even if you wanted OpenGL to be higher-level (and assuming that the ARB would take up this desire) there would still need to be a lower-level layer under it. Considering OpenGL to be too low-level doesn't mean that something is wrong with OpenGL; it means that something is wrong with the choice you've made, and that you need to back off and rethink.
Edited by mhagain, 26 January 2014 - 06:32 PM.