Ork: a new object-oriented API on top of OpenGL

Original post by sylvestre
Ork, for OpenGL Rendering Kernel, provides an object-oriented C++ API on top of OpenGL. Using Ork can greatly simplify the implementation of OpenGL applications. For instance, suppose that you want to draw a mesh into an offscreen framebuffer, with a program that uses a texture. Assuming that these objects are already created, the OpenGL API requires something like this:
glUseProgram(myProgram);                          // select the shader program
glActiveTexture(GL_TEXTURE0 + myUnit);            // select a texture unit
glBindTexture(GL_TEXTURE_2D, myTexture);          // bind the texture to that unit
glUniform1i(glGetUniformLocation(myProgram, "mySampler"), myUnit); // point the sampler at the unit
glBindBuffer(GL_ARRAY_BUFFER, myVBO);             // bind the vertex buffer
glVertexAttribPointer(0, 4, GL_FLOAT, GL_FALSE, 16, (void*)0); // describe the vertex layout
glEnableVertexAttribArray(0);                     // enable attribute 0
glBindFramebuffer(GL_FRAMEBUFFER, myFramebuffer); // render offscreen
glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);            // draw the mesh
With the Ork API you need only two steps (and the first one does not need to be repeated before each draw, unless you want a different texture for each draw):
myProgram->getUniformSampler("mySampler")->set(myTexture); // bind the texture to the sampler uniform
myFramebuffer->draw(myProgram, *myMesh);                   // draw the mesh offscreen with this program
The Ork API fully covers the OpenGL 3.3 core profile, and partially covers the OpenGL 4.0 and 4.1 core profile APIs (tessellation shaders are supported, but uniform subroutines, binary shaders and programs, pipeline objects, separable shaders, and multiple viewports are not yet supported). Ork has just been released as an Open Source project, under the LGPL license.

Reply by SeaBourne:
Looks great from what I've seen, but why do you have your own smart pointer? Why not use the standard C++ smart pointers like shared_ptr? You also tend to be inconsistent in naming your classes: I see some starting with an uppercase letter and some with a lowercase one.

Reply by swiftcoder:
Quote:
Original post by SeaBourne
why not use the standard C++ smart pointers like shared_ptr?
Even if you decide to stick with your own smart pointer class, consider making it compatible with std::tr1::shared_ptr, and allowing the client to replace yours with the standard version at compile time (e.g. with a preprocessor switch). In a lot of cases, if a project already makes use of shared_ptr, its developers won't be very happy adding yet another smart pointer class to the mix.
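For illustration, such a compile-time switch might look roughly like this (ORK_USE_TR1 and the include paths are invented names for the sketch, not actual Ork code):

// Hypothetical sketch of a compile-time smart pointer switch;
// ORK_USE_TR1 and "ork/Ptr.h" are invented names.
#ifdef ORK_USE_TR1
#include <tr1/memory>          // <memory> on MSVC
#define ork_ptr std::tr1::shared_ptr
#else
#include "ork/Ptr.h"           // the library's own smart pointer
#define ork_ptr ork::Ptr
#endif

// Client code writes ork_ptr<Texture> everywhere and selects the
// implementation with a single compiler flag.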

On a purely stylistic note, are there really enough name collisions to justify the doubly-nested namespaces? I know it was the norm in some projects (e.g. IrrLicht), but I would have hoped we were moving away from that by 2010...

Reply by sylvestre:
Quote:
Original post by swiftcoder
Quote:
Original post by SeaBourne
why not use the standard C++ smart pointers like shared_ptr?
Even if you decide to stick with your own smart pointer class, consider making it compatible with std::tr1::shared_ptr, and allowing the client to replace yours with the standard version at compile time (e.g. with a preprocessor switch). In a lot of cases, if a project already makes use of shared_ptr, its developers won't be very happy adding yet another smart pointer class to the mix.

On a purely stylistic note, are there really enough name collisions to justify the doubly-nested namespaces? I know it was the norm in some projects (e.g. IrrLicht), but I would have hoped we were moving away from that by 2010...


Thanks for your comments; we are open to suggestions to improve the code and make it more useful.

About class naming and lower/upper case: the case distinguishes stack-allocated from heap-allocated objects (but I realize that Ptr itself does not follow this convention!).

About nested namespaces: I developed in Java for 10 years before moving to C++, which might explain this! I prefer having many small groups of classes to one big, unorganized set of classes. But it is true that I could organize the folders and the namespaces independently (and Doxygen has independent modules too).

About shared_ptr: I wanted to avoid a dependency on a very large external library like boost. Also, Ptr in fact corresponds to intrusive_ptr, not to shared_ptr. I presume that the problem for users is not that two *implementations* would coexist, but rather two *interfaces*, i.e. having to use shared_ptr, Ptr, or intrusive_ptr depending on the situation, right? So the solution with #define would be to always use "shared_ptr", and have it implemented either with boost or with our own implementation (which would in fact be an intrusive pointer)? Is that it?
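For readers unfamiliar with the distinction, here is a minimal sketch of intrusive counting in the spirit of our Object/Ptr pair (simplified, not the actual Ork code):

// Simplified sketch of intrusive reference counting; not the actual
// Ork implementation.
class Object {
public:
    Object() : refCount(0) {}
    virtual ~Object() {}
    void ref() { ++refCount; }                          // the counter lives inside the object
    void unref() { if (--refCount == 0) delete this; }
private:
    int refCount;
};

template<typename T>
class Ptr {
public:
    explicit Ptr(T *t = 0) : target(t) { if (target) target->ref(); }
    Ptr(const Ptr &p) : target(p.target) { if (target) target->ref(); }
    ~Ptr() { if (target) target->unref(); }
    Ptr &operator=(const Ptr &p) {
        if (p.target) p.target->ref();                  // ref first: safe on self-assignment
        if (target) target->unref();
        target = p.target;
        return *this;
    }
    T *operator->() const { return target; }
    T &operator*() const { return *target; }
private:
    T *target;                                          // no separate control block is allocated
};

A shared_ptr, by contrast, allocates a separate control block for its counters, which is what lets it manage types it cannot modify.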

Quote:
Original post by stonemetal
There is a shared_ptr in the standard library (see the MS documentation), though that does limit you to compilers that support it.
Also, shared_ptr (or intrusive_ptr) is a header-only library. As with most other boost components, you can cherry-pick those few headers and distribute them alongside your library if you don't want to pull in the whole of boost.
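As a concrete illustration, boost::intrusive_ptr needs only two free functions as hooks, so an intrusively counted class could plug into it directly (Object, ref and unref are placeholder names, as in the sketch above; the two free functions are boost's documented customization points):

// Sketch: hooking an intrusively counted class into boost::intrusive_ptr.
#include <boost/intrusive_ptr.hpp>

class Object {
public:
    Object() : refCount(0) {}
    virtual ~Object() {}
    void ref() { ++refCount; }
    void unref() { if (--refCount == 0) delete this; }
private:
    int refCount;
};

void intrusive_ptr_add_ref(Object *o) { o->ref(); }
void intrusive_ptr_release(Object *o) { o->unref(); }

int main() {
    boost::intrusive_ptr<Object> p(new Object()); // hooks found via argument-dependent lookup
    return 0;
}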

Reply by sylvestre:
OK, I see that shared_ptr is now supported by many compilers (this was not the case four years ago when I wrote Ork 1.0). So here is a proposed new API:
- it uses only one ork namespace, without nested namespaces
- Ptr and StaticPtr are renamed to ptr and static_ptr for consistent class naming
- the USE_SHARED_PTR flag can be used to choose the implementation of the smart pointers

With USE_SHARED_PTR, ptr extends std::tr1::shared_ptr (i.e., with separate counters); without this flag it is fully defined in Ork (with an intrusive counter in Object). Note that, with the USE_SHARED_PTR flag, users can use either ptr or shared_ptr in their code: since ptr is a subclass of shared_ptr, the result of an Ork function returning a ptr can be stored transparently in a shared_ptr. Conversely, users can transparently pass a shared_ptr to an Ork function requiring a ptr, because there is an implicit ptr constructor taking a shared_ptr as argument.
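A sketch of what this interoperability could look like (simplified; makeObject and useObject are invented example functions, not Ork API):

// Simplified sketch of the proposed design, not the final code.
#include <tr1/memory>                       // <memory> on MSVC

template<typename T>
class ptr : public std::tr1::shared_ptr<T> {
public:
    ptr() {}
    explicit ptr(T *t) : std::tr1::shared_ptr<T>(t) {}
    ptr(const std::tr1::shared_ptr<T> &p)   // implicit: accepts a shared_ptr
        : std::tr1::shared_ptr<T>(p) {}
};

class Object {};
ptr<Object> makeObject() { return ptr<Object>(new Object()); } // invented example
void useObject(ptr<Object>) {}                                 // invented example

int main() {
    std::tr1::shared_ptr<Object> s = makeObject(); // ptr is-a shared_ptr: stored transparently
    useObject(s);                                  // implicit ptr(shared_ptr) constructor applies
    return 0;
}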

Reply by samoth:
As a little criticism, I noticed that pretty much everything in your library derives from Object, which is a thread-safe, reference-counted base class.

Which, in general, is probably a good thing, but here I think it can become somewhat awkward.

If something is reference counted, the "smartness" built into the class will clean up behind you, which is good. If something is thread-safe, it won't crash and burn when used in multithreaded programs, which is good.

However, if something is thread-safe, people will be tempted to use it in multiple threads, too. After all, it was explicitly made thread-safe!
And here we are at a point where too much smartness in a class can become dangerous.

For example, I might be tempted to map a GPUBuffer and pass it to a worker thread which calls getMappedData() and fills the buffer. There is nothing wrong with doing that, and it should work 100% perfectly, always.
However, in theory, I could end up accidentally having the last reference to the object in the worker thread. That would mean the Object class cleans up, and OpenGL functions are called from a thread that does not have a valid context.

Obviously, in this contrived example (apologies for not being more creative) this will not happen, because you will always keep a reference to your buffer -- after all, you don't tell a worker thread to fill a buffer if you don't plan on using the data later. But then again, it could happen.

Just imagine I wrote my texture manager in such a way that it throws away references to buffers as soon as they are no longer needed -- which is exactly how reference-counted objects are meant to be used. So, say the program is just uploading a texture when the user does a sharp 180-degree turn, and the game logic decides (for whatever reason) that it doesn't need that particular texture any more, but urgently needs buffers for a more important texture. So it does the correct thing and ditches the reference. Isn't reference counting nice and easy!
Now, evil things will happen (in this case you will probably only leak memory, but still) as soon as the worker thread is done.
The worst thing is that you will have no clue what happened or why -- you might just see that you get GL_OUT_OF_MEMORY after playing your game for an hour, and it might not even happen every time. This is a debugging nightmare.

The problem, in my opinion, is not so much that such a thing is likely to happen, but that the design allows it -- or rather, explicitly calls for it.
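To make the failure mode concrete, here is a sketch of the scenario (startThread, writeVertices, and mapNewBuffer are invented placeholders, not Ork API):

// Sketch of the hazard described above; helper names are invented.
void fillBuffer(ptr<GPUBuffer> buffer) {   // runs on a worker thread
    writeVertices(buffer->getMappedData());
    // if the render thread dropped its reference meanwhile, 'buffer'
    // is destroyed HERE, and the GL cleanup call runs on a thread
    // that has no valid OpenGL context
}

void upload() {
    ptr<GPUBuffer> buf = mapNewBuffer();   // create and map the buffer
    startThread(fillBuffer, buf);          // hand a reference to the worker
    buf = ptr<GPUBuffer>();                // e.g. the texture manager ditches its
                                           // reference: the worker now holds the last one
}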

Reply by sylvestre:
Quote:
Original post by samoth
As a little criticism, I noticed that pretty much everything in your library derives from Object, which is a thread-safe, reference-counted base class.

Which, in general, is probably a good thing, but here I think it can become somewhat awkward.

If something is reference counted, the "smartness" built into the class will clean up behind you, which is good. If something is thread-safe, it won't crash and burn when used in multithreaded programs, which is good.

However, if something is thread-safe, people will be tempted to use it in multiple threads, too. After all, it was explicitly made thread-safe!


In fact the main aspect is the reference counting; the thread safety is only provided for the taskgraph framework, where *CPU* tasks can be executed in parallel, while *GPU* tasks (i.e. all the tasks using the 'render' classes) are *forced* to be executed by a single dedicated thread.

Also note that some 'render' classes internally use shared data structures that are *not* protected with mutexes, and so are *not* thread-safe. This is by design, as these classes are supposed to be used from a single thread (and the MultithreadScheduler in 'taskgraph' enforces this).

Finally, note that you can disable the thread safety of the smart pointers in Atomic.h with the SINGLE_THREAD compiler flag.
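The kind of switch involved looks roughly like this (a sketch; the actual Atomic.h code is more elaborate):

// Rough sketch of a SINGLE_THREAD switch for the reference counter;
// the real Atomic.h differs.
#ifdef SINGLE_THREAD
inline int atomicIncrement(int *v) { return ++*v; }   // plain increment, no synchronization cost
#else
inline int atomicIncrement(volatile int *v) {
    return __sync_add_and_fetch(v, 1);                // GCC builtin; use InterlockedIncrement on Windows
}
#endif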
