Ben Bowen

Will a Fixed, API-controlled Pipeline really limit you?


Setting aside any existing API and all aspects of existing APIs: will a pipeline that is fixed into the API always have strong limitations?

It seems like rendering engines ultimately strive toward a pipeline that is uniform and absolute, to reduce redundant state changes and so forth (eventually a linear, fixed pipeline?). What are your thoughts on this?


Setting aside any existing API and all aspects of existing APIs: will a pipeline that is fixed into the API always have strong limitations?

Can you elaborate a bit more?
Do you mean a pipeline like input->vertex->raster->pixel->output, or a rendering pipeline in the sense of whether you're using "deferred lighting" and when your shadow maps get rendered, or the pipeline for traversing a scene and drawing it, such as a scene graph?
 

It seems like rendering engines ultimately strive toward a pipeline that is uniform and absolute, to reduce redundant state changes and so forth (eventually a linear, fixed pipeline?). What are your thoughts on this?

IMO, an engine should provide an API that's not too far removed from D3D/GL, which is:
* as cross platform as possible (common abstraction).
* solves the 'redundant state change' issues.
* may allow for some higher-level objects, like "models" and "materials".
* hides the "state machine" nature of the underlying APIs.

This API is still fully flexible, in that you can implement any kind of forward/deferred renderer, any kind of shadow mapping, any kind of post-processing, etc., etc.
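To make that concrete, here's a rough sketch of what such a thin layer could look like; every name in it (GpuDevice, DrawItem, and so on) is made up for illustration rather than taken from any real engine. Each submission carries the whole state it needs, so the wrapper can filter redundant state changes internally and the caller never touches the underlying state machine:

```cpp
// Hypothetical sketch of a thin device wrapper that hides the state machine
// and filters redundant state changes. All names are illustrative.
#include <cstdint>

struct BlendStateDesc { bool enable = false; /* ... */ };

struct DrawItem {
    uint32_t shader;        // handle to a compiled shader program
    uint32_t vertexBuffer;  // handle to vertex data
    uint32_t blendState;    // handle to a pre-created blend state
    uint32_t vertexCount;
};

class GpuDevice {
public:
    // Creation returns opaque handles; the underlying GL/D3D objects are hidden.
    uint32_t CreateBlendState(const BlendStateDesc& desc) { (void)desc; return ++m_nextHandle; }

    // Submission takes the full state needed for a draw, so the caller never
    // manipulates a global state machine directly.
    void Submit(const DrawItem& item) {
        // Redundant-state filtering: only touch the API if something changed.
        if (item.shader != m_curShader)             { BindShaderImpl(item.shader);             m_curShader = item.shader; }
        if (item.blendState != m_curBlendState)     { BindBlendStateImpl(item.blendState);     m_curBlendState = item.blendState; }
        if (item.vertexBuffer != m_curVertexBuffer) { BindVertexBufferImpl(item.vertexBuffer); m_curVertexBuffer = item.vertexBuffer; }
        DrawImpl(item.vertexCount);
    }

private:
    // A platform-specific backend (GL, D3D, ...) would fill these in.
    void BindShaderImpl(uint32_t)       { /* e.g. glUseProgram or PSSetShader */ }
    void BindBlendStateImpl(uint32_t)   { /* e.g. glBlendFunc or OMSetBlendState */ }
    void BindVertexBufferImpl(uint32_t) { /* e.g. glBindBuffer or IASetVertexBuffers */ }
    void DrawImpl(uint32_t)             { /* e.g. glDrawArrays or Draw */ }

    uint32_t m_curShader = 0, m_curBlendState = 0, m_curVertexBuffer = 0;
    uint32_t m_nextHandle = 0;
};
```

Because nothing about forward vs. deferred, shadow mapping, or post-processing is baked into a layer like this, any of those techniques can still be built on top of it.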

A specific game using the engine will then use the above API to build its own specific rendering pipeline, which is fairly restrictive, being designed to render only the kinds of scenes used in that game. It then provides a much, much simpler API to the game's programmers.


Are you asking whether there will ever be some kind of "Grand Unified Rendering Pipeline" that is capable of replicating any visual effect, given the right material parameters, but without "shaders"? That's roughly what ray tracers aim to be, at least for physically-based rendering.

 

You'll probably never be able to unify physically-based and non-photo-realistic rendering in a suitable manner though -- say cartoon shading combined with photo-realism.

 

I think you're right in that the trend is toward some kind of (at least internally) consistent, physically-based rendering model, rather than the combination of one-off, artistic, "it looks right" hacks that art assets are often still built on today. John Carmack spoke on this a bit at this past QuakeCon; his talks are on YouTube if you haven't watched them yet.


a rendering pipeline in the sense of whether you're using "deferred lighting" and when your shadow maps get rendered


^ this is what I'm talking about.

* as cross platform as possible (common abstraction).
* solves the 'redundant state change' issues.
* may allow for some higher-level objects, like "models" and "materials".
* hides the "state machine" nature of the underlying APIs.


I'm talking about eradicating things such as handle->Draw() and just streamlining everything. It looks like early versions of OpenGL were exploring this, but it was completely abandoned and never got close enough to the idea for it to be meaningful.

For example, the differences between buffers don't need to be accounted for by individual objects if the underlying "buffers" themselves are separated (conceptually) into arrays that are specialized for those differences.

Important note:

I don't think "fixed" is an appropriate term for what I mean anymore. It might as well be modular, except that rather than the application doing things like binding buffers and so on, that work would be streamlined into rendering components. A rough illustration of the idea is sketched below.
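Here is that rough illustration, with hypothetical names only (nothing from a real API): the per-object handle->Draw() calls disappear, the draw data sits in conceptually separate arrays specialized for each kind of buffer, and a rendering component streams each array to the GPU in one pass.

```cpp
// Rough illustration (hypothetical names): no per-object handle->Draw();
// draw data lives in plain arrays, separated by the kind of buffers each
// draw needs, and one rendering component streams each array to the GPU.
#include <cstdint>
#include <vector>

struct StaticMeshDraw  { uint32_t vertexBuffer; uint32_t vertexCount; };
struct SkinnedMeshDraw { uint32_t vertexBuffer; uint32_t boneBuffer; uint32_t vertexCount; };

struct FrameDrawLists {
    // Buffers with different requirements sit in separate, specialized arrays,
    // so individual objects never have to account for those differences.
    std::vector<StaticMeshDraw>  staticMeshes;
    std::vector<SkinnedMeshDraw> skinnedMeshes;
};

void RenderFrame(const FrameDrawLists& lists) {
    // One rendering component per array; binding and streaming logic lives
    // here, not in the objects that produced the data.
    for (const StaticMeshDraw& d : lists.staticMeshes) {
        (void)d; // bind d.vertexBuffer once per group, then issue the draw
    }
    for (const SkinnedMeshDraw& d : lists.skinnedMeshes) {
        (void)d; // bind d.vertexBuffer and d.boneBuffer, then issue the draw
    }
}
```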

P.S. Why does the formatting keep getting screwed up?


I have a suspicion everyone thinks I've gone completely off the deep end. Is this true?
 

Are you asking whether there will ever be some kind of "Grand Unified Rendering Pipeline" that is capable of replicating any visual effect, given the right material parameters, but without "shaders"? That's roughly what ray tracers aim to be, at least for physically-based rendering.

 

You'll probably never be able to unify physically-based and non-photo-realistic rendering in a suitable manner though -- say cartoon shading combined with photo-realism.

 

I think you're right in that the trend is toward some kind of (at least internally) consistent, physically-based rendering model, rather than the combination of one-off, artistic, "it looks right" hacks that art assets are often still built on today. John Carmack spoke on this a bit at this past QuakeCon; his talks are on YouTube if you haven't watched them yet.

Not a grand rendering pipeline. Not a grand anything. I'm just speculating about whether just the right kind of abstraction could actually be good for optimality, without the expected limitations. By "fixed pipeline," think of something closer to a server model. Essentially, I'm considering what it might be like to approach API design from a declarative perspective rather than an imperative one.
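As a loose sketch of that declarative angle (all names below are hypothetical): the application would describe what the frame consists of and which resources each pass reads and writes, and the backend would be free to turn that description into whatever imperative calls it likes.

```cpp
// Hypothetical sketch of a declarative-style frame description: the
// application declares the passes and their dependencies; the backend
// decides the actual sequence of imperative GPU calls.
#include <cstdio>
#include <string>
#include <vector>

struct PassDesc {
    std::string name;                  // e.g. "shadow", "gbuffer", "lighting"
    std::vector<std::string> reads;    // resources this pass consumes
    std::vector<std::string> writes;   // resources this pass produces
};

struct FrameDesc {
    std::vector<PassDesc> passes;
};

void ExecuteFrame(const FrameDesc& frame) {
    // A real backend would build a dependency graph from reads/writes and
    // reorder, merge, or parallelize passes; here we just walk them.
    for (const PassDesc& p : frame.passes) {
        std::printf("executing pass: %s\n", p.name.c_str());
    }
}

int main() {
    FrameDesc frame;
    frame.passes.push_back({"shadow",   {},                       {"shadowMap"}});
    frame.passes.push_back({"gbuffer",  {},                       {"gbuffer"}});
    frame.passes.push_back({"lighting", {"shadowMap", "gbuffer"}, {"backbuffer"}});
    ExecuteFrame(frame);
}
```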



Declarative sounds like you want the API to retain even more stateful data. But I think people want less of that, and GPUs are getting more and more programmable.

Maybe it eventually converges into something completely imperative, where you just tell the GPU where to put some data, then tell it to run any number of parallelized algorithms to transform some of it, and the result inside a buffer gets shown when your algorithm flips the output pointer to it, depending on the current scanline and what's ready to be shown.
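Something like this, as a purely hypothetical sketch (every Gpu* call below is a made-up placeholder, stubbed out so it compiles, not a real API):

```cpp
// Hypothetical sketch of a fully imperative model: copy data to the GPU,
// queue arbitrary parallel transforms over it, and present a buffer once the
// work producing it is finished. Every Gpu* function is an assumed stub.
#include <cstddef>
#include <cstdint>

static uint32_t gNextHandle = 1;
uint32_t GpuAlloc(std::size_t /*bytes*/)            { return gNextHandle++; } // "tell the GPU where to put some data"
void GpuUpload(uint32_t, const void*, std::size_t)  {}
void GpuDispatch(uint32_t /*kernel*/, uint32_t /*in*/, uint32_t /*out*/, uint32_t /*count*/) {} // any parallel transform
void GpuPresentWhenReady(uint32_t /*imageBuf*/)     {} // flip the output pointer once the result is ready

void Frame(const float* sceneData, std::size_t count, uint32_t transformKernel, uint32_t shadeKernel) {
    uint32_t in    = GpuAlloc(count * sizeof(float));
    uint32_t mid   = GpuAlloc(count * sizeof(float));
    uint32_t image = GpuAlloc(std::size_t(1920) * 1080 * 4);

    GpuUpload(in, sceneData, count * sizeof(float));
    GpuDispatch(transformKernel, in, mid, uint32_t(count)); // first parallel pass transforms the data
    GpuDispatch(shadeKernel, mid, image, 1920u * 1080u);    // second parallel pass writes the output pixels
    GpuPresentWhenReady(image);                             // shown when the algorithm decides it's ready
}
```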


But I think people want less of that, and GPUs are getting more and more programmable.

Maybe it eventually converges into something completely imperative, where you just tell the GPU where to put some data, then tell it to run any number of parallelized algorithms to transform some of it, and the result inside a buffer gets shown when your algorithm flips the output pointer to it, depending on the current scanline and what's ready to be shown.

 

Well, the less input you have, the better. You inherently cannot discard the data you want to transform, but you can simplify the instructions that transform it. That follows the spirit of SIMD.



Are you asking this because you want to use one or because you want to make one?

 

If you are trying to make one, it’s fairly simple to keep it functional yet flexible.

 

 

On the bottom layer you work directly with DirectX or OpenGL calls.  All you do is wrap them behind your own cross-platform API and expose the lowest-level functionality, such as creating, activating, and destroying textures, etc.  You wrap the state-changing calls and do redundancy checking.  This provides the absolute lowest level of rendering that the rest of your engine will use.  It is exposed as well, which means anyone using your engine can use it too.  Now you already have the “flexible” part handled.  From there you make it “functional”.
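A minimal sketch of that bottom layer, with hypothetical names (this is not anyone's actual engine code): creation and destruction are pure virtuals implemented per platform, and the activation call does the redundancy checking before it ever touches the underlying API.

```cpp
// Minimal sketch of a cross-platform bottom layer with redundancy checking
// on texture activation. Names are illustrative.
#include <cstdint>

class GraphicsApi {
public:
    virtual ~GraphicsApi() = default;

    virtual uint32_t CreateTexture2D(uint32_t width, uint32_t height, const void* pixels) = 0;
    virtual void     DestroyTexture(uint32_t handle) = 0;

    // Redundancy checking: skip the underlying GL/D3D call when nothing changes.
    void ActivateTexture(uint32_t slot, uint32_t handle) {
        if (slot < kMaxSlots && m_bound[slot] == handle) { return; }
        ActivateTextureImpl(slot, handle);          // implemented per platform
        if (slot < kMaxSlots) { m_bound[slot] = handle; }
    }

protected:
    virtual void ActivateTextureImpl(uint32_t slot, uint32_t handle) = 0;

private:
    static constexpr uint32_t kMaxSlots = 16;
    uint32_t m_bound[kMaxSlots] = {};   // 0 = nothing bound
};
```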

 

Next up you add your own model format.  A system for users to add custom data must be in place, as well as the ability to handle any custom data inside the engine at run-time.  This means the ability to supply custom callbacks to parse the custom data, the ability to attach custom shaders to the models (otherwise built-in shaders are used), a few run-time callbacks to prepare data for the shader, etc.  Still fully flexible, but also functional.
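One possible shape for those hooks, again with made-up names: the model carries its custom chunks plus user-supplied callbacks for parsing them at load time and for preparing shader data at run time, along with a custom shader handle that falls back to the built-in one when absent.

```cpp
// Sketch of a model format with user-extensible data and callbacks.
// All names are illustrative.
#include <cstddef>
#include <cstdint>
#include <functional>
#include <unordered_map>
#include <vector>

struct Model;
struct ShaderParams;   // filled in per frame and handed to the shader

struct ModelCallbacks {
    // Called at load time, once per custom chunk found in the file.
    std::function<void(Model&, uint32_t chunkId, const uint8_t* data, std::size_t size)> parseCustomChunk;
    // Called each frame before drawing, to prepare data for the shader.
    std::function<void(const Model&, ShaderParams&)> prepareShaderData;
};

struct Model {
    uint32_t builtInShader = 0;
    uint32_t customShader  = 0;   // 0 means "none attached; use the built-in shader"
    std::unordered_map<uint32_t, std::vector<uint8_t>> customChunks;
    ModelCallbacks callbacks;
};
```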

 

Next up you add some kind of scene manager.  It simply organizes the models in the scene and prepares for a render.  It handles culling etc., but it’s not necessary for your engine to work properly.  Meaning it’s there for convenience.  You can still render models manually, and you can even make direct draw calls however you want via the base API I described in the beginning.  Fully flexible, but functional.
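Sketched out (illustrative names, with Model and Frustum stubbed so it stands alone), such a scene manager really is just a convenience layer over the lower ones:

```cpp
// Sketch of an optional scene-manager layer: it only organizes models and
// culls before drawing. Drawing models manually, or issuing raw calls via
// the base API, remains possible. Names are illustrative.
#include <algorithm>
#include <vector>

struct Frustum {};   // stand-in for a camera frustum
struct Model {};     // stand-in for the model layer above

bool Intersects(const Frustum&, const Model&) { return true; }   // assumed culling test
void DrawModel(const Model&) { /* forwards to the model layer's draw */ }

class SceneManager {
public:
    void Add(Model* model)    { m_models.push_back(model); }
    void Remove(Model* model) {
        m_models.erase(std::remove(m_models.begin(), m_models.end(), model), m_models.end());
    }

    // Convenience only: cull against the frustum, then draw what survives.
    void Render(const Frustum& frustum) const {
        for (const Model* m : m_models) {
            if (Intersects(frustum, *m)) { DrawModel(*m); }
        }
    }

private:
    std::vector<Model*> m_models;
};
```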

 

 

In other words, there is no reason an engine has to be limiting in any way.  It can literally be just as good as directly handling DirectX or OpenGL, but easier.

 

 

L. Spiro


Thanks for the advice. I'm already experienced with that; however, this topic was intended to be more about speculation. Thanks though :)

To elaborate: do you really need the "flexible / maintainable / extensible" part -- such genericity carries overhead and bloat that isn't easy to get rid of without careful design -- if you just implement separate renderers per API, each meant to make the best of that API's differences in the most direct form (the "functional" part)? It sounds less maintainable, but in the long run I think it would be convenient for implementing functionality optimally (and quite practically). Sometimes it gets annoying trying to coordinate between APIs' complex differences.
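As a sketch of that per-API approach (the renderer types and the USE_D3D11 flag are invented for illustration), each backend would be written directly against its own API and the build would pick exactly one, with no shared abstraction layer in between:

```cpp
// Sketch of "separate renderer per API": each backend is written directly
// against its own API; the build selects exactly one. Names and the macro
// are illustrative only.
struct RendererD3D11 {
    // Would talk to D3D11 directly, structured however suits D3D11 best.
    void DrawFrame() { /* ... */ }
};

struct RendererGL {
    // Would talk to OpenGL directly, structured however suits GL best.
    void DrawFrame() { /* ... */ }
};

#if defined(USE_D3D11)
using Renderer = RendererD3D11;   // hypothetical build flag selects the backend
#else
using Renderer = RendererGL;
#endif

int main() {
    Renderer r;      // the game is compiled against exactly one concrete renderer,
    r.DrawFrame();   // with no virtual dispatch or lowest-common-denominator layer
}
```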

The fundamental question I'm prompting: if the expressibility (not genericity) provided by the functional part is great enough, might it be appropriate for APIs to eventually become radically fixed-pipeline specifications?

Maybe a critical problem is that advancements in accelerated graphics hardware have usually been rolled out in an extensional manner rather than a progressive one. Ultimately, I know this sounds like a very dangerous idea, i.e. feasible and nice in concept but annoying in practice. I doubt anyone at the moment has any idea how it could be done correctly, but this topic is all just speculation.


