Merits of writing a multi-API Renderer

Started by
12 comments, last by Hodgman 13 years, 4 months ago
You can also map a lot of GLSL to HLSL code (and vice versa) with #defines... "#define float4 vec4" etc. I've done this with some success in the past. The main area where the two languages differ too much is how they handle vertex attributes and interpolated/varying values, so you'll end up having to write the skeletons of the shaders separately in both languages. But you can easily have libraries of functions written in one language that compile in the other with #defines.
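For example, a minimal sketch of such a compatibility header, following the same HLSL-to-GLSL direction as the define above (the file name is hypothetical and the list is nowhere near complete):

// glsl_compat.h (hypothetical): lets HLSL-style function code compile as GLSL.
#define float2   vec2
#define float3   vec3
#define float4   vec4
#define float4x4 mat4
#define lerp     mix
#define frac     fract
#define saturate(x) clamp((x), 0.0, 1.0)

As noted above, this only gets you so far; entry points and attribute/varying declarations still need to be written separately per language.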
Quote: Original post by MJP
The viewport transform is different. In GL z is [-1,1], while in D3D z is [0,1].
Right, but that's not really a coordinate system issue, is it? In any case, I've never thought of it as such. To me, whether a system is left- or right-handed and which axes are considered to point in which directions (relative to some frame of reference) are coordinate system issues; the near clipping plane distance for the canonical view volume, on the other hand, would not be a coordinate system issue.

Also, the question of z clip range applies not only to the viewport transform but also to the projection transform, which needs to be adjusted to produce the clip range the API expects. Again though, I wouldn't consider that to be a coordinate system issue.
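As a concrete illustration of that adjustment, here's a minimal sketch of patching a GL-style projection matrix so its z output covers D3D's [0,1] range instead of [-1,1]. It assumes column-major storage and OpenGL's v' = M * v convention; the function name is just for illustration:

// Fold z' = 0.5*z + 0.5*w into the projection matrix, so that after the
// perspective divide the [-1,1] depth range becomes [0,1].
void RemapProjectionGLtoD3D(float m[16])
{
    for (int col = 0; col < 4; ++col)
    {
        float z = m[col * 4 + 2]; // third row (z output), this column
        float w = m[col * 4 + 3]; // fourth row (w output), this column
        m[col * 4 + 2] = 0.5f * z + 0.5f * w;
    }
}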
Hodgman, would you mind elaborating on this?

Quote:
Quote:
4) What kind of performance hit is there in a renderer that can use multiple APIs?

There shouldn't be any, as long as you use compile-time polymorphism over runtime polymorphism.


I've been thinking about making a multi-API graphics plugin that could be used with either API, so I've given the issue a little thought.

In trying to abstract both Direct3D and OpenGL, let's say you wanted to create an API-independent class such as VertexBuffer. I'd first imagine developing a common interface, and from this you could have two classes (glVertexBuffer and dxVertexBuffer) that both inherit from it.
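Something like this, as a rough sketch (the method names are hypothetical, just to show the shape of it):

#include <cstddef>

// Hypothetical API-independent interface.
class IVertexBuffer
{
public:
    virtual ~IVertexBuffer() {}
    virtual void SetData(const void* data, std::size_t bytes) = 0;
    virtual void Bind() = 0;
};

// One implementation per API, each filled in with GL or D3D calls.
class glVertexBuffer : public IVertexBuffer { /* ... */ };
class dxVertexBuffer : public IVertexBuffer { /* ... */ };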

This isn't very good though, because then you've got runtime polymorphism for every interaction with the graphics engine, and you'd also need an #ifdef anywhere you instantiate a buffer to decide whether it is really a glVertexBuffer or a dxVertexBuffer.

Another thought I had would be to create the two classes with identical interfaces, but also maintain a global file of typedefs:
#ifdef RENDER_OPENGL
typedef glVertexBuffer VertexBuffer;
#endif
#ifdef RENDER_DX11
typedef dxVertexBuffer VertexBuffer;
#endif


This would allow you to instantiate only the VertexBuffer class everywhere in your code, making it much cleaner.

Is this the best way you would go about it, or am I missing some important concept when you refer to 'compile time polymorphism'?
My Projects:
Portfolio Map for Android - Free Visual Portfolio Tracker
Electron Flux for Android - Free Puzzle/Logic Game
Yep, I'd use your ifdef technique, or something like it. The point is to use ifdef (or templates if you're a Boost fanatic, j/k) over virtual.

Other variations are having the same interface in two headers:
//VertexBuffer.h
#ifdef RENDER_OPENGL
#include "gl2/VertexBuffer.h"
#endif
#ifdef RENDER_DX11
#include "dx11/VertexBuffer.h"
#endif
Or one interface with two implementation files:
//VertexBuffer.h
class VertexBuffer
{
    ...
};

//gl2VertexBuffer.cpp
#ifdef RENDER_OPENGL
VertexBuffer::...(){}
#endif

//dx11VertexBuffer.cpp
#ifdef RENDER_DX11
VertexBuffer::...(){}
#endif
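With any of these variations the calling code looks the same; only the build configuration decides which implementation gets compiled in, which is what "compile-time polymorphism" buys you. A hypothetical usage sketch (SetData is a made-up method name):

#include <cstddef>
#include "VertexBuffer.h"

// The same client code compiles against either backend, depending on which
// RENDER_* macro the build defines. No virtual dispatch involved.
void UploadMesh(const float* verts, std::size_t bytes)
{
    VertexBuffer vb;          // resolves to the GL or D3D11 class at compile time
    vb.SetData(verts, bytes); // hypothetical method on the shared interface
}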

