
API-independent engine design



I am designing my 3D engine to be as flexible as possible, and part of this includes loading the renderer as a DLL so it's simple to write a new renderer (and port to another OS) at a later date. The problem I have is how to pass data through this abstraction layer as efficiently as possible. I want to support static meshes and 'vertex caches' (whereby a vertex buffer is created and polys are added to it in small chunks before rendering when the buffer is full).

My current idea is to have an IRenderBuffer class that is subclassed by each of the renderer DLLs. The IRenderer class will be responsible for creating IRenderBuffers, so as to give it the chance to set up any internal data that is specific to that renderer (for example, the D3D renderer will store a pointer to the LPDIRECT3DDEVICE8 inside each IRenderBuffer).

Anyway, this is the way I have sketched out the code so far. I am looking for feedback as to whether this is a good way of passing and storing the data. I would also be very interested in hearing how other people have solved this problem.

class IRenderBuffer
{
protected:
    RENDEROP_TYPE RenderOpType;
    DWORD VertexSize;

public:
    virtual ~IRenderBuffer() {}

    bool UsesIndices;

    unsigned int NumVertices;
    unsigned int NumIndices;

    unsigned int StartVertex;   // first vertex for non-indexed draws
    unsigned int MinIndex;
    unsigned int PrimitiveCount;

    virtual void Create(int NumVertices, int NumIndices, unsigned int FVF) = 0;
    virtual void Clear() = 0;
    virtual void Lock() = 0;
    virtual void Unlock() = 0;
    virtual void Insert(void* VertexData, int NumVertices, void* IndexData, int NumIndices) = 0;
    virtual void Render() = 0;
};

class IRenderBufferDirect3D8 : public IRenderBuffer
{
protected:
    LPDIRECT3DDEVICE8       lpDevice;
    LPDIRECT3DVERTEXBUFFER8 VertexBuffer;
    LPDIRECT3DINDEXBUFFER8  IndexBuffer;

public:
    // (Other overrides omitted for brevity.)
    void Render()
    {
        lpDevice->SetStreamSource(0, VertexBuffer, VertexSize);

        if (UsesIndices)
        {
            lpDevice->SetIndices(IndexBuffer, 0);
            lpDevice->DrawIndexedPrimitive((D3DPRIMITIVETYPE)RenderOpType,
                MinIndex, NumVertices, 0, PrimitiveCount);
        }
        else
        {
            lpDevice->DrawPrimitive((D3DPRIMITIVETYPE)RenderOpType,
                StartVertex, PrimitiveCount);
        }
    }
};
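To make the ownership clearer, here is a bare-bones mock of the factory relationship described above (NullRenderer and NullRenderBuffer are hypothetical stand-ins with no real API calls):

```cpp
#include <cassert>
#include <cstddef>

// Bare-bones stand-ins for the interfaces above; no real D3D/GL here.
class IRenderBuffer
{
public:
    IRenderBuffer() : Rendered(false) {}
    virtual ~IRenderBuffer() {}
    virtual void Render() = 0;
    bool Rendered; // observable side effect, just for the sketch
};

class IRenderer
{
public:
    virtual ~IRenderer() {}
    // The renderer creates the buffers, so each backend can wire in its
    // own state (e.g. the LPDIRECT3DDEVICE8 pointer in the D3D case).
    virtual IRenderBuffer* CreateRenderBuffer() = 0;
};

// Hypothetical "null" backend standing in for the Direct3D8 classes.
class NullRenderBuffer : public IRenderBuffer
{
public:
    void Render() { Rendered = true; }
};

class NullRenderer : public IRenderer
{
public:
    IRenderBuffer* CreateRenderBuffer() { return new NullRenderBuffer(); }
};
```

Client code only ever sees IRenderer and IRenderBuffer; swapping the DLL swaps which concrete classes get instantiated.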

Thanks for your time, Alan

EDIT: Just sorting out some code formatting issues. [edited by - AlanKemp on September 10, 2002 8:16:25 PM]

Ah, come on :-)

I'm sure there are people reading these forums who have abstracted their renderers out into DLLs. How did you handle the problem of passing data to the renderer efficiently while maintaining the abstraction?

Alan

*bump*

/me finds this one interesting...

I have never done this, but I have been planning to do it for my next engine (when I get round to starting). What I do know is that there is a rather good open-source, object-oriented graphics engine which loads its renderers from DLLs. This engine is quite a long way into development, so I reckon it's probably worth a quick scan through the code for ideas on this.

http://ogre.sourceforge.net/

Good luck

X2K

Thanks for the tip, but unfortunately I am already familiar with Ogre. Their method of passing data to the renderer is not very efficient once you want to use vertex buffers. They basically pass a pointer to an array of xyz coordinates, an array of indices, an array of colours etc., and then the renderer puts these together in whatever format that renderer likes.

Unfortunately this method is not very hardware friendly; you in effect end up using DrawPrimitiveUP(..) for every call to the renderer - not really what NVIDIA would recommend.

I am going to implement the outline I presented this weekend, unless someone can give me a hint as to a better method?

Alan

You're going for independence, but the base class uses things like FVFs and D3DPRIMITIVETYPEs, which are obviously D3D specific.

In, say, the OpenGLRenderer class, would you then convert the FVF to some OGL-friendly thing, as well as the PRIMITIVETYPE?

I don't know much OGL, but I know D3D, so it seems a problem you'd have to solve.

Otherwise it looks like an OK way to go. Keep in mind, though, that if YOU'RE not likely to implement the OpenGL renderer, is it really going to happen? And therefore, is there any point making your engine so-called "API independent"?

Any PC that supports OGL will support Direct3D, so unless you make all the rest of your code platform independent (which to me rules out DLLs), is there too much point?

Just something to consider.
Toby

Gobsmacked - by Toby Murray


quote:

You're going for independence, but the base class uses things like FVFs and D3DPRIMITIVETYPEs, which are obviously D3D specific.

In, say, the OpenGLRenderer class, would you then convert the FVF to some OGL-friendly thing, as well as the PRIMITIVETYPE?

I am planning on mapping my RENDEROP_TYPE enumeration to correspond directly to D3DPRIMITIVETYPE for speed. In the Direct3D renderer the value will go straight through, but in the OpenGL renderer I am going to have to switch on it anyway to call glBegin(..) with the right value. However, I had not thought about the FVFs. To be honest, I am not that familiar with OpenGL; I know it has a vertex cache concept, but I don't really know how you use it (and therefore what data it needs available at creation etc.).
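That pass-through/switch split might look like this (the RENDEROP_TYPE values mirror the numeric D3DPRIMITIVETYPE values, and the GL_* numbers are written out literally for illustration; ToGLPrimitive is a hypothetical helper):

```cpp
#include <cassert>

// Laid out to match D3DPRIMITIVETYPE's numeric values, so the Direct3D
// renderer can pass the value straight through with a cast.
enum RENDEROP_TYPE {
    RENDEROP_POINTLIST     = 1,  // D3DPT_POINTLIST
    RENDEROP_LINELIST      = 2,  // D3DPT_LINELIST
    RENDEROP_LINESTRIP     = 3,  // D3DPT_LINESTRIP
    RENDEROP_TRIANGLELIST  = 4,  // D3DPT_TRIANGLELIST
    RENDEROP_TRIANGLESTRIP = 5,  // D3DPT_TRIANGLESTRIP
    RENDEROP_TRIANGLEFAN   = 6   // D3DPT_TRIANGLEFAN
};

// The OpenGL renderer has to switch anyway, translating to a GLenum.
unsigned int ToGLPrimitive(RENDEROP_TYPE op)
{
    switch (op) {
    case RENDEROP_POINTLIST:     return 0x0000; // GL_POINTS
    case RENDEROP_LINELIST:      return 0x0001; // GL_LINES
    case RENDEROP_LINESTRIP:     return 0x0003; // GL_LINE_STRIP
    case RENDEROP_TRIANGLELIST:  return 0x0004; // GL_TRIANGLES
    case RENDEROP_TRIANGLESTRIP: return 0x0005; // GL_TRIANGLE_STRIP
    case RENDEROP_TRIANGLEFAN:   return 0x0006; // GL_TRIANGLE_FAN
    }
    return 0x0004; // fall back to GL_TRIANGLES
}
```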

I need to look into how OpenGL handles rendering of large batches of polygons, so if anyone has any links I would be grateful.

quote:

Otherwise it looks like an OK way to go. Keep in mind, though, that if YOU'RE not likely to implement the OpenGL renderer, is it really going to happen? And therefore, is there any point making your engine so-called "API independent"?

Yes, I am going to write an OpenGL renderer. At the moment I use Direct3D for most things, so writing an OpenGL renderer will be a learning exercise for me. Having the Direct3D renderer written first will help me a lot, as it will provide a base reference to check results against.

quote:

Any PC that supports OGL will support Direct3D, so unless you make all the rest of your code platform independent (which to me rules out DLLs), is there too much point?

All that will be stored inside the DLL (or the .so if it's Linux) will be the implementation of the class, plus two functions, CreateRenderer(..) and DestroyRenderer(..). The code that loads the dll/so and calls these functions will be different for different platforms, but this can be handled quite easily with #ifdef _WIN32/#ifdef _LINUX preprocessor switches.
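The two exported entry points could be sketched like this: extern "C" keeps the names unmangled so GetProcAddress()/dlsym() can find them, and the loader side calls them through function pointers. No library is actually loaded here; Direct3D8Renderer and Name() are hypothetical stand-ins:

```cpp
#include <cassert>
#include <cstring>

class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual const char* Name() = 0;
};

// Hypothetical concrete backend compiled into the DLL/.so.
class Direct3D8Renderer : public IRenderer {
public:
    const char* Name() { return "Direct3D8"; }
};

// The only two symbols the renderer DLL/.so needs to export.
extern "C" IRenderer* CreateRenderer() { return new Direct3D8Renderer(); }
extern "C" void DestroyRenderer(IRenderer* renderer) { delete renderer; }

// Function-pointer types the engine-side loader would fill in after
// LoadLibrary()/GetProcAddress() on Win32 or dlopen()/dlsym() on Linux.
typedef IRenderer* (*CreateRendererFn)();
typedef void (*DestroyRendererFn)(IRenderer*);
```

Because creation and destruction both live inside the module, the engine never calls new or delete across the DLL boundary, which avoids allocator mismatches.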

Alan

OK, so I could be entirely incorrect; however, I would do the following.

For both the D3D and OGL renderers, create a vector (or two, for indices) and store your vertex/index buffers in there. When you create a vertex or index buffer, return the number assigned to it in your vector, and store that number with your associated mesh or whatever.

Elegant no?
- Just my perspective - Hope this helps.
Andy

- edit : is this clear?

[edited by - skillfreak on September 12, 2002 5:55:57 PM]
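The handle scheme described above might be sketched like this (BufferPool and VertexBufferSlot are hypothetical names; in a real renderer the slot would hold the LPDIRECT3DVERTEXBUFFER8 or the GL array data):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Toy record standing in for a real API-specific vertex buffer.
struct VertexBufferSlot {
    unsigned int numVertices;
};

class BufferPool {
    std::vector<VertexBufferSlot> buffers;
public:
    // Create a buffer and hand back its vector index as an opaque handle;
    // the mesh stores the handle instead of an API-specific pointer.
    std::size_t CreateVertexBuffer(unsigned int numVertices) {
        VertexBufferSlot slot;
        slot.numVertices = numVertices;
        buffers.push_back(slot);
        return buffers.size() - 1;
    }

    VertexBufferSlot& Get(std::size_t handle) { return buffers[handle]; }
};
```

Because meshes only ever hold integers, they stay completely ignorant of which API is behind the renderer.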

quote:
Original post by AlanKemp
I need to look into how OpenGL handles rendering of large batches of polygons, so if anyone has any links I would be grateful.

Look up glVertexPointer() in the MSDN, that should give you a start.

Thanks for the tip!

I should probably ask this in its own thread in the OpenGL forum, but how do glVertexPointer()/glNormalPointer() etc. compare to using vertex arrays or display lists?

Also, I believe that GL lets you allocate memory on the graphics card that you can manually transfer vertex data into? Is this a good idea (speed/stability-wise)?

Alan

quote:
Original post by AlanKemp
I am designing my 3D engine to be as flexible as possible, and part of this includes loading the renderer as a DLL so it's simple to write a new renderer (and port to another OS) at a later date.

Sorry to burst your bubble, but if you're looking for OS independence you've lost it all instantly by using Microsoft-specific DLLs. If you want proper OS independence, you'll have to go the Q3 way and have your own VM type thing.

Henrym
My Site

quote:
Original post by henrym
Sorry to burst your bubble, but if you're looking for OS independence you've lost it all instantly by using Microsoft-specific DLLs. If you want proper OS independence, you'll have to go the Q3 way and have your own VM type thing.

He already stated that he's going to wrap the DLL loader to support at least Win32 DLLs and Linux shared objects.

quote:
Original post by AlanKemp
I should probably ask this in its own thread in the OpenGL forum, but how do glVertexPointer()/glNormalPointer() etc. compare to using vertex arrays or display lists?

Also, I believe that GL lets you allocate memory on the graphics card that you can manually transfer vertex data into? Is this a good idea (speed/stability-wise)?

Well, there are five ways to render things in OpenGL.

1) Immediate mode, using glBegin() and glEnd(). This is the least efficient method because it suffers from per-call function overhead and the highest CPU usage.

2) Display lists. Very fast, and can make use of hardware acceleration. Unfortunately, there is no Direct3D counterpart... Display lists are more specialized and more powerful than vertex buffers; thus they can be faster in some situations, but they are generally more limited.

3) Vertex arrays. Analogous to Direct3D vertex and index buffers, but they can only reside in system memory (no hardware acceleration).

4) Compiled vertex arrays, via the GL_EXT_compiled_vertex_array extension. These allow the user to lock portions of the vertex array (thus making use of the graphics hardware), but the data must be transferred from system to video memory and is stored twice. This method is generally used by high-performance API-independent engines.

5) VAR and fences. Only supported by NVIDIA, via the GL_NV_vertex_array_range and GL_NV_fence extensions. VARs (Vertex Array Range) are the real counterpart to Direct3D vertex buffers: they are like compiled vertex arrays, but can be created directly in video memory or fast AGP memory. Because they suffer from CPU/GPU synchronization issues, NVIDIA came up with fences, which provide automatic synchronization handling.

You can find discussion about the above issues here.

Thanks for the info, Origin. I've been trawling the web for a couple of hours, but your summaries are much nicer than any of the descriptions I have read so far.

For anyone who is interested, these are the conclusions I have come to (and although I am going to keep saying .dll, I mean .so as well):

- Instead of making a renderer DLL, I am going to make it a platform DLL. That is, all platform-specific stuff will go into the DLL, and the core object I create from it will be IPlatformInterface. That interface will then be responsible for creating a renderer when I ask it for one. The Win32 platform DLL will return a Direct3D renderer, whilst the Linux platform DLL will return an OpenGL renderer.

- The renderer's Create(..) function is still going to have an FVF parameter. Why? Well, the Direct3D renderer is going to need an FVF anyway, and the OpenGL one will just have to bit-test it to determine what components are required.
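The bit-testing in the OpenGL path could look like the sketch below. The FVF_* constants mirror the numeric values of the corresponding D3DFVF_* flags, and VertexSizeFromFVF is a hypothetical helper; a real GL renderer would use the same tests to decide which gl*Pointer() calls to make:

```cpp
#include <cassert>

// Numeric values of a few D3DFVF_* flags, so the OpenGL renderer can
// bit-test the same FVF value the Direct3D renderer receives.
const unsigned int FVF_XYZ     = 0x002; // D3DFVF_XYZ
const unsigned int FVF_NORMAL  = 0x010; // D3DFVF_NORMAL
const unsigned int FVF_DIFFUSE = 0x040; // D3DFVF_DIFFUSE
const unsigned int FVF_TEX1    = 0x100; // D3DFVF_TEX1

// Work out the vertex stride in bytes from the flags.
unsigned int VertexSizeFromFVF(unsigned int fvf)
{
    unsigned int size = 0;
    if (fvf & FVF_XYZ)     size += 3 * sizeof(float);    // position
    if (fvf & FVF_NORMAL)  size += 3 * sizeof(float);    // normal
    if (fvf & FVF_DIFFUSE) size += sizeof(unsigned int); // packed colour
    if (fvf & FVF_TEX1)    size += 2 * sizeof(float);    // one UV set
    return size;
}
```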

I am going to try and get this working on both Win32 and Linux (probably just creating a window, no actual 3D) over the weekend, just as a proof of concept. If people are interested I might write up my results into a short article; maybe GameDev would be interested in it, as it seems there is interest in API-independent engine design.

Thanks for all the comments, if anyone has any other views / ideas / tips please post them, I still have a long way to go and any ideas people have will always be appreciated.

Alan


Also, I think you should have your Create(...) function able to return either a D3D or an OGL renderer on Win32, rather than limiting it to only D3D, because some systems will run OGL better than D3D. Just a suggestion.

Henrym
My Site

Hi!

I want to make a portable engine too.

I'm going to use vertex buffers for D3D and compiled vertex arrays for OGL.

I think the IRenderBuffer class is a good idea, but I have a question.

If I want to linearly interpolate between the vertices of two RenderBuffers, how would I design the engine to do that?

Would a function named SetLinearInterpolateMode(bool) be a clean way of doing this?

My problem is with OGL: there I have to interpolate the vertices using the CPU, but with D3D it is better to write a vertex shader to do the work. However, if I write a vertex shader, I have to calculate the lights in the vertex shader too. I thought this could be done by compiling a new vertex shader whenever a light is created/destroyed, or by writing a vertex shader with a fixed number of lights and setting the lights that are off to zero.

And I only want to use a vertex shader when it is necessary because, for example, vertex shaders are not implemented in hardware on a GeForce 2 (my graphics card).

The SetLinearInterpolateMode function I mentioned before would select either the default pipeline (hardware accelerated on my card) or the custom vertex shader.

I don't know if I explained the problem very well, and my English is very bad. Please, if you don't understand something, tell me.
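One possible sketch of the "recompile when the light count changes" idea is a small cache keyed on light count, so a shader is only built once per configuration. ShaderCache is a hypothetical name, and the integer handles stand in for real compiled shaders; no actual shader API is used here:

```cpp
#include <cassert>
#include <map>

// Cache one "compiled shader" per active light count, so the shader is
// only (re)built the first time a given light count is seen.
class ShaderCache {
    std::map<int, int> byLightCount; // light count -> shader handle
    int nextHandle;
    int compiles;
public:
    ShaderCache() : nextHandle(1), compiles(0) {}

    int GetShaderForLights(int numLights) {
        std::map<int, int>::iterator it = byLightCount.find(numLights);
        if (it != byLightCount.end())
            return it->second;           // reuse previously compiled shader
        ++compiles;                      // stand-in for a real compile step
        byLightCount[numLights] = nextHandle;
        return nextHandle++;
    }

    int CompileCount() const { return compiles; }
};
```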
