Klarre

OpenGL Vertex data structuring for both GL and D3D

This topic is 4338 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.


Recommended Posts

I am working on a rendering engine that supports OpenGL, Direct3D 9, and Direct3D 8 (Xbox). By "supports" I mean that the engine can swap the buffers, so as you can tell it is at a very early stage of development. The next step is to implement a render function in each rendering system. The render function should receive an object named "RenderObject" which holds the vertex data. This RenderObject structure is completely abstracted away from the rendering systems, so I can use it independently of whichever rendering system I am currently using.

So, to the problem: in OpenGL the vertex data is ordered in separate arrays — one array with all positions, one with all colors, one with all normals, and so on. When using D3D, everything is ordered in the same array, like {px, py, pz, nx, ny, nz, cr, cg, cb, ...}.

How do people usually order their vertex data to make it as simple as possible to use independently of the rendering system? Should I, for example, favor OpenGL by keeping the data in separate arrays and let the D3D rendering system rearrange it when rendering with D3D?

Any ideas are welcome. Thanks a lot for your help!
/Klarre

Quote:
Original post by Klarre
So to the problem:
In OpenGL the vertex data is ordered in separate arrays — one array with all positions, one with all colors, one with all normals, and so on. When using D3D, everything is ordered in the same array, like {px, py, pz, nx, ny, nz, cr, cg, cb, ...}

You can use these interleaved arrays with OpenGL, too. So there is no real problem. Here is a tutorial on interleaved vertex arrays in OpenGL.

Pretty funny, but I am also creating an API-independent render manager, and I also have an abstracted RenderObject class. The difference I see between your method and mine is that you are trying to fit one RenderObject class to both APIs, or at least keep it generic, whereas I implemented it as a base class and have RenderObject_GL and RenderObject_D3D9 classes that inherit from it, polymorphism style. The specifics of how vertex buffers (or anything else, really) are ordered are not exposed through the interface; you simply request a new render object with certain properties, and the implementation takes care of things behind the scenes.

I noticed that certain concepts do not translate well from one API to the other, so I tried to keep the interface of these classes pretty generic. That is why I ended up deciding to do things this way: I thought that down the road I would run into so much interference between the APIs that I wouldn't be able to fit them all (both) into a single implementation.

Your method may certainly have its advantages, I'd have to think about it for a while.

