OpenGL a way to port a code?

This topic is 1765 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Recommended Posts

Is there any way to port our code to another SDK? I know DirectX is better than OpenGL, but it only supports Microsoft platforms. I just want to know: if we want to make a multiplatform engine, what is the best way? Write our whole code in OpenGL, or port our code to OpenGL or PSGL for the other platforms like Linux, Mac, and PS? For example, Unreal Engine is based on DirectX but supports all platforms. How is that done? Does a program do it?

tnx


First of all, DX is not better than OpenGL; it's just another alternative, and because OpenGL is not built by one huge company, it sometimes takes a long time to get updated.

So the best way to "port" code really depends on how you write it in the first place, and OOP is the way to go.

A good approach is to make an interface, "RenderManager", with some methods, and use only that interface throughout your code. That way you can have two implementations of "RenderManager": RenderManagerOGL and RenderManagerDX.

 

 

 

class RenderManagerOGL : public RenderManager {};  // 'public', or the interface is inaccessible
class RenderManagerDX  : public RenderManager {};

...

RenderManager *rm = NULL;

#ifdef _WIN32
  rm = new RenderManagerDX();
#else
  rm = new RenderManagerOGL();
#endif
 

That way you make sure both backends have the same methods, and your game can use the same code without problems.

This is just one way to do it; I'm sure there are better solutions, but that's how I would do it.

Edited by Joyal


Portability is something to consider BEFORE you begin coding a project.  It's best to plan out a wrapper class or function(s) covering the functionality of both APIs.  Keep it fairly high level so that swapping backends is simple and involves minimal hassle.

 

I do agree with Joyal to a certain degree, but I'd propose a slightly different approach.  On Windows, instead of compiling the Direct3D and OpenGL code into the same binary, I recommend creating separate .dll files that you load manually to support whichever API the user needs or requires.  This is what the Unreal Engine does (or did, at least).  From a .dll you can explicitly export classes, functions, and variables, and load them manually using ::LoadLibrary and GetProcAddress.  It's also convenient if you plan on supporting multiple versions of Direct3D, because it helps prevent naming collisions and whatnot.  I just see it as less hassle to do it this way, unless you're on a platform besides Windows that uses only one API like OpenGL, or a custom API for consoles.

 

Just make sure you think things through before taking action.  It will save you much trouble in the long run.

 

Shogun.


Just make sure you think things through before taking action.  It will save you much trouble in the long run.

 

And just make sure you don't think things through too much. No battle plan survives contact with the enemy; in other words, the design you end up implementing will almost certainly be quite different from the one you originally thought up. So don't overthink it: get a rough idea, plan things out, and start iterating, otherwise you'll spend months wasting time, conceptualizing... making UML diagrams...

Edited by Bacterius


Well, to start, just choose whichever API you are more comfortable with, OpenGL or DirectX, and keep all low-level platform-dependent code in a separate directory or library. Create your own custom wrappers for all low-level functionality.

For example, you can create MyRenderDevice for the D3D device, MyVertexBuffer (or D3DVertexBuffer) for vertex buffers, MyWindow to handle window management on Win32, etc.

Then, to port your code to a new platform, let's say OpenGL, you just have to replace these low-level classes/libraries.

 

One interesting approach is to keep the header files for MyRenderDevice etc. the same, and select the implementation with #ifs according to the platform it's running on.

 





A Quick Example

////MyRenderDevice.h

class MyRenderDevice
{
public:
    void Init();
    void RenderBegin();
    void RenderEnd();
};


/////MyRenderDevice.cpp

// GfxDriver, DirectX11 and OpenGL are assumed to be integer macros
// defined by the build system.
#if GfxDriver == DirectX11

void MyRenderDevice::Init()
{
    ID3D11Device *mydevice;
    ....
}

void MyRenderDevice::RenderBegin()
{
}

void MyRenderDevice::RenderEnd()
{
}

#elif GfxDriver == OpenGL

void MyRenderDevice::Init()
{
    OpenGLDeviceContext mydevice;
    ....
}

void MyRenderDevice::RenderBegin()
{
}

void MyRenderDevice::RenderEnd()
{
}

#endif

 

 

Another approach is to make an interface from which the actual RenderDevice implementations derive, somewhat similar to what Joyal mentioned above. But it would be nice to build each renderer as a separate library and keep a separate variable for choosing the gfx driver, since you may want to run OpenGL on Windows, or different versions of DirectX (D3D9, D3D11, etc.) on Windows -

 





class IRenderDevice
{
......
};

#if Driver == DirectX11

class RenderDeviceDX11 : public IRenderDevice
{
.....
};

typedef RenderDeviceDX11 MyRenderDevice;  // or #define

#elif Driver == Opengl

class RenderDeviceGL : public IRenderDevice
{
.....
};

typedef RenderDeviceGL MyRenderDevice;  // or #define

#endif


////And later in app

class Scene
{
    MyRenderDevice *device;

    void Render()
    {
        device->RenderBegin();
        ...
        device->RenderEnd();
    }
};

There are many ways to do this, but I would also agree with blueshogun96 that it's better to have separate libraries.

I think it would be better if you spend a few days looking at some good open-source cross-platform engines. Here are two of many -

 

WildMagic

 

Ogre3D

 

(These engines also keep things in separate libraries)

 

 


