Retsu90

Member Since 21 Dec 2011
Offline Last Active Mar 14 2014 06:11 PM

Topics I've Started

List of classes

02 May 2013 - 09:13 AM

I know this may not be the right title, but I had some difficulty explaining the problem in one short line.

Basically I'm creating a framework with a high level of abstraction. I'm looking for a decent way to build an initialization step where I describe how every object should work, plus a way to use that description later to create an object from a specified index. I need to create some complex objects that share the same structure but have different attributes. I have found a way, but it doesn't seem efficient:

#include <stdio.h>

// Base interface: every object knows how to create a new instance of its own type.
class IObject
{
public:
    virtual ~IObject() {} // virtual destructor, required to safely delete through IObject*
    virtual IObject* Create() = 0;
    virtual void Main() = 0;
};

class A : public IObject
{
public:
    IObject* Create()
    {
        return new A;
    }
    void Main()
    {
        printf("A");
    }
};

class B : public IObject
{
public:
    IObject* Create()
    {
        return new B;
    }
    void Main()
    {
        printf("B");
    }
};

// Registry of prototype objects; the order of AddObject calls defines the index.
size_t objCount = 0;
IObject* objList[0x10];

void AddObject(IObject* o)
{
    objList[objCount++] = o;
}
void RemoveAllObjects()
{
    while (objCount)
        delete objList[--objCount];
}
IObject* CreateObject(int index)
{
    return objList[index]->Create(); // clone the prototype stored at that index
}

void Main()
{
    IObject* a = CreateObject(0);
    IObject* b = CreateObject(1);

    a->Main();
    b->Main();

    delete a;
    delete b;
}

int main()
{
    AddObject(new A); // these two prototype instances are never used directly
    AddObject(new B);

    Main();

    RemoveAllObjects();
}

 

AddObject adds the object description and associates it with an index; CreateObject should create it. It works, but the problem is that with AddObject I'm creating two objects that will never be used. I'd like to do something like AddObject(A), AddObject(B) without instantiating anything, and then create, for example, object B for the first time by calling CreateObject(1), without needing a startup object to read information from about how to use the memory. I want it to be totally dynamic; I don't want to use a switch(index) { case 0: return new A; ... }.

So, is it possible to create something like a list of classes instead of a list of objects?
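To clarify what I'm after, here is a rough sketch (reusing the IObject, A and B classes above) of the kind of registration I have in mind, assuming a function template is an acceptable approach; AddClass, CreateInstance and classList are just hypothetical names for this example:

typedef IObject* (*Factory)(); // a "class" entry is just a creation function

template <typename T>
IObject* CreateInstance()
{
    return new T;
}

size_t classCount = 0;
Factory classList[0x10];

template <typename T>
void AddClass()
{
    classList[classCount++] = &CreateInstance<T>; // no object is created here
}

IObject* CreateObject(int index)
{
    return classList[index](); // the first instance is created only now
}

int main()
{
    AddClass<A>();
    AddClass<B>();

    IObject* b = CreateObject(1); // the first B ever created
    b->Main();
    delete b;
}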


Abstraction layer between Direct3D9 and OpenGL 2.1

07 April 2013 - 06:28 AM

Hi, I'm currently working on an abstraction layer between Direct3D9 and OpenGL 2.1.

Basically there is a virtual class (the rendering interface) that can be implemented by a D3D9 class or a GL21 class. The problem I have is that I don't know how to store and send the vertices. D3D9 provides FVF, where every vertex contains the position, the texture UV and the color packed into a DWORD value. OpenGL handles the individual vertex elements in different memory locations (one array for positions, one for texture UVs and one for colors in vec4 float format).

Basically I want an API-independent way to create vertices: the application should store them in a block of memory and then send them all at once to the render interface, but how? I know that OpenGL can consume vertices in a D3D-style interleaved layout (with glVertexAttribPointer, managing the stride and pointer values), which seems slower than the native way, but how do I manage, for example, the colors? D3D9 accepts a single 32-bit integer value, while OpenGL manages it in a more flexible (but heavier) way, storing each color channel in a 32-bit floating-point value. In this case I could pass the 32-bit integer value to the vertex shader and then convert it into a vec4, right? But all these operations seem too heavy.

In the future I want this abstraction layer to be good enough to implement other rendering back ends such as Direct3D11, OpenGL ES 2.0 and others. So my main question is: is there a nice way to abstract these two rendering APIs without changing the rest of the code that uses the rendering interface?


Most efficient way to batch drawings

27 December 2012 - 06:02 AM

Hi, I'm interested in a bit of theory about the best optimization methods for OpenGL 3.0 (where a lot of functions became deprecated).

In my current 2D framework, every sprite has its own program with its own uniform values. Every sprite is drawn separately and, now that I have switched from 2.1 to 3.0, every sprite has its own projection and view matrices. My goal now is to batch as many vertices as possible, and these are some ideas:

1) Use only one program for everything. There is a single projection matrix, so I can group the vertices, send the per-vertex values to the shader via glVertexAttribPointer, and draw everything with one call. The problem is the model-view matrix, which would then be the same for every vertex, and that isn't what I want because every sprite has its own matrix.

2) Continue to use separate shaders. The projection matrix is shared between the programs (how can I do that?), and every sprite has its own shader with its own model-view matrix and uniform values. The problem here is that I need to switch the program between sprite draws.

 

Neither of these ideas works as I expected, so I'm here to ask you: what is the most efficient way to batch drawing in OpenGL 3.0?
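For the shared projection matrix in idea 2, the closest thing I've found is a uniform buffer object bound to a common binding point; this is only a sketch of my understanding, and it assumes GL 3.1 or the ARB_uniform_buffer_object extension is available (the block name Matrices and binding point 0 are arbitrary):

// GLSL, in every program that needs the shared matrix:
//   layout(std140) uniform Matrices { mat4 projection; };

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(float) * 16, projectionMatrix, GL_STATIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo); // attach the buffer to binding point 0

// Once per program, after linking: route its "Matrices" block to binding point 0.
GLuint blockIndex = glGetUniformBlockIndex(program, "Matrices");
glUniformBlockBinding(program, blockIndex, 0);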


Default color attribute

22 December 2012 - 04:54 AM

Hi,
I'm converting my application from OpenGL 2.0 to OpenGL 3.0. I was using glVertexPointer, glTexCoordPointer and glColorPointer to send the vertices to the GPU, and when I didn't need the color I simply disabled it with glDisableClientState(GL_COLOR_ARRAY), so gl_Color in the vertex shader defaulted to white.
Now I'm using glDisableVertexAttribArray to disable attributes and glVertexAttribPointer to send the data, and here comes the problem: glDisableVertexAttribArray(COLOR) makes the color default to black in the vertex shader! I don't want to fill every vertex with 1.0f and be forced to keep the color attribute enabled, so I'm asking whether there is a way not to send the color where it isn't required, without resorting to special tricks (like one fragment shader for colored geometry and another fragment shader for texture-only geometry).

My shaders look like this:
#version 330 core
in  vec4 in_Position;
in  vec4 in_Color;
in  vec4 in_Texture;
out vec3 texcoord;      // "varying" is not valid in 330 core; use out/in instead
out vec4 vertex_color;

void main()
{
	gl_Position = in_Position;
	texcoord = in_Texture.xyz;
	vertex_color = in_Color;
}
#version 330 core
uniform sampler3D tex;  // renamed: "texture" clashes with the built-in texture() function
in  vec3 texcoord;
in  vec4 vertex_color;
out vec4 frag_color;    // gl_FragColor is not available in 330 core

void main()
{
	vec4 precolor = texture(tex, texcoord);
	precolor *= vertex_color;
	frag_color = precolor;
}
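The closest thing I've found so far is setting a constant value for the attribute while its array is disabled, since a disabled generic attribute takes the current value set with glVertexAttrib; COLOR_LOCATION here is just a placeholder for whatever location in_Color is bound to, and I'm not sure this is the intended approach:

// Disable the per-vertex color array and give the attribute a constant white value.
glDisableVertexAttribArray(COLOR_LOCATION);
glVertexAttrib4f(COLOR_LOCATION, 1.0f, 1.0f, 1.0f, 1.0f); // used for every vertex while the array is disabled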


nVidia bug on texture depth?

08 October 2012 - 05:38 AM

Hi,
I've noticed a possible bug on nVidia cards. I'm developing a game that handles 3D textures, and when I need to use a layer from a 3D texture I use a formula like this:
z = 1.0f / textureDepth * layer
On my home computer this formula works without problems (I'm using an ATI Radeon 4800 series) and it renders the layer that I want, but on nVidia (and also on Intel HD 3000) it doesn't. The problem can be worked around by editing the formula:
z = 1.0f / textureDepth * layer + 0.00001
Has anyone noticed this before? I can't find anything about it on gamedev or on Google...

EDIT: This problem happens when the texture depth is an odd number
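For reference, I've also tried offsetting the coordinate to the middle of the layer instead of its lower boundary, which I believe is the usual way to address a layer of a 3D texture; whether this fully explains the difference between the drivers is an assumption on my part:

// Sample the center of 'layer' instead of the boundary between two layers.
float z = (layer + 0.5f) / textureDepth;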
