

Member Since 21 Dec 2011
Offline Last Active Mar 14 2014 06:11 PM

Posts I've Made

In Topic: List of classes

03 May 2013 - 08:09 AM

That code doesn't work because I'm using C++03. 

Meanwhile I tried to find a solution, and it seems I finally found it!


I copied the pointers to A::Create and B::Create into the array, and I call them when necessary. But the problems don't end there: this works with the MinGW compiler, but not with the MSVC one. The error is C2440: cannot convert from 'IObject *(__thiscall A::* )(void)' to 'IObject *(__thiscall *)(void)'.

In Topic: Abstraction layer between Direct3D9 and OpenGL 2.1

09 April 2013 - 12:58 PM

I read all the replies carefully; thanks for the tips.
I learned OpenGL 1.1 and 2.1 some time ago, but I don't know many things about Direct3D9 (for example the possibility of specifying the vertex declaration without using FVF), the global shader uniforms (thanks mhagain for this information), and others. Fortunately the framework will be pretty tiny: it will support a single vertex and fragment shader (compiled during initialization and used until the end of the application), it will set the view matrix only on window resize, and there is no world matrix (I'm translating all the vertices on the CPU side for specific reasons). The only things the framework should do are initialize the rendering device, set the view, create and/or load textures, and accept indexed vertices. I think a "preferred API" can be a choice in applications/games that require a lot of performance with heavy 3D models and effects.
So until now, I have a virtual class with these members:
GetDeviceName(), Initialize(), SetClearColor(float, float, float), RenderBegin() (handles glBegin or pd3d9->BeginScene() if it's the fixed pipeline), RenderEnd() (glEnd or pd3d9->EndScene() with the swap of the backbuffer), SendVertices(data, count, primitiveType), CreateTexture(id&, width, height, format), UseTexture(id) and SetView(width, height). All the work done by the render device (shader creation, uniform assignment, storing various textures in a single big texture, vertex caching, texture filtering, render to texture (glCopyTexImage2D should work with GL 2.1), etc.) will be transparent, like a black box. The framework works fine with rotated sprites, cubes, etc.; the only weird thing I noticed is that Direct3D9 handles the colors as ABGR and OpenGL as ARGB, but I can imagine that shaders will cover this issue.
So with big projects, a preferred API is chosen and during porting these APIs are wrapped, right? This reminds me that Final Fantasy 1 for Windows Phone is a port of the Android version, which is itself a port of the iOS one, which is in turn a port of the PSP version; in fact the game is... horrible. In the case of powerful engines that promise high performance like Unity, UDK, etc., do they program directly against the low-level APIs and then, during porting, change most of the code without wrapping the original API?

In Topic: Abstraction layer between Direct3D9 and OpenGL 2.1

07 April 2013 - 07:36 AM

I'd recommend not supporting fixed-function pipeline at all, and instead requiring pixel and vertex shaders.
In D3D9, this means that you'd be using vertex declarations instead of FVFs.
Both D3D and GL allow you to store arrays of individual attributes or strided structures of attributes. The optimal memory layout depends on the GPU.
I've not heard of a 'D3D compatibility mode' in GL...
Both APIs let you have 4-byte vertex attributes (e.g. for colors).

Yes, I found the vertex declaration more flexible than FVF. How does D3D support individual attributes? And how can I tell whether the GPU prefers packed or individual vertex attributes? It seems that OpenGL manages better one way, Direct3D9 another. I found the D3D9 compatibility mode here


How are you creating your vertices anyway? I would create an API-independent vertex layout that fits your needs, and have the actual rendering implementation translate this into the correct vertex layout for each API. I don't know if this is the optimal way, but at least it will cause the fewest problems, since the actual GPU vertices are guaranteed to fit the current API's needs. Creation could take a bit longer, but that all depends on how intelligently you implement the translation of your generic vertex format to the API-specific one. Pseudocode might go like this:


ModelFile modelFile = LoadFile("Model.myext");

std::vector<GenericVertexFmt> vertices(modelFile.numVertices);

for (size_t i = 0; i < modelFile.numVertices; ++i)
    vertices[i] = modelFile.vertices[i]; // optimal if your file format is API independent, too

renderClass.createVertexObject(vertices); // creates a vertex object for the currently implemented API

That's at least how I'd do it; again, there might be a better approach. I haven't done this myself, but it appears to be a good possibility.

This can obviously be a solution, but it can be heavyweight when there are a lot of vertices. Currently I'm creating a lot of vertices at runtime for sprites with XYZ coords, UV coords, colors, and a value that points to the palette index (the value is processed by the fragment shader).

In Topic: Most efficient way to batch drawings

04 January 2013 - 09:08 AM

Okay, in the last few days I rewrote the entire sprite system. I'm using a pre-calculated unsigned short array for vertex indices, and I'm copying the four vertices of each sprite into an array used as a cache. Now the framework reaches 997 fps with 20000 triangles. I'm currently using glVertexPointer and glDrawElements for OpenGL 2.1 compatibility. I'm binding only one texture per frame. I discovered that the rendering isn't really CPU-limited: in fact I overclocked my video card (it was underclocked to save power) and the framework reaches 1800 fps. For VBOs I don't understand exactly how to initialize and use them properly; isn't it the same thing to cache all the vertices in main memory and then send them all together before calling SwapBuffers? I also forgot to mention that roughly 90% of the vertices change every frame, so caching them on the video card doesn't have a great effect... Also, I don't understand how to buffer the uniforms and how to use them. Is it also possible to avoid glBindTexture? I know that I can upload the textures into a single big texture, but I'm asking if there is another way to batch/buffer the texture binding.

In Topic: Most efficient way to batch drawings

29 December 2012 - 11:58 AM

For some reason, VBOs decrease performance, and with them SwapBuffers takes a lot of CPU. However, all these methods are CPU-limited, because the GPU isn't fully used. Much of the CPU time is drained by memcpy and SwapBuffers.


EDIT: I tried the same tests with the same software, without edits, on another computer with an Intel HD3000 (the first tests ran on a Radeon HD 4870): 62, 178, 124, 163, 97, 207 fps. VBO with triangle list is much faster this time. I'm starting to get confused...


If you gave us more details about the way you measured the time, maybe we could find the cause. SwapBuffers is not a time-consuming instruction; the reason it takes time is that it waits for drawing to finish. That implies your measured time is incorrect. How did you measure it?

I'm measuring it with gDEBugger, setting SwapBuffers as the end of frame. With the Visual Studio profiler, I can clearly see that SwapBuffers takes 50% of the CPU in a single frame.