


Topics I've Started

Legal stuff when using free art / models in your own game

25 June 2014 - 09:27 AM

I am wondering how, and to what extent, free art / sprites / 3D models can be used in one's own game.


For example, let's take the case of an animated Mickey Mouse sprite, where the readme inside states:


"This sprite may be freely distributed UNALTERED.  Which means, you can't pull the readme file out of the zip,or add your own stuff to it and pass it along as your own!"


I believe including it in the game and selling it bundled would not be allowed.

On the other hand, if the user downloads the free model himself and installs it in the game folder, there won't be any problem.


Now, where is the boundary?

Let's consider the following cases: what happens if the game does not contain this sprite itself, but

- downloads and installs it automatically after installation?

- downloads and installs it on request (pushing a button)?

- contains a script to download and install the sprite?

High Speed Quadric Mesh Simplification without problems (Resolved)

10 May 2014 - 05:51 AM

Update, May 13th: Download the working version here.

MIT license, well documented, and working without errors.

The latest source is also included in the lowermost reply.



Summary: I am currently working on a new implementation of a quadric-based mesh simplification algorithm. It is quite short (about 130 lines for the main part), but apparently there is a bug somewhere. I have already searched for a while but couldn't find it. If one of you can spot it, any help is appreciated.

In turn, everyone gets a great mesh simplification method for free once it works.

It is able to remove 90% of 650,000 triangles in about 1 second (single-threaded), using less memory than most other methods.


Algorithm: Unlike the original method, it does not store the per-edge error in an edge list, but per triangle. This avoids the edge list completely, since an edge list is slow to create and update. On the other hand, every error is computed twice - but that is not a serious cost.



calculate_error computes the error between two vertices and outputs the vertex the edge might be collapsed to.

reduce_polygons is the main function that reduces the mesh.


The Problem: Here is a screenshot of the bug: some faces are flipped and look displaced.




Here is the source code:

struct Triangle { int v[3]; double err[3]; bool deleted; };
struct Vertex   { vec3f p,n; int dst,dirty; Matrix q; };
std::vector<Triangle> triangles;
std::vector<Vertex>   vertices;

double vertex_error(Matrix q, double x, double y, double z);
double calculate_error(int id_v1, int id_v2, vec3f &p_result);

void reduce_polygons()
{
    // Init defaults
    loopi(0,triangles.size()) triangles[i].deleted=0;

    // Init quadric by plane
    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i]; vec3f n,p[3];

        loopj(0,3) p[j]=vertices[t.v[j]].p;

        // accumulate the plane quadric of this triangle into its vertices
        loopj(0,3) vertices[t.v[j]].q = // ... (expression truncated in the post)
    }

    // Calc edge error
    loopi(0,triangles.size())
    {
        Triangle &t=triangles[i]; vec3f p;
        loopj(0,3) t.err[j]=calculate_error(t.v[j],t.v[(j+1)%3],p);
    }
    int deleted_triangles=0;

    loopl(0,25) // iterations
    {
        // remove vertices & mark deleted triangles
        loopi(0,triangles.size()) if(!triangles[i].deleted)
        {
            Triangle &t=triangles[i];
            if(vertices[t.v[0]].dirty) continue;
            if(vertices[t.v[1]].dirty) continue;
            if(vertices[t.v[2]].dirty) continue;

            loopj(0,3)
            {
                int i0=t.v[ j     ]; Vertex &v0 = vertices[i0];
                int i1=t.v[(j+1)%3]; Vertex &v1 = vertices[i1];

                bool test1=t.err[j] < 0.00000001*l*l*l*l*l;
                bool test2=(v0.p-v1.p).length()<0.3*l;
                // remove based on edge length and quadric error
                if(test1 && test2)
                {
                    // ... (edge collapse elided in the post)
                }
            }
        }
        // update connectivity
        loopi(0,triangles.size()) if(!triangles[i].deleted)
        {
            Triangle &t=triangles[i];

            loopj(0,3)
                if(t.v[j]==t.v[(j+1)%3] || t.v[j]==t.v[(j+2)%3] )
                {
                    // two equal points -> delete triangle
                    // ... (elided in the post)
                }

            // check whether any vertex changed
            bool dirty=0;
            loopj(0,3) dirty|=vertices[t.v[j]].dirty;

            // update error
            vec3f p;
            loopj(0,3) t.err[j]=calculate_error(t.v[j],t.v[(j+1)%3],p);
        }
        // clear dirty flag
        loopi(0,vertices.size()) vertices[i].dirty=0;
    }
}

double vertex_error(Matrix q, double x, double y, double z)
{
     return   q[0]*x*x + 2*q[1]*x*y + 2*q[2]*x*z + 2*q[3]*x + q[5]*y*y
          + 2*q[6]*y*z + 2*q[7]*y + q[10]*z*z + 2*q[11]*z + q[15];
}

double calculate_error(int id_v1, int id_v2, vec3f &p_result)
{
    Matrix q_bar = vertices[id_v1].q + vertices[id_v2].q;
    Matrix q_delta (  q_bar[0], q_bar[1],  q_bar[2],  q_bar[3],
                      q_bar[4], q_bar[5],  q_bar[6],  q_bar[7],
                      q_bar[8], q_bar[9], q_bar[10], q_bar[11],
                             0,        0,          0,        1);
    if ( double det = q_delta.det(0, 1, 2, 4, 5, 6, 8, 9, 10) )
    {
        p_result.x = -1/det*(q_delta.det(1, 2, 3, 5, 6, 7, 9, 10, 11));
        p_result.y =  1/det*(q_delta.det(0, 2, 3, 4, 6, 7, 8, 10, 11));
        p_result.z = -1/det*(q_delta.det(0, 1, 3, 4, 5, 7, 8, 9, 11));
    }
    else
    {
        vec3f p1=vertices[id_v1].p;
        vec3f p2=vertices[id_v1].p;
        vec3f p3=(p1+p2)/2;
        double error1 = vertex_error(q_bar, p1.x,p1.y,p1.z);
        double error2 = vertex_error(q_bar, p2.x,p2.y,p2.z);
        double error3 = vertex_error(q_bar, p3.x,p3.y,p3.z);
        double min_error = min(error1, min(error2, error3));
        if (error1 == min_error) p_result=p1;
        if (error2 == min_error) p_result=p2;
        if (error3 == min_error) p_result=p3;
    }
    double min_error = vertex_error(q_bar, p_result.x, p_result.y, p_result.z);
    return min_error;
}

You can download the full source + mesh data here: DOWNLOAD

400% Raytracing Speed-Up by Re-Projection (Image Warping)

03 May 2014 - 09:42 PM

Intro: I have been working on this technology for a while, and since real-time raytracing is getting faster, e.g. with the Brigade raytracer, I believe this can be an important contribution to the area, as it might bring raytracing one step closer to being usable for video games.


Algorithm: The technique exploits temporal coherence between two consecutively rendered images to speed up ray-casting. The idea is to store the x-, y-, and z-coordinates of each pixel in the scene in a coordinate buffer and re-project them into the following frame using the differential view matrix. The resulting image will look as below.

The method then gathers empty 2x2 pixel blocks on the screen and stores them in an index buffer for raycasting the holes; raycasting single pixels is too inefficient. Small holes remaining after the hole-filling pass are closed by a simple image filter. To improve the overall quality, the method updates the screen in 8x4 tiles by raycasting an entire tile and overwriting the cache; this way, the entire cache is refreshed after 32 frames. Further, a triple-buffer system is used: two image caches that are copied to alternately, and one buffer that is written to. This is done because it often happens that a pixel is overwritten in one frame but becomes visible again in the next frame. Therefore, before the hole filling starts, the two cache buffers are projected into the main image buffer.
Results: Most of the pixels can be re-used this way, as only a fraction of the original image needs to be raycast. The speed-up is significant - up to 5x the original speed, depending on the scene. The implementation is applied to voxel octree raycasting using OpenCL, but it can equally be used for conventional triangle-based raytracing.
Limitations: The method also comes with limitations, of course. The speed-up depends on the motion in the scene, and the method is only suitable for primary rays and for pixel properties that remain constant over multiple frames, such as static ambient lighting. Further, during fast motion, the silhouettes of geometry close to the camera tend to lose precision, and geometry in the background does not move as smoothly as if the scene were fully raytraced every frame. Future work might include creating suitable image filters to avoid these effects.
How to overcome the limitations:

Ideas to solve noisy silhouettes near the camera during fast motion:

1. Suppress unwanted pixels with a filter by analyzing the depth values in a small window around each pixel. In experiments this removed some artifacts, but not all - and it also had quite an impact on performance.

2. (Not fully explored yet) Assign a speed value to each pixel and use it to filter out unwanted pixels.

3. Create a quad-tree-like triangle mesh in screen space from the raycasted result. The idea is to get smoother frame-to-frame coherence with less pixel noise for far pixels, and to let the z-buffer handle overlapping pixels. It is sufficient to convert one tile of the raycasted result to a mesh per frame. The problem with this method: the mesh tiles don't fit together properly, as they are raycasted at different time steps, and using raycasting to fill in the holes was not simple - which is why I stopped exploring this method further. See http://www.farpeek.com/papers/IIW/IIW-EG2012.pdf, fig. 5b.

** Untested Ideas **

4. Compute silhouettes based on depth discontinuities and remove pixels crossing them.
5. Somehow do a reverse trace in screen space between two frames and test for intersection.
6. Use splats to rasterize voxels close to the camera so that speckles are covered.
You can find the full text, including paper references, here.

Minecraft like Game with Voxel Sculpting & GPU Raycasting

02 April 2014 - 10:24 PM

Here is the progress of my current project: Voxel Master.

It's a combination of high-res voxel sculpting and Minecraft-like sandbox editing.

The terrain rendering is done with polygons; the voxel rendering uses sparse voxel octree (SVO) raycasting.

I have uploaded work-in-progress footage showing the current state:



GPU Skinned Skeletal Animation Tutorial

02 March 2014 - 12:01 PM

Today I completed a GPU-skinned skeletal animation tutorial, which is very helpful if you are just about to start with game development.


Unlike the other tutorials I found on the web, this one is very lightweight (< 800 lines for the main mesh & animation code) and works well with most modeling environments.



It has the following properties / features:

  • GPU Skinning / Matrix Palette Skinning
  • Bump Mapping (automatic normal map generation)
  • Spherical environment mapping
  • Ogre XML Based
  • Shaders in GLSL
  • Visual Studio 2010
  • Select LOD level with F1..F5


It is ready to use, which means you can load and display the animated models in a few lines of code:

static Mesh halo("halo.material",      // required material file
                 "halo.mesh.xml",      // required mesh file
                 "halo.skeleton.xml"); // optional skeleton file

int idle = halo.animation.GetAnimationIndexOf("idle");

halo.animation.SetPose(idle,	   // animation id (2 animations, 0 and 1, are available)
  		    time_elapsed); // time in seconds. animation loops if greater than animation time

halo.Draw( vec3f(0,0,0),	// position
	   vec3f(0,0,0),	// rotation
	    0);			// LOD level 


Getting a bone matrix, e.g. to put a weapon in the player's hand, is also very simple:

int index  = halo.animation.GetBoneIndexOf("joint1"); 
matrix44 m = halo.animation.bones[ index ].matrix;


Setting an arm joint individually, e.g. for aiming a weapon, works as follows (press F6 in the demo):

// get the index
int index  = halo.animation.GetBoneIndexOf("joint2"); 
// get / modify / set the matrix
matrix44 m = halo.animation.bones[ index ].matrix;
m.y_component()=vec3f(0,1,0); // set the rotation to identity
halo.animation.bones[ index ].matrix=m;
// re-evaluate the child bones
loopi(0,halo.animation.bones[ index ].childs.size())
        halo.animation.bones[ index ].childs[i], // bone id
        halo.animation.animations[0],            // animation
        -1);                                     // key frame; -1 means: do not use the animation



The workflow from the modeling tool to the tutorial is:

  • Design the model in Maya/MAX/Blender/etc.
  • Export the model using the OgreExporter
  • Convert the model from Ogre binary to Ogre XML (batch file is included)
  • Load the model in the tutorial code


Main Skinning in GLSL:

The main skinning is done in the vertex shader and only requires a few lines.

For the shader, the skinning weights are stored in the color attribute as 3 floats.

The bone IDs are unpacked from the w coordinate of the position attribute.

The bone matrices are stored as a simple matrix array.


uniform mat4 bones[100];
uniform int  use_skinning;

void main(void)
{
    mat4 mfinal = gl_ModelViewMatrix;

    // skinning
    if(use_skinning==1)
    {
        vec3 weights= gl_Color.xyz;
        vec3 boneid = gl_Vertex.w * vec3( 1.0/128.0 , 1.0 , 128.0 );
        boneid = (boneid - floor(boneid))*128.0;

        mat4 mskin  =  bones[int(boneid.x)]*weights.x+
                       bones[int(boneid.y)]*weights.y+
                       bones[int(boneid.z)]*weights.z;
        mfinal = mfinal * mskin;
    }
    gl_Position = gl_ProjectionMatrix * mfinal * vec4(gl_Vertex.xyz,1.0);
}

Animation Notes for Maya
For Maya, put all animations into one timeline and export them as separate animations.
Ogre Export
Tangents need to be exported as 4D to include the handedness. The tutorial version does not generate the tangents in the shader, as it is faster to read them from memory.
Bump Mapping
For the Ogre material file, the bump map needs to be added manually using the texture_bump parameter, as follows:
texture Texture\masterchief_base.tif
texture_bump Texture\masterchief_bump_DISPLACEMENT.bmp
Environment Mapping
Environment mapping can be switched on in the material file using the following parameter:
env_map spherical