NameInUse

OpenGL 2D texture bleeding


Hello there. I'm using OpenGL 1.4. In short, I suffer from this well-known problem, described in detail in the following link:

http://gamedev.stackexchange.com/questions/46963/how-to-avoid-texture-bleeding-in-a-texture-atlas

Now, what irritates me is that I can play 2D games without experiencing these problems (say, à la Spelunky), but I cannot fix this issue even though I have tried every approach I could find (a border in the texture atlas, +0.5 in the texture coords; using integers for drawing is a no-no). This problem must be common, yet I cannot find a solution. Any hint?

 

 


It's a very straightforward issue to deal with; you just need to know how texture coordinates and screen coordinates work, and then be very specific with how you choose your values. Doing things like adding a magic 0.5 is not based on understanding.

 

A 2-pixel-wide texture is addressed like this -- for the 0th texel, its left edge is at 0/width = 0, its centre is at 0.5/width = 0.25, and its right edge is at 1/width = 0.5:

[Image z9dxcYW.png: texel addressing for a 2-pixel-wide texture]

 

A 2-pixel-wide back-buffer/FBO is addressed like this -- for the 0th pixel, its left edge is at (0/width)*2-1 = -1, its centre is at (0.5/width)*2-1 = -0.5, and its right edge is at (1/width)*2-1 = 0:

[Image INVb76l.png: pixel addressing for a 2-pixel-wide back-buffer]

When drawing 2D graphics, you generally use vertex positions placed at the corners of screen pixels, with UVs at the corners of texels. Pixels are rasterized only if a triangle covers their centre point, and that is the location at which the vertex attributes are interpolated.

So say you want to draw the above 2x2 texture to this 2x2 screen -- if you have a quad with positions from -1 to +1 and UVs from 0 to 1, then the left pixel, with a centre point of -0.5, is covered, so the attributes get interpolated there, producing a UV value of 0.25, which lies exactly at a texel centre, so no bleeding can possibly occur.
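
For concreteness, a minimal sketch of that setup in fixed-function GL (all values below are hypothetical examples, not taken from the thread): positions sit at pixel corners via a pixel-space ortho projection, and UVs sit at texel corners, so interpolation at a pixel centre always lands on a texel centre.

// Hypothetical example values: 256x256 atlas, 64x64 tile at (64, 0),
// drawn at pixel (100, 50) in an 800x600 viewport.
const int atlasW = 256, atlasH = 256;
const int tileX = 64, tileY = 0, tileW = 64, tileH = 64;
const float screenX = 100.0f, screenY = 50.0f;

glMatrixMode(GL_PROJECTION);
glLoadIdentity();
glOrtho(0, 800, 600, 0, -1, 1);              // 1 unit == 1 pixel, y-down

float u0 = tileX           / float(atlasW);  // texel corners; no half-texel fudge
float v0 = tileY           / float(atlasH);
float u1 = (tileX + tileW) / float(atlasW);
float v1 = (tileY + tileH) / float(atlasH);

glBegin(GL_QUADS);                           // vertex positions at pixel corners
glTexCoord2f(u0, v0); glVertex2f(screenX,         screenY);
glTexCoord2f(u1, v0); glVertex2f(screenX + tileW, screenY);
glTexCoord2f(u1, v1); glVertex2f(screenX + tileW, screenY + tileH);
glTexCoord2f(u0, v1); glVertex2f(screenX,         screenY + tileH);
glEnd();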


But that is exactly what I did, and I still see these in-between lines:

    gW      = float32(graphic.w)
    gH      = float32(graphic.h)
    gX      = float32(graphic.x)
    gY      = float32(graphic.y)
    
    oX      = 0.5 / gW
    oY      = 0.5 / gH
    
    textW   = 1f32 / float32(texture.w)
    textH   = 1f32 / float32(texture.h)

    left    = (gX + oX + (float32(flipX)     * gW)) * textW
    right   = (gX + oX + (float32(not flipX) * gW)) * textW
    top     = (gY + oY + (float32(flipY)     * gH)) * textH
    bottom  = (gY + oY + (float32(not flipY) * gH)) * textH

    width   = float32(gW) * dg.scaleX
    height  = float32(gH) * dg.scaleY

  glTexCoord2f  left,       bottom
  glVertex2f    x,          y
  glTexCoord2f  right,      bottom
  glVertex2f    x + width,  y       
  glTexCoord2f  right,      top
  glVertex2f    x + width,  y + height

  glTexCoord2f  left,       bottom
  glVertex2f    x,          y
  glTexCoord2f  right,      top
  glVertex2f    x + width,  y + height
  glTexCoord2f  left,       top
  glVertex2f    x,          y + height
Edited by NameInUse


textW = 1f32 / float32(texture.w)
textH = 1f32 / float32(texture.h)

 

I won't even ask what those are. Floats are imprecise; you are better off writing an ARB vp/fp shader that reads back the proper texel. You think you define 1.0, but you may not get exactly that. Anyway: disable mipmaps and write a shader. It is possible that you only get that result on your graphics card.
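
For reference, a minimal sketch of that texture setup in plain GL 1.x ('tex', 'atlasW'/'atlasH' and 'pixels' are placeholder names, not from the post):

glBindTexture(GL_TEXTURE_2D, tex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);   // no mipmaps involved
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE); // don't wrap across edges
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, atlasW, atlasH, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);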

 

 

And maybe define an integer or byte that represents the texture, like this. (You could even use glTexCoord2d, but that shouldn't help.)

struct TTexturePosition
{
    int type;
    float x;
    float y;
    float w;
    float h;
};

const int tex_dirt  = 0;
const int tex_grass = 1;

TTexturePosition TypeToAtlasPos(int type)
{
    TTexturePosition p;
    p.type = type;
    // Either pass hard-coded values per type here, or -- treating the type as
    // an index into a regular grid -- use modulo and division to find the tile.
    // (A 4x4 grid of 64x64 tiles in a 256x256 atlas is an assumed example.)
    const int   tilesPerRow = 4;
    const float tileSize    = 64.0f / 256.0f;
    p.x = float(type % tilesPerRow) * tileSize;
    p.y = float(type / tilesPerRow) * tileSize;
    p.w = tileSize;
    p.h = tileSize;
    return p;
}
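
Usage would then look something like this (names from the sketch above):

TTexturePosition grass = TypeToAtlasPos(tex_grass);
// grass.x / grass.y: the tile's top-left corner in UV space; grass.w / grass.h: its UV size.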

MOV result.texcoord[0], vertex.texcoord; # texture coord for active texture 0 -- this is how you pass the texcoord to the fragment program

 

 

 

Fetch the texel at the actual texture coordinate:

TEMP ACT_TEX_COL;
TEX ACT_TEX_COL, fragment.texcoord[0], texture[0], 2D;
#fragment.color.r g b a




MOV result.color.r, ACT_TEX_COL.x;
MOV result.color.g, ACT_TEX_COL.y;
MOV result.color.b, ACT_TEX_COL.z;
MOV result.color.a, 1.0;

This is where the magic begins, because now you have the full ability to find out exactly what is happening.

 

 

 

Here is a complete vertex program / fragment program for OpenGL lighting. (It may not work perfectly; I don't remember if it was wrong.)

[spoiler]

!!ARBvp1.0

#world(model) * view * projection matrix
PARAM MVP1 = program.local[1];
PARAM MVP2 = program.local[2];
PARAM MVP3 = program.local[3];
PARAM MVP4 = program.local[4];
#lightpos
PARAM LPOS = program.local[5];
#light diff
PARAM LDIFF = program.local[6];
#light amb
PARAM LAMB = program.local[7];

#world matrix
PARAM WM1 = program.local[8];
PARAM WM2 = program.local[9];
PARAM WM3 = program.local[10];
PARAM WM4 = program.local[11];


TEMP vertexClip;

#transform vertex into clip space
DP4 vertexClip.x, MVP1, vertex.position;
DP4 vertexClip.y, MVP2, vertex.position;
DP4 vertexClip.z, MVP3, vertex.position;
DP4 vertexClip.w, MVP4, vertex.position;



TEMP vertexWorld;

#transform vertex to its actual world position
DP4 vertexWorld.x, WM1, vertex.position;
DP4 vertexWorld.y, WM2, vertex.position;
DP4 vertexWorld.z, WM3, vertex.position;
DP4 vertexWorld.w, WM4, vertex.position;


TEMP TRANSFORMED_NORMAL;
TEMP TRANS_NORMAL_LEN;

#transform normal
DP3 TRANSFORMED_NORMAL.x, WM1, vertex.normal;
DP3 TRANSFORMED_NORMAL.y, WM2, vertex.normal;
DP3 TRANSFORMED_NORMAL.z, WM3, vertex.normal;

DP3 TRANS_NORMAL_LEN.x, TRANSFORMED_NORMAL, TRANSFORMED_NORMAL;
RSQ TRANS_NORMAL_LEN.x, TRANS_NORMAL_LEN.x;
MUL TRANSFORMED_NORMAL.x, TRANSFORMED_NORMAL.x, TRANS_NORMAL_LEN.x;
MUL TRANSFORMED_NORMAL.y, TRANSFORMED_NORMAL.y, TRANS_NORMAL_LEN.x;
MUL TRANSFORMED_NORMAL.z, TRANSFORMED_NORMAL.z, TRANS_NORMAL_LEN.x;


#vector from light to vertex
#helper var
TEMP LIGHT_TO_VERTEX_VECTOR;
TEMP InvSqrLen;

#get direction from Light pos to transformed vertex
SUB LIGHT_TO_VERTEX_VECTOR, vertexWorld, LPOS;

#calculate 1.0 / length = 1.0 / sqrt( LIGHT_TO_VERTEX_VECTOR^2 );
DP3 InvSqrLen.x, LIGHT_TO_VERTEX_VECTOR, LIGHT_TO_VERTEX_VECTOR;
RSQ InvSqrLen.x, InvSqrLen.x;


TEMP LIGHT_TO_VERTEX_NORMAL;

#normalize the light-to-vertex vector
MUL LIGHT_TO_VERTEX_NORMAL.x, LIGHT_TO_VERTEX_VECTOR.x, InvSqrLen.x;
MUL LIGHT_TO_VERTEX_NORMAL.y, LIGHT_TO_VERTEX_VECTOR.y, InvSqrLen.x;
MUL LIGHT_TO_VERTEX_NORMAL.z, LIGHT_TO_VERTEX_VECTOR.z, InvSqrLen.x;






#dot product of normalized vertex normal and light to vertex direction
TEMP DOT;


#dot
DP3 DOT, LIGHT_TO_VERTEX_NORMAL, TRANSFORMED_NORMAL;

#new vertex color
TEMP NEW_VERTEX_COLOR;

#since normals that face each other produce a negative dot product, we do 0 - dot
SUB NEW_VERTEX_COLOR.x, 0.0, DOT;
SUB NEW_VERTEX_COLOR.y, 0.0, DOT;
SUB NEW_VERTEX_COLOR.z, 0.0, DOT;


#clamp to 0
MAX NEW_VERTEX_COLOR.x, 0.0, NEW_VERTEX_COLOR.x;
MAX NEW_VERTEX_COLOR.y, 0.0, NEW_VERTEX_COLOR.y;
MAX NEW_VERTEX_COLOR.z, 0.0, NEW_VERTEX_COLOR.z;



ADD NEW_VERTEX_COLOR.x, 0.12, NEW_VERTEX_COLOR.x;
ADD NEW_VERTEX_COLOR.y, 0.12, NEW_VERTEX_COLOR.y;
ADD NEW_VERTEX_COLOR.z, 0.26, NEW_VERTEX_COLOR.z;

#clamp to 1
MIN NEW_VERTEX_COLOR.x, 1.0, NEW_VERTEX_COLOR.x;
MIN NEW_VERTEX_COLOR.y, 1.0, NEW_VERTEX_COLOR.y;
MIN NEW_VERTEX_COLOR.z, 1.0, NEW_VERTEX_COLOR.z;

#additional
MUL NEW_VERTEX_COLOR.x, 0.5, NEW_VERTEX_COLOR.x;
MUL NEW_VERTEX_COLOR.y, 0.5, NEW_VERTEX_COLOR.y;
MUL NEW_VERTEX_COLOR.z, 0.5, NEW_VERTEX_COLOR.z;

MUL NEW_VERTEX_COLOR, vertex.color, NEW_VERTEX_COLOR;







MOV result.position, vertexClip;
MOV result.texcoord[0], vertex.texcoord;
MOV result.color, NEW_VERTEX_COLOR;
MOV result.texcoord[1].x, NEW_VERTEX_COLOR.x;
MOV result.texcoord[1].y, NEW_VERTEX_COLOR.y;
MOV result.texcoord[2].x, NEW_VERTEX_COLOR.z;

END 

[/spoiler]

 

 

fragment program:

[spoiler]

!!ARBfp1.0

TEMP texcol;

TEMP FLOOR_TEX_COORD;
FLR FLOOR_TEX_COORD, fragment.texcoord[0]; 

TEMP ACT_TEX_COORD;

SUB ACT_TEX_COORD, fragment.texcoord[0], FLOOR_TEX_COORD;

TEX texcol, ACT_TEX_COORD, texture[0], 2D;


TEMP V_COLOR;
MOV V_COLOR.x, fragment.texcoord[1].x;
MOV V_COLOR.y, fragment.texcoord[1].y;
MOV V_COLOR.z, fragment.texcoord[2].x;


MUL texcol, texcol, V_COLOR;

MOV result.color, texcol;

END
 

[/spoiler]

 

 

 

And this is actually C++ Builder code, so you will have to develop your own text-file loading code:

 

[spoiler]

template <class T>
T * readShaderFile( AnsiString FileName )
{
    TStringList * s = new TStringList();
    s->LoadFromFile(FileName);

    // Copy the text into a buffer we own. The original version overwrote the
    // freshly allocated buffer pointer with s->Text.c_str(), leaking the
    // allocation and returning a pointer into a temporary string.
    int len = s->Text.Length();
    T *buffer = new T[len + 1];
    memcpy(buffer, s->Text.c_str(), len);
    buffer[len] = 0;   // null-terminate so strlen() works on the result
    delete s;
    return buffer;
}
struct TASMShaderObject
{
    unsigned int VERTEX;
    unsigned int FRAGMENT;

    TStringList * vert_prog;
    TStringList * frag_prog;

    unsigned char * vert;
    unsigned char * frag;
    bool vp;
    bool fp;

    TASMShaderObject()
    {
        vp = false;
        fp = false;
    }

    void Enable()
    {
        if (vp) glEnable( GL_VERTEX_PROGRAM_ARB );
        if (fp) glEnable( GL_FRAGMENT_PROGRAM_ARB );
    }

    void Disable()
    {
        if (vp) glDisable( GL_VERTEX_PROGRAM_ARB );
        if (fp) glDisable( GL_FRAGMENT_PROGRAM_ARB );
    }

    void Bind()
    {
        if (vp) BindVert();
        if (fp) BindFrag();
    }

    void BindVert() { glBindProgramARB( GL_VERTEX_PROGRAM_ARB, VERTEX ); }

    void BindFrag() { glBindProgramARB( GL_FRAGMENT_PROGRAM_ARB, FRAGMENT ); }

    void LoadVertexShader(AnsiString fname)
    {
        glGenProgramsARB( 1, &VERTEX );
        glBindProgramARB( GL_VERTEX_PROGRAM_ARB, VERTEX );

        vert = readShaderFile<unsigned char>( fname );
        glProgramStringARB( GL_VERTEX_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen((char*) vert), vert );
        vp = true;

        vert_prog = new TStringList();
        vert_prog->LoadFromFile( fname );

        ShowGLProgramERROR("Vertex shader: " + ExtractFileName(fname));
        delete vert_prog;
    }

    void LoadFragmentShader(AnsiString fname)
    {
        glGenProgramsARB( 1, &FRAGMENT );
        glBindProgramARB( GL_FRAGMENT_PROGRAM_ARB, FRAGMENT );

        frag = readShaderFile<unsigned char>( fname );
        glProgramStringARB( GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB, strlen((char*) frag), frag );
        fp = true;

        frag_prog = new TStringList();
        frag_prog->LoadFromFile( fname );
        ShowGLProgramERROR("Fragment shader: " + ExtractFileName(fname));
        delete frag_prog;
    }

    void ShowGLProgramERROR(AnsiString s)
    {
        AnsiString err = ((char*)glGetString(GL_PROGRAM_ERROR_STRING_ARB));
        if (err != "")
        {
            int i;
            glGetIntegerv(GL_PROGRAM_ERROR_POSITION_ARB, &i);
            ShowMessage(s + "  " + IntToStr(i) + "   \"" + err + "\"");
        }
    }
};

[/spoiler]

 

 

 

Now, the usage of the thing:

 

[spoiler]


VertexLighting_ID.Enable();
VertexLighting_ID.Bind();
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_COLOR, GL_ONE);
ACTUAL_MODEL.LoadIdentity();
sunlight->SET_UP_LIGHT(0, D2F(KTMP_LIGHT),t3dpoint<float>(1,0,0),t3dpoint<float>(1,0,1) );
FPP_CAM->SetView();


SetShaderMatrix(ACTUAL_MODEL, ACTUAL_VIEW, ACTUAL_PROJECTION);
SendMatrixToShader(ACTUAL_MODEL.m, 8);

village->DrawSimpleModel(NULL);
glDisable(GL_BLEND);
VertexLighting_ID.Disable();
        

where


template <class T>
void mglFrustum(Matrix44<T> & matrix, T l, T r, T b, T t, T n, T f)
{

matrix.m[0] = 	(2.0*n)/(r-l);
matrix.m[1] = 	0.0;
matrix.m[2] =  	(r + l) / (r - l);
matrix.m[3] =	0.0;

matrix.m[4] = 	0.0;
matrix.m[5] = 	(2.0*n) / (t - b);
matrix.m[6] =   (t + b) / (t - b);
matrix.m[7] =   0.0;

matrix.m[8] = 	0.0;
matrix.m[9] =  	0.0;
matrix.m[10] =	-(f + n) / (f-n);
matrix.m[11] =  (-2.0*f*n) / (f-n);

matrix.m[12] =  0.0;
matrix.m[13] =  0.0;
matrix.m[14] =  -1.0;
matrix.m[15] =  0.0;

}

template <class T>
void glLookAt(Matrix44<T> &matrix, t3dpoint<T> eyePosition3D,
              t3dpoint<T> center3D, t3dpoint<T> upVector3D )
{
   t3dpoint<T>  forward, side, up;
   forward = Normalize( vectorAB(eyePosition3D, center3D) );
   side = Normalize( forward * upVector3D );
   up = side * forward;
  matrix.LoadIdentity();

	matrix.m[0] = side.x;
	matrix.m[1] = side.y;
	matrix.m[2] = side.z;

	matrix.m[4] = up.x;
	matrix.m[5] = up.y;
	matrix.m[6] = up.z;

	matrix.m[8] 	= -forward.x;
	matrix.m[9] 	= -forward.y;
	matrix.m[10] 	= -forward.z;



Matrix44<T> translation;
translation.Translate(-eyePosition3D.x, -eyePosition3D.y, -eyePosition3D.z);


matrix = translation * matrix;
}

template <class T>
void SendMatrixToShader(T mat[16], int offset)
{
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, offset, float(mat[0]), float(mat[1]), float(mat[2]), float(mat[3]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, offset+1, float(mat[4]), float(mat[5]), float(mat[6]), float(mat[7]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, offset+2, float(mat[8]), float(mat[9]), float(mat[10]), float(mat[11]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, offset+3, float(mat[12]), float(mat[13]), float(mat[14]), float(mat[15]));
}



template <class T>
void SendParamToShader(t4dpoint<T> param, int offset)
{
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, offset, float(param.x), float(param.y), float(param.z), float(param.w));
}

template <class T>
void SendParamToShader(T a, T b, T c, T d, int offset)
{
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, offset, float(a), float(b), float(c), float(d));
}


template <class T>
void SendParamToFRAGShader(t4dpoint<T> param, int offset)
{
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, offset, float(param.x), float(param.y), float(param.z), float(param.w));
}

template <class T>
void SendParamToFRAGShader(T a, T b, T c, T d, int offset)
{
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, offset, float(a), float(b), float(c), float(d));
}


//Orders for opengl and math
//object transform: Scale * Rotate * Translate
//Shader: model * view * projection

//*****************************
//*****************************
//************NOTE*************
//*****************************
//*****************************

//this function must be replaced since opengl 4.5 doesn't use glLoadMatrix();

template <class T>
void SetShaderMatrix(Matrix44<T> model, Matrix44<T> view, Matrix44<T> projection )
{

Matrix44<T> tmp;


tmp = (model * view) * projection;

glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 1, float(tmp.m[0]), float(tmp.m[1]), float(tmp.m[2]), float(tmp.m[3]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 2, float(tmp.m[4]), float(tmp.m[5]), float(tmp.m[6]), float(tmp.m[7]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 3, float(tmp.m[8]), float(tmp.m[9]), float(tmp.m[10]), float(tmp.m[11]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, 4, float(tmp.m[12]), float(tmp.m[13]), float(tmp.m[14]), float(tmp.m[15]));

}


template <class T>
void SendMatrixToShader(Matrix44<T> mat, int startoffset)
{
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, startoffset+0, float(mat.m[0]), float(mat.m[1]), float(mat.m[2]), float(mat.m[3]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, startoffset+1, float(mat.m[4]), float(mat.m[5]), float(mat.m[6]), float(mat.m[7]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, startoffset+2, float(mat.m[8]), float(mat.m[9]), float(mat.m[10]), float(mat.m[11]));
glProgramLocalParameter4fARB(GL_VERTEX_PROGRAM_ARB, startoffset+3, float(mat.m[12]), float(mat.m[13]), float(mat.m[14]), float(mat.m[15]));
}


template <class T>
void SendMatrixToFRAGShader(Matrix44<T> mat, int startoffset)
{
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, startoffset+0, float(mat.m[0]), float(mat.m[1]), float(mat.m[2]), float(mat.m[3]));
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, startoffset+1, float(mat.m[4]), float(mat.m[5]), float(mat.m[6]), float(mat.m[7]));
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, startoffset+2, float(mat.m[8]), float(mat.m[9]), float(mat.m[10]), float(mat.m[11]));
glProgramLocalParameter4fARB(GL_FRAGMENT_PROGRAM_ARB, startoffset+3, float(mat.m[12]), float(mat.m[13]), float(mat.m[14]), float(mat.m[15]));
}
 

[/spoiler]

 

 

 

So, once again: disable mipmaps, disable linear filtering, and make sure the texture is a power of two -- the whole atlas as well as each texture inside it.

In the shader, define which texel you want to fetch; fetch it, push it, draw it. After that, move to more modern OpenGL. To get there, I recommend going to the link you gave and studying how texels are determined -- they already wrote that up.

 

 

I really have no idea what else to add.

 

 

Texture coords for a tile in the atlas -- let's say the atlas is 256x256 and the textures are 64x64:

float ahue   = 64.0 / 256.0;

float left   = ahue * float(col_index);
float right  = left + ahue;
float top    = ahue * float(row_index);
float bottom = top + ahue;

 

 

You could possibly detect the case where you are trying to fetch a texel from the borders and apply a special case that fetches the correct texel.
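
One common form of that special case (a hedged sketch, not from the post): inset each tile's UV rectangle by half a texel on every side, so samples can never cross into a neighbouring tile. tileX/Y/W/H and atlasW/H are placeholder names.

float halfU = 0.5f / float(atlasW);
float halfV = 0.5f / float(atlasH);
float u0 = (tileX           / float(atlasW)) + halfU;
float v0 = (tileY           / float(atlasH)) + halfV;
float u1 = ((tileX + tileW) / float(atlasW)) - halfU;
float v1 = ((tileY + tileH) / float(atlasH)) - halfV;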

Edited by WiredCat


Thanks for your help, WiredCat. I only use OpenGL 1.4, so shaders are out of the question. I use the nearest filter, the texture is a power of two, no mipmaps, etc. I'll use double instead of float, but I doubt that will make any difference.


In your code snippet, what are x and y? Why are you placing vertices at pixel-centers - are your texcoords also at texel-centers to compensate for that unnecessary offset? What kind of viewport and matrices are you using? Is texture.w/h the screen resolution? What's dg?


1.4 supports vertex and fragment programs, not shaders <- this is why I posted that pseudo-asm code; that's your so-called shader.

 

And Hodgman can be right: you are adding an additional 0.5 offset, and this should throw you a texel out of the texture at the right and top borders.
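
A quick numeric illustration of that claim (a hypothetical 64-px tile at the origin of a 256-px-wide atlas, with a half-texel offset added to both edges):

float atlasW = 256.0f;
float left  = (0.0f  + 0.5f) / atlasW;  // 0.5 texels into the tile: safe
float right = (64.0f + 0.5f) / atlasW;  // 64.5 texels: half a texel past the tile's right edge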

Edited by WiredCat


1.4 supports vertex and fragment programs, not shaders


It's more correct to say that GL_ARB_vertex_program and GL_ARB_fragment_program are available as extensions, rather than in core GL, but also that they are potentially available in GL versions going back to 1.3; see GL_ARB_vertex_program and GL_ARB_fragment_program.

 

However, I think the OP will find that the kind of hardware which only supports GL 1.4 but no shaders at all doesn't actually even exist any more.  It's a hypothetical objection to using shaders rather than something you'll encounter in the real world.
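
If in doubt, availability can be checked at runtime with the classic pre-GL3 extension-string query (a sketch; it needs a current GL context, and a robust check should tokenise the string rather than substring-match):

#include <cstring>

const char *ext = (const char *)glGetString(GL_EXTENSIONS);
bool hasVP = ext && strstr(ext, "GL_ARB_vertex_program")   != NULL;
bool hasFP = ext && strstr(ext, "GL_ARB_fragment_program") != NULL;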


Damn, my bad then. So maybe this is his only option: fix the texcoords and add an additional pixel around each tile's whole border that duplicates the corresponding edge pixel (left/right borders along the x axis, top/bottom along the y axis).
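
A rough sketch of that padding step when building the atlas (a one-pixel gutter duplicating each tile's edge pixels; all names are placeholders, and dstX/dstY must leave room for the gutter):

// Copies a tileW x tileH RGBA8 tile into the atlas at (dstX, dstY) and
// duplicates its edge rows/columns into a 1-pixel border around it.
void blitTileWithGutter(unsigned char *atlas, int atlasW,
                        const unsigned char *src, int tileW, int tileH,
                        int dstX, int dstY)
{
    for (int y = -1; y <= tileH; ++y)
    {
        int sy = y < 0 ? 0 : (y >= tileH ? tileH - 1 : y);   // clamp to tile edge
        for (int x = -1; x <= tileW; ++x)
        {
            int sx = x < 0 ? 0 : (x >= tileW ? tileW - 1 : x);
            const unsigned char *s = src   + 4 * (sy * tileW + sx);
            unsigned char       *d = atlas + 4 * ((dstY + y) * atlasW + (dstX + x));
            d[0] = s[0]; d[1] = s[1]; d[2] = s[2]; d[3] = s[3];
        }
    }
}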


In your code snippet, what are x and y? Why are you placing vertices at pixel-centers - are your texcoords also at texel-centers to compensate for that unnecessary offset? What kind of viewport and matrices are you using? Is texture.w/h the screen resolution? What's dg?

x and y are the screen coords where I want to display the image.

I'm placing my vertices there because that is how I understand I have to do it; in fact, it has given me the best results so far.

texture.w/h is the size of the image that contains the sub-image I want to draw (in case it belongs to a bigger image, like a texture atlas).

dg is just the object containing the draw order with its arguments (such as whether I want to flip the image, or how much scaling). In this case scaleX/Y is always 1.0.
