godmodder

OpenGL TSM freakout


Hello, I've been trying hard to implement TSM shadows for a long time. I've read all the papers and tutorials on the net about the subject (there aren't many). Nvidia has a nice demo with source, but it's all in Direct3D and the code really gives me a headache. My question: does somebody know of a tutorial or explanation of how to do TSM shadow mapping in OpenGL? Maybe someone could send me their code (I've read on the forums here that some people have done it with success). I'd really appreciate your help. Thanks in advance, Jeroen

Guest Anonymous Poster
Have you already looked at the TSM recipe page http://www.comp.nus.edu.sg/~tants/tsm/TSM_recipe.html?

It contains some source code in OpenGL for TSM.

Hello,

Yes, I had already looked at it. It contains source to compute the trapezoidal transformation (the easier bit).
I'm having problems with the steps before that transformation. The problem is the following: I've computed the 8 corner points of the eye frustum, but I need to transform those points into the post-perspective space of the light and scale them to the unit cube in OpenGL. I thought one could do it with gluLookAt() and glOrtho(), but frankly, I have no idea how to do it exactly in OpenGL.
When I look at the Direct3D source code in the Nvidia SDK, I understand every bit of what they're doing, but I can't reproduce it in OpenGL.

Can someone help me with this?

How do they do it in the D3D sample? They're probably using D3DX?

Frankly, I wouldn't use the built-in OpenGL functions to accomplish this. Just do the matrix math manually, either with a homebrew matrix library or one of the gazillion third-party ones floating around the net.
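
For what it's worth, here is a minimal sketch of the two pieces needed for that, done by hand. The function names and the column-major float[16] layout are just illustration choices, not taken from the D3D sample:

#include <cmath>

// Build a gluLookAt-style view matrix (column-major float[16], OpenGL layout).
// eye/center/up are plain float[3].
void BuildLookAt(const float eye[3], const float center[3], const float up[3], float out[16])
{
    // f = normalize(center - eye)
    float f[3] = { center[0] - eye[0], center[1] - eye[1], center[2] - eye[2] };
    float len = std::sqrt(f[0]*f[0] + f[1]*f[1] + f[2]*f[2]);
    f[0] /= len; f[1] /= len; f[2] /= len;

    // s = normalize(f x up), u = s x f
    float s[3] = { f[1]*up[2] - f[2]*up[1], f[2]*up[0] - f[0]*up[2], f[0]*up[1] - f[1]*up[0] };
    len = std::sqrt(s[0]*s[0] + s[1]*s[1] + s[2]*s[2]);
    s[0] /= len; s[1] /= len; s[2] /= len;
    float u[3] = { s[1]*f[2] - s[2]*f[1], s[2]*f[0] - s[0]*f[2], s[0]*f[1] - s[1]*f[0] };

    // Rotation rows (s, u, -f); translation column = -R * eye; stored column-major.
    out[0] = s[0];  out[4] = s[1];  out[8]  = s[2];  out[12] = -(s[0]*eye[0] + s[1]*eye[1] + s[2]*eye[2]);
    out[1] = u[0];  out[5] = u[1];  out[9]  = u[2];  out[13] = -(u[0]*eye[0] + u[1]*eye[1] + u[2]*eye[2]);
    out[2] = -f[0]; out[6] = -f[1]; out[10] = -f[2]; out[14] =  (f[0]*eye[0] + f[1]*eye[1] + f[2]*eye[2]);
    out[3] = 0.0f;  out[7] = 0.0f;  out[11] = 0.0f;  out[15] = 1.0f;
}

// Transform a point by a column-major 4x4 and do the homogeneous divide.
void TransformPointW(const float m[16], const float p[3], float out[3])
{
    float x = m[0]*p[0] + m[4]*p[1] + m[8] *p[2] + m[12];
    float y = m[1]*p[0] + m[5]*p[1] + m[9] *p[2] + m[13];
    float z = m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14];
    float w = m[3]*p[0] + m[7]*p[1] + m[11]*p[2] + m[15];
    out[0] = x / w; out[1] = y / w; out[2] = z / w;
}

Concatenate the light's projection with this view matrix (and with the inverse of the eye's view matrix if your frustum corners are expressed in the eye's space), push the eight corners through TransformPointW, and they land in the light's post-projective [-1, 1] cube.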

1st problem: they first calculate a light-space basis using D3DXVec3TransformNormal() and some other functions like Cross(), etc.
What exactly does D3DXVec3TransformNormal() do? Does it transform the vector just by a rotation matrix or by a normal matrix? (The normal matrix is the transpose of the inverse of the modelview matrix.)

2nd problem: is D3DXVec3TransformCoordArray() the same as multiplying your vector with a rotation matrix?

3rd problem: do I have to calculate the scene's bounding box and transform all of the vertices of my world into eye space to get all shadow casters (my world) in front of the near plane? That can't be right, can it? (Well... they do it in the sample, while they explicitly state in the TSM paper that NO scene analysis is required for TSM to work properly.)

4th problem: they use D3DXMatrixOrthoOffCenterLH() to produce an ortho projection matrix. Can I use glOrtho() to do this in OpenGL, or do I have to calculate that matrix manually and transform the points like in problem 2? (See the sketch right after this post.)

I've been scratching my head all day.
The rest of the code is crystal clear...
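
Regarding the 4th problem: glOrtho() builds essentially the same matrix as the D3DX off-center ortho; the main differences are handedness, the row- versus column-vector convention, and that OpenGL maps z into [-1, 1] where Direct3D maps it into [0, 1]. If you'd rather skip fixed-function state, this is a sketch of the column-major matrix that glOrtho(l, r, b, t, n, f) produces:

// Column-major equivalent of glOrtho(l, r, b, t, n, f).
// Maps x from [l, r], y from [b, t] and eye-space z from [-n, -f] into [-1, 1].
void BuildOrtho(float l, float r, float b, float t, float n, float f, float out[16])
{
    out[0] = 2.0f / (r - l); out[4] = 0.0f;           out[8]  = 0.0f;            out[12] = -(r + l) / (r - l);
    out[1] = 0.0f;           out[5] = 2.0f / (t - b); out[9]  = 0.0f;            out[13] = -(t + b) / (t - b);
    out[2] = 0.0f;           out[6] = 0.0f;           out[10] = -2.0f / (f - n); out[14] = -(f + n) / (f - n);
    out[3] = 0.0f;           out[7] = 0.0f;           out[11] = 0.0f;            out[15] = 1.0f;
}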

hi,

1) It simply assumes you're transforming a normal, so you won't get the translation part of your matrix (it multiplies (x, y, z, 0) with your matrix, instead of (x, y, z, 1) as D3DXVec3TransformCoord() does). See the small sketch at the end of this post.

2) D3DXVec3TransformCoordArray() multiplies each element of the array by the given matrix. It's exactly the same as:

for (int i = 0; i < size; i++)
D3DXVec3TransformCoord(&output[i], &input[i], &matrix);

3 & 4, I don't know, sorry ^^
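
To make 1) concrete, here is essentially all D3DXVec3TransformNormal() amounts to, as a sketch with a column-major float[16] (D3DX itself stores matrices row-major, but the point is only the w = 0 versus w = 1 distinction). It does not apply an inverse-transpose for you; D3DXVec3TransformCoord() is the same multiply with w = 1 followed by the divide by w, like TransformPointW in the earlier sketch:

// Sketch of what D3DXVec3TransformNormal() amounts to: the vector is treated as
// (x, y, z, 0), so only the upper 3x3 (rotation/scale) part of the matrix is
// applied and the translation is ignored.
void TransformNormal(const float m[16], const float v[3], float out[3])
{
    out[0] = m[0]*v[0] + m[4]*v[1] + m[8] *v[2];
    out[1] = m[1]*v[0] + m[5]*v[1] + m[9] *v[2];
    out[2] = m[2]*v[0] + m[6]*v[1] + m[10]*v[2];
}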

Look, the main problem I'm having is that all these D3DX functions use the Direct3D VIEW matrix, but in OpenGL you have a MODELVIEW matrix.
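
That is less of a problem than it sounds: OpenGL's MODELVIEW is Direct3D's WORLD * VIEW, so as long as no object/model transform has been applied it is exactly the view matrix. You can build it on the CPU (as in the earlier sketch) or let GL build it and read it back. A rough sketch, where lightPos and lightTarget are just placeholder parameters:

#include <GL/gl.h>
#include <GL/glu.h>

// Build the light's view matrix with the fixed-function pipeline and read it
// back. With no model transform applied, MODELVIEW plays the same role as
// Direct3D's VIEW matrix.
void GetLightViewMatrix(const float lightPos[3], const float lightTarget[3], float outView[16])
{
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();
    gluLookAt(lightPos[0], lightPos[1], lightPos[2],
              lightTarget[0], lightTarget[1], lightTarget[2],
              0.0f, 1.0f, 0.0f);                    // up vector
    glGetFloatv(GL_MODELVIEW_MATRIX, outView);      // column-major float[16]
    glPopMatrix();
}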

Hi...

First of all, the nVidia demo is on PSM and not TSM, which is a different technique.

Here is the code I used to get it working, but I couldn't find a demo using it. I hope it works (I don't remember exactly how, but I remember that it did work)!


void CalcEyeFrustumInObjSpace(GEVector3D* p, GECamera* cam)
{
float t = cam->GetZNear() * tan(geDegToRad(cam->GetFOV()) * 0.5f);
float b = -t;
float r = cam->GetAspect() * t;
float l = -r;
float n = cam->GetZNear();
float f = cam->GetZFar();
float fn = f / n;
float lFar = fn * l;
float rFar = fn * r;
float bFar = fn * b;
float tFar = fn * t;

p[0].Set(l, b, -n);
p[1].Set(r, b, -n);
p[2].Set(r, t, -n);
p[3].Set(l, t, -n);

p[4].Set(lFar, bFar, -f);
p[5].Set(rFar, bFar, -f);
p[6].Set(rFar, tFar, -f);
p[7].Set(lFar, tFar, -f);
}

void CalcEyeFrustumInL(GEVector3D* eyeFr, GEVector3D* eyeFrInL, GECamera* eye)
{
GEMatrix4x4 lookAt, proj, eyeProj, camLookAt_inv;
camLookAt_inv.LookAt(eye->Origin, eye->Dir, eye->UpVector);
camLookAt_inv.Invert(camLookAt_inv);

Light->LightView.LookAt(Light->GetCamera()->Origin, Light->GetCamera()->Dir, Light->GetCamera()->UpVector);
Light->LightProj.Perspective(Light->GetCamera()->GetFOV(), 1.0f, Light->GetCamera()->GetZNear(), Light->GetCamera()->GetZFar());

eyeProj.Multiply(Light->LightProj, Light->LightView);
eyeProj.Multiply(eyeProj, camLookAt_inv);

//////////////////////////////////////////////////////////////////////////
// eyeFrustumInLightSpace = P_L * C_L * C_E^-1 * eyeFrustumInObjSpace
//
// Chapter 3, last par. : The eight corner vertices of E are obtained from
// those corner vertices of the eye's frustum in the object space multiplied
// by PL .CL .CE^-1 where CE^-1 is the inverse camera matrix of the eye. We treat
// E as a flattened 2D object on the front face of the light's unit cube.
//
IsEyeFrustumInLightFrustum = true;
for(int i=0;i<8;i++)
{
eyeProj.TransformHomogenPoint(eyeFr[i], eyeFrInL[i]);
eyeFrInL[i].z = 1.0f;

if(fabsf(eyeFrInL[i].x) > 1.0f || fabsf(eyeFrInL[i].y) > 1.0f)
IsEyeFrustumInLightFrustum = false;
}

RenderFrustum(eyeFrInL);

glBegin(GL_POINTS);
glColor3f(1.0f, 0.0f, 0.0f);
for(int i=0;i<4;i++)
glVertex3fv(&eyeFrInL[i].x);
glColor3f(0.0f, 1.0f, 0.0f);
for(int i=4;i<8;i++)
glVertex3fv(&eyeFrInL[i].x);
glEnd();
}

void CalcCenterLine(GEVector3D* eyeFrInL, GEVector3D* l)
{
GEVector3D v1 = eyeFrInL[1] - eyeFrInL[0];
GEVector3D v2 = eyeFrInL[2] - eyeFrInL[3];

v1 = eyeFrInL[0] + v1 * 0.5f;
v2 = eyeFrInL[3] + v2 * 0.5f;

l[0] = v1 + (v2 - v1) * 0.5f;

v1 = eyeFrInL[5] - eyeFrInL[4];
v2 = eyeFrInL[6] - eyeFrInL[7];
v1 = eyeFrInL[4] + v1 * 0.5f;
v2 = eyeFrInL[7] + v2 * 0.5f;

l[1] = v1 + (v2 - v1) * 0.5f;

glBegin(GL_LINES);
glVertex3fv(&l[0].x);
glVertex3fv(&l[1].x);
glEnd();
}

void CalcTrapezoid(GEVector3D* hull, int numHullPoints, GEVector3D* line, GEVector3D* trapezoid)
{
//////////////////////////////////////////////////////////////////////////
// We need an axis orthogonal to center line.
// Because all the points of the hull are on the z = 1.0f plane, we can take
// the cross product of the center line direction with the z axis vector
// as the perpendicular axis.
//
// FIX : What happens if line[1] == line[0]??? The center line has zero length,
// and we can't find a valid perpendicular axis! This may happen when we
// are near the dueling frusta case.
//
GEVector3D perpAxis = line[1] - line[0];
if(perpAxis.Length() < 1e-5f)
return;
perpAxis.Normalize();

//////////////////////////////////////////////////////////////////////////
// This is because we always want the perpAxis to point to +x world axis.
// The normal way to do it is to change the vectors in the cross product,
// but this makes things "symmetrical"!!!
//
if(perpAxis.x < 0.0f)
perpAxis = perpAxis | GEVector3D(0.0f, 0.0f, 1.0f);
else
perpAxis = perpAxis | GEVector3D(0.0f, 0.0f, -1.0f);

//////////////////////////////////////////////////////////////////////////
// If we transform all the points of the hull, so that the above calculated
// perpendicular axis is the x - axis, and the center line is the y - axis,
// we can calculate an axis-aligned bounding box of hull's points. With this
// aabb we can calculate how far we must travel along the center line in order to
// draw the top and the base lines that enclose the whole convex hull.
// We also translate the hull so the center line starts at (0,0,0).
//
GEMatrix4x4 rotz, trans;
rotz.MakeRotationMatrix('z', geRadToDeg(-acos(perpAxis.x)));
trans.MakeTranslationMatrix(-line[0].x, -line[0].y, -line[0].z);
rotz.Multiply(rotz, trans);

GEVector3D minv(2.0f, 2.0f, 1.0f), maxv(-2.0f, -2.0f, 1.0f);
GEVector3D transHull[6];

for(int i=0;i<numHullPoints;i++)
{
rotz.TransformPoint(hull[i], transHull[i]);

//////////////////////////////////////////////////////////////////////////
// We check only the y component, because z is always 1.0f and we are interested
// in the y-direction only. That is, we only want the top and the bottom of the
// aabbox.
//
if(transHull[i].y > maxv.y) maxv.y = transHull[i].y;
if(transHull[i].y < minv.y) minv.y = transHull[i].y;
}

//////////////////////////////////////////////////////////////////////////
// We do this (the min/max thing for top_y and base_y) because, we always want to
// have the top line touch the near plane, and the base line touch the far plane.
//
float topy = min(fabs(minv.y), fabs(maxv.y));
float basey = max(fabs(minv.y), fabs(maxv.y));

GEVector3D clineDir = line[1] - line[0];
float cLineLen = clineDir.Length();
clineDir.Normalize();

GEVector3D topLine[2];
GEVector3D baseLine[2];
GEVector3D temp;

//////////////////////////////////////////////////////////////////////////
// We move along the center line, starting from the point on the near plane,
// by a distance of -miny to reach the outer point of the hull. Then with the
// perpendicular axis, we form the top line.
//
temp = line[0] - clineDir * topy;
topLine[0] = temp + perpAxis;
topLine[1] = temp - perpAxis;

//////////////////////////////////////////////////////////////////////////
// The same as above, but now we must move +maxy to reach the cross point of
// the center line and the base line.
//
temp = line[0] + clineDir * basey;
baseLine[0] = temp + perpAxis;
baseLine[1] = temp - perpAxis;

//////////////////////////////////////////////////////////////////////////
// Calculate distance of point q from the top line ('η' in the paper)
// Chapter 6.2.
//
float lineFactor = 0.8f; // 80% line
float ksi = 1.0f - 2.0f * lineFactor; // ξ in the paper
float lamda = cLineLen + fabs(basey) + fabs(topy); // distance between the top and the base line
float delta = 1.0f - ksi;// + topy;
float n = (lamda * delta + lamda * delta * ksi) / (-lamda - 2.0f * delta - lamda * ksi);

//////////////////////////////////////////////////////////////////////////
// Calculate q
//
temp = line[0] - clineDir * topy; // temp is the cross between center line and top line.
GEVector3D q = temp + clineDir * n;

//////////////////////////////////////////////////////////////////////////
// Find the two points of the hull which combined with q give us the side
// edges of the trapezoid. These points, must be points of the hull.
//

//////////////////////////////////////////////////////////////////////////
// For every point on the hull, form a line from q to the point. We know that
// all points lie on the z = +1 plane. So, from this line, we form a plane, and we
// check every other point on the hull against it. In order to have a winner,
// all other points must lie on the same side of the plane.
//
int index[2] = {-1, -1};
int curIndex = 0;
for(int i=0;i<numHullPoints;i++)
{
GEPlane plane(hull[i], (q - hull[i]) | GEVector3D(0.0f, 0.0f, 1.0f));

int numPos = 0, numNeg = 0;
for(int j=0;j<numHullPoints;j++)
{
if(i == j)
continue;
float dist = plane.ClassifyPointf(hull[j]);
if(dist >= 0.0f)
numPos++;
else
numNeg++;
}

if((numPos > 0 && numNeg == 0) || (numPos == 0 && numNeg > 0))
{
index[curIndex++] = i;
if(curIndex >= 2)
break;
}
}

if(curIndex != 2)
MessageBox(NULL, "Severe Error : Couldn't find two extreme points for calculating the trapezoid.", "ERROR", MB_OK);

GEVector3D pMin, pMax;
if(transHull[index[0]].x > transHull[index[1]].x)
{
pMax = hull[index[0]];
pMin = hull[index[1]];
}
else
{
pMax = hull[index[1]];
pMin = hull[index[0]];
}

geIntersectLines2D(trapezoid[0], baseLine[0], baseLine[1], q, pMin);
geIntersectLines2D(trapezoid[1], baseLine[0], baseLine[1], q, pMax);
geIntersectLines2D(trapezoid[2], topLine[0], topLine[1], q, pMax);
geIntersectLines2D(trapezoid[3], topLine[0], topLine[1], q, pMin);

//////////////////////////////////////////////////////////////////////////
// Check trapezoid points order.
//
GEVector3D dir1 = trapezoid[1] - trapezoid[0];
GEVector3D dir2 = trapezoid[2] - trapezoid[0];
dir1.Normalize();
dir2.Normalize();
dir1 = dir1 | dir2;
dir1.Normalize();
if(!(dir1 == GEVector3D(0.0f, 0.0f, 1.0f)))
{
dir1 = trapezoid[0];
trapezoid[0] = trapezoid[1];
trapezoid[1] = dir1;

dir1 = trapezoid[3];
trapezoid[3] = trapezoid[2];
trapezoid[2] = dir1;
}

//////////////////////////////////////////////////////////////////////////
// Debug visualization
//
glLineWidth(3.0f);
glColor3f(0.0f, 0.0f, 0.0f);
glBegin(GL_LINE_LOOP);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3fv(&trapezoid[0].x);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3fv(&trapezoid[1].x);
glColor3f(0.0f, 0.0f, 1.0f);
glVertex3fv(&trapezoid[2].x);
glColor3f(1.0f, 1.0f, 1.0f);
glVertex3fv(&trapezoid[3].x);
glEnd();
glColor3f(1.0f, 1.0f, 1.0f);
glLineWidth(1.0f);

glBegin(GL_LINES);
glColor3f(1.0f, 0.0f, 0.0f);
glVertex3fv(&topLine[0].x);
glVertex3fv(&topLine[1].x);
glColor3f(0.0f, 1.0f, 0.0f);
glVertex3fv(&baseLine[0].x);
glVertex3fv(&baseLine[1].x);
glEnd();

glPointSize(3.0f);
glColor3f(0.5f, 1.0f, 0.5f);
glBegin(GL_POINTS);
glVertex3fv(&q.x);
glEnd();
glColor3f(1.0f, 1.0f, 1.0f);
glPointSize(1.0f);
}



And the calling order is:


GEVector3D eyeFrustum [8];
GEVector3D eyeFrustumInL [8];
GEVector3D line [2];
GEVector3D hull [6];
GEVector3D trapezoid [4];
GEMatrix4x4* GELight::N_T;

CalcEyeFrustumInObjSpace(&eyeFrustum[0], Camera);
CalcEyeFrustumInL(&eyeFrustum[0], &eyeFrustumInL[0], Camera);
CalcCenterLine(&eyeFrustumInL[0], &line[0]);
numHullPoints = CalcConvexHull2D(&eyeFrustumInL[0], &hull[0]);
RenderConvexHull(&hull[0], numHullPoints);
CalcTrapezoid(&hull[0], numHullPoints, &line[0], &trapezoid[0]);
CalcN_T(&trapezoid[0], &Light->N_T, &eyeFrustum[0]);



I think the code is clear enough for you to understand what's going on. If there is any problem with the naming of variables, or something doesn't look right, say so.

HellRaiZer

PS. CalcConvexHull2D() is in Graphic Programming Gems. I couldn't find it in my code, so this isn't included.
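
In case someone needs a stand-in for CalcConvexHull2D(): a minimal monotone-chain hull over the x/y of the eight transformed corners would do. This is only a sketch and assumes GEVector3D has public x, y, z members and that the function takes (input points, output hull) and returns the point count, as the call above suggests; a projected frustum silhouette has at most six vertices, which is why the hull[6] array above is big enough.

// z-component of the 2D cross product (a - o) x (b - o); > 0 means a counter-clockwise turn.
static float Cross2D(const GEVector3D& o, const GEVector3D& a, const GEVector3D& b)
{
    return (a.x - o.x) * (b.y - o.y) - (a.y - o.y) * (b.x - o.x);
}

// Andrew's monotone chain over the 8 input points. Writes the hull
// (counter-clockwise, no repeated endpoint) to 'out' and returns its size.
int CalcConvexHull2D(GEVector3D* points, GEVector3D* out)
{
    const int n = 8;
    GEVector3D p[8];
    for(int i = 0; i < n; i++)
        p[i] = points[i];

    // Sort lexicographically by x, then y (insertion sort; n is tiny).
    for(int i = 1; i < n; i++)
        for(int j = i; j > 0 && (p[j].x < p[j-1].x || (p[j].x == p[j-1].x && p[j].y < p[j-1].y)); j--)
        {
            GEVector3D tmp = p[j]; p[j] = p[j-1]; p[j-1] = tmp;
        }

    GEVector3D h[16];
    int k = 0;

    // Lower hull.
    for(int i = 0; i < n; i++)
    {
        while(k >= 2 && Cross2D(h[k-2], h[k-1], p[i]) <= 0.0f)
            k--;
        h[k++] = p[i];
    }

    // Upper hull.
    int lower = k + 1;
    for(int i = n - 2; i >= 0; i--)
    {
        while(k >= lower && Cross2D(h[k-2], h[k-1], p[i]) <= 0.0f)
            k--;
        h[k++] = p[i];
    }

    k--; // the last point repeats the first one
    for(int i = 0; i < k; i++)
        out[i] = h[i];
    return k;
}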
