Muftobration

Member Since 22 Aug 2010
Offline Last Active Sep 05 2012 09:01 AM

Topics I've Started

Basic shading problem (FFP)

20 May 2012 - 03:04 PM

I've been having some shading trouble. The sphere here is just a unit sphere created and triangulated in modo (attached), with normals also computed in modo. The light is in the upper left of the picture, a few units away. On average the shading looks right, but why is there such a huge discrepancy between two triangles of the same quad? I've ruled out incorrectly computed normals and back-facing polygons.

[Attached image: the shaded sphere, showing the discrepancy between two triangles of the same quad]
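
For reference, the fixed-function state a scene like this usually relies on looks roughly like the sketch below; the light position and variable names are assumptions, not taken from the attached file. With per-vertex normals and smooth shading, two triangles of the same quad should only differ where their vertex normals differ.

// Minimal sketch of a typical fixed-function lighting setup (assumed, not the original code).
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);
glShadeModel(GL_SMOOTH);                            // interpolate lighting across each triangle
glEnable(GL_NORMALIZE);                             // renormalize normals if the modelview scales

GLfloat lightPos[4] = { -2.0f, 2.0f, 2.0f, 1.0f };  // positional light, up and to the left (assumed values)
glLightfv(GL_LIGHT0, GL_POSITION, lightPos);        // transformed by the current modelview matrix

// ... draw the sphere, issuing one glNormal3f per vertex before each glVertex3f ...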

OpenGL 3 on a 2.1 device

11 August 2011 - 03:32 PM

I have searched, but nobody seems to be asking this question, nor have I found an answer. Is it possible to run a program containing OpenGL 3 code on a machine that only supports up to 2.1? I would be most interested in some sort of software that wraps around the program and simulates any calls that the hardware cannot handle.

I'm doing some OpenGL 3 development on my X201 and the Intel graphics cannot correctly run many of the examples in the OpenGL SuperBible, 5th edition (see the attached screenshot, which is supposed to depict a sphere). These examples work correctly on my GTX 460. If it were possible to get a small ExpressCard graphics adapter that provided support for OpenGL 3, that would be excellent, but I don't think such a device exists (besides the ViDock, which is not practical on the go).

[Attached image: SuperBible example on the X201's Intel graphics; it is supposed to depict a sphere]
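
For what it's worth, a quick way to see what a given context actually exposes is to query the version strings at runtime. This is only a generic sketch (nothing from the SuperBible framework) and assumes a valid OpenGL context is already current:

// Generic sketch: report what the current context supports.
// Assumes an active OpenGL context; printf is just for illustration.
const GLubyte *version  = glGetString(GL_VERSION);
const GLubyte *renderer = glGetString(GL_RENDERER);
const GLubyte *glsl     = glGetString(GL_SHADING_LANGUAGE_VERSION);
printf("GL %s on %s, GLSL %s\n", (const char *)version, (const char *)renderer, (const char *)glsl);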

Alpha based on depth

14 June 2011 - 12:47 PM

I've been trying to implement a system in which rendered geometry fades into the background as its distance from the camera increases. The idea is just like GL_FOG, but fading to the background image (which is not static) instead of to a specific color. I'm having difficulty with my implementation and I'm not even sure it's the best approach. I'm basing it on this thread (the last post in it): http://www.gamedev.n...ge-with-opengl/

The gist of it is: you set up automatic texture coordinate generation that depends on depth, create a 1D texture with values 0-255, and bind it. Finally, you render your geometry with that texture applied, making close geometry opaque and far geometry transparent. I haven't been able to get it to work. My small test case is shown below: I render a blue quad as the background, then three white squares at different distances in front of it. The white squares close to the blue quad should appear somewhat blue, but they are still pure white. Does anyone know what I've done wrong?

glDisable(GL_LIGHTING);
 
 // Blue bar in the background
 glColor3f(0.0f, 0.0f, 1.0f);
 glBegin(GL_QUADS);
  glVertex3f(3.0f, 1.0f, -15.0f);
  glVertex3f(-3.0f, 1.0f, -15.0f);
  glVertex3f(-3.0f, -1.0f, -15.0f);
  glVertex3f(3.0f, -1.0f, -15.0f);
 glEnd();

 glEnable(GL_TEXTURE_1D);
 GLuint depthAlphaTexture;      // Texture reference for the 1D texture produced in the loop below
 glGenTextures(1, &depthAlphaTexture);   // Tell OpenGL to make space for the texture
 glBindTexture(GL_TEXTURE_1D, depthAlphaTexture);// Make this the currently used texture
 glPixelStorei(GL_UNPACK_ALIGNMENT, 1);   // Pixel data is aligned in byte order
 glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_WRAP_S, GL_CLAMP); // Don't repeat the texture! We want clamping of alpha values.
 
 glEnable(GL_BLEND);
 glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

 GLubyte alphaTexture[256];
 for (int i = 0; i < 256; i++)
 {
  alphaTexture[i] = 255 - i;
 }
 glTexImage1D(GL_TEXTURE_1D, 0, 1, 256, 0, GL_ALPHA, GL_UNSIGNED_BYTE, alphaTexture); // Give OpenGL the texture data

 glEnable(GL_TEXTURE_GEN_S);       // Automatically generate texture coordinates based on depth
 glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_EYE_LINEAR);
 
 glPushMatrix();
 glLoadIdentity();
 GLfloat splane[4] = {0.0f, 0.0f, -1.0f/25, 0.0f}; // Change the s texture coordinate to depend on depth (z value) not the x value
 glTexGenfv(GL_S, GL_EYE_PLANE, splane);
 glPopMatrix();

  // White squares
 glColor3f(1.0f, 1.0f, 1.0f);
 glBegin(GL_QUADS); // Farthest
  glVertex3f(2.0f, 0.5f, -10.0f);
  glVertex3f(1.0f, 0.5f, -10.0f);
  glVertex3f(1.0f, -0.5f, -10.0f);
  glVertex3f(2.0f, -0.5f, -10.0f);
 glEnd();

 glBegin(GL_QUADS); // Middle
  glVertex3f(0.333f, 0.333f, -5.0f);
  glVertex3f(-0.333f, 0.333f, -5.0f);
  glVertex3f(-0.333f, -0.333f, -5.0f);
  glVertex3f(0.333f, -0.333f, -5.0f);
 glEnd();

 glBegin(GL_QUADS); // Closest
  glVertex3f(-0.333f, 0.166f, 0.0f);
  glVertex3f(-0.667f, 0.166f, 0.0f);
  glVertex3f(-0.667f, -0.166f, 0.0f);
  glVertex3f(-0.333f, -0.166f, 0.0f);
 glEnd();

 glEnable(GL_LIGHTING);
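
Not a verified diagnosis, but two details of the 1D-texture setup above commonly produce exactly this "texture seems to be ignored" symptom: with only mip level 0 defined and the default minification filter, the texture is incomplete and texturing is silently disabled, and an internal format of 1 stores luminance rather than alpha. A sketch of those lines (an assumption, not a confirmed fix):

 // Sketch only (assumed, not confirmed): make the texture complete and
 // store the ramp as alpha so GL_MODULATE affects the fragment's alpha.
 glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
 glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
 glTexImage1D(GL_TEXTURE_1D, 0, GL_ALPHA, 256, 0, GL_ALPHA, GL_UNSIGNED_BYTE, alphaTexture);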



Strange rendering with blurred depth texture shader (simple SSAO)

30 April 2011 - 07:23 PM

I'm working on a simple type of SSAO that involves unsharp masking the depth buffer. To verify that I correctly loaded the depth texture (a texture the same size as the screen containing depth values), I created a simple shader that just colors each fragment according to its depth. These two images show a test object I made with the shader off and on:

[Attached image: test object with the depth-visualization shader off]


[Attached image: test object with the depth-visualization shader on]
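
How the depth texture gets filled isn't shown here; one common approach, and only an assumption about this setup, is to copy the depth buffer into a screen-sized texture after rendering the scene. The texture name and the 1024x1024 size below are illustrative (the size matches screenDimension in the shaders).

// Hypothetical sketch of capturing the depth buffer into a texture;
// depthTexture and the 1024x1024 size are assumptions.
glBindTexture(GL_TEXTURE_2D, depthTexture);
glCopyTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT, 0, 0, 1024, 1024, 0);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);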




That all seems to be working well, so I moved on to unsharp masking the depth texture (the next step). I wrote a shader to blur the depth texture and display the blurred version, but I'm getting a weird artifact that I can't track down. The render only appears in the lower left quadrant:

[Attached image: blurred-depth render appearing only in the lower left quadrant]




Here's the fragment shader for the straight depth visualization (the second image above):

uniform sampler2D depthValues;

void main()
{
	gl_FragColor = texture2D(depthValues, gl_FragCoord.xy/1024.0);
}




And here's the fragment shader code for the shader that's misbehaving (third image):

uniform sampler2D depthValues;

float[25] gauss = float[25] (0.0030, 0.0133, 0.0219, 0.0133, 0.0030,
						0.0133, 0.0596, 0.0983, 0.0596, 0.0133,
						0.0219, 0.0983, 0.1621, 0.0983, 0.0219,
						0.0133, 0.0596, 0.0983, 0.0596, 0.0133,
						0.0030, 0.0133, 0.0219, 0.0133, 0.0030);
const float kernelDimension = 5.0;
const float screenDimension = 1024.0;

void main()
{
	vec4 sum = vec4(0,0,0,0);
	int iter = 0;
	int i = int(gl_FragCoord.x);
	int j = int(gl_FragCoord.y);
	int maxX = i + int(floor(kernelDimension/2.0));
	int maxY = j + int(floor(kernelDimension/2.0));
	float sampX;
	float sampY;
	
	for (int x = i - int(floor(kernelDimension/2.0)); x < maxX; x++)
	{
		for (int y = j - int(floor(kernelDimension/2.0)); y < maxY; y++, iter++)
		{
			sampX = (gl_FragCoord.x + float(x)) / screenDimension;
			sampY = (gl_FragCoord.y + float(y)) / screenDimension;
			if (sampX >= 0.0 && sampX <= 1.0 && sampY >= 0.0 && sampY <= 1.0)
			{
				sum	+= texture2D(depthValues, vec2(sampX, sampY)) * gauss[iter];
			}
		}
	}
	
	gl_FragColor = texture2D(depthValues, gl_FragCoord.xy / screenDimension) - sum;
}


The second shader blurs the depth texture with a 5x5 Gaussian kernel (gauss) and subtracts the blurred value from the original sample to form the unsharp mask. If you know why I'm getting these results, please help me solve the issue.
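
For comparison, a 5x5 neighborhood blur is normally written with small offsets of -2 to +2 texels around the current fragment rather than with absolute pixel coordinates. The loop below is a generic sketch of that pattern, reusing depthValues, gauss, and screenDimension from the shader above; it is illustrative rather than a confirmed fix:

// Generic sketch of a 5x5 weighted blur around the current fragment.
// dx/dy are texel offsets; 25 iterations match the 25 kernel weights.
vec4 blurred = vec4(0.0);
int iter = 0;
for (int dx = -2; dx <= 2; dx++)
{
	for (int dy = -2; dy <= 2; dy++, iter++)
	{
		vec2 uv = (gl_FragCoord.xy + vec2(float(dx), float(dy))) / screenDimension;
		blurred += texture2D(depthValues, uv) * gauss[iter];
	}
}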
