
Member Since 27 Mar 2008

Topics I've Started

Horrible perf and weird spike pattern when rendering wireframe on NVIDIA

24 January 2015 - 06:32 AM

Hey Everyone, 


I'm getting really horrible performance when rendering my game in wireframe. It's becoming particularly annoying while working on terrain geometry LOD. I only noticed the problem after upgrading my GPU from an NVIDIA GTX 670 to a GTX 970. It could also be the drivers I updated to right after installing the new card, but I don't know; I should mention I'm on the latest drivers (347.25) and they still have the problem. Before the GPU upgrade there was absolutely no performance change when switching between wireframe and fill. I'm using Windows 8.1 and have not tested the wireframe performance on AMD. This is most likely unrelated, but the GPU does make a high-pitched whining sound (the internet says it's coil whine), and it only happens when the fps is really high with v-sync off.


The issue is that the frame rate tanks whenever wireframe rendering is enabled, and there is a bizarre spike pattern. I attached a screenshot of my performance graph. Ignore the CPU tag at the bottom of the graph; it's actually overall frame time, and all the wireframe performance issues are coming from the GPU. You can see on the graph that after wireframe is enabled the frame time more than doubles, and then the spike pattern starts. I also noticed that whenever I make the camera look down, the frame time goes progressively up, from about 50 ms to over 300 ms when looking directly down the vertical axis.




I enable wireframe with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE); and switch back to GL_FILL for a few passes, such as the final G-buffer render. I tried calling glPolygonMode only when wireframe is enabled or disabled, but that did not change the performance pattern at all. The actual tessellation of the meshes in the scene doesn't change the wireframe performance overhead either.


I'm hoping some of you have more insight into what is going on; maybe you have ideas about what the GPU/drivers are doing to cause the weird valley-and-spike performance pattern. Any info would be appreciated.



David Tse

New indie dev blog for Survive, an open world zombie survival FPS

14 April 2013 - 08:40 PM



My name is David Tse, and I recently dropped out of college to start a game development company called Subsurface Games. I graduated from high school last year (May 2012) and made it a few months into college before I decided it was a waste of my time and that it was time for me to go make games. Right now I am the only one working on the game, apart from a few contract artists. I am making a zombie survival shooter set in a procedural open world. The idea of the game is: what would you do in the zombie apocalypse? You have to scavenge the unlimited procedural open world for food, water, and guns to survive. You can enter every single building in the world to find supplies or build a fort to survive.


I am making a 3D game engine completely from scratch for this project using OpenGL and C++. The only libraries I am really using are SpeedTree for vegetation and PhysX for physics, cloth, and destruction. I have also started a development blog for this game where I post updates and other behind-the-scenes stuff. When the game reaches the alpha stage (it is currently in pre-alpha), people who pre-order it will be able to download and play it before it's done.


I hope you guys can check out the blog and tell me what you think about the idea for the game and the current state of the engine/game. Here is a link to the blog/company website: http://www.subsurfacegames.com/, and here is the latest video that I posted to the blog:





Reconstructing World Position from Depth

23 December 2012 - 09:41 PM

Hello, I am in the process of converting to a deferred renderer, and I am a little stuck on position reconstruction from depth. I have been reading a lot about it; I have read all of MJP's blog posts and the thread that started them, and I feel like I have a somewhat solid understanding of how it works, but my implementation has some issues. If some of you could give me some insight into my problems I would appreciate it.


I have tried many variations on what I have right now, but this one gets the closest to the expected results.


First I compute the frustum corner rays, in what I believe is camera space:


Vector3f NearCenterPosition = Look * nearplane;
Vector3f FarCenterPosition = Look * farplane;
float angle = fov * 0.0174532925; // degrees to radians
float NearHeight = 2 * (tan(angle / 2) * nearplane);
float NearWidth = NearHeight * aspectratio;
float FarHeight = 2 * (tan(angle / 2) * farplane);
float FarWidth = FarHeight * aspectratio;
FrustumPoints[0] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[1] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2));
FrustumPoints[2] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[3] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2));
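As a sanity check on the extents above (height = 2 * tan(fov/2) * distance), the same math can be run standalone; the fov, aspect, and distance below are hypothetical values picked so the result is easy to eyeball, not the engine's actual settings:

```cpp
#include <cassert>
#include <cmath>

// Extents of a view-frustum slice at distance 'dist', from a vertical
// field of view in degrees and an aspect ratio, matching the code above:
// height = 2 * tan(fov / 2) * dist, width = height * aspect.
struct SliceExtents { float width; float height; };

SliceExtents FrustumSliceExtents(float fovDegrees, float aspect, float dist)
{
	const float angle = fovDegrees * 0.0174532925f; // degrees to radians
	const float height = 2.0f * std::tan(angle / 2.0f) * dist;
	return { height * aspect, height };
}
```

With a 90-degree vertical fov, the slice at distance 1 comes out exactly 2 units tall, which is a convenient value to verify in a debugger.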


Then I pass those to my lighting shader as a uniform, and I give each vertex of the full-screen quad an index into these frustum points:



Vector3f FullScreenVert1 = Vector3f(1.0, -1.0, 0.0);
int FullScreenVert1FrustumIndex = 1;

Vector3f FullScreenVert2 = Vector3f(1.0, 1.0, 0.0);
int FullScreenVert2FrustumIndex = 0;

Vector3f FullScreenVert3 = Vector3f(-1.0, 1.0, 0.0);
int FullScreenVert3FrustumIndex = 2;

Vector3f FullScreenVert4 = Vector3f(-1.0, -1.0, 0.0);
int FullScreenVert4FrustumIndex = 3;


In the lighting vertex shader I just index into the frustum points and pass the ray on to the fragment shader.


Lighting Vert Shader:



out vec3 CameraRay;
void main(void)
{
	CameraRay = FrustumPoints[index];
}


Then in the lighting fragment shader I first convert the depth to a linear value using:

float DepthToLinear(float depth)
{
	vec2 g_ProjRatio = vec2(ViewClipFar / (ViewClipFar - ViewClipNear),
	                        ViewClipNear / (ViewClipNear - ViewClipFar));
	return g_ProjRatio.y / (depth - g_ProjRatio.x);
}
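That ratio math can be checked offline. Below is a C++ transcription together with the forward mapping it inverts; note that it assumes the depth buffer holds a value of the form f/(f-n) * (1 - n/z) in [0,1], so the clip distances here are illustrative, and a default GL depth buffer (remapped from [-1,1] NDC) follows a different formula, which is worth ruling out:

```cpp
#include <cassert>
#include <cmath>

// C++ transcription of the shader's DepthToLinear: returns view depth
// divided by the far clip (1.0 at the far plane, near/far at the near plane).
float DepthToLinear(float depth, float nearClip, float farClip)
{
	const float ratioX = farClip / (farClip - nearClip);
	const float ratioY = nearClip / (nearClip - farClip);
	return ratioY / (depth - ratioX);
}

// The forward mapping this formula inverts: depth = f/(f-n) * (1 - n/z).
float ProjectDepth(float viewDepth, float nearClip, float farClip)
{
	return farClip / (farClip - nearClip) * (1.0f - nearClip / viewDepth);
}
```

Round-tripping a view depth of 50 with near = 1 and far = 100 should give back exactly 0.5; if it doesn't match the values actually sampled from the depth texture, the projection convention is the mismatch.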


Finally I get the world position by multiplying the interpolated camera ray by the linear depth and the far clip distance:


vec3 WorldPosition = CameraPosition - (CameraRay * (LinearDepth * ViewClipFar));

I know you're supposed to add the camera position, but subtracting like this gets closest to the desired results. I am comparing against the pixel position output directly in the G-buffer pass.
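For reference, here is a self-contained round trip of the usual unnormalized-ray form of this technique, with a made-up camera basis and test point (all names and values illustrative, not the engine's): the per-pixel ray is scaled so its component along Look equals the far clip distance, and the camera position is added, not subtracted.

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 operator+(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
Vec3 operator-(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
Vec3 operator*(Vec3 a, float s) { return { a.x * s, a.y * s, a.z * s }; }
float Dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Standard reconstruction: the ray is left UNNORMALIZED and scaled so its
// component along Look equals farClip; linearDepth is viewDepth / farClip.
Vec3 ReconstructWorldPos(Vec3 camPos, Vec3 rayToFarPlane, float linearDepth)
{
	return camPos + rayToFarPlane * linearDepth;
}
```

Projecting an arbitrary point to get its view depth and ray, then reconstructing from linear depth alone, should land exactly back on the point; a normalized ray breaks this because its Look component is no longer 1.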


Here are some screenshots showing the comparisons.


The Correct Results, what I am expecting:



Depth reconstructed results:



Also, when I move the camera higher up, the z value of all the world positions increases, turning the green light blue, the yellow white, etc. When I turn the camera up, the "horizon line" where the z value changes from 0 to 1 moves down, and when I look down it moves up. When I move the camera in x and y, the cross slowly creeps in the opposite direction of the movement. I thought this might be caused by subtracting the camera position, but when I add it instead, the cross moves twice as fast in the other direction. If you need more information on the behavior of the implementation, just let me know. It's hard to explain and show in screenshots, but hopefully you can see what I'm doing wrong from the code. Any help is greatly appreciated.


I should also mention that my engine uses Z as the up axis.





Self shadowing polygon seams

29 June 2012 - 12:40 AM

Hello everyone, I recently implemented shadow mapping in my engine and I'm trying to get it to look just right. When I added a sphere to the scene, I noticed there were seams in the self-shadowing along the polygons of the underlying mesh, rather than the smooth self-shadowing I was expecting for a sphere. I didn't really notice this before because I was using more complex models with denser geometry. Here is a picture of what I'm talking about:

[screenshots of the shadow seams on the sphere]

I'm not sure why this would be the case; I would expect a nice smooth line on the part of the sphere that is in shadow. It must be a problem with something in my shaders. Here is how I do the shadowing.

Vert Shader:

ProjShadow = TextureMatrix * WorldMatrix * in_Position;

Frag Shader:

float depth = textureProj(ShadowMap, ProjShadow).x;
float R = ProjShadow.p / ProjShadow.q;
R += 0.0005;
float shadowValue = (R <= depth) ? 1.0 : 0.0;
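For experimenting with the bias term offline, the comparison above can be factored into a small helper; slope-scaled bias is a common variant worth trying against acne along facets (the constants, names, and the small clamp below are illustrative, not tuned values):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Same comparison as the fragment shader above: the receiver at projected
// depth R is lit when R (biased) is not behind the stored shadow-map depth.
float ShadowValue(float R, float storedDepth, float bias)
{
	return (R + bias <= storedDepth) ? 1.0f : 0.0f;
}

// Slope-scaled bias: grow the constant bias as the surface tilts away from
// the light (NdotL -> 0), using tan(theta) = sqrt(1 - NdotL^2) / NdotL.
float SlopeScaledBias(float NdotL, float constantBias, float slopeBias)
{
	const float tanTheta = std::sqrt(1.0f - NdotL * NdotL) / std::max(NdotL, 1e-4f);
	return constantBias + slopeBias * tanTheta;
}
```

A facet facing the light directly keeps the constant bias, while a grazing facet gets a larger one, which is often what smooths hard transitions between polygons.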

If anyone knows what I am doing wrong and could help me out I would really appreciate it.

Thanks, David

I really need some help getting texturing to work

07 April 2012 - 04:09 PM

I have been trying for days to get texturing to work in OpenGL, and I just can't figure out the problem. I have worked with OpenGL before in Java and had no problems with texturing, so I more or less know what I'm doing. Could someone please look at my code and tell me what I am doing wrong? I cannot get texture2D to return anything other than black (I also tried using just texture(sampler, coords)); all the meshes render as solid black no matter what I try. I don't think it's a problem with the texture coordinates, because I don't think the result would be solid black if the coordinates were screwed up. I really don't like posting chunks of code and asking someone to find the mistake, but I can't for the life of me figure out what I am doing wrong.

Frag Shader:

#version 400

uniform sampler2D Texture0;
in vec2 out_MultiTextureCoord1;
in vec4 ex_Color;
out vec4 out_Color;
void main(void)
{
	out_Color = texture(Texture0, out_MultiTextureCoord1);
}

Vert Shader:

#version 400

layout(location=0) in vec4 in_Position;
layout(location=1) in vec4 in_Color;
layout(location=2) in vec4 in_Normal;
layout(location=3) in vec2 in_MultiTextureCoord1;
uniform mat4 ViewMatrix;
uniform mat4 ProjectionMatrix;
uniform mat4 WorldMatrix;
out vec2 out_MultiTextureCoord1;
out vec4 ex_Color;

void main(void)
{
	gl_Position = (ProjectionMatrix * ViewMatrix * WorldMatrix) * in_Position;
	ex_Color = in_Color;
	out_MultiTextureCoord1 = in_MultiTextureCoord1;
}

Texture Loading Code:
GLuint CResourceManager::LoadTexture(std::string fileName)
{
	ILuint imageID;      // Create an image ID as an ILuint
	GLuint textureID;    // Create a texture ID as a GLuint
	ILboolean success;   // Create a flag to keep track of success/failure
	ILenum error;        // Create a flag to keep track of the IL error state

	ilGenImages(1, &imageID);                 // Generate the image ID
	ilBindImage(imageID);                     // Bind the image
	success = ilLoadImage(fileName.c_str());  // Load the image file

	// If we managed to load the image, then we can start to do things with it...
	if (success)
	{
		// If the image is flipped (i.e. upside-down and mirrored), flip it the right way up
		ILinfo ImageInfo;
		iluGetImageInfo(&ImageInfo);
		if (ImageInfo.Origin == IL_ORIGIN_UPPER_LEFT)
		{
			iluFlipImage();
		}

		// Convert the image into a suitable format to work with
		// NOTE: If your image contains alpha channel you can replace IL_RGB with IL_RGBA
		success = ilConvertImage(IL_RGB, IL_UNSIGNED_BYTE);

		// Quit out if we failed the conversion
		if (!success)
		{
			error = ilGetError();
			std::cout << "Image conversion failed - IL reports error: " << error << " - " << iluErrorString(error) << std::endl;
		}

		// Generate a new texture
		glGenTextures(1, &textureID);

		// Bind the texture to a name
		glBindTexture(GL_TEXTURE_2D, textureID);

		// Set texture clamping method
		// Set texture interpolation method to the highest visual quality it can be:
		glGenerateMipmap(GL_TEXTURE_2D); // Note: This requires OpenGL 3.0 or higher

		// Specify the texture specification
		glTexImage2D(GL_TEXTURE_2D,                    // Type of texture
		             0,                                // Pyramid level (for mip-mapping) - 0 is the top level
		             ilGetInteger(IL_IMAGE_BPP),       // Image colour depth
		             ilGetInteger(IL_IMAGE_WIDTH),     // Image width
		             ilGetInteger(IL_IMAGE_HEIGHT),    // Image height
		             0,                                // Border width in pixels (can either be 1 or 0)
		             ilGetInteger(IL_IMAGE_FORMAT),    // Image format (i.e. RGB, RGBA, BGR etc.)
		             GL_UNSIGNED_BYTE,                 // Image data type
		             ilGetData());                     // The actual image data itself
	}
	else // If we failed to open the image file in the first place...
	{
		error = ilGetError();
		std::cout << "Image load failed - IL reports error: " << error << " - " << iluErrorString(error) << std::endl;
	}

	ilDeleteImages(1, &imageID); // Because we have already copied image data into texture data we can release memory used by image
	std::cout << "Texture creation successful." << std::endl;
	return textureID; // Return the GLuint to the texture so you can use it!
}

Render code:

for (int x = 0; x < m_MeshCount; ++x)
{
	glUniform1i(glGetUniformLocation(m_Shader, "Texture0"), 0);
	glActiveTexture(GL_TEXTURE0);
	glBindTexture(GL_TEXTURE_2D, m_Meshes[x].m_Textures[0]);
	glDrawElements(m_Meshes[x].m_RenderType, m_Meshes[x].m_PrimitiveCount, GL_UNSIGNED_INT, 0);
}

If there is any other part of the code that may be causing the problem, please tell me so I can post it.

Thank you,