
DavidTse

Member Since 27 Mar 2008
Offline Last Active May 01 2016 06:05 AM

Topics I've Started

Best place to store roughness and other one-channel maps?

25 April 2016 - 08:27 PM

Like the title says, I'm curious about the most efficient place to store these one-channel maps. Right now I'm rewriting my texture pipeline, and I don't know if I should pack the roughness into the alpha channel of the diffuse and compress the result with BC3, or put the roughness into its own texture compressed with BC4 and do the diffuse with BC1. I don't have a super deep understanding of exactly how GPUs work yet, so I'm not sure which one would end up being better. If someone with more insight could weigh in, I would appreciate it.
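To make it concrete, here is roughly what the two upload paths would look like (just a sketch; the width/height and bc*Size/bc*Data variables are placeholders for offline-compressed texture data, and I'm assuming the S3TC and RGTC formats are supported):

// Option A: roughness packed into the diffuse alpha, one BC3 (DXT5) texture.
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGBA_S3TC_DXT5_EXT,
                       width, height, 0, bc3Size, bc3Data);

// Option B: diffuse as BC1 (DXT1) plus a separate one-channel BC4 texture.
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RGB_S3TC_DXT1_EXT,
                       width, height, 0, bc1Size, bc1Data);
glCompressedTexImage2D(GL_TEXTURE_2D, 0, GL_COMPRESSED_RED_RGTC1,
                       width, height, 0, bc4Size, bc4Data);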

 

Thanks, 

David 


Horrible perf and weird spike pattern when rendering wireframe on NVIDIA

24 January 2015 - 06:32 AM

Hey Everyone, 

 

I'm getting really horrible performance when rendering my game in wireframe. It's becoming particularly annoying while working on terrain geometry LOD. I only noticed the problem after upgrading my GPU from an NVIDIA GTX 670 to a GTX 970. It could also be the drivers I updated to right after installing the new card, but I don't know; I'm on the latest drivers (347.25) and they still have the problem. Before the GPU upgrade there was absolutely no performance change when switching between wireframe and fill. I'm using Windows 8.1 and have not tested the wireframe performance on AMD. This is most likely unrelated, but the GPU does make a high-pitched whining sound that the internet says is coil whine, and it only happens when the FPS is really high with v-sync off.

 

The issue is that the frame rate tanks whenever wireframe rendering is enabled, and there is a bizarre spike pattern. I attached a screenshot of my performance graph. Ignore the CPU tag at the bottom of the graph; it's actually overall frame time, and all the wireframe performance issues are coming from the GPU. You can see on the graph that after wireframe is enabled, the frame time more than doubles and then the spike pattern starts. I also noticed that whenever I make the camera look down, the frame time goes progressively up, from about 50 ms to over 300 ms when looking directly down the vertical axis.

 

[Screenshot: frame-time graph showing the spike pattern after wireframe is enabled]

 

I enable wireframe with glPolygonMode(GL_FRONT_AND_BACK, GL_LINE);, with some passes, like the final gbuffer render, converting back to GL_FILL. I tried only calling glPolygonMode at wireframe enable and disable time, but that did not change the performance pattern at all. The actual tessellation of the meshes in the scene doesn't change the wireframe performance overhead either.
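To be concrete, the toggle is essentially just this (a sketch; WireframeEnabled is my engine's flag):

// Wireframe toggle at the start of the frame:
glPolygonMode(GL_FRONT_AND_BACK, WireframeEnabled ? GL_LINE : GL_FILL);

// ... draw scene geometry ...

// Passes like the final gbuffer render force fill mode back on:
glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);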

 

I'm hoping some of you might have more insight into what the hell is going on; maybe you have some ideas about what the GPU/drivers are doing to cause the weird valley-shaped performance spikes. Any info would be appreciated.

 

Thanks, 

David Tse


New indie dev blog for Survive, an open-world zombie survival FPS

14 April 2013 - 08:40 PM

Hello,

 

My name is David Tse, and I have recently dropped out of college to start a game development company called Subsurface Games. I graduated from high school last year (May 2012) and made it a few months into college before I decided it was a waste of my time and that it was time for me to go make games. Right now I am the only one working on the game, apart from a few contract artists. I am making a zombie survival shooter set in a procedural open world. The idea of the game is: what would you do in the zombie apocalypse? You have to scavenge the unlimited procedural open world for food, water, and guns to survive. You can enter every single building in the world to find supplies, or build a fort to survive.

 

I am making a 3D game engine completely from scratch for this project using OpenGL and C++. The only libraries I am really using are SpeedTree for vegetation and PhysX for physics, cloth, and destruction. I have also started a development blog for this game, where I post game updates and other behind-the-scenes stuff. When the game reaches the alpha stage (it is currently in pre-alpha), people who pre-order will be able to download the game and play it before it's done.

 

I hope you guys can check out the blog and tell me what you think about the idea for the game and the current state of the engine/game. Here is a link to the blog/company website: http://www.subsurfacegames.com/, and here is the latest video that I posted to the blog:

 

 

Thanks,

David


Reconstructing World Position from Depth

23 December 2012 - 09:41 PM

Hello, I am in the process of converting to a deferred renderer, and I am a little stuck on position reconstruction from depth. I have been reading a lot about it. I have read all of MJP's blog posts and the thread that started them, and I feel like I have a somewhat solid understanding of how it works, but my implementation has some issues. If some of you could give me some insight into my problems, I would appreciate it.

 

I have tried many variations on the approach I have right now, but this one gets the closest results to what it should be.

 

First I get the frustum points, in what I think is camera space:

 

Vector3f NearCenterPosition = Look * nearplane;
Vector3f FarCenterPosition = Look * farplane;
float angle = fov * 0.0174532925; // degrees to radians
float NearHeight = 2 * (tan(angle / 2) * nearplane);
float NearWidth = NearHeight * aspectratio;
float FarHeight = 2 * (tan(angle / 2) * farplane);
float FarWidth = FarHeight * aspectratio;
// Rays toward the four far-plane corners (normalized):
FrustumPoints[0] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[1] = cml::normalize(FarCenterPosition - Right*(FarWidth/2) + Up*(FarHeight/2));
FrustumPoints[2] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) - Up*(FarHeight/2));
FrustumPoints[3] = cml::normalize(FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2));

 

Then I pass those to my lighting shader as a uniform.
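Uploading them looks roughly like this (a sketch; lightingProgram and the copy into a flat float array are just for illustration):

// Upload the four corner rays as a vec3[4] uniform:
GLint loc = glGetUniformLocation(lightingProgram, "FrustumPoints");
float rays[12];
for (int i = 0; i < 4; ++i)
    for (int j = 0; j < 3; ++j)
        rays[i * 3 + j] = FrustumPoints[i][j];
glUniform3fv(loc, 4, rays);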

 

 

Then I give each vertex of the full-screen quad an index into these frustum points:

Vector3f FullScreenVert1 = Vector3f( 1.0, -1.0, 0.0);
int FullScreenVert1FrustumIndex = 1;

Vector3f FullScreenVert2 = Vector3f( 1.0,  1.0, 0.0);
int FullScreenVert2FrustumIndex = 0;

Vector3f FullScreenVert3 = Vector3f(-1.0,  1.0, 0.0);
int FullScreenVert3FrustumIndex = 2;

Vector3f FullScreenVert4 = Vector3f(-1.0, -1.0, 0.0);
int FullScreenVert4FrustumIndex = 3;
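The index itself is fed to the vertex shader as an integer attribute, roughly like this (a sketch; attribute location 1 and the VBO name are assumptions for illustration):

// Per-vertex frustum indices for the quad, in the vertex order above:
int quadIndices[4] = { 1, 0, 2, 3 };
glBindBuffer(GL_ARRAY_BUFFER, indexVBO);
glBufferData(GL_ARRAY_BUFFER, sizeof(quadIndices), quadIndices, GL_STATIC_DRAW);
glEnableVertexAttribArray(1);
glVertexAttribIPointer(1, 1, GL_INT, 0, nullptr); // integer attribute, no float conversion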

 

In the lighting vertex shader I just index into the frustum points and pass the ray on to the pixel shader.

 

Lighting Vert Shader:

 

 

uniform vec3 FrustumPoints[4]; // the four corner rays from the CPU side
in int index;                  // per-vertex frustum index
out vec3 CameraRay;
void main(void)
{
	CameraRay = FrustumPoints[index];
	...
}

 

Then in the lighting frag shader I first convert the depth to linear depth using:

float DepthToLinear(float depth)
{ 
	vec2 g_ProjRatio   = vec2( ViewClipFar / (ViewClipFar-ViewClipNear), ViewClipNear / (ViewClipNear-ViewClipFar) );
	return g_ProjRatio.y/(depth-g_ProjRatio.x);
}
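As a sanity check I ran the same math on the CPU; assuming the depth comes from a standard (non-reversed) projection in [0,1], it returns view depth normalized by the far plane:

// C++ version of the linearization above, for checking the endpoints:
float DepthToLinearCPU(float depth, float n, float f)
{
    float rx = f / (f - n);
    float ry = n / (n - f);
    return ry / (depth - rx);
}
// DepthToLinearCPU(0.0f, 1.0f, 1000.0f) == 0.001f  -> n/f at the near plane
// DepthToLinearCPU(1.0f, 1.0f, 1000.0f) == 1.0f    -> 1.0 at the far plane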

 

Finally, I get the world position by multiplying the interpolated camera ray by the linear depth times the view clip far:

 

vec3 WorldPosition = CameraPosition - ( CameraRay * (LinearDepth*ViewClipFar ) ) ;

I know you're supposed to add the camera position, but subtracting like this gets closest to the desired results. I am comparing against just outputting the pixel position in the G-buffer pass.
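For reference, my reading of the math this is based on (from MJP's posts) is a similar-triangles relation, written out here in C++; note that it uses the unnormalized corner ray, so treat it as my understanding of the technique rather than what my shader currently does:

// Keep the corner ray UNNORMALIZED so its extent along Look equals farplane;
// scaling it by normalized linear depth (viewZ / farplane) then lands on the
// surface by similar triangles.
Vector3f ray = FarCenterPosition + Right*(FarWidth/2) + Up*(FarHeight/2); // no normalize
Vector3f worldPos = CameraPosition + ray * LinearDepth; // LinearDepth = viewZ / farplane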

 

Here are some screenshots showing the comparisons.

 

The correct results, what I am expecting:

http://farm9.staticflickr.com/8075/8301716099_44e9f527dc_k.jpg

 

Depth reconstructed results:

http://farm9.staticflickr.com/8213/8301716005_9d86ec6cc4_k.jpg

 

Also, when I move the camera higher up, the z value of all the world positions increases, turning the green light blue, the yellow white, etc. When I turn the camera up, the "horizon line" where the z value changes from 0 to 1 moves down, and when I look down it moves up. Then when I move the camera in x and y, the cross slowly creeps in the opposite direction of the movement. I thought this might be because of subtracting the camera position, but when I add the camera position it moves twice as fast in the other direction. If you need more information on the behavior of the implementation, just let me know. It's hard to explain and show in screenshots, but hopefully you can see what I'm doing wrong from the code. Any help is greatly appreciated.

 

I should also mention that my engine uses Z as the up axis.

 

Thanks,

David

 


Self shadowing polygon seams

29 June 2012 - 12:40 AM

Hello everyone, I recently implemented shadow mapping in my engine, and I'm trying to get it to look just right. When I added a sphere to the scene, I noticed there were seams in the self-shadowing along the polygons of the underlying mesh, and it was not the smooth self-shadowing I was expecting for a sphere. I didn't really notice this before because I was using more complex models with denser geometry. Here is a picture of what I'm talking about:


[Images: the sphere, showing hard seams in the self-shadowing that follow the underlying polygon edges]

I'm not sure why this is the case; I would think there should be a nice smooth line on the part of the sphere that is in shadow. It must be a problem with something in my shaders, so here is how I do the shadowing.

(glsl)
Vert Shader:

ProjShadow = TextureMatrix * WorldMatrix * in_Position; // object -> world -> shadow-map texture space

Frag Shader:

float depth = textureProj(ShadowMap, ProjShadow).x; // occluder depth stored in the shadow map
float R = ProjShadow.p / ProjShadow.q;              // this fragment's depth in shadow-map space
R += 0.0005;                                        // constant depth bias
float shadowValue = (R <= depth) ? 1.0 : 0.0;       // 1.0 = lit, 0.0 = in shadow
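For comparison, an alternative I've seen suggested is applying a slope-scaled depth bias while rendering the shadow map, instead of the constant offset above (a sketch; the factor/units values are just typical starting points, not something I've tuned):

// During the shadow-map pass:
glEnable(GL_POLYGON_OFFSET_FILL);
glPolygonOffset(1.1f, 4.0f); // slope-scaled factor, constant units
// ... render shadow casters ...
glDisable(GL_POLYGON_OFFSET_FILL);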

If anyone knows what I am doing wrong and could help me out, I would really appreciate it.

Thanks, David
