
# pancakedice

Member Since 22 Sep 2006
Offline Last Active Sep 04 2012 12:03 PM

### Book on mathematics of programming

22 August 2012 - 07:01 AM

Hi all

I hope you can help me out with some detective work I cannot figure out myself.

I remember reading about a book here on gamedev.net a few years ago, and now I cannot find it on Google. Back then it was hailed as an advanced programming book that really opened people's eyes, and posters here on gamedev.net considered it a must-read for everyone on their team.

The topic was general programming from a mathematical perspective: the author(s) formally defined functions and general objects, everything was done generically, and it was implemented in what I recall as being C++. The first chapter was available online. I might have mixed something up, but I think the author(s) had an Eastern European-sounding name. It was not about implementing algebraic structures or doing math in programming, but about using mathematical theory as an approach to improve your generic designs, if I recall correctly.

I know it's a long shot, but I really hope someone recalls a book like that being mentioned here (or elsewhere).

(I hope this is the right place to put the post..)

### Depth Buffer and deferred rendering

28 July 2010 - 05:24 AM

Hi,

I'm currently working on a deferred renderer and am now trying to optimize the lighting calculation. The way I have done it so far is to use a color attachment to store the depth value linearly, which I then use to reconstruct position. Then I figured that I need depth testing to make the lighting calculation more efficient - or rather, to skip the calculations entirely where possible. However, as far as I understand, early-z only works with the default depth buffer, so I would have to duplicate the depth information, stored in two different ways.

How do you normally solve this? Do you use the default depth buffer and let OpenGL write to it itself (possibly flipping far and near for more precision), or do you simply save the info in a dedicated color attachment? If you save it in a depth attachment, do you use a texture or a renderbuffer? Can you do depth testing on an MRT with a texture as the depth attachment, and is there a performance penalty for this?

(The way I currently store depth is the negated view-space z value divided by zfar, and I figure I would need to fetch it quite differently if I got it as z/w after the perspective transformation from the standard depth buffer.)

Tell me if I need to elaborate anything.

### Q3 BSP Rendering and Vertex Buffers

14 March 2010 - 12:04 AM

I'm currently working on a Q3 BSP renderer, and I've managed to get everything drawing nicely, including the PVS tests and frustum culling. However, I'm now trying to refactor things while improving the rendering performance.

The BSP format stores a huge array of vertices and, along with that, a huge array of indices. Using a non-VBO approach, I can plow through all surfaces and draw everything with good performance; with VBOs, my performance drops dramatically. First, I tried giving every surface its own vertex buffer, but that creates huge overhead because many surfaces in the BSP format have only two triangles. I also tried saving all vertex and index data in two large buffers (they're HUGE!), but performance is still not on par with the non-VBO approach.

As rendering without VBOs is being deprecated, I'd like to use VBOs without suffering a performance loss. How would you go about storing a huge level in vertex buffers to get performance at least equivalent to a non-VBO approach? It should be noted that I program on OS X 10.6, on a 13'' MacBook Pro without dedicated VRAM, so I guess the large vertex buffers could be a problem; I should not expect BETTER performance from using VBOs, but hopefully not worse either.

### Mac: Extending Python with C + OpenGL / libs

17 July 2009 - 12:32 PM

### Shadow mapping with FBOs

20 February 2009 - 02:39 AM

I've been trying to implement shadow mapping in my current project, but something seems to be acting weird. First of all, I've implemented projective texturing, and that works exactly like it should. Then I made my light source a projector, and that works as well. After that, I started working with FBOs, reading More OpenGL Game Programming alongside the article "FBO 101" on this site. I could correctly render the scene from my light source and then project the result onto the scene. This, however, was not a depth map, just color information.

The Cg tutorial (I'm using Cg on a GF6600GT) says that the underlying hardware figures out that it is a depth texture and therefore does the comparison itself. I've since tried drawing to a depth texture instead of just color, and this has only been successful to a certain extent: I can correctly create shadows, but almost all of the picture turns out black due to some weird artifact. The images below should show the scene (1) without shadows applied and (2) with the shadow map applied. When the ball moves around (for example near the walls or under the shadow of the brick flying in the air), it casts shadows and is correctly not lit under the brick. So shadowing somewhat works.

Now that you can see the problem, the question is why I have it. I have not been able to figure that out, so I hope you can shed some light on this. This is the way I initialize my framebuffer, renderbuffer and texture (I use OpenTK and C#):
```
GL.Ext.GenFramebuffers(1, out frameBufferId);
GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, frameBufferId);

GL.Ext.GenRenderbuffers(1, out depthBufferId);
GL.Ext.BindRenderbuffer(RenderbufferTarget.RenderbufferExt, depthBufferId);
GL.Ext.RenderbufferStorage(RenderbufferTarget.RenderbufferExt, (RenderbufferStorage)All.DepthComponent32, 512, 512);
GL.Ext.FramebufferRenderbuffer(FramebufferTarget.FramebufferExt, FramebufferAttachment.DepthAttachmentExt, RenderbufferTarget.RenderbufferExt, depthBufferId);

GL.GenTextures(1, out textureID);
GL.BindTexture(TextureTarget.Texture2D, textureID);

GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.DepthTextureMode, (int)All.Intensity);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureCompareFunc, (int)All.Lequal);
GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureCompareMode, (int)All.CompareRToTexture);

GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.DepthComponent24, 512, 512, 0, PixelFormat.DepthComponent, PixelType.UnsignedByte, IntPtr.Zero);

GL.Ext.GenerateMipmap(GenerateMipmapTarget.Texture2D);

GL.Ext.FramebufferTexture2D(FramebufferTarget.FramebufferExt, FramebufferAttachment.DepthAttachmentExt, TextureTarget.Texture2D, textureID, 0);

FramebufferErrorCode ecode = GL.Ext.CheckFramebufferStatus(FramebufferTarget.FramebufferExt);
if (ecode != FramebufferErrorCode.FramebufferCompleteExt)
{  //breakpoint here, but not reached.

}

GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, 0);
```
This is how I setup the rendering for the FBO:
```
GL.ClearColor(Color.White);
this.camera = camera; // the light is given as a camera
this.lights = lights; // lights won't get drawn

GL.Disable(EnableCap.Texture2D);
GL.Disable(EnableCap.Lighting);

GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, frameBufferId);

GL.DrawBuffer(DrawBufferMode.None);

GL.Viewport(0, 0, 512, 512);

// clear buffers from last frame
GL.Color3(new Vector3(1, 1, 1));

// rendering objects from lights perspective, using fixed function pipeline.
// Background white, objects white. This should not matter.

GL.PopAttrib();
GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, 0);
GL.DrawBuffer(DrawBufferMode.Back);
```
I then just render the scene from the actual camera's perspective, using the Cg effect that should apply the shadows. I use TEXUNIT3 for my shadow map, and I do the following before and after drawing:
```
GL.ActiveTexture(TextureUnit.Texture3);
GL.BindTexture(TextureTarget.Texture2D, textureID);
GL.Ext.GenerateMipmap(GenerateMipmapTarget.Texture2D);
// draw with shader from camera's perspective
GL.ActiveTexture(TextureUnit.Texture3);
GL.BindTexture(TextureTarget.Texture2D, 0);
```
I have tried other texture units as well, but without success. This is my shader code utilizing the shadow map:
```
float4 ShadowFragment(outputVertex IN, sampler2D diffuseTexture : TEXUNIT0, sampler2D shadowTexture : TEXUNIT3) : COLOR {

// GENERATE emissive, ambient, diffuse specular as per pixel..... DONE.
// diffuseTexture not used, but can be.