
Member Since 22 Sep 2006
Offline Last Active Sep 04 2012 12:03 PM

Topics I've Started

Book on mathematics of programming

22 August 2012 - 07:01 AM

Hi all

I hope you can help me out with some detective work I cannot figure out myself.

I remember reading about a book here on gamedev.net a few years ago, and now I cannot find it on Google. Back then it was hailed as an advanced programming book that really opened the eyes of many, and the posters here on gamedev.net saw it as a must-read for everyone on their team.

The topic was general programming from a mathematical perspective: the author(s) defined functions and general objects, everything was performed generically, and it was implemented in what I recall as being C++. The first chapter was available online. I might have mixed something up, but I think the author(s) had an Eastern European-sounding name. It was not about implementing algebraic structures or doing math in programming, but about using mathematical theory as an approach to improve your generic designs, if I recall correctly.

I know it's a long shot, but I really hope someone recalls a book like that being mentioned here (or elsewhere).

Thanks in advance!

(I hope this is the right place to put the post..)

Depth Buffer and deferred rendering

28 July 2010 - 05:24 AM


I'm currently working on a deferred renderer and am now trying to optimize the lighting calculation. The way I have done it so far is to use a color attachment to store the depth value linearly, which I then use to reconstruct position. Then I figured that I need depth testing for more efficient lighting calculation, or rather to avoid doing the calculations at all. However, early-z only works with the default depth buffer as far as I understand, and thus I would have to duplicate the depth information, stored in two different ways.

How do you normally solve this? Do you use the default depth buffer and let OpenGL write to it itself (possibly flipping far and near for more precision), or do you simply save the info in a dedicated color attachment? If you save it in a depth attachment, do you do it as a texture or as a renderbuffer? Can you do depth testing on an MRT with a texture as the depth attachment, and is there a performance penalty for this?

(The way I currently store depth is the negated view-space z value divided by zfar, and I figure I would need to fetch it quite differently if I got it as z/w after the perspective transformation from the standard depth buffer.)
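To make the two encodings concrete, here is a toy Python sketch of how I'd recover view-space distance from each (the near/far values are made up, and `zd` below is the positive view-space distance, i.e. negated view-space z):

```python
# Toy comparison of the two depth encodings (n and f are assumed values,
# not from my actual renderer).

n, f = 0.1, 100.0  # hypothetical near/far planes

def encode_linear(zd):
    """My current scheme: negated view-space z divided by zfar."""
    return zd / f

def decode_linear(d):
    return d * f

def encode_hw(zd):
    """Window-space depth in [0,1], as the standard depth buffer stores z/w."""
    z_ndc = (f + n) / (f - n) - (2.0 * f * n) / ((f - n) * zd)
    return 0.5 * z_ndc + 0.5

def decode_hw(d):
    """The extra work needed to get view-space distance back from z/w depth."""
    z_ndc = 2.0 * d - 1.0
    return (2.0 * f * n) / (f + n - z_ndc * (f - n))
```

Both round-trip, but the hardware encoding concentrates precision near the near plane, which is part of why I'm unsure about mixing the two.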

Thanks in advance.

Tell me if I need to elaborate anything.

Q3 BSP Rendering and Vertex Buffers

14 March 2010 - 12:04 AM

I'm currently working on a Q3 BSP renderer and I've managed to get everything drawing nicely, including the PVS tests and frustum culling. However, I'm now trying to refactor things while improving the rendering performance. The BSP format stores a huge array of vertices along with a huge array of indices. With a non-VBO approach, I can plow through all surfaces and draw everything with nice performance; with VBOs, my performance drops dramatically.

First, I tried to give every surface its own vertex buffer, but that creates huge overhead because a lot of the surfaces in the BSP format have only two triangles. I also tried to store all vertex and index data in two large buffers (they're HUGE!), but performance is still not on par with the non-VBO approach.

As rendering without VBOs is being deprecated, I'd like to use VBOs without suffering a performance loss. How would you go about storing a huge level in vertex buffers to get performance at least equivalent to the non-VBO approach? It should be noted that I program on OS X 10.6, on a 13'' MacBook Pro without dedicated VRAM, so I guess the large vertex buffers could be a problem because of this, and I should not expect BETTER performance from using VBOs, but hopefully not worse either.
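To show the direction I've been considering (a toy sketch in Python for brevity, not my actual code): keep all surfaces in the one shared vertex/index buffer pair, treat each visible surface as a (first_index, index_count) range, and merge ranges that happen to be contiguous in the index buffer so that many tiny draw calls become a few large ones. Whether this is the right approach is exactly my question.

```python
# Hypothetical batching sketch: all surfaces already live in ONE shared
# vertex buffer and ONE shared index buffer; each visible surface is a
# (first_index, index_count) range into the index buffer.

def merge_ranges(surfaces):
    """Merge index ranges that are contiguous in the shared index buffer.

    surfaces: list of (first, count) tuples, assumed to share render state
    (texture, shader), so merged ranges can each become one glDrawElements
    call instead of one call per surface.
    """
    merged = []
    for first, count in sorted(surfaces):
        if merged and merged[-1][0] + merged[-1][1] == first:
            # extend the previous range instead of starting a new draw call
            merged[-1] = (merged[-1][0], merged[-1][1] + count)
        else:
            merged.append((first, count))
    return merged
```

With PVS culling, neighboring surfaces in the same leaf are often adjacent in the index array, so the merge rate could be decent; I don't know if this helps on a shared-memory GPU like mine, though.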

Mac: Extending Python with C + OpenGL / libs

17 July 2009 - 12:32 PM

I'm currently diving into the world of Python and OS X development and I am already enjoying working with it. Unfortunately, I have found that PyOpenGL and numpy together are horrifyingly slow, which is to be expected. The code has been very fast to write, and my plan has always been to later move bottlenecks into C/C++ as a Python extension module. This is where I am now.

I have managed to build and run the Python spam example just fine, with distutils and setup.py scripts. I had to add a bit to my PATH environment variables, since after installing Xcode everything, including the standard C and C++ headers, was not where gcc expected it (/usr/local etc). I've got everything to work, and I can compile the boost test example as well, so I hope that at least all headers are set up properly.

Now, the problem: when including GL/gl.h I have many different directories to choose from, and I've no idea what the different include directories imply. I did not choose either of the SDK folders (10.4 and 10.5) but instead /System/Library/Frameworks/OpenGL.framework/Headers, where the headers are also located. The file compiles without a problem when running python setup.py build, and python setup.py install installs it. When I then try to load the module from the Python interpreter, I get the following error:

ImportError: dlopen(/Library/Python/2.5/site-packages/spam.so, 2): Symbol not found: _glBegin

I'd guess that it cannot load the OpenGL dynamic library. That's why I have also tried to add library_dirs = ['/System/Library/Frameworks/OpenGL.framework/Libraries/'] and libraries = ['libGL'] to my Extension constructor in the setup.py script, but without any luck. As I am still new to Mac OS X I can't really figure out which files I have to put where. On Windows I'd have known that the opengl32.dll file would be found, but in Mac land I'm lost. Any help?
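For reference, here is the direction I'm about to try next (untested sketch): since OpenGL on OS X is a framework rather than a plain library, link it with '-framework OpenGL' instead of libraries=['libGL'] (which I now suspect was wrong anyway, since the 'lib' prefix is implied; on Linux one would write 'GL'). The source file name below is just a placeholder for my module.

```python
# Sketch of the Extension setup for a C module that calls OpenGL on OS X.
from setuptools import Extension  # distutils.core.Extension works the same

spam_gl = Extension(
    'spam',
    sources=['spammodule.c'],               # hypothetical source file name
    extra_link_args=['-framework', 'OpenGL'],  # link the OpenGL framework
)

# In setup.py this would then be passed to setup():
#   setup(name='spam', version='0.1', ext_modules=[spam_gl])
```

If the _glBegin symbol error really is a missing link against OpenGL, this should resolve it, but I can't confirm yet.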

Weird shadowmap behavior, FBO

20 February 2009 - 02:39 AM

I've been trying to implement shadow mapping in my current project, but something seems to be acting weird. First of all, I've implemented projective texturing and that works exactly like it should. Then I made my light source a projector, and that works as well. After that, I started working with FBOs, reading More OpenGL Game Programming alongside the article "FBO 101" on this site. I could correctly render the scene from my light source and then project the result onto the scene. This, however, was not a depth map, but just color information.

The Cg tutorial (I'm using Cg on a GF6600GT) says that the underlying hardware implementation figures out that it is a depth texture, and therefore does the comparison itself. I've then tried to draw to a depth texture instead of just color, and this has only been successful to a certain extent: I can correctly create shadows, but almost all of the picture turns out black due to some weird artifact. The images below should show the scene (1) without shadows applied, and (2) with the shadow map applied. When the ball moves around (for example near the walls or under the brick flying in the air), it casts shadows and is correctly not lit under the brick. So shadowing somewhat works.

Without shadows
With shadows

Now that you can see the problem, the question is why I have it. I have not been able to figure that out, and I therefore hope that you can shed some light on this. This is how I initialize my framebuffer, renderbuffer, and texture (I use OpenTK and C#):
            GL.Ext.GenFramebuffers(1, out frameBufferId);
            GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, frameBufferId);
            GL.Ext.GenRenderbuffers(1, out depthBufferId);
            GL.Ext.BindRenderbuffer(RenderbufferTarget.RenderbufferExt, depthBufferId);
            GL.Ext.RenderbufferStorage(RenderbufferTarget.RenderbufferExt, (RenderbufferStorage)All.DepthComponent32, 512, 512);
            GL.Ext.FramebufferRenderbuffer(FramebufferTarget.FramebufferExt, FramebufferAttachment.DepthAttachmentExt, RenderbufferTarget.RenderbufferExt, depthBufferId);

            GL.GenTextures(1, out textureID);
            GL.BindTexture(TextureTarget.Texture2D, textureID);


            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMinFilter, (int)TextureMinFilter.Nearest);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureMagFilter, (int)TextureMagFilter.Nearest);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapS, (int)TextureWrapMode.Repeat);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureWrapT, (int)TextureWrapMode.Repeat);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.DepthTextureMode, (int)All.Intensity);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureCompareFunc, (int)All.Lequal);
            GL.TexParameter(TextureTarget.Texture2D, TextureParameterName.TextureCompareMode, (int)All.CompareRToTexture);

            GL.TexImage2D(TextureTarget.Texture2D, 0, PixelInternalFormat.DepthComponent24, 512, 512, 0, PixelFormat.DepthComponent, PixelType.UnsignedByte, IntPtr.Zero);


            // note: this replaces the depth renderbuffer attached to the same attachment point above
            GL.Ext.FramebufferTexture2D(FramebufferTarget.FramebufferExt, FramebufferAttachment.DepthAttachmentExt, TextureTarget.Texture2D, textureID, 0);

            FramebufferErrorCode ecode = GL.Ext.CheckFramebufferStatus(FramebufferTarget.FramebufferExt);
            if (ecode != FramebufferErrorCode.FramebufferCompleteExt)
            {
                // breakpoint here, but not reached.
            }

            GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, 0);
This is how I set up the rendering for the FBO:
            this.camera = camera; // the light is given as a camera
            this.lights = lights; // lights won't get drawn.


            GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, frameBufferId);


            GL.Viewport(0, 0, 512, 512);

            // clear buffers from last frame
            GL.Clear(ClearBufferMask.ColorBufferBit | ClearBufferMask.DepthBufferBit);
            GL.Color3(new Vector3(1, 1, 1));
            // rendering objects from lights perspective, using fixed function pipeline. 
            // Background white, objects white. This should not matter.

            GL.Ext.BindFramebuffer(FramebufferTarget.FramebufferExt, 0);
I then just render the scene from the actual camera's perspective, using the Cg effect that should apply things. I use TEXUNIT3 for my shadow map, and I do the following before and after drawing:
                GL.BindTexture(TextureTarget.Texture2D, textureID);
// draw with shader from camera's perspective
                GL.BindTexture(TextureTarget.Texture2D, 0);
I have tried other texture units as well, but without success. This is my shader code utilizing the shadow map:
float4 ShadowFragment(outputVertex IN, sampler2D diffuseTexture : TEXUNIT0, sampler2D shadowTexture : TEXUNIT3) : COLOR {
	// GENERATE emissive, ambient, diffuseSpecular per pixel..... DONE.
	// diffuseTexture not used, but can be.

	float4 shadowCoefficient = tex2Dproj(shadowTexture, IN.texCoordProj);
	// all terms are now obtained, return
	float4 color;
	color.xyz = (emissive + ambient + diffuseSpecular) * shadowCoefficient.r;
	color.w = 1; // alpha
	return color;
}
I really hope someone will take the time to help me spot my errors, as I've been going insane trying to fix this for days. Thanks in advance.
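While debugging, I made myself a toy Python sketch of why a depth bias might matter here (not GPU-accurate, just my mental model of classic shadow acne, which I suspect could explain the mostly-black picture): the lit surface's own depth, re-quantized when written into the 24-bit depth texture, can land slightly in front of the exact fragment depth, so with a LEQUAL-style compare the surface shadows itself.

```python
# Toy model of shadow acne with a 24-bit depth texture (not GPU-accurate).

def quantize24(depth):
    """Round a [0,1] depth to the nearest 24-bit representable value,
    as writing into a DepthComponent24 texture would."""
    levels = (1 << 24) - 1
    return round(depth * levels) / levels

def in_shadow(fragment_depth, stored_depth, bias=0.0):
    """Fragment counts as shadowed if it lies behind the depth the light
    recorded; a small bias pushes the comparison in the fragment's favor."""
    return fragment_depth - bias > stored_depth
```

In a real renderer, glPolygonOffset during the depth pass (or a bias baked into the light's matrix) would play the role of `bias`; I haven't tried that yet, so I'd welcome confirmation that this is even the right track.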