MRT slows down app 5 times??

1 comment, last by 3dmaniac 16 years, 8 months ago
Hello, some time ago I wrote a deferred shading demo which uses MRT. Back then I had a Radeon 9800 and it ran at ~50 fps. Now I've got an X1950 Pro and... I still get ~50 fps. Why is that??? When I disable rendering to multiple buffers I get ~250 fps... strange. Help is welcome :)

Code snippet:

    void main()
    {
        vec3 viewVec = normalize(v_viewVec);
        vec2 newST   = ComputeNewCoords(viewVec);
        vec3 normal  = TBN_Matrix * GetNormalVec(newST);

        gl_FragData[0] = vec4(normal, 0.0);   // render target 0: normal
        gl_FragData[1] = vec4(fragPos, 0.0);  // render target 1: fragment position
    }
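For context, the application-side setup behind a two-target shader like this usually looks roughly as follows. This is only a minimal sketch assuming EXT_framebuffer_object and ARB_texture_float via GLEW; the function and variable names (createGBuffer, normalTex, positionTex, depthRb) and the texture formats are illustrative assumptions, not code from the original demo.

    #include <GL/glew.h>

    GLuint fbo, normalTex, positionTex, depthRb;

    void createGBuffer(int width, int height)
    {
        // Two float colour targets: one for normals, one for positions.
        // GL_RGBA16F_ARB is an assumption; any renderable float format works.
        GLuint* texs[2] = { &normalTex, &positionTex };
        for (int i = 0; i < 2; ++i) {
            glGenTextures(1, texs[i]);
            glBindTexture(GL_TEXTURE_2D, *texs[i]);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
            glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
            glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA16F_ARB, width, height, 0,
                         GL_RGBA, GL_FLOAT, 0);
        }

        // Depth renderbuffer so the geometry pass can depth-test.
        glGenRenderbuffersEXT(1, &depthRb);
        glBindRenderbufferEXT(GL_RENDERBUFFER_EXT, depthRb);
        glRenderbufferStorageEXT(GL_RENDERBUFFER_EXT, GL_DEPTH_COMPONENT24,
                                 width, height);

        glGenFramebuffersEXT(1, &fbo);
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                                  GL_TEXTURE_2D, normalTex, 0);
        glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT1_EXT,
                                  GL_TEXTURE_2D, positionTex, 0);
        glFramebufferRenderbufferEXT(GL_FRAMEBUFFER_EXT, GL_DEPTH_ATTACHMENT_EXT,
                                     GL_RENDERBUFFER_EXT, depthRb);

        // gl_FragData[0] goes to attachment 0, gl_FragData[1] to attachment 1.
        const GLenum bufs[2] = { GL_COLOR_ATTACHMENT0_EXT, GL_COLOR_ATTACHMENT1_EXT };
        glDrawBuffers(2, bufs);

        if (glCheckFramebufferStatusEXT(GL_FRAMEBUFFER_EXT) != GL_FRAMEBUFFER_COMPLETE_EXT) {
            // Incomplete FBO: the driver does not support this attachment combination.
        }

        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
    }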
I'm going to take a guess and say you are using Windows XP, correct?

The reason for this slowdown is a (bloody stupid) bug in ATI's OpenGL implementation which, when you change the number of render targets, recompiles the shaders. This does evil things to your frame rate, as you've noticed.
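If that really is the cause, one untested mitigation is simply to change the draw-buffer configuration as rarely as possible, ideally once per pass rather than per object or per material. A rough sketch of that per-frame flow is below; drawScene, drawFullscreenQuad and gBufferFbo are hypothetical placeholders for your own code, not anything from your demo.

    #include <GL/glew.h>

    extern void drawScene();           // placeholder: geometry-pass rendering
    extern void drawFullscreenQuad();  // placeholder: lighting-pass quad
    extern GLuint gBufferFbo;          // the MRT FBO from the setup sketch above

    void renderFrame()
    {
        // Geometry pass: bind the MRT FBO once and leave its two draw
        // buffers alone for the whole pass, so the buffer count never
        // changes mid-pass.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, gBufferFbo);
        static const GLenum mrtBufs[2] = { GL_COLOR_ATTACHMENT0_EXT,
                                           GL_COLOR_ATTACHMENT1_EXT };
        glDrawBuffers(2, mrtBufs);
        drawScene();

        // Lighting pass: one switch back to the single back buffer per frame.
        glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, 0);
        glDrawBuffer(GL_BACK);
        drawFullscreenQuad();
    }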

The good news is that the Vista driver is fixed (I have Vista and an ATI card); the bad news is I don't know when/if the Vista driver (which is the fabled OpenGL rewrite) will get back-ported to XP (if at all).

Throw Longs Peak into the mix and god knows where the driver fixes are going to go in the future.
Thanks for the answer, although it's sad news :( Yes, I'm running under WinXP. I thought I could make use of MRT in a bigger project, but now I see it won't be so easy to implement some nice lighting :(
