About Punika

  1. Doom3 is the proof that "keep it simple" works.

        I don't see how "classes" help here. Yes, you can protect your member variables from outside access and then add synchronization primitives inside your member functions, but that is a poor way to do concurrent programming. A better way is to divide your data into units of work (threads, work items, tasks; you name it). This has been done in the render code, I think, and as you can see it can be done just as well in C.
  2. Hi everyone, I have a question regarding final gathering in my photon-mapping lightmapper. At the moment I have a working lightmapper which gathers photons directly at the texels of my lightmap. To support normal-mapped surfaces I gather them with three normals (the three basis normals from the HL2 presentation). This works rather nicely, but I need tons of photons. Now I would like to implement final gathering. As I understand it, I need to cast rays over the hemisphere from the patch position, and when I hit my geometry I do my radiance estimate at that point. My question is: which normal do I choose for that? I think without normal mapping the right choice would be the normal of the hit geometry, since the normal of my patch would point in the wrong direction for incoming light; I store only incoming light in my photon map, so photon directions should always face my surfaces. What I really do not get is how to alter the surface normal so I can use it for normal mapping. Or should I reverse my three basis normals and use those? Could anyone explain the right way of doing this? Thank you in advance, punika. P.S. Sorry if I misspelled something; English is not my native language.
  3. A little bit off-topic, but every now and then someone says it has been done before... and I don't think so. I mean, "some but not everyone"* think it is a sparse octree with raycasting... Then I would like to see a demo that produces 30 FPS at 1024x768 with such a deep octree level. I have seen none on a CPU, which is what they claim to use... * Fixed that
  4. Hello, I am having a problem doing directional lighting using the pre-pass lighting method. As long as I do the lighting in object space it works correctly. As you can see below, when I pass my t, b, n vectors unmodified to the output target and use a light direction vector in object space, I get the right result. But when I transform my t, b, n vectors to eye space using the normal matrix and pass the light direction in eye space, the lighting depends on the view angle of my camera. I think my light direction vector is okay, because when I pass a world-space direction vector and transform it with the normal matrix I get the same results. I don't see where I am doing something terribly wrong. Have I misunderstood something, like transforming my vectors from eye space into tangent space? Any help would be appreciated. Thanks, punika.

This shader creates my depth and normal buffers:

```glsl
// inputs
attribute vec4 position;
attribute vec4 texcoord0;
attribute vec4 normal;
attribute vec4 tangent;
attribute vec4 bitangent;

// outputs
varying float depth;
varying vec3 viewSpaceNormal;
varying vec3 viewSpaceTangent;
varying vec3 viewSpaceBitangent;
//varying mat3 tangentToView;

void main()
{
    // position in view space
    vec3 viewSpacePos = vec3( gl_ModelViewMatrix * position );

    // pass depth
    depth = viewSpacePos.z;

    // view space normal, tangent and bitangent
    viewSpaceNormal    = normalize( gl_NormalMatrix * normal.xyz );
    viewSpaceTangent   = normalize( gl_NormalMatrix * tangent.xyz );
    viewSpaceBitangent = normalize( gl_NormalMatrix * bitangent.xyz );

    // tangent space...
    // viewSpaceNormal    = normal.xyz;
    // viewSpaceTangent   = tangent.xyz;
    // viewSpaceBitangent = bitangent.xyz;
    // tangentToView = mat3( tangent, bitangent, normal );

    // set texture coordinates
    gl_TexCoord[0] = texcoord0;

    // transform into clip space
    gl_Position = gl_ModelViewProjectionMatrix * position;
}
```

```glsl
// vertex shader inputs
varying float depth;
varying vec3 viewSpaceNormal;
varying vec3 viewSpaceTangent;
varying vec3 viewSpaceBitangent;
//varying mat3 tangentToView;

// far clip plane
uniform float farClip;

// Compiler Textures
//@usertextures
// Compiler Constant Inputs
//@userconstants
// Compiler Uniform Inputs
//@uservars

// material inputs
vec3 matNormalInput;

// Normal map to view space normal
vec3 NormalMapToSpaceNormal( vec3 normalMap, vec3 normal, vec3 tangent, vec3 bitangent )
{
    normalMap = normalMap * 2.0 - 1.0; // FIXME do we need to transform it back into range [0,1]? (see above)
    normalMap = vec3( normal * normalMap.z + normalMap.x * tangent - normalMap.y * bitangent );
    return normalMap;
}

void main()
{
    //@matNormalInput

    vec3 n = NormalMapToSpaceNormal( matNormalInput, viewSpaceNormal, viewSpaceTangent, viewSpaceBitangent );
    // vec3 n = (matNormalInput * 2.0 - 1.0) * tangentToView;

    // Normals need to be adjusted from [-1, 1] to [0, 1]
    n = 0.5 * ( normalize( n ) + 1.0 );
    vec2 encodeN = encodeNormal( n );

    // depth buffer
    gl_FragData[0] = packFloatTo4x8( -depth / farClip );

    // normal buffer
    gl_FragData[1] = pack2FloatTo4x8( encodeN );
}
```

And this one does the directional lighting using a fullscreen quad:

```glsl
// user inputs
uniform sampler2D depthTexture;
uniform sampler2D normalTexture;
//TODO
//uniform vec3 viewPosition;
//uniform vec3 viewDirection;

// light direction in view space
uniform vec3 lightDir;

// vertex shader inputs
varying vec3 frustumRay;

void main()
{
    const vec3 lightColor = vec3( 0.7, 0.7, 0.7 );
    const float specularPower = 16.0;

    // get depth
    float depth = unpack4x8ToFloat( texture2D( depthTexture, gl_TexCoord[0].xy ) );

    // Reconstruct position from the depth value, making use of
    // the ray pointing towards the far clip plane
    vec3 pos = frustumRay * depth;

    // decode and unpack normal
    vec3 normal = decodeNormal( vec4( unpack4x8To2Float( texture2D( normalTexture, gl_TexCoord[0].xy ) ), 0.0, 0.0 ) );

    // FIXME Convert normal back from [0,1] to [-1,1]
    normal = ( normal * 2.0 ) - 1.0;

    // N dot L lighting term
    float nl = clamp( dot( normal, -lightDir ), 0.0, 1.0 );

    // lighting
    gl_FragColor = vec4( lightColor * nl, 1.0 );

    // depth
    // gl_FragColor = vec4( depth, depth, depth, 1.0 );
    // normal
    // gl_FragColor = vec4( normal.x, normal.y, normal.z, 1.0 );
}
```

```glsl
// stream inputs
attribute vec4 position;
attribute vec4 texcoord0;

// outputs
varying vec3 frustumRay;

// user inputs
uniform vec2 lightbufferSize;
uniform vec3 frustumCorners[4];

// get the right frustum ray
vec3 GetFrustumRay( vec2 texCoord )
{
    int index = int( texCoord.x + ( texCoord.y * 2.0 ) );
    return frustumCorners[index];
}

void main()
{
    gl_TexCoord[0] = texcoord0;
    frustumRay = GetFrustumRay( texcoord0.xy );

    // add half pixel
    gl_TexCoord[0].xy += lightbufferSize;

    gl_Position = vec4( position.xy, 0.0, 1.0 );
}
```
  5. Thanks for your answer. I have two questions though. When using the basis vectors for calculating cosTheta, I should not use the raw basis vectors, right? I think I have to alter them, but I don't know how. The raw basis vectors seem right for a (0,0,1) normal, but what should I do when I have a (1,0,0) or (-1,0,0) normal? Also, I think I can ignore the last part about the cosine distribution, because I am shooting from one patch to another, right? Thanks, punika
  6. Hey everyone, excuse me if I am being a nuisance, but I have not found any thread that discusses this. I have a halfway-working radiosity solver, and I think I understand the Valve papers. My problem is not the rendering part but how to integrate normal mapping into my radiosity solver. I am doing all radiosity through ray tracing. My form factor looks like this: Fij = cosTheta_i * cosTheta_j / (pi * distanceSquared) * Hij * dAj. Now, when I am doing radiosity with normal mapping, what should I do?
     - Calculate cosTheta_i and cosTheta_j using the basis vectors of the collector? (i.e. not cosTheta_j = collector.Normal * emitterDir but cosTheta_j = collector.BasisNormal * emitterDir)
     - What about cosTheta_i: use the emitter normal or the emitter basis normals?
     - Do I have to transform the basis vectors into tangent space with a TBN matrix? My radiosity solver works in world space.
     - Or should emitterDir and collectorDir be in tangent space?
     - Another method: keep Fij and take the basis vectors into account separately, e.g. emitterDir in tangent space dotted with a basis vector in tangent space, multiplied by Fij, gives the incoming energy? So collector.incident = Fij * (emitterDir * collector.BasisNormal) * emitter.excident ???? (with emitterDir in tangent space?)
     As you can see, I have a big understanding problem :) Sorry for my bad English :) Thanks for the answers, punika [Edited by - Punika on October 16, 2010 6:32:36 AM]
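For reference, a minimal sketch (my own illustration, not an answer from the thread) of evaluating the post's form factor against one of the three HL2-style basis normals. The basis vectors are written in tangent space in one common ordering, and rotating them into world space via the patch's TBN frame is an assumption of this sketch, not something the thread confirms.

```c
#include <assert.h>
#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

typedef struct { double x, y, z; } vec3;

static double dot(vec3 a, vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

/* The three basis vectors in tangent space (one common ordering);
 * each is unit length and they are mutually orthogonal. */
static const vec3 HL2_BASIS[3] = {
    { -0.40824829, -0.70710678, 0.57735027 },
    { -0.40824829,  0.70710678, 0.57735027 },
    {  0.81649658,  0.0,        0.57735027 },
};

/* Rotate a tangent-space vector into world space with the patch's TBN frame. */
static vec3 tangent_to_world(vec3 v, vec3 t, vec3 b, vec3 n)
{
    vec3 w = {
        v.x * t.x + v.y * b.x + v.z * n.x,
        v.x * t.y + v.y * b.y + v.z * n.y,
        v.x * t.z + v.y * b.z + v.z * n.z
    };
    return w;
}

/* Point-to-point form factor as written in the post:
 *   Fij = cosTheta_i * cosTheta_j / (pi * r^2) * Hij * dAj
 * but with cosTheta_j evaluated against one world-space basis normal of
 * the collector instead of its geometric normal. dirItoJ points from
 * emitter to collector; backfacing pairs contribute nothing. */
static double form_factor_basis(vec3 emitterNormal, vec3 collectorBasisWorld,
                                vec3 dirItoJ, double distSq,
                                double visibility, double dAj)
{
    double cosThetaI = dot(emitterNormal, dirItoJ);
    vec3 dirJtoI = { -dirItoJ.x, -dirItoJ.y, -dirItoJ.z };
    double cosThetaJ = dot(collectorBasisWorld, dirJtoI);
    if (cosThetaI <= 0.0 || cosThetaJ <= 0.0) return 0.0;
    return cosThetaI * cosThetaJ / (M_PI * distSq) * visibility * dAj;
}
```

Evaluating this once per basis vector yields the three directional irradiance values that radiosity normal mapping blends at render time.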