
# EnlightenedOne

1. ## Calculating Corner "Normals"

Hi Everyone,

Thank you for the detailed replies. Sorry, I wanted to avoid explaining the full drawn-out scenario by abstracting it to its base elements, and in doing so I obfuscated things. I am trying to calculate coordinates off of a Delaunay dual graph, and I hit upon a scenario where I could vary how something visually appears if I projected outward from a set of points along a particular plane.

- I am not clear on what this normalize function would do: subtract one param from the other and then normalise the result of that?
- Actually this is slightly off from what I was trying to do, but it is something I am very interested in doing. If I create shapes from my Voronoi and decide to scale them out to render a perimeter or something, this will make life very easy, thanks!
- Thanks for the reference; I was failing to recall the word "uniform".
- This solved it for me. I had tunnel vision trying to solve for an angle when the vector from the corner to the center was exactly what I needed:

```cs
Vector2 midPoint = (pos1 + pos2 + pos3) / 3;
targetPoint = midPoint + (pos2 - midPoint) * 1.5f;
```

Many Thanks,
EO
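For anyone landing here from a search, the fix above can be sketched as a standalone routine. This is a hedged sketch in plain Java (the original is Unity C#); the float-array helpers stand in for Vector2 and are not from the original post:

```java
// Push each corner of a polygon away from its centroid by a uniform
// scale factor: target = c + (p - c) * k.
public class CornerScale {

    // Scale point (px, py) away from centroid (cx, cy) by factor k.
    static float[] scaleFromCentroid(float cx, float cy,
                                     float px, float py, float k) {
        return new float[] { cx + (px - cx) * k, cy + (py - cy) * k };
    }

    // The centroid of a triangle (or any point set, as an approximation
    // for a polygon) is the average of its corners.
    static float[] centroid(float[]... pts) {
        float sx = 0, sy = 0;
        for (float[] p : pts) { sx += p[0]; sy += p[1]; }
        return new float[] { sx / pts.length, sy / pts.length };
    }
}
```

A factor k > 1 pushes corners outward (a perimeter), k < 1 shrinks the shape inward.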
2. ## Calculating Corner "Normals"

Hi All,

Imagine the triangle with points ABC is being scaled about its center to the points A2B2C2. First off, this has been driving me crazy: does the direction/angle each corner will move in have a name? I want to call them corner normals, or is the correct term corner bisectors? Given I am operating in 2D, how do I get from knowing points ABC to position B2, given an arbitrary magnitude? Under the hood I want to do this for polygons by walking along the points that define the shape, but until I can do it with a triangle I am going to feel like an imbecile.

Here is what I am trying to do now to calculate B2 manually:

```cs
// Calculate the vectors
Vector2 vectorA = posA - posB;
Vector2 vectorB = posC - posB;

// Calculate the angle of each vector and combine them into a single angle
float thetaA = Mathf.Atan2(vectorA.x, vectorA.y);
float thetaB = Mathf.Atan2(vectorB.x, vectorB.y);
float thetaMid = (thetaA + thetaB) / 2;

// Move the point in the direction of the angle by an arbitrary factor
// so I can visually validate whether it is correct
Vector2 target = new Vector2(posB.x, posB.y);
Move(ref target, 1, thetaMid);
newPosB = target;
SetPositions(posA, posB, posC, newPosB);

public static void Move(ref Vector2 target, float magnitude, float heading)
{
    target.x += magnitude * Mathf.Cos(heading);
    target.y += magnitude * Mathf.Sin(heading);
}
```

The above code fails because the calculation for the angle is wrong (note Mathf.Atan2 expects (y, x), not (x, y), and averaging two raw angles also breaks when they straddle the ±π wraparound). If you can see why, or have a more efficient solution, please put me out of my misery!

Kind Regards,
EO
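A vector-only way to get the outward "corner bisector" with no atan2 wraparound headaches is to normalize the two edges leaving the corner, add them, and negate. This is a hedged sketch in plain Java, not the original Unity code:

```java
public class CornerBisector {

    // Outward bisector at corner B of triangle A-B-C:
    // normalize(B->A) + normalize(B->C) points into the triangle,
    // so its negation points outward along the angle bisector.
    // Returns a unit vector, or (0, 0) for a degenerate corner.
    static float[] outwardBisector(float ax, float ay,
                                   float bx, float by,
                                   float cx, float cy) {
        float[] e1 = normalize(ax - bx, ay - by);
        float[] e2 = normalize(cx - bx, cy - by);
        return normalize(-(e1[0] + e2[0]), -(e1[1] + e2[1]));
    }

    static float[] normalize(float x, float y) {
        float len = (float) Math.sqrt(x * x + y * y);
        if (len < 1e-9f) return new float[] { 0f, 0f };
        return new float[] { x / len, y / len };
    }
}
```

Note that the bisector direction only coincides with the corner-to-centroid direction for symmetric shapes; for a literal "scale about the center", corner minus centroid is the exact direction a uniform scale moves each corner.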
3. ## Ambiguous Licensing for Precomputed Atmospheric Scattering

Hi All,

A bit of a curiosity. I am fascinated by this subject and paper: http://hal.inria.fr/docs/00/28/87/58/PDF/article.pdf

I have a copy of the shaders for the technique and they all say this:

```
/**
 * Precomputed Atmospheric Scattering
 * Copyright (c) 2008 INRIA
 * All rights reserved.
 *
 * Redistribution and use in source and binary forms, with or without
 * modification, are permitted provided that the following conditions
 * are met:
 * 1. Redistributions of source code must retain the above copyright
 *    notice, this list of conditions and the following disclaimer.
 * 2. Redistributions in binary form must reproduce the above copyright
 *    notice, this list of conditions and the following disclaimer in the
 *    documentation and/or other materials provided with the distribution.
 * 3. Neither the name of the copyright holders nor the names of its
 *    contributors may be used to endorse or promote products derived from
 *    this software without specific prior written permission.
 *
 * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
 * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
 * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
 * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
 * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
 * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
 * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
 * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
 * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
 * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
 * THE POSSIBILITY OF SUCH DAMAGE.
 */

/**
 * Author: Eric Bruneton
 */
```

I really want to integrate this technique into my game, but there is no qualification about commercial use of this shader. From a legal standpoint, do I have any right to use it within a game without seeking permission from INRIA? I cannot find anything on licensing it. Am I doing my bit if I add this to my license and add it digitally to an about page for the game, along with including it in the distributed source?

Thanks, EO
4. ## Raising the Deferred Depth Buffer Reconstruction Bar

Ok, so I decided to take the advice of Ohforf sake and not try to second-guess the behaviour and tricks of the DX implementation using assumptions, due to the trouble it could bring.

I decided that I knew very well that view * projection gets to clip space, and /w gets to NDC, which is what I wanted, and I had all the wits to get there somehow:

```java
Vector4f lsPos = OpenGLHelper.columnVectorMultiplyMatrixVector(
        cameraController.getActiveCameraView(),
        new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
Vector4f lsPos2 = new Vector4f(lsPos);
lsPos.z -= lightRadius;
if (lsPos.z * -1 > 0.5f /*NEAR_DEPTH*/) {
    // Convert lsPos to NDC
    lsPos = OpenGLHelper.columnVectorMultiplyMatrixVector(
            cameraController.getCoreCameraProjection(),
            new Vector4f(lsPos.x, lsPos.y, lsPos.z, lsPos.w));
    lsPos.z = lsPos.z / lsPos.w;

    lsPos2.z = lsPos2.z + lightRadius;
    lsPos2 = OpenGLHelper.columnVectorMultiplyMatrixVector(
            cameraController.getCoreCameraProjection(),
            new Vector4f(lsPos2.x, lsPos2.y, lsPos2.z, lsPos2.w));
    lsPos2.z = lsPos2.z / lsPos2.w;

    Vector2f zBounds = new Vector2f();
    zBounds.y = lsPos.z;
    zBounds.x = lsPos2.z;
    if (zBounds.y > 1) { zBounds.y = 1; } else if (zBounds.y < 0) { zBounds.y = 0; }
    if (zBounds.x > 1) { zBounds.x = -1; } else if (zBounds.x < 0) { zBounds.x = 0; }
}
```

It is three vector-matrix calculations per light, but Humus's genius clearly trumps mine for now. Here is the final product with 0.1 added to the pixels within the calc.

Sure, there are clip-space lights, but I still find this enchanting.

If anyone can spot any optimisations to cut down on matrix calcs, and can explain Humus's black magic, please enlighten me.

Here is something I made less crappy earlier:

```java
public static Vector4f columnVectorMultiplyMatrixVector(Matrix4f matrix, Vector4f vector) {
    Vector4f returnVec = new Vector4f();
    returnVec.setX(matrix.m00 * vector.getX() + matrix.m10 * vector.getY()
                 + matrix.m20 * vector.getZ() + matrix.m30 * vector.getW());
    returnVec.setY(matrix.m01 * vector.getX() + matrix.m11 * vector.getY()
                 + matrix.m21 * vector.getZ() + matrix.m31 * vector.getW());
    returnVec.setZ(matrix.m02 * vector.getX() + matrix.m12 * vector.getY()
                 + matrix.m22 * vector.getZ() + matrix.m32 * vector.getW());
    returnVec.setW(matrix.m03 * vector.getX() + matrix.m13 * vector.getY()
                 + matrix.m23 * vector.getZ() + matrix.m33 * vector.getW());
    return returnVec;
}
```

Maybe you haven't read any of the new posts, but I am pleased to be roughly 95% there in terms of matching the original; thank you very much for your input!
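On the "cut down on matrix calcs" question: since only the projected z is consumed, one option is to skip the full 4x4 multiply and evaluate just the z and w rows of the projection. This is a sketch under the assumption of a standard symmetric GL perspective projection (only m22/m32 in the z row, m23 = -1 feeding w), laid out as in the thread's own createProjection:

```java
public class DepthOnlyProject {

    // NDC depth from a view-space z, without a full matrix-vector multiply.
    // Assumes a standard GL perspective projection: the z row of the matrix
    // is (0, 0, m22, m32) and the w row is (0, 0, -1, 0).
    static float ndcDepth(float m22, float m32, float viewZ) {
        float clipZ = m22 * viewZ + m32; // z row (x/y terms are zero)
        float clipW = -viewZ;            // w row: m23 = -1
        return clipZ / clipW;
    }
}
```

Adding or subtracting the light radius from viewZ before calling this gives both z-bounds for the cost of two multiply-adds and two divides.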
5. ## Raising the Deferred Depth Buffer Reconstruction Bar

I should be able to multiply the light's view position (± radius?) with the projection, then divide the z value by w, to get the z depth in NDC as I would in a shader. The lightRadius, by my account, should screw up the vector of projection, causing me to fail to reach NDC...

```java
// Compute z-bounds
Vector4f lvPos = OpenGLHelper.columnVectorMultiplyMatrixVector(
        cameraController.getActiveCameraView(),
        new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
float z1 = lvPos.z + lightRadius;
if (z1 * -1 > 0.5) {
    float z0 = Math.max(lvPos.z - lightRadius, 0.5);
    // Move from view to clip space
    Vector4f z0p = OpenGLHelper.columnVectorMultiplyMatrixVector(
            cameraController.getCoreCameraProjection(),
            new Vector4f(lvPos.x, lvPos.y, z1, 1.0f));
    Vector4f z1p = OpenGLHelper.columnVectorMultiplyMatrixVector(
            cameraController.getCoreCameraProjection(),
            new Vector4f(lvPos.x, lvPos.y, z0, 1.0f));
    // NDC
    z0p.z = z0p.z / z0p.w;
    z1p.z = z1p.z / z1p.w;
```

The above does not work, but I did not expect it to; I am clearly missing something important that allows the projection to be manipulated to extrapolate the depth in NDC.
6. ## Raising the Deferred Depth Buffer Reconstruction Bar

Before moving on, I decided the gains from clipping by the near/far boundaries of the light sphere were well worth it. I set my light shader to output white on anything it drew within the z-boundary so I could test the clipping.

Below is some info on my setup, but I can boil it down to one core problem: "I know that I need to get the depth in NDC space within the clipped boundaries from the projection, but I am not sure how to get there in OpenGL from the view-space position of my light (lVec)."

My projection matrix:

```java
public static void createProjection(Matrix4f projectionMatrix, float fov, float aspect, float znear, float zfar) {
    float scale = (float) Math.tan((Math.toRadians(fov)) * 0.5f) * znear;
    float r = aspect * scale;
    float l = -r;
    float t = scale;
    float b = -t;
    projectionMatrix.m00 = 2 * znear / (r - l);
    projectionMatrix.m01 = 0;
    projectionMatrix.m02 = 0;
    projectionMatrix.m03 = 0;
    projectionMatrix.m10 = 0;
    projectionMatrix.m11 = 2 * znear / (t - b);
    projectionMatrix.m12 = 0;
    projectionMatrix.m13 = 0;
    projectionMatrix.m20 = (r + l) / (r - l);
    projectionMatrix.m21 = (t + b) / (t - b);
    projectionMatrix.m22 = -(zfar + znear) / (zfar - znear);
    projectionMatrix.m23 = -1;
    projectionMatrix.m30 = 0;
    projectionMatrix.m31 = 0;
    projectionMatrix.m32 = -2 * zfar * znear / (zfar - znear);
    projectionMatrix.m33 = 0;
}
```

Output:

```
1.1179721  0.0       0.0         0.0
0.0        1.428148  0.0         0.0
0.0        0.0      -1.0000666  -1.0000334
0.0        0.0      -1.0         0.0
```

m32 = -1.0000334, m23 = -1

```java
Vector2f zw = new Vector2f(projection.m22, projection.m32);

Vector4f lvPos = OpenGLHelper.columnVectorMultiplyMatrixVector(
        cameraController.getActiveCameraView(),
        new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));

public static Vector4f columnVectorMultiplyMatrixVector(Matrix4f matrix, Vector4f vector) {
    Vector4f returnVec = new Vector4f();
    returnVec.x = Vector4f.dot(new Vector4f(matrix.m00, matrix.m10, matrix.m20, matrix.m30), vector);
    returnVec.y = Vector4f.dot(new Vector4f(matrix.m01, matrix.m11, matrix.m21, matrix.m31), vector);
    returnVec.z = Vector4f.dot(new Vector4f(matrix.m02, matrix.m12, matrix.m22, matrix.m32), vector);
    returnVec.w = Vector4f.dot(new Vector4f(matrix.m03, matrix.m13, matrix.m23, matrix.m33), vector);
    return returnVec;
}
```

- I know that is a suboptimal method; it will get optimised when I have this technique down. OPTIMISED REVISION BELOW.

If I put the camera at xyz(-30.19929, 5.049999, 24.870947), I notice that the light view position z is always negative, so I decided to flip the z with:

```java
Vector4f lvPos = OpenGLHelper.columnVectorMultiplyMatrixVector(
        cameraController.getActiveCameraView(), new Vector4f(0, 0, 0, -1.0f));

float z1 = lvPos.z + lightRadius;         // 53.809433 = 38.809433 + 15
if (z1 > 0.5f) {
    float z0 = Math.max(lvPos.z - lightRadius, 0.5f);   // 23.809433
```

Now, from what I understand, the equation here:

```
float2 zBounds;
zBounds.y = saturate(zw.x + zw.y / z0);
zBounds.x = saturate(zw.x + zw.y / z1);
```

is clipSpaceDepth = near clip + far clip / position in view space. Unfortunately Humus transforms his projection into a D3D projection, further obfuscating what the true values of zw represent.

```java
Vector2f zBounds = new Vector2f();
zBounds.y = (zw.x + zw.y / z0);
zBounds.x = (zw.x + zw.y / z1);
// Crude saturate, just to qualify that I cover this step
if (zBounds.y > 1) { zBounds.y = 1; } else if (zBounds.y < 0) { zBounds.y = 0; }
if (zBounds.x > 1) { zBounds.x = 1; } else if (zBounds.x < 0) { zBounds.x = 0; }
```

Worse still, if I make my zw values identical to the values Humus arrives at, zw(-0.000033318996, 0.50001669), and apply the above calculation from the same position with a fairly close view angle, I get some incorrect values:

```
0.020451155 (zBounds.y) = -0.000033318996 (zw.x) + 0.5000167 (zw.y) / 24.409546 (z0)
0.009156551 (zBounds.x) = -0.000033318996 (zw.x) + 0.5000167 (zw.y) / 54.409546 (z1)
```

Any assistance welcome.
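Chasing the zw mystery: for a GL-style projection like the one above, the window-space depth really is an affine function of 1/z, and the two constants fall straight out of m22 and m32. This is a hedged reconstruction for the OpenGL convention (view-space z negative in front of the camera), not Humus's exact D3D-style constants, which come out with different signs: ndcZ = (m22 * z + m32) / (-z), so depth01 = 0.5 * ndcZ + 0.5 = 0.5 * (1 - m22) + (-0.5 * m32) / z.

```java
public class ZBounds {

    // depth01(z) = A + B / z, with z the signed view-space z
    // (negative in front of the camera, GL convention).
    // Derived from ndcZ = (m22*z + m32) / (-z) for a projection whose
    // z row is (0, 0, m22, m32) and whose w row is (0, 0, -1, 0).
    static float A(float m22) { return 0.5f * (1f - m22); }

    static float B(float m32) { return -0.5f * m32; }

    static float depth01(float m22, float m32, float viewZ) {
        return A(m22) + B(m32) / viewZ;
    }
}
```

Evaluating depth01 at the light's view-space z ± radius (and clamping to [0,1]) then yields zBounds with no matrix multiplies at all. A D3D-style matrix, where clip w is +z and NDC z already spans [0,1], collapses to the same A + B/z shape but with different constants, which would explain the sign mismatch against Humus's values.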
7. ## Raising the Deferred Depth Buffer Reconstruction Bar

My girlfriend drew the original texture for me to test out a digital track-pad and pen quickly. My uni had paid for a license for CrazyBump and, believe it or not, the normal map and the height map are the result of plumbing that texture through it and clearly setting the details badly ;)

After some rather unpleasant experience trying to draw it out and having my expectations fall short of digital reality, I found the holy grail I was after: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_06

I tweaked the NBT generation slightly:

```java
protected void buildQuadNBTData() {
    // Build the data if required.
    switch (enumVertType) {
    case VERTEX_TYPE_POS_UV_NTB:
        // Lock the vertices in the VBO; first bind the VBO for updating
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
        // Put the new data in a ByteBuffer (in the view of a FloatBuffer)
        FloatBuffer vertexFloatBuffer = verticesByteBuffer.asFloatBuffer();
        Vector3f normal = null;
        Vector3f binormal = null;
        Vector3f tangent = null;
        // Compute triangle bitangents and tangents
        for (int i = 0; i < vertices.length; i += 4) {
            Vert_PosUVNBT vertex0 = (Vert_PosUVNBT) vertices[i + 0];
            Vert_PosUVNBT vertex1 = (Vert_PosUVNBT) vertices[i + 1];
            Vert_PosUVNBT vertex2 = (Vert_PosUVNBT) vertices[i + 2];
            Vert_PosUVNBT vertex3 = (Vert_PosUVNBT) vertices[i + 3];
            // Shortcuts for vertices
            Vector3f v0 = vertex0.getXYZV3();
            Vector3f v1 = vertex1.getXYZV3();
            Vector3f v2 = vertex2.getXYZV3();
            // Shortcuts for UVs
            Vector2f uv0 = vertex0.getUVV2();
            Vector2f uv1 = vertex1.getUVV2();
            Vector2f uv2 = vertex2.getUVV2();
            // Edges of the triangle: position deltas
            Vector3f deltaPos1 = Vector3f.sub(v1, v0, null);
            Vector3f deltaPos2 = Vector3f.sub(v2, v0, null);
            // UV deltas
            float s1 = uv1.x - uv0.x;
            float t1 = uv1.y - uv0.y;
            float s2 = uv2.x - uv0.x;
            float t2 = uv2.y - uv0.y;
            float tmp;
            if (Math.abs(s1 * t2 - s2 * t1) <= 0.0001f) {
                tmp = 1.0f; // Prevent divide by zero
            } else {
                tmp = 1.0f / (s1 * t2 - s2 * t1);
            }
            tangent = new Vector3f((t1 * deltaPos2.x - t2 * deltaPos1.x),
                                   (t1 * deltaPos2.y - t2 * deltaPos1.y),
                                   (t1 * deltaPos2.z - t2 * deltaPos1.z));
            binormal = new Vector3f((s1 * deltaPos2.x - s2 * deltaPos1.x),
                                    (s1 * deltaPos2.y - s2 * deltaPos1.y),
                                    (s1 * deltaPos2.z - s2 * deltaPos1.z));
            normal = OpenGLHelper.calculateNormal(v0, v1, v2);
            tangent.set(tangent.x * tmp, tangent.y * tmp, tangent.z * tmp);
            binormal.set(binormal.x * tmp, binormal.y * tmp, binormal.z * tmp);
            if (Vector3f.dot(Vector3f.cross(normal, tangent, null), binormal) < 0) {
                tangent = (Vector3f) tangent.negate();
            }
            binormal = (Vector3f) binormal.negate();
            vertex0.setNormalXYZ(normal.x, normal.y, normal.z);
            vertex0.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
            vertex0.setTangentXYZ(tangent.x, tangent.y, tangent.z);
            vertex1.setNormalXYZ(normal.x, normal.y, normal.z);
            vertex1.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
            vertex1.setTangentXYZ(tangent.x, tangent.y, tangent.z);
            vertex2.setNormalXYZ(normal.x, normal.y, normal.z);
            vertex2.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
            vertex2.setTangentXYZ(tangent.x, tangent.y, tangent.z);
            vertex3.setNormalXYZ(normal.x, normal.y, normal.z);
            vertex3.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
            vertex3.setTangentXYZ(tangent.x, tangent.y, tangent.z);
            vertexFloatBuffer.put(vertex0.getElements());
            vertexFloatBuffer.put(vertex1.getElements());
            vertexFloatBuffer.put(vertex2.getElements());
            vertexFloatBuffer.put(vertex3.getElements());
        }
        vertexFloatBuffer.flip();
        // Rebuffer the data to the graphics card ready for drawing
        GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, verticesByteBuffer);
        // Unlock the buffer
        GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
        break;
    default:
        break;
    }
}
```

For anyone reading after: having generated the NBT, you can get the vertex out-facing normal with just the triangle above, and the NBT per vertex, if you have defined the object's UV mapping.
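For anyone following along, the per-triangle solve condenses to a standalone routine. This is a plain Java sketch with bare float arrays in place of the engine's vertex classes; note that sign conventions for T and B vary between tools (the code above uses a negated variant with a handedness fix-up), so this uses the common textbook form:

```java
public class TangentSpace {

    // Tangent and bitangent for one triangle from positions p0..p2 and
    // UVs uv0..uv2, solving  dP1 = s1*T + t1*B  and  dP2 = s2*T + t2*B.
    // Returns { T, B }, unnormalized; callers typically orthonormalize.
    static float[][] tangentBitangent(float[] p0, float[] p1, float[] p2,
                                     float[] uv0, float[] uv1, float[] uv2) {
        float[] d1 = sub(p1, p0), d2 = sub(p2, p0);
        float s1 = uv1[0] - uv0[0], t1 = uv1[1] - uv0[1];
        float s2 = uv2[0] - uv0[0], t2 = uv2[1] - uv0[1];
        float det = s1 * t2 - s2 * t1;
        float r = Math.abs(det) < 1e-6f ? 1f : 1f / det; // guard divide-by-zero
        float[] tangent = {
            r * (t2 * d1[0] - t1 * d2[0]),
            r * (t2 * d1[1] - t1 * d2[1]),
            r * (t2 * d1[2] - t1 * d2[2]) };
        float[] bitangent = {
            r * (s1 * d2[0] - s2 * d1[0]),
            r * (s1 * d2[1] - s2 * d1[1]),
            r * (s1 * d2[2] - s2 * d1[2]) };
        return new float[][] { tangent, bitangent };
    }

    static float[] sub(float[] a, float[] b) {
        return new float[] { a[0] - b[0], a[1] - b[1], a[2] - b[2] };
    }
}
```

With an identity UV mapping (uv = xy of the triangle) the tangent comes out along +x and the bitangent along +y, which is a handy sanity check.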
I scanned everything and concluded that the box looked valid, but my lighting seemed wrong: the specular was blotched and all the lighting was biased toward one vector. That is when I realised it. In my frag shader above: `vec4 normal = texture(normalTexture, UV); //2.0f * texture(normalTexture, UV) - 1.0f;`

Why do something so stupid? Because to draw my floor I have a very basic position/colour vertex type using a basic shader to fill the GBuffer. I set the normal on it to vec4(0, 1, 0, 0) and used the simplicity of this as the basis for proving my light was ok (I was clinging to the light appearing to be right). The valid version of my simpler "normal is up" test was: `normalOut = vec4(0.5f, 1.0f, 0.5f, 0.5f);`

If you had not made me put a microscope over my normal textures, I would never have realised it. Thank you very much! Here is what I have now: the light is close to the ground and the floor looks a bit dark in this picture, but it looks much brighter pre-JPEG.

With my positions and basic bearings back under control, I can confidently progress and optimise. I have many things to review; for starters I will shift my normal buffer over to view space to simplify the majority of light calculations. The one thing I still do not have a handle on is the optimisation from the sample for clipping lights by the near/far z depth. I will keep you posted when I have spent more time analysing how that should play out. I can't wait to plumb in some beautiful post-processing.
8. ## Raising the Deferred Depth Buffer Reconstruction Bar

Qualifying per-pixel normals, and ignoring the normal map temporarily: it appears that the texture I sent was some form of thumbnail rather than the full-size image; it is 128x128 in the renders above, and the highest I have it at is 512x512.

- Can you qualify this?

Let's strip this back to the basics of a normal map. The NBT represents the Normal/Bitangent/Tangent axes for arbitrary rotation in model space about a vertex; the normal map is the mechanism by which this NBT is rotated per pixel along XYZ to produce a new normal.

The normal map stores one axis per RGB channel, where RGB = XYZ within the texture: channel values 0 to 255 map to normal components -1 to 1, so when the texture is read in, the values are scaled (rgb * 2 - 1) to transform them to between -1 and 1. In order to keep the local normal pointing away from the surface relative to the tangent's direction, the blue channel is always kept above 128.

So flipping the green and blue channels sounds like a dangerous move, as it will lower the brightness when the light is directly facing the surface. Arguably the pitch of the tangent might not be massively influential on the surface, dulling its roughness, but for this rough wall why worry? (Remember, the green is the inseam of mortar between bricks, so making the light increase when the light is above the face weakens realism by not faking occlusion :p)

Based on my grasp of normals, we can forgo worrying about the image itself by just using a normal map which is all (128r, 128g, 255b); this should at least allow the NBT to be analysed without distortions. Even better, we can cut out the middle man and multiply the tbnMatrix by vec3(0, 0, 1).
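The channel mapping described above, as a minimal round-trip sketch (8-bit channels assumed):

```java
public class NormalCodec {

    // Decode an 8-bit texel channel [0, 255] to a component in [-1, 1]:
    // the same c * 2 - 1 mapping the shaders apply to [0, 1] samples.
    static float decode(int texel) {
        return (texel / 255f) * 2f - 1f;
    }

    // Encode back: [-1, 1] -> [0, 255]. A "flat" tangent-space normal
    // (0, 0, 1) encodes to roughly (128, 128, 255), the familiar blue tint.
    static int encode(float component) {
        return Math.round((component * 0.5f + 0.5f) * 255f);
    }
}
```

Note that 128 decodes to roughly 0.004 rather than exactly 0, which is why 8-bit normal maps can never store a perfectly flat normal.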
GeomFrag update:

```glsl
vec3 normalMap = 2.0f * texture(normalTexture, v2UVHeightDisplacement).xyz - 1.0f;
normalMap = vec3(0, 0, 1);
normalMap = tbnMatrix * normalMap;
normalMap = normalize(normalMap);
normalOut.xyz = 0.5f * (normalMap + 1.0f);
```

The normal fight continues. I am using these as my foundation for analysis:
http://www.gamedev.net/topic/347799-mirrored-uvs-and-tangent-space-solved/
http://www.catalinzima.com/xna/tutorials/deferred-rendering-in-xna/custom-content-processor-and-normal-mapping/

I decided that, because generating an arbitrary axis is hard, I would try to get Blender to output a plain-text xyz uv n(xyz) b(xyz) t(xyz) model, but it appears this is the holy grail of things to attain. It proved so difficult just to get a cube formatted in such a way that I could either read it in Notepad++ or get the indices to match up that I decided to start somewhere else.

So I adapted a normal solver I had been using with certainty on a two-axis basis (but had seen elsewhere as a certified triangle normal generator) and decided to see if I could not generate my NBT from the positions and UVs, cutting down on the number of candidate problems and giving me a very useful reusable bit of code if ever I decide to generate 3D terrain from a heightmap, etc.

I am still in the process of proving that the outputs are valid before running over my math to validate the order of the UVs. I will keep you posted when I have more.
9. ## Raising the Deferred Depth Buffer Reconstruction Bar

- Absolutely, I am not expecting zero bleed-through at critical angles; I think I may have just taken some unclear screenshots in my last post.

I have been trying to map in world rather than face UV orientation, which could potentially explain why the normal might not be as I expect; the only trouble is the sheer volume of permutations I have tested, which makes me suspect a deeper fault. Let me have another go at those images with simpler angles.

xPositiveLightNear (light behind camera):
xPositiveLightFar (light behind box):

The above look good; slight bleed-through at critical angles in reflection, as qualified by your description.

xNegativeLightNear (light behind camera on negative side) (please ignore the lines in the distance; that is an irrelevant attempt at loading a model from a file):
xPositiveLightFar (light behind surface, perfectly lit):

If you take a look, you can see the normal looks as if it is on the other side of the face. If the NBT is so wrong that I am causing this by design, as you suggest, I will continue to investigate by manual modification, trying to use the UV orientation of the face and your description as a guide, and validating against the working faces before proceeding.

I know it might sound silly, but being able to write a normal-mapped cube with indices is definitely something I feel I must force myself to become competent at.

Here is the normal map:

Finally, and this is just for my sake to show the normal is sound, here is a DX9 forward renderer I wrote a couple of years back using the same texture. The FPS is low as I have a couple of copies of my OpenGL cube app's GBuffer sitting on the graphics card, and this was my stress-test world with some shadow maps and demonstrative, overstated parallax relief mapping, etc.

I will keep picking away at the assembly of the normals as you have had a glance at the shaders; I might even try flipping it to see if the normals come through on the negative axis.

Thank you for all of your input so far.
10. ## Raising the Deferred Depth Buffer Reconstruction Bar

I am not out of the woods yet. Here is what I have so far: +   The remaining hurdle involves the negative normals of my cube being lit when the light is hitting the back of the surface (as if the normal is the inverse of the way I would like it to be) I have tried many permutations of NBT on the cube face I have left empty but even with no NBT data the flat lighting still behaves as though the face was facing the other way, I am lead to believe it has to a bug in my lighting.     The faces toward the camera are becoming unlit as the light approaches what should be their normal. The behaviour is as if both faces on each axis normals were facing toward the positive direction. I am not sure if my home brew Cube is at fault or my shader code:   Here are both for those with interest: VertGeom: #version 330 in vec3 inPos; in vec2 inUV; in vec3 inNormal; in vec3 inBinormal; in vec3 inTangent; uniform mat4 mWorld; //World matrix of object with all transformations (excluding view and projection) uniform vec3 v3EyePos; //The location of the camera untransformed uniform mat4 wvpMatrix; uniform mat4 mWVMatrix; uniform mat4 mWUnscaled; //World matrix of object with rotation and movement but no scale uniform int materialId; uniform float texDepthScale; //Depth of the parallax map (height map effect in lamens terms) uniform float texBias; //Bias of the parallax map out vec4 mWVPPosition; out vec2 texCoord; out mat3 tbnMatrix; void main(void) { mWVPPosition = wvpMatrix * vec4(inPos, 1.0f); gl_Position = mWVPPosition; texCoord = inUV; vec4 v4WorldPos = mWorld * vec4(inPos, 1.0f); mat3 mTBNMatrix = mat3(mWUnscaled) * mat3(inTangent, inBinormal, inNormal); v3ViewDir = mTBNMatrix * (v3EyePos - v4WorldPos.xyz); tbnMatrix = mTBNMatrix; } FragGeom: #version 330 in vec4 mWVPPosition; in vec2 texCoord; in vec3 v3ViewDir; in mat3 tbnMatrix; in vec3 vNormal; in vec3 vBinormal; in vec3 vTangent; uniform mat4 mWorld; //World matrix of object with all transformations (excluding view and 
projection) uniform vec3 v3EyePos; //The location of the camera untransformed uniform mat4 wvpMatrix; uniform mat4 mWVMatrix; uniform mat4 mWUnscaled; //World matrix of object with rotation and movement but no scale uniform int materialId; uniform float texDepthScale; //Depth of the parallax map (height map effect in lamens terms) uniform float texBias; //Bias of the parallax map uniform sampler2D diffuseTexture; uniform sampler2D normalTexture; uniform sampler2D heightTexture; uniform sampler2D specularTexture; layout (location = 0) out vec4 colourOut; layout (location = 1) out vec4 normalOut; void main(void) { //Determine the height of this pixel float fltHeight = texDepthScale * texture(heightTexture, texCoord).r - texBias; //Compute the new texture coordinate to use based on the parallax vec2 v2UVHeightDisplacement = (fltHeight * v3ViewDir).xy; //Create true offset v2UVHeightDisplacement += texCoord; colourOut = texture(diffuseTexture, v2UVHeightDisplacement); //Specular exponent vec3 specExp = texture(specularTexture, v2UVHeightDisplacement).rgb; colourOut.w = (specExp.x + specExp.y + specExp.z) / 3; vec3 normalMap = texture( normalTexture, v2UVHeightDisplacement ).xyz * 2.0f - 1.0f; normalMap = tbnMatrix * normalMap; normalMap = normalize(normalMap.xyz); normalOut.xyz = 0.5f * (normalMap + 1.0f); normalOut.w = 1; } VertPointLight: #version 330 in vec3 inPos; in vec2 inUV; uniform mat4 wvpMatrix; uniform mat4 ivpMatrix; uniform mat4 mWVMatrix; uniform vec2 zBounds; uniform vec3 camPos; uniform float invRadius; uniform vec3 lightPos; uniform vec3 lightColour; uniform float lightRadius; uniform float lightFalloff; uniform float projNFLinearScalarA; uniform float projNFLinearScalarB; out vec4 mWVPPosition; out vec3 viewRay; void main(void) { vec3 position = inPos; position *= lightRadius; position += lightPos; mWVPPosition = wvpMatrix * vec4(position, 1.0f); gl_Position = mWVPPosition; } GeomPointLight: #version 330 in vec4 mWVPPosition; in vec3 viewRay; uniform 
mat4 wvpMatrix; uniform mat4 ivpMatrix; uniform mat4 mWVMatrix; uniform vec2 zBounds; uniform vec3 camPos; uniform float invRadius; uniform vec3 lightPos; uniform vec3 lightColour; uniform float lightRadius; uniform float lightFalloff; uniform float projNFLinearScalarA; uniform float projNFLinearScalarB; uniform sampler2D diffuseTexture; uniform sampler2D normalTexture; uniform sampler2D depthTexture; layout (location = 0) out vec4 colourOut; void main(void) { vec3 euclClipSpacePosition = mWVPPosition.xyz / mWVPPosition.w; vec2 UV = euclClipSpacePosition.xy * 0.5 + vec2(0.5); float depth = texture(depthTexture, UV).x * 2 -1; vec3 addedLight = vec3(0,0,0); //if (depth >= zBounds.x && depth <= zBounds.y) { vec4 diffuseTex = texture(diffuseTexture, UV); vec4 normal = texture(normalTexture, UV);//2.0f * texture(normalTexture, UV) - 1.0f; // Clip-space position vec4 cPos = vec4(euclClipSpacePosition.xy, depth, 1); // World-space position vec4 wPos = ivpMatrix * cPos; vec3 pos = wPos.xyz / wPos.w; // Lighting vectors vec3 lVec = (lightPos - pos) * invRadius; vec3 lightVec = normalize(lVec); vec3 viewVec = normalize(camPos - pos); // Attenuation that falls off to zero at light radius float atten = clamp(1.0f - dot(lVec, lVec), 0.0f, 1.0f); atten *= atten; // Lighting float diffuse = clamp(dot(normal.xyz, lightVec), 0.0f, 1.0f); float specular_intensity = diffuseTex.w * 0.4f; float specular = specular_intensity * pow(clamp(dot(reflect(-viewVec, normal.xyz), lightVec),0.0f, 1.0f), 10.0f); addedLight = (diffuseTex.xyz + specular) * vec3(diffuse,diffuse,diffuse) * atten;// * (diffuse * diffuseTex + specular); } colourOut = vec4(addedLight.xyz, 1); } The cubes vertex data being provided to the shader is an iron clad guarantee. The uncertainty is how badly I have setup the NBT data, I have tried many permutations to flip the normal with no success: To keep this simple I have just provided indicies and a vertex definition. 
indices = new byte[] { 0, 1, 2,//Front (this is the negative z face with no normal that I am struggling with) 2, 3, 0, 6, 5, 4,//Back 4, 7, 6, 8, 9, 10,//Top 10, 11, 8, 14, 13, 12,//Bottom 12, 15, 14, 16, 17, 18,//Right 18, 19, 16, 22, 21, 20,//Left 20, 23, 22 }; Verticies for the curious 0 Vert VertPos: 0.0x, 0.0y, 0.0z VertUV: 0.0u, 1.0v VertNormal: 0.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, 0.0z 1 Vert VertPos: 0.0x, 20.0y, 0.0z VertUV: 0.0u, 0.0v VertNormal: 0.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, 0.0z 2 Vert VertPos: 20.0x, 20.0y, 0.0z VertUV: -1.0u, 0.0v VertNormal: 0.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, 0.0z 3 Vert VertPos: 20.0x, 0.0y, 0.0z VertUV: -1.0u, 1.0v VertNormal: 0.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, 0.0z 4 Vert VertPos: 0.0x, 0.0y, 20.0z VertUV: 0.0u, 1.0v VertNormal: 0.0x, 0.0y, 1.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 1.0x, 0.0y, 0.0z 5 Vert VertPos: 0.0x, 20.0y, 20.0z VertUV: 0.0u, 0.0v VertNormal: 0.0x, 0.0y, 1.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 1.0x, 0.0y, 0.0z 6 Vert VertPos: 20.0x, 20.0y, 20.0z VertUV: -1.0u, 0.0v VertNormal: 0.0x, 0.0y, 1.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 1.0x, 0.0y, 0.0z 7 Vert VertPos: 20.0x, 0.0y, 20.0z VertUV: -1.0u, 1.0v VertNormal: 0.0x, 0.0y, 1.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 1.0x, 0.0y, 0.0z 8 Vert VertPos: 0.0x, 20.0y, 0.0z VertUV: 0.0u, 1.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 9 Vert VertPos: 0.0x, 20.0y, 20.0z VertUV: 0.0u, 0.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 10 Vert VertPos: 20.0x, 20.0y, 20.0z VertUV: -1.0u, 0.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 11 Vert VertPos: 20.0x, 20.0y, 0.0z VertUV: -1.0u, 1.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z 
VertTangent: 0.0x, 0.0y, -1.0z 12 Vert VertPos: 0.0x, 0.0y, 0.0z VertUV: 0.0u, 1.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 13 Vert VertPos: 0.0x, 0.0y, 20.0z VertUV: 0.0u, 0.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 14 Vert VertPos: 20.0x, 0.0y, 20.0z VertUV: -1.0u, 0.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 15 Vert VertPos: 20.0x, 0.0y, 0.0z VertUV: -1.0u, 1.0v VertNormal: 0.0x, 1.0y, 0.0z VertBinormal: -1.0x, 0.0y, 0.0z VertTangent: 0.0x, 0.0y, -1.0z 16 Vert VertPos: 0.0x, 20.0y, 0.0z VertUV: 0.0u, 0.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 0.0x, 0.0y, 1.0z 17 Vert VertPos: 0.0x, 0.0y, 0.0z VertUV: 0.0u, 1.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 0.0x, 0.0y, 1.0z 18 Vert VertPos: 0.0x, 0.0y, 20.0z VertUV: -1.0u, 1.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 0.0x, 0.0y, 1.0z 19 Vert VertPos: 0.0x, 20.0y, 20.0z VertUV: -1.0u, 0.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 1.0y, 0.0z VertTangent: 0.0x, 0.0y, 1.0z 20 Vert VertPos: 20.0x, 20.0y, 0.0z VertUV: 0.0u, 0.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 1.0z VertTangent: 0.0x, 1.0y, 0.0z 21 Vert VertPos: 20.0x, 0.0y, 0.0z VertUV: 0.0u, 1.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 1.0z VertTangent: 0.0x, 1.0y, 0.0z 22 Vert VertPos: 20.0x, 0.0y, 20.0z VertUV: -1.0u, 1.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 1.0z VertTangent: 0.0x, 1.0y, 0.0z 23 Vert VertPos: 20.0x, 20.0y, 20.0z VertUV: -1.0u, 0.0v VertNormal: 1.0x, 0.0y, 0.0z VertBinormal: 0.0x, 0.0y, 1.0z VertTangent: 0.0x, 1.0y, 0.0z This is effectively world space based depth reconstruction/calculations to provide normal mapping and parallax relief mapping via a deferred approach. Everything appears working bar these normals. 
I am not yet importing any model data from external formats.
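Since the front-face vertices in the dump above carry all-zero normals, binormals and tangents, anything that dots a light vector against them will come out black. This is not the engine's code, just a minimal plain-Java sketch of deriving the missing face normal for triangle 0-1-2 from its positions via a cross product (the class and method names are hypothetical):

```java
// Minimal sketch (hypothetical names, not the engine code above):
// derive the missing front-face normal from triangle 0-1-2's positions.
public class FaceNormal {

    // Normalized cross product of (b - a) and (c - a); the triangle's
    // winding order decides which side the normal points to.
    static float[] normal(float[] a, float[] b, float[] c) {
        float[] u = { b[0] - a[0], b[1] - a[1], b[2] - a[2] };
        float[] v = { c[0] - a[0], c[1] - a[1], c[2] - a[2] };
        float[] n = {
            u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]
        };
        float len = (float) Math.sqrt(n[0] * n[0] + n[1] * n[1] + n[2] * n[2]);
        return new float[] { n[0] / len, n[1] / len, n[2] / len };
    }

    public static void main(String[] args) {
        // Vertices 0, 1, 2 of the front face from the dump above.
        float[] n = normal(new float[] { 0, 0, 0 },
                           new float[] { 0, 20, 0 },
                           new float[] { 20, 20, 0 });
        System.out.printf("%.1f, %.1f, %.1f%n", n[0], n[1], n[2]); // 0.0, 0.0, -1.0
    }
}
```

As expected for a negative-z face, the result is (0, 0, -1); the tangent and binormal would follow from the UV layout in the same way as the other faces.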
11. ## Raising the Deferred Depth Buffer Reconstruction Bar

I uncovered a couple of mistakes thanks to your explanation and discovered a key bug: an optimisation to my ambient lighting phase had plumbed the point light phase's normal texture in where I had expected my depth texture to be. When the depth reconstruction was corrected I could see the texture in a corner and realised immediately why so many things had gone so wrong and made me doubt some core understanding which, whilst shaky, was valid.   - Yes, this was intentional experimentation, although it echoed the core mistake I had made.   I also just realised one big caveat: I have been plumbing a normal map in as a bump map, which is another reason the behaviour was so peculiar.
12. ## Raising the Deferred Depth Buffer Reconstruction Bar

Analysis has not gone well; my light is still dependent on the viewer position. I have noticed some old unpleasantries from my original delve into reconstruction. Here are some pictures I am curious to understand (shader code is at the end; all the light values are just attenuation into RGB):

Above is the light at 0,0,0 viewed from 0,0.5,0.

Above is the light from close up (-1,0.5,-1 ish).

Above is the box from roughly 0,0.5,-1. Note that the intensity looks relatively accurate; I am not sure why the clipping for the light is not much more closely in line with the clipping for the camera (I want to write it off as a precision issue, but it is much too early to do so).

Here is a snapshot of the box from various distances. The corner marked A is always the same.

Top Left - the box at a distance where no lighting is visible, far outside the light sphere.
Top Right - the box from near the edge of the light sphere; note that the far corners and corner A become lit at the same time.
Bottom Left - the box closer (the intensity is much lower than at 0,0,0!); the unpleasant flaw I want to focus on is point x, the centre of the light.
Bottom Right - the box from the same position as the Bottom Left image with the camera turned right; note that the light's projected position has moved relative to the camera tilt, inverted about the screen's centre. This behaviour is mirrored in both pitch and yaw via the x and y axes.

Shader code for reference:

```glsl
vec3 euclClipSpacePosition = mWVPPosition.xyz / mWVPPosition.w;
vec2 UV = euclClipSpacePosition.xy * 0.5 + vec2(0.5);

float depth = texture(diffuseTexture, UV).x * 2 - 1;

vec3 addedLight = vec3(0, 0, 0);
vec4 diffuseTex = texture(diffuseTexture, UV);
vec4 normal = texture(normalTexture, UV);

// Clip-space position
vec4 cPos = vec4(euclClipSpacePosition.xy, depth, 1);

// World-space position
vec4 wPos = ivpMatrix * cPos;
vec3 pos = wPos.xyz / wPos.w;

// Lighting vectors, lightPos = 0,0,0, invRadius = 1 for the moment.
vec3 lVec = (lightPos - pos.xyz) * invRadius;
vec3 lightVec = normalize(lVec);
vec3 viewVec = normalize(camPos - pos.xyz);

// Attenuation that falls off to zero at light radius
float atten = clamp(1.0f - dot(lVec, lVec), 0.0, 1.0);
atten *= atten;
```

Any hints welcome, still chugging along, but as you can see above I am not having a good time of it!
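For what it is worth, the attenuation term itself behaves as intended when evaluated on the CPU. This plain-Java sketch (not engine code, names are mine) mirrors the shader maths and shows the falloff is view-independent, so the viewer dependence has to come from the reconstructed position:

```java
// Plain-Java sketch (not engine code) of the shader's attenuation term.
public class Attenuation {

    // Mirrors: lVec = (lightPos - pos) * invRadius;
    //          atten = clamp(1 - dot(lVec, lVec), 0, 1); atten *= atten;
    static float atten(float distance, float radius) {
        float s = distance / radius;                      // |lVec| after invRadius scaling
        float a = Math.max(0f, Math.min(1f, 1f - s * s)); // clamp(1 - dot(lVec, lVec), 0, 1)
        return a * a;                                     // squared for a softer knee
    }

    public static void main(String[] args) {
        System.out.println(atten(0f, 5f));  // 1.0 at the light centre
        System.out.println(atten(5f, 5f));  // 0.0 exactly at the radius
        System.out.println(atten(10f, 5f)); // 0.0 beyond the radius
    }
}
```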
13. ## Raising the Deferred Depth Buffer Reconstruction Bar

- Yes, sorry about that! Corrected it now.   - It is clear my understanding is shaky. I can reason my way out of most scenarios and I know to analyse things in colour as you suggested :) I think the number of potential points of failure for each visual error made me conscious that I might miss some subtle nuance and have it cost me more time than I can imagine, but I appreciate the frank fact that I need a firm handle on these processes before I move on from this topic. I want to get precomputed atmospheric scattering working, and this reconstruction is the sticking point for me; all my attempts to get it going in practice have failed. I wanted a foundation I could refer to as an iron-clad solution, so I can break it and understand how it comes together from there. It is not my usual approach, but it is (in my opinion, with my background) faster than a ground-up approach with something this complex.   A good example of my rationale for approaching this top-down is that there are so many unknowns. Consider this from the original sample:

```cpp
// Pre-scale-bias the matrix so we can use the screen position directly
float4x4 viewProjInv = (!viewProj) * (translate(-1.0f, 1.0f, 0.0f) * scale(2.0f, -2.0f, 1.0f));
renderer->setShaderConstant4x4f("ViewProjInv", viewProjInv * scale(1.0f / width, 1.0f / height, 1.0f));
```

Why does he have to scale his inverse view projection into what I believe would be defined as NDC before applying it?
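My working theory (an assumption about that sample, not something the snippet alone confirms): its pixel shader feeds raw screen coordinates into ViewProjInv, so the pixel-to-NDC remap is baked into the matrix once on the CPU rather than done per pixel. Expanded as plain maths, the scale(1/w, 1/h, 1), scale(2, -2, 1), translate(-1, 1, 0) chain would amount to this (hypothetical names):

```java
// Hedged sketch of what the sample's pre-scale-bias chain does per pixel:
// scale(1/w, 1/h, 1), then scale(2, -2, 1), then translate(-1, 1, 0).
public class PixelToNdc {

    static float[] ndc(float px, float py, float width, float height) {
        float x = px / width  *  2f - 1f; // 0..w -> -1..1
        float y = py / height * -2f + 1f; // 0..h ->  1..-1 (screen y points down)
        return new float[] { x, y };
    }

    public static void main(String[] args) {
        float[] centre = ndc(400f, 300f, 800f, 600f);
        System.out.println(centre[0] + ", " + centre[1]);   // 0.0, 0.0
        float[] topLeft = ndc(0f, 0f, 800f, 600f);
        System.out.println(topLeft[0] + ", " + topLeft[1]); // -1.0, 1.0
    }
}
```

Folding that affine remap into the matrix saves a per-pixel multiply-add, which would explain why it is applied before the inverse view projection is uploaded.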
14. ## Raising the Deferred Depth Buffer Reconstruction Bar

I have identified that the point of failure still centres around the camera position in the reconstruction.   I did some diagnostics by looking at the viewVec around the camera in space in the DX sample and the GL sample. The flipped UV seems fine to me; the issue I can see is that when my camera moves the viewVec does not move with it:   DX:   GL (before camera movement):   GL (after moving):   I will keep experimenting to try and correct this; keep me posted if you spot the bug.
15. ## Raising the Deferred Depth Buffer Reconstruction Bar

That was very concise and informative, thank you so much!   I took a look but did not find anything in a very quick search; I will search again based on the profile. I have seen roughly four forum chains on this forum specifically relating to reconstruction.   Point 1: it is [row][column]; I am sitting atop the LWJGL maths framework. I experimented with transposing the projection matrix post construction but noted no visual change in behaviour.   Point 2: excellent correction, a mistake in my understanding.   Point 3: you were absolutely right, and you also cleaned up my concern about depth normalisation. I knew OpenGL was -1 to 1, I just didn't know when to modify elements to best utilise the depth buffer.   Point 4: removing the scalar has got me something more consistent.   I am still trying to identify what I am looking at now:   The point light is creating a cone or event horizon aligned to the angle between the camera and the centre of the point light (when viewed from approximately x -1, y 0.5, z -1):   Note that the rotation of the camera appears to have offset the light in a way I had not anticipated.   
I will keep experimenting; here are the corrections based on your analysis:   PointLightFragShader:

```glsl
#version 330

in vec4 mWVPPosition;

uniform mat4 wvpMatrix;
uniform mat4 ivpMatrix;
uniform vec2 zBounds;
uniform vec3 camPos;
uniform float invRadius;
uniform vec3 lightPos;
uniform vec3 lightColour;
uniform float lightRadius;
uniform float lightFalloff;
uniform sampler2D diffuseTexture;
uniform sampler2D normalTexture;
uniform sampler2D depthTexture;

layout (location = 0) out vec4 colourOut;

void main(void)
{
    vec3 euclClipSpacePosition = mWVPPosition.xyz / mWVPPosition.w;
    vec2 UV = euclClipSpacePosition.xy * 0.5 + vec2(0.5);

    float depth = texture(diffuseTexture, UV).x;

    vec3 addedLight = vec3(0, 0, 0);
    //if (depth >= zBounds.x && depth <= zBounds.y)
    //{
        vec4 diffuseTex = texture(diffuseTexture, UV);
        vec4 normal = texture(normalTexture, UV);

        // Clip-space position
        vec4 cPos = vec4(euclClipSpacePosition.xy, depth * 2 - 1, 1);

        // World-space position
        vec4 wPos = ivpMatrix * cPos;
        vec3 pos = wPos.xyz / wPos.w;

        // Lighting vectors
        vec3 lVec = (lightPos - pos) * invRadius;
        vec3 lightVec = normalize(lVec);
        vec3 viewVec = normalize(camPos - pos);

        // Attenuation that falls off to zero at light radius
        float atten = clamp(1.0f - dot(lVec, lVec), 0.0, 1.0);
        atten *= atten;

        // Lighting
        float colDiffuse = clamp(dot(lightVec, normal.xyz), 0, 1);
        float specular_intensity = diffuseTex.w * 0.4f;
        float specular = specular_intensity * pow(clamp(dot(reflect(-viewVec, normal.xyz), lightVec), 0.0, 1.0), 10.0f);

        addedLight = atten * (colDiffuse * diffuseTex.xyz + specular);
    //}

    colourOut = vec4(addedLight.xyz, 1);
}
```

Shader Binding:

```java
Matrix4f inverseViewProjection = new Matrix4f();
Vector3f camPos = cameraController.getActiveCameraPos();
GL20.glUniform3f(shader.getLocCamPos(), camPos.x, camPos.y, camPos.z);

inverseViewProjection = cameraController.getActiveVPMatrixInverse();
//inverseViewProjection = inverseViewProjection.translate(new Vector3f(-1f, 1f, 0));
//inverseViewProjection = inverseViewProjection.scale(new Vector3f(2, -2, 1));
//inverseViewProjection = inverseViewProjection.scale(new Vector3f(1f/engineParams.getDisplayWidth(), 1f/engineParams.getDisplayHeight(), 1));
GL20.glUniformMatrix4(shader.getLocmIVPMatrix(), false, OpenGLHelper.getMatrix4ScratchBuffer(inverseViewProjection));

float nearTest = 0, farTest = 0;
Matrix4f projection = new Matrix4f(cameraController.getCoreCameraProjection());
GL20.glUniformMatrix4(shader.getLocmWVP(), false, OpenGLHelper.getMatrix4ScratchBuffer(cameraController.getActiveViewProjectionMatrix()));

Vector2f zw = new Vector2f(projection.m22, projection.m23);

//Vector4f testLightViewSpace = new Vector4f(lightPos.getX(), lightPos.getY(), lightPos.getZ(), 1);
//testLightViewSpace = OpenGLHelper.columnVectorMultiplyMatrixVector((Matrix4f)cameraController.getActiveCameraView(), testLightViewSpace);

// Compute z-bounds
Vector4f lPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(), new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
float z1 = lPos.z + lightRadius;
//if (z1 > NEAR_DEPTH) {
float z0 = Math.max(lPos.z - lightRadius, NEAR_DEPTH);
nearTest = (zw.x + zw.y / z0);
farTest = (zw.x + zw.y / z1);

if (nearTest > 1) {
    nearTest = 1;
} else if (nearTest < 0) {
    nearTest = 0;
}
if (farTest > 1) {
    farTest = 1;
} else if (farTest < 0) {
    farTest = 0;
}

GL20.glUniform3f(shader.getLocLightPos(), lightPos.getX(), lightPos.getY(), lightPos.getZ());
GL20.glUniform3f(shader.getLocLightColour(), lightColour.getX(), lightColour.getY(), lightColour.getZ());
GL20.glUniform1f(shader.getLocLightRadius(), lightRadius);
GL20.glUniform1f(shader.getLocInvRadius(), 1f/lightRadius);
GL20.glUniform1f(shader.getLocLightFalloff(), lightFalloff);
GL20.glUniform2f(shader.getLocZBounds(), nearTest, farTest);
```

- Absolutely, I have seen some of the optimisation tricks, but until I have a firm handle on this I will stick to getting the basics going.
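To sanity-check the z-bounds maths outside the renderer, here is a stripped-down plain-Java sketch (hypothetical helper, no LWJGL). The a and b parameters stand in for projection.m22 and m23 in the binding code, and the exact signs depend on the maths library's matrix layout and depth convention:

```java
// Hedged sketch of the z-bounds idea: project the view-space z of the light
// sphere's nearest/farthest extremes to depth values and clamp to [0, 1], so
// fragments outside that depth band can be rejected early in the shader.
public class ZBounds {

    // Depth for a view-space z under this convention, clamped to [0, 1]
    // like nearTest/farTest in the binding code.
    static float toDepth(float a, float b, float zView) {
        float d = a + b / zView;
        return Math.max(0f, Math.min(1f, d));
    }

    // Depth band covered by a light sphere: nearest point clamped to the
    // near plane, farthest point at lightZ + radius.
    static float[] bounds(float a, float b, float lightZ, float radius, float nearZ) {
        float z0 = Math.max(lightZ - radius, nearZ);
        float z1 = lightZ + radius;
        return new float[] { toDepth(a, b, z0), toDepth(a, b, z1) };
    }
}
```

The clamp matters because a light sphere intersecting the near or far plane would otherwise produce depth bounds outside the buffer's range, which is what the commented-out if (depth >= zBounds.x && depth <= zBounds.y) test in the shader would consume.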