EnlightenedOne

Members
  • Content count: 446
  • Joined
  • Last visited

Community Reputation: 142 Neutral

About EnlightenedOne

  • Rank
    Member

Personal Information

  • Location
    UK
  1. Hi All,

     A bit of a curiosity: I am fascinated by this subject and paper: http://hal.inria.fr/docs/00/28/87/58/PDF/article.pdf

     I have a copy of the shaders for the technique, and they all carry this header:

     /**
      * Precomputed Atmospheric Scattering
      * Copyright (c) 2008 INRIA
      * All rights reserved.
      *
      * Redistribution and use in source and binary forms, with or without
      * modification, are permitted provided that the following conditions
      * are met:
      * 1. Redistributions of source code must retain the above copyright
      *    notice, this list of conditions and the following disclaimer.
      * 2. Redistributions in binary form must reproduce the above copyright
      *    notice, this list of conditions and the following disclaimer in the
      *    documentation and/or other materials provided with the distribution.
      * 3. Neither the name of the copyright holders nor the names of its
      *    contributors may be used to endorse or promote products derived from
      *    this software without specific prior written permission.
      *
      * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
      * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
      * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
      * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE
      * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
      * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
      * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
      * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
      * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
      * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF
      * THE POSSIBILITY OF SUCH DAMAGE.
      */

     /**
      * Author: Eric Bruneton
      */

     I really want to integrate this technique into my game, but the header says nothing explicit about commercial use. From a legal standpoint, do I have any right to use it within a game without seeking permission from INRIA? I cannot find anything on licensing it. Am I doing my bit if I add this notice to my licence, display it on an about page for the game, and include it in the distributed source?

     Thanks,
     EO
  2. OK, so I decided to take Ohforf sake's advice and not try to second-guess the behaviour and tricks of the DX implementation with assumptions, given the trouble that could bring.

     I knew very well that view, then projection to clip space, then /w to NDC was what I wanted, and I had the wits to get there somehow:

     Vector4f lsPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(),
             new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
     Vector4f lsPos2 = new Vector4f(lsPos);
     lsPos.z -= lightRadius;
     if (lsPos.z * -1 > 0.5f /*NEAR_DEPTH*/) {
         //Convert lsPos to NDC
         lsPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getCoreCameraProjection(),
                 new Vector4f(lsPos.x, lsPos.y, lsPos.z, lsPos.w));
         lsPos.z = lsPos.z / lsPos.w;
         lsPos2.z = lsPos2.z + lightRadius;
         lsPos2 = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getCoreCameraProjection(),
                 new Vector4f(lsPos2.x, lsPos2.y, lsPos2.z, lsPos2.w));
         lsPos2.z = lsPos2.z / lsPos2.w;

         Vector2f zBounds = new Vector2f();
         zBounds.y = lsPos.z;
         zBounds.x = lsPos2.z;
         if (zBounds.y > 1) { zBounds.y = 1; } else if (zBounds.y < 0) { zBounds.y = 0; }
         if (zBounds.x > 1) { zBounds.x = 1; } else if (zBounds.x < 0) { zBounds.x = 0; }
     }

     It is three matrix-vector multiplications per light, but Humus's genius clearly trumps mine for now. Here is the final product, with 0.1 added to the pixels that pass the z-bounds test. Sure, these are clip-space lights, but I still find this enchanting.

     If anyone can spot any optimisations to cut down on the matrix maths, or can explain Humus's black magic, please enlighten me (a sketch of where I got to is at the end of this post).

     Here is something I made less crappy earlier:

     public static Vector4f columnVectorMultiplyMatrixVector(Matrix4f matrix, Vector4f vector) {
         Vector4f returnVec = new Vector4f();
         returnVec.setX(matrix.m00 * vector.getX() + matrix.m10 * vector.getY() + matrix.m20 * vector.getZ() + matrix.m30 * vector.getW());
         returnVec.setY(matrix.m01 * vector.getX() + matrix.m11 * vector.getY() + matrix.m21 * vector.getZ() + matrix.m31 * vector.getW());
         returnVec.setZ(matrix.m02 * vector.getX() + matrix.m12 * vector.getY() + matrix.m22 * vector.getZ() + matrix.m32 * vector.getW());
         returnVec.setW(matrix.m03 * vector.getX() + matrix.m13 * vector.getY() + matrix.m23 * vector.getZ() + matrix.m33 * vector.getW());
         return returnVec;
     }

     Maybe you haven't read any of the new posts, but I am pleased to be roughly 95% of the way to matching the original material. Thank you very much for your input!
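     For anyone landing here later, the three multiplies can be folded away, because for this projection only two terms of the matrix touch z. A minimal sketch (untested; it assumes the column-vector projection from createProjection() further down the thread, where z_clip = m22 * zView + m32 and w_clip = -zView):

     // Depth that a view-space z would land at in the depth buffer (0..1),
     // without a full matrix multiply. zView is signed, i.e. negative in
     // front of the camera in this right-handed GL setup.
     public static float viewZToDepth(Matrix4f proj, float zView) {
         float zClip = proj.m22 * zView + proj.m32;
         float wClip = -zView;                 // comes from the -1 stored in m23
         float ndc = zClip / wClip;            // -1..1 in OpenGL
         float depth = ndc * 0.5f + 0.5f;      // 0..1, comparable to the raw depth texture
         return Math.max(0.0f, Math.min(1.0f, depth));
     }

     With that, zBounds.x = viewZToDepth(proj, lvPos.z + lightRadius) and zBounds.y = viewZToDepth(proj, lvPos.z - lightRadius) come out to one multiply and one divide per bound.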
  3. I should be able to multiply the light's view position (plus or minus the radius) by the projection, then divide z by w to get the depth in NDC, just as I would in a shader. By my account, adding lightRadius should perturb the projected vector in a way that stops me reaching a valid NDC depth...

     //Compute z-bounds
     Vector4f lvPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(),
             new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
     float z1 = lvPos.z + lightRadius;
     if (z1 * -1 > 0.5) {
         float z0 = Math.max(lvPos.z - lightRadius, 0.5); // NB: mixes sign conventions; view-space z is negative here
         //Move from view to clip space (note the z0/z1 naming is swapped between the two lines)
         Vector4f z0p = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getCoreCameraProjection(),
                 new Vector4f(lvPos.x, lvPos.y, z1, 1.0f));
         Vector4f z1p = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getCoreCameraProjection(),
                 new Vector4f(lvPos.x, lvPos.y, z0, 1.0f));
         //NDC
         z0p.z = z0p.z / z0p.w;
         z1p.z = z1p.z / z1p.w;
     }

     The above does not work, but I did not expect it to; I am clearly missing something important about how the projection can be manipulated to extract the depth in NDC.
  4. Before moving on, I decided the gains from clipping lights against the near/far boundaries of the light sphere were well worth it. I set my light shader to output plain white for anything drawn inside the z-boundary so I could test the clipping.

     Below is some info on my setup, but I can boil it down to one core problem: "I know I need the NDC-space depth of the clipped boundaries of the light, derived from the projection, but I am not sure how to get there in OpenGL from the view-space position of my light (lVec)."

     My projection matrix:

     public static void createProjection(Matrix4f projectionMatrix, float fov, float aspect, float znear, float zfar) {
         float scale = (float) Math.tan((Math.toRadians(fov)) * 0.5f) * znear;
         float r = aspect * scale;
         float l = -r;
         float t = scale;
         float b = -t;
         projectionMatrix.m00 = 2 * znear / (r - l);
         projectionMatrix.m01 = 0;
         projectionMatrix.m02 = 0;
         projectionMatrix.m03 = 0;
         projectionMatrix.m10 = 0;
         projectionMatrix.m11 = 2 * znear / (t - b);
         projectionMatrix.m12 = 0;
         projectionMatrix.m13 = 0;
         projectionMatrix.m20 = (r + l) / (r - l);
         projectionMatrix.m21 = (t + b) / (t - b);
         projectionMatrix.m22 = -(zfar + znear) / (zfar - znear);
         projectionMatrix.m23 = -1;
         projectionMatrix.m30 = 0;
         projectionMatrix.m31 = 0;
         projectionMatrix.m32 = -2 * zfar * znear / (zfar - znear);
         projectionMatrix.m33 = 0;
     }

     Output:

     1.1179721  0.0        0.0         0.0
     0.0        1.428148   0.0         0.0
     0.0        0.0       -1.0000666  -1.0
     0.0        0.0       -1.0000334   0.0

     m22 = -1.0000666, m32 = -1.0000334, m23 = -1

     Vector2f zw = new Vector2f(projection.m22, projection.m32);

     Vector4f lvPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(),
             new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));

     public static Vector4f columnVectorMultiplyMatrixVector(Matrix4f matrix, Vector4f vector) {
         Vector4f returnVec = new Vector4f();
         returnVec.x = Vector4f.dot(new Vector4f(matrix.m00, matrix.m10, matrix.m20, matrix.m30), vector);
         returnVec.y = Vector4f.dot(new Vector4f(matrix.m01, matrix.m11, matrix.m21, matrix.m31), vector);
         returnVec.z = Vector4f.dot(new Vector4f(matrix.m02, matrix.m12, matrix.m22, matrix.m32), vector);
         returnVec.w = Vector4f.dot(new Vector4f(matrix.m03, matrix.m13, matrix.m23, matrix.m33), vector);
         return returnVec;
     }

     (I know that is a suboptimal method; it will get optimised when I have this technique down. OPTIMISED REVISION BELOW.)

     If I put the camera at xyz(-30.19929, 5.049999, 24.870947), I notice that the light's view-space z is always negative, so I decided to flip the sign with:

     Vector4f lvPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(),
             new Vector4f(0, 0, 0, -1.0f)); // light at the origin; the -1 w negates the whole transformed vector

     float z1 = lvPos.z + lightRadius;                      // 53.809433 = 38.809433 + 15
     if (z1 > 0.5f) {
         float z0 = Math.max(lvPos.z - lightRadius, 0.5f);  // 23.809433

     Now, from what I understand, the equation here:

     float2 zBounds;
     zBounds.y = saturate(zw.x + zw.y / z0);
     zBounds.x = saturate(zw.x + zw.y / z1);

     reads to me as clipSpaceDepth = nearClipTerm + farClipTerm / viewSpaceZ.

     Unfortunately Humus transforms his projection into a D3D projection, further obfuscating what the true values of zw represent.

     Vector2f zBounds = new Vector2f();
     zBounds.y = (zw.x + zw.y / z0);
     zBounds.x = (zw.x + zw.y / z1);
     //Crude saturate, just to show I cover this step
     if (zBounds.y > 1) { zBounds.y = 1; } else if (zBounds.y < 0) { zBounds.y = 0; }
     if (zBounds.x > 1) { zBounds.x = 1; } else if (zBounds.x < 0) { zBounds.x = 0; }

     Worse still, if I make my zw values identical to those Humus arrives at, zw(-0.000033318996, 0.50001669), and apply the above calculation from the same position with a fairly close view angle, I get incorrect values:

     zBounds.y = -0.000033318996 + 0.5000167 / 24.409546 (z0) = 0.020451155
     zBounds.x = -0.000033318996 + 0.5000167 / 54.409546 (z1) = 0.009156551

     Any assistance welcome (my best guess so far is sketched just below).
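     My current best guess at the GL analogue, offered as an untested sketch rather than gospel: for the projection above, z_clip = m22 * zView + m32 and w_clip = -zView, so the 0..1 buffer depth works out to (0.5 - 0.5*m22) + (-0.5*m32) / zView, with zView kept signed (negative in front of the camera):

     // Hypothetical GL version of Humus's zw trick, derived from the
     // createProjection() matrix above; this is not code from his sample.
     Vector2f zwGL = new Vector2f(0.5f - 0.5f * projection.m22, -0.5f * projection.m32);

     float zNearFace = lvPos.z + lightRadius;   // closer to the camera (less negative)
     float zFarFace  = lvPos.z - lightRadius;   // further away (more negative)

     // Guard the near face against crossing the near plane, where the divide blows up
     zNearFace = Math.min(zNearFace, -0.5f /* znear */);

     float boundNear = Math.max(0f, Math.min(1f, zwGL.x + zwGL.y / zNearFace));
     float boundFar  = Math.max(0f, Math.min(1f, zwGL.x + zwGL.y / zFarFace));

     Plugging in the matrix printed above gives zwGL = (1.0000333, 0.5000167), which hits 0 at the near plane (zView = -0.5) and 1 at the far plane, so at least the endpoints behave.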
  5. My girlfriend drew the original texture for me, to quickly test out a digital track-pad and pen. My uni had paid for a CrazyBump licence and, believe it or not, the normal map and the height map are the result of plumbing that texture through it and clearly setting the details badly ;)

     After some rather unpleasant experiences trying to draw one out by hand, and having my expectations fall short of digital reality, I found the holy grail I was after: http://en.wikibooks.org/wiki/OpenGL_Programming/Modern_OpenGL_Tutorial_06

     I tweaked the NBT generation slightly:

     protected void buildQuadNBTData() {
         //Build the data if required.
         switch (enumVertType) {
         case VERTEX_TYPE_POS_UV_NTB:
             //Lock the vertices in the VBO; first bind the VBO for updating
             GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, vboId);
             //Put the new data in a ByteBuffer (in the view of a FloatBuffer)
             FloatBuffer vertexFloatBuffer = verticesByteBuffer.asFloatBuffer();
             Vector3f normal = null;
             Vector3f binormal = null;
             Vector3f tangent = null;
             //Compute tangents and bitangents per quad (two triangles sharing one basis)
             for (int i = 0; i < vertices.length; i += 4) {
                 Vert_PosUVNBT vertex0 = (Vert_PosUVNBT) vertices[i + 0];
                 Vert_PosUVNBT vertex1 = (Vert_PosUVNBT) vertices[i + 1];
                 Vert_PosUVNBT vertex2 = (Vert_PosUVNBT) vertices[i + 2];
                 Vert_PosUVNBT vertex3 = (Vert_PosUVNBT) vertices[i + 3];
                 //Shortcuts for vertices
                 Vector3f v0 = vertex0.getXYZV3();
                 Vector3f v1 = vertex1.getXYZV3();
                 Vector3f v2 = vertex2.getXYZV3();
                 //Shortcuts for UVs
                 Vector2f uv0 = vertex0.getUVV2();
                 Vector2f uv1 = vertex1.getUVV2();
                 Vector2f uv2 = vertex2.getUVV2();
                 //Edges of the triangle: position deltas
                 Vector3f deltaPos1 = Vector3f.sub(v1, v0, null);
                 Vector3f deltaPos2 = Vector3f.sub(v2, v0, null);
                 //UV deltas
                 float s1 = uv1.x - uv0.x;
                 float t1 = uv1.y - uv0.y;
                 float s2 = uv2.x - uv0.x;
                 float t2 = uv2.y - uv0.y;
                 float tmp;
                 if (Math.abs(s1 * t2 - s2 * t1) <= 0.0001f) {
                     tmp = 1.0f; //prevent divide by zero on degenerate UVs
                 } else {
                     tmp = 1.0f / (s1 * t2 - s2 * t1);
                 }
                 tangent = new Vector3f((t1 * deltaPos2.x - t2 * deltaPos1.x),
                                        (t1 * deltaPos2.y - t2 * deltaPos1.y),
                                        (t1 * deltaPos2.z - t2 * deltaPos1.z));
                 binormal = new Vector3f((s1 * deltaPos2.x - s2 * deltaPos1.x),
                                         (s1 * deltaPos2.y - s2 * deltaPos1.y),
                                         (s1 * deltaPos2.z - s2 * deltaPos1.z));
                 normal = OpenGLHelper.calculateNormal(v0, v1, v2);
                 tangent.set(tangent.x * tmp, tangent.y * tmp, tangent.z * tmp);
                 binormal.set(binormal.x * tmp, binormal.y * tmp, binormal.z * tmp);
                 if (Vector3f.dot(Vector3f.cross(normal, tangent, null), binormal) < 0) {
                     tangent = (Vector3f) tangent.negate(); //fix handedness for mirrored UVs
                 }
                 binormal = (Vector3f) binormal.negate();
                 vertex0.setNormalXYZ(normal.x, normal.y, normal.z);
                 vertex0.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
                 vertex0.setTangentXYZ(tangent.x, tangent.y, tangent.z);
                 vertex1.setNormalXYZ(normal.x, normal.y, normal.z);
                 vertex1.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
                 vertex1.setTangentXYZ(tangent.x, tangent.y, tangent.z);
                 vertex2.setNormalXYZ(normal.x, normal.y, normal.z);
                 vertex2.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
                 vertex2.setTangentXYZ(tangent.x, tangent.y, tangent.z);
                 vertex3.setNormalXYZ(normal.x, normal.y, normal.z);
                 vertex3.setBinormalXYZ(binormal.x, binormal.y, binormal.z);
                 vertex3.setTangentXYZ(tangent.x, tangent.y, tangent.z);
                 vertexFloatBuffer.put(vertex0.getElements());
                 vertexFloatBuffer.put(vertex1.getElements());
                 vertexFloatBuffer.put(vertex2.getElements());
                 vertexFloatBuffer.put(vertex3.getElements());
             }
             vertexFloatBuffer.flip();
             /*for (int i = 0; i < vertices.length; i++) {
                 System.out.println(i + " " + vertices[i].toString());
             }*/
             //Rebuffer the data to the graphics card, ready for drawing
             GL15.glBufferSubData(GL15.GL_ARRAY_BUFFER, 0, verticesByteBuffer);
             //Unlock the buffer
             GL15.glBindBuffer(GL15.GL_ARRAY_BUFFER, 0);
             break;
         default:
             break;
         }
     }

     For anyone reading along: after generating the NBT, you can get the outward-facing vertex normal from just the triangle above, plus the NBT per vertex, provided you have defined the object's UV mapping.

     I scanned everything and concluded that the box looked valid but my lighting seemed wrong; the specular was blotchy and all the lighting was biased toward one vector. That is when I realised it. In my frag shader above:

     vec4 normal = texture(normalTexture, UV); //2.0f * texture(normalTexture, UV) - 1.0f;

     Why do something so stupid? Because to draw my floor I have a very basic position-plus-colour vertex type, using a basic shader to fill the G-buffer. I set its normal to vec4(0,1,0,0) and used the simplicity of this as the basis for proving my light was OK (I was clinging to the light appearing to be right).

     The valid version of my simple "normal is up" test was:

     normalOut = vec4(0.5f, 1.0f, 0.5f, 0.5f);

     If you had not made me put a microscope over my normal textures I would never have realised it. Thank you very much! Here is what I have now: the light is close to the ground and the floor looks a bit dark in this picture, but it looks much brighter pre-JPEG.

     With my positions and basic bearings back under control, I can confidently progress and optimise. I have many things to review; for starters I will shift my normal buffer over to view space to simplify the majority of light calculations. The one thing I still do not have a handle on is the optimisation from the sample that clips lights by near/far z depth. I will keep you posted when I have spent more time analysing how that should play out. I can't wait to plumb in some beautiful post-processing.
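     To spell out the encode/decode pair that bit me, here is a minimal CPU-side sketch (my own illustration, not engine code):

     // G-buffer normals live in a [0,1] texture, so a [-1,1] normal must be
     // scale-biased on write and un-biased on read. A flat "up" normal
     // therefore lands in the buffer as (0.5, 1.0, 0.5), not (0, 1, 0).
     public static Vector3f encodeNormal(Vector3f n) {
         return new Vector3f(0.5f * (n.x + 1f), 0.5f * (n.y + 1f), 0.5f * (n.z + 1f));
     }

     public static Vector3f decodeNormal(Vector3f g) {
         return new Vector3f(2f * g.x - 1f, 2f * g.y - 1f, 2f * g.z - 1f);
     }

     Forgetting either half of the round trip gives exactly the "all lighting biased toward one vector" symptom above.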
  6. Qualifying per-pixel normals, ignoring the normal map temporarily.

     It appears that the texture I sent was some form of thumbnail rather than the full-size image; it is 128x128 in the renders above, and the highest resolution I have is 512x512.

     - Can you qualify this?

     Let's strip this back to the basics of a normal map. The NBT represents the Normal/Bitangent/Tangent axes for arbitrary rotation in model space about a vertex; the normal map is the means by which a per-pixel normal is expressed in that NBT frame along XYZ to produce a new normal.

     The normal map stores one axis per RGB channel, where RGB = XYZ. Within the texture, channel values 0..255 map to normal components -1..1; when the map is read, the values are scale-biased (rgb * 2 - 1) into that range. To keep the local normal pointing out along the surface, the blue (z) channel is always kept above the midpoint, so the decoded z stays positive.

     So flipping the green and blue channels sounds like a dangerous move, as it will lower the brightness when the light directly faces the surface. Arguably the pitch of the tangent might not be massively influential for a rough wall, but why risk it? (Remember, the green is the inseam of mortar between bricks, so making the light increase when it is above the face weakens realism by not faking occlusion :p)

     Based on my grasp of normals, we can forgo worrying about the image itself by using a normal map that is uniformly (128, 128, 255); this should at least let the NBT be analysed without distortions. Even better, we can cut out the middle man and multiply the tbnMatrix by vec3(0,0,1).

     GeomFrag update:

     vec3 normalMap = 2.0f * texture( normalTexture, v2UVHeightDisplacement ).xyz - 1.0f;
     normalMap = vec3(0,0,1); //override: pure tangent-space "up" to isolate the NBT
     normalMap = tbnMatrix * normalMap;
     normalMap = normalize(normalMap);
     normalOut.xyz = 0.5f * (normalMap + 1.0f);

     The normal fight continues. I am using these as my foundation for analysis:
     http://www.gamedev.net/topic/347799-mirrored-uvs-and-tangent-space-solved/
     http://www.catalinzima.com/xna/tutorials/deferred-rendering-in-xna/custom-content-processor-and-normal-mapping/

     I decided that, because generating an arbitrary axis is hard, I would try to get Blender to output a plain-text xyz / uv / n(xyz) / b(xyz) / t(xyz) model, but it appears this is the holy grail of things to attain. It proved so difficult just to get a cube formatted in a way I could read in Notepad++, or to get the indices to match up, that I decided to start somewhere else.

     So I adapted a normal solver I had been using with certainty on a two-axis basis (and had seen elsewhere as a certified triangle-normal generator) and decided to see whether I could generate my NBT from position and UV alone, cutting down the number of candidate problems and giving me a very useful, reusable bit of code if I ever decide to generate 3D terrain from a heightmap, etc.

     I am still in the process of proving that the outputs are valid before going back over my maths to validate the UV ordering (the sketch below shows the kind of check I mean). I will keep you posted when I have more.
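     A minimal sketch of the validation I have in mind (illustrative only; LWJGL vector types assumed):

     // A well-formed NBT basis should be unit length, mutually orthogonal, and
     // cross(N, T) should coincide with B up to sign; the sign encodes the
     // handedness of the UV mapping (mirrored UVs flip it).
     public static boolean isPlausibleNBT(Vector3f n, Vector3f b, Vector3f t) {
         final float eps = 1e-3f;
         if (Math.abs(n.length() - 1f) > eps
                 || Math.abs(b.length() - 1f) > eps
                 || Math.abs(t.length() - 1f) > eps) return false;    // not unit length
         if (Math.abs(Vector3f.dot(n, t)) > eps
                 || Math.abs(Vector3f.dot(n, b)) > eps) return false; // not orthogonal
         Vector3f c = Vector3f.cross(n, t, null);
         return Math.abs(Math.abs(Vector3f.dot(c, b)) - 1f) < eps;    // B parallel to N x T
     }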
  7. - Absolutely, I am not expecting zero bleed-through at critical angles; I think I may just have taken some unclear screenshots in my last post.

     I have been trying to map in world orientation rather than face-UV orientation, which could potentially explain why the normal is not what I expect; the only trouble is that the sheer volume of permutations I have tested makes me suspect a deeper fault.

     Let me have another go at those images with simpler angles.

     xPositiveLightNear (light behind camera):
     xPositiveLightFar (light behind box):

     The above look good: slight bleed-through at critical angles in reflection, as qualified by your description.

     xNegativeLightNear (light behind camera on the negative side; please ignore the lines in the distance, which are an irrelevant attempt at loading a model from a file):
     xNegativeLightFar (light behind surface, perfectly lit):

     If you take a look, you can see the normal behaves as if it were on the other side of the face. If the NBT is so wrong that I am causing this by design, as you suggest, I will continue to investigate by manual modification, using the UV orientation of the face and your description as a guide, validating against the working faces before proceeding.

     I know it might sound silly, but being able to write a normal-mapped cube with indices is definitely something I feel I must force myself to become competent at.

     Here is the normal map:

     Finally, just for my sake, to show the normal map itself is sound: here is a DX9 forward renderer I wrote a couple of years back using the same texture. The FPS is low because I have a couple of copies of my OpenGL cube app's G-buffer sitting on the graphics card, and this was my stress-test world with some shadow maps and demonstrative, overstated parallax relief mapping.

     I will keep picking away at the assembly of the normals now you have had a glance at the shaders; I might even try flipping them to see if the normals come through on the negative axis.

     Thank you for all of your input so far.
  8. I am not out of the woods yet. Here is what I have so far:

     The remaining hurdle: the negative-facing sides of my cube are lit when the light hits the back of the surface (as if the normal were the inverse of what I want). I have tried many permutations of NBT on the cube face I have left empty, but even with no NBT data the flat lighting still behaves as though the face were facing the other way, so I am led to believe it has to be a bug in my lighting.

     The faces toward the camera become unlit as the light approaches what should be their normal. The behaviour is as if both faces on each axis had normals facing in the positive direction. I am not sure if my home-brew cube is at fault or my shader code. Here are both, for those with interest:

     VertGeom:

     #version 330
     in vec3 inPos;
     in vec2 inUV;
     in vec3 inNormal;
     in vec3 inBinormal;
     in vec3 inTangent;
     uniform mat4 mWorld;      //World matrix of object with all transformations (excluding view and projection)
     uniform vec3 v3EyePos;    //The location of the camera, untransformed
     uniform mat4 wvpMatrix;
     uniform mat4 mWVMatrix;
     uniform mat4 mWUnscaled;  //World matrix of object with rotation and movement but no scale
     uniform int materialId;
     uniform float texDepthScale; //Depth of the parallax map (height-map effect, in layman's terms)
     uniform float texBias;       //Bias of the parallax map
     out vec4 mWVPPosition;
     out vec2 texCoord;
     out vec3 v3ViewDir;
     out mat3 tbnMatrix;
     void main(void) {
         mWVPPosition = wvpMatrix * vec4(inPos, 1.0f);
         gl_Position = mWVPPosition;
         texCoord = inUV;
         vec4 v4WorldPos = mWorld * vec4(inPos, 1.0f);
         //NB: mat3(T, B, N) has the basis vectors as columns, so this maps
         //tangent space -> world; going world -> tangent needs the transpose.
         mat3 mTBNMatrix = mat3(mWUnscaled) * mat3(inTangent, inBinormal, inNormal);
         v3ViewDir = mTBNMatrix * (v3EyePos - v4WorldPos.xyz);
         tbnMatrix = mTBNMatrix;
     }

     FragGeom:

     #version 330
     in vec4 mWVPPosition;
     in vec2 texCoord;
     in vec3 v3ViewDir;
     in mat3 tbnMatrix;
     in vec3 vNormal;   //unused leftovers, never written by the vertex stage
     in vec3 vBinormal;
     in vec3 vTangent;
     uniform mat4 mWorld;
     uniform vec3 v3EyePos;
     uniform mat4 wvpMatrix;
     uniform mat4 mWVMatrix;
     uniform mat4 mWUnscaled;
     uniform int materialId;
     uniform float texDepthScale;
     uniform float texBias;
     uniform sampler2D diffuseTexture;
     uniform sampler2D normalTexture;
     uniform sampler2D heightTexture;
     uniform sampler2D specularTexture;
     layout (location = 0) out vec4 colourOut;
     layout (location = 1) out vec4 normalOut;
     void main(void) {
         //Determine the height of this pixel
         float fltHeight = texDepthScale * texture(heightTexture, texCoord).r - texBias;
         //Compute the new texture coordinate to use, based on the parallax
         vec2 v2UVHeightDisplacement = (fltHeight * v3ViewDir).xy;
         //Create the true offset
         v2UVHeightDisplacement += texCoord;
         colourOut = texture(diffuseTexture, v2UVHeightDisplacement);
         //Specular exponent
         vec3 specExp = texture(specularTexture, v2UVHeightDisplacement).rgb;
         colourOut.w = (specExp.x + specExp.y + specExp.z) / 3;
         vec3 normalMap = texture( normalTexture, v2UVHeightDisplacement ).xyz * 2.0f - 1.0f;
         normalMap = tbnMatrix * normalMap;
         normalMap = normalize(normalMap.xyz);
         normalOut.xyz = 0.5f * (normalMap + 1.0f);
         normalOut.w = 1;
     }

     VertPointLight:

     #version 330
     in vec3 inPos;
     in vec2 inUV;
     uniform mat4 wvpMatrix;
     uniform mat4 ivpMatrix;
     uniform mat4 mWVMatrix;
     uniform vec2 zBounds;
     uniform vec3 camPos;
     uniform float invRadius;
     uniform vec3 lightPos;
     uniform vec3 lightColour;
     uniform float lightRadius;
     uniform float lightFalloff;
     uniform float projNFLinearScalarA;
     uniform float projNFLinearScalarB;
     out vec4 mWVPPosition;
     out vec3 viewRay;
     void main(void) {
         vec3 position = inPos;
         position *= lightRadius;
         position += lightPos;
         mWVPPosition = wvpMatrix * vec4(position, 1.0f);
         gl_Position = mWVPPosition;
     }

     GeomPointLight:

     #version 330
     in vec4 mWVPPosition;
     in vec3 viewRay;
     uniform mat4 wvpMatrix;
     uniform mat4 ivpMatrix;
     uniform mat4 mWVMatrix;
     uniform vec2 zBounds;
     uniform vec3 camPos;
     uniform float invRadius;
     uniform vec3 lightPos;
     uniform vec3 lightColour;
     uniform float lightRadius;
     uniform float lightFalloff;
     uniform float projNFLinearScalarA;
     uniform float projNFLinearScalarB;
     uniform sampler2D diffuseTexture;
     uniform sampler2D normalTexture;
     uniform sampler2D depthTexture;
     layout (location = 0) out vec4 colourOut;
     void main(void) {
         vec3 euclClipSpacePosition = mWVPPosition.xyz / mWVPPosition.w;
         vec2 UV = euclClipSpacePosition.xy * 0.5 + vec2(0.5);
         float depth = texture(depthTexture, UV).x * 2 - 1;
         vec3 addedLight = vec3(0,0,0);
         //if (depth >= zBounds.x && depth <= zBounds.y)
         {
             vec4 diffuseTex = texture(diffuseTexture, UV);
             vec4 normal = texture(normalTexture, UV); //2.0f * texture(normalTexture, UV) - 1.0f;
             // Clip-space position
             vec4 cPos = vec4(euclClipSpacePosition.xy, depth, 1);
             // World-space position
             vec4 wPos = ivpMatrix * cPos;
             vec3 pos = wPos.xyz / wPos.w;
             // Lighting vectors
             vec3 lVec = (lightPos - pos) * invRadius;
             vec3 lightVec = normalize(lVec);
             vec3 viewVec = normalize(camPos - pos);
             // Attenuation that falls off to zero at the light radius
             float atten = clamp(1.0f - dot(lVec, lVec), 0.0f, 1.0f);
             atten *= atten;
             // Lighting
             float diffuse = clamp(dot(normal.xyz, lightVec), 0.0f, 1.0f);
             float specular_intensity = diffuseTex.w * 0.4f;
             float specular = specular_intensity * pow(clamp(dot(reflect(-viewVec, normal.xyz), lightVec), 0.0f, 1.0f), 10.0f);
             addedLight = (diffuseTex.xyz + specular) * vec3(diffuse, diffuse, diffuse) * atten; // * (diffuse * diffuseTex + specular);
         }
         colourOut = vec4(addedLight.xyz, 1);
     }

     The cube's vertex data being provided to the shader is an iron-clad guarantee; the uncertainty is how badly I have set up the NBT data. I have tried many permutations to flip the normal with no success. To keep this simple, I have provided just the indices and a vertex definition.

     indices = new byte[] {
         0, 1, 2,    //Front (the negative-z face with no normal data that I am struggling with)
         2, 3, 0,
         6, 5, 4,    //Back
         4, 7, 6,
         8, 9, 10,   //Top
         10, 11, 8,
         14, 13, 12, //Bottom
         12, 15, 14,
         16, 17, 18, //Right
         18, 19, 16,
         22, 21, 20, //Left
         20, 23, 22
     };

     Vertices, for the curious (Pos / UV / Normal / Binormal / Tangent):

      0: Pos( 0,  0,  0) UV( 0, 1) N( 0, 0, 0) B( 0, 0, 0) T( 0, 0, 0)
      1: Pos( 0, 20,  0) UV( 0, 0) N( 0, 0, 0) B( 0, 0, 0) T( 0, 0, 0)
      2: Pos(20, 20,  0) UV(-1, 0) N( 0, 0, 0) B( 0, 0, 0) T( 0, 0, 0)
      3: Pos(20,  0,  0) UV(-1, 1) N( 0, 0, 0) B( 0, 0, 0) T( 0, 0, 0)
      4: Pos( 0,  0, 20) UV( 0, 1) N( 0, 0, 1) B( 0, 1, 0) T( 1, 0, 0)
      5: Pos( 0, 20, 20) UV( 0, 0) N( 0, 0, 1) B( 0, 1, 0) T( 1, 0, 0)
      6: Pos(20, 20, 20) UV(-1, 0) N( 0, 0, 1) B( 0, 1, 0) T( 1, 0, 0)
      7: Pos(20,  0, 20) UV(-1, 1) N( 0, 0, 1) B( 0, 1, 0) T( 1, 0, 0)
      8: Pos( 0, 20,  0) UV( 0, 1) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
      9: Pos( 0, 20, 20) UV( 0, 0) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     10: Pos(20, 20, 20) UV(-1, 0) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     11: Pos(20, 20,  0) UV(-1, 1) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     12: Pos( 0,  0,  0) UV( 0, 1) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     13: Pos( 0,  0, 20) UV( 0, 0) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     14: Pos(20,  0, 20) UV(-1, 0) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     15: Pos(20,  0,  0) UV(-1, 1) N( 0, 1, 0) B(-1, 0, 0) T( 0, 0, -1)
     16: Pos( 0, 20,  0) UV( 0, 0) N( 1, 0, 0) B( 0, 1, 0) T( 0, 0, 1)
     17: Pos( 0,  0,  0) UV( 0, 1) N( 1, 0, 0) B( 0, 1, 0) T( 0, 0, 1)
     18: Pos( 0,  0, 20) UV(-1, 1) N( 1, 0, 0) B( 0, 1, 0) T( 0, 0, 1)
     19: Pos( 0, 20, 20) UV(-1, 0) N( 1, 0, 0) B( 0, 1, 0) T( 0, 0, 1)
     20: Pos(20, 20,  0) UV( 0, 0) N( 1, 0, 0) B( 0, 0, 1) T( 0, 1, 0)
     21: Pos(20,  0,  0) UV( 0, 1) N( 1, 0, 0) B( 0, 0, 1) T( 0, 1, 0)
     22: Pos(20,  0, 20) UV(-1, 1) N( 1, 0, 0) B( 0, 0, 1) T( 0, 1, 0)
     23: Pos(20, 20, 20) UV(-1, 0) N( 1, 0, 0) B( 0, 0, 1) T( 0, 1, 0)

     This is effectively world-space depth reconstruction/calculation providing normal mapping and parallax relief mapping via a deferred approach. Everything appears to work bar these normals. I am not yet importing any model data from external formats.
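     For my own reference while debugging, a sketch of expectations (illustrative, not thread-verified): each face of an axis-aligned box should carry one outward normal, and if I am reading my own dump right, the y = 0 and x = 0 faces above reuse the +Y and +X normals, which would line up with the "both faces lit from the positive direction" symptom.

     // Outward normals an axis-aligned box from (0,0,0) to (20,20,20) should
     // carry (matching B/T must still follow each face's UV orientation):
     Vector3f nFront  = new Vector3f( 0,  0, -1); // z = 0 face
     Vector3f nBack   = new Vector3f( 0,  0,  1); // z = 20 face
     Vector3f nBottom = new Vector3f( 0, -1,  0); // y = 0 face
     Vector3f nTop    = new Vector3f( 0,  1,  0); // y = 20 face
     Vector3f nLeft   = new Vector3f(-1,  0,  0); // x = 0 face
     Vector3f nRight  = new Vector3f( 1,  0,  0); // x = 20 face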
  9. I uncovered a couple of mistakes thanks to your explanation, and discovered a key bug: an optimisation to my ambient lighting phase got my normal texture bound where my point-light phase expected the depth texture. When the depth reconstruction was corrected I could see the texture in a corner, and realised immediately why so many things had gone so wrong and made me doubt some core understanding which, while shaky, was valid.

     - Yes, this was intentional experimentation, although it echoed the core mistake I had made.

     I also just realised one big caveat: I have been plumbing a normal map into a bump-map input, which is another reason the behaviour was so peculiar.
  10. Analysis has not gone well; my light is still dependent on the viewer position. I have noticed some old unpleasantness from my original delve into reconstruction. Here are some pictures I am curious to understand; shader code is at the end, and all the light values are just attenuation written into RGB.

      Above: the light at (0,0,0), viewed from (0, 0.5, 0).

      Above: the light from close up (around (-1, 0.5, -1)).

      Above: the box from roughly (0, 0.5, -1). Note that the intensity looks relatively accurate; I am not sure why the clipping for the light is not much more closely in line with the clipping for the camera (I want to write it off as a precision issue, but it is much too early to do so).

      Here is a snapshot of the box from various distances; the corner marked A is always the same.
      Top left: the box at a distance where no lighting is visible, far outside the light sphere.
      Top right: the box from near the edge of the light sphere; note the far corners and corner A becoming lit at the same time.
      Bottom left: the box closer (the intensity is much lower than at (0,0,0)!); the unpleasant flaw I want to focus on is point x, the centre of the light.
      Bottom right: the box from the same position as the bottom-left image with the camera turned right; note that the light's projected position has moved relative to the camera tilt, inverted about the screen's centre. This behaviour is mirrored in both pitch and yaw via the x and y axes.

      Shader code for reference:

      vec3 euclClipSpacePosition = mWVPPosition.xyz / mWVPPosition.w;
      vec2 UV = euclClipSpacePosition.xy * 0.5 + vec2(0.5);
      float depth = texture(diffuseTexture, UV).x * 2 - 1; //the mixed-up binding from entry 9 above: the depth texture was meant here
      vec3 addedLight = vec3(0,0,0);
      vec4 diffuseTex = texture(diffuseTexture, UV);
      vec4 normal = texture(normalTexture, UV);
      // Clip-space position
      vec4 cPos = vec4(euclClipSpacePosition.xy, depth, 1);
      // World-space position
      vec4 wPos = ivpMatrix * cPos;
      vec3 pos = wPos.xyz / wPos.w;
      // Lighting vectors; lightPos = (0,0,0) and invRadius = 1 for the moment
      vec3 lVec = (lightPos - pos.xyz) * invRadius;
      vec3 lightVec = normalize(lVec);
      vec3 viewVec = normalize(camPos - pos.xyz);
      // Attenuation that falls off to zero at the light radius
      float atten = clamp(1.0f - dot(lVec, lVec), 0.0, 1.0);
      atten *= atten;

      Any hints welcome; still chugging along, but as you can see above I am not having a good time of it!
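      One sanity check worth wiring in at this stage, as a sketch of mine with hypothetical variable names (viewProj and viewProjInverse are whatever the camera controller hands back):

      // Round-trip a known world point: project with the view-projection,
      // divide by w to reach NDC, then pull it back through the inverse and
      // compare. If this fails on the CPU, the shader cannot possibly work.
      Vector4f world = new Vector4f(10f, 2f, -5f, 1f);
      Vector4f clip = OpenGLHelper.columnVectorMultiplyMatrixVector(viewProj, world);
      Vector4f ndc = new Vector4f(clip.x / clip.w, clip.y / clip.w, clip.z / clip.w, 1f);
      Vector4f back = OpenGLHelper.columnVectorMultiplyMatrixVector(viewProjInverse, ndc);
      back.scale(1f / back.w); // perspective divide on the way back
      System.out.println("expected " + world + ", got " + back);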
  11. - Yes, sorry about that! Corrected it now.

      - It is clear my understanding is shaky. I can reason my way out of most scenarios, and I know to analyse things in colour as you suggested :) I think the number of potential points of failure behind each visual error made me conscious that I might miss some subtle nuance and have it cost me more time than I can imagine, but I appreciate the frank fact that I need a firm handle on these processes before I move on from this topic. I want to get precomputed atmospheric scattering going, and this reconstruction is the sticking point; all my attempts to get it practically working have failed. I wanted a foundation I could refer to as an iron-clad solution, so I can break it and understand how it comes together from there. It is not my usual approach, but it is (in my opinion, with my background) faster than a ground-up approach for something this complex.

      A good example of my rationale for approaching this top-down is that there are so many unknowns. Consider this, from the original sample:

      // Pre-scale-bias the matrix so we can use the screen position directly
      float4x4 viewProjInv = (!viewProj) * (translate(-1.0f, 1.0f, 0.0f) * scale(2.0f, -2.0f, 1.0f));
      renderer->setShaderConstant4x4f("ViewProjInv", viewProjInv * scale(1.0f / width, 1.0f / height, 1.0f));

      Why does he have to scale his inverse view-projection into what I believe would be defined as NDC before applying it? (My current reading is sketched just below.)
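      My current reading, as an untested sketch: the translate and scale fold the pixel-coordinate-to-NDC mapping into the inverse view-projection once on the CPU, so the pixel shader can run raw screen coordinates straight through a single matrix instead of converting per pixel. A GL-flavoured equivalent (width/height being the viewport size; all other names are from the binding code above) might look like:

      // Compose screenPixel -> NDC -> world into one matrix per frame.
      // In GL, NDC x and y both run -1..1 with the origin at the bottom-left,
      // so: ndc = screen * (2/width, 2/height) - 1.
      float width = 1280f, height = 720f; // example viewport
      Matrix4f screenToNdc = new Matrix4f(); // identity
      Matrix4f.translate(new Vector3f(-1f, -1f, 0f), screenToNdc, screenToNdc); // the -1 bias...
      Matrix4f.scale(new Vector3f(2f / width, 2f / height, 1f), screenToNdc, screenToNdc); // ...applied after the scale, for column vectors
      Matrix4f screenToWorld = Matrix4f.mul(inverseViewProjection, screenToNdc, null);

      The -2 on y in his version would then just be D3D's top-left screen origin, which GL does not need.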
  12. I have identified that the point of failure still centres on the camera, or the camera position used in the reconstruction.

      I did some diagnostics by visualising the viewVec around the camera in the DX sample and in my GL port. The flipped UV seems fine to me; the issue I can see is that when my camera moves, the viewVec does not move with it:

      DX:

      GL (before camera movement):

      GL (after moving):

      I will keep experimenting to try to correct this; keep me posted if you spot the bug.
  13. That was very concise and informative, thank you so much!

      I took a look but did not find anything in a very quick search; I will search again based on the profile. I have seen roughly four threads on this forum specifically relating to reconstruction.

      Point 1: it is [row][column]; I am sitting atop the LWJGL maths framework. I experimented with transposing the projection matrix post-construction, but noted no visual change in behaviour.

      Point 2: excellent correction; a mistake in my understanding.

      Point 3: you were absolutely right, and you also cleaned up my concern about depth normalisation. I knew OpenGL was -1 to 1; I just did not know when to modify elements to best utilise the depth buffer.

      Point 4: removing the scalar has got me something more consistent.

      I am still trying to identify what I am looking at now. The point light is creating a cone, or event horizon, aligned to the angle between the camera and the centre of the point light (when viewed from approximately x -1, y 0.5, z -1). Note that the rotation of the camera appears to have offset the light in a way I had not anticipated.

      I will keep experimenting. Here are the corrections based on your analysis:

      PointLightFragShader:

      #version 330
      in vec4 mWVPPosition;
      uniform mat4 wvpMatrix;
      uniform mat4 ivpMatrix;
      uniform vec2 zBounds;
      uniform vec3 camPos;
      uniform float invRadius;
      uniform vec3 lightPos;
      uniform vec3 lightColour;
      uniform float lightRadius;
      uniform float lightFalloff;
      uniform sampler2D diffuseTexture;
      uniform sampler2D normalTexture;
      uniform sampler2D depthTexture;
      layout (location = 0) out vec4 colourOut;
      void main(void) {
          vec3 euclClipSpacePosition = mWVPPosition.xyz / mWVPPosition.w;
          vec2 UV = euclClipSpacePosition.xy * 0.5 + vec2(0.5);
          float depth = texture(diffuseTexture, UV).x; //the mixed-up binding from entry 9 above: the depth texture was meant here
          vec3 addedLight = vec3(0,0,0);
          //if (depth >= zBounds.x && depth <= zBounds.y)
          {
              vec4 diffuseTex = texture(diffuseTexture, UV);
              vec4 normal = texture(normalTexture, UV);
              // Clip-space position
              vec4 cPos = vec4(euclClipSpacePosition.xy, depth * 2 - 1, 1);
              // World-space position
              vec4 wPos = ivpMatrix * cPos;
              vec3 pos = wPos.xyz / wPos.w;
              // Lighting vectors
              vec3 lVec = (lightPos - pos) * invRadius;
              vec3 lightVec = normalize(lVec);
              vec3 viewVec = normalize(camPos - pos);
              // Attenuation that falls off to zero at the light radius
              float atten = clamp(1.0f - dot(lVec, lVec), 0.0, 1.0);
              atten *= atten;
              // Lighting
              float colDiffuse = clamp(dot(lightVec, normal.xyz), 0, 1);
              float specular_intensity = diffuseTex.w * 0.4f;
              float specular = specular_intensity * pow(clamp(dot(reflect(-viewVec, normal.xyz), lightVec), 0.0, 1.0), 10.0f);
              addedLight = atten * (colDiffuse * diffuseTex.xyz + specular);
          }
          colourOut = vec4(addedLight.xyz, 1);
      }

      Shader binding:

      Matrix4f inverseViewProjection = new Matrix4f();
      Vector3f camPos = cameraController.getActiveCameraPos();
      GL20.glUniform3f(shader.getLocCamPos(), camPos.x, camPos.y, camPos.z);
      inverseViewProjection = cameraController.getActiveVPMatrixInverse();
      //inverseViewProjection = inverseViewProjection.translate(new Vector3f(-1f, 1f, 0));
      //inverseViewProjection = inverseViewProjection.scale(new Vector3f(2, -2, 1));
      //inverseViewProjection = inverseViewProjection.scale(new Vector3f(1f/engineParams.getDisplayWidth(), 1f/engineParams.getDisplayHeight(), 1));
      GL20.glUniformMatrix4(shader.getLocmIVPMatrix(), false, OpenGLHelper.getMatrix4ScratchBuffer(inverseViewProjection));

      float nearTest = 0, farTest = 0;
      Matrix4f projection = new Matrix4f(cameraController.getCoreCameraProjection());
      GL20.glUniformMatrix4(shader.getLocmWVP(), false, OpenGLHelper.getMatrix4ScratchBuffer(cameraController.getActiveViewProjectionMatrix()));
      Vector2f zw = new Vector2f(projection.m22, projection.m23); //NB: m23 is the -1 in this layout; entry 4 above works the depth pair out to be (m22, m32)
      //Vector4f testLightViewSpace = new Vector4f(lightPos.getX(), lightPos.getY(), lightPos.getZ(), 1);
      //testLightViewSpace = OpenGLHelper.columnVectorMultiplyMatrixVector((Matrix4f) cameraController.getActiveCameraView(), testLightViewSpace);

      // Compute z-bounds
      Vector4f lPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(),
              new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
      float z1 = lPos.z + lightRadius;
      //if (z1 > NEAR_DEPTH) {
      float z0 = Math.max(lPos.z - lightRadius, NEAR_DEPTH);
      nearTest = (zw.x + zw.y / z0);
      farTest = (zw.x + zw.y / z1);
      if (nearTest > 1) { nearTest = 1; } else if (nearTest < 0) { nearTest = 0; }
      if (farTest > 1) { farTest = 1; } else if (farTest < 0) { farTest = 0; }

      GL20.glUniform3f(shader.getLocLightPos(), lightPos.getX(), lightPos.getY(), lightPos.getZ());
      GL20.glUniform3f(shader.getLocLightColour(), lightColour.getX(), lightColour.getY(), lightColour.getZ());
      GL20.glUniform1f(shader.getLocLightRadius(), lightRadius);
      GL20.glUniform1f(shader.getLocInvRadius(), 1f / lightRadius);
      GL20.glUniform1f(shader.getLocLightFalloff(), lightFalloff);
      GL20.glUniform2f(shader.getLocZBounds(), nearTest, farTest);

      - Absolutely, I have seen some of the optimisation tricks, but until I have a firm handle on the basics I will stick to getting them going.
  14. Hi All,

      I am a seasoned DX developer, and having scoured the web for decent deferred shading approaches I am trying to adapt samples from the web to the forsaken tongue.

      Deferred rendering tutorials I have seen:
      http://ogldev.atspace.co.uk/www/tutorial35/tutorial35.html - Uses an extra depth texture and an inefficient stencil pass
      http://www.catalinzima.com/xna/tutorials/deferred-rendering-in-xna/creating-the-g-buffer/ - Uses an extra depth texture
      http://www.openglsuperbible.com/example-code/ (Deferred Rendering demo) - Uses an extra depth texture
      http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/ - Only snippets; limited success with the frag_depth "vertex / w" approach (with perspective screen-space distortions), fell flat otherwise
      https://github.com/Circular-Studios/Dash - Looks great, but I was unable to inspect it; the dev-tools debugger did not play nice with D
      http://www.humus.name/index.php?page=3D&&start=0 - No excess position buffers, no stencil buffers, advanced depth inspection to filter lights efficiently; it's beautiful!

      In my attempt to port the last sample (based on the Second-Depth Anti-Aliasing sample), I get world space wrapped inside the light. Clearly the primary fault is that the inverse view-projection is invalid (this was the worst set of artifacts I could generate rotating around a cube). I intentionally have a very well-rounded light sphere to make the distortions clear.

      Provided I can get basics like a world-space position, I can write beautiful shaders; I am struggling to get this point of reference, and without it I feel about 2cm tall.

      Because I want this to be available for everyone (I hate the lack of GL sample code!), here is what I have so far (apologies that there is no sample app):

      GeomVertShader:

      #version 330
      in vec3 inPos;
      in vec2 inUV;
      in vec3 inNormal;
      in vec3 inBinormal;
      in vec3 inTangent;
      uniform mat4 wvpMatrix;
      out vec4 mWVPPosition;
      out vec3 pNormal;
      out vec3 pBinormal;
      out vec3 pTangent;
      out vec2 texCoord;
      void main(void) {
          mWVPPosition = wvpMatrix * vec4(inPos, 1.0f);
          gl_Position = mWVPPosition;
          pNormal = inNormal;
          pBinormal = inBinormal;
          pTangent = inTangent;
          texCoord = inUV;
      }

      GeomFragShader:

      #version 330
      in vec4 mWVPPosition;
      in vec3 pNormal;
      in vec3 pBinormal;
      in vec3 pTangent;
      in vec2 texCoord;
      uniform mat4 wvpMatrix;
      uniform sampler2D diffuseTexture;
      uniform sampler2D normalTexture;
      uniform sampler2D heightTexture;
      uniform sampler2D specularTexture;
      layout (location = 0) out vec4 colourOut;
      layout (location = 1) out vec4 normalOut;
      void main(void) {
          vec3 bump = 2 * texture(normalTexture, texCoord).xyz - 1;
          vec3 normal = pTangent * bump.x + pBinormal * bump.y + pNormal * bump.z;
          normal = normalize(normal);
          colourOut = texture( diffuseTexture, texCoord );
          // Specular intensity
          vec3 specularSample = texture( specularTexture, texCoord ).xyz;
          colourOut.w = ( specularSample.x + specularSample.y + specularSample.z ) / 3;
          normalOut.xyz = normal;
          normalOut.w = 1;
      }

      PointLightVertShader:

      #version 330
      in vec3 inPos;
      in vec2 inUV;
      uniform mat4 wvpMatrix;
      uniform mat4 ivpMatrix;
      uniform vec2 zBounds;
      uniform vec3 camPos;
      uniform float invRadius;
      uniform vec3 lightPos;
      uniform vec3 lightColour;
      uniform float lightRadius;
      uniform float lightFalloff;
      out vec4 mWVPPosition;
      void main(void) {
          vec3 position = inPos;
          position *= lightRadius;
          position += lightPos;
          mWVPPosition = wvpMatrix * vec4(position, 1.0f);
          gl_Position = mWVPPosition;
      }

      PointLightFragShader:

      #version 330
      in vec4 mWVPPosition;
      uniform mat4 wvpMatrix;
      uniform mat4 ivpMatrix;
      uniform vec2 zBounds;
      uniform vec3 camPos;
      uniform float invRadius;
      uniform vec3 lightPos;
      uniform vec3 lightColour;
      uniform float lightRadius;
      uniform float lightFalloff;
      uniform sampler2D diffuseTexture;
      uniform sampler2D normalTexture;
      uniform sampler2D depthTexture;
      layout (location = 0) out vec4 colourOut;
      void main(void) {
          vec2 UV = mWVPPosition.xy; //entry 13 above corrects this: divide by w and scale-bias into 0..1
          float depth = texture(diffuseTexture, UV).x; //entry 9 above: the depth texture was meant here
          vec3 addedLight = vec3(0,0,0);
          //if (depth >= zBounds.x && depth <= zBounds.y)
          {
              vec4 diffuseTex = texture(diffuseTexture, UV);
              vec4 normal = texture(normalTexture, UV);
              // Screen-space position
              vec4 cPos = vec4(UV, depth, 1);
              // World-space position
              vec4 wPos = ivpMatrix * cPos;
              vec3 pos = wPos.xyz / wPos.w;
              // Lighting vectors
              vec3 lVec = (lightPos - pos) * invRadius;
              vec3 lightVec = normalize(lVec);
              vec3 viewVec = normalize(camPos - pos);
              // Attenuation that falls off to zero at the light radius
              float atten = clamp(1.0f - dot(lVec, lVec), 0.0, 1.0);
              atten *= atten;
              // Lighting
              float colDiffuse = clamp(dot(lightVec, normal.xyz), 0, 1);
              float specular_intensity = diffuseTex.w * 0.4f;
              float specular = specular_intensity * pow(clamp(dot(reflect(-viewVec, normal.xyz), lightVec), 0.0, 1.0), 10.0f);
              addedLight = atten * (colDiffuse * diffuseTex.xyz + specular);
          }
          colourOut = vec4(addedLight.xyz, 1);
      }

      Note that for the moment I am totally ignoring the optimisation of "if (depth >= zBounds.x && depth <= zBounds.y)", because I want to crack the basic reconstruction before experimenting with it.

      Shader binding:

      Matrix4f inverseViewProjection = new Matrix4f();
      Vector3f camPos = cameraController.getActiveCameraPos();
      GL20.glUniform3f(shader.getLocCamPos(), camPos.x, camPos.y, camPos.z);
      inverseViewProjection = cameraController.getActiveVPMatrixInverse();
      //inverseViewProjection = inverseViewProjection.translate(new Vector3f(-1f, 1f, 0));
      //inverseViewProjection = inverseViewProjection.scale(new Vector3f(2, -2, 1));
      inverseViewProjection = inverseViewProjection.scale(new Vector3f(1f/engineParams.getDisplayWidth(), 1f/engineParams.getDisplayHeight(), 1));
      GL20.glUniformMatrix4(shader.getLocmIVPMatrix(), false, OpenGLHelper.getMatrix4ScratchBuffer(inverseViewProjection));

      float nearTest = 0, farTest = 0;
      Matrix4f projection = new Matrix4f(cameraController.getCoreCameraProjection());
      GL20.glUniformMatrix4(shader.getLocmWVP(), false, OpenGLHelper.getMatrix4ScratchBuffer(cameraController.getActiveViewProjectionMatrix()));
      Vector2f zw = new Vector2f(projection.m22, projection.m23);
      //Vector4f testLightViewSpace = new Vector4f(lightPos.getX(), lightPos.getY(), lightPos.getZ(), 1);
      //testLightViewSpace = OpenGLHelper.columnVectorMultiplyMatrixVector((Matrix4f) cameraController.getActiveCameraView(), testLightViewSpace);

      // Compute z-bounds
      Vector4f lPos = OpenGLHelper.columnVectorMultiplyMatrixVector(cameraController.getActiveCameraView(),
              new Vector4f(lightPos.x, lightPos.y, lightPos.z, 1.0f));
      float z1 = lPos.z + lightRadius;
      //if (z1 > NEAR_DEPTH) {
      float z0 = Math.max(lPos.z - lightRadius, NEAR_DEPTH);
      nearTest = (zw.x + zw.y / z0);
      farTest = (zw.x + zw.y / z1);
      if (nearTest > 1) { nearTest = 1; } else if (nearTest < 0) { nearTest = 0; }
      if (farTest > 1) { farTest = 1; } else if (farTest < 0) { farTest = 0; }

      GL20.glUniform3f(shader.getLocLightPos(), lightPos.getX(), lightPos.getY(), lightPos.getZ());
      GL20.glUniform3f(shader.getLocLightColour(), lightColour.getX(), lightColour.getY(), lightColour.getZ());
      GL20.glUniform1f(shader.getLocLightRadius(), lightRadius);
      GL20.glUniform1f(shader.getLocInvRadius(), 1f / lightRadius);
      GL20.glUniform1f(shader.getLocLightFalloff(), lightFalloff);
      GL20.glUniform2f(shader.getLocZBounds(), nearTest, farTest);

      The line "inverseViewProjection = cameraController.getActiveVPMatrixInverse();" depends on the inverse of the multiplied result of these two:

      View matrix:

      public void updateViewMatrix(Matrix4f coreViewMatrix) {
          Matrix4f.setIdentity(coreViewMatrix);
          if (lookAtVector.length() != 0) {
              lookAtVector.normalise();
          }
          Vector3f.cross(up, lookAtVector, right);
          if (right.length() != 0) {
              right.normalise();
          }
          Vector3f.cross(lookAtVector, right, up);
          if (up.length() != 0) {
              up.normalise();
          }
          coreViewMatrix.m00 = right.x;
          coreViewMatrix.m01 = up.x;
          coreViewMatrix.m02 = lookAtVector.x;
          coreViewMatrix.m03 = 0;
          coreViewMatrix.m10 = right.y;
          coreViewMatrix.m11 = up.y;
          coreViewMatrix.m12 = lookAtVector.y;
          coreViewMatrix.m13 = 0;
          coreViewMatrix.m20 = right.z;
          coreViewMatrix.m21 = up.z;
          coreViewMatrix.m22 = lookAtVector.z;
          coreViewMatrix.m23 = 0;
          //Inverse dot from eye position
          coreViewMatrix.m30 = -Vector3f.dot(eyePosition, right);
          coreViewMatrix.m31 = -Vector3f.dot(eyePosition, up);
          coreViewMatrix.m32 = -Vector3f.dot(eyePosition, lookAtVector);
          coreViewMatrix.m33 = 1;
      }

      Projection matrix:

      public static void createProjection(Matrix4f projectionMatrix, float fov, float aspect, float znear, float zfar) {
          float scale = (float) Math.tan((Math.toRadians(fov)) * 0.5f) * znear;
          float r = aspect * scale;
          float l = -r;
          float t = scale;
          float b = -t;
          projectionMatrix.m00 = 2 * znear / (r - l);
          projectionMatrix.m01 = 0;
          projectionMatrix.m02 = 0;
          projectionMatrix.m03 = 0;
          projectionMatrix.m10 = 0;
          projectionMatrix.m11 = 2 * znear / (t - b);
          projectionMatrix.m12 = 0;
          projectionMatrix.m13 = 0;
          projectionMatrix.m20 = (r + l) / (r - l);
          projectionMatrix.m21 = (t + b) / (t - b);
          projectionMatrix.m22 = -(zfar + znear) / (zfar - znear);
          projectionMatrix.m23 = -1;
          projectionMatrix.m30 = 0;
          projectionMatrix.m31 = 0;
          projectionMatrix.m32 = -2 * zfar * znear / (zfar - znear);
          projectionMatrix.m33 = 0;
      }

      TL;DR: Please help me diagnose what is wrong with the lighting in the picture/code above. My holy grail is a working sample of true depth reconstruction in OpenGL, preferably to world space (the shape of what I am after is sketched just below).
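      For anyone arriving from search, here is the shape of the reconstruction I was chasing, as a CPU-side sketch against the helpers above (the later entries further up work through getting the shader version right):

      // Reconstruct a world-space position from a G-buffer sample: uv in 0..1,
      // depth01 as read from the depth texture, invViewProj = (projection * view)^-1.
      public static Vector3f worldFromDepth(Matrix4f invViewProj, float u, float v, float depth01) {
          // Rebuild the NDC position (-1..1 on all three axes in OpenGL)...
          Vector4f ndc = new Vector4f(u * 2f - 1f, v * 2f - 1f, depth01 * 2f - 1f, 1f);
          // ...pull it back through the inverse view-projection...
          Vector4f world = OpenGLHelper.columnVectorMultiplyMatrixVector(invViewProj, ndc);
          // ...and undo the perspective divide.
          return new Vector3f(world.x / world.w, world.y / world.w, world.z / world.w);
      }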
  15. OpenGL

      Looks like I can get to both view and world space if I plug my derived linear depth into: http://mynameismjp.wordpress.com/2009/05/05/reconstructing-position-from-depth-continued/

      Based on the "attack the depth buffer" pixel shader:

      float3 viewRay = float3(Input.PositionVS.xy / Input.PositionVS.z, 1.0f);
      float3 positionVS = LinearizedDepth(depthTextureForPixel) * viewRay;

      where Input.PositionVS is the view-space vertex position from the standard transform. Yours3!f's method includes additional transforms that may be required for OpenGL, but I haven't tested that yet.

      The real work will be getting to world space without a matrix multiplication per pixel; that is my current task (the direction I am considering is sketched below).
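      The direction I am considering, as an untested sketch of my own following the idea in the MJP articles: build the four far-plane corner rays once per frame, hand them to the full-screen pass as vertex attributes, and let the rasteriser's interpolation produce the per-pixel ray, so the only per-pixel work is a multiply-add.

      // World-space rays from the camera to the four far-plane corners; with a
      // linear depth d = viewZ / zfar, worldPos = camPos + ray * d once the
      // corner rays have been interpolated across the screen.
      public static Vector3f[] farPlaneCornerRays(Matrix4f invViewProj, Vector3f camPos) {
          float[][] ndcCorners = { { -1, -1 }, { 1, -1 }, { 1, 1 }, { -1, 1 } };
          Vector3f[] rays = new Vector3f[4];
          for (int i = 0; i < 4; i++) {
              // The far plane is z = 1 in NDC; unproject and divide by w
              Vector4f p = OpenGLHelper.columnVectorMultiplyMatrixVector(invViewProj,
                      new Vector4f(ndcCorners[i][0], ndcCorners[i][1], 1f, 1f));
              rays[i] = new Vector3f(p.x / p.w - camPos.x, p.y / p.w - camPos.y, p.z / p.w - camPos.z);
          }
          return rays;
      }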