Jack Shannon

Member
  • Content count

    24
  • Joined

  • Last visited

Community Reputation

492 Neutral

About Jack Shannon

  • Rank
    Member
  1. Idea for a demo of my 3D engine

    Thank you very much for your replies! I guess there is no easy way around it like I was hoping, so I'll just have to keep thinking and finding inspiration. :)
  2. What I'm having trouble with is how best to show my engine, and I'm going to have to start sending my portfolio off to companies in under a month, as I'm doing an industry year out from university!

    So I've finally implemented all the techniques that I want in my engine: models, terrain, sky, atmosphere, shadows, SSAO, physics, etc. Now I just need content!

    I'm a capable artist when it comes to copying things (if I have references), and am competent with the necessary tools in the content pipeline. I just can't come up with ideas.

    I was thinking of going to a local park or something, taking loads of photographs, and just recreating that scene. Could this be a good idea to send to companies, or is there anywhere I can find cool concept art for levels that I can use for free? I would just like some advice from anyone who has created a portfolio piece for their engine!

    Thanks

    Jack
  3. Hi Hodgman, I'm currently writing my builder in a similar way using Python; this was really helpful! I was just wondering what kind of intermediate format you use to export your scene information from a program like Maya — for example, do you export a scene graph or just the object positions?
  4. Shader "plugin" system?

    In your ambient shader have something like:

        #ifdef OPTION_04 // SSAO
        vDiffuseMaterial.rgb *= 1.0 - SSAO.Sample(InputSampler, i.vTex0).r;
        #endif

    Then you need to load a permutation of that shader with the macro "#define OPTION_04\n" prepended when your SSAO plugin is enabled. In your config file you could specify which numbered options need to be enabled.
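    The loading side of this can be sketched as a small string-building step. The function name (buildPermutation) and the OPTION_ numbering scheme are illustrative, not from any real engine:

```cpp
#include <string>
#include <vector>

// Build one permutation of a shader source by prepending a
// "#define OPTION_nn" line for each enabled plugin option.
std::string buildPermutation(const std::string& baseSource,
                             const std::vector<int>& enabledOptions) {
    std::string defines;
    for (int opt : enabledOptions) {
        defines += "#define OPTION_";
        if (opt < 10) defines += '0';            // zero-pad to two digits
        defines += std::to_string(opt) + "\n";
    }
    return defines + baseSource;                 // hand this string to the compiler
}
```

    For example, buildPermutation(ambientSource, {4}) prepends "#define OPTION_04\n", enabling the SSAO branch above at compile time.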
  5. Convert RGBA buffer to ARGB

    LodePNG only consists of a .hpp and a .cpp file and doesn't use libpng.
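    The reply above covers the PNG-loading side; for the channel conversion the thread title asks about, here is a minimal in-place byte swizzle for an 8-bit-per-channel buffer (function and variable names are illustrative):

```cpp
#include <cstddef>
#include <cstdint>

// In-place RGBA -> ARGB swizzle: shift R, G, B up one byte and move
// alpha to the front of each 4-byte pixel.
void rgbaToArgb(std::uint8_t* buf, std::size_t pixelCount) {
    for (std::size_t i = 0; i < pixelCount; ++i) {
        std::uint8_t* p = buf + i * 4;
        std::uint8_t a = p[3];  // save alpha
        p[3] = p[2];            // B becomes the last byte
        p[2] = p[1];            // G
        p[1] = p[0];            // R
        p[0] = a;               // alpha becomes the first byte
    }
}
```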
  6. I'm attempting to implement shadow mapping in my deferred pipeline. I've verified that my shadow map creation is correct, but am now struggling to implement shadow occlusion.

    Here is the fragment shader:

        in vec2 f_uv;
        in vec3 frustumCornerVS;

        out vec4 fragment;

        uniform mat4 shadowMatrix;
        uniform sampler2D gbuffer_3_tex;
        uniform sampler2D shadow_map_tex;

        void main()
        {
            // reconstruct the pixel position into view space from the depth buffer
            float pixelDepthCS = texture(gbuffer_3_tex, f_uv).r;
            vec3 pixelVS = pixelDepthCS * frustumCornerVS;

            // get pixel position in light clip space
            vec4 pixelLightCS = shadowMatrix * vec4(pixelVS, 1);
            pixelLightCS /= pixelLightCS.w;

            // sample shadow map
            float litDepth = texture(shadow_map_tex, pixelLightCS.xy).r;

            // test for occlusion
            float occlusion = 0;
            if (pixelLightCS.z < litDepth) {
                occlusion += 1;
            }

            fragment = vec4(occlusion, occlusion, occlusion, 1);
        }

    Now, I know that pixelVS (the pixel in view space) is correct because I've debugged it. I think the problem is with shadowMatrix. The code for building the shadow matrix is:

        // scale and bias
        glm::mat4 bias = glm::scale(0.5f, 0.5f, 0.5f);
        bias = glm::translate(bias, glm::vec3(0.5, 0.5, 0.5));

        glm::mat4 view = scene->camera->getView();
        glm::mat4 shadowMatrix = bias * (shadowCaster->projection * shadowCaster->view) * glm::inverse(view);

    I'm not sure it's doing exactly what I want it to: transforming from view space into world space, then into the shadow caster's clip space, then a scale and bias into texture coordinates.

    What could I be doing wrong?

    Edit: This was such a poor question, and there were many things wrong that had nothing to do with this. Feel free to delete.
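    One detail worth double-checking in code like the above: glm::translate(m, v) post-multiplies, so calling glm::scale first builds S * T rather than the usual T * S texture bias, and the two compositions map NDC differently. A quick scalar sketch of both orders (function names are illustrative):

```cpp
// The texture-space bias should map NDC [-1, 1] onto [0, 1]:
// scale by 0.5, then offset by 0.5 (bias = T(0.5) * S(0.5) with column vectors).
double biasCorrect(double p) { return 0.5 * p + 0.5; }

// Composing the other way round (bias = S(0.5) * T(0.5), which is what
// glm::scale followed by glm::translate builds) offsets first, then scales.
double biasSwapped(double p) { return 0.5 * (p + 0.5); }
```

    With the swapped order, NDC -1 maps to -0.25 instead of 0, so the shadow-map lookup samples outside the [0, 1] texture range.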
  7. Thank you! It was just that!
  8. I'm trying to multiply the NDC frustum points by the inverse of the projection matrix to give me the frustum corners in view space. Here is my code:

        void getFrustumCorners(std::vector<glm::vec3>& corners, glm::mat4 projection)
        {
            corners.clear();

            // homogeneous corner coords
            glm::vec4 hcorners[8];

            // near
            hcorners[0] = glm::vec4(-1,  1,  1, 1);
            hcorners[1] = glm::vec4( 1,  1,  1, 1);
            hcorners[2] = glm::vec4( 1, -1,  1, 1);
            hcorners[3] = glm::vec4(-1, -1,  1, 1);

            // far
            hcorners[4] = glm::vec4(-1,  1, -1, 1);
            hcorners[5] = glm::vec4( 1,  1, -1, 1);
            hcorners[6] = glm::vec4( 1, -1, -1, 1);
            hcorners[7] = glm::vec4(-1, -1, -1, 1);

            glm::mat4 inverseProj = glm::inverse(projection);

            for (int i = 0; i < 8; i++) {
                hcorners[i] = hcorners[i] * inverseProj;
                hcorners[i] /= hcorners[i].w;
                corners.push_back(glm::vec3(hcorners[i]));
            }
        }

        int main()
        {
            auto proj = glm::perspective(56.25f, 720.0f / 450.0f, 0.1f, 100.0f);

            std::vector<glm::vec3> corners;
            getFrustumCorners(corners, proj);

            for (auto c : corners) {
                std::cout << c.x << " " << c.y << " " << c.z << std::endl;
            }

            return 0;
        }

    The output this is giving me is:

        -0.213538 0.133461 -1.24719
        0.213538 0.133461 -1.24719
        0.213538 -0.133461 -1.24719
        -0.213538 -0.133461 -1.24719
        -0.142418 0.089011 -0.831807
        0.142418 0.089011 -0.831807
        0.142418 -0.089011 -0.831807
        -0.142418 -0.089011 -0.831807

    This can't be correct? Surely when using a zNear of 0.1 and a zFar of 100.0, the difference between the near and far plane coords should be just under 100.0?

    What am I doing wrong?
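    One ordering point worth checking in code like the above: glm treats vectors as column vectors, so transforming a point must be written M * v; writing v * M multiplies by the transpose instead, and glm accepts both without complaint. A hand-rolled sketch (no glm dependency; Mat4, Vec4, and the helper names are illustrative) showing how the order changes the result for a translation matrix:

```cpp
#include <array>

using Vec4 = std::array<double, 4>;
using Mat4 = std::array<Vec4, 4>;  // m[col][row], matching glm's column-major layout

// Build a translation matrix: identity with the offset in the last column.
Mat4 translation(double x, double y, double z) {
    Mat4 m{};                                   // zero-initialized
    for (int i = 0; i < 4; ++i) m[i][i] = 1.0;  // identity diagonal
    m[3][0] = x; m[3][1] = y; m[3][2] = z;      // translation column
    return m;
}

// M * v: v treated as a column vector (the convention for transforming points).
Vec4 mulMatVec(const Mat4& m, const Vec4& v) {
    Vec4 r{0, 0, 0, 0};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            r[row] += m[col][row] * v[col];
    return r;
}

// v * M: v treated as a row vector -- equivalent to transpose(M) * v.
Vec4 mulVecMat(const Vec4& v, const Mat4& m) {
    Vec4 r{0, 0, 0, 0};
    for (int col = 0; col < 4; ++col)
        for (int row = 0; row < 4; ++row)
            r[col] += v[row] * m[col][row];
    return r;
}
```

    Transforming the origin (0, 0, 0, 1) by a translation of (1, 2, 3) with mulMatVec yields (1, 2, 3, 1), while mulVecMat leaves the point unmoved because the translation column ends up transposed away. The same ordering question applies to hcorners[i] * inverseProj above, since the two expressions compute different things.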
  9. So It’s Come to This

    I always enjoy your posts, L. Spiro. You are so generous with your advice and knowledge. I wish you all the best!
  10. Strange specular with normal map (BlinnPhong)

    It turns out that the speculars are correct, but are only being rendered on half of the plane. This demonstrates it more clearly:

    Why could this be?

    EDIT: Solved! It turned out that I shouldn't have normalised tbn_viewDirection; this fixed it:
  11. SOLVED

    Please can someone take a look at my normal map code? The diffuse is fine.

    Without normal map:

    With normal map:

    Vertex shader:

        /* The following can be defined:
         * TEXTURED
         * NORMALMAPPED
         */

        in vec3 v_position;
        #if defined(TEXTURED) || defined(NORMALMAPPED)
        in vec2 v_uv;
        #endif
        in vec3 v_normal;
        #ifdef NORMALMAPPED
        in vec4 v_tangent;
        #endif

        layout(std140) uniform Transform {
            mat4 t_model_view_proj;
            mat4 t_model_view;
            mat3 t_normal;
        };

        #if defined(TEXTURED) || defined(NORMALMAPPED)
        out vec2 f_uv;
        #endif

        #ifdef NORMALMAPPED
        layout(std140) uniform Light {
            int l_type;
            vec3 l_orientation;
            float l_attenuation;
            vec3 l_ambient;
            vec3 l_diffuse;
            vec3 l_specular;
        };
        out vec3 tbn_lightDirection;
        out vec3 tbn_viewDirection;
        #else
        out vec3 f_position;
        out vec3 f_normal;
        out vec3 f_viewDirection;
        #endif

        void main()
        {
            gl_Position = t_model_view_proj * vec4(v_position, 1.0);
        #if defined(TEXTURED) || defined(NORMALMAPPED)
            f_uv = v_uv;
        #endif
        #ifdef NORMALMAPPED
            // tangent.z stores the m value, which is the determinant of the
            // object space to tangent space matrix
            vec3 v_bitangent = v_tangent.z * cross(v_normal, v_tangent.xyz);
            mat3 TBN = transpose(mat3(v_tangent.xyz, v_bitangent, v_normal));
            tbn_lightDirection = normalize(TBN * l_orientation);
            tbn_viewDirection = -normalize(TBN * v_position);
        #else
            f_position = (t_model_view * vec4(v_position, 1.0)).xyz;
            f_normal = normalize(t_normal * v_normal);
            f_viewDirection = -normalize(t_model_view * vec4(v_position, 1.0)).xyz;
        #endif
        }

    Fragment shader:

        /* The following can be defined:
         * TEXTURED
         * NORMALMAPPED
         */

        #define LTYPE_DIRECTIONAL 0
        #define LTYPE_POINT 1

        #if defined(TEXTURED) || defined(NORMALMAPPED)
        in vec2 f_uv;
        #endif
        #ifdef TEXTURED
        uniform sampler2D diffuseTexture;
        #endif
        #ifdef NORMALMAPPED
        uniform sampler2D normalTexture;
        in vec3 tbn_lightDirection;
        in vec3 tbn_viewDirection;
        #else
        in vec3 f_position;
        in vec3 f_normal;
        in vec3 f_viewDirection;
        #endif

        layout(std140) uniform Light {
            int l_type;
            vec3 l_orientation;
            float l_attenuation;
            vec3 l_ambient;
            vec3 l_diffuse;
            vec3 l_specular;
        };

        layout(std140) uniform Material {
            vec3 m_ambient;
            vec3 m_diffuse;
            vec3 m_specular; // scale specular by shininess strength in external tool
            float m_shininess;
        };

        out vec4 fragment;

        float lambert(vec3 lightDirection, vec3 normal)
        {
            float lambertTerm = dot(lightDirection, normal);
            lambertTerm = clamp(lambertTerm, 0, 1);
            return lambertTerm;
        }

        float blinnPhong(vec3 lightDirection, vec3 normal, vec3 viewDirection)
        {
            vec3 halfwayDirection = normalize(lightDirection + viewDirection);
            float blinnTerm = dot(normal, halfwayDirection);
            blinnTerm = clamp(blinnTerm, 0, 1);
            blinnTerm = pow(blinnTerm, m_shininess);
            return blinnTerm;
        }

        // not normalized
        vec3 getL()
        {
        #ifdef NORMALMAPPED
            return normalize(tbn_lightDirection);
        #else
            if (l_type == LTYPE_DIRECTIONAL) {
                return l_orientation;
            } else if (l_type == LTYPE_POINT) {
                return l_orientation - f_position;
            }
        #endif
        }

        vec3 getN()
        {
        #ifdef NORMALMAPPED
            vec3 tbnNormal = texture(normalTexture, f_uv).rgb * 2.0 - 1.0;
            return normalize(tbnNormal);
        #else
            return normalize(f_normal);
        #endif
        }

        vec3 getV()
        {
        #ifdef NORMALMAPPED
            return normalize(tbn_viewDirection);
        #else
            return normalize(f_viewDirection);
        #endif
        }

        void main()
        {
            vec3 lightDiffuse, lightSpecular = vec3(0);
            vec3 lightVector = getL();
            vec3 l = normalize(lightVector);
            vec3 n = getN();

            float lambertTerm = lambert(l, n);
            lightDiffuse = l_diffuse * lambertTerm;

            if (lambertTerm > 0) {
                vec3 v = getV();
                lightSpecular = l_specular * blinnPhong(l, n, v);
            }

        #ifdef TEXTURED
            vec3 diffuse = texture(diffuseTexture, f_uv).xyz * lightDiffuse;
        #else
            vec3 diffuse = m_diffuse * lightDiffuse;
        #endif
            vec3 specular = m_specular * lightSpecular;

            float attenuation;
            if (l_type == LTYPE_DIRECTIONAL) {
                attenuation = 1;
            } else if (l_type == LTYPE_POINT) {
                float distanceToLight = length(lightVector);
                // assign the outer variable; re-declaring attenuation here would
                // shadow it and leave the outer one uninitialized for point lights
                attenuation = 1.0 / (1.0 + l_attenuation * pow(distanceToLight, 2));
            }

            fragment = vec4(m_ambient + attenuation * (diffuse + specular), 1.0);
            //fragment = vec4(diffuse + specular, 1.0);
        }
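    The point-light falloff used in the fragment shader, attenuation = 1 / (1 + k * d^2), can be sanity-checked on the CPU; here k stands in for l_attenuation and the function name is illustrative:

```cpp
// Inverse-quadratic falloff: exactly 1 at the light's position and
// decaying smoothly toward 0 with distance d, for attenuation factor k.
double attenuation(double k, double d) {
    return 1.0 / (1.0 + k * d * d);
}
```

    For k = 0.5 this gives 1.0 at d = 0 and 1/3 at d = 2, so the ambient term is the only contribution left at large distances.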
  12. Take a look at Greenfoot. I haven't tried it myself, but a professor at my university founded it and his books are of a very high standard.   http://www.greenfoot.org/door
  13. If you'd like something more generic, try CML, http://cmldev.net/
  14. Pre-Visualization Is Important!

    Thank you for this. I have also seen responses like the ones you mentioned: seasoned professionals saying that they figure it all out in their heads as they go along. This is the approach I have been taking, and for me, 50% of the time it results in spaghetti code and I get really confused and frustrated about the overall design.
  15. Thanks again L. Spiro!