
Florian22222

Member
  • Content count

    246
  • Joined

  • Last visited

Community Reputation

618 Good

About Florian22222

  • Rank
    Member
  1. Dynamic Narrative in The Hit

    This was a very interesting read! The only thing I hope for in this game is that there aren't too many content bugs preventing the player from finishing a storyline.
  2. Starting my studio

    Nice article and good luck!   As for Unity3D, you can use Visual Studio as a code editor as well, and it works much better than MonoDevelop. The only thing you can't do is step-by-step debugging in Visual Studio; for that you need to go back to MonoDevelop.
  3. [GLSL] NVIDIA vs ATI shader problem

    OK, I finally solved it with the help of you guys!   The problem was in the attribute locations in the vertex shader. NVIDIA automatically assigns locations according to the order in which the attributes were declared, while ATI needs an explicit layout(location = index) qualifier in front of each attribute. I also switched to #version 330, since those layout() qualifiers are not available in earlier versions. The version change is no problem, since I want to implement a deferred renderer as the next step anyway.   Again, thank you for your help!
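    A minimal sketch of the fix described above, using the attribute names from the vertex shader quoted later in this thread (the specific index values are an assumption and must match whatever the engine binds on the C++ side):

```glsl
#version 330

// Explicit locations: NVIDIA and ATI/AMD now agree on the attribute
// indices instead of relying on driver-assigned declaration order.
layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 2) in vec2 inTexcoord;
layout(location = 3) in vec3 inTangent;
```

    The engine then has to use the same indices in its glVertexAttribPointer/glEnableVertexAttribArray calls, since the layout qualifier overrides any glBindAttribLocation binding.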
  4. [GLSL] NVIDIA vs ATI shader problem

    So, does anyone have a clue why the line posted above would cause a problem on ATI but not on NVIDIA?
  5. [GLSL] NVIDIA vs ATI shader problem

    It gives a compiler error without a #version directive. I decided on version 130 to support as much hardware as possible.
  6. [GLSL] NVIDIA vs ATI shader problem

    I tracked at least one of the issues down to this line in the vertex shader: vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0f)).xyz); If I comment it out, the vertex positions are correct. As soon as I leave the line in, the plane is not rendered at all and the sphere's texcoords are wrong.   Any idea what might cause this?   EDIT: I changed all constant float values to have an f after them.
  7. [GLSL] NVIDIA vs ATI shader problem

    Hello!   I am trying to get my engine running on ATI/AMD (previously I only ran it on NVIDIA).   My shader uses diffuse lighting to render a sphere and a plane (both with the same material). This works totally fine on NVIDIA graphics cards. On ATI it compiles, but shows really weird results when executed: the sphere appears to have an issue with fetching the texture, and the plane does not appear at all. I also tried compiling it with AMD's Shader Analyzer and it compiled without errors or warnings.   Here is the vertex shader:

    uniform mat4 gWVP;
    uniform mat4 gNormalMatrix;
    uniform mat4 gModelViewMatrix;

    in vec3 inPosition;
    in vec3 inNormal;
    in vec2 inTexcoord;
    in vec3 inTangent;

    out vec2 vTexcoord;
    //all of these are in eyespace
    out vec3 vPosition;
    out vec3 vNormal;
    out vec3 vTangent;

    void main()
    {
        gl_Position = gWVP * vec4(inPosition,1.0);
        vNormal = normalize((gNormalMatrix * vec4(inNormal,0.0)).xyz);
        vTangent = normalize((gNormalMatrix * vec4(inTangent,0.0)).xyz);
        vPosition = (gModelViewMatrix * vec4(inPosition,1.0)).xyz;
        vTexcoord = inTexcoord;
    }

    and the fragment shader:

    #version 130

    const int LIGHT_TYPE_NONE = 0;
    const int LIGHT_TYPE_DIRECTIONAL = 1;
    const int LIGHT_TYPE_POINT = 2;

    struct Light
    {
        int lightType;
        vec3 position;
        vec4 diffuse;
        float intensity;
        float constantAttenuation;
        float linearAttenuation;
        float quadraticAttenuation;
    };

    const int NUM_LIGHTS = 4;
    uniform Light gLights[NUM_LIGHTS];
    uniform vec4 gGlobalAmbient;

    vec4 calculateDiffuse(Light light, vec4 surfaceColor, vec3 normal, vec3 lightDir)
    {
        vec4 outColor = vec4(0.0);
        vec3 normalizedLightDir = normalize(lightDir);
        float NdotL = max(dot(normal,normalizedLightDir),0.0);
        if(light.lightType == LIGHT_TYPE_DIRECTIONAL)
        {
            if(NdotL > 0.0)
            {
                outColor += surfaceColor * light.diffuse * light.intensity * NdotL;
            }
        }
        else if(light.lightType == LIGHT_TYPE_POINT)
        {
            float dist = length(lightDir);
            if(NdotL > 0.0)
            {
                float attenuation = 1.0 / (light.constantAttenuation +
                                           light.linearAttenuation * dist +
                                           light.quadraticAttenuation * dist * dist);
                outColor += surfaceColor * light.diffuse * light.intensity * attenuation * NdotL;
            }
        }
        return outColor;
    }

    uniform sampler2D gMainTexture;

    in vec2 vTexcoord;
    in vec3 vPosition;
    in vec3 vNormal;
    in vec3 vTangent;

    out vec4 outColor;

    void main(void)
    {
        vec4 texDiffuse = texture(gMainTexture,vTexcoord);
        vec3 normal = normalize(vNormal);
        vec3 tangent = normalize(vTangent);
        vec3 bitangent = cross(normal, tangent);
        mat3 tangentSpaceMatrix = mat3(
            tangent.x, bitangent.x, normal.x,
            tangent.y, bitangent.y, normal.y,
            tangent.z, bitangent.z, normal.z
        );
        //ambient
        outColor = texDiffuse * gGlobalAmbient;
        for(int i = 0; i < NUM_LIGHTS; i++)
        {
            vec3 lightDir = vec3(0.0);
            if(gLights[i].lightType == LIGHT_TYPE_DIRECTIONAL)
            {
                lightDir = normalize(tangentSpaceMatrix * gLights[i].position);
            }
            else if(gLights[i].lightType == LIGHT_TYPE_POINT)
            {
                lightDir = tangentSpaceMatrix * (gLights[i].position - vPosition);
            }
            if(gLights[i].lightType != LIGHT_TYPE_NONE)
            {
                outColor += calculateDiffuse(gLights[i],texDiffuse,vec3(0.0,0.0,1.0),lightDir);
            }
        }
    }

    I hope someone can point out the issue to me, or help me find a way to debug it (since it's not a compiler error).   Thanks in advance!
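    Since the shader compiles cleanly on both vendors, one common way to narrow a problem like this down (a debugging sketch, not part of the original post) is to temporarily replace the lighting result with the raw interpolated inputs, so each varying can be inspected visually:

```glsl
// Debug sketch: put one of these at the very end of main() in the
// fragment shader to see which varying carries broken data.
outColor = vec4(normalize(vNormal) * 0.5 + 0.5, 1.0); // normals mapped to RGB
//outColor = vec4(vTexcoord, 0.0, 1.0);               // texcoords mapped to RG
//outColor = texture(gMainTexture, vTexcoord);        // raw texture fetch
```

    If the normals or texcoords already look wrong here, the problem is in the vertex inputs or attribute bindings rather than in the lighting math.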
  8. [GLSL] Question about normal mapping

    Thanks Johnny Code for the clarification. I want to shade 4 lights.     Well, it's highly unlikely that there are fewer fragments to fill than vertices, at least in the game I am making (mid poly count with a cartoony art style).
  9. [GLSL] Question about normal mapping

    The method you mentioned here passes data from the CPU to the fragment shader.  What I want is to pass multiple vec3's from the vertex to the fragment shader.   I used out vec3 vLightDirs[4]; to pass 4 light dirs to the fragment shader. I just hope that this works on every GPU.
  10. [GLSL] Question about normal mapping

    Isn't it very inefficient to build the TBN (tangent space) matrix in the fragment shader?   I used a vec3 vLightDirs[NUM_LIGHTS] array to pass the light dirs in tangent space. Is this going to cause problems on certain GPUs?
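    A sketch of the vertex-shader-side approach described above, reusing the uniform and attribute names from the shaders quoted in the NVIDIA-vs-ATI thread (the gLightPositions uniform and the location indices are assumptions for illustration):

```glsl
#version 330

const int NUM_LIGHTS = 4;
uniform vec3 gLightPositions[NUM_LIGHTS]; // eye-space light positions (assumed uniform)
uniform mat4 gWVP;
uniform mat4 gNormalMatrix;
uniform mat4 gModelViewMatrix;

layout(location = 0) in vec3 inPosition;
layout(location = 1) in vec3 inNormal;
layout(location = 3) in vec3 inTangent;

out vec3 vLightDirs[NUM_LIGHTS]; // tangent-space light dirs, interpolated per fragment

void main()
{
    gl_Position = gWVP * vec4(inPosition, 1.0);

    vec3 n = normalize((gNormalMatrix * vec4(inNormal, 0.0)).xyz);
    vec3 t = normalize((gNormalMatrix * vec4(inTangent, 0.0)).xyz);
    vec3 b = cross(n, t);

    // Same construction as the fragment-shader version: multiplying by
    // this matrix takes an eye-space vector into tangent space.
    mat3 tbn = mat3(t.x, b.x, n.x,
                    t.y, b.y, n.y,
                    t.z, b.z, n.z);

    vec3 posEye = (gModelViewMatrix * vec4(inPosition, 1.0)).xyz;
    for(int i = 0; i < NUM_LIGHTS; i++)
        vLightDirs[i] = tbn * (gLightPositions[i] - posEye);
}
```

    This moves the per-pixel matrix construction to a per-vertex cost; the trade-off is one extra interpolated vec3 per light.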
  11. wglChoosePixelFormatARB for MSAA very slow

    It kind of seems to have fixed itself. It's now much faster, although I didn't change anything on my system... It still takes its time, but it's not that bad anymore. I tried adding LoadLibrary(), but that didn't change much.   Thanks for your help anyway; I think I can live with this half a second at startup.
  12. [GLSL] Question about normal mapping

    I took a look at some tutorials on the internet about how to do normal mapping, and they all convert the light direction into tangent space.   In my shader setup, every shader has a "#define NUM_LIGHTS" and a respective array of light structs.   Every tutorial I looked at only calculated the lightDir for one light and passed it on to the fragment shader with the "out" modifier. How would I go about passing, for example, 4 lightDirs to the fragment shader without creating a vec3 lightDir0; vec3 lightDir1; etc. for each light? I looked up arrays with the out modifier, but it seems those don't work on every card?   Hopefully someone can clarify this for me.   Have a nice day!
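    A minimal sketch of the array-varying approach asked about above (hypothetical declarations; the NUM_LIGHTS value mirrors the #define mentioned in the post):

```glsl
// --- vertex shader ---
#version 330
#define NUM_LIGHTS 4
out vec3 vLightDirs[NUM_LIGHTS]; // one tangent-space light dir per light

// --- fragment shader ---
#version 330
#define NUM_LIGHTS 4
in vec3 vLightDirs[NUM_LIGHTS];  // must match the vertex-shader declaration exactly
```

    Arrays of varyings are part of core GLSL from 1.30 onward; the element type and count have to match between the stages. Each array element does consume one interpolator slot, which is the practical limit on older hardware (queryable via GL_MAX_VARYING_COMPONENTS).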
  13. wglChoosePixelFormatARB for MSAA very slow

    Adding the WGL_SWAP_METHOD_ARB, WGL_SWAP_EXCHANGE_ARB and WGL_PIXEL_TYPE_ARB, WGL_TYPE_RGBA_ARB pairs to the iAttributes array actually makes the whole thing run slower than before.   Here's the full iAttributes array without the 2 additions:

    int iAttributes[] =
    {
        WGL_DRAW_TO_WINDOW_ARB, GL_TRUE,
        WGL_SUPPORT_OPENGL_ARB, GL_TRUE,
        WGL_ACCELERATION_ARB, WGL_FULL_ACCELERATION_ARB,
        WGL_COLOR_BITS_ARB, 24,
        WGL_ALPHA_BITS_ARB, 8,
        WGL_DEPTH_BITS_ARB, 16,
        WGL_STENCIL_BITS_ARB, 0,
        WGL_DOUBLE_BUFFER_ARB, GL_TRUE,
        WGL_SAMPLE_BUFFERS_ARB, 1,
        WGL_SAMPLES_ARB, 16,
        0, 0
    };
  14. wglChoosePixelFormatARB for MSAA very slow

    I have an NVIDIA GeForce GTX 560 Ti. Well, yes, it's not a performance-critical part of my engine, yet it bugs me to have this delay in there after starting...
  15. wglChoosePixelFormatARB for MSAA very slow

    What do you mean by "what graphics are you using"?   I updated my graphics card driver and it didn't change the timing at all.   I have no small demo, unfortunately; it's all incorporated in my engine. I just let a friend test it on his computer, and there, without Visual Studio, it takes about 1-2 seconds (a bit faster than when I run it from Visual Studio).   EDIT: Just read the second answer. Is there any workaround for this?