tmason

Members
  • Content count: 134
  • Joined

  • Last visited

Community Reputation

326 Neutral

About tmason

  • Rank: Member
  1. GLSL; return statement ...

    > As a rule of thumb you should avoid unnecessarily changing render states or bound resources, yes... But you should also avoid unnecessary 'if' statements within shaders! You're making every pixel much more expensive in order to save a little bit of CPU time.

    I see; is there a happy medium in between where I can use one shader to do simple stuff like drawing skyboxes/wireframes? I understand what folks are saying here, but having multiple shaders just for simple tasks like drawing skyboxes, wireframes, and other basic things seems like a waste, no?
  2. GLSL; return statement ...

    Thank you for all of the feedback. Since I am a beginner, maybe my thinking isn't correct on this, but I will share it in hopes of hearing what the experts have to say on the matter: I wanted to use one (1) shader for drawing so that I don't have to constantly switch shaders CPU-side. The shader would use float-based uniforms to choose whether to draw:
      • Wireframe
      • Skybox (using a texture2D sampler)
      • Standard shading (with different texture channels for ambient, diffuse, specular, emissive, etc.)
    Within my standard shading model I plan to have the capability to calculate multiple lights and be able to position them, etc. I would also have the capability to turn lighting "on" or "off" so that I can show just the ambient/diffuse textures without lighting calculations. The method I was going about this with is shown in my OP as an example: if a "drawWireframe" uniform is equal to 1.0, use the wireframe section of code in my shader, but if that uniform is 0.0, do something else. In my mind this is simpler than using multiple shaders, as I am just altering the uniforms in memory but not changing shaders, and from what I read you should avoid changing shaders if you can. Let me know if this makes sense and if my thinking is correct here (one alternative I am considering is sketched below). Thank you.
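    (The alternative just mentioned: keep one shader source file, but compile a few specialized programs from it by injecting a #define CPU-side, so each variant carries no per-pixel 'if'. A minimal sketch only; the function and macro names here are made up for illustration:)

        #include <GL/glew.h>
        #include <string>

        // Compile one fragment-shader source into a specialized variant by
        // injecting a #define; branching happens at compile time, not per pixel.
        GLuint CompileFragmentVariant(const std::string& source, const std::string& define)
        {
            // GLSL requires #version on the first line, so splice the #define
            // in right after it (assumes the source starts with "#version ...\n").
            size_t firstLineEnd = source.find('\n') + 1;
            std::string patched = source.substr(0, firstLineEnd)
                                + "#define " + define + "\n"
                                + source.substr(firstLineEnd);

            GLuint shader = glCreateShader(GL_FRAGMENT_SHADER);
            const char* text = patched.c_str();
            glShaderSource(shader, 1, &text, nullptr);
            glCompileShader(shader); // real code should check GL_COMPILE_STATUS
            return shader;
        }

        // Usage: one file, three specialized programs.
        // GLuint skybox    = CompileFragmentVariant(src, "DRAW_SKYBOX");
        // GLuint wireframe = CompileFragmentVariant(src, "DRAW_WIREFRAME");
        // GLuint standard  = CompileFragmentVariant(src, "STANDARD_SHADING");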
  3. Hello, Simple question about GLSL and fragment shaders. May I return the "final color" early if I don't need to do any further processing in the void main function? Consider the following example:

        #version 330 core

        out vec4 finalColor;

        uniform float drawWireframe;
        uniform vec4 materialColor;

        vec4 fancyLightingFunction(vec4 colorToProcess)
        {
            // Fancy lighting function here...
        }

        void main()
        {
            if (drawWireframe == 1.0)
                finalColor = materialColor;
                // May I call "return" here?

            finalColor = fancyLightingFunction(materialColor);
        }

    Thank you for your time.
  4. Thank you so much; this sounds like a workable solution. I could extend this using threads to keep the sorting dynamic as the camera moves around. I am just curious how to do this without interfering too much with the rendering thread; I'll need a mutex to lock both the OpenGL object vector and the camera-distance vector until sorting is complete whenever the camera moves. A rough sketch of what I have in mind is below.
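    (Just a sketch of the idea; the types are placeholders for my own, and the render thread would lock the same mutex before reading drawOrder:)

        #include <algorithm>
        #include <mutex>
        #include <vector>
        #include <glm/glm.hpp>

        std::mutex orderMutex;
        std::vector<size_t> drawOrder; // indices into the object vector, back-to-front

        // Worker thread: recompute the draw order for a snapshot of the camera
        // position; only the final swap touches state shared with the renderer.
        void ResortForCamera(const glm::vec3& cameraPos, const std::vector<glm::vec3>& centers)
        {
            std::vector<size_t> newOrder(centers.size());
            for (size_t i = 0; i < newOrder.size(); ++i)
                newOrder[i] = i;

            std::sort(newOrder.begin(), newOrder.end(), [&](size_t a, size_t b) {
                glm::vec3 da = centers[a] - cameraPos;
                glm::vec3 db = centers[b] - cameraPos;
                return glm::dot(da, da) > glm::dot(db, db); // farthest first
            });

            std::lock_guard<std::mutex> lock(orderMutex);
            drawOrder.swap(newOrder); // cheap swap while holding the lock
        }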
  5. Hello, So I would like to optimize my draw calls to account for transparency. Long story short, what I have been doing is sorting all of my OpenGL meshes from opaque to fully transparent before the scene is loaded and then drawing. However, after reading up on it, transparent objects must also be sorted from furthest to nearest. The problem is I am not sure how this can be done dynamically in the scene when iterating an std::vector or QVector of all the objects I need to draw. Consider the logical steps I am going through now:
      • Load all of my objects into RAM via loaders; the loaders assign the correct materials to each object (colors, textures, etc.).
      • Sort by transparency.
      • Load into OpenGL.
      • Draw in the rendering loop.
    I guess what I am asking is: how do people sort the objects drawn from furthest to nearest per frame based on the active camera position? What's the high-speed way that I am missing? (One commonly suggested approach is sketched below.) Thank you.
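    (The commonly suggested approach, as far as I can tell: keep opaque and transparent objects in separate lists, draw the opaque list first, and re-sort only the transparent list every frame by distance to the active camera. A minimal sketch, with a hypothetical Object type standing in for the real mesh class:)

        #include <algorithm>
        #include <vector>
        #include <glm/glm.hpp>

        struct Object { glm::vec3 center; /* mesh handle, material, ... */ };

        // Run once per frame, before the transparent draw pass. Comparing
        // squared distances avoids a sqrt per comparison.
        void SortBackToFront(std::vector<Object*>& transparent, const glm::vec3& cameraPos)
        {
            std::sort(transparent.begin(), transparent.end(),
                      [&](const Object* a, const Object* b) {
                          glm::vec3 da = a->center - cameraPos;
                          glm::vec3 db = b->center - cameraPos;
                          return glm::dot(da, da) > glm::dot(db, db); // farthest drawn first
                      });
        }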
  6. Hello, So I have been looking around for a good function that generates a sphere for OpenGL and I found the function below. I am not trying to simply cut and paste code into my project, so I am trying to understand how it works. It seems to me that the first three arguments are the x, y, and z coordinates of the center of the sphere and the fourth is the radius. What I don't understand is the fifth argument; is that the number of sections or "poles" for the sphere being generated? (My reading of the loops is noted after the code.) Thank you for your time. Source: https://gist.github.com/stuartjmoore/1076642

        void renderSphere(float cx, float cy, float cz, float r, int p)
        {
            float theta1 = 0.0, theta2 = 0.0, theta3 = 0.0;
            float ex = 0.0f, ey = 0.0f, ez = 0.0f;
            float px = 0.0f, py = 0.0f, pz = 0.0f;
            GLfloat vertices[p*6+6], normals[p*6+6], texCoords[p*4+4];

            if( r < 0 ) r = -r;
            if( p < 0 ) p = -p;

            for(int i = 0; i < p/2; ++i)
            {
                theta1 = i * (M_PI*2) / p - M_PI_2;
                theta2 = (i + 1) * (M_PI*2) / p - M_PI_2;

                for(int j = 0; j <= p; ++j)
                {
                    theta3 = j * (M_PI*2) / p;

                    ex = cosf(theta2) * cosf(theta3);
                    ey = sinf(theta2);
                    ez = cosf(theta2) * sinf(theta3);
                    px = cx + r * ex;
                    py = cy + r * ey;
                    pz = cz + r * ez;

                    vertices[(6*j)+(0%6)] = px;
                    vertices[(6*j)+(1%6)] = py;
                    vertices[(6*j)+(2%6)] = pz;
                    normals[(6*j)+(0%6)] = ex;
                    normals[(6*j)+(1%6)] = ey;
                    normals[(6*j)+(2%6)] = ez;
                    texCoords[(4*j)+(0%4)] = -(j/(float)p);
                    texCoords[(4*j)+(1%4)] = 2*(i+1)/(float)p;

                    ex = cosf(theta1) * cosf(theta3);
                    ey = sinf(theta1);
                    ez = cosf(theta1) * sinf(theta3);
                    px = cx + r * ex;
                    py = cy + r * ey;
                    pz = cz + r * ez;

                    vertices[(6*j)+(3%6)] = px;
                    vertices[(6*j)+(4%6)] = py;
                    vertices[(6*j)+(5%6)] = pz;
                    normals[(6*j)+(3%6)] = ex;
                    normals[(6*j)+(4%6)] = ey;
                    normals[(6*j)+(5%6)] = ez;
                    texCoords[(4*j)+(2%4)] = -(j/(float)p);
                    texCoords[(4*j)+(3%4)] = 2*i/(float)p;
                }

                glVertexPointer(3, GL_FLOAT, 0, vertices);
                glNormalPointer(GL_FLOAT, 0, normals);
                glTexCoordPointer(2, GL_FLOAT, 0, texCoords);
                glDrawArrays(GL_TRIANGLE_STRIP, 0, (p+1)*2);
            }
        }
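    (My current reading of the loops, which may be wrong: the outer loop walks p/2 stacks and the inner loop p slices, so the fifth argument looks like a single tessellation/precision value rather than a count of "poles". For example:)

        // Unit sphere at the origin; higher p = smoother sphere, more vertices.
        renderSphere(0.0f, 0.0f, 0.0f, 1.0f, 16);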
  7. Hello, I am wondering if anyone has used Qt datatypes (QMatrix4x4, QMatrix3x3, QVector3D, QVector4D, etc.) with OpenGL. I know that Qt has its own way of dealing with OpenGL, but I have an existing, working program that I built using GLM that I need to port over. Ideally, I would like to keep using GLM datatypes; the problem is that Qt's shader management classes (QOpenGLShaderProgram and QOpenGLShader) don't seem to accept pointers in their setters (for example, I didn't find a setUniformValue() overload that accepts a pointer to a float array for a matrix). Hopefully someone has experience with this and can shed some light; I'd rather use GLM's datatypes since I have a working system built on them already. However, since Qt is where I need to go (I need the toolbar/menu/UI functionality Qt provides), I am willing to convert if necessary. (One workaround I am considering is sketched below.) Thank you.
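    (The workaround just mentioned: QOpenGLShaderProgram exposes the raw uniform location, so GLM types can be uploaded with a plain GL call instead of setUniformValue(). A sketch only, assuming a current context and a mat4 uniform named "MVP":)

        #include <QOpenGLShaderProgram>
        #include <QOpenGLFunctions>
        #include <glm/glm.hpp>
        #include <glm/gtc/type_ptr.hpp>

        // Upload a GLM matrix through Qt's shader wrapper without converting
        // to QMatrix4x4; glm::value_ptr() yields the float pointer GL wants.
        void SetUniformMat4(QOpenGLShaderProgram& program, QOpenGLFunctions& gl,
                            const char* name, const glm::mat4& value)
        {
            program.bind();
            int location = program.uniformLocation(name); // -1 if the name is unknown
            gl.glUniformMatrix4fv(location, 1, GL_FALSE, glm::value_ptr(value));
        }

        // Usage:
        // SetUniformMat4(shaderProgram,
        //                *QOpenGLContext::currentContext()->functions(), "MVP", mvp);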
  8. Hello, I know folks have helped me in the past on this and I truly believe I am almost there. So, after much work I believe my directional light is working in the scene, sort of, but I still get weird artifacts and effects. Positional lighting doesn't seem to work at all. Also, my entire scene is poorly lit and I am not sure what I am doing wrong to have this binary "on, off" effect with lighting. Anyway, here is what I am talking about. With positional lighting coming from a point in one direction, the entire scene is dark, as shown below:

    When I enable just directional lighting per my shader code, I get the light coming from the right direction, but it essentially comes in as an "on, off" thing. You'll see that I highlighted the areas in the image below where the object is getting light and then, right in the next pixel over, there is no light:

    To show people that I am loading normals, I did a simple "finalColor = NormalColor" colorizer in my shader code. Here is the resultant image of the scene at the same angle:

    Also, per usual, here is my vertex shader:

        #version 330 core
        #extension GL_ARB_explicit_attrib_location : require

        layout(location = 0) in vec3 vPosition;
        layout(location = 1) in vec3 vNormal;
        layout(location = 2) in vec2 vUV;

        layout (std140) uniform Sunlight
        {
            vec4 SunlightPosition;
            vec4 SunlightDiffuse;
            vec4 SunlightSpecular;
            vec4 SunlightDirection;
            float constantAttenuation, linearAttenuation, quadraticAttenuation;
            float spotCutoff, spotExponent;
            float EnableLighting;
            float EnableSun;
            float ExtraValue;
        };

        out vec4 worldSpacePosition;    // position of the vertex (and fragment) in world space
        out vec3 vertexNormalDirection; // surface normal vector in world space
        out vec2 TextureCoordinates;
        out vec3 NormalColor;

        uniform mat4 MVP;
        uniform mat4 ModelMatrix;
        uniform mat4 ViewMatrix;
        uniform mat4 ViewModelMatrix;
        uniform mat4 InverseViewMatrix;
        uniform mat3 NormalMatrix;

        void main()
        {
            gl_Position = MVP * vec4(vPosition, 1.0);
            TextureCoordinates = vUV;
            worldSpacePosition = ModelMatrix * vec4(vPosition, 1.0);
            vertexNormalDirection = normalize(NormalMatrix * vNormal);
            NormalColor = vNormal;
        }

    And here is my fragment shader:

        #version 330
        #extension GL_ARB_explicit_attrib_location : require
        precision highp float;

        uniform mat4 MVP;
        uniform mat4 ModelMatrix;
        uniform mat4 ViewMatrix;
        uniform mat4 ViewModelMatrix;
        uniform mat4 InverseViewMatrix;
        uniform mat3 NormalMatrix;

        //
        // These values vary per Mesh
        //
        uniform vec4 AmbientMeshColor;
        uniform vec4 EmissiveMeshColor;
        uniform vec4 DiffuseMeshColor;
        uniform vec4 SpecularMeshColor;
        uniform vec4 SceneBrightnessColor;
        uniform float MeshShininess;
        uniform float ObjectHasTextureFile;

        //
        // Sunlight Settings.
        //
        layout (std140) uniform Sunlight
        {
            vec4 SunlightPosition;
            vec4 SunlightDiffuse;
            vec4 SunlightSpecular;
            vec4 SunlightDirection;
            float constantAttenuation, linearAttenuation, quadraticAttenuation;
            float spotCutoff, spotExponent;
            float EnableLighting;
            float EnableSun;
            float ExtraValue;
        };

        uniform vec4 SceneAmbient;

        //
        // Whether Materials are enabled at all.
        //
        uniform float IfEnableTextures;

        //
        // If we are just simply drawing the skybox.
        //
        uniform float DrawingSkyBox;

        uniform float DrawNormals;
        uniform float EnableWireframe;
        uniform vec4 WireframeColor;
        uniform float TextureCoordinateDebug;
        uniform sampler2D MainTextureSampler;

        in vec4 worldSpacePosition;
        in vec3 vertexNormalDirection;
        in vec2 TextureCoordinates;
        in vec3 NormalColor;

        vec4 finalDiffuseColor;
        out vec4 finalColor;

        void DrawSkyBox()
        {
            finalColor = texture(MainTextureSampler, TextureCoordinates);
        }

        void DrawWireFrame()
        {
            finalColor = WireframeColor;
        }

        void main()
        {
            if (DrawingSkyBox != 1.0)
            {
                if (DrawNormals == 1.0)
                {
                    finalColor = vec4(NormalColor, 1.0);
                }
                else
                {
                    vec3 normalDirection = normalize(vertexNormalDirection);
                    vec3 viewDirection = normalize(vec3(InverseViewMatrix * vec4(0.0, 0.0, 0.0, 1.0) - worldSpacePosition));
                    vec3 lightDirection;
                    float attenuation;

                    if (SunlightPosition.w == 0.0) // directional light?
                    {
                        attenuation = 1.0; // no attenuation
                        lightDirection = normalize(vec3(SunlightPosition));
                    }
                    else // point light or spotlight (or other kind of light)
                    {
                        vec3 positionToLightSource = vec3(SunlightPosition - worldSpacePosition);
                        float distance = length(positionToLightSource);
                        lightDirection = normalize(positionToLightSource);
                        attenuation = 1.0 / (constantAttenuation
                                             + linearAttenuation * distance
                                             + quadraticAttenuation * distance * distance);

                        if (spotCutoff <= 90.0) // spotlight?
                        {
                            float clampedCosine = max(0.0, dot(-lightDirection, vec3(SunlightDirection)));
                            if (clampedCosine < cos(radians(spotCutoff))) // outside of spotlight cone?
                            {
                                attenuation = 0.0;
                            }
                            else
                            {
                                attenuation = attenuation * pow(clampedCosine, spotExponent);
                            }
                        }
                    }

                    vec4 ambientLighting = SceneAmbient * AmbientMeshColor;

                    vec3 diffuseReflection;
                    if (ObjectHasTextureFile == 1.0)
                    {
                        diffuseReflection = attenuation * vec3(SunlightDiffuse)
                                            * vec3(texture(MainTextureSampler, TextureCoordinates))
                                            * max(0.0, dot(normalDirection, normalDirection));
                    }
                    else
                    {
                        diffuseReflection = attenuation * vec3(SunlightDiffuse)
                                            * vec3(DiffuseMeshColor)
                                            * max(0.0, dot(normalDirection, normalDirection));
                    }

                    vec3 specularReflection;
                    if (dot(normalDirection, lightDirection) < 0.0) // light source on the wrong side?
                    {
                        specularReflection = vec3(0.0, 0.0, 0.0); // no specular reflection
                    }
                    else // light source on the right side
                    {
                        specularReflection = attenuation * vec3(SunlightSpecular) * vec3(SpecularMeshColor)
                                             * pow(max(0.0, dot(reflect(-lightDirection, normalDirection), viewDirection)), MeshShininess);
                    }

                    finalColor = vec4(vec3(ambientLighting) + diffuseReflection + specularReflection, DiffuseMeshColor.a);
                }
            }
            else
            {
                DrawSkyBox();
            }
        }

    Thank you for your time.
  9. Hello, Simple question; I am wondering what the best option is for displaying 3D panoramic images and movies: a cube or a 3D sphere? I tried the 3D sphere, but so far the image is distorted at the top and the bottom. Would a cube work better? Thank you for your time.
  10. Thank you for this very helpful info. One item I am still trying to pin down to fix my lighting, which seems simple to do using the GLM library: how do I calculate the inverse view matrix? I could write my own functions, but I wonder if I can just use the glm::inverse() function against either the Model, View, or Projection matrix (a sketch of what I mean is below). Thank you.
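    (For the view matrix specifically, this seems to be all it takes -- assuming the view matrix is an ordinary glm::mat4, e.g. built with glm::lookAt():)

        #include <glm/glm.hpp>
        #include <glm/gtc/matrix_transform.hpp>

        // Build a view matrix, then invert it: the inverse view matrix is just
        // the matrix inverse of the view matrix.
        glm::mat4 viewMatrix = glm::lookAt(glm::vec3(0.0f, 0.0f, 5.0f),  // eye
                                           glm::vec3(0.0f, 0.0f, 0.0f),  // target
                                           glm::vec3(0.0f, 1.0f, 0.0f)); // up
        glm::mat4 inverseViewMatrix = glm::inverse(viewMatrix);
        // Sanity check: viewMatrix * inverseViewMatrix should be ~identity.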
  11. Hello, So I have been banging my head against the wall on a GLSL shader issue; essentially my lighting is uneven and weird. See my picture below; the problem is that I get the bright highlight in the OpenGL scene but the rest of the scene is unbelievably dark, which makes lighting unusable. What am I doing wrong with the shader? Also, please let me know what else you need to see (code, etc.) to help troubleshoot the issue:

    Vertex Shader:

        #version 330 core
        #extension GL_ARB_explicit_attrib_location : require

        layout(location = 0) in vec3 vPosition;
        layout(location = 1) in vec3 vNormal;
        layout(location = 2) in vec2 vUV;

        layout (std140) uniform Sunlight
        {
            vec4 SunlightPosition;
            vec4 SunlightDiffuse;
            vec4 SunlightSpecular;
            vec4 SunlightDirection;
            float constantAttenuation, linearAttenuation, quadraticAttenuation;
            float spotCutoff, spotExponent;
            float EnableLighting;
            float EnableSun;
            float ExtraValue;
        };

        out vec2 TextureCoordinates;
        out vec3 Vertex_Normal;
        out vec4 Vertex_LightDir;
        out vec4 Vertex_EyeVec;

        uniform mat4 MVP;
        uniform mat4 ViewMatrix;
        uniform mat4 ViewModelMatrix;
        uniform mat3 NormalMatrix;

        void main()
        {
            gl_Position = MVP * vec4(vPosition, 1.0);
            TextureCoordinates = vUV;
            Vertex_Normal = vec3(ViewModelMatrix * vec4(vNormal, 1.0));
            vec4 view_vertex = ViewModelMatrix * vec4(vPosition, 1.0);
            vec4 LP = ViewMatrix * SunlightPosition;
            Vertex_LightDir = LP - view_vertex;
            Vertex_EyeVec = -view_vertex;
        }

    Fragment Shader:

        #version 330
        #extension GL_ARB_explicit_attrib_location : require
        precision highp float;

        //
        // These values vary per Mesh
        //
        uniform vec4 AmbientMeshColor;
        uniform vec4 EmissiveMeshColor;
        uniform vec4 DiffuseMeshColor;
        uniform vec4 SpecularMeshColor;
        uniform vec4 SceneBrightnessColor;
        uniform float MeshShininess;
        uniform float ObjectHasTextureFile;

        //
        // Sunlight Settings.
        //
        layout (std140) uniform Sunlight
        {
            vec4 SunlightPosition;
            vec4 SunlightDiffuse;
            vec4 SunlightSpecular;
            vec4 SunlightDirection;
            float constantAttenuation, linearAttenuation, quadraticAttenuation;
            float spotCutoff, spotExponent;
            float EnableLighting;
            float EnableSun;
            float ExtraValue;
        };

        uniform vec4 SceneAmbient;

        //
        // Whether Materials are enabled at all.
        //
        uniform float IfEnableTextures;

        //
        // If we are just simply drawing the skybox.
        //
        uniform float DrawingSkyBox;

        uniform float EnableWireframe;
        uniform vec4 WireframeColor;
        uniform float TextureCoordinateDebug;
        uniform sampler2D MainTextureSampler;

        in vec2 TextureCoordinates;
        in vec3 Vertex_Normal;
        in vec4 Vertex_LightDir;
        in vec4 Vertex_EyeVec;

        vec4 finalDiffuseColor;
        out vec4 finalColor;

        void DrawSkyBox()
        {
            finalColor = texture(MainTextureSampler, TextureCoordinates);
        }

        void DrawWireFrame()
        {
            finalColor = WireframeColor;
        }

        void main()
        {
            if (DrawingSkyBox != 1.0)
            {
                if (ObjectHasTextureFile == 1.0)
                {
                    finalDiffuseColor = texture(MainTextureSampler, TextureCoordinates);
                }
                else
                {
                    finalDiffuseColor = DiffuseMeshColor;
                }

                vec4 CurrentObjectColor = AmbientMeshColor;

                vec4 N = vec4(normalize(Vertex_Normal), 0);
                vec4 L = normalize(Vertex_LightDir);

                float lambertTerm = dot(N, L);

                if (lambertTerm > 0.0 && EnableSun == 1.0)
                {
                    if (ObjectHasTextureFile == 1.0)
                    {
                        CurrentObjectColor += SunlightDiffuse * DiffuseMeshColor * lambertTerm * finalDiffuseColor;
                    }
                    else
                    {
                        CurrentObjectColor += SunlightDiffuse * finalDiffuseColor * lambertTerm;
                    }

                    vec4 E = normalize(Vertex_EyeVec);
                    vec4 R = reflect(-L, N);
                    float specular = pow(max(dot(R, E), 0.0), MeshShininess);
                    CurrentObjectColor += SunlightSpecular * SpecularMeshColor * specular;
                }

                finalColor.rgb = CurrentObjectColor.rgb;
                //finalColor.rgb += SceneBrightnessColor.rgb;
                finalColor.a = DiffuseMeshColor.a;
                //finalColor.a += SceneBrightnessColor.a;
            }
            else
            {
                DrawSkyBox();
            }
        }
  12. Hello, I am working with an example of loading SketchUp files using the SketchUp C API, but I am having a hard time understanding how to load the indices from the associated model. The code below works, but from what I can tell it only retrieves vertex data. I may be wrong; it is hard to tell from the example. Could someone review the code and shed light on where (or whether) the indices are being loaded? (A guess at the index-extraction API is sketched after the code.) Thank you.

        #include <slapi/slapi.h>
        #include <slapi/geometry.h>
        #include <slapi/initialize.h>
        #include <slapi/unicodestring.h>
        #include <slapi/model/model.h>
        #include <slapi/model/entities.h>
        #include <slapi/model/face.h>
        #include <slapi/model/edge.h>
        #include <slapi/model/vertex.h>
        #include <vector>

        int main()
        {
            // Always initialize the API before using it
            SUInitialize();

            // Load the model from a file
            SUModelRef model = SU_INVALID;
            SUResult res = SUModelCreateFromFile(&model, "model.skp");

            // It's best to always check the return code from each SU function call.
            // Only showing this check once to keep this example short.
            if (res != SU_ERROR_NONE)
                return 1;

            // Get the entity container of the model.
            SUEntitiesRef entities = SU_INVALID;
            SUModelGetEntities(model, &entities);

            // Get all the faces from the entities object
            size_t faceCount = 0;
            SUEntitiesGetNumFaces(entities, &faceCount);

            if (faceCount > 0)
            {
                std::vector<SUFaceRef> faces(faceCount);
                SUEntitiesGetFaces(entities, faceCount, &faces[0], &faceCount);

                // Get all the edges in this face
                for (size_t i = 0; i < faceCount; i++)
                {
                    size_t edgeCount = 0;
                    SUFaceGetNumEdges(faces[i], &edgeCount);

                    if (edgeCount > 0)
                    {
                        std::vector<SUEdgeRef> edges(edgeCount);
                        SUFaceGetEdges(faces[i], edgeCount, &edges[0], &edgeCount);

                        // Get the vertex positions for each edge
                        for (size_t j = 0; j < edgeCount; j++)
                        {
                            SUVertexRef startVertex = SU_INVALID;
                            SUVertexRef endVertex = SU_INVALID;
                            SUEdgeGetStartVertex(edges[j], &startVertex);
                            SUEdgeGetEndVertex(edges[j], &endVertex);

                            SUPoint3D start;
                            SUPoint3D end;
                            SUVertexGetPosition(startVertex, &start);
                            SUVertexGetPosition(endVertex, &end);

                            // Now do something with the point data
                        }
                    }
                }
            }

            // Get model name
            SUStringRef name = SU_INVALID;
            SUStringCreate(&name);
            SUModelGetName(model, &name);

            size_t name_length = 0;
            SUStringGetUTF8Length(name, &name_length);
            char* name_utf8 = new char[name_length + 1];
            SUStringGetUTF8(name, name_length + 1, name_utf8, &name_length);

            // Now we have the name in a form we can use
            SUStringRelease(&name);
            delete [] name_utf8;

            // Must release the model or there will be memory leaks
            SUModelRelease(&model);

            // Always terminate the API when done using it
            SUTerminate();

            return 0;
        }
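    (If I am reading the documentation right, faces do not store triangle indices directly; you tessellate a face with the mesh-helper interface and read the index buffer from that. A sketch from memory -- treat these names and signatures as assumptions to verify against slapi/model/mesh_helper.h:)

        #include <slapi/model/mesh_helper.h>
        #include <vector>

        // Tessellate one face into triangles and extract its vertex indices.
        void GetFaceIndices(SUFaceRef face)
        {
            SUMeshHelperRef helper = SU_INVALID;
            SUMeshHelperCreate(&helper, face);

            size_t triangleCount = 0;
            SUMeshHelperGetNumTriangles(helper, &triangleCount);

            std::vector<size_t> indices(triangleCount * 3);
            size_t retrieved = 0;
            SUMeshHelperGetVertexIndices(helper, indices.size(), indices.data(), &retrieved);

            // indices[] now holds three vertex indices per triangle, referring
            // to the vertex array returned by SUMeshHelperGetVertices().

            SUMeshHelperRelease(&helper);
        }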
  13. No, glReadPixels() will only read the pixels of the currently bound framebuffer within your current OpenGL context.
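    For instance, a minimal sketch of reading back the bound framebuffer (RGBA, bottom-up row order per OpenGL convention):

        #include <GL/gl.h>
        #include <vector>

        // Reads pixels from the framebuffer bound in the *current* context;
        // nothing outside it (e.g. other windows or the desktop) is visible.
        std::vector<unsigned char> ReadFramebufferRGBA(int width, int height)
        {
            std::vector<unsigned char> pixels(width * height * 4);
            glPixelStorei(GL_PACK_ALIGNMENT, 1); // tightly packed rows
            glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());
            return pixels;
        }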
  14. Sure, first you get a screenshot of the desktop area (the link is for Windows; Google around for Linux/Mac/etc.): http://stackoverflow.com/questions/3291167/how-to-make-screen-screenshot-with-win32-in-c Once you have the image you can do whatever you need with the data (send/analyze/etc.); a rough sketch of the capture step is below.
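    (The linked answer boils down to GDI blitting from the screen device context; roughly like this, with error handling omitted -- verify the details against the link:)

        #include <windows.h>

        // Copy the primary screen into an in-memory GDI bitmap.
        HBITMAP CaptureScreen()
        {
            int width  = GetSystemMetrics(SM_CXSCREEN);
            int height = GetSystemMetrics(SM_CYSCREEN);

            HDC screenDC = GetDC(NULL);                  // DC for the whole screen
            HDC memoryDC = CreateCompatibleDC(screenDC); // off-screen DC
            HBITMAP bitmap = CreateCompatibleBitmap(screenDC, width, height);

            HGDIOBJ old = SelectObject(memoryDC, bitmap);
            BitBlt(memoryDC, 0, 0, width, height, screenDC, 0, 0, SRCCOPY);
            SelectObject(memoryDC, old);

            DeleteDC(memoryDC);
            ReleaseDC(NULL, screenDC);
            return bitmap; // free with DeleteObject(); get pixels via GetDIBits()
        }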