# junkyska

1. Thanks for your response, but yesterday I finally solved it! (The algorithm I posted here isn't right; I used another one from Dark Photon.) The solution is here: http://www.opengl.or...995#post1244995 Thanks for the link; I had already found it while searching on Google.
2. Hi everyone, for a few days I've been trying to remove the position texture from my deferred renderer. To achieve this I have to reconstruct the pixel position from the depth value. I've spent several days (many hours) trying to solve the problem without luck, so I'm somewhat frustrated. Let's see if someone with better math can help me.

First, here is how I fill my old position texture (geometry pass). With this texture the lighting works fine.

Geometry pass, vertex shader:

```glsl
vsPosition = ( ModelViewMatrix * vec4(in_Position, 1.0) ).xyz;
```

Geometry pass, fragment shader:

```glsl
fsPosition = vsPosition;
```

where `ModelViewMatrix` is `ViewMatrix * ModelMatrix`. From here everything works, and accessing the pixel position in the light pass is trivial.

The problem occurs when I try to compute the pixel position from the depth value. I have tried many things (many...), and what I have at the moment is:

```glsl
vec2 calcTexCoord()
{
    return gl_FragCoord.xy / ScreenSize;
}

vec3 positionFromDepth()
{
    vec2 sp = calcTexCoord();
    float depth = texture2D(Texture4, sp).x * 2.0 - 1.0;
    vec4 pos = InvProjectionMatrix * vec4(sp * 2.0 - 1.0, depth, 1.0);
    return pos.xyz / pos.w;
}
```

where `InvProjectionMatrix` is the inverse of the projection matrix and `Texture4` is the depth texture.

The strange thing is that if I output the absolute difference between both positions (from the texture and from depth), I see a black output (no differences). The following code outputs a black screen (`Texture0` is the position texture):

```glsl
vec3 pos1 = positionFromDepth();
vec3 pos2 = texture2D(Texture0, calcTexCoord()).xyz;
fragColor = vec4(abs(pos2.x - pos1.x), abs(pos2.y - pos1.y), abs(pos2.z - pos1.z), 1.0);
```

Here is my point light calculation (with the pixel position from depth it doesn't work):

```glsl
uniform struct
{
    vec3  diffuse;
    vec3  specular;
    vec3  ambient;
    float constantAtt;
    float linearAtt;
    float quadraticAtt;
    float spotInnerCutOff;
    float spotOuterCutOff;
    float spotFallOff;
    vec3  position;   // For point lights. Computed as ViewMatrix * light_position
    vec3  direction;  // For spot and directional lights
    int   type;       // 0 -> Point, 1 -> Spot, 2 -> Directional
} Light;

...

vec3 computePhongPointLight()
{
    vec3 color = vec3(0.0);
    vec2 texCoord = calcTexCoord();
    //vec3 position = texture2D( Texture0, texCoord ).xyz;
    vec3 position  = positionFromDepth();
    vec3 difColor  = texture2D( Texture1, texCoord ).xyz;
    vec3 specColor = texture2D( Texture2, texCoord ).xyz;
    vec3 normColor = texture2D( Texture3, texCoord ).xyz;

    vec3 lightDir = Light.position.xyz - position;
    vec3 lightDirNorm = normalize( lightDir );
    float sDotN = max( dot(lightDirNorm, normColor), 0.0 );

    float distSqr = dot(lightDir, lightDir);
    float invAtt = Light.constantAtt
                 + Light.linearAtt * sqrt(distSqr)
                 + Light.quadraticAtt * distSqr;
    float att = 0.0;
    if (invAtt != 0.0)
        att = 1.0 / invAtt;

    vec3 diffuse = difColor.rgb * Light.diffuse * sDotN;
    vec3 ambient = difColor.rgb * Light.ambient; // Cheat here

    vec3 vertexToEye = -normalize(position);
    vec3 r = normalize(reflect(lightDirNorm, normColor));

    vec3 specular = vec3(0.0);
    if ( sDotN > 0.0 )
    {
        // TODO: the hard-coded 60.0 should be a SpecularPower uniform
        specular = Light.specular.rgb * specColor * pow( max( dot(vertexToEye, r), 0.0 ), 60.0 );
    }

    return (diffuse + specular + ambient) * att;
}
```

I would appreciate any help, so feel free to answer. Sorry for my bad English, and thanks in advance.
3. Check http://glm.g-truc.net/. It's a really nice library with tons of utility functions and, of course, vectors, matrices, quaternions... It has nearly the same syntax as GLSL, and you don't need to link it into your project, just include it.
4. I think the normal matrix is computed correctly. I don't know where the bug is; I can't find it. Here is my normal matrix calculation:

```cpp
_currentRenderableInfo.renderablePos   = actor->getDerivedPosition();
_currentRenderableInfo.renderableRot   = actor->getDerivedRotation();
_currentRenderableInfo.renderableScale = actor->getDerivedScale();
_currentRenderableInfo.modelMatrix     = actor->getFullTransform();
_currentRenderableInfo.modelViewMatrix = _currentViewInfo.viewMatrix * _currentRenderableInfo.modelMatrix;
_currentRenderableInfo.mvpMatrix       = _currentViewInfo.projectionMatrix * _currentRenderableInfo.modelViewMatrix;
_currentRenderableInfo.normalMatrix    = Core::transpose(Core::inverse(Core::mat3(_currentRenderableInfo.modelViewMatrix)));
```
5. Very good examples, good job! Japro, yours is really nice! I like it!
6. Check glPixelStorei( GL_PACK_ALIGNMENT, 1 ) and glPixelStorei( GL_UNPACK_ALIGNMENT, 1 ).
7. Hi everyone, I have a problem with my shaders. I implemented Phong lighting. In the shader, vertices get transformed by position, rotation, and scale. The problem is that when the scale is not (1, 1, 1) I get wrong results (displaced lighting). It's really noticeable with, for example, a scale of (100, 20, 100). I noticed this bug while testing with a spotlight, so the directional and point lights may also be wrong. I think it's a scale + normals problem, but I can't find it. Does the normal matrix take scale into account? How can I deal with this problem? If someone can help me I would really appreciate it. Any other advice not related to this problem is welcome too. If it helps, I can upload some images. (SpotFallOff is unused; I use SpotInnerCutOff and SpotOuterCutOff.) Sorry for my bad English. Here is my shader:

```xml
<Shader>
  <ShaderPart type="Vertex">
    <ShaderCode>
      <![CDATA[
#version 420

uniform mat4 ModelViewMatrix;
uniform mat4 MVP;
uniform mat3 NormalMatrix;

in vec3 in_Position;
in vec3 in_Normal;
in vec3 in_Tangent;
in vec3 in_Bitangent;
in vec2 in_TexCoord;

out mat3 TBN;
out vec2 pTexCoord;
out vec3 pViewDir;
out vec3 pPosition;
out vec3 pVertexPos;

void main(void)
{
    pTexCoord = in_TexCoord;

    vec3 n = normalize(NormalMatrix * in_Normal);
    vec3 t = normalize(NormalMatrix * in_Tangent);
    vec3 b = -normalize( cross(n, t) );
    TBN = mat3( t.x, b.x, n.x,
                t.y, b.y, n.y,
                t.z, b.z, n.z );

    pPosition = vec3( ModelViewMatrix * vec4(in_Position, 1.0) );
    pViewDir = TBN * vec3(-pPosition);
    pVertexPos = in_Position;

    gl_Position = MVP * vec4(in_Position, 1.0);
}
      ]]>
    </ShaderCode>
  </ShaderPart>
  <ShaderPart type="Fragment">
    <ShaderCode>
      <![CDATA[
#version 420

uniform sampler2D Texture0; // Diffuse
uniform sampler2D Texture1; // Normal
uniform sampler2D Texture2; // Specular
uniform float SpecularPower;

uniform struct
{
    vec3  ambient;
    vec3  diffuse;
    vec3  specular;
    float constantAtt;
    float linearAtt;
    float quadraticAtt;
    float spotInnerCutOff;
    float spotOuterCutOff;
    float spotFallOff;
    vec3  position;
    vec3  direction;  // For spot and directional lights
    int   type;       // 0 -> Point, 1 -> Spot, 2 -> Directional
} Lights;

uniform int NumLights;

in mat3 TBN;
in vec2 pTexCoord;
in vec3 pViewDir;
in vec3 pPosition;
in vec3 pVertexPos;

out vec4 fragColor;

vec4 computePhongPointLight(in int index)
{
    vec3 color = vec3(0.0);
    vec4 difColor  = texture2D( Texture0, pTexCoord );
    vec3 normColor = normalize(texture2D( Texture1, pTexCoord ).xyz * 2.0 - 1.0);
    vec3 specColor = texture2D( Texture2, pTexCoord ).rgb;

    vec3 lightDir = TBN * (Lights[index].position.xyz - pPosition);
    vec3 lightDirNorm = normalize( lightDir );
    vec3 r = reflect( -lightDirNorm, normColor );
    float sDotN = max( dot(lightDirNorm, normColor), 0.0 );

    // Compute att for point and spot lights. Directional lights have att = 1.0
    float distSqr = dot(lightDir, lightDir);
    float invAtt = Lights[index].constantAtt
                 + Lights[index].linearAtt * sqrt(distSqr)
                 + Lights[index].quadraticAtt * distSqr;
    float att = 0.0;
    if (invAtt != 0.0)
        att = 1.0 / invAtt;

    vec3 diffuse = difColor.rgb * Lights[index].diffuse * sDotN;
    vec3 ambient = difColor.rgb * Lights[index].ambient; // Cheat here

    vec3 specular = vec3(0.0);
    if ( sDotN > 0.0 )
    {
        specular = Lights[index].specular.rgb * specColor
                 * pow( max( dot(r, normalize(pViewDir)), 0.0 ), SpecularPower );
    }

    color = (diffuse + specular + ambient) * att;
    return vec4(color, difColor.a);
}

vec4 computePhongSpotLight(in int index)
{
    vec3 color = vec3(0.0);
    vec4 difColor  = texture( Texture0, pTexCoord );
    vec3 normColor = normalize(texture( Texture1, pTexCoord ).xyz * 2.0 - 1.0);
    vec3 specColor = texture( Texture2, pTexCoord ).rgb;

    vec3 lightDir = TBN * (Lights[index].position.xyz - pPosition);
    vec3 lightDirNorm = normalize( lightDir );
    vec3 r = reflect( -lightDirNorm, normColor );
    float sDotN = max( dot(lightDirNorm, normColor), 0.0 );

    // Compute att for point and spot lights. Directional lights have att = 1.0
    float distSqr = dot(lightDir, lightDir);
    float invAtt = Lights[index].constantAtt
                 + Lights[index].linearAtt * sqrt(distSqr)
                 + Lights[index].quadraticAtt * distSqr;
    float att = 0.0;
    if (invAtt != 0.0)
        att = 1.0 / invAtt;

    // Compute spot factor
    float cos_cur_angle = dot(-lightDirNorm, normalize(TBN * Lights[index].direction));
    float cos_inner_cone_angle = cos(Lights[index].spotInnerCutOff);
    float cos_outer_cone_angle = cos(Lights[index].spotOuterCutOff);
    float cos_inner_minus_outer_angle = cos_inner_cone_angle - cos_outer_cone_angle;
    float spot = clamp((cos_cur_angle - cos_outer_cone_angle) / cos_inner_minus_outer_angle, 0.0, 1.0);

    vec3 diffuse = difColor.rgb * Lights[index].diffuse * sDotN;
    vec3 ambient = difColor.rgb * Lights[index].ambient; // Cheat here

    vec3 specular = vec3(0.0);
    if ( sDotN > 0.0 )
    {
        specular = Lights[index].specular.rgb * specColor
                 * pow( max( dot(r, normalize(pViewDir)), 0.0 ), SpecularPower );
    }

    color = (diffuse + specular + ambient) * att * spot;
    return vec4(color, difColor.a);
}

vec4 computePhongDirectionalLight(in int index)
{
    vec3 color = vec3(0.0);
    vec4 difColor  = texture( Texture0, pTexCoord );
    vec3 normColor = normalize(texture( Texture1, pTexCoord ).xyz * 2.0 - 1.0);
    vec3 specColor = texture( Texture2, pTexCoord ).rgb;

    vec3 lightDir = TBN * Lights[index].direction;
    vec3 lightDirNorm = normalize( lightDir );
    vec3 r = reflect( -lightDirNorm, normColor );
    float sDotN = max( dot(lightDirNorm, normColor), 0.0 );

    vec3 diffuse = difColor.rgb * Lights[index].diffuse * sDotN;
    vec3 ambient = difColor.rgb * Lights[index].ambient; // Cheat here

    vec3 specular = vec3(0.0);
    if ( sDotN > 0.0 )
    {
        specular = Lights[index].specular.rgb * specColor
                 * pow( max( dot(r, normalize(pViewDir)), 0.0 ), SpecularPower );
    }

    color = diffuse + specular + ambient;
    return vec4(color, difColor.a);
}

vec4 computeLight(in int index)
{
    if (Lights[index].type == 0)
        return computePhongPointLight(index);
    if (Lights[index].type == 1)
        return computePhongSpotLight(index);
    else if (Lights[index].type == 2)
        return computePhongDirectionalLight(index);
}

void main(void)
{
    fragColor = vec4(0.0, 0.0, 0.0, 1.0);
    for (int i = 0; i < NumLights; i++)
        fragColor += computeLight(i);
    if (fragColor.a < 0.5)
        discard;
}
      ]]>
    </ShaderCode>
  </ShaderPart>
</Shader>
```
8. Hi everyone, I'm developing a homemade 3D engine with OpenGL 4. I have problems rendering 2D (GUI and text). I know that I have to flip the textures vertically for OpenGL, and I do it (3D rendering works perfectly). So I flip the textures vertically and create a rectangle mesh, but when I render, the UVs are upside down (the GUI is a mess and the text is flipped vertically). I'm really frustrated because I've spent many hours on this without success. UVs in OpenGL are: bottom left corner (0, 0) and top right (1, 1), right? If I swap v1 and v2 the code works, but that's kind of a hack and I don't like it. If someone can help me I would really appreciate it. Sorry for my English. Here is my code, sorry for the mess.

The mesh:

```cpp
void GL4RendererContext::create2dRect()
{
    AtomicStaticMeshInfo mesh;
    StaticVertex v;

    // Bottom left
    v.setPos(0.0f, 0.0f, 0.5f);
    v.setUV(0.0f, 0.0f);
    mesh.vertices.push_back(v);

    // Bottom right
    v.setPos(1.0f, 0.0f, 0.5f);
    v.setUV(1.0f, 0.0f);
    mesh.vertices.push_back(v);

    // Top left
    v.setPos(0.0f, 1.0f, 0.5f);
    v.setUV(0.0f, 1.0f);
    mesh.vertices.push_back(v);

    // Top right
    v.setPos(1.0f, 1.0f, 0.5f);
    v.setUV(1.0f, 1.0f);
    mesh.vertices.push_back(v);

    // Indices
    mesh.indices.push_back(0);
    mesh.indices.push_back(2);
    mesh.indices.push_back(3);
    mesh.indices.push_back(0);
    mesh.indices.push_back(3);
    mesh.indices.push_back(1);

    mesh.aabb.setMin(Core::vec3(0.0f, 0.0f, 0.0f));
    mesh.aabb.setMax(Core::vec3(1.0f, 1.0f, 0.0f));

    _2dRect = allocateAtomicStaticMesh(mesh);
}
```

Rendering the 2D textured rectangle:

```cpp
void GL4RendererContext::draw2dTexturedRect(Texture::Ref texture, int x, int y, int w, int h,
                                            float32 startU, float32 startV, float32 endU, float32 endV)
{
    if (getCurrentShader() != _2dShader)
    {
        bindShader(_2dShader);
    }

    /* /// HACK HERE, UV SWAPPED. WHY???? :_(
    {
        float32 aux = startV;
        startV = endV;
        endV = aux;
    }
    */

    Core::mat4 proj  = Core::ortho(0.0f, (float)_renderWidth, (float)_renderHeight, 0.0f, 0.0f, 1.0f);
    Core::mat4 trans = Core::translate(Core::mat4(1.0f), Core::vec3((float)x, (float)y, 0.0f));
    Core::mat4 scale = Core::scale(Core::mat4(1.0f), Core::vec3((float)w, (float)h, 0.0f));
    Core::mat4 MVP   = proj * trans * scale;

    GLuint fillModeIndex = glGetSubroutineIndex(_currentShader->getShaderID(), GL_FRAGMENT_SHADER, "textureMode");
    glUniformSubroutinesuiv(GL_FRAGMENT_SHADER, 1, &fillModeIndex);

    bindMVPMatrix(MVP);
    glUniform2f(glGetUniformLocation(_currentShader->getShaderID(), "UVScale"), endU - startU, endV - startV);
    glUniform2f(glGetUniformLocation(_currentShader->getShaderID(), "UVTrans"), startU, startV);
    bindTexture(texture, "Texture", 0);

    bindVAO(_2dRect->getMeshID());
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_CULL_FACE);
    glPolygonMode(GL_FRONT_AND_BACK, GL_FILL);
    glDrawElements(GL_TRIANGLES, _2dRect->getNumTriangles() * 3, GL_UNSIGNED_INT, BUFFER_OFFSET(0));
    unbindVAO();
    glEnable(GL_CULL_FACE);
    glEnable(GL_DEPTH_TEST);

    CHECK_GL_ERRORS
}
```

The shader:

```cpp
void GL4RendererContext::create2dShader()
{
    std::string vertexShader =
        "#version 420 \n"
        "\n"
        "in vec3 in_Position; \n"
        "in vec2 in_TexCoord; \n"
        "\n"
        "uniform mat4 MVP; \n"
        "uniform vec2 UVScale; \n"
        "uniform vec2 UVTrans; \n"
        "\n"
        "out vec2 pTexCoord; \n"
        "\n"
        "void main() \n"
        "{ \n"
        "    gl_Position = MVP * vec4(in_Position, 1.0); \n"
        "    pTexCoord = vec2(in_TexCoord.x*UVScale.x + UVTrans.x, in_TexCoord.y*UVScale.y + UVTrans.y); \n"
        //"  pTexCoord = vec2((in_TexCoord.x+UVTrans.x)*UVScale.x, (in_TexCoord.y+UVTrans.y)*UVScale.y); \n"
        "}";

    std::string fragShader =
        "#version 420 \n"
        "\n"
        "subroutine vec4 fillModeType(); \n"
        "\n"
        "out vec4 fragColor; \n"
        "\n"
        "in vec2 pTexCoord; \n"
        "\n"
        "uniform vec4 Color; \n"
        "uniform sampler2D Texture; \n"
        "\n"
        "subroutine( fillModeType ) \n"
        "vec4 colorMode() \n"
        "{ return Color; } \n"
        "\n"
        "subroutine( fillModeType ) \n"
        "vec4 textureMode() \n"
        "{ \n"
        "    return textureLod(Texture, pTexCoord, 0); \n"
        "} \n"
        "\n"
        "subroutine( fillModeType ) \n"
        "vec4 textMode() \n"
        "{ \n"
        "    return vec4(Color.rgb, textureLod(Texture, pTexCoord, 0).a); \n"
        "} \n"
        "\n"
        "subroutine uniform fillModeType fillMode; \n"
        "\n"
        "void main() \n"
        "{ \n"
        "    fragColor = fillMode(); \n"
        "}";

    ShaderPartInfo::Ref vInfo = new ShaderPartInfo(vertexShader, Shader::SHADER_VERTEX);
    ShaderPartInfo::Ref fInfo = new ShaderPartInfo(fragShader, Shader::SHADER_FRAGMENT);

    ShaderInfo shaderInfo;
    shaderInfo.appendShaderPart(vInfo);
    shaderInfo.appendShaderPart(fInfo);

    std::string error = "";
    _2dShader = allocateShader(shaderInfo, error);
    if (!_2dShader)
    {
        ERR("Unable to create debug shader, error: " + error);
        FOSSIL_ASSERT(false, "Unable to create 2dShader shader");
    }
}
```
9. You helped me a lot! Thanks for sharing your knowledge and for your time.
10. Wow, thanks for all that information! I really appreciate it! I will follow your advice and go for a kd-tree for the static meshes. I will study MathGeoLib to understand its implementation. Just one more question: how do you deal with transformations (scale, rotation, translation) and the kd-tree? Do you transform the ray to local coordinates before the ray query?
11. Another question: I can't find much information about MathGeoLib. I see in the source code that it implements a QuadTree and a Kd-Tree. Do you have some examples or something I can learn from?
12. Wow, so much information!! OK, I will go for a CPU approach. The bad news for me is that I have to touch some class structures and implement a few things (actually more than a few things... hehe). I know about kd-trees; what I don't understand is animated meshes. Because of the bones, I would have to move the triangles before performing a ray cast, and I don't know any tree structure that can do this fast... For AABB checks I currently have a fixed-size loose octree space partition implementation (currently used for frustum culling). I had never seen MathGeoLib; I'm currently using GLM, but GLM doesn't have kd-trees and that kind of thing. I think I can integrate MathGeoLib for ray casting objects and kd-trees for the static meshes. I will read up on this library. Thanks a lot.
13. Hi everyone, I'm currently implementing a 3D game engine. It's done with C++ and OpenGL 4. I want to do ray tests against meshes. Mesh data resides only in the GPU, not in the CPU. I want to trace a ray against meshes (for example for a gun shot) and retrieve this information: hit mesh, hit point, hit normal. The ray test can't be a bottleneck, so it has to be fast. What I'm thinking is:

First, CPU work: check the ray against my loose octree space partition implementation (ray against AABB) to discard meshes.

Second, GPU work: here is where I don't know exactly what to do, and I can't find much useful information. What I was thinking is: create a texture render target with two textures, one RGB (that acts as the color attachment) where I will store the depth distance, and one RGB32F that will store the point normal. The textures will be 1x1 pixels. Then I can make an orthogonal view matrix with near distance = Ray.start and far distance = Ray.end, with this matrix pointing along Ray.dir. To check each mesh I will:

1. Bind the texture render target
2. Bind the orthogonal view matrix
3. For each mesh:
   4. Bind the model matrix
   5. Clear the render target
   6. Draw the mesh with a custom shader (that writes into these two textures)
   7. Retrieve the texture information
   8. Hit point = Ray.start + (Normal(Ray.dir) * toFloat(texture1.rgb))
   9. Normal point = texture2.rgb32f
   10. Hit mesh = current mesh

I have never done this before, so I suspect it's not correct. I'm really lost and I can't find any useful information. Can someone help me, please? I will appreciate any information. Sorry for my bad English. Thanks to all.
14. First of all, thanks for answering. I'm integrating Gwen (http://code.google.com/p/gwen/), a GUI library. To achieve this I have to implement a custom UI renderer. This is mostly done. Gwen defines its own render interface, so I have to implement that interface. One of its functions draws a colored rectangle. This function works well. Another function draws a single pixel. To achieve this I call the function that draws a colored rectangle, but with the rectangle bounds set like this: rect.x = pixel.x, rect.y = pixel.y, rect.height = 1, rect.width = 1. The problem comes when the GUI system tries to perform many draw-pixel operations. This drops the frame rate. I attach an image showing the problem (color picker); the frame rate drops to 10 fps. [attachment=7013:draw_pixel_problem.jpg]
15. Hi everyone, I'm integrating Gwen (a GUI system) into my engine. I'm using OpenGL 3+ (without the fixed pipeline, everything in shaders). I have a problem drawing a single pixel because it's really SLOW. What I'm doing now is drawing a rectangle with startX = endX and startY = endY, but when the GUI has to render many single pixels (e.g. a color picker box) the frame rate drops. Is there an efficient way of doing this? Sorry for my English, it's not very good. Thanks.