About AhmedSaleh

Community Reputation: 191 Neutral
  1. Remove Triangles from ogre mesh

    @Mike2343 Indices is an array of unsigned int (unsigned int *indices). u1, u2, u3 are unsigned ints referring to the three indices of the triangle.
  2. Remove Triangles from ogre mesh

    @Mike2343 Can you show me C++ code to remove those vertices from the indices array?
  3. I'm casting a ray, getting an intersection point, and then checking a condition. I would like to remove that particular triangle from the mesh. In the ray-casting function, when I get a hit, I save three variables u1, u2, u3, which are the indices of the triangle that was hit.

    ```cpp
    Ogre::Ray ray;
    ray.setOrigin(Ogre::Vector3(ray_pos.x, ray_pos.y, ray_pos.z));
    ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
    Ogre::Vector3 result;
    RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);

    float pointZ = out.pointlist[(3 * i) + 2];
    if (result.z < pointZ)
    {
        std::cout << "Remove edge " << u1 << " " << u2 << " " << u3 << std::endl;
        Utility::DebugPrimitives::drawSphere(result, 0.3f, "RayMesh" + std::to_string(counter), "SimpleColors/SolidGreen");
        cntEdges++;

        indices = static_cast<uint16_t *>(indexBuffer->lock(Ogre::HardwareBuffer::HBL_DISCARD));
        for (int i = 0; i < out.numberofedges; i++)
        {
            if (indices[i] == u1 || indices[i] == u2 || indices[i] == u3)
            {
                continue;
            }
            out.edgelist[i] = indices[i];
        }
        indexBuffer->unlock();

        indices = static_cast<uint16_t *>(indexBuffer->lock(Ogre::HardwareBuffer::HBL_DISCARD));
        for (int i = 0; i < numEdges - cntEdges; i++)
        {
            indices[i] = out.edgelist[i];
        }
        indexBuffer->unlock();
    }
    ```
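Not the poster's code, but a minimal sketch of the index-compaction approach the snippet above seems to be aiming for: walk the index buffer three indices at a time (one triangle per step, rather than one index per step, as the per-element loop above does) and keep every triangle that is not the hit triangle. The function name and the 16-bit index type are assumptions:

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Remove every triangle whose three indices are exactly (u1, u2, u3),
// compacting the index buffer triple by triple.
// Vertex data is left untouched; only the triangle disappears.
std::vector<uint16_t> removeTriangle(const std::vector<uint16_t>& indices,
                                     uint16_t u1, uint16_t u2, uint16_t u3)
{
    std::vector<uint16_t> out;
    out.reserve(indices.size());
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const bool hit = indices[i] == u1 && indices[i + 1] == u2 && indices[i + 2] == u3;
        if (!hit) {
            out.push_back(indices[i]);
            out.push_back(indices[i + 1]);
            out.push_back(indices[i + 2]);
        }
    }
    return out;
}
```

With Ogre you would then write the compacted vector back into the locked hardware index buffer and reduce the index count accordingly. Note that HBL_DISCARD throws away the previous contents of the buffer, so reading indices out of a buffer locked with it, as the snippet above does, is itself problematic; read with HBL_READ_ONLY first, or keep a CPU-side copy of the indices.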
  4. I have a simple problem: converting 3D points into 2D image coordinates, where the image center is (0, 0) and the coordinates range from -1 to 1. I have written the following equations with the help of @iedoc, but I still don't get normalized points. Another question: how would I debug this? I only have the ability to draw spheres, so I can't draw 2D circles. First, I have the camera position and orientation as a quaternion; I convert the quaternion to a rotation matrix, then compose the 4x4 camera pose matrix. That part works and I have tested it.

    ```cpp
    const Ogre::Vector3 cameraPosition = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldPosition();
    const Ogre::Quaternion cameraOrientation = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldOrientation();

    Ogre::Matrix4 cameraPose;
    Ogre::Matrix3 orienatationMatrix;
    cameraOrientation.ToRotationMatrix(orienatationMatrix);

    cameraPose[0][0] = orienatationMatrix[0][0];
    cameraPose[1][0] = orienatationMatrix[1][0];
    cameraPose[2][0] = orienatationMatrix[2][0];

    cameraPose[0][1] = orienatationMatrix[0][1];
    cameraPose[1][1] = orienatationMatrix[1][1];
    cameraPose[2][1] = orienatationMatrix[2][1];

    cameraPose[0][2] = orienatationMatrix[0][2];
    cameraPose[1][2] = orienatationMatrix[1][2];
    cameraPose[2][2] = orienatationMatrix[2][2];

    cameraPose[0][3] = cameraPosition.x;
    cameraPose[1][3] = cameraPosition.y;
    cameraPose[2][3] = cameraPosition.z;

    cameraPose[3][0] = 0;
    cameraPose[3][1] = 0;
    cameraPose[3][2] = 0;
    cameraPose[3][3] = 1;

    Ogre::Vector3 pos, scale;
    Ogre::Quaternion orient;
    cameraPose.decomposition(pos, scale, orient);

    std::vector<Ogre::Vector2> projectedFeaturePoints;
    Core::CameraIntrinsics cameraIntrinsics = Core::EnvironmentInformation::getSingleton()->getCameraIntrinsics();
    Core::Resolution screenResolution = Core::EnvironmentInformation::getSingleton()->getScreenResolution();
    Core::EnvironmentInformation::AspectRatio aspectRatio = Core::EnvironmentInformation::getSingleton()->getScreenAspectRatio();

    Ogre::Matrix4 viewProjection = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraViewProjectionMatrix();

    for (int i = 0; i < out.numberofpoints; i++)
    {
        Ogre::Vector4 pt;
        pt.x = out.pointlist[(3 * i)];
        pt.y = out.pointlist[(3 * i) + 1];
        pt.z = out.pointlist[(3 * i) + 2];
        pt.w = 1;

        Ogre::Vector4 pnt = cameraPose.inverse() * pt;

        float x = (((pnt.x - cameraPosition.x) * cameraIntrinsics.focalLength.x) / pnt.z) + cameraPosition.x;
        float y = (((pnt.y - cameraPosition.y) * cameraIntrinsics.focalLength.y) / pnt.z) + cameraPosition.y;

        projectedFeaturePoints.push_back(Ogre::Vector2(x, y));
    }
    ```
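For comparison, the standard way to get coordinates in [-1, 1] with the image center at (0, 0) is to transform the world point by the combined view-projection matrix and divide by w; adding the camera position back in image space, as in the code above, is not part of that pipeline. A minimal, Ogre-free sketch (the Mat4/Vec4 types here are stand-ins, not Ogre's):

```cpp
#include <array>
#include <cmath>

struct Vec4 { float x, y, z, w; };

// Row-major 4x4 matrix: m[row][col].
using Mat4 = std::array<std::array<float, 4>, 4>;

static Vec4 mul(const Mat4& m, const Vec4& v)
{
    return {
        m[0][0] * v.x + m[0][1] * v.y + m[0][2] * v.z + m[0][3] * v.w,
        m[1][0] * v.x + m[1][1] * v.y + m[1][2] * v.z + m[1][3] * v.w,
        m[2][0] * v.x + m[2][1] * v.y + m[2][2] * v.z + m[2][3] * v.w,
        m[3][0] * v.x + m[3][1] * v.y + m[3][2] * v.z + m[3][3] * v.w };
}

// World point -> normalized device coordinates in [-1, 1]:
// multiply by the combined view-projection matrix, then divide by w.
bool projectToNdc(const Mat4& viewProj, const Vec4& world, float& ndcX, float& ndcY)
{
    Vec4 clip = mul(viewProj, { world.x, world.y, world.z, 1.0f });
    if (clip.w <= 0.0f) return false;   // point is behind the camera
    ndcX = clip.x / clip.w;
    ndcY = clip.y / clip.w;
    return true;
}
```

In the post's terms, viewProj is what getCameraViewProjectionMatrix() already returns, so the manual pose inversion and focal-length arithmetic are unnecessary for NDC output; pixel coordinates, if needed later, are obtained by mapping NDC through the viewport size.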
  5. I have been reading this paper: http://openaccess.thecvf.com/content_iccv_workshops_2013/W21/papers/Sugiura_3D_Surface_Extraction_2013_ICCV_paper.pdf I have already generated a tetrahedralization of my mesh and I would like to create a surface mesh, but I can't understand the algorithm. From my understanding, there is a camera that generates rays towards the points of the tetrahedra, and I get the intersections, but how would I eliminate the triangles that are inside or outside? How would I detect inside or outside polygons? Would someone give pseudo code for the algorithm?
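This is not the paper's algorithm, but as a self-contained illustration of one way to classify a point as inside or outside a closed triangle mesh: cast a ray from the point in any direction and count how many triangles it crosses; an odd count means inside. The ray-triangle test is the standard Möller-Trumbore intersection:

```cpp
#include <cmath>
#include <vector>

struct V3 { double x, y, z; };

static V3 sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) { return { a.y * b.z - a.z * b.y,
                                       a.z * b.x - a.x * b.z,
                                       a.x * b.y - a.y * b.x }; }
static double dot(V3 a, V3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Moller-Trumbore ray/triangle intersection; returns true for a hit
// strictly in front of the ray origin and writes the ray parameter to t.
static bool rayHitsTriangle(V3 orig, V3 dir, V3 a, V3 b, V3 c, double& t)
{
    const double eps = 1e-9;
    V3 e1 = sub(b, a), e2 = sub(c, a);
    V3 p = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle plane
    double inv = 1.0 / det;
    V3 s = sub(orig, a);
    double u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    V3 q = cross(s, e1);
    double v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > eps;                           // only count forward hits
}

struct Triangle { V3 a, b, c; };

// Parity test: a point is inside a closed mesh iff a ray from it
// crosses the surface an odd number of times.
bool insideMesh(const std::vector<Triangle>& tris, V3 point, V3 dir)
{
    int crossings = 0;
    double t;
    for (const Triangle& tr : tris)
        if (rayHitsTriangle(point, dir, tr.a, tr.b, tr.c, t))
            ++crossings;
    return (crossings % 2) == 1;
}
```

Roughly speaking, the paper itself works the other way around: it does not test points against a finished surface, but labels whole tetrahedra. Rays from each camera to its reconstructed points mark every tetrahedron they traverse as free space ("outside"), the remaining tetrahedra are "inside", and the output surface is the set of triangular faces lying between an inside and an outside tetrahedron.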
  6. I'm trying to triangulate 3D points using DirectX 11: I triangulate the 3D points, then try to draw the triangles. The outcome of the triangulation is a std::vector<Tri>, where each Tri has three values a, b, c. I don't see any output; I think I have a problem with the math. Here is my code: https://pastebin.com/SQ8z3WAt
  7. 3D Transparent Shader problem

    I have tried to use the face normal, but the result is very bad: all the faces are culled. Here is how I use it.

    Vertex shader:

    ```glsl
    #version 100
    precision highp int;
    precision highp float;

    attribute vec4 vertex;
    attribute vec3 normal;

    uniform mat4 normalMatrix;
    uniform mat4 modelViewProjectionMatrix;
    uniform mat4 modelView;
    uniform vec3 camera_world_position;

    varying vec3 ec_pos;
    varying vec3 camPos;

    void main()
    {
        gl_Position = modelViewProjectionMatrix * vertex;
        vec3 norm = normal;
        //norm *= -1.0;
        ec_pos = vec3(gl_Position.x, gl_Position.y, gl_Position.z);
        camPos = camera_world_position;
        //lightDiffuse = dot(normalize(vec3(norm.x, norm.y, norm.z)), normalize(camera_world_position - vec3(gl_Position.x, gl_Position.y, gl_Position.z)));
    }
    ```

    Fragment shader:

    ```glsl
    #version 100
    precision highp int;
    precision highp float;

    uniform float time;
    uniform float touchX;
    uniform float touchY;
    uniform float touchZ;
    uniform float line;

    varying vec3 ec_pos;
    varying vec3 camPos;

    void main()
    {
        vec3 ec_normal = normalize(cross(dFdx(ec_pos), dFdy(ec_pos)));
        float lightDiffuse = dot(ec_normal, normalize(camPos));
        float rampLight = lightDiffuse;
        float light = (1.0 - rampLight) * 1.0;
        vec4 lightColor = vec4(1.0, 1.0, 1.0, 1.0);
        vec4 diffuseColor = lightColor * light;
        if (rampLight < 0.0) {
            discard;
        }
        diffuseColor = smoothstep(vec4(0.0, 0.0, 0.0, 0.0), vec4(0.7, 0.7, 0.7, 0.7), vec4(diffuseColor));
        gl_FragColor = diffuseColor;
    }
    ```
  8. SSAO using Opengles2

    Adjusting the offset didn't change anything... there are still outlines.
  9. SSAO using Opengles2

    The problem is that I'm using OpenGL ES 2 with no G-buffer... is there any alternative to using normals in SSAO? Also, I think the offset should be 1/width for x and 1/height for y; is that correct?
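On the offset question: yes, the neighbouring depth fetches used to reconstruct a normal should be one texel apart, i.e. 1/width in x and 1/height in y of the texture actually being sampled (the depth render target, not the screen, if their sizes differ). A tiny sketch of the values you would upload as a uniform, replacing hard-coded constants such as vec2(0.0, 0.001):

```cpp
#include <utility>

// One-texel UV offsets for a depth texture of the given pixel size.
// These replace hard-coded offsets like vec2(0.0, 0.001) in the
// shader's normal_from_depth() function.
std::pair<float, float> texelSize(int width, int height)
{
    return { 1.0f / static_cast<float>(width),
             1.0f / static_cast<float>(height) };
}
```

As for avoiding normals entirely: pure-depth SSAO variants still derive a normal from depth differences, as the shader in this thread does, so a G-buffer is not required; the reconstruction just becomes noisier at depth discontinuities.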
  10. SSAO using Opengles2

    Thanks a lot for your reply. Which resolution should I take into account, 1/width of the depth buffer? The effect is actually like edge filtering; is that what SSAO should do?
  11. I'm following this article, which uses the depth buffer to implement SSAO: http://theorangeduck.com/page/pure-depth-ssao I got the following result (see result image) and I don't know what's wrong. I made sure that the depth buffer image is correct and that the random texture is correct. As you can see, there is an aliasing-like effect, and it's very dark. I'm not doing any post-processing such as blurring. Would someone take a look at the shader? Maybe there is something wrong.

    ```glsl
    #version 100
    precision mediump int;
    precision mediump float;

    uniform sampler2D DepthMap;
    uniform sampler2D RandomtextureSampler;
    varying vec2 uv;

    vec3 normal_from_depth(float depth, vec2 texcoords)
    {
        const vec2 offset1 = vec2(0.0, 0.001);
        const vec2 offset2 = vec2(0.001, 0.0);

        float depth1 = texture2D(DepthMap, texcoords + offset1).r;
        float depth2 = texture2D(DepthMap, texcoords + offset2).r;

        vec3 p1 = vec3(offset1, depth1 - depth);
        vec3 p2 = vec3(offset2, depth2 - depth);

        vec3 normal = cross(p1, p2);
        normal.z = -normal.z;

        return normalize(normal);
    }

    void main()
    {
        const float total_strength = 1.40;
        const float base = 0.2;
        const float area = 0.0075;
        const float falloff = 0.000001;
        const float radius = 0.0002;
        const int samples = 16;

        vec3 sample_sphere[16];
        sample_sphere[0]  = vec3( 0.5381,  0.1856, -0.4319);
        sample_sphere[1]  = vec3( 0.1379,  0.2486,  0.4430);
        sample_sphere[2]  = vec3( 0.3371,  0.5679, -0.0057);
        sample_sphere[3]  = vec3(-0.6999, -0.0451, -0.0019);
        sample_sphere[4]  = vec3( 0.0689, -0.1598, -0.8547);
        sample_sphere[5]  = vec3( 0.0560,  0.0069, -0.1843);
        sample_sphere[6]  = vec3(-0.0146,  0.1402,  0.0762);
        sample_sphere[7]  = vec3( 0.0100, -0.1924, -0.0344);
        sample_sphere[8]  = vec3(-0.3577, -0.5301, -0.4358);
        sample_sphere[9]  = vec3(-0.3169,  0.1063,  0.0158);
        sample_sphere[10] = vec3( 0.0103, -0.5869,  0.0046);
        sample_sphere[11] = vec3(-0.0897, -0.4940,  0.3287);
        sample_sphere[12] = vec3( 0.7119, -0.0154, -0.0918);
        sample_sphere[13] = vec3(-0.0533,  0.0596, -0.5411);
        sample_sphere[14] = vec3( 0.0352, -0.0631,  0.5460);
        sample_sphere[15] = vec3(-0.4776,  0.2847, -0.0271);

        vec3 random = normalize(texture2D(RandomtextureSampler, uv * 4.0).rgb);
        float depth = texture2D(DepthMap, uv).r;
        vec3 position = vec3(uv, depth);
        vec3 normal = normal_from_depth(depth, uv);

        float radius_depth = radius / depth;
        float occlusion = 0.0;

        for (int i = 0; i < 16; i++)
        {
            vec3 ray = radius_depth * reflect(sample_sphere[i], random);
            vec3 hemi_ray = position + sign(dot(ray, normal)) * ray;

            float occ_depth = texture2D(DepthMap, clamp(hemi_ray.xy, 0.0, 1.0)).r;
            float difference = depth - occ_depth;

            occlusion += step(falloff, difference) * (1.0 - smoothstep(falloff, area, difference));
        }

        float ao = 1.0 - total_strength * occlusion * (1.0 / 16.0);
        float oc = clamp(ao + base, 0.0, 1.0);
        gl_FragColor = vec4(oc, oc, oc, 1.0);
    }
    ```

    The depth texture is a render-to-texture and its format is PF_FLOAT16_R.
  12. I'm trying to render the depth buffer using an RTT, then pass it to a compositor and render a full-screen quad, but I don't get any depth texture at all. Here is my code:

    ```cpp
    // Create the depth render texture
    Ogre::TexturePtr depthTexture = Ogre::TextureManager::getSingleton().createManual(
        "DepthMap",
        Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
        Ogre::TEX_TYPE_2D,
        mOgreCamera->getViewport()->getActualWidth(),
        mOgreCamera->getViewport()->getActualHeight(),
        0,
        Ogre::PF_FLOAT16_R,
        Ogre::TU_RENDERTARGET);

    Ogre::MaterialPtr material = Ogre::MaterialManager::getSingleton().getByName("Ogre/Compositor/SSAO");
    material->getTechnique(0)->getPass(0)->getTextureUnitState(0)->setTextureName("DepthMap");
    // material->getTechnique(0)->getPass(0)->setVertexProgram("SSAO_vs");
    // material->getTechnique(0)->getPass(0)->setFragmentProgram("SSAO_fs");
    //material->load();

    Ogre::RenderTexture* renderTexture = depthTexture->getBuffer()->getRenderTarget();
    renderTexture->addViewport(mOgreCamera);
    renderTexture->getViewport(0)->setClearEveryFrame(true);
    renderTexture->getViewport(0)->setBackgroundColour(Ogre::ColourValue(0.8f, 0.8f, 0.8f));
    renderTexture->getViewport(0)->setOverlaysEnabled(false);
    ```

    And this is the compositor:

    ```text
    compositor DepthMap
    {
        technique
        {
            // Temporary textures
            texture rt0 target_width target_height PF_A8R8G8B8

            target rt0
            {
                // Render output from previous compositor (or original scene)
                input previous
            }

            target_output
            {
                // Start with clear output
                input none

                // Draw a fullscreen quad with the black and white image
                pass render_quad
                {
                    // Renders a fullscreen quad with a material
                    material Ogre/Compositor/SSAO
                    input 0 rt0
                }
            }
        }
    }
    ```
  13. 3D Transparent Shader problem

    Here is another screenshot of that problem.
  14. 3D Transparent Shader problem

    How would I use the face normals ?
  15. Hi all, I'm having a problem with a fragment shader: https://imgur.com/a/uDGXK I don't want to render the parts that are red in the image. I have two passes, which cull clockwise and anticlockwise, and I take the dot product of the normal with the camera position; if it's less than 0, I output a transparent fragment, otherwise I discard the fragment. Here is the shader: https://pastebin.com/WyjuuqbT
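One likely culprit, offered as a guess: the dot product of the normal with the camera *position* only approximates the view direction when the fragment is near the origin. The usual facing test compares the normal with the normalized vector from the fragment to the camera. A small CPU-side sketch of that test (the names are illustrative, not taken from the shader):

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3 normalize(Vec3 v)
{
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Facing test as usually written in a fragment shader: compare the
// normal against the direction from the fragment to the camera,
// not against the raw camera position.
bool facesCamera(Vec3 normal, Vec3 fragPos, Vec3 camPos)
{
    Vec3 toCamera = normalize({ camPos.x - fragPos.x,
                                camPos.y - fragPos.y,
                                camPos.z - fragPos.z });
    return dot(normalize(normal), toCamera) > 0.0f;
}
```

A second thing to check: in the vertex shader from this thread, ec_pos is assigned from gl_Position, which is a clip-space position, while camera_world_position is in world space, so the two have to be brought into the same space (e.g. pass the world-space vertex position in a varying) before any such dot product is meaningful.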