
AhmedSaleh

Member

  • Content count: 151
  • Joined
  • Last visited

Community Reputation: 191 Neutral

1 Follower

About AhmedSaleh

  • Rank: Member

Personal Information

  • Interests: Programming
  1. I cast a ray, and I think that is the easiest solution, since I want to detect the curve and slice the curved parts into separate game objects.
  2. @Scouting Ninja Thanks. I created one large mesh as a game object, with curves and straight lines in the track. How would I still know whether I'm moving along a straight line or a curve from the game object's mesh? I think I still need to ray cast along -Forward.Right so that it intersects the mesh?
  3. Would you elaborate with some pseudocode, please, so I get the idea?
  4. Sorry, I miswrote it. The car is in a real car game, so it can drive along straight lines, curves, etc. I want to know if the car is actually driving through a curve; then I need to deduct some score if it passes to the left of a solid line on the track.
  5. Thanks a lot for the sample code. My car is moving through a curve, and there are solid lines on its right and on its left, so I need to know if it is moving through a curve AND passing to the left of the solid line; in that case I have to deduct some score.
  6. @Scouting Ninja I'm trying to apply some logic when the car is moving through a curve, so I have to detect that. Can you provide some pseudocode?
  7. How would I know in Unity if the car is driving into a curve? I can get the angular velocity of the rigidbody, but how would I detect that?
  8. FBX Connections

     I use the assimp FBX importer and get a list of meshes, but there are materials that are not assigned to some meshes. I opened the FBX ASCII file, searched for the material name, and found it in the FBX Connections section. I looked at the documentation, but I don't understand how FBX connections work or how I should read them so that I can relate a material to a mesh. Here is a screenshot of my FBX file's connections. The name MauerWerk is a material, but how would I know that this node is a material?
  9. I'm trying to write a leather material shader. I have a normal map, a grayscale bump map, a specular map, a diffuse map, and cube maps. I have done the following:

```glsl
#version 100
precision highp int;
precision highp float;

uniform sampler2D diffuseColorMap;
uniform sampler2D ambientOcclusionMap;
uniform sampler2D normalMap;
uniform sampler2D specularMap;
uniform sampler2D bumpMap;
uniform samplerCube envMap;
uniform float reflectionFactor;
uniform float diffuseFactor;
uniform float opacity;

varying vec2 texCoord[2];
varying vec3 viewWorld;
varying vec3 eyeVector;
varying mat3 world2Tangent;
varying vec3 lightVec;
varying vec3 halfVec;
varying vec3 eyeVec;

void main()
{
    vec3 normalTangent = 2.0 * texture2D(normalMap, texCoord[0]).rgb - 1.0;

    vec4 x_forw = texture2D(bumpMap, texCoord[0] + vec2(1.0/2048.0, 0.0));
    vec4 x_back = texture2D(bumpMap, texCoord[0] - vec2(1.0/2048.0, 0.0));
    vec4 y_forw = texture2D(bumpMap, texCoord[0] + vec2(0.0, 1.0/2048.0));
    vec4 y_back = texture2D(bumpMap, texCoord[0] - vec2(0.0, 1.0/2048.0));
    vec3 tangX = vec3(1.0, 0.0, 3.0*(x_forw.x - x_back.x));
    vec3 tangY = vec3(0.0, 1.0, 3.0*(y_forw.x - y_back.x));
    vec3 heightNormal = normalize(cross(tangX, tangY));
    heightNormal = heightNormal * 0.5 + 0.5;
    float bumpAngle = max(0.0, dot(vec3(0.0, 0.0, 1.0), heightNormal));

    vec3 normalWorld = normalize(world2Tangent * heightNormal);
    vec3 refDir = viewWorld - 2.0 * dot(viewWorld, normalWorld) * normalWorld;

    // compute diffuse lighting
    vec4 diffuseMaterial = texture2D(diffuseColorMap, texCoord[0]);
    vec4 diffuseLight = vec4(1.0, 1.0, 1.0, 1.0);

    // In doom3, specular value comes from a texture
    vec4 specularMaterial = texture2D(specularMap, texCoord[0]);
    vec4 specularLight = vec4(1.0, 1.0, 1.0, 1.0);
    float shininess = pow(max(dot(halfVec, heightNormal), 0.0), 2.0);

    vec4 reflection = textureCube(envMap, refDir);

    //gl_FragColor = diffuseMaterial * diffuseLight * lamberFactor;
    //gl_FragColor += specularMaterial * specularLight * shininess;
    //gl_FragColor += reflection * 0.3;
    gl_FragColor = diffuseMaterial * bumpAngle;
}
```

     My question is: how would I apply the grayscale bump map to the result of the reflection, and what's wrong in my shader?
  10. Remove Triangles from ogre mesh

    @Mike2343 Indices is an array of unsigned int (unsigned int *indices). u1, u2, u3 are unsigned ints referring to the triangle's indices.
  11. Remove Triangles from ogre mesh

    @Mike2343 Can you show me C++ code to remove those vertices from the indices array?
  12. I'm casting a ray to get an intersection point, then I check a condition. I would like to remove that particular triangle from the mesh. In the ray-casting function, when I get a hit, I save three variables u1, u2, u3, which are the indices of the triangle that was hit.

```cpp
Ogre::Ray ray;
ray.setOrigin(Ogre::Vector3(ray_pos.x, ray_pos.y, ray_pos.z));
ray.setDirection(Ogre::Vector3(ray_dir.x, ray_dir.y, ray_dir.z));
Ogre::Vector3 result;
RaycastFromPoint(ray.getOrigin(), ray.getDirection(), result, u1, u2, u3);

float pointZ = out.pointlist[(3*i)+2];
if (result.z < pointZ)
{
    std::cout << "Remove edge " << u1 << " " << u2 << " " << u3 << std::endl;
    Utility::DebugPrimitives::drawSphere(result, 0.3f,
        "RayMesh" + std::to_string(counter), "SimpleColors/SolidGreen");
    cntEdges++;

    indices = static_cast<uint16_t *>(indexBuffer->lock(Ogre::HardwareBuffer::HBL_DISCARD));
    for (int i = 0; i < out.numberofedges; i++)
    {
        if (indices[i] == u1 || indices[i] == u2 || indices[i] == u3)
        {
            continue;
        }
        out.edgelist[i] = indices[i];
    }
    indexBuffer->unlock();

    indices = static_cast<uint16_t *>(indexBuffer->lock(Ogre::HardwareBuffer::HBL_DISCARD));
    for (int i = 0; i < numEdges - cntEdges; i++)
    {
        indices[i] = out.edgelist[i];
    }
    indexBuffer->unlock();
}
```
  13. I have a simple problem: converting 3D points into 2D image coordinates, where the image center should be (0,0) and the range -1 to 1. I have written the following equations with the help of @iedoc, but I still don't get normalized points. Another question: how would I debug it? I only have the ability to draw spheres, so I can't draw 2D circles. First, I have the camera position and orientation as a quaternion; I convert the quaternion to a rotation matrix, then compose the 4x4 camera pose matrix. That part works, and I tested it:

```cpp
const Ogre::Vector3 cameraPosition = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldPosition();
const Ogre::Quaternion cameraOrientation = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraWorldOrientation();

Ogre::Matrix4 cameraPose;
Ogre::Matrix3 orienatationMatrix;
cameraOrientation.ToRotationMatrix(orienatationMatrix);

cameraPose[0][0] = orienatationMatrix[0][0];
cameraPose[1][0] = orienatationMatrix[1][0];
cameraPose[2][0] = orienatationMatrix[2][0];
cameraPose[0][1] = orienatationMatrix[0][1];
cameraPose[1][1] = orienatationMatrix[1][1];
cameraPose[2][1] = orienatationMatrix[2][1];
cameraPose[0][2] = orienatationMatrix[0][2];
cameraPose[1][2] = orienatationMatrix[1][2];
cameraPose[2][2] = orienatationMatrix[2][2];
cameraPose[0][3] = cameraPosition.x;
cameraPose[1][3] = cameraPosition.y;
cameraPose[2][3] = cameraPosition.z;
cameraPose[3][0] = 0;
cameraPose[3][1] = 0;
cameraPose[3][2] = 0;
cameraPose[3][3] = 1;

Ogre::Vector3 pos, scale;
Ogre::Quaternion orient;
cameraPose.decomposition(pos, scale, orient);

std::vector<Ogre::Vector2> projectedFeaturePoints;
Core::CameraIntrinsics cameraIntrinsics = Core::EnvironmentInformation::getSingleton()->getCameraIntrinsics();
Core::Resolution screenResolution = Core::EnvironmentInformation::getSingleton()->getScreenResolution();
Core::EnvironmentInformation::AspectRatio aspectRatio = Core::EnvironmentInformation::getSingleton()->getScreenAspectRatio();

Ogre::Matrix4 viewProjection = Stages::StageManager::getSingleton()->getActiveStage()->getActiveCamera()->getCameraViewProjectionMatrix();

for (int i = 0; i < out.numberofpoints; i++)
{
    Ogre::Vector4 pt;
    pt.x = out.pointlist[(3*i)];
    pt.y = out.pointlist[(3*i) + 1];
    pt.z = out.pointlist[(3*i) + 2];
    pt.w = 1;

    Ogre::Vector4 pnt = cameraPose.inverse() * pt;

    float x = (((pnt.x - cameraPosition.x) * cameraIntrinsics.focalLength.x) / pnt.z) + cameraPosition.x;
    float y = (((pnt.y - cameraPosition.y) * cameraIntrinsics.focalLength.y) / pnt.z) + cameraPosition.y;

    projectedFeaturePoints.push_back(Ogre::Vector2(x, y));
}
```
  14. I have been reading this paper: http://openaccess.thecvf.com/content_iccv_workshops_2013/W21/papers/Sugiura_3D_Surface_Extraction_2013_ICCV_paper.pdf I have already generated tetrahedra for my mesh, and I would like to create a surface mesh, but I can't understand the algorithm. From my understanding, there is a camera that casts rays toward the points of the tetrahedra and I get the intersections, but how would I eliminate the triangles that are inside or outside? How would I detect inside or outside polygons? Would someone give pseudocode for the algorithm?
  15. I'm trying to triangulate 3D points using DirectX 11: I triangulate the 3D points, then I try to draw the triangles. The outcome of the triangulation is a std::vector<Tri>, where each Tri has three values a, b, c. I don't see any output; I think I have a problem with the math. Here is my code: https://pastebin.com/SQ8z3WAt