


About takis

  1. I flip the normal, but I don't see any changes. Here is my code; perhaps I am doing something wrong.

     [cpp]
     // Phong shading model
     const Vec3 Renderer::phong(Vec3 n,          // shading normal
                                const Vec3& l,   // light direction
                                const Vec3& v,   // direction to viewer
                                const Vec3& rd,  // diffuse color
                                const Vec3& rs,  // specular color
                                const float ns)  // shininess (Phong exponent)
     {
         // diffuse term: Rd times absolute value of cosine between light
         // direction and shading normal
         Vec3 color = rd * fabs(n & l);
         if (ns > 0.f) {
             // specular term
             Vec3 r = n * 2 * (n & v) - v;  // ideal reflected direction
             float a = r & l;
             if (a > 0.f)
                 color += rs * powf(a, ns);
         }
         return color;
     }

     Vec3 Renderer::headlight_phong(Ray* ray, Hit* hit)
     {
         Vec3 n(hit->getShadingNormal());   // shading normal
         Vec3 v = -Vec3(ray->dir);          // direction to viewer
         Vec3 l = -Vec3(ray->dir);          // direction to light (at viewer)
         Texture*  txt = sim->getObject()->getTexture();
         Material* mat = sim->getObject()->getMaterial();
         Vec3 rd(txt                        // diffuse color: hit surface has texture?
                 ? txt->lookup(hit->getTexCoord())  // yes: use texture color
                 : mat->diffuse);           // no: use diffuse material
         Vec3 rs(mat->specular);            // specular color
         float ns = mat->shininess;         // shininess
         Vec3 ed(mat->emissivity);          // emissivity
         return ed + phong(n, l, v, rd, rs, ns);
     }
     [/cpp]
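     Worth noting when debugging the question above: because the diffuse term takes fabs(n & l) and the headlight specular term is symmetric in n, flipping the shading normal cannot change this shader's output. A minimal sketch of a two-sided diffuse term that does respond to orientation (hypothetical minimal vector type, not the post's Vec3): flip the normal toward the viewer and clamp, instead of taking the absolute value.

```cpp
#include <cassert>
#include <cmath>

// Minimal 3-vector for the sketch (hypothetical; the post's Vec3 differs).
struct V3 { float x, y, z; };
static float dot(const V3& a, const V3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }
static V3 neg(const V3& a) { return {-a.x, -a.y, -a.z}; }

// Two-sided diffuse: flip the normal toward the viewer so interior
// surfaces are lit from the correct side instead of relying on fabs().
float twoSidedDiffuse(V3 n, const V3& v, const V3& l, float rd)
{
    if (dot(n, v) < 0.f)   // back face as seen from the viewer
        n = neg(n);        // flip so n points toward the viewer
    float cosNL = dot(n, l);
    return cosNL > 0.f ? rd * cosNL : 0.f;  // clamp, don't take fabs
}
```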
  2. I am using a voxel grid to speed up the ray tracing, and I hit the triangle at the back. If I use Phong shading, I see the back of the object instead of the interior surface. How can I use Phong shading to render the inside of an object?
  3. Hello, I am simulating corrosion on 3D objects. The corrosion can eat holes into an object. How can I ray trace so that I can look inside the object? For now I render the corresponding pixel in the hole as black.
  4. Hello, I was just wondering how you can rotate a point p around a line segment [a,b]. I do it like this:

     [cpp]
     Quat q;
     q.rotation((b - a).normalized(), angle);
     Mtx3 m = q.makeMatrix();
     Vec3 pos = m * (p - a);
     pos += a;

     void Quat::rotation(const Vec3& u, float angle)
     {
         angle *= 0.5f;
         s = cosf(angle);
         v = u * sinf(angle);
         normalize();
     }

     float Quat::magnitude()
     {
         return sqrtf(s*s + (v & v));  // sqrt, not the squared norm
     }

     void Quat::normalize()
     {
         float m = magnitude();
         if (m == 0.0f) return;
         s /= m;
         v *= (1.0f / m);
     }

     Mtx3 Quat::makeMatrix() const
     {
         Mtx3 m;
         m[0] = 1.0f - 2*(v[Y]*v[Y] + v[Z]*v[Z]);
         m[1] = 2*(v[X]*v[Y] + s*v[Z]);
         m[2] = 2*(v[X]*v[Z] - s*v[Y]);
         m[3] = 2*(v[X]*v[Y] - s*v[Z]);
         m[4] = 1.0f - 2*(v[X]*v[X] + v[Z]*v[Z]);
         m[5] = 2*(v[Y]*v[Z] + s*v[X]);
         m[6] = 2*(v[X]*v[Z] + s*v[Y]);
         m[7] = 2*(v[Y]*v[Z] - s*v[X]);
         m[8] = 1.0f - 2*(v[X]*v[X] + v[Y]*v[Y]);
         return m;
     }
     [/cpp]
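     The rotation above can be sanity-checked against a known case. A self-contained sketch (hypothetical minimal types, not the post's Quat/Mtx3 classes) that performs the same rotate-about-a-segment steps via direct quaternion conjugation q·p·q*; rotating p = (1,0,0) by 90° about the segment from (0,0,0) to (0,0,1) should give (0,1,0):

```cpp
#include <cassert>
#include <cmath>

struct V { float x, y, z; };
struct Quat { float s, x, y, z; };

// Hamilton product of two quaternions.
static Quat qmul(const Quat& a, const Quat& b)
{
    return { a.s*b.s - a.x*b.x - a.y*b.y - a.z*b.z,
             a.s*b.x + a.x*b.s + a.y*b.z - a.z*b.y,
             a.s*b.y - a.x*b.z + a.y*b.s + a.z*b.x,
             a.s*b.z + a.x*b.y - a.y*b.x + a.z*b.s };
}

// Rotate point p around segment [a,b]: translate so 'a' is the origin,
// conjugate the pure quaternion (0, p - a) by q = (cos(t/2), u*sin(t/2))
// where u is the normalized segment direction, then translate back.
V rotateAroundSegment(const V& p, const V& a, const V& b, float angle)
{
    V u = { b.x - a.x, b.y - a.y, b.z - a.z };
    float len = sqrtf(u.x*u.x + u.y*u.y + u.z*u.z);
    u = { u.x/len, u.y/len, u.z/len };
    float h = 0.5f * angle, sn = sinf(h);
    Quat q  = { cosf(h), u.x*sn, u.y*sn, u.z*sn };
    Quat qc = { q.s, -q.x, -q.y, -q.z };              // conjugate
    Quat pp = { 0.f, p.x - a.x, p.y - a.y, p.z - a.z };
    Quat r  = qmul(qmul(q, pp), qc);
    return { r.x + a.x, r.y + a.y, r.z + a.z };
}
```

Expanding q into a 3x3 matrix, as makeMatrix does, is equivalent and cheaper when many points share one rotation.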
  5. Hello, I am looking on the net for how to calculate the intersection point between a ray and a 3D segment. I am moving particles on a mesh to simulate lichen. If I want to move from triangle t1 to a neighbouring triangle t2 in the direction d, I shoot a ray from the position of the particle in the direction d. Then I check which edge of t1 I hit. Once I have found the edge E and the intersection point p, I look up the neighbouring triangle t2. I calculate the tangent direction by multiplying the normal of t2 with the direction d, which gives me a direction d' in the plane of triangle t2. So I get a new ray with the intersection point p as origin and d' as direction. I repeat this until the maximum travel distance is reached. Can anyone help me with the intersection code?
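     The edge test described above reduces to a 2D ray-vs-segment intersection once the ray and the edge are expressed in the triangle's plane. A minimal sketch (hypothetical names, assuming the projection to 2D has already been done): solve orig + t*dir = e0 + s*(e1 - e0) with 2D cross products; a valid hit needs t >= 0 and 0 <= s <= 1.

```cpp
#include <cassert>
#include <cmath>

struct P2 { float x, y; };

// 2D scalar cross product (the z component of the 3D cross product).
static float cross2(const P2& a, const P2& b) { return a.x*b.y - a.y*b.x; }

// Intersect the ray (orig, dir) with the segment [e0, e1].
// On success, fills t (ray parameter) and the hit point, and returns true.
bool rayVsSegment(const P2& orig, const P2& dir,
                  const P2& e0, const P2& e1,
                  float& t, P2& hit)
{
    P2 e = { e1.x - e0.x, e1.y - e0.y };        // edge direction
    float denom = cross2(dir, e);
    if (fabsf(denom) < 1e-8f) return false;     // parallel: no single hit
    P2 w = { e0.x - orig.x, e0.y - orig.y };
    t       = cross2(w, e)   / denom;           // parameter along the ray
    float s = cross2(w, dir) / denom;           // parameter along the edge
    if (t < 0.f || s < 0.f || s > 1.f) return false;
    hit = { orig.x + t*dir.x, orig.y + t*dir.y };
    return true;
}
```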
  6. takis

    look up table

    If I understand it correctly, I just have to put my N colors in a 1xN array.
  7. Hello, I am simulating corrosion with particles. I adjust the color of a particle according to its lifetime. When a particle is dead, it turns black, and I use the alpha channel to simulate holes in the surface. But how can I use a lookup table to find colors according to the lifetime of a particle? I got the colors from real-life images of corrosion, and I have to choose one of them according to the lifetime: the longer a particle lives, the darker brown it gets.
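     A lifetime-to-color lookup table can indeed be a 1xN array: normalize the lifetime to [0,1] and round to a table index. A sketch with placeholder colors (the real entries would be sampled from the corrosion photos):

```cpp
#include <cassert>
#include <cstddef>

struct RGB { unsigned char r, g, b; };

// Colors ordered light to dark brown, young to old
// (placeholder values; real entries would come from the images).
static const RGB kCorrosionLUT[] = {
    {205, 140,  90},   // young particle: light brown
    {170, 105,  60},
    {130,  75,  40},
    { 90,  50,  25},
    { 40,  20,  10},   // old particle: dark brown
};
static const size_t kLUTSize = sizeof(kCorrosionLUT) / sizeof(kCorrosionLUT[0]);

// Map age in [0, maxAge] to a table entry; clamps out-of-range ages.
RGB corrosionColor(float age, float maxAge)
{
    float t = age / maxAge;                          // normalize to [0, 1]
    if (t < 0.f) t = 0.f;
    if (t > 1.f) t = 1.f;
    size_t i = (size_t)(t * (kLUTSize - 1) + 0.5f);  // round to nearest entry
    return kCorrosionLUT[i];
}
```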
  8. Hello, I have a texture atlas for an object in my scene. When I look up the pixel in my texture that corresponds to my texture coordinate and change its color, I get a strange result: although only one pixel in my texture changes color, I see more than one change after texture mapping. Here is the link to the screenshots of my problem, which should make it clearer.

     [cpp]
     void Texture::adjust(const Vector2D& texcoord, const Vector3D& color, float alpha)
     {
         float u = texcoord.getU();
         float v = texcoord.getV();
         float x = (u - floorf(u)) * m_width;   // wrap u into [0,1), scale to texels
         float y = (v - floorf(v)) * m_height;
         float i = floorf(x);
         float j = floorf(y);
         unsigned char* pix = lookup_raw((int)i, (int)j);
         pix[0] = (unsigned char)(color.getR() < 0.f ? 0.f : color.getR() > 1.f ? 255.f : color.getR() * 255.f);
         pix[1] = (unsigned char)(color.getG() < 0.f ? 0.f : color.getG() > 1.f ? 255.f : color.getG() * 255.f);
         pix[2] = (unsigned char)(color.getB() < 0.f ? 0.f : color.getB() > 1.f ? 255.f : color.getB() * 255.f);
         if (m_channels == 4)
             pix[3] = (unsigned char)(alpha < 0.f ? 0.f : alpha > 1.f ? 255.f : alpha * 255.f);
     }

     inline unsigned char* Texture::lookup_raw(int col, int row)
     {
         return m_map + m_channels * (m_width * row + col);
     }
     [/cpp]
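     One likely cause of the effect described above, offered as an assumption: with bilinear filtering enabled, each sample at (u, v) blends the four surrounding texels, so editing a single texel visibly changes every sample whose 2x2 footprint contains it. A sketch of that footprint (hypothetical helper, assuming repeat wrapping and the usual half-texel sample offset):

```cpp
#include <cassert>
#include <cmath>

// The four texels a bilinear sample at (u, v) reads, for a width x height
// texture with repeat wrapping. Editing any one of these texels changes
// the filtered result at (u, v), not just the sample that lands on it.
void bilinearFootprint(float u, float v, int width, int height,
                       int cols[2], int rows[2])
{
    // Samples are taken at texel centers, hence the half-texel offset.
    float x = u * width - 0.5f;
    float y = v * height - 0.5f;
    int x0 = (int)floorf(x);
    int y0 = (int)floorf(y);
    // Wrap indices into [0, width) and [0, height) for repeat addressing.
    cols[0] = ((x0 % width) + width) % width;
    cols[1] = (((x0 + 1) % width) + width) % width;
    rows[0] = ((y0 % height) + height) % height;
    rows[1] = (((y0 + 1) % height) + height) % height;
}
```

With nearest-neighbor filtering, by contrast, only the single edited texel would show a change.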
  9. takis

    ray tracer: shooting ray

    I just want to know: if I rotate an object with the arcball, do I have to change the eye of the camera in order to shoot rays at the object? How can I extract the eye and the center from the modelview matrix?

    [cpp]
    void ArcBall::mouseMove(float mouseX, float mouseY, float viewW, float viewH)
    {
        // map [0, w] to [-1, 1] and [0, h] to [-1, 1]
        float x = 2*mouseX/viewW - 1.0f;
        float y = 2*((viewH - mouseY)/viewH) - 1.0f;
        Vector3D vTo = mapToSphere(x, y);
        m_qDrag.setFromPoints(m_vFrom, vTo);
        m_qNow.product(m_qDrag, m_qDown);
        Quaternion q = m_qNow;
        q.conjugate();
        m_mTrans = q.getRotationMatrix();
    }
    [/cpp]
  10. Hello, I have implemented an arcball in my ray tracer. The arcball works perfectly:

      [cpp]
      glPushMatrix();
      glMultMatrixf(m_pArcBall->getrotationmatrix().ptr());
      m_pDocument->draw();
      glPopMatrix();
      [/cpp]

      For my ray tracer, do I have to recalculate my view position and view direction when I am using the arcball? Can I recalculate them like this?

      [cpp]
      viewpos = m_pArcBall->getrotationmatrix() * viewpos;
      viewdir = m_pArcBall->getrotationmatrix() * viewdir;
      [/cpp]

      where eye = viewpos and center = viewpos + viewdir.
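      For the question above, one consistent option: rotating the object by the arcball matrix R while the camera stays fixed gives the same image as leaving the object alone and transforming the camera by the inverse rotation, which for a pure rotation is the transpose R^T. A sketch with a minimal row-major 3x3 type (hypothetical, not the post's classes):

```cpp
#include <cassert>
#include <cmath>

struct Vec3f { float x, y, z; };

// Row-major 3x3 matrix applied as M * v.
struct M3 {
    float m[9];
    Vec3f mul(const Vec3f& v) const {
        return { m[0]*v.x + m[1]*v.y + m[2]*v.z,
                 m[3]*v.x + m[4]*v.y + m[5]*v.z,
                 m[6]*v.x + m[7]*v.y + m[8]*v.z };
    }
    M3 transposed() const {
        return {{ m[0], m[3], m[6],
                  m[1], m[4], m[7],
                  m[2], m[5], m[8] }};
    }
};

// Instead of transforming the scene by the arcball rotation R, trace rays
// from a camera moved by the inverse rotation R^T (equal to R^-1 because
// R is a pure rotation). The image is the same and the geometry stays put.
void rotatedCamera(const M3& R, const Vec3f& eye, const Vec3f& dir,
                   Vec3f& rtEye, Vec3f& rtDir)
{
    M3 Rt = R.transposed();
    rtEye = Rt.mul(eye);
    rtDir = Rt.mul(dir);
}
```

Then eye = rtEye and center = rtEye + rtDir feed the usual primary-ray setup.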
  11. Hello, I am using the function by Tomas Möller and Ben Trumbore to calculate the intersection with a triangle:

      [cpp]
      intersect_triangle(float orig[3], float dir[3],
                         float vert0[3], float vert1[3], float vert2[3],
                         float *t, float *u, float *v);
      [/cpp]

      It also returns the barycentric coordinates u and v. Can I use them to calculate the texture coordinates of the intersection point, and if so, how? I have the texture coordinates of the vertices of the triangle; if I could reuse u and v, I would not have to calculate the barycentric coordinates again. I compute the shading normal as follows, with u and v (here s and t) from intersect_triangle:

      [cpp]
      float* Hit::getShadingNormal(void)
      {
          if (!triangle->surface->vnorm)   // no per-vertex normals
              return triangle->normal;
          else {
              const R3* n = triangle->getNormals();
              PINT(n[0], n[1], n[2], s, t, normal);
              R3NORMALIZE(normal);
              return &normal[0];
          }
      }

      /* Point IN Triangle: barycentric parametrisation */
      #define PINT(v0, v1, v2, u, v, p) {                                         \
          double _u = (u), _v = (v);                                              \
          (p)[0] = (v0)[0] + _u * ((v1)[0] - (v0)[0]) + _v * ((v2)[0] - (v0)[0]); \
          (p)[1] = (v0)[1] + _u * ((v1)[1] - (v0)[1]) + _v * ((v2)[1] - (v0)[1]); \
          (p)[2] = (v0)[2] + _u * ((v1)[2] - (v0)[2]) + _v * ((v2)[2] - (v0)[2]); \
      }
      [/cpp]
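      The u, v returned by intersect_triangle can interpolate texture coordinates directly, because the Möller-Trumbore parameters are barycentric weights: uv = (1 - u - v)*uv0 + u*uv1 + v*uv2, exactly as the PINT macro does for normals. A small sketch (hypothetical function name):

```cpp
#include <cassert>
#include <cmath>

// Interpolate per-vertex texture coordinates at the hit point using the
// barycentric u, v from Moller-Trumbore ray/triangle intersection:
// weight(vert0) = 1 - u - v, weight(vert1) = u, weight(vert2) = v.
void texcoordAtHit(const float uv0[2], const float uv1[2], const float uv2[2],
                   float u, float v, float out[2])
{
    float w = 1.0f - u - v;
    out[0] = w * uv0[0] + u * uv1[0] + v * uv2[0];
    out[1] = w * uv0[1] + u * uv1[1] + v * uv2[1];
}
```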
  12. Hello, I am creating a ray tracer and using a texture atlas. Can I calculate the texture coordinates of the intersection point P as follows?

      [cpp]
      vec3 Triangle::barycentric(vec3 P)
      {
          float F_abc, F_pbc, F_apc;
          vec3 bary(0, 0, 0);
          F_abc = tri_area(_A, _B, _C);
          F_pbc = tri_area(P, _B, _C);
          F_apc = tri_area(_A, P, _C);
          if (F_abc == 0.0f) return bary;
          bary.x = F_pbc / F_abc;
          bary.y = F_apc / F_abc;
          bary.z = 1.0f - bary.x - bary.y;
          return bary;
      }

      // Texture coordinates for point P
      void Triangle::uv(vec3 P, float& u, float& v)
      {
          vec3 Pb = barycentric(P);
          u = _A.texcoord.x * Pb.x + _B.texcoord.x * Pb.y + _C.texcoord.x * Pb.z;
          v = _A.texcoord.y * Pb.x + _B.texcoord.y * Pb.y + _C.texcoord.y * Pb.z;
      }
      [/cpp]
  13. Hello, I am creating a ray tracer. I want to use an octree to store the polygons of an object. I have some questions about building and using it.

      1. Building: the maximum number of polygons in an octree node is set to 50. If there are more than 50 polygons in an octree node:
      - I create its 8 children.
      - I test each polygon against the 8 new children (the same polygon can be in more than one child).
      - If the polygon is in a child, I add it to that child.
      - Once all polygons are tested and inserted into the proper child or children, the parent node holds no polygons anymore.

      2. Using it:
      - I shoot a ray.
      - If I hit an octree node and it is not a leaf, I check which children I hit.
      - I continue only with the children that were hit.
      - If I hit a leaf, I test which face is hit.

      Is this good thinking about using an octree for ray tracing?

      greetings
      takis
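      The "test each polygon against the 8 new children" step above can be sketched with an AABB overlap test per octant (a conservative test: it may report children the polygon only touches via its bounding box). Hypothetical minimal types; a real build would recurse and re-apply the 50-polygon threshold per child:

```cpp
#include <cassert>

struct AABB { float min[3], max[3]; };

static bool overlaps(const AABB& a, const AABB& b)
{
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || a.min[i] > b.max[i]) return false;
    return true;
}

// Child octant k of a node box: bit 0/1/2 of k selects the low or high
// half along x/y/z. Returns a bitmask of the octants a polygon's
// bounding box overlaps (a polygon can land in several children).
unsigned childOctants(const AABB& node, const AABB& poly)
{
    float mid[3] = { 0.5f * (node.min[0] + node.max[0]),
                     0.5f * (node.min[1] + node.max[1]),
                     0.5f * (node.min[2] + node.max[2]) };
    unsigned mask = 0;
    for (unsigned k = 0; k < 8; ++k) {
        AABB child;
        for (int i = 0; i < 3; ++i) {
            bool hi = (k >> i) & 1u;
            child.min[i] = hi ? mid[i] : node.min[i];
            child.max[i] = hi ? node.max[i] : mid[i];
        }
        if (overlaps(child, poly)) mask |= 1u << k;
    }
    return mask;
}
```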
  14. Hello, I am trying to rotate points that lie on a mesh into a given plane. It works so far, but sometimes the point does not end up in the given plane. Could someone help me? Given polygon A with a point p on it and polygon B with a point q on it, I want to rotate point q into the plane described by polygon A.

      [cpp]
      Vec3 d = q - p;
      // build a local system around p
      Vec3 localZ = A.normal;           // the normal of polygon A
      Vec3 localY = cross(localZ, d);   // rotation axis
      localY.normalize();
      Vec3 localX = cross(localY, localZ);
      localX.normalize();
      // project d into the local system
      float x = dot(localX, d);
      float y = dot(localY, d);
      float z = dot(localZ, d);
      // calculate the rotation angle
      float a = atan2f(z, x);
      // do the rotation
      Vec3 Pos;
      Pos.rotateAroundAxis(d, localY, a);
      Pos += p;   // result

      void Vec3::rotateAroundAxis(const Vec3& p, const Vec3& axis, float a)
      {
          // axis is normalised
          float s = sinf(a);
          float c = cosf(a);
          Vec3 vpar = axis * dot(p, axis);   // component parallel to the axis
          Vec3 vprp = p - vpar;              // component perpendicular to it
          Vec3 vcrs;
          vcrs.cross(axis, p);
          m_values[0] = vpar[0] + c*vprp[0] + s*vcrs[0];
          m_values[1] = vpar[1] + c*vprp[1] + s*vcrs[1];
          m_values[2] = vpar[2] + c*vprp[2] + s*vcrs[2];
      }
      [/cpp]
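      A sign-robust alternative to the angle computation above, named plainly as a different technique: instead of rotating d by atan2f(z, x) (whose sign must match the rotation convention, a common source of "sometimes it does not end up in the plane"), project d onto plane A and rescale it to its original length. That yields the rotated offset directly, with no angle or axis sign to get wrong. A sketch (hypothetical minimal types):

```cpp
#include <cassert>
#include <cmath>

struct Pt3 { float x, y, z; };
static float dot(const Pt3& a, const Pt3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Rotate the offset d = q - p into the plane through p with unit normal n:
// remove the normal component, then restore the original length so the
// result is d rotated about an in-plane axis (not merely projected).
Pt3 rotateIntoPlane(const Pt3& p, const Pt3& q, const Pt3& n)
{
    Pt3 d = { q.x - p.x, q.y - p.y, q.z - p.z };
    float len = sqrtf(dot(d, d));
    float dn = dot(d, n);
    Pt3 proj = { d.x - dn*n.x, d.y - dn*n.y, d.z - dn*n.z };  // in-plane part
    float plen = sqrtf(dot(proj, proj));
    if (plen < 1e-8f) return p;   // degenerate: d parallel to n, axis undefined
    float s = len / plen;         // rescale to preserve |d|
    return { p.x + proj.x*s, p.y + proj.y*s, p.z + proj.z*s };
}
```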
  15. Hello, I want to speed up my ray tracer. I heard you can use octrees and kd-trees for this. How do you do this efficiently, and what is the maximum number of polygons to store in a cell of an octree?