About resle

  1. Hi, given that the current WebGL specification doesn't support texture arrays, I am implementing a sort of variable texture atlas mechanism in my latest engine. Essentially, I pass the shaders a couple of multipliers for the texture coords, to allow arbitrary atlases to be used with any mesh. My main concern is: won't multiplying every texture coord cancel out the performance gain of using only a few large textures instead of switching between many little ones? Any thoughts or personal experiences with this scenario? Thanks, a.
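[Editor's sketch] The per-texcoord cost can be made concrete. Below is a minimal illustration of the scale/offset idea in Javascript; the `atlasUV` helper and the `region` layout are invented names, not part of the original engine. In a shader this remap is a single multiply-add per texcoord, which is generally negligible next to the texture-bind and draw-call overhead that atlasing saves.

```javascript
// Each mesh keeps its own 0..1 UVs; a per-draw "region" remaps them into
// the sub-rectangle of the atlas that holds that mesh's texture.
// region = { u0, v0, uScale, vScale } describes the sub-rectangle.
function atlasUV(u, v, region) {
  return [region.u0 + u * region.uScale,
          region.v0 + v * region.vScale];
}

// Example: a tile occupying the top-right quarter of the atlas.
var region = { u0: 0.5, v0: 0.5, uScale: 0.5, vScale: 0.5 };
```

In GLSL the same remap would be `uv * uvScale + uvOffset`, one vec2 multiply-add per vertex (or fragment), so the atlas indirection itself is very cheap.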
  2. resle

    (Webgl) Glsl silhouette

    Any hint on the 2nd method - image space? I could try bending some parts of the engine to make it fit...
  3. resle

    (Webgl) Glsl silhouette

    Thanks Relfos, the 2nd technique sounds good for my case. I tried it by scaling the mesh in the 2nd pass via simple model-matrix scaling. The result looks like this: scaling is probably a bit too simplistic for some types of geometric shapes, and it doesn't take the object's own scale into account. (I can more or less guess what "extruding" means [moving a vertex along its normal?] but I am not sure how to do it, and I don't have per-vertex normal data.)
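[Editor's sketch] "Extruding" does usually mean exactly that: offsetting each vertex a small amount along its unit normal, which follows curved surfaces better than uniform scaling. A minimal sketch with plain arrays for vectors (both helper names are made up); where per-vertex normals are missing, averaging the face normals of the triangles sharing each vertex gives a usable substitute.

```javascript
// Face normal from the cross product of two triangle edges; averaging the
// face normals of all triangles that share a vertex yields a per-vertex normal.
function faceNormal(v0, v1, v2) {
  var e1 = [v1[0]-v0[0], v1[1]-v0[1], v1[2]-v0[2]];
  var e2 = [v2[0]-v0[0], v2[1]-v0[1], v2[2]-v0[2]];
  var n = [
    e1[1]*e2[2] - e1[2]*e2[1],
    e1[2]*e2[0] - e1[0]*e2[2],
    e1[0]*e2[1] - e1[1]*e2[0]
  ];
  var len = Math.sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
  return [n[0]/len, n[1]/len, n[2]/len];
}

// Offset a vertex along its (unit-length) normal by `amount`.
function extrudeVertex(vert, normal, amount) {
  return [
    vert[0] + normal[0] * amount,
    vert[1] + normal[1] * amount,
    vert[2] + normal[2] * amount
  ];
}
```

For the outline pass, the extruded mesh is typically drawn in flat black with front faces culled, so only its "shell" shows around the original silhouette.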
  4. resle

    (Webgl) Glsl silhouette

    Oww... no answers, this looks worse than I thought. By the way, I came back with a little update: image space is unfortunately unusable in my rendering scenario, so I am now searching for object-space ways of outlining a mesh. It puzzles me that the solution to this problem was so straightforward with the fixed-function pipeline, with little performance loss despite rendering every mesh twice, yet it's an ungooglable riddle now with shaders..
  5. Hi, I went on porting a simple 3d engine to WebGL and noticed that glPolygonMode and, more importantly, glLineWidth have both been deprecated. Given that, could anyone point me to some tutorial on how to draw a toon outline using GLSL? (perhaps - from what I "sniffed" on Google - in image space?) Thanks Andrea
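[Editor's sketch] One common image-space approach renders depth (or normals) into a texture and runs an edge filter over it in a post-process pass. The sketch below shows the core math as plain Javascript over a flat depth array; in a real engine the same Laplacian test would run in a GLSL fragment shader sampling neighbouring texels. `isEdge` is an illustrative name.

```javascript
// Laplacian edge test: a pixel is an outline pixel when its depth differs
// sharply from the average of its four neighbours. `depth` is a flat
// row-major array of width `w`; (x, y) must not lie on the border.
function isEdge(depth, w, x, y, threshold) {
  var c = depth[y*w + x];
  var lap = 4*c - depth[y*w + x - 1] - depth[y*w + x + 1]
                - depth[(y-1)*w + x] - depth[(y+1)*w + x];
  return Math.abs(lap) > threshold;
}
```

The same test on a normals texture catches creases that depth alone misses; many toon renderers combine both.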
  6. And with this, two weeks of migraines are officially ended. I thank you from the deepest of my [insert your favourite organ] - andrea
  7. Hi, I've written this variant of the Moller test in Javascript, and I am using it within a WebGL application. The function correctly returns whether the given ray (p, d) intersects the triangle (v0, v1, v2). What I can't get to work is WHERE the intersection happens. I've tried everything... do you see any major flaw in the following code? Thanks a lot in advance, andrea

     function check(v0, v1, v2, p, d) {
         var e1 = vec3.create();
         var e2 = vec3.create();
         var h = vec3.create();
         var s = vec3.create();
         var q = vec3.create();
         var a, f, u, v, t;
         var l = vec3.create();
         var i = vec3.create();

         e1[0] = v1[0] - v0[0]; e1[1] = v1[1] - v0[1]; e1[2] = v1[2] - v0[2];
         e2[0] = v2[0] - v0[0]; e2[1] = v2[1] - v0[1]; e2[2] = v2[2] - v0[2];

         vec3.cross(d, e2, h);
         a = vec3.dot(e1, h);
         if (a > -0.000001 && a < 0.00001) return null;
         f = 1 / a;

         s[0] = p[0] - v0[0]; s[1] = p[1] - v0[1]; s[2] = p[2] - v0[2];
         u = f * vec3.dot(s, h);
         if (u < 0 || u > 1) return null;

         vec3.cross(s, e1, q);
         v = f * vec3.dot(d, q);
         if (v < 0 || (u + v) > 1) return null;

         t = f * vec3.dot(e2, q);
         if (t > 0.00001) {
             vec3.scale(d, a, l);
             vec3.add(p, l, i);
             return i;
         }
         return null;
     }
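[Editor's note] In the standard Moller-Trumbore formulation, once the ray parameter t is known, the hit point is simply the ray evaluated at t, i.e. p + t·d; it is worth double-checking which value scales the direction in the snippet above. A minimal sketch (the helper name is made up):

```javascript
// World-space point on the ray (p, d) at parameter t: p + t*d.
function rayPoint(p, d, t) {
  return [p[0] + d[0]*t,
          p[1] + d[1]*t,
          p[2] + d[2]*t];
}
```

Since p and d are already in world space here, no further matrix multiplication of the result is needed.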
  8. I am really going mad... I can't understand what's going wrong here [Chrome 10+ only]. The three dots delimit an invisible triangle "hovering" over the terrain. Intersection basically works; in fact, if you move the cursor within the boundaries of this triangle, an intersection is reported. But as you can see, the added dot which should trace the intersection point is somewhat "scaled". Even worse: by pressing "A" you make the triangle rotate on its own Y axis (it's actually part of a quad, so it will spin around the quad's center), and the tracing point will start orbiting around the triangle counterclockwise. Something is seriously wrong here... I am 100% sure about the way I construct the ray, since I previously traced it by drawing it onscreen, and it's perfect. The code follows (WebGL / Javascript):

     // take 3 verts of the hovering quad, which is a generic quad model,
     // and multiply them by the positional matrix that makes it appear
     // hovering there on that remote part of the world's terrain
     // where the camera is currently exploring
     var vert1 = vec3.create([quad.verts[(0*8)+0], quad.verts[(0*8)+1], quad.verts[(0*8)+2]]);
     vert1 = mat4.multiplyVec3(ent.mtx, vert1);
     var vert2 = vec3.create([quad.verts[(1*8)+0], quad.verts[(1*8)+1], quad.verts[(1*8)+2]]);
     vert2 = mat4.multiplyVec3(ent.mtx, vert2);
     var vert3 = vec3.create([quad.verts[(2*8)+0], quad.verts[(2*8)+1], quad.verts[(2*8)+2]]);
     vert3 = mat4.multiplyVec3(ent.mtx, vert3);

     // check intersection with the ray defined by orig and dir
     var point = check(vert1, vert2, vert3, orig, dir);
     if (point == null) return;

     // the returned intersection point is relative to the triangle itself
     // and normalized, so remultiply it by the aforementioned positional matrix
     point = mat4.multiplyVec3(ent.mtx, point);

     // ray - triangle intersection function derived from the Moller test
     function check(v0, v1, v2, p, d) {
         var e1 = vec3.create();
         var e2 = vec3.create();
         var h = vec3.create();
         var s = vec3.create();
         var q = vec3.create();
         var a, f, u, v, t;

         e1[0] = v1[0] - v0[0]; e1[1] = v1[1] - v0[1]; e1[2] = v1[2] - v0[2];
         e2[0] = v2[0] - v0[0]; e2[1] = v2[1] - v0[1]; e2[2] = v2[2] - v0[2];

         vec3.cross(d, e2, h);
         a = vec3.dot(e1, h);
         if (a > -0.000001 && a < 0.00001) return null;
         f = 1 / a;

         s[0] = p[0] - v0[0]; s[1] = p[1] - v0[1]; s[2] = p[2] - v0[2];
         u = f * vec3.dot(s, h);
         if (u < 0 || u > 1) return null;

         vec3.cross(s, e1, q);
         v = f * vec3.dot(d, q);
         if (v < 0 || (u + v) > 1) return null;

         t = f * vec3.dot(e2, q);
         if (t > 0.00001) return vec3.create([u, v, -t]);
         return null;
     }

     Can you see any major flaw here? Thanks in advance... again! a.
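[Editor's note] One detail that may matter here: in Moller-Trumbore, u and v are barycentric coordinates, i.e. weights of the triangle's vertices, not a position, so a vector built from [u, v, -t] cannot be transformed by the model matrix as if it were a point. Converting barycentrics to a Cartesian point uses P = (1-u-v)·v0 + u·v1 + v·v2; and if the vertices were already multiplied by ent.mtx before the test, the resulting point is already in world space. A sketch (helper name made up):

```javascript
// Convert barycentric coordinates (u, v) on triangle (v0, v1, v2) to a
// Cartesian point: P = (1-u-v)*v0 + u*v1 + v*v2.
function baryToPoint(v0, v1, v2, u, v) {
  var w = 1 - u - v;
  return [
    w*v0[0] + u*v1[0] + v*v2[0],
    w*v0[1] + u*v1[1] + v*v2[1],
    w*v0[2] + u*v1[2] + v*v2[2]
  ];
}
```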
  9. Hi Vilelm, thanks for your hints. To complete the picture, I never actually coded a ray-triangle intersection test before, since in the early days of OpenGL I used to rely on reading back the depth buffer, and with that I could track a single pixel in 3d space without much hassle. Now that readback is gone, I am trying to pinpoint a single (x, y, z) point on a very large terrain. I definitely need to discard the culled triangles; that's why I chose that article, although outdated... I am looking into the Arenberg test but I am having a hard time finding a proper algorithmic description of it. My doubts mostly concern what exactly I should pass to GLU.unProject as the ModelView matrix. I have a camera matrix representing the... camera, which moves over the terrain; it's a plain 4x4 matrix which only tells me where the camera currently is (posx, posy, posz, rotx, roty, rotz). I also have a matrix for every object in the world, which I multiply by the camera matrix, one by one, as needed.
  10. By the way, I started doubting my matrix multiplications when I encountered the issues that brought me to open the other topic a few hours ago (triangle / ray intersection). I am fiddling with the function, have corrected many mistakes, and now it works... almost: there's a single triangle onscreen in my test, and I cast a ray trying to intersect it. Well, the function seems to intersect a ghost triangle which is a perfect mirror image of the first one, as if it were flipped on the X and Y axes, or the camera lens were somehow "backprojected"... ps: (Chrome 9.0 / 10.0 only for now) the ray-intersection test (just move over the "ghost triangle" to see the caption change); general work in progress
  11. Can you elaborate on this? Also, there's no need for a "TempMatrix" here, you could construct it in a single shot if you wanted: TransformMatrix = ProjectionMatrix * Inverse(CameraMatrix) * ObjectMatrix; [/quote] I am using a temp matrix because I am coding in Javascript and, at the moment, I am using some functions to multiply matrices - functions whose syntax forces me into the two calls and the temp matrix. As for the coordinate system, I mean that, for instance, I've got a function to move a generic entity forward; well, when I use it to move the camera, it moves backwards. Basically, every object in the world has a 4x4 matrix representing its position and rotation. As you hinted, the camera is no different. If you want to "view" the scene from 5 units up the positive y axis, then you can just treat your camera like an object and transform it up by 5 units. Taking the inverse of this gives you the "view" matrix that transforms the world into the camera's coordinate system. [/quote] This is interesting: what would be the other way to deal with the camera matrix?
  12. Some doubts moving from the fixed-function pipeline to shaders. Having the ProjectionMatrix, the CameraMatrix and the N ObjectMatrices, I need to pass the vertex shader a TransformMatrix. What would be the right order of multiplication? Right now I ended up doing something like this through... trial and error: TempMatrix = ProjectionMatrix * Inverse(CameraMatrix); TransformMatrix = TempMatrix * ObjectMatrix; Although this works, I feel there's something wrong here, especially because I seem to have a different coordinate system for the camera and the objects.
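[Editor's sketch] For column-vector conventions (as in OpenGL and glMatrix), Projection * Inverse(Camera) * Object is indeed the standard order, so the trial-and-error result is correct. The "different coordinate system" feeling comes from the inverse: moving the camera forward moves the whole world backward. A self-contained sketch with translation-only matrices (all helper names are invented here; glMatrix provides equivalents):

```javascript
// Minimal column-major 4x4 helpers (glMatrix-style layout: flat arrays of
// 16 numbers, columns first) to show the view * model composition order.
function mat4mul(a, b) {
  var out = new Array(16);
  for (var c = 0; c < 4; c++)
    for (var r = 0; r < 4; r++) {
      var s = 0;
      for (var k = 0; k < 4; k++) s += a[k*4 + r] * b[c*4 + k];
      out[c*4 + r] = s;
    }
  return out;
}

function mat4translation(x, y, z) {
  return [1,0,0,0, 0,1,0,0, 0,0,1,0, x,y,z,1];
}

// Apply a 4x4 matrix to a point (w = 1 assumed).
function transformPoint(m, p) {
  return [
    m[0]*p[0] + m[4]*p[1] + m[8]*p[2]  + m[12],
    m[1]*p[0] + m[5]*p[1] + m[9]*p[2]  + m[13],
    m[2]*p[0] + m[6]*p[1] + m[10]*p[2] + m[14]
  ];
}

// A camera sitting at (0, 5, 0): for a translation-only camera, its view
// matrix (the inverse) is a translation by (0, -5, 0).
var view  = mat4translation(0, -5, 0);  // Inverse(CameraMatrix)
var model = mat4translation(2, 0, 0);   // ObjectMatrix
var mv    = mat4mul(view, model);       // view applied after model
```

Multiplying `mv` by the projection on the left completes the usual TransformMatrix; the key point is that the camera contributes its inverse, which is why an object-style "move forward" call appears to move the camera backwards.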
  13. Greetings, I bet the topic has been covered 1000 times, and even I have coded the damn thing again and again through many iterations of OpenGL versions and languages. This time I am having a hard time debugging, since Javascript and WebGL aren't exactly debugging-friendly. First: building the ray. I am using the GLU unProject function which I found here: Simple and clean, not much to explain. Second: the intersection itself. I've coded this from scratch, basing it on this paper. The code looks like this:

     function check(vert0, vert1, vert2, orig, dir) {
         // OUT
         var t, u, v;
         // VAR
         var edge1 = vec3.create();
         var edge2 = vec3.create();
         var tvec = vec3.create();
         var pvec = vec3.create();
         var qvec = vec3.create();
         var det, inv_det;

         // find vectors for edges
         vec3.subtract(vert1, vert0, edge1);
         vec3.subtract(vert2, vert0, edge2);

         // calculate determinant
         vec3.cross(dir, edge2, pvec);
         det = vec3.dot(edge1, pvec);

         // calculations - CULLING ENABLED
         vec3.subtract(orig, vert0, tvec);
         u = vec3.dot(tvec, pvec);
         if (u < 0 || u > det) return null;

         vec3.cross(tvec, edge1, qvec);
         v = vec3.dot(dir, qvec);
         if (v < 0 || (u + v) > det) return null;

         t = vec3.dot(edge2, qvec);
         inv_det = 1 / det;
         t = t * inv_det;
         u = u * inv_det;
         v = v * inv_det;
         return vec3.create([t, u, v]);
     }

     Then I drew the simplest triangle ever and checked against it, failing. I've double-checked all the values I pass to the functions (both GluUnproject and check), and they're correct. Nonetheless check() fails. I think I may be doing something wrong in creating the ray, possibly because I am using a right-handed camera system. Do you see anything that may be wrong? Can someone tell me how to manually "build" a ray that goes straight from the center of the screen to the horizon? Thanks in advance andrea
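[Editor's sketch] The first step of any manual unproject is mapping window coordinates to normalized device coordinates, and a sign slip here (mouse y grows downward, GL y grows upward) is a classic source of mirrored picking results. A small sketch (the function name is made up):

```javascript
// Window coordinates to normalized device coordinates (NDC).
// WebGL's y axis points up, while mouse/window y points down.
function windowToNDC(mx, my, width, height) {
  return [(2 * mx) / width - 1,
          1 - (2 * my) / height];
}
```

At the screen center this yields (0, 0); unprojecting (0, 0) on the near and far planes and subtracting gives the "center of screen to the horizon" ray, which for an identity view in a right-handed GL setup points straight down the negative z axis.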
  14. Quote:Original post by Hodgman Also, if you're going to go for the blur method, there are some tricks you can do to change what the up-scaling (bilinear filtering) looks like. More great reading, thanks a lot. I think this is indeed the way: going for a simple blur but fiddling with interpolation, perhaps adding some constant form of per-block (NxN texels) noise.
  15. Quote:Original post by Hodgman What near/far plane values are you using when you render the shadow map? Is it possible to increase the near plane / decrease the far plane? I could indeed change the near/far planes and recalculate the projection matrix on the fly, yes. Currently they stay the same for both the 1st and 2nd pass: near 1.0, far 1024. EDIT: here's a debug screenshot of the depth texture...
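[Editor's sketch] The depth a perspective projection stores is strongly nonlinear in the near/far range, which is why raising the near plane usually buys far more shadow-map precision than lowering the far plane. A sketch of the standard OpenGL depth mapping (function name made up):

```javascript
// NDC depth produced by a standard OpenGL perspective projection for a
// point at eye-space distance d (d > 0), with near plane n and far plane f.
// Maps d = n to -1 and d = f to +1, hyperbolically in between.
function ndcDepth(d, n, f) {
  return (f + n) / (f - n) - (2 * f * n) / (d * (f - n));
}
```

With near 1.0 and far 1024, more than half of the [-1, 1] depth range is already consumed between d = 1 and d = 2, so bumping the near plane up even a little redistributes precision dramatically across the rest of the scene.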