j00hi

  1. I've asked in the Unity forums how they implemented projectors. Here's the question: http://answers.unity3d.com/questions/319622/what-is-the-algorithm-used-behind-projectors.html There are some very interesting comments and results there.
  2. Oh nice! Then I guess I should take a look at that technique. Yes, I'm mainly developing for mobile, and I use projectors a lot in my Unity projects. I use them for: grenade craters on the terrain after a grenade has exploded; blob shadows for enemies; and a hexagonal grid projected onto the terrain (using a repeating texture and an orthogonal projector; see the first sketch after this list). Basically I could use an orthogonal projector for all of those, but I want to have the option of perspective projection, too.
  3. Hmm, I have also thought about those two possibilities, but I think there are problems with both of them: 1) Tri-overlay with alpha blending: that wouldn't follow the underlying geometry exactly, would it? (Unity's projectors lie exactly on the geometry, pixel for pixel; see the second sketch after this list.) 2) Precalculating the texture on the fly: would that even be possible in real time? With lots of objects and lots of projectors, I think that would have a huge impact on performance. Furthermore, I don't think it would work for every texture combination: the terrain could have a repeating grass texture while the projector uses a non-repeating texture, so how would you precalculate those into one texture? I doubt that would be possible in a fast or easy way.
  4. Thanks for the link! I don't think Unity is using a deferred renderer, because those projectors work on mobile platforms (iOS, Android) as well, and according to the license comparison (http://unity3d.com/unity/licenses), deferred rendering isn't supported on mobile at all. Maybe there is another approach that isn't based on a deferred renderer?
  5. I'm asking myself how to implement something like Unity's projectors using OpenGL or DirectX. I know how to do projective texture mapping, but in the implementations I've seen, the projected texture is passed as an additional texture to the shader, so it's basically multitexturing (see the third sketch after this list). If I rendered a scene like the one in the attached screenshot, I would pass the grass texture and one "blob" texture to my shader and render that. Unity, however, supports an arbitrary number of projectors. The scene in the attached screenshot shows a terrain with 16 projectors, but Unity supports far more than that: I could easily use 100 projectors, and all of them would be rendered with correct perspective projection onto the terrain, including correct occlusion if a "blob" is projected over a mountain ridge, for example. The projection always lands exactly on the underlying geometry. So Unity obviously uses a different approach than the one I know, which passes all the projected textures (and their texture matrices, etc.) to a single shader. Which one? How could I implement the same functionality using OpenGL or DirectX?
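
First sketch (for post 2): a straight-down orthographic projection of a repeating texture reduces to using scaled world-space xz coordinates as UVs. This is a minimal GLSL sketch; the uniform and varying names are hypothetical, and the grid texture is assumed to use GL_REPEAT wrapping.

```glsl
#version 120
// Fragment shader: top-down orthographic "projector" with a repeating tile.
uniform sampler2D u_GridTexture; // hypothetical: repeating hex-grid tile (GL_REPEAT)
uniform float u_CellSize;        // hypothetical: world units covered by one tile
varying vec3 v_WorldPos;         // world-space position from the vertex shader

void main() {
    // For a projector looking straight down, the projective texture
    // coordinates degenerate to the world-space xz plane, scaled by tile size.
    vec2 uv = v_WorldPos.xz / u_CellSize;
    gl_FragColor = texture2D(u_GridTexture, uv);
}
```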
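
Second sketch (for approach 1 in post 3): the blending itself is simple. This assumes the application draws a separate decal mesh over the terrain with alpha blending enabled (glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA)) and uses glPolygonOffset to avoid z-fighting; the texture name is hypothetical.

```glsl
#version 120
// Fragment shader for an overlay decal mesh drawn on top of the terrain.
uniform sampler2D u_DecalTexture; // hypothetical: crater/blob texture with alpha
varying vec2 v_TexCoord;

void main() {
    // Alpha is zero outside the decal shape, so the terrain shows through.
    gl_FragColor = texture2D(u_DecalTexture, v_TexCoord);
}
```

The hard part, and the objection raised in the post, is producing overlay geometry that hugs the terrain; that typically means clipping the terrain triangles under the decal against the decal's bounds on the CPU.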
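
Third sketch (for post 5): the multitexturing-style projective texture mapping described there, for a single projector. The projector matrix is assumed to be built on the CPU as bias * projectorProjection * projectorView, where the bias matrix remaps clip space [-1, 1] to texture space [0, 1]; all names here are hypothetical.

```glsl
#version 120
// Vertex shader: compute projective texture coordinates for one projector.
uniform mat4 u_ModelMatrix;     // object -> world
uniform mat4 u_ViewProjMatrix;  // camera view * projection
uniform mat4 u_ProjectorMatrix; // hypothetical: bias * projectorProj * projectorView
attribute vec3 a_Position;
attribute vec2 a_TexCoord;
varying vec2 v_TexCoord;
varying vec4 v_ProjCoord;

void main() {
    vec4 worldPos = u_ModelMatrix * vec4(a_Position, 1.0);
    v_TexCoord  = a_TexCoord;
    v_ProjCoord = u_ProjectorMatrix * worldPos; // kept homogeneous; divided per fragment
    gl_Position = u_ViewProjMatrix * worldPos;
}
```

```glsl
#version 120
// Fragment shader: blend one projected "blob" over the base terrain texture.
// The blob texture is assumed to clamp to a transparent border so nothing
// appears outside the projector frustum.
uniform sampler2D u_BaseTexture; // e.g. the repeating grass texture
uniform sampler2D u_BlobTexture; // the projected texture
varying vec2 v_TexCoord;
varying vec4 v_ProjCoord;

void main() {
    vec4 base = texture2D(u_BaseTexture, v_TexCoord);
    // texture2DProj divides the coordinates by v_ProjCoord.w before sampling.
    vec4 blob = texture2DProj(u_BlobTexture, v_ProjCoord);
    if (v_ProjCoord.w <= 0.0) blob = vec4(0.0); // reject the reverse projection
    gl_FragColor = vec4(mix(base.rgb, blob.rgb, blob.a), base.a);
}
```

This only scales to a fixed number of projectors per shader, which is exactly the limitation the post asks about. One common alternative is to re-render the affected geometry once per projector and blend the passes, but the posts above leave open which approach Unity actually uses.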