Holographic Texture Mapping?


In a recent interview about UnrealEngine3, Tim Sweeney mentions "entirely new things that haven't been done in real-time before, such as holographic texture mapping, spherical harmonic lighting, and dynamic soft shadows." Does anyone know what technique he's referring to when he mentions holographic texture mapping? I did some googling and couldn't find anything obvious. I'm guessing it might be something to do with lumigraphs / lightfield rendering (which I don't know much about). Can anyone explain what he's talking about or provide any links to relevant papers?

Spherical harmonics isn't a "real-time" thing from what I can tell... unless they're doing some sort of pixel shader function over a large set of lookup tables. Dynamic soft shadows "ain't no big thing"; there are various ways to do that now.

As far as holographic mapping goes, I would imagine it has to do with some interesting reflection mapping properties that change as the viewpoint moves relative to the image.

~Main

==
Colt "MainRoach" McAnlis
Programmer
www.badheat.com/sinewave

"Spherical harmonics" isn't a single technique - it's a tool that can be used to achieve all sorts of effects of varying complexity. Perhaps the simplest application of spherical harmonics is for Irradiance Environment Maps and that is very much a 'real time thing' - plenty of games already use that technique and it's pretty cheap when done per vertex. A more advanced application of spherical harmonics is for Precomputed Radiance Transfer and that is now a real time technique as well (you can see it in action in the examples with the latest DX9 SDK). The only problem with PRT is that it doesn't really work for animated models (at least no one's published a way of using it for animated models). PRT can be used for soft shadows, I don't know if that's what UnrealEngine3 is doing.

I'm pretty sure they're doing PRT in UnrealEngine3, because Tim Sweeney gave an interview about the benefits of 64-bit memory addressing for pre-processing models for spherical harmonic lighting.

[edited by - mattnewport on March 20, 2004 6:17:12 PM]

I'm wondering if by "Holographic Texture Mapping" he really means using "Holographic Transforms for Image Compression": taking multiple lossily compressed images and using a holographic transform to produce a near-lossless decompressed image. Such textures would be a breeze to stream off a disk or disc in real time because of their heavy compression, though it would be rather demanding of the CPU to produce the uncompressed images. Perhaps a separate thread (via a hyperthreading CPU or simply a second CPU) could alleviate that problem by predictively decompressing textures a little ahead of the time they are needed. This could allow absolutely non-repeating textures for tremendously large scenes. And because of the holographic nature of the transform, you could literally decompress the images to just the mip levels needed at the time.
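Whatever the transform itself turned out to be, the threading half of that idea is straightforward to sketch. A minimal sketch, with a hypothetical decompressTexture() standing in for the actual codec (none of this comes from Epic):

    #include <condition_variable>
    #include <mutex>
    #include <queue>
    #include <thread>

    void decompressTexture(int textureId); // hypothetical decoder

    std::queue<int>         pending;  // texture ids predicted to be needed soon
    std::mutex              mtx;
    std::condition_variable cv;

    // Runs on a second thread (or second logical hyperthreading core) for
    // the lifetime of the program, decoding ahead of the renderer.
    void decoderLoop()
    {
        for (;;)
        {
            int id;
            {
                std::unique_lock<std::mutex> lock(mtx);
                cv.wait(lock, [] { return !pending.empty(); });
                id = pending.front();
                pending.pop();
            }
            decompressTexture(id); // heavy work happens off the render thread
        }
    }

    // Render thread: predict a texture, enqueue it, carry on rendering.
    void predictNeeded(int textureId)
    {
        { std::lock_guard<std::mutex> lock(mtx); pending.push(textureId); }
        cv.notify_one();
    }

Spawn the worker once at startup (std::thread(decoderLoop).detach()) and the render thread never blocks on decompression, only on the prediction being wrong.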

I'm betting, however, that he means something akin to lumigraphs.

[edited by - Mastaba on March 20, 2004 8:00:01 PM]

I read a newspaper article about the new Tribes a few months ago. The journalist was talking about a new mapping process used by the Tribes team called normal mapping. According to the journalist, this technique makes each object look as though it were composed of many, many polygons. I think that's what they now call holographic mapping, as you will see.

At the time, I was implementing a correct perspective mapper, and when I read the article an idea came to me: the mapping is tied directly to the plane of the polygon. If you consider the vertex normals, you can interpolate a normal/plane (which is more correct) for each screen pixel and do the mapping against this new plane. You can easily imagine the result. I implemented it quickly and it worked very well (but it was very slow, too slow for my software engine).
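A rough sketch of that idea: anchor a plane at a point on the polygon, tilt it by the normal interpolated for the current pixel, intersect the eye ray with it, and sample the texture at the hit point instead. All the names below are illustrative, not the original code:

    struct Vec3 { float x, y, z; };

    static float dot(Vec3 a, Vec3 b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

    // eye: ray origin; dir: ray through this pixel; p0: a point on the
    // polygon; n: the normal interpolated for this pixel. On success,
    // *hit is where the eye ray meets the tilted plane.
    bool hitTiltedPlane(Vec3 eye, Vec3 dir, Vec3 p0, Vec3 n, Vec3* hit)
    {
        float denom = dot(n, dir);
        if (denom > -1e-6f && denom < 1e-6f)
            return false; // ray (nearly) parallel to the plane
        float t = (dot(n, p0) - dot(n, eye)) / denom;
        hit->x = eye.x + dir.x * t;
        hit->y = eye.y + dir.y * t;
        hit->z = eye.z + dir.z * t;
        return true;
    }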

What do you think? Maybe it's something close to that.


[edited by - Chuck3d on March 21, 2004 9:47:12 AM]

I agree with duhroach.
I think holographic mapping is "just" parallax-compensated normal mapping.
And yes, you do use SH PRT in real time; that's the point.

I don't see why the term "holographic" would have anything to do with parallax mapping, though. Maybe I'm missing something, but it seems like a very poor choice of word if that's really what he's talking about. What makes you think that's what he's referring to? I did see a quote here which shows that Tim Sweeney is familiar with the technique and thinks it is impressive, but I still don't see where the "holographic" comes into it.

Ok, I spoke too soon. There's a post here (8th post down) by Daniel Vogel, one of the programmers at Epic, which seems to make it clear that this is just their name for parallax mapping. I still think it's a terrible choice of name for the technique, but it seems that's all it is. I thought it might have been something more interesting.

Or it could be an attempt to use ultra-HD monitors to render very small grids that would create diffraction figures, modify the light phase, and eventually create a hologram.
This would only require resolutions of about 1e9x1e9 pixels (and the associated data storage), which, incidentally, is addressable using 64-bit pointers, and that's why you have to use 64 bits for content authoring.
Seriously, marketing is one of the biggest warping lenses ever conceived by mankind. I'd better go back to work on my twice-as-fast hyperthreading P4. Or maybe I'd better switch to the twice-as-fast amd64 because, obviously (!), 64 = 32*2.

I'm not going to read up on the details of how those nice holographic pictures work (the flat 2D kind that actually exists, not the "projected into thin air" 3D stuff no SF show or movie seems able to avoid), but my guess would be that it's a similar method using multiple layers.

I wish programmers wouldn't use marketing-inspired buzzwords to describe an algorithm. "Holographic Texture Mapping", coming from a programmer, implies something much more significant than parallax mapping. But I guess they wanted to make sure no one would confuse this with the old 2D parallax effect in old games.

quote:
Original post by Trienco
I'm not going to read up on the details of how those nice holographic pictures work (the flat 2D kind that actually exists, not the "projected into thin air" 3D stuff no SF show or movie seems able to avoid), but my guess would be that it's a similar method using multiple layers.


Sort of. The corrugated surface refracts the light onto a different part of the image; hence, if you construct the image correctly, you can produce the effect of a 3D image. I can't remember whether they actually use different layers or just one composite image. Something along those lines, anyway.


You have to remember that you're unique, just like everybody else.

It has nothing to do with multiple layers.
Take a look at a good hologram, not the cheap sticker kind, and you'll be impressed. It uses diffraction rather than refraction, and diffraction allows encoding "per direction" information on top of spatial information so that, apart from the coloring, it looks just like the real 3D thing.

quote:
Original post by janos
Or it could be an attempt to use ultra-HD monitors to render very small grids that would create diffraction figures, modify the light phase, and eventually create a hologram.
This would only require resolutions of about 1e9x1e9 pixels (and the associated data storage), which, incidentally, is addressable using 64-bit pointers, and that's why you have to use 64 bits for content authoring.

In my Googling efforts to find out what this technique was, I came across several papers describing holographic displays which use LCDs to do pretty much what you describe. They're extremely limited in size at the moment, and the computational requirements for calculating the diffraction figures are pretty extreme, but I'm going to be first in line for a holographic display when they get the tech sufficiently refined :-)

While I agree that it's a bit misleading to call the technique holographic, there are a few reasons I can think of to do so:

1) Holograms encode 3D information on a 2D surface
-- Parallax normal maps encode 3D vectors onto a 2D map

2) Holograms account for parallax differences between viewpoints
-- So do parallax normal maps

Here's a link to a demo:
http://www.theswapmeet.com/ubb/Forum12/HTML/000029.html

Could this be something as simple as a multi-angle impostor texture?

So instead of a single rendered-to-texture image being used at a distance to save re-rendering stuff that doesn't move a lot, it uses several frames from different angles at a closer distance. Depending on the angle you view the impostor from, you see a different version of the texture.

That sounds a lot like what most people think of as a hologram, and could plausibly be called holographic texturing.
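Picking the frame would be cheap, too. A sketch, assuming frameCount impostor frames captured at even angles around the object's up axis (the function is invented for the example):

    #include <cmath>

    // viewX/viewZ: direction from the object to the camera in the
    // object's horizontal plane. Returns the index of the nearest
    // pre-rendered frame.
    int impostorFrame(float viewX, float viewZ, int frameCount)
    {
        const float twoPi = 6.28318530718f;
        float angle = std::atan2(viewZ, viewX); // [-pi, pi]
        if (angle < 0.0f)
            angle += twoPi;                     // [0, 2*pi)
        int frame = (int)(angle / twoPi * frameCount + 0.5f);
        return frame % frameCount;              // wrap the round-up case
    }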

I'm not sure what use that would be in the Unreal engine, but it could have its uses in car games, for example. Windows that change as the car goes past could get away with a single polygon instead of rendering the interior of the car. Or does the Unreal engine still use portals? Then it could be used for drawing middle-distance portal planes instead of rendering the regions beyond.

Dan

quote:
Original post by janos
It has nothing to do with multiple layers.
Take a look at a good hologram, not the cheap sticker kind, and you'll be impressed. It uses diffraction rather than refraction, and diffraction allows encoding "per direction" information on top of spatial information so that, apart from the coloring, it looks just like the real 3D thing.


Janos, do you mean a proper transparent hologram rather than the shiny metallic kind? They are incredible. I got to make a couple last year as part of a course I was doing. I've still got them somewhere, but I haven't got the laser to view them with any more, as I had to send it back at the end of the course. The one that came with it was excellent: a stopwatch with a magnifying glass in front of it. The magnifying glass actually worked, so as you moved your head different parts of the watch were magnified, but you could still look behind it at the normal watch.

Dan

Yep, I was talking about that transparent kind of hologram on fine photographic film. It's fun to make. The theory behind it is interesting, too.

Back to parallax bump mapping:
You use an extra texture that contains the height of the point on the surface you want to represent, relative to the polygonal surface. You then encode the tangent plane of the surface (or any higher-degree surface, if you want/can/are willing to pay for it).
When you render a pixel (of the real geometry), you work out which point of the represented surface it matches by solving a plane/line equation (with higher-order methods, you may have more than one answer). You then use that virtual point's position to work out the texture coordinates to use.
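In the simplest single plane/line solve, that boils down to one texture-coordinate shift per pixel. A software sketch in the spirit of Kaneko et al.'s parallax mapping; the function name and the scale/bias knobs are illustrative:

    struct Vec2 { float u, v; };
    struct Vec3 { float x, y, z; };

    // uv: interpolated texture coordinates; viewTS: normalized view
    // vector in tangent space; height: heightmap sample in [0,1];
    // scale/bias: artist-tuned mapping to a signed height about the
    // polygon's plane.
    Vec2 parallaxOffset(Vec2 uv, Vec3 viewTS, float height,
                        float scale, float bias)
    {
        float h = height * scale + bias; // signed height above the plane
        // Intersect the eye ray with a plane lifted to height h:
        // shift uv along the view direction projected onto the surface.
        uv.u += h * viewTS.x / viewTS.z;
        uv.v += h * viewTS.y / viewTS.z;
        return uv;
    }

Values around scale = 0.04 and bias = -0.02 are the ones usually quoted for this.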

So they're just saying to use a different texture depending on the view from which a surface is observed? I can see some clever uses for this (even better cartoon shaders, for one), but it still seems like a lot of hooah for nothing. The "better lossy compression" idea sounded much nicer to me, given the nature of the Unreal engines: after all, the UT games have a propensity for letting game servers host tons of new and fun game content (mutators, models, maps, etc.), then bogging the clients down with downloading layers upon layers of lossless texture data.

Quote:
So they're just saying to use a different texture depending on the view from which a surface is observed?


They alter the texture coordinates used depending on the view angle; the texture itself stays the same. There's a paper on it here.
