
Stringer

Normal Mapping Questions....


I'm aware that there are two ways of doing this:

1. Baking normal information from a high-res model onto a low-res polygon 'game' model.
2. As above, but the same information is also used for actual displacement.

OK, now some questions:

1. How long is this technique likely to be around? Two years, maybe?
2. If I were to start putting together ideas and concept artwork for a game on the 'next-gen' consoles, would it be useful to utilise normal mapping, or would it be a pointless exercise because the next-gen consoles will have so much power that the high-res meshes normally created to bake the normal maps would be what's used in-game anyway?
3. What would people like to see the next-gen consoles capable of, or what would you expect to see?

TIA
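The bake itself is just a preprocessing step: for each texel of the low-res model's UV layout you find the matching point on the high-res mesh and write its normal into the texture. A minimal C++ sketch of that step; bakeNormalMap and sampleHighRes are made-up names standing in for whatever the baking tool actually does, and only the [-1, 1] to [0, 255] encoding is the standard part:

#include <cstdint>
#include <functional>
#include <vector>

struct Vec3 { float x, y, z; };

// The usual [-1, 1] -> [0, 255] encoding that dot3 bump-mapping hardware expects.
void encodeNormal(const Vec3& n, std::uint8_t* rgb) {
    rgb[0] = static_cast<std::uint8_t>((n.x * 0.5f + 0.5f) * 255.0f);
    rgb[1] = static_cast<std::uint8_t>((n.y * 0.5f + 0.5f) * 255.0f);
    rgb[2] = static_cast<std::uint8_t>((n.z * 0.5f + 0.5f) * 255.0f);
}

// One pass over the low-res model's UV layout. sampleHighRes stands in for the
// ray/mesh query that finds the high-res normal corresponding to a given texel.
std::vector<std::uint8_t> bakeNormalMap(int width, int height,
                                        std::function<Vec3(float u, float v)> sampleHighRes) {
    std::vector<std::uint8_t> map(width * height * 3);
    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x) {
            Vec3 n = sampleHighRes((x + 0.5f) / width, (y + 0.5f) / height);
            encodeNormal(n, &map[(y * width + x) * 3]);
        }
    return map;
}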

As this is a preprocessing step, it's already possible today. Displacement mapping is supported on the new ATI Radeon 9700, and will be on NVIDIA's next card, the NV30.

The bump mapping itself is no problem even on a GeForce 1 or a Radeon, so there's nothing stopping you.

Generating the meshes this way is done in Doom 3, and there's a demo doing it at www.afrohorse.com.
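Displacement mapping itself is conceptually tiny: push each vertex along its normal by a height sampled from a texture. A minimal CPU-side sketch in C++; sampleHeight and scale are illustrative names, and on the hardware mentioned above the same step would happen per vertex in the pipeline rather than in a loop like this:

struct Vec3 { float x, y, z; };

struct Vertex {
    Vec3  position;
    Vec3  normal;   // assumed to be unit length
    float u, v;     // coordinates into the height map
};

// Push each vertex along its normal by the sampled height.
// sampleHeight stands in for a bilinear height-map lookup returning [0, 1];
// scale is how far a fully white texel displaces, in model units.
void displace(Vertex* verts, int count,
              float (*sampleHeight)(float u, float v), float scale) {
    for (int i = 0; i < count; ++i) {
        float h = sampleHeight(verts[i].u, verts[i].v) * scale;
        verts[i].position.x += verts[i].normal.x * h;
        verts[i].position.y += verts[i].normal.y * h;
        verts[i].position.z += verts[i].normal.z * h;
    }
}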

"take a look around" - limp bizkit
www.google.com

Guest Anonymous Poster
I think you should really listen to Carmack's keynote speech from this year's QuakeCon (downloadable from the web). He mentions that the target card for the Doom III engine was the GeForce 1 (GeForce 256?). While he was developing it, three more iterations of GeForce-series cards came out, and the NV30 is due before the Doom III engine (game) is ever released.

If you look at the engine features so far, they aren't doing anything special other than adding cohesion to all of the special-effects techniques available at the time: shadow mapping, dot3 bump mapping, environment mapping, etc. These ideas were all around before he started work on the engine.

His approach to the renderer is to tie everything together and optimize things in ways that weren't conceived before, for instance the Carmack reverse when rendering shadow volumes, and countless other optimizations.
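For reference, the "Carmack reverse" is the depth-fail variant of stencil shadow volumes: you count volume faces that fail the depth test instead of those that pass, which keeps the count correct even when the camera sits inside a volume. A rough OpenGL sketch in C++, where drawShadowVolume is a placeholder for however the volume geometry actually gets submitted:

#include <GL/gl.h>

// Stencil passes for depth-fail shadow volumes ("Carmack's reverse").
// Assumes the depth buffer has already been filled by an ambient/depth pass.
void stencilShadowPass(void (*drawShadowVolume)()) {
    glEnable(GL_STENCIL_TEST);
    glEnable(GL_CULL_FACE);
    glDepthMask(GL_FALSE);                               // leave depth alone
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE); // and colour too
    glStencilFunc(GL_ALWAYS, 0, ~0u);

    // Pass 1: back faces of the volume, increment where the depth test FAILS.
    glCullFace(GL_FRONT);
    glStencilOp(GL_KEEP, GL_INCR, GL_KEEP);
    drawShadowVolume();

    // Pass 2: front faces, decrement where the depth test fails.
    glCullFace(GL_BACK);
    glStencilOp(GL_KEEP, GL_DECR, GL_KEEP);
    drawShadowVolume();

    // Pixels left with a non-zero stencil value are in shadow; the lighting
    // pass is then masked with glStencilFunc(GL_EQUAL, 0, ~0u).
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
}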

Make sense?

Yes, it all makes sense, but the main query remains: should I be looking at utilising this now, or will the next-gen consoles be so powerful at polygon crunching that cheap effects which make objects look like they have more polys will be redundant anyway, because the console will be able to handle a, say, 10, 20 or 50,000-polygon character without breaking a sweat?

I'm referring mostly to an article I read at Red Herring about the PS3 hoping to be at least 1,000 times more powerful than the current PS2, a jump ahead of Moore's law.

S

Guest Anonymous Poster
Well, I've been reading about using vertex shaders to increase the polygon count of a model by processing it as Bézier patches. Combined with bump mapping this should be pretty impressive, but I'm not sure you could combine displacement mapping with the increased polygon count, because the height displacements would be off (incorrect).
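For the curious, a bicubic Bézier patch just blends a 4x4 grid of control points, and tessellating means evaluating it at a grid of (u, v) values and emitting the results as new vertices; stepping u and v in eighths, for example, turns 16 control points into a 9x9 grid of 81 vertices. A small C++ sketch of the evaluation (on the hardware being discussed this step would live in the vertex pipeline rather than on the CPU):

struct Vec3 { float x, y, z; };

static Vec3 lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// De Casteljau evaluation of one cubic Bezier curve.
static Vec3 bezier3(const Vec3 p[4], float t) {
    Vec3 a = lerp(p[0], p[1], t), b = lerp(p[1], p[2], t), c = lerp(p[2], p[3], t);
    Vec3 d = lerp(a, b, t), e = lerp(b, c, t);
    return lerp(d, e, t);
}

// Evaluate a 4x4 control-point patch at (u, v): curves across the rows first,
// then one curve through the four row results.
Vec3 bezierPatch(const Vec3 cp[4][4], float u, float v) {
    Vec3 rows[4];
    for (int i = 0; i < 4; ++i)
        rows[i] = bezier3(cp[i], u);
    return bezier3(rows, v);
}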

So you are looking at trade-offs going into this newer technology. I understand your dilemma in deciding which techniques to use and how worthwhile they will be in two years.

The buzzword for quite some time has been backwards compatibility (although the 2D paths are starting to slough off). These tricks and techniques will still be around, only joined by more tricks and techniques that come closer to approximating what you want.

I look at it this way: when I'm done programming for the latest hardware of today, everybody will have at least this level of hardware tomorrow.

John makes an interesting comment about "tricks". It turns out that if you are targeting tomorrow's hardware, you should start using techniques that are "technically correct". Mr. Carmack used lens flares as an example in his speech: most engines approximate the effect by rendering alpha-textured quads at t intervals along a vector shooting out of the light source, whereas he proposed sampling the screen in a post-processing stage and calculating the flares mathematically.
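The quad trick being described looks roughly like this: project the light into screen space, then drop textured sprites at fixed fractions of the vector from the light towards (and past) the screen centre. A tiny C++ sketch, with the helper name and the t values purely illustrative:

struct Vec2 { float x, y; };

// Classic flare approximation: alpha-textured quads placed at fixed fractions
// t along the line from the light's screen position towards the screen centre.
// The t values are tuned by eye; t = 0 sits on the light, t = 2 mirrors past
// the centre.
void placeFlareSprites(Vec2 lightOnScreen, Vec2 screenCentre,
                       Vec2* out, const float* t, int count) {
    Vec2 dir = { screenCentre.x - lightOnScreen.x,
                 screenCentre.y - lightOnScreen.y };
    for (int i = 0; i < count; ++i) {
        out[i].x = lightOnScreen.x + dir.x * t[i];
        out[i].y = lightOnScreen.y + dir.y * t[i];
    }
}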

I think you'll find that as technology progresses, not only does it increase in speed, but that increase in speed makes it possible to approximate effects more closely with real math equations rather than "tricks".

As for your engine being obsolete: Moore's law may say that technology doubles every 18 months or so, but it doesn't say that everybody goes out and buys the latest hardware. I think in two years PC manufacturers may start shipping GeForce 2/256-level cards in their low-end models, GeForce 3/4 in their mid-range machines, and NV30+ as the high-end gamer's solution.

Look around to see what the current level of technology is versus how widespread its implementation is, and base your target on those figures.
