cube/environment mapping algorithm questions

Hi everyone. Before I immerse myself completely in programming environment mapping the way I understand it, I just thought I'd drop by and ask the experts first. Here's how I understand this cubemapping thing (which I plan to use for env. maps); I got this from reading the tutorial over at nvidia:

1. For every vertex, find the normal. The coordinate of the normal with the greatest magnitude specifies which direction the polygon is mostly facing toward, so we use it to pick which of the six texture maps gets mapped onto this polygon (e.g. if the x coordinate is the greatest and it points in the positive direction, use the right-hand side texture map).

2. Divide the two lesser coordinates by the greater one to find the texture coordinates to map onto the vertex.

That's how I understood it, but I've been pretty confused by some other references I read prior to the stuff at nvidia. Also, how do I get the reflective shine? But first, please do correct my algorithm if it's any bit wrong, or even if it's completely wrong. I'd really like to get to work as soon as possible, so any info would do great. Thanks!
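For concreteness, here's a rough sketch of that per-vertex lookup in C (my own illustration, not from the nvidia tutorial; the face order and the [-1,1] to [0,1] remap follow the usual OpenGL cube map convention, so adjust the signs to however your six textures are laid out):

```c
#include <math.h>

typedef enum { FACE_POS_X, FACE_NEG_X, FACE_POS_Y, FACE_NEG_Y, FACE_POS_Z, FACE_NEG_Z } CubeFace;

/* Pick a cube map face and (s,t) coordinates from a direction vector (x,y,z). */
void cube_lookup(float x, float y, float z, CubeFace *face, float *s, float *t)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    float sc, tc, ma; /* the two lesser coordinates and the dominant (major) one */

    if (ax >= ay && ax >= az) {          /* x is dominant */
        *face = (x > 0.0f) ? FACE_POS_X : FACE_NEG_X;
        ma = ax; sc = (x > 0.0f) ? -z : z; tc = -y;
    } else if (ay >= ax && ay >= az) {   /* y is dominant */
        *face = (y > 0.0f) ? FACE_POS_Y : FACE_NEG_Y;
        ma = ay; sc = x; tc = (y > 0.0f) ? z : -z;
    } else {                             /* z is dominant */
        *face = (z > 0.0f) ? FACE_POS_Z : FACE_NEG_Z;
        ma = az; sc = (z > 0.0f) ? x : -x; tc = -y;
    }

    /* divide the two lesser coordinates by the dominant one, then remap
       from [-1,1] to [0,1] so they can be used directly as texture coords */
    *s = 0.5f * (sc / ma + 1.0f);
    *t = 0.5f * (tc / ma + 1.0f);
}
```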
Your algorithm is correct. However, keep in mind that DX/OGL will do cubemapping for you; there's no need to implement it yourself.

How appropriate. You fight like a cow.
You mean there's a way to automate cubemapping in OGL?

How might I do that? If it'll save me the trouble, I'll go for it.
Take a look at the ARB_texture_cube_map extension. The whole cubemapping process is supported by 3D hardware nowadays.
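As a quick illustration (not a complete program; it assumes cubeTexture already holds six uploaded faces, and the ARB-suffixed tokens later became core in GL 1.3), letting the hardware generate the reflection-vector texture coordinates looks roughly like this:

```c
/* Let the driver generate reflection-vector texcoords and sample the cube map,
   instead of computing per-vertex coordinates by hand. */
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTexture);

glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

glEnable(GL_TEXTURE_CUBE_MAP_ARB);

/* ... draw the reflective object here, supplying normals with glNormal3f ... */

glDisable(GL_TEXTURE_CUBE_MAP_ARB);
glDisable(GL_TEXTURE_GEN_S);
glDisable(GL_TEXTURE_GEN_T);
glDisable(GL_TEXTURE_GEN_R);
```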
Hi. Apparently, while my PC at home supports this cube map thing, the PCs at my school (on which we have to design our projects) don't. Older 3D cards, I guess.

So, this one time I'm gonna have to manually implement cube mapping.

So now that I have a pretty OK way of generating the texture coordinates (I'm programming the algorithm right now), how do I work on getting the shiny, slick look? I noticed, upon looking at various liquid metal demos (e.g. Terminator 2, Virtua Fighter 3), that reflections aren't enough; the models themselves look really smooth and reflective and all that. How can this be done?
What object do you intend to environment map? To get a nice smooth surface, just increase the geometric detail: tessellate to a finer level.
¿We Create World?
Yes. The shiny stuff you see is simply the cubemap applied to the model. Also: don't forget you need 2-pass rendering (or multitexturing): 1 pass for the model's real texture, and 1 pass for the cubemap (the shiny part).

If you are referring to the white glimmer that occurs only at certain angles (near the edge of the model), then look up specular highlights.
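If you do it in two passes, the trick is that the second pass is blended over the first instead of overwriting it. A rough sketch (drawModel(), baseTexture and cubeTexture are placeholders, and the blend function is just one possible choice):

```c
/* Pass 1: the model's real texture, drawn normally. */
glDisable(GL_BLEND);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, baseTexture);
drawModel();

/* Pass 2: the cube map, blended on top so it doesn't cover up pass 1. */
glDisable(GL_TEXTURE_2D);
glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTexture);
glDepthFunc(GL_LEQUAL);            /* let the re-drawn geometry pass the depth test */
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);       /* additive: base colour + reflection */
drawModel();

glDisable(GL_BLEND);
glDisable(GL_TEXTURE_CUBE_MAP_ARB);
glEnable(GL_TEXTURE_2D);
```

With multitexturing you'd instead do it in one pass: bind the base texture and the cube map to two texture units and combine them with the texture environment, which avoids drawing the geometry twice.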

Sander Maréchal
[Lone Wolves Game Development][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]


GSACP: GameDev Society Against Crap Posting
To join: Put these lines in your signature and don''t post crap!


Deryk, have you tried to subdivide your marching cubes?
That way they gain more tessellation.
Also, smooth the normals after generating the geometry!

kind regards, Nik

Niko Suni

sander: For two-pass texturing, do I simply bind one texture, assign it to vertices, then bind another texture and assign it to vertices again? That's how I've always understood 2-pass texturing, but wouldn't the second texture completely cover up the first one?

nik02: Hey there man, nice to hear from you again! For the floors and walls of my scene I already use vertex normals (I compute them manually), but for the actual metaballs this'll be impossible. My problem now is the implementation: while I do get the idea (average the normals of the faces that share a vertex and assign the result to that vertex), I currently assign normals to vertices inside the glBegin()/glEnd() sequence. If this is how I do it, how can I average the normals of vertex-sharing faces that haven't been constructed yet, since I assign normals to vertices as they are built? I can't really increase the subdivisions either, since that slows down the computers at school... and will therefore mess up my presentation.
Hi Deryk,

In this case, you should construct some buffer (a vertex buffer in common literature) into which you add vertices from whichever algorithm you use.

You can manipulate the geometry in the buffer before sending it to the gfx card (e.g. calculate normals or something).

Then, when you finally send the geometry, the actual process of sending is faster, because it needs to be done only once per buffer. This procedure is known as 'batching'.

I think that batching will speed up your presentation considerably.
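Something along these lines (just a sketch, with plain C arrays standing in for the buffer; MAX_VERTS, MAX_TRIS and the way your marching cubes code fills verts/tris are placeholders):

```c
#include <GL/gl.h>
#include <math.h>

/* Placeholders: size the buffers to whatever your marching cubes can produce. */
#define MAX_VERTS 65536
#define MAX_TRIS  131072

static float        verts[MAX_VERTS][3];
static float        normals[MAX_VERTS][3];
static unsigned int tris[MAX_TRIS][3];
static int          numVerts, numTris;

/* Average the face normals of all triangles sharing a vertex. */
static void compute_smooth_normals(void)
{
    int i, j;

    for (i = 0; i < numVerts; ++i)
        normals[i][0] = normals[i][1] = normals[i][2] = 0.0f;

    for (i = 0; i < numTris; ++i) {
        unsigned int a = tris[i][0], b = tris[i][1], c = tris[i][2];
        float ux = verts[b][0]-verts[a][0], uy = verts[b][1]-verts[a][1], uz = verts[b][2]-verts[a][2];
        float vx = verts[c][0]-verts[a][0], vy = verts[c][1]-verts[a][1], vz = verts[c][2]-verts[a][2];
        float nx = uy*vz - uz*vy, ny = uz*vx - ux*vz, nz = ux*vy - uy*vx; /* face normal, unnormalized */
        for (j = 0; j < 3; ++j) {
            normals[tris[i][j]][0] += nx;
            normals[tris[i][j]][1] += ny;
            normals[tris[i][j]][2] += nz;
        }
    }

    for (i = 0; i < numVerts; ++i) {
        float len = sqrtf(normals[i][0]*normals[i][0] +
                          normals[i][1]*normals[i][1] +
                          normals[i][2]*normals[i][2]);
        if (len > 0.0f) {
            normals[i][0] /= len; normals[i][1] /= len; normals[i][2] /= len;
        }
    }
}

/* Send the whole buffer in one call - this is the batching part. */
static void draw_buffered_geometry(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, normals);
    glDrawElements(GL_TRIANGLES, numTris * 3, GL_UNSIGNED_INT, tris);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
}
```

Then the per-frame flow is: fill the arrays from your marching cubes, call compute_smooth_normals(), and call draw_buffered_geometry() once.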

Nik

Niko Suni

This topic is closed to new replies.
