
cube/environment mapping algorithm questions


deryk    122
Hi everyone. Before I immerse myself completely in programming environment mapping the way I understand it, I thought I'd drop by and ask the experts first. Here's how I understand this cube mapping thing (which I plan to use for environment maps), based on the tutorial over at NVIDIA:

1. For every vertex, find the normal. The normal component with the greatest magnitude tells you which way the polygon is mostly facing, and therefore which of the six texture maps to use for this polygon (e.g. if the x component is the greatest and positive, use the right-hand-side texture map).

2. Divide the two lesser components by the greater one to get the texture coordinates to map onto the vertex.

That's how I understood it, but some other references I read before the NVIDIA material left me pretty confused. Also, how do I get the reflective shine? First though, please correct my algorithm if it's even a bit wrong, or completely wrong. I'd really like to get to work as soon as possible, so any info would be great. Thanks!
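In code form, here's roughly what I have in mind (just a sketch; cube_lookup is an illustrative name, and the exact per-face s/t orientation and sign flips will depend on how the six face textures were made):

#include <math.h>

/* Pick the cube face from the largest-magnitude component of a direction
   vector (e.g. a per-vertex reflection vector), then divide the two minor
   components by the major one to get coordinates in [-1, 1]. */
void cube_lookup(float x, float y, float z, int *face, float *s, float *t)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);

    if (ax >= ay && ax >= az) {          /* +X or -X face */
        *face = (x > 0.0f) ? 0 : 1;
        *s = y / ax;  *t = z / ax;
    } else if (ay >= az) {               /* +Y or -Y face */
        *face = (y > 0.0f) ? 2 : 3;
        *s = x / ay;  *t = z / ay;
    } else {                             /* +Z or -Z face */
        *face = (z > 0.0f) ? 4 : 5;
        *s = x / az;  *t = y / az;
    }

    /* remap from [-1, 1] to [0, 1] for texture addressing */
    *s = 0.5f * (*s + 1.0f);
    *t = 0.5f * (*t + 1.0f);
}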

Sneftel    1788
Your algorithm is correct. However, keep in mind that DX/OGL will do cubemapping for you; there's no need to implement it yourself.
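In OpenGL it's just texgen. Something like this (a sketch, assuming the GL_ARB_texture_cube_map extension and cubeTex being a cube texture you've already loaded):

glEnable(GL_TEXTURE_CUBE_MAP_ARB);
glBindTexture(GL_TEXTURE_CUBE_MAP_ARB, cubeTex);

// let GL compute the per-vertex reflection vector itself (needs correct
// normals; the vector comes out in eye space, so you may also want a
// texture-matrix fixup for a world-space cube map)
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glTexGeni(GL_R, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP_ARB);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
glEnable(GL_TEXTURE_GEN_R);

drawModel();   // your usual drawing code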


How appropriate. You fight like a cow.

deryk    122
You mean there's a way to automate cube mapping in OGL?

How might I do that? If it'll save me the trouble, I'll go for it.

deryk    122
Hi. Apparently, while my PC at home supports this cube map thing, the PCs at my school (on which we have to present our projects) don't. Older 3D cards, I guess.

So this time I'm going to have to implement cube mapping manually.

Now that I have a pretty OK way of generating the texture coordinates (I'm programming the algorithm right now), how do I get that shiny, slick look? Looking at various liquid-metal demos (e.g. Terminator 2, Virtua Fighter 3), I noticed that reflections alone aren't enough; the models themselves look really smooth and reflective. How can this be done?

mictian    138
What object do you intend to environment map? To get a nice smooth surface, just increase the geometric detail: tessellate to a finer level.

Sander    1332
Yes. The shiny stuff you see is simply the cubemap applied to the model. Also, don't forget you need two-pass rendering (or multitexturing): one pass for the model's real texture, and one pass for the cubemap (the shiny part).

If you mean the white glimmer that occurs only at certain angles (near the edges of the model), look up specular highlights.
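In fixed-function OpenGL that's just a material setting, e.g. (a sketch; assumes you already have a light set up):

GLfloat white[4] = { 1.0f, 1.0f, 1.0f, 1.0f };
glMaterialfv(GL_FRONT, GL_SPECULAR, white);
glMaterialf(GL_FRONT, GL_SHININESS, 64.0f);   // 0..128; higher = tighter highlight
glEnable(GL_LIGHTING);
glEnable(GL_LIGHT0);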

Sander Maréchal
[Lone Wolves Game Development][RoboBlast][Articles][GD Emporium][Webdesign][E-mail]


GSACP: GameDev Society Against Crap Posting
To join: Put these lines in your signature and don't post crap!

Nik02    4348
Deryk, have you tried subdividing your marching cubes? That way the surface gains more tessellation.

Also, smooth the normals after generating the geometry!

kind regards, Nik

deryk    122
sander: For two-pass texturing, do I simply bind one texture, assign it to the vertices, then bind another texture and assign it to the vertices again? That's how I've always understood two-pass texturing, but wouldn't the second texture completely cover up the first one?

nik02: Hey there, man. Nice to hear from you again! For the floors and walls of my scene I already use vertex normals (I compute them manually), but for the actual metaballs this will be impossible. My problem now is the procedure. I do get the idea (average the normals of the faces that share a vertex and assign the result to that vertex), but I assign normals to vertices inside the glBegin()/glEnd() sequence. If that's how I do it, how can I average the normals of vertex-sharing faces that haven't been constructed yet, since I assign normals to vertices as they are constructed? I can't really increase the subdivisions, since that slows down the computers at school and would therefore mess up my presentation.

Nik02    4348
Hi Deryk,

In this case, you should construct a buffer (a vertex buffer, in common literature) into which you add the vertices from whichever algorithm you use.

You can then manipulate the geometry in the buffer before sending it to the graphics card (e.g. calculate the normals).

Then, when you finally send the geometry, the sending itself is faster, because it only needs to happen once per buffer. This procedure is known as 'batching'.

I think batching will speed up your presentation considerably.
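A rough sketch of what I mean, in C (a hypothetical layout; the vertex welding, i.e. making shared vertices actually shared in the index array, is up to your polygonizer):

#include <GL/gl.h>
#include <math.h>

#define MAX_VERTS 65536
static GLfloat verts[MAX_VERTS][3];   /* filled by your marching cubes */
static GLfloat norms[MAX_VERTS][3];
static GLuint  indices[3 * MAX_VERTS];
static int numVerts, numTris;

/* average the face normals of all triangles sharing each vertex */
void smoothNormals(void)
{
    int i, t, k;
    for (i = 0; i < numVerts; ++i)
        norms[i][0] = norms[i][1] = norms[i][2] = 0.0f;

    for (t = 0; t < numTris; ++t) {
        GLuint a = indices[3*t], b = indices[3*t+1], c = indices[3*t+2];
        GLfloat e1[3], e2[3], n[3];
        for (k = 0; k < 3; ++k) {
            e1[k] = verts[b][k] - verts[a][k];
            e2[k] = verts[c][k] - verts[a][k];
        }
        n[0] = e1[1]*e2[2] - e1[2]*e2[1];   /* face normal = e1 x e2 */
        n[1] = e1[2]*e2[0] - e1[0]*e2[2];
        n[2] = e1[0]*e2[1] - e1[1]*e2[0];
        for (k = 0; k < 3; ++k) {
            norms[a][k] += n[k]; norms[b][k] += n[k]; norms[c][k] += n[k];
        }
    }
    for (i = 0; i < numVerts; ++i) {
        GLfloat len = sqrtf(norms[i][0]*norms[i][0] +
                            norms[i][1]*norms[i][1] +
                            norms[i][2]*norms[i][2]);
        if (len > 0.0f)
            for (k = 0; k < 3; ++k) norms[i][k] /= len;
    }
}

/* one call submits the whole buffer; this is the batching */
void drawBatch(void)
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glVertexPointer(3, GL_FLOAT, 0, verts);
    glNormalPointer(GL_FLOAT, 0, norms);
    glDrawElements(GL_TRIANGLES, 3 * numTris, GL_UNSIGNED_INT, indices);
}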

Nik

Sander    1332
Look up the multitexture extension. Basically, you bind one texture to texture unit 0 and another to texture unit 1 (you also supply texture coords for each, of course). Then you render, and the two textures are blended together, so you get the skin with some shiny additions.
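Something like this (a sketch; assumes the ARB_multitexture function pointers have been fetched, and skinTex/envTex stand for your own texture handles):

glActiveTextureARB(GL_TEXTURE0_ARB);        // unit 0: the model's skin
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, skinTex);

glActiveTextureARB(GL_TEXTURE1_ARB);        // unit 1: the environment map
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, envTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_ADD);  // needs ARB_texture_env_add

glBegin(GL_TRIANGLES);
    // one coordinate set per unit, per vertex
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, s0, t0);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, s1, t1);
    glVertex3f(x, y, z);
    // ... remaining vertices ...
glEnd();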

You can do it manually too (two-pass algorithm). Roughly:

// pass 1: the model's own texture
glBindTexture(GL_TEXTURE_2D, baseTex);
drawModel();

// pass 2: the environment map, blended on top
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);            // additive: adds the shine
glEnable(GL_POLYGON_OFFSET_FILL);       // offset the z buffer to prevent z-fighting
glPolygonOffset(-1.0f, -1.0f);
glBindTexture(GL_TEXTURE_2D, envTex);
drawModel();                            // render the model again
glDisable(GL_POLYGON_OFFSET_FILL);
glDisable(GL_BLEND);

(baseTex/envTex and drawModel() stand for your own texture handles and drawing code.)


deryk    122
Hmm, I see.

Let me clarify one thing first, though: what exactly do we mean by "render"? I know it sounds stupid, but every time I hear "render" I just think of the vertex-declaration phase between the glBegin() and glEnd() statements. That's what I've been thinking all along. Corrections, anyone?

Anyway, for the two-pass algorithm, I plan on implementing it myself just so I can explain it better. So for the second pass, I just texture as normal, except I enable blending and the z-offset?

Atheist    150
If you are using a polygonized scalar field, you can get the normal at any vertex by simply taking the gradient of your field at that point and normalizing it.
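For example (field() stands for your own scalar-field function, e.g. the sum of the metaball kernels; the step size is a guess you should tune):

#include <math.h>

extern float field(float x, float y, float z);   /* your scalar field */

/* estimate the gradient with central differences and normalize it;
   flip the sign if your surface convention points the other way */
void fieldNormal(float x, float y, float z, float n[3])
{
    const float h = 0.01f;   /* step size: tune to your grid resolution */
    float len;

    n[0] = field(x + h, y, z) - field(x - h, y, z);
    n[1] = field(x, y + h, z) - field(x, y - h, z);
    n[2] = field(x, y, z + h) - field(x, y, z - h);

    len = sqrtf(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0.0f) {
        n[0] /= len;  n[1] /= len;  n[2] /= len;
    }
}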

Sander    1332
Yes. "Render" can be a glBegin()/glEnd() block. It can also be a display list call, or a glDrawArrays or glDrawElements call. Basically, rendering is whatever puts the triangles on the screen.

In the two-pass algorithm, yes: for the second pass you just bind a texture and render as normal, but with blending and the z-offset enabled. Only the texture settings are different (e.g. environment mapping via sphere mapping or cube mapping instead of plain texture mapping).
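Since your school cards lack cube maps, sphere mapping is the classic fixed-function fallback for that second pass, e.g. (a sketch, assuming envTex holds a sphere-map image):

glBindTexture(GL_TEXTURE_2D, envTex);
glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glTexGeni(GL_T, GL_TEXTURE_GEN_MODE, GL_SPHERE_MAP);
glEnable(GL_TEXTURE_GEN_S);
glEnable(GL_TEXTURE_GEN_T);
drawModel();   // second pass: GL generates the env-map coords itself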

