
Archived

This topic is now archived and is closed to further replies.

deryk

cube/environment mapping algorithm questions



Hi everyone. Before I immerse myself completely in programming environment mapping the way I understand it, I thought I'd drop by and ask the experts first. Here's how I understand this cubemapping thing (which I plan to use for environment maps), from reading the tutorial over at nvidia:

1. For every vertex, find the normal. The coordinate of the normal with the greatest absolute value tells which direction the polygon is mostly facing, and thus which of the six texture maps to map onto this polygon (e.g. if the x coordinate has the greatest magnitude and is positive, use the right-hand-side texture map).

2. Divide the two lesser coordinates by the magnitude of the greater one to find the coordinates within that texture map for the vertex.

That's how I understood it, but I've been pretty confused by some other references out there that I read prior to the stuff at nvidia. Also, how do I get the reflective shine? First though, please do correct my algorithm if it's wrong in any way, or even completely wrong. I'd really like to get to work as soon as possible, so any info would be great. Thanks!
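The two steps above can be sketched in plain C. This is a minimal illustration with names of my own invention (not from any API): pick the face from the dominant component, then divide the other two components by its magnitude, which lands them in [-1, 1] on that face (remap to [0, 1] for actual texture coordinates). Note that for a mirror-like environment map the lookup direction is usually the per-vertex reflection vector rather than the raw normal.

```c
#include <assert.h>
#include <math.h>

/* Hypothetical face labels for the six cube-map textures. */
typedef enum { FACE_POS_X, FACE_NEG_X, FACE_POS_Y,
               FACE_NEG_Y, FACE_POS_Z, FACE_NEG_Z } cube_face;

/* Given a direction (x, y, z), pick the cube face from the component
 * with the largest absolute value, and write the two remaining
 * components divided by that magnitude into *u and *v (range [-1, 1]). */
static cube_face pick_face(float x, float y, float z, float *u, float *v)
{
    float ax = fabsf(x), ay = fabsf(y), az = fabsf(z);
    if (ax >= ay && ax >= az) {          /* X dominates: right/left face */
        *u = y / ax; *v = z / ax;
        return x > 0.0f ? FACE_POS_X : FACE_NEG_X;
    } else if (ay >= az) {               /* Y dominates: top/bottom face */
        *u = x / ay; *v = z / ay;
        return y > 0.0f ? FACE_POS_Y : FACE_NEG_Y;
    } else {                             /* Z dominates: front/back face */
        *u = x / az; *v = y / az;
        return z > 0.0f ? FACE_POS_Z : FACE_NEG_Z;
    }
}
```

A direction of (2, 0.5, 0.5), for example, selects the +X face with face coordinates (0.25, 0.25).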

Your algorithm is correct. However, keep in mind that DX/OGL will do cubemapping for you; there's no need to implement it yourself.
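In OpenGL the automatic route is texture-coordinate generation: with a cube-map texture bound, glTexGeni(GL_S, GL_TEXTURE_GEN_MODE, GL_REFLECTION_MAP) (plus the same for GL_T and GL_R, and glEnable on GL_TEXTURE_GEN_S/T/R) makes the driver compute the lookup direction per vertex. What it computes is the eye-space reflection vector R = I - 2(N·I)N. A sketch of just that math, assuming the normal is already unit length:

```c
#include <assert.h>

/* Compute the reflection of incident direction `in` about unit normal
 * `n`: out = in - 2 (n . in) n.  This is the per-vertex value that
 * GL_REFLECTION_MAP texgen feeds into the cube-map lookup. */
static void reflect(const float in[3], const float n[3], float out[3])
{
    float d = n[0]*in[0] + n[1]*in[1] + n[2]*in[2];   /* n . in */
    out[0] = in[0] - 2.0f * d * n[0];
    out[1] = in[1] - 2.0f * d * n[1];
    out[2] = in[2] - 2.0f * d * n[2];
}
```

For example, a view ray (0, 0, -1) hitting a surface with normal (0, 0, 1) reflects straight back to (0, 0, 1).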


How appropriate. You fight like a cow.

You mean there's a way to automate cubemapping in OGL?

How might I do that? If it'll save me the trouble, I'll go for it.

Hi. Apparently, while my PC at home supports this cube map thing, the PCs at my school (for which we have to design our projects) don't. Older 3D cards, I guess.

So this time I'm going to have to implement cube mapping manually.

Now that I have a pretty OK way of generating the texture coordinates (I'm programming the algorithm right now), how do I work on getting the shiny, slick look? I noticed, looking at various liquid-metal demos (i.e. Terminator 2, Virtua Fighter 3), that reflections aren't enough; the models themselves look really smooth and reflective and all that. How can this be done?

What object do you intend to environment map? To get a nice smooth surface, just increase the geometric detail: tessellate to a finer level.
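One common way to tessellate to a finer level is midpoint subdivision: split each triangle into four using its edge midpoints, and repeat until the silhouette is smooth enough. A minimal sketch with made-up types, not tied to any particular mesh library:

```c
#include <assert.h>

typedef struct { float x, y, z; } vec3;
typedef struct { vec3 a, b, c; } tri;

static vec3 midpoint(vec3 p, vec3 q)
{
    vec3 m = { (p.x + q.x) * 0.5f, (p.y + q.y) * 0.5f, (p.z + q.z) * 0.5f };
    return m;
}

/* One subdivision level: writes four smaller triangles into out[].
 * Each pass doubles the edge resolution of the mesh. */
static void subdivide(tri t, tri out[4])
{
    vec3 ab = midpoint(t.a, t.b);
    vec3 bc = midpoint(t.b, t.c);
    vec3 ca = midpoint(t.c, t.a);
    out[0] = (tri){ t.a, ab, ca };   /* corner triangles */
    out[1] = (tri){ ab, t.b, bc };
    out[2] = (tri){ ca, bc, t.c };
    out[3] = (tri){ ab, bc, ca };    /* center triangle  */
}
```

For a sphere-like shape you would also push the new midpoints back out to the surface after splitting; otherwise subdivision alone only refines the existing flat faces.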

Yes. The shiny stuff you see is simply the cubemap applied to the model. Also, don't forget you need two-pass rendering (or multitexturing): one pass for the model's real texture, and one pass for the cubemap (the shiny part).

If you are referring to the white glimmer that occurs only at certain angles (near the edge of the model), then look up specular highlights.

Sander Maréchal



Deryk, have you tried subdividing your marching cubes? That way they gain more tessellation.
Also, smooth the normals after generating the geometry!

kind regards, Nik

sander: For two-pass texturing, do I simply bind one texture, assign it to the vertices, then bind another texture and assign it to the vertices again? That's how I always understood two-pass texturing, but wouldn't the second texture completely cover up the first one?

nik02: Hey there man, nice to hear from you again! For the floors and walls of my scene I already use vertex normals (I compute them manually), but for the actual metaballs this'll be impossible. My problem now is the function. While I do get the idea (average the normals of the faces containing the vertex and assign the result to the vertex), I assign normals to vertices during the glBegin()/glEnd() sequence. If that is how I do it, how can I average the normals of vertex-sharing faces that haven't been constructed yet, since I assign normals to vertices during their construction? I can't really increase the subdivisions, since they slow down the computers at school and would therefore mess up my presentation.

Hi Deryk,

In this case, you should construct a buffer (a vertex buffer, in common literature) into which you add vertices from whichever algorithm you use.

You can manipulate the geometry in the buffer before sending it to the graphics card (e.g. calculate normals or something).

Then, when you finally send the geometry, the actual process of sending is faster, because it needs to be done only once per buffer. This procedure is known as 'batching'.

I think batching will speed up your presentation considerably.
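A minimal sketch of such a buffer (capacity and names invented for the example): accumulate positions into one array, then submit the whole array in a single call, e.g. with OpenGL vertex arrays (glVertexPointer plus glDrawArrays) instead of one glVertex3f per vertex.

```c
#include <assert.h>

#define BATCH_MAX 4096   /* arbitrary capacity for the example */

typedef struct {
    float pos[BATCH_MAX][3];   /* xyz per vertex */
    int   count;
} vertex_batch;

/* Append one vertex; returns 0 when the batch is full (flush it first). */
static int batch_push(vertex_batch *b, float x, float y, float z)
{
    if (b->count >= BATCH_MAX) return 0;
    b->pos[b->count][0] = x;
    b->pos[b->count][1] = y;
    b->pos[b->count][2] = z;
    b->count++;
    return 1;
}

/* At draw time, one submission for the whole batch (GL calls shown as
 * comments since they need a live context):
 *   glEnableClientState(GL_VERTEX_ARRAY);
 *   glVertexPointer(3, GL_FLOAT, 0, b->pos);
 *   glDrawArrays(GL_TRIANGLES, 0, b->count);
 *   b->count = 0;
 */
```

Normal smoothing fits naturally between the fill step and the draw step, since the whole mesh is sitting in the buffer at that point.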

Nik

