Kateplate

depth not recalculating upon rotation with translucent objects


Hi, I'm having a problem with a 3D array of translucent cubes (like a Rubik's Cube). Initially it all looks dandy, but when I rotate the view it appears as if it still thinks the cubes that were at the front are still at the front, and renders them as more visible than the current closest objects - so you can see the new back better than the front. It's as if it hasn't adjusted the depth values given the rotation, and draws the objects at the same intensity as before. How can I force it to draw the translucent cubes' intensity in relation to the current camera position rather than the initial positions they were in? Thank you!! Kate :)

I also have opaque objects, and upon rotation these draw fine and block the view of other cubes behind them - in case that helps!

A bit more info: it's not actually drawing the translucent cubes that are in front of an opaque cube after rotation. Instead it has just altered the colour of the opaque cube to include the blended one in front, so it looks like there's empty space between the camera and the next nearest opaque cube, rather than having some ghostly cubes in front, the way it looks initially before rotation.

please please help!

thanks
Kate :)

For most blending functions, transparent objects need to be rendered from back to front. This is because blending calculates the final color based on what's already in the framebuffer and what you're rendering. So if you render something in front first, it will blend with whatever your background color is; then when you render the farther-away object, it blends with the nearer object. You want it the other way around, though: the farther-away object should blend with the background, then the nearer object should blend with the farther-away one.
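For reference, this is what the common alpha-blend setup does (a sketch; GL_SRC_ALPHA / GL_ONE_MINUS_SRC_ALPHA is an assumption on my part, since you haven't posted your blending code):

    // Typical alpha blending setup (assumed; your actual code may differ):
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    // With this function every write to the framebuffer computes:
    //   result = src.rgb * src.a + dst.rgb * (1 - src.a)
    // where dst is whatever was already in the buffer - which is exactly
    // why the farther geometry has to be in the buffer first.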

One way around this limitation is called depth peeling, but it is generally not efficient enough for most real-time applications. You may not need your app to run very fast, however, so it's up to you whether or not you want to bother implementing it.

The standard way of dealing with this is to render all your opaque objects, sort your transparent objects (or individual polygons, if there aren't too many or if your app doesn't need to be as fast as possible), turn off depth writing (with glDepthMask), then render your now-sorted transparent objects back to front. Remember to turn depth writing back on afterwards.
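In code, a frame would look roughly like this (just a sketch: renderOpaqueObjects, sortBackToFront, renderCube, cubes and camPos are hypothetical stand-ins for your own routines and data):

    // Draw opaque geometry first, with depth testing and writing on.
    renderOpaqueObjects();

    // Sort the transparent cubes so the farthest is drawn first.
    sortBackToFront(cubes, camPos);

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glDepthMask(GL_FALSE);             // stop writing depth, keep testing it
    for (size_t i = 0; i < cubes.size(); ++i)
        renderCube(cubes[i]);
    glDepthMask(GL_TRUE);              // turn depth writes back on
    glDisable(GL_BLEND);

Keeping the depth test enabled (but not depth writes) during the transparent pass is what lets the opaque cubes still hide the transparent ones behind them.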

If I understand correctly, you are trying to render the cubes with an alpha value (opaqueness) dependent on how close the camera is to the cube (or pixel)?

That's right, except it doesn't seem to work it out in relation to where the camera currently is - it just uses the original order of the objects. I've had it suggested that I should force it manually by calculating a value for each cube giving its distance from the current camera position, then render the objects in order of their distance away, furthest first. This seems quite a long way round though, as getting the current camera position is not simple (unless anyone knows a quick way?) - I think I have to recalculate it every timestep by tracing back through all the transformations and rotations that have been applied to the original camera position.

any simpler ideas?

cheers
Kate :)

Quote:
Original post by Kateplate
That's right, except it doesn't seem to work it out in relation to where the camera currently is - it just uses the original order of the objects. I've had it suggested that I should force it manually by calculating a value for each cube giving its distance from the current camera position, then render the objects in order of their distance away, furthest first. This seems quite a long way round though, as getting the current camera position is not simple (unless anyone knows a quick way?) - I think I have to recalculate it every timestep by tracing back through all the transformations and rotations that have been applied to the original camera position.

any simpler ideas?

cheers
Kate :)
Well, as I mentioned in my other post, this is the way it's done by pretty much everyone. You will need to sort your objects by depth (distance to the camera). You could do this in eye space, in which the camera is at the origin, but then you need to transform all your objects to eye space before calculating their depth. The easiest is to do it in world space, which is probably the space in which all your objects' positions are stored. You should also have a camera class that knows its position in world space; then it is simple to calculate the distance of each object to the camera in world space. You can even use the squared distance to get rid of the per-object sqrt operation. You may also be able to make use of temporal coherence (i.e. objects will generally stay in roughly the same order from frame to frame) so that you won't need to do a full sort every frame.
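Something like this, for instance (a sketch - the Cube struct and its pos member are assumptions about how you store your objects):

    #include <algorithm>
    #include <vector>

    struct Vec3 { float x, y, z; };
    struct Cube { Vec3 pos; /* colour, alpha, ... */ };

    // Squared distance orders objects the same way as true distance,
    // so the sqrt can be skipped entirely.
    inline float distSq(const Vec3& a, const Vec3& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    void sortBackToFront(std::vector<Cube>& cubes, const Vec3& camPos)
    {
        std::sort(cubes.begin(), cubes.end(),
                  [&](const Cube& a, const Cube& b)
                  { return distSq(a.pos, camPos) > distSq(b.pos, camPos); });
    }

If you do want to exploit the frame-to-frame coherence, an insertion sort run over the previous frame's order is close to O(n) when only a few objects have swapped places.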

Hope that helps.

Cool, thanks. Although my problem now is: how do you get the current cam position? Do I just multiply the current 4x4 rotation matrix (which gets updated by the user via a GLUI rollerball control) by the cam position vector (x, y, z, 1)? That doesn't take account of the x and y translations, though. Is there an easier way?

glui2->add_column( false );
GLUI_Rotation *view_rot = glui2->add_rotation( "Rotate", view_rotate );
view_rot->set_spin( 0 );

float view_rotate[16] is set to the identity matrix initially.

do I do:

campos=[x,y,z,1];
glMatrixMode(campos);
glMultMatrix(view_rotate); ?

on every time step perhaps?

much appreciated!
Kate

Hmm, that depends on what the roller is controlling. It's been a long time since I've used GLUI, so I don't remember exactly what those are capable of. Do you use it to rotate the objects; to rotate the camera around the objects (the camera looks at one point and is rotated around it so that it always sits on the surface of a sphere some radius from that point, like a third-person camera); or just to rotate the camera's view (the camera stays in one place and "looks around" with the roller, like a first-person camera)?

If it's the first option, then you will either have to multiply all the objects' positions by that matrix to get them into view space, or multiply (0,0,0) (the camera's position in view space) by the inverse of that matrix (which should just be the transpose, unless there is a non-uniform scale in there somewhere) to get the camera into world space.

If it's the second option, I think you will need to take the 12th, 13th, and 14th elements of the matrix array (the translation part of the matrix) and then multiply that vector by either the top-left 3x3 of the matrix or the inverse of that; I'm not sure which, but I'm pretty sure that's what you have to do.

If it's the third option, then I think you wouldn't need to worry about the rotation at all and can just use the translation from the other control as the camera's position.
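For the first two options the math works out the same, assuming the matrix is a rigid transform (rotation plus translation, no scale): the camera's world-space position is -R^T * t. Here's a sketch that reads the current modelview matrix back with glGetFloatv; call it while the modelview contains only the camera/view transform, before any per-object transforms are pushed (cameraWorldPos is just an illustrative name):

    #include <GL/gl.h>

    // Recover the camera's world-space position from the modelview
    // matrix. OpenGL stores matrices column-major, so element
    // (row r, col c) is m[c*4 + r] and the translation t sits in
    // m[12], m[13], m[14]. For a rigid transform the inverse of the
    // rotation part R is its transpose, giving camPos = -R^T * t.
    void cameraWorldPos(float out[3])
    {
        float m[16];
        glGetFloatv(GL_MODELVIEW_MATRIX, m);
        for (int i = 0; i < 3; ++i)
            out[i] = -(m[i*4 + 0] * m[12]
                     + m[i*4 + 1] * m[13]
                     + m[i*4 + 2] * m[14]);
    }

If there is scaling in the matrix you'd need a proper matrix inverse instead of the transpose.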

I'm sorry I couldn't be 100% positive on those points, but I don't have time to go through the math right now. I'm sure if there's something wrong up there, one of the more math-oriented members will come by soon enough and correct me.

