Hi!

Currently, I'm writing a little raytracing library. Everything has worked very well so far, but now I'm implementing matrices for the transformations, so I would like to get some things right, because I think they could be wrong:

- LoadMatrix: Simply resets all Push and Pop stages and loads the given values into CurrentMatrix.

- PushMatrix: Creates a new matrix, copies the current stage and pushes the current matrix onto the stack

- MultMatrix: Multiplies CurrentMatrix * ObjectMatrix

- PopMatrix: Restores the last pushed stage

- Normal Matrix: Upper 3x3 of CurrentMatrix, inverted and transposed (computed every time Load or MultMatrix is called)

- Set a Vertex: Vector(x, y, z) is multiplied by CurrentMatrix

- Set a Normal: Normal(nx, ny, nz) is multiplied by CurrentMatrix -- or, when computed by the intersection algorithm, that normal is used directly (and not transformed by NormalMatrix?)

- The Lights: Position and Direction are recomputed each time "LoadMatrix" is called, not multiplied by CurrentMatrix

- Rays of Projection: Computed once per "SetViewport" - because the world itself is transformed, it isn't necessary to transform them
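For reference, the stack behaviour described above could be sketched like this. This is a minimal Python sketch, assuming row-major 4x4 matrices stored as nested lists; the class and function names are illustrative, not taken from the actual library:

```python
def mat_mul(a, b):
    """Multiply two 4x4 matrices."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

IDENTITY = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]

class MatrixStack:
    def __init__(self):
        self.stack = []                               # saved stages
        self.current = [row[:] for row in IDENTITY]   # CurrentMatrix

    def load_matrix(self, m):
        # Replace CurrentMatrix with the given values.
        self.current = [row[:] for row in m]

    def push_matrix(self):
        # Save a copy of the current stage on the stack.
        self.stack.append([row[:] for row in self.current])

    def pop_matrix(self):
        # Restore the last pushed stage.
        self.current = self.stack.pop()

    def mult_matrix(self, object_matrix):
        # CurrentMatrix = CurrentMatrix * ObjectMatrix (OpenGL 1.x order).
        self.current = mat_mul(self.current, object_matrix)

def normal_matrix(m):
    """Inverse-transpose of the upper-left 3x3 of a 4x4 matrix."""
    a = [row[:3] for row in m[:3]]
    det = (a[0][0] * (a[1][1]*a[2][2] - a[1][2]*a[2][1])
         - a[0][1] * (a[1][0]*a[2][2] - a[1][2]*a[2][0])
         + a[0][2] * (a[1][0]*a[2][1] - a[1][1]*a[2][0]))
    # inverse = adjugate / det and adjugate = cofactor^T, so
    # inverse^T = cofactor / det: the cofactor matrix scaled by 1/det
    # is the inverse-transpose directly.
    cof = [
        [ (a[1][1]*a[2][2] - a[1][2]*a[2][1]),
         -(a[1][0]*a[2][2] - a[1][2]*a[2][0]),
          (a[1][0]*a[2][1] - a[1][1]*a[2][0])],
        [-(a[0][1]*a[2][2] - a[0][2]*a[2][1]),
          (a[0][0]*a[2][2] - a[0][2]*a[2][0]),
         -(a[0][0]*a[2][1] - a[0][1]*a[2][0])],
        [ (a[0][1]*a[1][2] - a[0][2]*a[1][1]),
         -(a[0][0]*a[1][2] - a[0][2]*a[1][0]),
          (a[0][0]*a[1][1] - a[0][1]*a[1][0])],
    ]
    return [[c / det for c in row] for row in cof]
```

For a uniform scale by 2, for example, the normal matrix comes out as a uniform scale by 0.5, which is why normals must use the inverse-transpose rather than CurrentMatrix when non-rigid transforms are involved.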

It would be nice if someone could tell me whether my thoughts are right or not.

Happy coding,

Lunatix


# Matrices, Lights, Objects - Transformation Question

Started by Lunatix, Sep 04 2012 12:41 PM

3 replies to this topic


### #2

Members - Reputation: **1557**

Posted 05 September 2012 - 07:52 AM

The functions you mention above are mostly deprecated legacy OpenGL. See http://www.opengl.or...i/Legacy_OpenGL for more information. That is, a ray tracing library based on this API style might not be usable from modern applications.

It is possible to emulate the push/pop style of using matrices, and this has been done elsewhere. But I don't really see the need for it (except to facilitate the transition from immediate mode to contemporary OpenGL). A math library like glm (http://glm.g-truc.net/) will handle this very efficiently, with extensive support.

The correct way to define geometry is no longer to use the fixed-function calls, but to define it in buffers that are transferred to the GPU. Using shaders also gives you much more advanced support for transformations. Lights are entirely handled by the shader now; there is no explicit support for them in the API.


**Edited by larspensjo, 05 September 2012 - 07:53 AM.**

### #3

Members - Reputation: **143**

Posted 05 September 2012 - 09:30 AM

Umm... okay, that wasn't clear enough >_< I don't use any OpenGL functions; these are my own functions, but I'd like them to work the way OpenGL 1.1 supported them. It was more a question of whether I'm right or wrong with my thoughts.

You like voxels? Then you may like... http://gameworx.org/?p=36

### #4

Crossbones+ - Reputation: **2341**

Posted 06 September 2012 - 06:35 AM

- LoadMatrix: ~~Simply resets all Push and Pop stages and~~ loads the given values into CurrentMatrix.

- PushMatrix: Creates a new matrix, copies the current stage and pushes the current matrix onto the stack

- MultMatrix: Multiplies CurrentMatrix * ObjectMatrix

- PopMatrix: Restores the last pushed stage

- Normal Matrix: Upper 3x3 of CurrentMatrix, Inversed, Transposed (Compute it every time ~~Load or MultMatrix is called~~ geometry is rendered)

- Set a Vertex: Vector(x, y, z) is multiplied by CurrentMatrix. Possibly. The alternative is to transform the lights & rays by the inverse CurrentMatrix. It depends which one will be computationally cheaper.....

- Set a Normal: Normal(nx, ny, nz) is multiplied by NormalMatrix ~~CurrentMatrix -- or, when computed by the intersection algorithm, this normal is used (and not transformed by~~

- The Lights: ~~Position and Direction is recomputed each "LoadMatrix" is called, not multiplied by CurrentMatrix~~ Transformed into the local space of the geometry when needed (or transformed into world space - depends which one is cheaper)

- Rays of Projection: Computed once per "SetViewport" - because our world is transformed, it isn't necessary to transform them. Sort of. You'll need to compute the horizontal and vertical angles when the viewport changes size (using tan()), and then the rays can be constructed using basic triangle ratios. You may find transforming the rays into the local space of the geometry being rendered is useful (and this can be accelerated by rendering the front/back faces of a local space bounding box whose vertex colours represent the direction from the origin). See GPU raycasting for more details....
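The viewport-based ray construction described in that last point could look roughly like this. It's a Python sketch under the assumption of a symmetric pinhole camera looking down -z; `fov_y_deg` and the pixel-centre mapping are illustrative choices, not taken from the thread:

```python
import math

def make_rays(width, height, fov_y_deg):
    """Build one (unnormalised) ray direction per pixel, camera at the
    origin looking down -z. Recompute only when the viewport changes."""
    # Vertical half-extent of the image plane at z = -1, via tan().
    half_h = math.tan(math.radians(fov_y_deg) / 2.0)
    # Horizontal half-extent follows from the aspect ratio (triangle ratios).
    half_w = half_h * width / height
    rays = []
    for y in range(height):
        for x in range(width):
            # Map each pixel centre into [-1, 1], then scale by the extents.
            sx = ((x + 0.5) / width * 2.0 - 1.0) * half_w
            sy = (1.0 - (y + 0.5) / height * 2.0) * half_h
            rays.append((sx, sy, -1.0))
    return rays
```

These directions are in camera space; as the reply suggests, they can then be transformed into each object's local space by the inverse of that object's matrix instead of transforming every vertex, whichever is cheaper.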
