apatriarca
Member Since: 03 Jul 2006
Offline, Last Active: Yesterday, 02:22 PM
Community Stats
Group: Crossbones+
Active Posts: 865
Profile Views: 10,431
Submitted Links: 0
Member Title: Member
Age: 31 years old
Birthday: February 11, 1985
Gender: Male
Location: Torino, Italy
Posts I've Made
In Topic: Calculating used space on a gaming board
08 July 2016 - 04:43 AM
You can subdivide the regions with axis-aligned rectangles (it seems your cuts are either vertical or horizontal) and quite easily compute the area of each region by summing the areas of its rectangles.
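For example, a minimal C++ sketch, assuming the regions have already been cut into non-overlapping axis-aligned rectangles (Rect and usedArea are hypothetical names, not from any particular library):

#include <vector>

// Axis-aligned rectangle given by its min/max corners (hypothetical type).
struct Rect {
    float minX, minY, maxX, maxY;
};

// Total used area, assuming the subdivision rectangles do not overlap.
float usedArea(const std::vector<Rect>& rects) {
    float area = 0.0f;
    for (const Rect& r : rects)
        area += (r.maxX - r.minX) * (r.maxY - r.minY);
    return area;
}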
In Topic: Approximating Diffuse Indirect Illumination
01 July 2016 - 04:05 AM
If you do not understand the mathematics (and physics?) of indirect illumination, why are you trying to come up with a real-time algorithm for it? The first step in solving any problem (and thus in creating an algorithm for it) is to understand it in depth. You should at least have looked at existing algorithms that do the same thing!
I find your description very confusing, and I am not sure what you mean by "photon" in your algorithm. A photon is simply light; there is no direct or indirect contribution. The algorithm also looks quite expensive, since you probably need to render your scene for a lot of photons and (I guess) store the resulting render buffers in several textures. You will then need to retrieve all this information in some way in the final pass.
In Topic: Right- vs. Left-Handed Matrix Representation
29 July 2015 - 04:58 AM
I do not understand what lookAt and forward are in your code. I have always considered the two terms to mean exactly the same thing. I have always used Forward, Up and Side, as in the GLU code you have posted. Note that by calling the Right vector Side, you are actually making the code independent of handedness/orientation: if you are working in a RH coordinate system, Side will be Right; otherwise it will be Left. But the code is the same in both cases.
The following pseudocode describes the LookAt function from the old D3D9 documentation:
zaxis = normal(cameraTarget - cameraPosition)
xaxis = normal(cross(cameraUpVector, zaxis))
yaxis = cross(zaxis, xaxis)

 xaxis.x                      yaxis.x                      zaxis.x                     0
 xaxis.y                      yaxis.y                      zaxis.y                     0
 xaxis.z                      yaxis.z                      zaxis.z                     0
-dot(xaxis, cameraPosition)  -dot(yaxis, cameraPosition)  -dot(zaxis, cameraPosition)  1

It is different from your code mainly because the projection transformations used in the two APIs are defined differently: in DX the camera is looking in the +Z direction, while in OpenGL it is looking in the -Z direction. Note, however, that the other axes are defined slightly differently as well. The xaxis points to the right of the camera (cameraUpVector and zaxis are swapped with respect to the GLU code, but the cross product works out differently, so you get the right vector in the end), and the yaxis points up (the discussion is similar to the one for xaxis). If you use this code with a RH convention, you get the xaxis pointing to the left, the yaxis pointing up and the zaxis pointing forward.
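To make this concrete, here is a minimal C++ sketch of the construction above; Vec3 and its helper functions are hypothetical stand-ins, and the matrix is stored row-major in a flat array with exactly the layout shown:

#include <cmath>

struct Vec3 { float x, y, z; };

static Vec3  sub(Vec3 a, Vec3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }
static Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
static Vec3  normal(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Left-handed view matrix as in the D3D9 documentation (row-major layout).
void lookAtLH(Vec3 eye, Vec3 target, Vec3 up, float m[16]) {
    Vec3 zaxis = normal(sub(target, eye));   // camera looks along +Z
    Vec3 xaxis = normal(cross(up, zaxis));   // right
    Vec3 yaxis = cross(zaxis, xaxis);        // up
    float r[16] = {
        xaxis.x, yaxis.x, zaxis.x, 0.0f,
        xaxis.y, yaxis.y, zaxis.y, 0.0f,
        xaxis.z, yaxis.z, zaxis.z, 0.0f,
        -dot(xaxis, eye), -dot(yaxis, eye), -dot(zaxis, eye), 1.0f
    };
    for (int i = 0; i < 16; ++i) m[i] = r[i];
}

The RH variant (as in D3DXMatrixLookAtRH) only negates the forward direction, i.e. zaxis = normal(cameraPosition - cameraTarget); the rest of the construction is unchanged.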
Hope this helps,
Antonio
In Topic: Right- vs. Left-Handed Matrix Representation
28 July 2015 - 10:03 AM
We say two (orthogonal, for simplicity) coordinate frames have the same orientation if we can transform one into the other by applying a rigid transformation (rotation + translation). There are only two possible orientations, which we call right-handed and left-handed because they correspond to the two constructions already described by mv348. If we have to transform something from a right-handed (resp. left-handed) coordinate frame to a left-handed (resp. right-handed) one, we have to use a reflection and thus a matrix with negative determinant. However, if we want to transform something between two coordinate frames with the same orientation (which is basically always the case in graphics APIs), we use a matrix with positive determinant. As long as you are consistent with your conventions, you can simply ignore these issues.
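As a concrete illustration of the determinant test, a minimal C++ sketch (det3 is a hypothetical helper; the frame's basis vectors are taken as the columns of the matrix):

// Determinant of the 3x3 matrix whose columns are the basis vectors
// x, y, z of a frame, computed as the scalar triple product x . (y x z).
float det3(const float x[3], const float y[3], const float z[3]) {
    return x[0] * (y[1] * z[2] - y[2] * z[1])
         + x[1] * (y[2] * z[0] - y[0] * z[2])
         + x[2] * (y[0] * z[1] - y[1] * z[0]);
}

// Example: the standard basis (1,0,0), (0,1,0), (0,0,1) gives +1
// (right-handed); negating any single axis flips the sign, i.e. a
// reflection into the opposite (left-handed) orientation.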
In Topic: Cubic interpolation over a triangular surface
13 June 2015 - 08:42 AM