apatriarca

Member Since 03 Jul 2006

Posts I've Made

In Topic: Calculating used space on a gaming board

08 July 2016 - 04:43 AM

You can subdivide the regions with axis-aligned rectangles (it seems the sections of your cuts are either vertical or horizontal) and then quite easily compute the area of each region by summing the areas of its rectangles.
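As a minimal sketch in C++ (assuming the subdivision already produced non-overlapping rectangles; the Rect type and the usedArea name are just illustrative, not from the thread), the summation could look like this:

#include <vector>

// Illustrative axis-aligned rectangle: lower-left corner plus width and height.
struct Rect {
    float x, y;
    float w, h;
};

// Total area covered by a set of non-overlapping axis-aligned rectangles.
// Since the rectangles are disjoint, the areas simply add up.
float usedArea(const std::vector<Rect>& rects) {
    float area = 0.0f;
    for (const Rect& r : rects) {
        area += r.w * r.h;
    }
    return area;
}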


In Topic: Approximating Diffuse Indirect Illumination

01 July 2016 - 04:05 AM

If you do not understand the mathematics (and physics?) of indirect illumination, why are you trying to come up with a real-time algorithm for it? The first step in solving any problem (and thus in creating an algorithm for it) is to understand it in depth. You should at least have looked at existing algorithms that do the same thing!

 

I find your description very confusing. I am not sure I understand what you mean by "photon" in your algorithm. A photon is simply light; there is no direct or indirect contribution. It also looks quite expensive, since you probably need to render your scene for a lot of photons and (I guess) store these render buffers in several textures. You will then need to retrieve all this information in some way in the final pass.


In Topic: Right vs Left-Handed Matrix Representation

29 July 2015 - 04:58 AM

This discussion has more to do with interpretation than with numbers. If we use the left-handed convention (LH), then cross(Forward, Up) = Left and not Right as in the right-handed convention (RH). The exact same formula for the cross product is used; you are, however, interpreting the same numbers in a different way. For example, you may decide your Forward vector is (0, 0, -1) in RH and (0, 0, 1) in LH. The Up and Right vectors are, for example, defined as (0, 1, 0) and (1, 0, 0) in both cases. If we compute the cross product of Forward and Up, we then get (1, 0, 0) in RH and (-1, 0, 0) in LH. You thus get Right in one case and -Right (or Left) in the other, but you have done the same operation.
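To make the arithmetic concrete, here is a small C++ check (the Vec3 type and variable names are just illustrative; the formula is the standard component-wise cross product):

#include <cstdio>

struct Vec3 { float x, y, z; };

// The usual cross product formula; it is the same regardless of convention.
Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}

int main() {
    Vec3 up        = { 0.0f, 1.0f,  0.0f };
    Vec3 forwardRH = { 0.0f, 0.0f, -1.0f };  // Forward in the RH convention
    Vec3 forwardLH = { 0.0f, 0.0f,  1.0f };  // Forward in the LH convention

    Vec3 rh = cross(forwardRH, up);  // (1, 0, 0): interpreted as Right
    Vec3 lh = cross(forwardLH, up);  // (-1, 0, 0): interpreted as Left
    std::printf("RH: (%g, %g, %g)  LH: (%g, %g, %g)\n",
                rh.x, rh.y, rh.z, lh.x, lh.y, lh.z);
}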

I do not understand what lookAt and forward are in your code. I have always considered the two terms to mean exactly the same thing. I have always used Forward, Up and Side as in the GLU code you have posted. Note that by calling the Right vector Side, you are actually making the code independent of handedness/orientation. If you are working in a RH coordinate system, then Side will be Right; otherwise it will be Left. But the code is the same in both cases.

The following pseudo-code describes the LookAt function from the old D3D9 documentation:
zaxis = normal(cameraTarget - cameraPosition)
xaxis = normal(cross(cameraUpVector, zaxis))
yaxis = cross(zaxis, xaxis)

 xaxis.x                      yaxis.x                      zaxis.x                     0
 xaxis.y                      yaxis.y                      zaxis.y                     0
 xaxis.z                      yaxis.z                      zaxis.z                     0
-dot(xaxis, cameraPosition)  -dot(yaxis, cameraPosition)  -dot(zaxis, cameraPosition)  1
It is different from your code mainly because the projection transformations used by the two APIs are defined differently. In DX the camera is looking in the +Z direction, while in OpenGL it is looking in the -Z direction. Note, however, that the other axes are defined slightly differently as well. The xaxis points to the right of the camera (note that cameraUpVector and zaxis are swapped compared to the GLU code, but the cross product changes sign when its arguments are swapped, so you still get the right vector in the end). The yaxis points up (the discussion is similar to the one for the xaxis). If you use this code with a RH convention, you get the xaxis pointing to the left, the yaxis pointing up and the zaxis pointing forward.
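A rough C++ sketch of that pseudo-code might look as follows (the Vec3/Mat4 types, the helper functions and the lookAtLH name are my own illustrative choices, not D3D code; the matrix is stored row-major, with the rows laid out as in the documentation above):

#include <cmath>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)  { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
float dot(Vec3 a, Vec3 b)  { return a.x * b.x + a.y * b.y + a.z * b.z; }
Vec3  cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y,
             a.z * b.x - a.x * b.z,
             a.x * b.y - a.y * b.x };
}
Vec3 normalize(Vec3 v) {
    float len = std::sqrt(dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

// Row-major 4x4 matrix, one row after another as in the D3D9 docs.
struct Mat4 { float m[4][4]; };

// Left-handed look-at view matrix following the pseudo-code above:
// the camera looks down its local +Z axis.
Mat4 lookAtLH(Vec3 cameraPosition, Vec3 cameraTarget, Vec3 cameraUpVector) {
    Vec3 zaxis = normalize(sub(cameraTarget, cameraPosition)); // camera forward (+Z)
    Vec3 xaxis = normalize(cross(cameraUpVector, zaxis));      // camera right
    Vec3 yaxis = cross(zaxis, xaxis);                          // camera up

    Mat4 view = {{
        { xaxis.x, yaxis.x, zaxis.x, 0.0f },
        { xaxis.y, yaxis.y, zaxis.y, 0.0f },
        { xaxis.z, yaxis.z, zaxis.z, 0.0f },
        { -dot(xaxis, cameraPosition),
          -dot(yaxis, cameraPosition),
          -dot(zaxis, cameraPosition), 1.0f }
    }};
    return view;
}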

Hope this helps,
Antonio

In Topic: Right vs Left-Handed Matrix Representation

28 July 2015 - 10:03 AM

We say two (orthogonal, for simplicity) coordinate frames have the same orientation if we can transform one into the other by applying a rigid transformation (rotation + translation). There are only two possible orientations, which we call right-handed and left-handed because they correspond to the two constructions already described by mv348. If we have to transform something from a right-handed (resp. left-handed) coordinate frame to a left-handed (resp. right-handed) one, we have to use a reflection and thus a matrix with negative determinant. However, if we want to transform something between two coordinate frames with the same orientation (this is basically always the case in graphics APIs), we use a matrix with positive determinant. As long as you are consistent with your conventions, you can simply ignore these issues.
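As a small illustration (the det3 helper and the example matrices below are just for demonstration): a rotation preserves orientation and has determinant +1, while flipping a single axis, which switches between RH and LH, is a reflection with determinant -1.

#include <cstdio>

// Determinant of a 3x3 matrix given by its rows.
float det3(const float m[3][3]) {
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

int main() {
    // A 90 degree rotation around the Z axis: same orientation.
    const float rotation[3][3] = {
        { 0.0f, -1.0f, 0.0f },
        { 1.0f,  0.0f, 0.0f },
        { 0.0f,  0.0f, 1.0f }
    };
    // Flipping the Z axis (e.g. switching RH <-> LH): a reflection.
    const float reflection[3][3] = {
        { 1.0f, 0.0f,  0.0f },
        { 0.0f, 1.0f,  0.0f },
        { 0.0f, 0.0f, -1.0f }
    };
    std::printf("det(rotation)   = %g\n", det3(rotation));    // prints 1
    std::printf("det(reflection) = %g\n", det3(reflection));  // prints -1
}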


In Topic: Cubic interpolation over a triangular surface

13 June 2015 - 08:42 AM

The basic idea is to start from the equation of the Bézier triangle above and then write the relations the Bézier control points should satisfy so that your points are part of the surface. The vertices of the triangle already satisfy your condition. Let's now consider the case in which one barycentric coordinate is zero (we are on an edge of the triangle). You have six unknowns (the coordinates of the two intermediate Bézier control points) and you thus need six equations. You can, for example, force your points to be the points at coordinates (1/3, 2/3) and (2/3, 1/3) on this edge, and you get a system of equations you can quite easily solve (see the sketch below). Doing this for all the edges gives you a solution for all control points but one. At this point you can solve for the middle one and you are done. I can try to write the equations if you have problems following my explanation.
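Here is a sketch of the edge case in C++, assuming the restriction of the surface to an edge is an ordinary cubic Bézier curve B(t) = (1-t)^3 P0 + 3(1-t)^2 t C1 + 3(1-t) t^2 C2 + t^3 P1 and that Q1, Q2 are the points you want the curve to pass through at t = 1/3 and t = 2/3 (the type and function names are illustrative):

struct Vec3 { float x, y, z; };

// Linear combination a*P0 + b*Q1 + c*Q2 + d*P1, component-wise.
Vec3 combine(float a, Vec3 P0, float b, Vec3 Q1, float c, Vec3 Q2, float d, Vec3 P1) {
    return { a * P0.x + b * Q1.x + c * Q2.x + d * P1.x,
             a * P0.y + b * Q1.y + c * Q2.y + d * P1.y,
             a * P0.z + b * Q1.z + c * Q2.z + d * P1.z };
}

// Interior control points C1, C2 of the cubic Bézier on one edge, chosen so the
// curve interpolates P0, Q1, Q2, P1 at t = 0, 1/3, 2/3, 1 respectively:
//   C1 = (-5*P0 + 18*Q1 - 9*Q2 + 2*P1) / 6
//   C2 = ( 2*P0 -  9*Q1 + 18*Q2 - 5*P1) / 6
void edgeControlPoints(Vec3 P0, Vec3 Q1, Vec3 Q2, Vec3 P1, Vec3& C1, Vec3& C2) {
    C1 = combine(-5.0f / 6.0f, P0,  3.0f, Q1, -1.5f, Q2,  2.0f / 6.0f, P1);
    C2 = combine( 2.0f / 6.0f, P0, -1.5f, Q1,  3.0f, Q2, -5.0f / 6.0f, P1);
}

The coefficients come from solving, component-wise, the 2x2 linear system given by B(1/3) = Q1 and B(2/3) = Q2.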
