

apatriarca

Member Since 03 Jul 2006
Offline Last Active Today, 09:28 AM

#5095937 Matrix storage layout and multiplication order woes

Posted by apatriarca on 22 September 2013 - 08:10 AM

It is in my opinion easier to think about these transformation orders if you also consider where the vector is multiplied and how it should be transformed. In your case you first want to apply the local transform and then move the result using the parent transform. The correct transformation in your case is thus (v*L)*P and not (v*P)*L, where v is the vector to transform, L is the local transformation and P the parent one.
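
A minimal sketch of why the order matters, with the two transforms written as plain functions instead of matrices (the types and the specific transforms are made up for illustration): L is a local 90-degree rotation and P is the parent translation by (10, 0); applying L first and then P corresponds to (v*L)*P with row vectors.

#include <cstdio>

struct Vec2 { double x, y; };
Vec2 applyLocal(Vec2 v)  { return { -v.y, v.x }; }        // rotate 90 degrees counter-clockwise
Vec2 applyParent(Vec2 v) { return { v.x + 10.0, v.y }; }  // translate by the parent offset

int main() {
    Vec2 v{1.0, 0.0};
    Vec2 correct = applyParent(applyLocal(v));   // (v*L)*P -> (10, 1)
    Vec2 wrong   = applyLocal(applyParent(v));   // (v*P)*L -> (0, 11)
    std::printf("correct: (%g, %g), wrong: (%g, %g)\n",
                correct.x, correct.y, wrong.x, wrong.y);
}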


#5094886 Is Win32 API sufficient to make a user interface like the latest Visual Studio

Posted by apatriarca on 18 September 2013 - 03:43 AM

As for Qt, well, it is a nice and cross-platform technology, though if you want to see it in action, just look at the EA Origin desktop client - it is horrible and very laggy.

 

You shouldn't judge a framework like Qt by looking at one bad example. Qt has been successfully used in a lot of different software; Autodesk Maya's interface uses Qt, for example. I have worked with both Win32 and Qt, and Qt is much better in my opinion.




#5081908 Techniques to avoid skinny triangles with constrained delaunay triangulation

Posted by apatriarca on 31 July 2013 - 06:13 AM

How big is your world? How big are the players and NPCs? Do you really need this kind of optimization?

Are you implementing the constrained Delaunay triangulation yourself? If you do not want skinny triangles you have to introduce additional points and construct a conforming Delaunay triangulation. This means you also have to increase the triangle count relative to your constrained triangulation.


#5078935 Orbit (ellipse) collision detection between many many objects

Posted by apatriarca on 19 July 2013 - 09:17 AM

Two spheres moving on two elliptical orbits may collide even if the two orbits do not intersect. You thus essentially have to compute the intersection of two tori (the name mathematicians use for donut-like surfaces). Moreover, the intersection "times" will be periodic and they will depend on the parametrization of both ellipses. It looks like you will have to maintain and work with a quite complicated data structure.

I think it is better to simply compute the intersections between the spheres (maybe using some kind of spatial acceleration structure or the GPU).
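
A minimal sketch of this suggestion (the Orbit parametrization with uniform angular motion, the fixed time step and all the names are simplifying assumptions for illustration): sample both orbits over time and run a plain sphere-sphere test at each step instead of intersecting the swept tori.

#include <cmath>

struct Vec3  { double x, y, z; };
struct Orbit { double a, b, period; };   // semi-axes and orbital period

Vec3 positionAt(const Orbit& o, double t) {
    double angle = 2.0 * 3.14159265358979323846 * t / o.period;  // uniform angular motion (simplification)
    return { o.a * std::cos(angle), o.b * std::sin(angle), 0.0 };
}

bool collideDuring(const Orbit& o1, double r1, const Orbit& o2, double r2,
                   double tEnd, double dt) {
    for (double t = 0.0; t <= tEnd; t += dt) {
        Vec3 p1 = positionAt(o1, t), p2 = positionAt(o2, t);
        double dx = p1.x - p2.x, dy = p1.y - p2.y, dz = p1.z - p2.z;
        if (dx*dx + dy*dy + dz*dz <= (r1 + r2) * (r1 + r2))  // spheres overlap at this time
            return true;
    }
    return false;
}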


#5076345 Javascript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different...

Posted by apatriarca on 09 July 2013 - 08:04 AM

Computing that number in JavaScript on the CPU solves the problem only partially. At some point you may have problems using doubles too. Big numbers and trigonometric functions do not work really well together... Why are you using such a formula to generate the points?
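
A small C++ illustration of the precision problem (the specific angle is arbitrary): once the large argument is stored in a 32-bit float, roughly the precision available in a shader, the computed sine differs completely from the double result.

#include <cmath>
#include <cstdio>

int main() {
    double bigAngle = 123456789.123;                 // some large trigonometric argument
    float  asFloat  = static_cast<float>(bigAngle);  // loses several units of the angle
    std::printf("sin(double): %.9f\n", std::sin(bigAngle));
    std::printf("sin(float) : %.9f\n", std::sin(static_cast<double>(asFloat)));
}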




#5071070 "Soft" shader, or, how do I get this skin-lighting effect?

Posted by apatriarca on 19 June 2013 - 03:49 AM

All the examples except the cube men are in my opinion just faked rim lights. The basic idea is to add light in regions where the eye direction and the surface normal are roughly perpendicular. The Unity example posted by TiagoCosta implements this technique. It does not simulate some kind of reflection but the effect of the back (or rim) light in the three-point lighting setup used in traditional art; I suggest learning about it. This kind of lighting is for example used to highlight the actors, separating them from the background.
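
A minimal sketch of the rim term, written here in C++ for illustration (the same expression would normally live in a fragment shader; the vector type and parameter names are made up): N is the surface normal and V the direction from the surface point to the eye, both assumed normalized.

#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

float rimTerm(Vec3 N, Vec3 V, float rimPower) {
    float facing = std::max(N.x*V.x + N.y*V.y + N.z*V.z, 0.0f);  // dot(N, V): 1 facing the eye, 0 at the silhouette
    return std::pow(1.0f - facing, rimPower);                    // strongest where N and V are perpendicular
}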




#5070828 Rope Simulation with Point Based Dynamics

Posted by apatriarca on 18 June 2013 - 06:28 AM

I think it may be a typo and the correct contact constraint function may simply be

 

C(p) = ||p - (pn0 + pv)||

 

This constraint function seems to make sense since pn0 is defined as the current position and pv as the required displacement to satisfy the constraint. pv is defined as the vector parallel to pn0 - pn1 such that (pn0 - pn1) + pv has length r. This definition makes sense in my opinion.
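
A minimal sketch of this interpretation (the ad-hoc vector type and helper names are made up for illustration): compute pv from pn0, pn1 and the contact distance r, then evaluate C(p).

#include <cmath>

struct Vec3 { double x, y, z; };
static Vec3   sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double length(Vec3 v)      { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// pv is parallel to d = pn0 - pn1 and chosen so that d + pv has length r, i.e. pv = (r/|d| - 1) * d.
Vec3 contactTarget(Vec3 pn0, Vec3 pn1, double r) {
    Vec3 d   = sub(pn0, pn1);
    double s = r / length(d) - 1.0;
    return { pn0.x + s * d.x, pn0.y + s * d.y, pn0.z + s * d.z };  // pn0 + pv
}

double constraint(Vec3 p, Vec3 pn0, Vec3 pn1, double r) {
    return length(sub(p, contactTarget(pn0, pn1, r)));             // C(p) = ||p - (pn0 + pv)||
}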




#5070766 distance field landscape demo (with attachment)

Posted by apatriarca on 18 June 2013 - 01:23 AM

It is 1-4 fps for me. I tested it using a Quadro K1000M (it is a laptop GPU).




#5070402 Homogenous space

Posted by apatriarca on 17 June 2013 - 06:46 AM

Yeah, AFAIK, the word "space" doesn't belong there. We're contrasting 4D homogeneous coordinates to 3D/2D Cartesian coordinates.

 

Personally, I call it post-projection space, and after the divide by w, I call it NDC-space.

 

We (programmers) generally use "blah-space" to refer to some particular "basis" (view-space, world-space, post-projection space, etc...), but "w" comes into play not just because we're using some particular basis/space, but because we've switched from one coordinate system to another.

 

i.e. When working in object-space, view-space or world-space, we generally use 3D Cartesian coordinates to identify points. When we're dealing with post-projection-space we switch over to using 4D homogeneous coordinates to represent our points. We then transform them back to 2D Cartesian coordinates in screen/viewport-space to identify pixels.

 

The reason we switch over to using 4D homogeneous coordinates in order to implement perspective is that all our linear algebra (matrix/vector math) is... well... linear, whereas perspective is a non-linear effect! i.e. you want to scale something by "1/distance", which, when you plot it, isn't a straight line (not linear).

By working with 4D coordinates, where we say "we will divide by w later on", we can continue to use linear algebra (matrices) to operate in a "non-linear space" (layman's terms, not formal math terms).

 

 

Yes, I know. But "homogeneous space" is not a particularly useful search query in any case. I've actually always used the terms clip(ping) coordinates (and thus clip(ping) space) and normalized device coordinates (NDC) (and thus NDC-space) to denote the coordinates post-projection and post W-division in the graphics pipeline. Homogeneous coordinates has always been a more general term for me.

 

I understand that the way we use homogeneous coordinates in the graphics pipeline is often not mathematically rigorous. We don't really care about what a projective space really is and what operations we can do on these coordinates. Some more advanced theory is however in my opinion sometimes useful to understand, for example, what is preserved by a projective transformation.




#5070399 Homogenous space

Posted by apatriarca on 17 June 2013 - 06:37 AM

I understand that "homogeneous space" is how your book call it, but if you search that on google you get something completely different. "projective space" is a much better search query. This is the main reason I have introduced that name in my previous post.

 

I think you should simply look for something like "homogeneous coordinates computer graphics" on Google. There are hundreds of beginner articles about these things already and it is not possible to explain everything in a simple forum post. If you don't care about the mathematical concepts, then you should probably just understand how perspective projections are derived and basically consider the W-division step as a technical trick. Indeed, if you compute the perspective projection equations, you will see that it is necessary to divide by z at some point. You can't divide by a coordinate using matrices. The trick is thus to write the denominator in a fourth coordinate and then divide by it after the matrix multiplication.
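
A minimal sketch of that trick (not a real projection matrix, conventions heavily simplified, names made up): the transform writes the denominator, -z of the eye-space point, into the extra coordinate, and the divide happens afterwards, outside the matrix math.

#include <cstdio>

struct Clip { double x, y, w; };

Clip toClip(double x, double y, double z, double n) {
    // only the rows relevant to the trick: x and y get a perspective scale,
    // w receives -z so that the later divide is a divide by depth
    return { n * x, n * y, -z };
}

int main() {
    Clip c = toClip(1.0, 2.0, -5.0, 1.0);
    double sx = c.x / c.w, sy = c.y / c.w;         // the perspective divide, done after the matrix multiply
    std::printf("projected: (%g, %g)\n", sx, sy);  // (0.2, 0.4): farther points shrink towards the center
}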

 

The additional coordinate is actually also used to make it possible to represent translations with matrices, but this has nothing to do with the notion of homogeneous coordinates.




#5070381 Homogenous space

Posted by apatriarca on 17 June 2013 - 04:22 AM

First of all, the correct (or at least more common) name of this space is (real) projective space and not homogeneous space. A homogeneous space is usually something different in mathematics*. I've never seen the term homogeneous space used in this context.

 

The term projective space comes from the fact that it is the smallest/most natural space in which (central) projections can be defined. You can visualize this space as the set of lines passing through the origin in the (real) vector space of one dimension more. So the projective plane (the projective space of dimension two) is for example the set of 3D lines of the form

 

P(t) = t * (X, Y, Z).

 

Each line can be identified by any of its non-zero points. The homogeneous coordinates of a point of the projective space are the coordinates of one of the non-zero points of the corresponding line and are written between square brackets (and often separated by colons): [X0 : X1 : ... : Xn]. These homogeneous coordinates are not unique: two coordinate tuples which differ by a non-zero scalar multiple represent the same point. The division by W is simply a way to choose a unique representative of a point: you basically represent all lines by their intersection with the plane W = 1. All the lines parallel to that plane cannot be represented in this way. They are called points at infinity and they usually represent directions. Note that dividing by W is just a convention and it is possible to divide by any other coordinate or use any other general plane.
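
As a small worked example of this convention:

[2 : 4 : 6 : 2] = [1 : 2 : 3 : 1]  ->  the 3D point (1, 2, 3) after dividing by W = 2
[1 : 2 : 3 : 0]  ->  no such representative exists (W = 0 cannot be divided by): a point at infinity, i.e. the direction (1, 2, 3)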

 

So, why use a space like that for projections? The main reason is that a projection transforms a point with coordinates [x : y : z : 1] to some point [X : Y : Z : W] where W is no longer equal to one. It is thus necessary to divide by W to get back a 3D point. This is the main reason homogeneous coordinates were introduced and are used in computer graphics.

 

The projection matrices are usually chosen so that the W coordinate basically represents the depth of the transformed point and the view frustum is transformed to the cube with all three coordinates inside [-1, 1] after the W divide (there are actually different conventions between DirectX and OpenGL here). But this is just a useful convention.

 

* It is usually used to denote some kind of space with a transitive G-action.




#5069332 Finding a unit vector orthogonal to another vector, on a plane

Posted by apatriarca on 13 June 2013 - 01:44 AM

You can simply take the cross product of the vector A with the plane normal and then normalize/negate as required.
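
A minimal sketch (the tiny ad-hoc vector type is made up for illustration): the cross product of A with the plane normal N is orthogonal to both, so it lies in the plane and is orthogonal to A.

#include <cmath>

struct Vec3 { double x, y, z; };

Vec3 orthogonalInPlane(Vec3 A, Vec3 N) {
    Vec3 B = { A.y * N.z - A.z * N.y,     // cross(A, N)
               A.z * N.x - A.x * N.z,
               A.x * N.y - A.y * N.x };
    double len = std::sqrt(B.x*B.x + B.y*B.y + B.z*B.z);
    return { B.x / len, B.y / len, B.z / len };   // negate the result if the other orientation is wanted
}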




#5066897 Space Partitioning on Sphere

Posted by apatriarca on 02 June 2013 - 10:38 AM

Regions computed by uniformly subdividing spherical coordinates are not very balanced: the regions near the poles will be a lot smaller than regions at the equator. It is better to uniformly subdivide cylindrical coordinates (i.e. consider the longitude and z), since equal intervals in z correspond to equal areas on the sphere.
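
A minimal sketch of the binning (names and cell layout are made up for illustration): a point on the unit sphere is assigned to a (longitude, z) cell, and the equal-area property of the z intervals keeps the cells balanced.

#include <algorithm>
#include <cmath>

struct Cell { int lonBin, zBin; };

Cell cellOf(double x, double y, double z, int lonBins, int zBins) {
    const double kPi = 3.14159265358979323846;
    double lon = std::atan2(y, x) + kPi;                                 // longitude in [0, 2*pi]
    int lb = std::min(int(lon / (2.0 * kPi) * lonBins), lonBins - 1);
    int zb = std::min(int((z + 1.0) * 0.5 * zBins), zBins - 1);          // z in [-1, 1]
    return { lb, zb };
}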




#5065469 Projection Matrix generation

Posted by apatriarca on 28 May 2013 - 04:02 AM

The depth buffer resolution at a certain depth is almost exclusively determined by the near clip plane; the far clip plane has almost no impact. Trying to improve the depth resolution by adjusting the far clip plane is usually not going to help.

 

A recent paper has indeed argued that setting the far plane to infinity can actually improve the precision (particularly if the modelview and projection matrices are not multiplied together before transforming the vertices). But it is the near plane that makes the bigger difference. One should always try to push the near plane as far out as possible.
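
A small illustration, assuming the standard OpenGL-style depth mapping (function names are made up): the NDC depth of a point at eye-space distance d for a given near/far pair shows how strongly the near plane determines where the depth range is spent.

#include <cstdio>

double ndcDepth(double d, double n, double f) {
    return (f + n) / (f - n) - 2.0 * f * n / ((f - n) * d);
}

int main() {
    // With n = 0.01 a point only 10 units away already maps to about 0.998,
    // i.e. almost the entire [-1, 1] range is spent on the first few units.
    std::printf("n = 0.01: depth at d = 10 is %f\n", ndcDepth(10.0, 0.01, 1000.0));
    std::printf("n = 1.00: depth at d = 10 is %f\n", ndcDepth(10.0, 1.0,  1000.0));
}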




#5065444 A Daft Statement

Posted by apatriarca on 28 May 2013 - 01:27 AM

I'm a mathematician and I have actually produced something worth publishing. I'm currently working as a graphics programmer, so in some sense I am also a computer scientist. I have no problem understanding the paper you posted, but I don't see how it is related to your thesis. Since you can't really provide any argument for it and I'm annoyed, I don't think I will reply anymore. Bye.





