function f(user, logger)
    User.setName(user, "name")
    User.setGroup(user, "group")
    Logger.output(logger, "User is set.")
end

This isn't really what you were asking, but I think the only way to associate methods with data exported to Lua is to use full userdata. You can't do it using light userdata.
apatriarca
Member Since 03 Jul 2006 · Offline · Last Active Yesterday, 02:13 PM
Community Stats
  Group: Crossbones+
  Active Posts: 847
  Profile Views: 6,194
  Submitted Links: 0
  Member Title: Member
  Age: 29 years old
  Birthday: February 11, 1985
  Gender: Male
  Location: Torino, Italy
#5101530 lightuser data in Lua
Posted by apatriarca on 15 October 2013  07:29 AM
#5098736 How do I use multithreading?
Posted by apatriarca on 04 October 2013  07:24 AM
#5096079 Matrix storage layout and multiplication order woes
Posted by apatriarca on 23 September 2013  01:32 AM
I think you are mixing different concepts which should be kept separate. Column-major and row-major have nothing to do with the order of multiplication; they are simply different ways to store a matrix in memory. You can have column-major matrices and use row vectors, or row-major matrices and use column vectors. The order of multiplication is given by the choice of row or column vectors. When you use column vectors, you have to multiply by the matrix on the left, and the transformations are then read from right to left. On the other hand, if you use row vectors, you have to multiply by the matrix on the right, and the transformations are read from left to right. So, in your case you have to compose the transformations as L*P, while in OpenGL (and basically everywhere in mathematics) we usually do P*L. Have I misunderstood your post and you are already doing it?
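A small sketch of the idea (in Python, with made-up 2x2 transforms P and L standing in for the matrices discussed above): the same geometric composition reads P*L with column vectors and L*P with row vectors, regardless of how the matrices are stored.

```python
def mat_vec(M, v):
    # Multiply a 2x2 matrix (list of rows) by a column vector.
    return [M[0][0]*v[0] + M[0][1]*v[1],
            M[1][0]*v[0] + M[1][1]*v[1]]

def vec_mat(v, M):
    # Multiply a row vector by a 2x2 matrix.
    return [v[0]*M[0][0] + v[1]*M[1][0],
            v[0]*M[0][1] + v[1]*M[1][1]]

def mat_mat(A, B):
    return [[sum(A[i][k]*B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[2, 0], [0, 3]]   # hypothetical transform P (a scale)
L = [[0, 1], [1, 0]]   # hypothetical transform L (a swap)
v = [1, 2]

# Column vectors: apply L first, then P  ->  (P*L)*v
col = mat_vec(mat_mat(P, L), v)
# Row vectors: the same composition reads left to right  ->  v*(L*P)
row = vec_mat(v, mat_mat(L, P))
print(col, row)  # [4, 3] [4, 3] -- same result, opposite written order
```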
#5095940 Should you work with limitations or overcome them?
Posted by apatriarca on 22 September 2013  08:27 AM
#5095937 Matrix storage layout and multiplication order woes
Posted by apatriarca on 22 September 2013  08:10 AM
#5094886 Is Win32 API sufficient to make a user interface like the latest Visual Studio
Posted by apatriarca on 18 September 2013  03:43 AM
As for Qt, well, it is a nice, cross-platform technology, though if you want to see it in action, just look at the EA Origin desktop client: it is horrible and very laggy.
You shouldn't judge a framework like Qt by looking at one bad example. Qt has been successfully used in a lot of different software; Autodesk Maya's interface uses Qt, for example. I have worked with both Win32 and Qt, and Qt is much better in my opinion.
#5081908 Techniques to avoid skinny triangles with constrained delaunay triangulation
Posted by apatriarca on 31 July 2013  06:13 AM
Are you implementing the constrained Delaunay triangulation yourself? If you do not want skinny triangles, you have to introduce additional points and construct a conforming Delaunay triangulation. This means you also have to accept an increased triangle count relative to your constrained triangulation.
#5078935 Orbit (ellipse) collision detection between many many objects
Posted by apatriarca on 19 July 2013  09:17 AM
I think it is better to simply compute the intersections between the spheres (maybe using some kind of spatial acceleration structure, or the GPU).
#5076345 Javascript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different...
Posted by apatriarca on 09 July 2013  08:04 AM
Computing that number in JavaScript on the CPU solves the problem only partially. At some point you may have problems using doubles too. Big numbers and trigonometric functions do not work really well together... Why are you using such a formula to generate the points?
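The precision issue is easy to see by simulating float32 (roughly what a GPU's sin/cos sees) from Python doubles; `to_f32` is a helper I'm introducing for the illustration:

```python
import math
import struct

def to_f32(x):
    # Round a Python double to the nearest 32-bit float (IEEE 754 single).
    return struct.unpack('f', struct.pack('f', x))[0]

x = 123456789.0          # a large angle, in radians
xf = to_f32(x)           # merely storing x in float32 already changes it
print(xf - x)            # 3.0 -- an input error of about 3 radians
# With the argument off by ~3 radians, sin() of it is meaningless no
# matter how accurately the function itself is evaluated:
print(math.sin(x), math.sin(xf))
```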
#5071070 "Soft" shader, or, how do I get this skinlighting effect?
Posted by apatriarca on 19 June 2013  03:49 AM
All the examples except the cube men are, in my opinion, just faked rim lights. The basic idea is to add light in regions where the eye direction and the surface normal are roughly perpendicular. The Unity example posted by TiagoCosta implements this technique. It does not simulate some kind of reflection, but rather the effect of the back (or rim) light in the three-point lighting setup used in traditional art; I suggest learning about it. This kind of lighting is, for example, used to highlight actors, separating them from the background.
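A minimal sketch of that fake rim term (Python rather than shader code; vectors are assumed normalized, and the exponent is an arbitrary tuning parameter):

```python
def rim_light(normal, view_dir, color, power=3.0):
    # Rim term is strongest where the surface normal is nearly
    # perpendicular to the view direction (dot product near zero).
    n_dot_v = max(0.0, sum(n * v for n, v in zip(normal, view_dir)))
    rim = (1.0 - n_dot_v) ** power
    return [rim * c for c in color]

# Surface facing the viewer: no rim contribution.
print(rim_light((0, 0, 1), (0, 0, 1), (1.0, 0.8, 0.6)))  # [0.0, 0.0, 0.0]
# Silhouette edge: full rim contribution.
print(rim_light((1, 0, 0), (0, 0, 1), (1.0, 0.8, 0.6)))  # [1.0, 0.8, 0.6]
```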
#5070828 Rope Simulation with Point Based Dynamics
Posted by apatriarca on 18 June 2013  06:28 AM
I think it may be a typo, and the correct contact constraint function may simply be
C(p) = |p - (p_n0 + p_v)|
This constraint function seems to make sense, since p_n0 is defined as the current position and p_v as the required displacement to satisfy the constraint: p_v is the vector parallel to p_n0 - p_n1 such that (p_n0 - p_n1) + p_v has length r. This definition makes sense in my opinion.
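Under that reading, p_v can be computed directly; a Python sketch (the names p_n0, p_n1 and r follow the post, the helper name is mine):

```python
import math

def correction_vector(p_n0, p_n1, r):
    # d = p_n0 - p_n1; p_v is parallel to d and scaled so that
    # d + p_v has length exactly r (assumes |d| > 0).
    d = [a - b for a, b in zip(p_n0, p_n1)]
    length = math.sqrt(sum(c * c for c in d))
    scale = r / length - 1.0
    return [scale * c for c in d]

# d has length 1 and r = 2, so p_v extends d by one unit:
print(correction_vector((1.0, 0.0, 0.0), (0.0, 0.0, 0.0), 2.0))  # [1.0, 0.0, 0.0]
```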
#5070766 distance field landscape demo (with attachment)
Posted by apatriarca on 18 June 2013  01:23 AM
It is 14 fps for me. I tested it using a Quadro K1000M (a laptop GPU).
#5070402 Homogenous space
Posted by apatriarca on 17 June 2013  06:46 AM
Yeah, AFAIK, the word "space" doesn't belong there. We're contrasting 4D homogeneous coordinates with 3D/2D Cartesian coordinates.
Personally, I call it post-projection space, and after the divide by w, I call it NDC space.
We (programmers) generally use "blah-space" to refer to some particular "basis" (view space, world space, post-projection space, etc...), but "w" comes into play not just because we're using some particular basis/space, but because we've switched from one coordinate system to another.
i.e. When working in object space, view space or world space, we generally use 3D Cartesian coordinates to identify points. When we're dealing with post-projection space we switch over to using 4D homogeneous coordinates to represent our points. We then transform them back to 2D Cartesian coordinates in screen/viewport space to identify pixels.
The reason we switch over to 4D homogeneous coordinates in order to implement perspective is that all our linear algebra (matrix/vector math) is... well... linear... whereas perspective is a non-linear effect! i.e. you want to scale something by "1/distance", which, when you plot it, isn't a straight line (not linear).
By working with 4D coordinates, where we say "we will divide by w later on", we can continue to use linear algebra (matrices) to operate in a "non-linear space" (layman's terms, not formal math terms).
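The divide-by-w trick can be sketched numerically (Python; `project` here is a toy stand-in for a real perspective matrix, reduced to the one thing that matters: it copies z into w):

```python
def project(p3):
    # Toy "projection" applied by hand: w takes the value of z, so the
    # later divide scales x and y by 1/z -- the non-linear part.
    x, y, z = p3
    return (x, y, z, z)

def perspective_divide(p4):
    # Convert 4D homogeneous coordinates back to 3D by dividing by w.
    x, y, z, w = p4
    return (x / w, y / w, z / w)

near = project((1.0, 1.0, 2.0))   # a point at depth 2
far = project((1.0, 1.0, 4.0))    # same x, y at depth 4
print(perspective_divide(near))   # (0.5, 0.5, 1.0)
print(perspective_divide(far))    # (0.25, 0.25, 1.0) -- farther, so smaller
```

Everything up to the final divide stays linear, so arbitrarily many matrices can be concatenated before it.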
Yes, I know. But homogeneous space is not a particularly useful search query in any case. I've actually always used the terms clip(ping) coordinates (and thus clip(ping) space) and normalized device coordinates (NDC) (and thus NDC space) to denote the coordinates post-projection and post-W-division in the graphics pipeline. Homogeneous coordinates has always been a more general term for me.
I understand that the way we use homogeneous coordinates in the graphics pipeline is often not mathematically rigorous. We don't really care about what a projective space really is and what operations we can do on these coordinates. Some more advanced theory is, however, in my opinion sometimes useful to understand, for example, what is preserved by a projective transformation.
#5070399 Homogenous space
Posted by apatriarca on 17 June 2013  06:37 AM
I understand that "homogeneous space" is how your book calls it, but if you search for that on Google you get something completely different. "Projective space" is a much better search query; this is the main reason I introduced that name in my previous post.
I think you should simply look for something like "homogeneous coordinates computer graphics" on Google. There are hundreds of beginner articles about these things already, and it is not possible to explain everything in a simple forum post. If you don't care about the mathematical concepts, then you should probably just understand how perspective projections are derived and consider the W-division step as a technical trick. Indeed, if you derive the perspective projection equations, you will see that it is necessary to divide by z at some point. You can't divide by a coordinate using matrices. The trick is thus to write the denominator in a fourth coordinate and then divide by it after the matrix multiplication.
The additional coordinate is actually also used to make it possible to represent translations with matrices, but this has nothing to do with the notion of homogeneous coordinates.
#5070381 Homogenous space
Posted by apatriarca on 17 June 2013  04:22 AM
First of all, the correct (or at least more common) name of this space is (real) projective space, not homogeneous space. A homogeneous space is usually something different in mathematics*. I've never seen the term homogeneous space used in this context.
The term projective space comes from the fact that it is the smallest/most natural space in which (central) projections can be defined. You can visualize this space as the set of lines passing through the origin in the (real) vector space of one dimension more. The projective plane (the projective space of dimension two) is, for example, the set of 3D lines of the form
P(t) = t * (X, Y, Z).
Each line can be identified by any of its nonzero points. The homogeneous coordinates of a point of the projective space are the coordinates of one of the nonzero points of the corresponding line, and are written between square brackets (and often separated by colons): [X0 : X1 : ... : Xn]. These homogeneous coordinates are not unique: two sets of coordinates which differ by a nonzero scalar multiple represent the same point. The division by W is simply a way to choose a unique representative of a point: you basically represent each line by its intersection with the plane W = 1. The lines parallel to that plane cannot be represented in this way; they are called points at infinity and they usually represent directions. Note that dividing by W is just a convention, and it is possible to divide by any other coordinate or use any other general plane.
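The scalar-multiple equivalence and the W = 1 normalization can be sketched in a few lines (Python; the helper name is my own):

```python
def normalize_w(p):
    # Pick the unique representative with w = 1 (valid only for w != 0).
    x, y, z, w = p
    return (x / w, y / w, z / w, 1.0)

a = (2.0, 4.0, 6.0, 2.0)
b = (1.0, 2.0, 3.0, 1.0)  # a scalar multiple of a: the same projective point
print(normalize_w(a) == normalize_w(b))  # True

d = (1.0, 0.0, 0.0, 0.0)  # w = 0: a point at infinity, i.e. a direction;
                          # normalize_w(d) would divide by zero
```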
So, why use a space like that for projections? The main reason is that a projection transforms a point with coordinates [x : y : z : 1] to some point [X : Y : Z : W] where W is no longer equal to one. It is thus necessary to divide by W to get back a 3D point. This is the main reason homogeneous coordinates have been defined and used in computer graphics.
The projection matrices are usually chosen so that the W coordinate basically represents the depth of the transformed point, and the view frustum is transformed, after the W divide, to the cube with all three coordinates inside [-1, 1] (there are actually different conventions between DirectX and OpenGL here). But this is just a useful convention.
* It is usually used to denote some kind of space with a transitive G-action.