

#5101530 lightuser data in Lua

Posted by apatriarca on 15 October 2013 - 07:29 AM

I'm not an expert on Lua myself, but you can create tables called User and Logger (or something else) containing the methods you need and then write something like the following in your script:

function f(user, logger)
   User.setName(user, "name")
   User.setGroup(user, "group")

   Logger.output(logger, "User is set.")
end

This isn't really what you were asking, but I think the only way to associate methods with data exported to Lua is to use full userdata. You can't do it using light userdata.

#5098736 How do I use multithreading?

Posted by apatriarca on 04 October 2013 - 07:24 AM

I suggest using OpenMP or Intel TBB to add parallelism to your program. Working with threads and low-level synchronization primitives is hard if you do not understand what you are really doing, and it is not very productive. It is very easy to implement data-parallel programs using those libraries, and it is not so easy to surpass their performance (particularly if you do not have much experience).

#5096079 Matrix storage layout and multiplication order woes

Posted by apatriarca on 23 September 2013 - 01:32 AM

I think you are mixing different concepts which should be kept separate. Column-major and row-major have nothing to do with the order of multiplication; they are simply different ways to store a matrix in memory. You can have column-major matrices and use row vectors, or row-major matrices and use column vectors. The order of multiplication is determined by the choice of row or column vectors. When you use column vectors you have to multiply the matrix on the left, and the transformations then go from right to left. On the other hand, if you use row vectors, you have to multiply the matrix on the right, and the transformations go from left to right. So, in your case you have to compose the transformations as L*P, while in OpenGL (and basically everywhere in mathematics) we usually do P*L. Have I misunderstood your post and you are already doing this?

#5095940 Should you work with limitations or overcome them?

Posted by apatriarca on 22 September 2013 - 08:27 AM

It does not make sense in my opinion to have some parts of the map with no tile and no logic. What about simply assuming that a blank tile should be treated as air? I actually also think spaces would be better than Xs to represent air since it would be easier to distinguish the other tiles.

#5095937 Matrix storage layout and multiplication order woes

Posted by apatriarca on 22 September 2013 - 08:10 AM

It is in my opinion easier to think about these transformation orders if you also consider where the vector is multiplied and how it should be transformed. In your case you first want to apply the local transform and then move the result using the parent transform. The correct transformation in your case is thus (v*L)*P and not (v*P)*L, where v is the vector to transform, L is the local transformation and P the parent one.

#5094886 Is Win32 API sufficient to make a user interface like the latest Visual Studio

Posted by apatriarca on 18 September 2013 - 03:43 AM

As for Qt, well, it is a nice and cross-platform technology, though if you want to see it in action, just look at the EA Origin desktop client - it is horrible and very laggy.


You shouldn't judge a framework like Qt by looking at one bad example. Qt has been successfully used in a lot of different software; Autodesk Maya's interface uses Qt, for example. I have worked with both Win32 and Qt, and Qt is much better in my opinion.

#5081908 Techniques to avoid skinny triangles with constrained delaunay triangulation

Posted by apatriarca on 31 July 2013 - 06:13 AM

How big is your world? How big are the players and NPCs? Do you really need this kind of optimization?

Are you implementing the constrained Delaunay triangulation yourself? If you do not want skinny triangles you have to introduce additional points and construct a conforming Delaunay triangulation. This means you also have to increase the triangle count relative to your constrained triangulation.

#5078935 Orbit (ellipse) collision detection between many many objects

Posted by apatriarca on 19 July 2013 - 09:17 AM

Two spheres moving on two elliptical orbits may collide even if the two orbits do not intersect. You thus have to somehow compute the intersection of two tori (this is what mathematicians call donut-like surfaces). Moreover, intersection "times" will be periodic and they will depend on the parametrization of both ellipses. It looks like you will have to maintain and work with a quite complicated data structure.

I think it is better to simply compute the intersections between the spheres (maybe using some kind of spatial acceleration structure or the GPU).

#5076345 Javascript's “Math.sin/Math.cos” and WebGL's “sin/cos” give different...

Posted by apatriarca on 09 July 2013 - 08:04 AM

Computing that number in JavaScript on the CPU solves the problem only partially. At some point you may have problems using doubles too. Big numbers and trigonometric functions do not work really well together... Why are you using such a formula to generate the points?

#5071070 "Soft" shader, or, how do I get this skin-lighting effect?

Posted by apatriarca on 19 June 2013 - 03:49 AM

All the examples except the cube men are in my opinion just faked rim lights. The basic idea is to add light in regions where the eye direction and surface normal are roughly perpendicular. The Unity example posted by TiagoCosta implements this technique. This technique does not simulate some kind of reflection but the effect of the back (or rim) light in the three-point lighting system used in traditional art. I suggest learning about it. This kind of lighting is for example used to highlight the actors, separating them from the background.

#5070828 Rope Simulation with Point Based Dynamics

Posted by apatriarca on 18 June 2013 - 06:28 AM

I think it may be a typo and the correct contact constraint function may be simply


C(p) = || p - (pn0 + pv) ||


This constraint function seems to make sense since pn0 is defined as the current position and pv is defined as the required displacement to satisfy the constraint. pv is defined as the vector parallel to pn0 - pn1 and such that (pn0 - pn1) + pv has length r. This definition makes sense in my opinion.

#5070766 distance field landscape demo (with attachment)

Posted by apatriarca on 18 June 2013 - 01:23 AM

It is 1-4 fps for me. I tested it using a Quadro K1000M (a laptop GPU).

#5070402 Homogenous space

Posted by apatriarca on 17 June 2013 - 06:46 AM

Yeah, AFAIK, the word "space" doesn't belong there. We're contrasting 4D homogeneous coordinates to 3D/2D Cartesian coordinates.


Personally, I call it post-projection space, and after the divide by w, I call it NDC-space.


We (programmers) generally use "blah-space" to refer to some particular "basis" (view-space, world-space, post-projection space, etc...), but "w" comes into play not just because we're using some particular basis/space, but because we've switched from one coordinate system to another.


i.e. When working in object-space, view-space or world-space, we generally use 3D cartesian coordinates to identify points. When we're dealing with post-projection-space we switch over to using 4D homogenous coordinates to represent our points. We then transform them back to 2D cartesian coordinates in screen/viewport-space to identify pixels.


The reason we switch over to using 4D homogenous coordinates in order to implement perspective is because all our linear algebra (matrix/vector math) is... well... linear... whereas perspective is a non-linear effect! i.e. you want to scale something with "1/distance", which when you plot it, it isn't a straight line (not linear).

By working with 4D coordinates, where we say "we will divide by w later on", then we can continue to use linear algebra (matrices) to operate in a "non-linear space" (layman's terms, not formal math terms).



Yes, I know. But "homogeneous space" is not a particularly useful search query in any case. I've actually always used the terms clip(ping) coordinates (and thus clip(ping) space) and normalized device coordinates (NDC) (and thus NDC-space) to denote the coordinates post-projection and post-W-division in the graphics pipeline. Homogeneous coordinates have always been a more general term for me.


I understand that the way we use homogeneous coordinates in the graphics pipeline is often not mathematically correct. We don't really care about what a projective space really is and what operations we can do on these coordinates. Some more advanced theory is however, in my opinion, sometimes useful to understand, for example, what is preserved by a projective transformation.

#5070399 Homogenous space

Posted by apatriarca on 17 June 2013 - 06:37 AM

I understand that "homogeneous space" is how your book calls it, but if you search for that on Google you get something completely different. "Projective space" is a much better search query. This is the main reason I introduced that name in my previous post.


I think you should simply look for something like "homogeneous coordinates computer graphics" on Google. There are hundreds of beginner articles about these things already and it is not possible to explain everything in a simple forum post. If you don't care about the mathematical concepts, then you should probably just understand how perspective projections are derived and basically consider the W division step as a technical trick. Indeed, if you compute the perspective projection equations, you will see that it is necessary to divide by z at some point. You can't divide by a coordinate using matrices. The trick is thus to write the denominator in a fourth coordinate and then divide by it after the matrix multiplication.


The additional coordinate is actually also used to make it possible to use matrices to represent translations but this has nothing to do with the notion of homogeneous coordinates. 

#5070381 Homogenous space

Posted by apatriarca on 17 June 2013 - 04:22 AM

First of all, the correct (or at least more common) name of this space is (real) projective space, not homogeneous space. A homogeneous space is usually something different in mathematics*. I've never seen the term homogeneous space used in this context.


The term projective space comes from the fact that it is the smallest/most natural space in which (central) projections can be defined. You can visualize this space as the set of lines passing through the origin in the (real) vector space of one dimension more. So the projective plane (the projective space of dimension two) is for example the set of 3D lines of the form


P(t) = t * (X, Y, Z).


Each line can be identified by any of its non-zero points. The homogeneous coordinates of a point of the projective space are the coordinates of one of the non-zero points of the corresponding line and are written between square brackets (and often separated by a colon) [X0 : X1 : ... : Xn]. These homogeneous coordinates are not unique: two coordinate tuples which differ by a non-zero scalar multiple represent the same point. The division by W is simply a way to choose a unique representative of a point. You basically represent all lines by their intersection with the plane W=1. All the lines parallel to that plane cannot be represented in this way. They are called points at infinity and they usually represent directions. Note that dividing by W is just a convention and it is possible to divide by any other coordinate or use any other generic plane.


So, why use a space like that for projections? The main reason is that a projection transforms a point with coordinates [x : y : z : 1] to some point [X : Y : Z : W] where W is no longer equal to one. It is thus necessary to divide by W to get back a 3D point. This is the main reason homogeneous coordinates have been defined and used in computer graphics.


The projection matrices are usually chosen so that the W coordinate basically represents the depth of the transformed point and the view frustum is transformed to the cube with all three coordinates inside [-1,1] after the W divide (there are actually different conventions between DirectX and OpenGL here). But this is just a useful convention.


* It is usually used to denote some kind of space with a transitive G-action.