max343

Member Since 04 Nov 2012
Offline Last Active Jun 27 2013 11:14 PM

#5034950 Point inside convex polyhedron defined by planes

Posted by max343 on 21 February 2013 - 06:02 AM

I haven't tested it, but maybe you can do this:

Your plane equations are Ni*x - Di = 0, with Ni normalized row vectors, Di scalars, and x a column vector.
Now minimize the sum of squared distances with respect to x: S = sum[(Ni*x - Di)^2]
You obtain: x = (sum[Ni^T*Ni])^+ * sum[Di*Ni^T]
I'm almost sure the pseudo-inverse isn't required (the regular inverse will do), because this reminds me a lot of SVD. However, this is just a hunch.

EDIT: Yup, the matrix should be invertible for any closed polyhedron, so there's no need for the pseudo-inverse.
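
Here is a minimal sketch of that in C++ (untested, like the idea itself): accumulate M = sum[Ni^T*Ni] and b = sum[Di*Ni^T], then solve M*x = b with Cramer's rule. The Plane struct and the helper names are made up for the example.

// Plane i: Ni.x = Di, with Ni normalized.
struct Plane { float nx, ny, nz, d; };

static float det3(const float m[9]) {
    return m[0] * (m[4]*m[8] - m[5]*m[7])
         - m[1] * (m[3]*m[8] - m[5]*m[6])
         + m[2] * (m[3]*m[7] - m[4]*m[6]);
}

// Returns false if the accumulated matrix is singular (e.g. an open polyhedron
// whose normals don't span all three directions).
bool leastSquaresPoint(const Plane* planes, int count, float out[3]) {
    float M[9] = {0.0f}, b[3] = {0.0f};
    for (int i = 0; i < count; ++i) {
        const float n[3] = { planes[i].nx, planes[i].ny, planes[i].nz };
        for (int r = 0; r < 3; ++r) {
            for (int c = 0; c < 3; ++c) M[3*r + c] += n[r] * n[c];   // Ni^T * Ni
            b[r] += planes[i].d * n[r];                              // Di * Ni^T
        }
    }
    const float d = det3(M);
    if (d == 0.0f) return false;
    for (int k = 0; k < 3; ++k) {            // Cramer's rule: swap column k for b
        float Mk[9];
        for (int j = 0; j < 9; ++j) Mk[j] = M[j];
        for (int r = 0; r < 3; ++r) Mk[3*r + k] = b[r];
        out[k] = det3(Mk) / d;
    }
    return true;
}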


#5027981 XNA Matrix Optimization and Animated Models

Posted by max343 on 01 February 2013 - 05:35 PM

Exactly. For instance, pure scaling and pure translation matrices each have only three meaningful elements. You can use this to drastically reduce the number of floating-point operations. The only thing to note is to use SIMD operations rather than scalar ones.
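
A minimal sketch of that idea with SSE intrinsics (plain C++, not XNA, so the types and layout are assumptions for the example): a pure scale + translation transform can be applied as p*s + t instead of a full 4x4 matrix product.

#include <xmmintrin.h>

// Transform a point (x, y, z, unused) by a scale and a translation.
// One packed multiply and one packed add, versus 16 multiplies and 12 adds
// for a general 4x4 matrix-vector product.
__m128 transformScaleTranslate(__m128 point, __m128 scale, __m128 translation) {
    return _mm_add_ps(_mm_mul_ps(point, scale), translation);
}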


#5023927 Best way to downscale a heightmap during runtime

Posted by max343 on 21 January 2013 - 09:40 AM

Thanks, I'll look into the Haar stuff, sounds interesting.

I am not doing physics on the GPU, only visuals. The LOD is also only for the visuals. I need to do accurate physics for the whole level, since I am doing multiplayer. At least on the server.

I agree that the physics should be "fixed". I am using Bullet though, so the fixing will most likely be creating a smoother terrain, and more movement friendly physics meshes for the vehicles.

A wavelet transform might work for you, but you should know that it doesn't differ much from a plain mean. Linear filters are notorious for eliminating a lot of important data (mostly transitions), while all you want is to eliminate small bumps with as little effect on the rest of the data as possible.
You should really consider non-linear filters, like the bilateral filter or the more general NLM (non-local means) filter. Efficient implementation of these two is a bit tricky but doable as long as you don't go wild with the filter radius. Also, don't even try to implement them the naive way; they'll be terribly slow.
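
To make the weighting explicit, here is a minimal sketch of a bilateral filter over a heightmap in C++. Note that this is the naive form warned about just above (O(radius^2) work per texel); the heightmap layout and the sigma parameters are assumptions for the example.

#include <cmath>
#include <vector>

// h is a row-major width*height heightmap. sigmaSpace controls the spatial
// falloff; sigmaHeight controls how strongly large height differences
// (the transitions you want to keep) are excluded from the average.
std::vector<float> bilateralFilter(const std::vector<float>& h, int width, int height,
                                   int radius, float sigmaSpace, float sigmaHeight)
{
    std::vector<float> out(h.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float center = h[y * width + x];
            float sum = 0.0f, wsum = 0.0f;
            for (int dy = -radius; dy <= radius; ++dy) {
                for (int dx = -radius; dx <= radius; ++dx) {
                    const int sx = x + dx, sy = y + dy;
                    if (sx < 0 || sy < 0 || sx >= width || sy >= height) continue;
                    const float sample = h[sy * width + sx];
                    const float ws = std::exp(-(dx * dx + dy * dy) / (2.0f * sigmaSpace * sigmaSpace));
                    const float wr = std::exp(-(sample - center) * (sample - center)
                                              / (2.0f * sigmaHeight * sigmaHeight));
                    sum  += ws * wr * sample;   // small bumps get averaged away
                    wsum += ws * wr;            // cliffs get a near-zero range weight
                }
            }
            out[y * width + x] = sum / wsum;    // wsum > 0: the center sample has weight 1
        }
    }
    return out;
}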


#5014729 Most efficient way to batch drawings

Posted by max343 on 27 December 2012 - 10:31 AM

So if I have 100 sprites, should I send 100 model-view matrices with glUniformMatrix4fv and select them with gl_VertexID/4?


I didn't read it closely the first time, but the answer is no. A big no. It's much better to use a uniform buffer for something this big (or for something that you're going to share). In fact, it's better to limit global uniforms to those cases where the overhead of using a buffer is greater.
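
For what it's worth, a minimal sketch of that uniform-buffer path in C++ (assuming a GL 3.1+ context and an already initialized extension loader; the block name "SpriteData" and binding point 0 are made up for the example). The matching GLSL block would be something like layout(std140) uniform SpriteData { mat4 modelView[100]; }; indexed with gl_VertexID / 4.

#include <GL/glew.h>   // or whatever loader/header you already use

// Create one buffer holding all 100 model-view matrices and attach it to the
// program's "SpriteData" block at binding point 0.
GLuint createSpriteUBO(GLuint program) {
    GLuint ubo = 0;
    glGenBuffers(1, &ubo);
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferData(GL_UNIFORM_BUFFER, sizeof(float) * 16 * 100, nullptr, GL_DYNAMIC_DRAW);

    const GLuint blockIndex = glGetUniformBlockIndex(program, "SpriteData");
    glUniformBlockBinding(program, blockIndex, 0);
    glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);
    return ubo;
}

// One upload per frame instead of 100 glUniformMatrix4fv calls.
void updateSpriteUBO(GLuint ubo, const float* matrices, int count) {
    glBindBuffer(GL_UNIFORM_BUFFER, ubo);
    glBufferSubData(GL_UNIFORM_BUFFER, 0, sizeof(float) * 16 * count, matrices);
}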




#5014701 Most efficient way to batch drawings

Posted by max343 on 27 December 2012 - 09:09 AM

I always prefer using uniform buffers. Initially the piping is a bit tricky to understand, but once you grasp that part, their advantages over textures are apparent.

BTW, OpenGL 3 supports instancing.


#5014694 Most efficient way to batch drawings

Posted by max343 on 27 December 2012 - 08:12 AM

1) Use only one program for everything. There is a single projection matrix, so I can group the vertices, send the values to the shader via glVertexAttribArray, and draw everything with one call. The problem is the model-view matrix, which would have to be set per vertex, and that isn't what I want because every sprite has its own matrix.
I'm not sure what you mean by this, but maybe what you want is instancing.

2) Continue to use various shaders. The projection matrix is shared between programs (how can I do that?), and every sprite has its own shader with its own model-view matrix and uniform values. The problem here is that I need to switch programs between sprite draws.
Uniform buffers make sharing easy.
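
To illustrate both points, here is a minimal sketch of instanced sprite drawing on GL 3.1+ (the names are assumptions; the per-sprite matrices are assumed to live in a uniform buffer like the one sketched earlier on this page, indexed in the vertex shader with gl_InstanceID, with the shared projection matrix in a uniform block as well):

#include <GL/glew.h>   // or whatever loader/header you already use

// One quad in the VAO; its 4 vertices are re-run once per sprite instance,
// so all sprites go out in a single draw call with a single program.
void drawSprites(GLuint program, GLuint quadVAO, int spriteCount) {
    glUseProgram(program);
    glBindVertexArray(quadVAO);
    glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, spriteCount);
}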


#5014135 Dealing with bind-to-edit in OpenGL

Posted by max343 on 25 December 2012 - 03:44 AM

Ok, that solution sounds good.
 
However, one thing I just thought of that could be a problem with both these designs is texture deletion. Once the Texture instance is destroyed (either with the delete keyword or when it goes out of scope), won't the GraphicsDevice class have a dangling pointer? For example, if you called CreateTexture2D() and the object pointed to by m_texture had been deleted, then when the GraphicsDevice instance tries to restore the previously bound texture, it will be calling a member function of a deleted object, causing a crash.
 
Do you have any ideas on how to fix this issue?
 
Also, what's FL9?  I googled it but I can't seem to find anything.

FL9 is feature level 9 mode of D3D11.

As for your first question: yes, it could potentially cause a problem, though it's much easier to identify with a simple assert during bind. Just check whether the texture name still seems valid (for instance, you can keep track of how many textures you've allocated).
This way of handling deletion is just how D3D does things; you don't have to do the same. As Aldacron already said, your GraphicsDevice can handle deletion (and check whether what you're deleting is currently bound). However, as far as the state goes, textures can delete themselves, because deletion doesn't require binding them (it doesn't alter the state).
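
A minimal sketch of that ownership scheme in C++ (the class layout is made up for the example, and glIsTexture stands in for the "keep track of what you've allocated" check suggested above):

#include <cassert>
#include <GL/glew.h>   // or whatever loader/header you already use

class GraphicsDevice {
public:
    void BindTexture2D(GLuint name) {
        // Catches the dangling-name case early instead of crashing later.
        assert(glIsTexture(name) && "binding a deleted or never-created texture");
        if (name != m_boundTexture2D) {
            glBindTexture(GL_TEXTURE_2D, name);
            m_boundTexture2D = name;
        }
    }
    void DeleteTexture(GLuint name) {
        if (name == m_boundTexture2D)
            m_boundTexture2D = 0;          // don't keep a stale cached binding
        glDeleteTextures(1, &name);        // deletion itself needs no bind
    }
private:
    GLuint m_boundTexture2D = 0;
};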


#5014064 Dealing with bind-to-edit in OpenGL

Posted by max343 on 24 December 2012 - 07:33 PM

It seems that in your case there are no shortcuts available, so you'll just have to do it the hard way.

You can do exactly what D3D11 does, which is pretty much what you suggested. Instead of having the method "Create" in the "Texture" class, move it to the "GraphicsDevice" class and rename it to something like "CreateTexture2D". This method will create the GL texture, create a "Texture" object, initialize its name (possibly through the constructor), correctly restore the previously bound texture, and finally return the texture object. You'll have to do the same for copying (and probably a few more things). Your "Texture" class will essentially become a resource interface that can do only simple things (like return its name and delete itself).
Just keep in mind that if you're going to follow D3D11 on this, don't copy it exactly, as it also supports views. That is way beyond the scope of a 2.1 context. Not that it's impossible to emulate (it's pretty much what FL9 does), but you'll just have to write a lot more code.
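
A minimal sketch of that CreateTexture2D shape for a 2.1-style context (the Texture wrapper, the RGBA8 format, and the cached-binding member are assumptions for the example):

#include <GL/glew.h>   // or whatever loader/header you already use

class Texture {
public:
    explicit Texture(GLuint name) : m_name(name) {}
    GLuint Name() const { return m_name; }
private:
    GLuint m_name;     // the resource interface only knows its GL name
};

class GraphicsDevice {
public:
    Texture CreateTexture2D(int width, int height, const void* pixels) {
        GLuint name = 0;
        glGenTextures(1, &name);
        glBindTexture(GL_TEXTURE_2D, name);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                     GL_RGBA, GL_UNSIGNED_BYTE, pixels);
        glBindTexture(GL_TEXTURE_2D, m_boundTexture2D);  // restore the previously bound texture
        return Texture(name);
    }
private:
    GLuint m_boundTexture2D = 0;   // kept up to date by the device's own bind calls (not shown)
};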


#5013921 Dealing with bind-to-edit in OpenGL

Posted by max343 on 24 December 2012 - 06:45 AM

As mhagain pointed out, direct state access is a great solution to your problem.

However, you have alternatives. It really depends on the GL version you're aiming to support. Suppose it's 4: then you can force your textures to be created only once (say, in the constructor), and from there on they can only be modified via ARB_copy_image or cast via ARB_texture_view. Also, the only entity that can create textures is the device. With this design you'll have fewer problems, and it also resembles the D3D design.

Also, what mhagain said about the state being cached in the driver is true. Generally the driver stores the current state in a local copy. It's just less pretty to have all those glGet calls in your code. A rule of thumb: if you have to resort to glGet to query the state, your design has flaws.


#5012472 Blend animation problem

Posted by max343 on 19 December 2012 - 09:46 AM

This kind of shortcut seems wrong to me. Assuming the matrix is orthogonal (which I guess is what CreateFromRotationMatrix does) doesn't make it so, so who knows what kind of quaternion you obtain from this operation.
Also, this kind of interpolation of scaling doesn't make sense to me either; spatial transformations aren't generally formed like that. It should work fine when there's no scaling, because you'll be interpolating between zero matrices, but with scaling I'm not even sure what kind of output it'll produce.

The right way to do it would be to decompose A = UP (polar decomposition), obtain a quaternion from U and lerp/slerp it, while interpolating P component-wise.
I'm not sure why the original author refrained from implementing the polar decomposition, as its implementation is straightforward and easy.
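
For reference, a minimal sketch of the polar decomposition in C++ using Higham's Newton iteration U_{k+1} = (U_k + U_k^-T)/2 (row-major float[9] matrices and the helper names are assumptions for the example). Once U is available, P = U^T * A, which you interpolate component-wise while converting U to a quaternion for lerp/slerp.

#include <cmath>

// Inverse of a row-major 3x3 matrix via the adjugate; returns false if singular.
static bool invert3x3(const float m[9], float out[9]) {
    const float c00 = m[4] * m[8] - m[5] * m[7];
    const float c01 = m[5] * m[6] - m[3] * m[8];
    const float c02 = m[3] * m[7] - m[4] * m[6];
    const float det = m[0] * c00 + m[1] * c01 + m[2] * c02;
    if (std::fabs(det) < 1e-12f) return false;
    const float inv = 1.0f / det;
    out[0] = c00 * inv;  out[1] = (m[2]*m[7] - m[1]*m[8]) * inv;  out[2] = (m[1]*m[5] - m[2]*m[4]) * inv;
    out[3] = c01 * inv;  out[4] = (m[0]*m[8] - m[2]*m[6]) * inv;  out[5] = (m[2]*m[3] - m[0]*m[5]) * inv;
    out[6] = c02 * inv;  out[7] = (m[1]*m[6] - m[0]*m[7]) * inv;  out[8] = (m[0]*m[4] - m[1]*m[3]) * inv;
    return true;
}

// Computes the orthogonal factor U of A = U*P for a nonsingular 3x3 matrix A.
static void polarFactor(const float a[9], float u[9], int iterations = 20) {
    for (int i = 0; i < 9; ++i) u[i] = a[i];
    float inv[9];
    for (int it = 0; it < iterations; ++it) {
        if (!invert3x3(u, inv)) break;
        for (int r = 0; r < 3; ++r)
            for (int c = 0; c < 3; ++c)
                u[3*r + c] = 0.5f * (u[3*r + c] + inv[3*c + r]);   // average with the inverse-transpose
    }
}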


#5012015 Which physics library can also step backwards in time?

Posted by max343 on 18 December 2012 - 06:36 AM

Generally you won't be able to reverse the simulation.
The simplest example: drop a box on the floor. Now it's at rest. Reverse the time. Will the box lift up? No.

In general terms, if a non-conservative force is involved in the simulation (and inelastic collisions involve friction), you won't be able to reverse the simulation.


#5011723 Minkowski addition and convex hulls.

Posted by max343 on 17 December 2012 - 08:58 AM

Ok, sorry for being rude and not checking whether it was you who downvoted (I just assumed that, apparently wrongly).
But still, if you see something that is not familiar to you, instead of saying it's a wild claim, you can just ask how it's done. I didn't elaborate on it simply because it's a relatively basic topic in computational geometry, so I wasn't sure what exactly the problem was.

Now that I see what the problem is, I can give you the full answer. Using plane equations for meshes is suboptimal for this type of query, because you want to exploit the convexity of your mesh, and multiple plane equations don't contain that information explicitly. On the other hand, using vertices/indices is useful because you can construct the Dobkin-Kirkpatrick hierarchy (during pre-processing, in linear time) and traverse only the most relevant parts of your convex mesh, which results in a logarithmic-time algorithm.
Implementation details can be found in almost any textbook that mentions this hierarchy.


#5011679 Minkowski addition and convex hulls.

Posted by max343 on 17 December 2012 - 07:41 AM

Think about what information needs to be represented from that triangle. If you want to test whether a point is inside your convex hull, you need to test that it's inside the planes of all the triangles. If you want to test ray intersection with your triangle, you first have to get the ray's intersection point with the plane the triangle resides on, then test that the point is inside the triangle.


Testing whether a point is inside a convex mesh takes O(log(n)) time, so why test it against all the triangles?
The same goes for intersecting a ray with a convex mesh: testing it against all the triangles is suboptimal (it can also be done in O(log(n)) time).


#5006741 Compute volume from mesh Mathematical demonstration

Posted by max343 on 03 December 2012 - 01:43 PM

This is not a problem since this is exactly the same point.
Let's recap it by steps:
1. You start with the volume integral.
2. Afterwards, you choose a vector field whose divergence is 1. This vector field is determined only up to the choice of a point x0.
3. As any x0 is good, you choose one (for instance the origin) and obtain the flux integral by the divergence theorem.
4. Once finished computing the flux integral, you choose to interpret the formula as the sum of volumes of tetrahedrons that share the vertex x0.
5. You rewrite the formula from step 4, with an equivalent formulation that uses Det. Again, this is just a change in interpretation.

In steps 2-5, x0 is fixed and chosen exactly once, so there's no problem.
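
For reference, steps 1-5 compressed into one line (with the usual choice F(x) = (x - x0)/3 for the vector field and n the outward unit normal):

\[
V = \int_\Omega 1 \, dV
  = \int_\Omega \nabla \cdot \frac{x - x_0}{3} \, dV
  = \frac{1}{3} \oint_{\partial\Omega} (x - x_0) \cdot n \, dS
  = \frac{1}{6} \sum_{(v_1, v_2, v_3)} \det\big(v_1 - x_0,\; v_2 - x_0,\; v_3 - x_0\big),
\]

where the sum runs over the outward-oriented triangles of the mesh and each summand is six times the signed volume of the tetrahedron with apex x0.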


#5005355 Compute volume from mesh Mathematical demonstration

Posted by max343 on 29 November 2012 - 10:42 AM

Yes. So it doesn't prove my formula with the vertices??
The aim of this formula is to use only the vertices, not the distance, in order to save time!


Why not?
For a tetrahedron: A*d/3 == Det(v1-v0, v2-v0, v3-v0)/6, where 'A' is the area of one of its triangles and 'd' is the distance from the opposing vertex to the plane of that triangle. In the Det formula, v1, v2, v3 define that same triangle, while v0 is the opposing vertex.
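
And a minimal sketch of the resulting mesh-volume computation in C++ (an indexed triangle mesh with outward-facing winding, x0 taken as the origin; the vec3 type is made up for the example):

#include <cstddef>
#include <vector>

struct vec3 { float x, y, z; };

// Scalar triple product a . (b x c), i.e. Det(a, b, c).
static float det3(const vec3& a, const vec3& b, const vec3& c) {
    return a.x * (b.y * c.z - b.z * c.y)
         + a.y * (b.z * c.x - b.x * c.z)
         + a.z * (b.x * c.y - b.y * c.x);
}

float meshVolume(const std::vector<vec3>& verts, const std::vector<unsigned>& indices) {
    float volume = 0.0f;
    for (std::size_t i = 0; i + 2 < indices.size(); i += 3) {
        const vec3& v1 = verts[indices[i]];
        const vec3& v2 = verts[indices[i + 1]];
        const vec3& v3 = verts[indices[i + 2]];
        volume += det3(v1, v2, v3) / 6.0f;   // signed volume of the tetrahedron (0, v1, v2, v3)
    }
    return volume;
}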



