Complex shapes and setting their position.


Hi!

I'm rather new to game programming and have developed a simple 2D platform game.
I'm looking into improving it with slopes, which means I have to handle overlap of triangles and other convex shapes.

My question is this: in my current implementation, all entities have an x and y coordinate. They don't have a shape other than an AABB.
I don't store any vertices other than the top-left corner of the rectangle.

In my new implementation all my shapes have a list of vertices, e.g. {v1,v2,v3} for a triangle. I'm a bit confused at the moment.
Should the entities still have an X and Y coordinate? And what is the x and y coordinate of a general shape? Should I take the center point of the mesh or triangle (the centroid) as the X and Y? Or should I ignore it and simply manipulate the vertices with a transformation matrix when I set x and y?

I'm not sure I fully understand the problem myself, so sorry if my question is a bit unclear 🙂


Gtadam said:
In my new implementation all my shapes have a list of vertices, e.g. {v1,v2,v3} for a triangle. I'm a bit confused at the moment. Should the entities still have an X and Y coordinate?

I would rephrase the question to ‘Should / can we divide our world into a set of objects?’
Usually the answer is yes, but the larger our models become, the more it turns out the answer is actually no.

To show the problem, let's consider two functions we might want per object: Collision detection and level of detail.
And let's say our model is static but complex terrain with caves and cliffs.
Eventually we divide our terrain into many convex cells, like Quake did for all the static geometry making up a level.

The problems we might encounter are:

A dynamic object might slide between two convex cells and get stuck, although there is no empty space between them. In the real world this could not happen, because there is no artificial segmentation of connected matter into a set of cells. It is just solid matter, and a dynamic object cannot get inside.

To maintain high frame rate, we may display distant cells from our terrain with lower geometric resolution. (In a 3D game ofc.)
We will face the problem of cracks between cells that do not have the same level of detail, because geometry does not match.
To solve the problem we can ensure all our cells have a geometrical boundary around the whole object. So instead of seeing cracks and holes, we now see internal cell boundaries, which is better.
But this also means we can now partially see the inside of objects, and we waste loads of triangles just to show interior segmentation boundaries which should not be visible at all. And we might have problems with lighting those interior surfaces, which would be totally occluded and not lit at all in the real world.

So there are some potential technical problems and limitations with the idea of creating entire worlds from ‘small objects’.
The limitations also affect content creation. For example, I can scan some rocks and display them at insane detail in UE5. It's awesome if I look at it close up. But if I look at it from some distance, at a larger scope, I realize I cannot model global processes such as eroded valleys, landslides, or natural mountain peaks by copy-pasting a set of rocks, even if those rocks appear so realistic up close. And I realize with a frown: I'm still limited to faking facades and working with smoke and mirrors.

This dilemma applies to 2D games too. And likely it's this dilemma which gives you some subconscious doubts about the idea of creating worlds from ‘objects’.
So it's good to keep that in mind and try to be aware of it. It is how almost all games work, but we are not totally happy about it.

Gtadam said:
Should I take the center point of the mesh or triangle (the centroid) as the X and Y?

It's really up to you. Some options:

Use the center of the bounding rectangle of the object. (Or just the top-left corner of the rectangle, whatever you prefer.)
That's probably good especially if you quantize objects to snap to a grid. E.g. a rock model may take exactly 4 x 3 grid cells, but not 1.7 x 3.3 cells. The easy alignment and placement can speed up the process of creating levels, and make some other things easier too.

Use the area center of the object, which would be the analogue of the center of mass we would use for 3D rigid bodies.
That's maybe good if you don't use quantization. E.g. you can draw a spline, and then your tools place rock / soil / grass models along the spline. Such ways of modeling became more popular after the tiled limitations of early hardware were no longer an issue.
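
In case code helps, here is a rough sketch of both options. All names (Vec2, boundsCenter, areaCentroid) are just made up for illustration, and it assumes a non-empty vertex list and, for the centroid, a simple non-degenerate polygon:

```cpp
#include <vector>
#include <algorithm>

struct Vec2 { float x = 0, y = 0; };

// Option 1: center of the axis-aligned bounding rectangle.
Vec2 boundsCenter(const std::vector<Vec2>& verts)
{
    Vec2 lo = verts[0], hi = verts[0];
    for (const Vec2& v : verts)
    {
        lo.x = std::min(lo.x, v.x); lo.y = std::min(lo.y, v.y);
        hi.x = std::max(hi.x, v.x); hi.y = std::max(hi.y, v.y);
    }
    return { (lo.x + hi.x) * 0.5f, (lo.y + hi.y) * 0.5f };
}

// Option 2: area centroid of a simple polygon, the 2D analogue of the
// center of mass of a rigid body with uniform density.
Vec2 areaCentroid(const std::vector<Vec2>& verts)
{
    float area2 = 0;        // twice the signed area
    Vec2 c{ 0, 0 };
    for (size_t i = 0, n = verts.size(); i < n; ++i)
    {
        const Vec2& a = verts[i];
        const Vec2& b = verts[(i + 1) % n];
        float cross = a.x * b.y - b.x * a.y;
        area2 += cross;
        c.x += (a.x + b.x) * cross;
        c.y += (a.y + b.y) * cross;
    }
    c.x /= 3.0f * area2;    // centroid = accumulated sum / (6 * area)
    c.y /= 3.0f * area2;
    return c;
}
```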

It should not matter much, but you have to find some option that works for your current goal.

Gtadam said:
Or should I ignore it and simply manipulate the vertices with a transformation matrix when I set x and y?

A transform per object is likely a good idea. You can rotate and scale gradually, eventually even shear the objects.
You can also go beyond that and allow bending objects along a spline, for example.

But you always need some center and reference orientation for your objects anyway, independent of such additional flexibility.
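
For example, a per-object 2D affine transform could look roughly like this. Just a sketch with made-up names; a 2x3 matrix covers translation, rotation and scale, and shear would only add another off-diagonal term:

```cpp
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

struct Transform2D
{
    // | a b tx |   row-major 2x3 affine matrix, maps local space to world space
    // | c d ty |
    float a = 1, b = 0, tx = 0;
    float c = 0, d = 1, ty = 0;

    // Build from translation, rotation (radians) and uniform scale.
    static Transform2D trs(Vec2 pos, float angle, float scale)
    {
        float co = std::cos(angle), si = std::sin(angle);
        Transform2D t;
        t.a = co * scale; t.b = -si * scale; t.tx = pos.x;
        t.c = si * scale; t.d =  co * scale; t.ty = pos.y;
        return t;
    }

    // Transform a local-space vertex to world space.
    Vec2 apply(Vec2 v) const
    {
        return { a * v.x + b * v.y + tx,
                 c * v.x + d * v.y + ty };
    }
};
```

Setting the x and y of an entity then only writes tx and ty; the vertex list itself is never modified.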

Giving each entity a position separate from vertex data means that vertex data can be shared between entities, which is usually a good thing. This means that if you have five identical slopes, you only need to store the vertex data for that slope once. It also means that you can make the vertex data immutable and the position mutable, which means that you only need to serialize the position of an entity when saving the game, not its vertex data.

As for where in the shape the origin point should be: wherever is convenient for you. Usually at the point around which you want to rotate the shape, if you have rotation. Usually where the lines across which you might want to flip the shape converge, if you have flipping. I often find it useful to use the center bottom point for entities that can be placed on the floor, to make snapping to the floor easier.
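
In code, the two points above might look roughly like this (all names are hypothetical, and it assumes the shape's local origin is placed at its bottom-center):

```cpp
#include <vector>

struct Vec2 { float x = 0, y = 0; };

// Immutable vertex data, stored once per shape type and shared by all
// entities that use it. Vertices are defined around the chosen origin,
// here assumed to be the bottom-center of the shape.
struct Shape
{
    std::vector<Vec2> localVertices;
};

// Mutable per-instance data: five identical slopes are five Entities
// pointing at the same Shape, and only the position needs serializing.
struct Entity
{
    const Shape* shape = nullptr;
    Vec2 position;
};

// With a bottom-center origin, snapping to the floor is just assigning y;
// no per-shape offset is needed.
void snapToFloor(Entity& e, float floorSurfaceY)
{
    e.position.y = floorSurfaceY;
}
```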

Thanks Joe and light breeze for the quick and elaborate answers! I will spend some time pondering for a while.
I will probably have some kind of follow-up question nonetheless.

It's a bit of a hassle going from AABBs to other shapes. The math is a bit hard to grasp.

a light breeze said:
As for where in the shape the origin point should be: wherever is convenient for you. Usually at the point around which you want to rotate the shape, if you have rotation.

That's a good argument, and I would call such a point a ‘pivot’ eventually. Often such a point is at the expected ground level of an object, and choosing it well makes placement and manipulation easier.

But then it's worth mentioning that we may use multiple reference points per object: a pivot to help with editing, a center of mass to help with simulation, maybe attachment points so characters can hold items, etc.

Wow, that's good to know. But do you have problems with creeping floating point issues?

Should I apply the delta changes to each position independently, or should I recalculate e.g. the pivot point from the vertices each time I need it?

Gtadam said:
But do you have problems with creeping floating point issues?

No, this should not be a problem.

Usually the vertices of a model are defined in the local space of that model.

Then you may add multiple instances of the model in the game world. Each has its own world transform.
To get the world-space vertices, you transform the model's local vertices with that world transform. (Often this happens on the GPU.)

As long as things do not move, that's all static data and you'll get the same results each frame. Floating-point error cannot accumulate.

If the objects are dynamic, you may need to be careful about the world transform, but never about the vertices.
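
To make the contrast with your earlier question concrete, here is a rough sketch, with made-up names, of the error-prone way (nudging stored world vertices by a delta) versus the robust way (keeping local vertices fixed and deriving world vertices from the transform):

```cpp
#include <vector>
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

// Error-prone: storing world-space vertices and moving them by a delta each
// frame lets rounding error accumulate directly in the vertex data.
void moveByDelta(std::vector<Vec2>& worldVerts, Vec2 delta)
{
    for (Vec2& v : worldVerts) { v.x += delta.x; v.y += delta.y; }
}

// Robust: the local vertices never change, only the transform does.
// World-space vertices are recomputed from scratch whenever needed,
// so error cannot creep in over time.
struct Entity
{
    std::vector<Vec2> localVertices;   // immutable after creation
    Vec2  position{ 0, 0 };
    float rotation = 0;                // radians

    std::vector<Vec2> worldVertices() const
    {
        float c = std::cos(rotation), s = std::sin(rotation);
        std::vector<Vec2> out;
        out.reserve(localVertices.size());
        for (const Vec2& v : localVertices)
            out.push_back({ position.x + c * v.x - s * v.y,
                            position.y + s * v.x + c * v.y });
        return out;
    }
};
```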

Thanks so much for the information JoeJ!
I think I grasp it a lot better!

Will return to my code for a while :D

I'm just wondering, as I'm building my own game engine, about storing world-space vertices.

Where should they be located? As I'm doing it right now, I have an update method for each shape that refreshes its worldSpaceVertices. I do that because the class includes various intersect and overlap methods that need this information.

Do you usually store world-space vertices in the entities instead? I need them for collision detection. It would be great if I could reuse the shapes for various other entities too.

I don't store world-space vertices at all, but calculate them on demand.
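
As a sketch of what ‘on demand’ could mean in practice, here is a separating-axis (SAT) overlap test for convex polygons that works directly on temporary world-space vertex lists. Everything here (Vec2, convexOverlap, and the Entity::worldVertices() from the earlier sketch) is made up for illustration and assumes convex, non-degenerate polygons:

```cpp
#include <vector>
#include <limits>

struct Vec2 { float x = 0, y = 0; };

// Project all vertices onto an axis and return the [lo, hi] interval.
static void projectOntoAxis(const std::vector<Vec2>& verts, Vec2 axis,
                            float& lo, float& hi)
{
    lo =  std::numeric_limits<float>::max();
    hi = -std::numeric_limits<float>::max();
    for (const Vec2& v : verts)
    {
        float d = v.x * axis.x + v.y * axis.y;   // dot product
        if (d < lo) lo = d;
        if (d > hi) hi = d;
    }
}

// Test the edge normals of polygon 'a' as candidate separating axes.
static bool overlapOnAxesOf(const std::vector<Vec2>& a, const std::vector<Vec2>& b)
{
    for (size_t i = 0, n = a.size(); i < n; ++i)
    {
        Vec2 edge{ a[(i + 1) % n].x - a[i].x, a[(i + 1) % n].y - a[i].y };
        Vec2 axis{ -edge.y, edge.x };            // edge normal; no need to normalize for a yes/no test
        float aLo, aHi, bLo, bHi;
        projectOntoAxis(a, axis, aLo, aHi);
        projectOntoAxis(b, axis, bLo, bHi);
        if (aHi < bLo || bHi < aLo)
            return false;                        // found a separating axis
    }
    return true;
}

// Two convex polygons overlap if no separating axis exists among
// the edge normals of either polygon.
bool convexOverlap(const std::vector<Vec2>& a, const std::vector<Vec2>& b)
{
    return overlapOnAxesOf(a, b) && overlapOnAxesOf(b, a);
}
```

A collision query would then be something like convexOverlap(entityA.worldVertices(), entityB.worldVertices()): the world-space vertices exist only for the duration of the test, so there is nothing stale to keep in sync.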

