

Member Since 29 Sep 2012
Offline Last Active Jul 27 2016 12:02 PM

Posts I've Made

In Topic: 2D Plain/layer Shift Game Play Mechanics

26 July 2016 - 10:33 AM

Thank you, KnolanCross. I will have to investigate this further.

Any engine suggestions for a 2D platformer scenario like the one shown in the video?


I use orx (http://orx-project.org/), but it has a huge learning curve and few tools available.


I personally don't like Unity3D, because I don't like its workflow; I would rather control everything via code.

That being said, if you don't have a problem with that, it is probably your best bet, as it is the most popular engine among indies, which means it has a lot of resources and docs.

In Topic: 2D Plain/layer Shift Game Play Mechanics

25 July 2016 - 04:19 PM

Either McGrane's solution or, if your engine allows it, simply use actual 3D coordinates; that will save you a big headache, and many 2D engines support it.
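A minimal sketch of the idea, not tied to any particular engine (all names here, such as `Entity`, `shift_layer` and `visible_on`, are hypothetical): entities carry a real third coordinate, and a "layer shift" just changes that value while the 2D gameplay logic stays untouched.

```python
from dataclasses import dataclass

@dataclass
class Entity:
    x: float
    y: float
    layer: int  # the z coordinate, used as a discrete layer index

def shift_layer(entity: Entity, delta: int) -> None:
    """Move the entity to another plane; x/y gameplay logic is untouched."""
    entity.layer += delta

def visible_on(entities, layer):
    """Only entities on the active layer are drawn and collided."""
    return [e for e in entities if e.layer == layer]

player = Entity(x=10.0, y=5.0, layer=0)
shift_layer(player, +1)           # the plane/layer shift mechanic
world = [player, Entity(0.0, 0.0, layer=0)]
print([(e.x, e.y) for e in visible_on(world, 1)])  # only the player
```

The point is that rendering and collision filter by the layer value, so the shift itself is a one-line state change rather than a separate copy of the level.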


PS: There is http://overlap2d.com/ for level editing as well.

In Topic: Memory Allocating Error ( I'm Massed Up )

25 July 2016 - 03:48 PM

Have you tried running it in debug mode? It should be F8.

Remember to add debug symbols: in Settings > Compiler > Compiler settings, check the "Produce debugging symbols [-g]" option.


You can also run it under Valgrind (use the "-v --track-origins=yes" options).

In Topic: C vs Java (+ frameworks), Engines...?

13 January 2016 - 06:43 AM


Just an extra: in case you decide to use libgdx (which, AFAIK, is the most well-developed open source Java engine), you should take a look at overlap2d. It is a scene editor that integrates very well with libgdx.



In Topic: Caching paths

14 December 2015 - 09:11 AM

I don't know of any articles about caching A* pathfinder results.


(I assume you are working in the case of optimal paths.)

Pondering about it, the biggest issue is probably how to decide that your pathfinding results have become invalid. I don't see an easy answer to that question.

Knowing when your result is still valid is probably easier. I would define "valid" as "if you run the computation again, you will get the same answer" (where "same" is still complicated).


In free open space, I would expect that a computed path stays valid if the reachability of all nodes that you explored in the search is the same (i.e. it stays free open space), and all traversal costs from one node to the next in the set of explored nodes are the same (i.e. no node suddenly becomes more or less attractive to reach).

The reasoning here is that in your path search, you explore a number of nodes while looking for the shortest path. If you do that computation again, and the reachability and traversal costs of all nodes that you explore have not changed, you should end up with the exact same set of nodes, and the same path. "Same" is a bit tricky here: you can have several distinct routes of equal length, and due to the order of the nodes in the data structures and the order in which traversals are considered, I would expect you could end up with a different route of equal length. (Shorter cannot happen, I think, since the path you got the first time was already the shortest given the costs, and longer cannot happen since the path you derived the first time still exists, and would then be better.)


So in free open space, if you store the set of explored nodes (i.e. all nodes in the open and closed lists) along with the path, and the cost of reaching each of these nodes does not change, I think your path is still valid (modulo alternative routes of equal length).
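A hedged sketch of that validity test: alongside the cached path, store every node the search touched together with the reachability and cost observed at search time, and consider the path valid only if none of those observations have changed. All names (`snapshot`, `still_valid`, the toy grid) are illustrative, not from any particular engine.

```python
def snapshot(explored_nodes, cost_fn, passable_fn):
    """Record reachability and cost for every node the search touched
    (i.e. everything that was in the open or closed lists)."""
    return {n: (passable_fn(n), cost_fn(n)) for n in explored_nodes}

def still_valid(snap, cost_fn, passable_fn):
    """Re-check the snapshot; any change may invalidate the cached path."""
    return all((passable_fn(n), cost_fn(n)) == old for n, old in snap.items())

# Toy grid world: three explored nodes, uniform cost, nothing blocked yet.
costs = {(0, 0): 1, (1, 0): 1, (1, 1): 1}
blocked = set()
snap = snapshot(costs.keys(), costs.get, lambda n: n not in blocked)

print(still_valid(snap, costs.get, lambda n: n not in blocked))  # True
blocked.add((1, 0))  # a node in the explored set becomes unreachable
print(still_valid(snap, costs.get, lambda n: n not in blocked))  # False
```

This is conservative in the sense discussed above: a failed check means the path *may* be invalid and should be recomputed, not that it necessarily changed.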


If you add blockages to the equation (i.e. some nodes cannot be entered), it gets more complicated. A new blockage on the current path makes the path invalid (duh :) ).

A new blockage in the set of explored nodes but not on the path seems harmless at first sight. The reasoning is that the node with the new blockage is no longer reachable, but that doesn't affect the path, since the blockage was not on the path. In addition, nodes reached from the now-blocked node cannot be reached through that node any more. Maybe they can be reached through some other node, but that path is not going to be shorter (or the original path would not have gone through the now-blocked node). Thus if anything happens to the explored nodes off the path, their distances get bigger rather than smaller, and your set of explored nodes for the path may shrink. This makes sense: adding new blockages generally means you're blocking off some options.

Adding new blockages outside the set of explored nodes does not give new (shorter) paths.


Removing blockages is quite complicated. Note that blocked nodes are never in the open or closed lists, since you cannot reach them at all. If the removed blockage is a neighbour of a closed node, the blockage prevented all paths through that now-unblocked node. Some of those newly available paths to the destination may be shorter than the path you currently have. You could check that, I think, by extending the path from the closed neighbours to the now-unblocked node. If the sum of the real distance to the now-unblocked node and its estimate is less than the current length of the path to the destination, a path through the now-unblocked node may indeed be shorter (i.e. your current path is now invalid). If the sum is equal, a path may exist, but it is not shorter (since your estimate is never above the real distance). It may have equal length though, which leads back to the "same path" discussion above.


I think removing a blockage from a node whose explored neighbours are all in the open list is harmless. Nodes in the open list already have costs equal to or higher than those on the real path, and you add some travel cost from the neighbour to the now-unblocked node, so its total cost is never going to be lower than the path you have now.



If you have blockages, I think a reasonable approximation could be:

- A new blockage on the shortest path breaks the path.

- A new blockage elsewhere should not cause any change to the shortest path.

- A removed blockage on a neighbour of the explored nodes invalidates the path. (The discussion above was more precise, but it requires knowing the real distances from the source to all explored nodes, which is probably too costly.)
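Those three rules can be sketched in a few lines. This is an illustrative approximation under the stated assumptions (a grid world with 4-neighbour adjacency; `path_still_usable` and `adjacent` are made-up names, not any engine's API):

```python
def path_still_usable(path, explored, new_blockages, removed_blockages):
    path_nodes = set(path)
    # Rule 1: a new blockage on the shortest path breaks the path.
    if any(b in path_nodes for b in new_blockages):
        return False
    # Rule 3 (conservative): a blockage removed next to any explored node
    # might open a shorter route, so invalidate the cached path.
    neighbours = {nb for n in explored for nb in adjacent(n)}
    if any(r in neighbours for r in removed_blockages):
        return False
    # Rule 2: new blockages elsewhere cannot shorten the path.
    return True

def adjacent(node):
    """4-neighbourhood on a grid; swap in your own adjacency."""
    x, y = node
    return [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]

path = [(0, 0), (1, 0), (2, 0)]
explored = {(0, 0), (1, 0), (2, 0), (1, 1)}
print(path_still_usable(path, explored, {(5, 5)}, set()))  # True
print(path_still_usable(path, explored, {(1, 0)}, set()))  # False
```

The trade-off is exactly the one noted above: rule 3 over-invalidates (any unblocked neighbour of the explored set throws the cache away) in exchange for not having to store real distances to every explored node.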


I didn't consider changes in the cost of traveling to a neighbour, but it seems to me you could apply similar reasoning as with blockages.


Thank you for your answer, albert.


I actually read an article that uses a strategy very similar to what you describe (reusing the already-computed parts of a path), but my focus is on reusing the paths themselves. Material on this is quite hard to find because caches are very limited: as you stated, they need to be cleared on changes, and most algorithms work under a very limited set of rules (there seems to be no answer that fits all the constraints).