
Dennis Technologies

Member Since 04 Sep 2013
Offline Last Active Nov 06 2013 01:24 PM

Posts I've Made

In Topic: How to stock large amount of data for a game

10 September 2013 - 08:52 PM

50% of a programmer's work is to figure out how not to do the other 50%.

+1 for this :)

In Topic: Pathfinding on 7 oceans

09 September 2013 - 03:28 PM

I was thinking about multi-level hierarchical nav meshes after I posted. This is a good idea and is probably how I would solve the problem. The trick is to align each grid with the next finer grid so that every node knows not only the 4 peer nodes attached to it, but also the node it is sitting on top of in the next finer graph. Additionally, you need to check when it is time to use the finer-grained graph. I would do this as a fifth test, where you are basically checking whether it is better to stay at the current node than to move to the next node. An example of the implementation I would use:

Assume a line of points numbered from 0 to 1000. I would create two graphs: one with every point evenly divisible by 100 (0, 100, 200, ...) and one with every point evenly divisible by 10 (0, 10, ..., 100, 110, ...). You can see that the two graphs intersect at every one of the coarser-grained graph's nodes. Now let's say I am standing at point 0 and want to move to point 240. I would start in the coarser-grained graph and move to point 100, then to point 200. I can then see that I stay closer to my goal by remaining at point 200 than by moving to 300, which would overshoot it. So I switch to the fine-grained graph, and since I'm already standing at one of its nodes (200), I move to 210, 220, 230, and finally 240. Tada! I get cake :)

I hope this made sense.
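The walkthrough above could be sketched roughly like this (a hypothetical helper, assuming the start sits on a coarse node as in the example):

```python
def hierarchical_path(start, goal, coarse=100, fine=10):
    """Walk the coarse graph while each step gets us closer to the goal,
    then drop to the fine graph (the 'fifth test' described above)."""
    path = [start]
    pos = start
    step = coarse
    while pos != goal:
        direction = 1 if goal > pos else -1
        nxt = pos + direction * step
        # Fifth test: if the next coarse node is no closer than where we
        # stand, stay put and switch to the finer-grained graph instead.
        if abs(goal - nxt) >= abs(goal - pos):
            if step == fine:
                break  # goal isn't on the fine grid; stop at the nearest node
            step = fine
            continue
        pos = nxt
        path.append(pos)
    return path

# Standing at 0, heading for 240: coarse hops to 200, then fine hops to 240.
print(hierarchical_path(0, 240))  # [0, 100, 200, 210, 220, 230, 240]
```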

In Topic: The mystery of the spike

09 September 2013 - 03:08 PM

Have you tried reading all of the data in one go, or in bigger chunks? I'm wondering if other processes are causing disk seeks while you are decompressing, forcing your process to constantly seek back to the point in the file where it left off when it goes back for more. I've found that file I/O is a get-in, do-the-job, get-out process with no lollygagging in between. I could be way off base, though; I wouldn't consider myself an expert on file systems.
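As a rough sketch of what I mean (chunk size is just a guess to tune for your hardware): read the whole compressed file into memory in large chunks first, then decompress from memory so you never go back to the disk mid-decompression.

```python
import io

CHUNK = 4 * 1024 * 1024  # 4 MiB per read; tune for your setup

def slurp(path):
    """Pull the entire file into memory with a few large reads."""
    buf = io.BytesIO()
    with open(path, "rb") as f:
        while True:
            chunk = f.read(CHUNK)
            if not chunk:  # EOF
                break
            buf.write(chunk)
    return buf.getvalue()  # decompress from this, not from the file
```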

Also, is your hard drive severely fragmented? This will cause poor performance with file I/O as well.

In Topic: Pathfinding on 7 oceans

09 September 2013 - 10:11 AM

I suggest perusing the AI forum. There was a recent post there with a link to a fantastic Gamasutra article all about pathfinding.

Off the top of my head I would recommend this:
1. Break the world map into a grid, like a globe, with nodes at every intersection of latitudinal and longitudinal lines.
2. Every node should know how to get to the 4 nodes attached to it.
3. Calculate the closest node to your destination.
4. Calculate the closest node to your origin.
5. Calculate a path from the origin node to the destination node.
6. Use A* to get your ship to the origin node.
7. Use A* to get your ship from the destination node to the actual destination.

The trick here is that steps 1 & 2 can be precomputed at compile time. I believe this is called a navmesh. Also, I am assuming there is just open water between any two points and no land masses; introducing intervening land masses will complicate step 5 slightly, but not by much. It will be up to you to find the optimum size of the grid.
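Step 5 could be sketched as plain A* over a 4-connected grid. This is a minimal, hypothetical version (it treats the grid as flat, ignoring longitude wraparound, and `passable` is an assumed callback that returns False for land):

```python
import heapq
import itertools

def astar(start, goal, passable, size):
    """A* from start to goal on a (rows, cols) grid of 4-connected nodes."""
    rows, cols = size

    def h(n):  # Manhattan-distance heuristic
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    counter = itertools.count()  # tie-breaker so heap never compares nodes
    frontier = [(h(start), 0, next(counter), start, None)]
    came_from = {}
    best_g = {start: 0}
    while frontier:
        _, g, _, node, parent = heapq.heappop(frontier)
        if node in came_from:
            continue  # already expanded via a cheaper route
        came_from[node] = parent
        if node == goal:
            path = []
            while node is not None:  # walk parents back to the start
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nbr in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nbr
            if 0 <= nr < rows and 0 <= nc < cols and passable(nbr):
                ng = g + 1
                if nbr not in best_g or ng < best_g[nbr]:
                    best_g[nbr] = ng
                    heapq.heappush(
                        frontier, (ng + h(nbr), ng, next(counter), nbr, node)
                    )
    return None  # destination unreachable
```

Land masses (step 5's complication) just make `passable` return False for some nodes; A* routes around them automatically.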

In Topic: Component based architecture and messaging

09 September 2013 - 09:50 AM

Most people will tend toward decoupling because it makes components easier to change. Coupling can work, and can (I stress 'can') be faster to implement as well as faster at runtime, but your system will be less robust and far more fragile.

Your second idea does not require 'coupling' as I would understand it. Have you looked at Service Oriented Architectures? I think it would fit well and is a decoupled system.
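To illustrate the decoupling I mean, here is a minimal publish/subscribe bus sketch (names and the "damage" topic are made up for the example); components only know the bus, never each other:

```python
from collections import defaultdict

class MessageBus:
    """Routes messages by topic so components never reference each other."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self._subscribers[topic]:
            handler(payload)

# A health component can react to damage without knowing who dealt it:
bus = MessageBus()
damage_log = []
bus.subscribe("damage", damage_log.append)
bus.publish("damage", {"target": "ship_7", "amount": 12})
```

Swapping a component then means swapping a subscriber, which is where the robustness comes from.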