
#5287632 towards a faster A*

Posted by wodinoneeye on 19 April 2016 - 11:31 AM

You are lucky if you can fix your window overlay grid dimensions as constants.


Then you can do pointer arithmetic on a fixed-size map array (i.e. ptr+1, ptr-1, ptr+span, ptr+span+1, ptr-span+1, etc.) .... for example, to access the usual 8 neighbors when finding open-list candidates. You can do this with a single value offset (an array of the 8 offsets and a for loop) if you make your map a one-dimensional array [90000].
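A minimal sketch of that single-offset neighbor trick, assuming a 300x300 window stored flat (the span size and names are illustrative, and the index is assumed not to be on the outer edge):

```c
#include <assert.h>

#define SPAN 300                      /* row width of the flat window */

/* The 8 neighbor offsets, precomputed once from the row span. */
static const int NEIGHBOR_OFFSETS[8] = {
    -SPAN - 1, -SPAN, -SPAN + 1,      /* row above */
    -1,                +1,            /* same row  */
     SPAN - 1,  SPAN,  SPAN + 1      /* row below */
};

/* Visit the 8 neighbors of node `idx` with plain integer adds --
   no (x,y) decomposition, no per-neighbor multiply. */
int collect_neighbors(int idx, int out[8]) {
    int n = 0;
    for (int i = 0; i < 8; i++)
        out[n++] = idx + NEIGHBOR_OFFSETS[i];
    return n;
}
```

With pointers instead of indices it is the same idea: `node + NEIGHBOR_OFFSETS[i]` on a `Node *` into the flat array.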


Use a fixed-size HeapQ for the open list (also doing pointer math to walk the nodes of the tree -- a systematic power-of-2 progression for the binary tree). Max depth shouldn't need to be sized to the worst case, as you can let entries fall off the bottom, and IF they ever become candidates again they can be re-added to the open list.
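A minimal sketch of such a fixed-capacity binary heap, written with array indices rather than raw pointers (the index form of the same power-of-2 walk: children of i are 2i+1 and 2i+2); capacity and field names are illustrative:

```c
#include <assert.h>

#define HEAP_CAP 1024

typedef struct { int cost; int node; } HeapEntry;

typedef struct {
    HeapEntry e[HEAP_CAP];
    int n;
} HeapQ;

/* Push; if the heap is full the entry is simply dropped -- the node
   can be re-added later if it becomes a candidate again. */
void heap_push(HeapQ *h, int cost, int node) {
    if (h->n >= HEAP_CAP) return;       /* fell off the bottom */
    int i = h->n++;
    while (i > 0) {
        int p = (i - 1) / 2;            /* parent index */
        if (h->e[p].cost <= cost) break;
        h->e[i] = h->e[p];              /* sift the hole up */
        i = p;
    }
    h->e[i].cost = cost;
    h->e[i].node = node;
}

/* Pop the lowest-cost entry (caller checks h->n > 0 first). */
HeapEntry heap_pop(HeapQ *h) {
    HeapEntry top = h->e[0];
    HeapEntry last = h->e[--h->n];
    int i = 0;
    for (;;) {
        int c = 2 * i + 1;              /* left child */
        if (c >= h->n) break;
        if (c + 1 < h->n && h->e[c + 1].cost < h->e[c].cost) c++;
        if (h->e[c].cost >= last.cost) break;
        h->e[i] = h->e[c];              /* sift the hole down */
        i = c;
    }
    h->e[i] = last;
    return top;
}
```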


Use flags ON the window A* grid map (overlay) nodes for your closed list.


If you are still doing linked lists then absolutely use node pools instead of paying costly allocate/deallocate overhead.


Crush your map/open-list data as much as possible to maximize cache hits

   (300-coordinate sizes call for 16-bit ints ...) use bitmaps for flags; avoid floats for weight values if you can -- try to use bytes/int16s.

All needed A* data is extracted from the real-world map this 300x300 window is overlaid on (yanked from the chunks at that time).
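As a sketch of the kind of "crushing" meant here (field names, widths, and flag layout are illustrative, not from the post), a whole working node can fit in 8 bytes -- 16-bit integer costs instead of floats, and single-bit flags instead of separate open/closed containers:

```c
#include <assert.h>
#include <stdint.h>

typedef struct {
    uint16_t g;          /* cost from start, int16 instead of float   */
    uint16_t f;          /* g + heuristic                             */
    uint16_t parent;     /* which-of-8-neighbors index back to parent */
    uint8_t  flags;      /* bit flags below                           */
} AStarNode;

enum { NODE_CLOSED = 1, NODE_OPEN = 2, NODE_BLOCKED = 4 };
```

At 8 bytes per node a 300x300 window is about 720 KB of working data, versus several times that with doubles and pointer-based lists.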


Eliminate the if-then X/Y edge-of-map tests done when getting open-list candidates by oversizing the map by +2 and pre-marking the boundary lines as CLOSED (flagged) -- the map edges will now be excluded as part of the normal closed-node test.
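A sketch of that sentinel-border setup (sizes and the flag bit are illustrative): the working window is W x W but stored as (W+2) x (W+2), and the outermost ring is pre-marked CLOSED, so neighbor generation never needs an explicit "am I at the edge?" branch -- border cells fail the same closed-node check every cell already gets.

```c
#include <assert.h>

#define W  300
#define PW (W + 2)                    /* padded width */
#define FLAG_CLOSED 1

/* Run once when (re)initializing the window's flag array. */
void mark_border_closed(unsigned char flags[]) {
    for (int i = 0; i < PW; i++) {
        flags[i]                 |= FLAG_CLOSED;   /* top row    */
        flags[(PW - 1) * PW + i] |= FLAG_CLOSED;   /* bottom row */
        flags[i * PW]            |= FLAG_CLOSED;   /* left col   */
        flags[i * PW + (PW - 1)] |= FLAG_CLOSED;   /* right col  */
    }
}
```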


Finish the current pathfind task -- don't timeslice multiple A* pathfindings.


If precalc'd edge-cost values can be stored in the real-world map nodes, then do that (memory is cheap) and make them contiguous so they can be copied/extracted in a block into the A* working data.


IF additional map data needs to be extracted/examined BUT the number of nodes it is required for is largely culled down, THEN that data doesn't need to be extracted into the A* working data up front (less initial copying AND smaller A* working data ....)


Figure out how to maximize reuse of the same A* working data across multiple objects.




Something rarely included in A* speed calculations is the setup time of the working data you will use (interpreting/extracting it from the world map and resetting it between A* runs).




#5286686 A* A star vs huge levels

Posted by wodinoneeye on 13 April 2016 - 10:13 AM

OK, so this is a generalized simulation for many of the entities (exact bookkeeping isn't needed).


Long ago I looked into this same idea and split entities between:


"Dum' types (local flavoring, move not so much from their area, wake up/auto-generate when player is nearby -- non-persistant)


These probably won't be doing long-range pathfinding, and even then only enough medium-range movement to appear to be acting naturally.

You can still have them do ecosystem interactions - locally  (predators hunting prey, herbivores moving between grazing and water, etc..)

Big-map migrations and such are handled by adjusting patterns on high-level coefficients which then control the spawning content.

For complex simulation patterns, high-level control entities on the 'big map' could be persistent; they effect/control shifts in the coefficients of areas (cellular automata methods).


Pathfinding for these is generally local (still crossing chunks), but depending on how realistic and detailed you want them, many might run on schedule-driven repetitive medium-level paths (computed once, and then local A* for immediate steps, including dynamics if you have blocking by other entities).

Unfortunately, dense terrain is a favorite environment for many animals BECAUSE it's difficult for their predators OR it's where a lot of the sustenance is located. But dense terrain also greatly slows movement, so there is more real time to decide on any longer pathing.


The high-level (persistent) entities will navigate around major blocking obstacles (oceans/rivers/mountains/deserts) and will set direction for long-distance migration of herds of animals. High-level simulation can be done (a group of predators interacting with groups/herds that wander by, etc.). These high-level entities can still have migration patterns -- precalc'd general paths. When the player comes close, whatever interactive entities are needed are realized (auto-generated) and have general behavior motives set to carry out their appropriate behavior variants.




'Smart' (persistent entities that are very complex, player-interactive/motive-driven, which will/can follow players around the big map with intent, requiring more exact bookkeeping because players see them over and over, and the results of interactions would be important and persistent)


These, even though they can be 'generalized' when well away from where the player sees them, have to transition to full detail sufficiently far away to be able to 'act natural' by the time the player sees them (which can include forcing high detail around where THEY are centered, over a sufficient area to cover all the things THEY interact with). Their long-range pathing has to be more exact/realistic/proper so they won't be where they shouldn't be (and are going about their proper 'business') when the player comes near and they are visible.


These entities (probably other humans) have the far more complex AI (and interacting with them is a good, more interesting, part of the game).

You can STILL auto-generate them (e.g. if the game is a life journey where you cross the world and probably will never meet those local 'smart' residents again). You use Templates (hierarchical parameterized templates) to generate these local occupants (probably with a largely hand-built seed map), and likewise use high-level persistent entities (tribes?) to keep their overall simulation flowing/balanced across time. Those Templates, and the rules that direct them, are the major difficulty (the work is in getting them to be plausible) when auto-generation is used to create secondary characters ('main' characters will probably be mostly hand-crafted).


These can do the long-range pathfinding (but still will probably have daily/weekly/seasonal reused paths) -- a lot more data, but there are many fewer of them compared to the 'Dum' background entities. You will probably find that their pathfinding is the LEAST part of their AI processing (you might use things like Planner-driven AI, which uses pathfinding as a mere 'tool').




It all depends on how detailed the game is to be (and how much mundane stuff is cut out).


Funny thing is, in one design (for a very detailed game) I actually had a 'micro' simulation for crafting activities (you kill an animal and now you have to butcher it and convert it for transport; or 'gathering' of edible/useful plants in a small area; or even just gathering firewood). A game where, if you want to look, "there is something interesting under every rock" -- using the same methods to LOD a small area into very fine details .... That's usually overkill for a 'game', and at that point it was more 'simulation'.


Likewise it is the transitioning -- close high detail with a boundary shifting into the lower-detail 'generalized' simulation -- which is much of the programming headache.




Another thing -- once you do the auto-generation to fine detail (which can be a lot of processing if you have intricate Templates), you don't throw it away when the player walks on (he might suddenly decide to turn around and go back, or constantly move around the same large area). So I had a system to save the detailed chunks for an extended time (the 'Dums' were local to them) -- roll them out to disk, then 'roll them in' when the player comes back into range and 'patch' the local details to compensate for any intervening time (and have any persistent 'smart' entities added back to them). If the player is gone long enough, any changes he made can melt away and you can throw away that saved local chunk. But the auto-generation would have to be sufficiently deterministic in its detail for when the player later comes back again (particularly if your terrain isn't overly 'generic' and specific terrain matters).


My chunk data was actually encapsulated (a block of map memory with its own local dynamic object heap and internal offset pointers) so that a simple memory copy of the whole thing could save/restore it all with minimal processing. Chunks would constantly get loaded ahead of the player's visibility (partial low-detail LOD for the furthest away). Again, it depends on your game's detail relevancy whether you have to go that far with such a mechanism.
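A minimal sketch of that relocatable-chunk idea (layout, sizes, and names are illustrative, not the actual system): internal references are stored as byte offsets from the chunk base instead of raw pointers, so one memcpy (or disk write/read) of the whole block saves or restores everything with no pointer fixup.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define CHUNK_BYTES 4096

/* All internal references are byte offsets from the chunk base,
   never raw pointers, so the whole block is position-independent. */
typedef struct {
    uint32_t first_object;   /* offset into the chunk, not a pointer */
    uint32_t heap_used;
    unsigned char heap[CHUNK_BYTES - 2 * sizeof(uint32_t)];
} Chunk;

/* Resolve an internal offset to a real pointer, per use. */
static void *chunk_ptr(Chunk *c, uint32_t off) {
    return (unsigned char *)c + off;
}

/* 'Allocate' from the chunk's local heap, returning an offset. */
static uint32_t chunk_alloc(Chunk *c, uint32_t size) {
    uint32_t off = (uint32_t)offsetof(Chunk, heap) + c->heap_used;
    c->heap_used += size;
    return off;
}
```

Because nothing inside depends on the chunk's address, "roll out to disk" and "roll in" are just block I/O of `sizeof(Chunk)` bytes.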

#5285667 A* A star vs huge levels

Posted by wodinoneeye on 07 April 2016 - 05:37 PM

"the big concern is when i get about 300 active entities in visual range at once"



How big a map area is this 'visual range' (of the high-detail interactions)? It may be simpler in that case to 'window' the immediate area's map data at fine detail (if it's a reasonable size) and use plain A* (with the window enlarged beyond the visual radius enough to stay valid for a while at a typical player movement rate).


The usual problem is also entities transitioning between the realized (window around the player's view) area activity and the generalized (big-map) mode of operation. Then the problem with the 'window' is: how are entities' behaviors properly responding to other entities that are beyond the window's edge? Some entities are more important for interactions than others and might call for 'realization' of the area around themselves (making the 'window' blobby rather than one nice regular grid area).

#5285663 A* A star vs huge levels

Posted by wodinoneeye on 07 April 2016 - 05:14 PM


which means i'm back to my original chunking algo of:   A* across a chunk to the next chunk, and repeat, for each chunk along the line from original start to ultimate goal.



No, you originally wrote:


"find the closest open edge node/tile/square in the desired direction that is adjacent to an open node on the next chunk edge"


The "closest open edge node/tile/square" may NOT be a good choice if immediately past/beyond THAT targeted part of the grid-square's edge  is largely a 'wall' (and likewise the exit points on that further "node/tile/square"  may be poor as well (this all causing alot of inside-a-chunk processing which then is thrown out).    


Assuming a regular grid method -- when the true (optimal path) exit point is NOT near the 'bee line' path on that chunk, the adjacent chunk(s) may have the better path (the chunk-corners endcase).


SO your fine A* really should include processing the adjacent (side) chunks' subnodes (probably from the start, instead of as a backtracking strategy).







1111111 2222222
1111111 2222222
3333333 4444444
3333333 4444444

Look at the corner where chunks 1, 2, 3 and 4 meet: a simplistic chunk logic would always be trying to route only through 1 and 3, even when the better path clips that corner through 2 or 4.








Likewise, a simplistic super-chunk precomputed evaluation may not give a good estimation, depending on the final destination of any particular point-to-point path on the entire map.


Creating better estimations means more info prestored PER chunk, containing better 'going that-a-way, over there' connectivity info for the high-level A* to use (now for super 'regions of chunks' where the destination lies -- a precalc'd third-tier A*, with each chunk holding a data list for those super-regions).


If the data gets too big for that, an 8/16/32-slice pie-chart compass 'general direction' best-adjacent candidate set (per chunk) may be good enough for most cases.

#5284879 Ways to implement hybrid procedural gen/handcrafted worlds

Posted by wodinoneeye on 03 April 2016 - 11:07 AM

Worlds include the objects and props within them (and decorators applied to generic pieces that cross sections).


Props can have fairly restrictive rules for what items go with what to seem realistic (assemblies of props), and it's often easier to define more customized combinations (templates) than to try to build complex rules. There can still be auto-generation techniques used for fitting templatized assemblies into the terrain spaces (and near the key features they are typically placed by -- attractor logic) and for variations.


Spawn points/effect sources/lights are likewise 'props' and have their own rules (and also get used in combinations ... more templates).


Patterns of decorators (dirt levels applied with a guided randomness) can get rid of 'the generic look' quite a bit.


A macro entity for an area may define themes and situationally changeable modes, and supply parameters for the auto-generation within its scope (and 'splatting'/overlap mixing for transitions, etc...).


"Custom" handcrafting  can STILL make use of these auto-generation templates (just that they are fully parameter specified and with exact combinations defined instead of randoms) and some (preferably few) might be fully realized/actualized if further tweaking is needed to make them fully unique beyond what templating could do.



Identify what your 'one-offs' are and make those unique (but at some point they DO interface where the auto-generation takes over, so you will need some standard joining around the custom thingee's edges ....)



#5284875 parallel precomputations

Posted by wodinoneeye on 03 April 2016 - 10:31 AM

It would help if you defined the problem more clearly.


A network of nodes you have to compute some internode edges for ...


First: do you have multiple cores? Because multiple threads won't go any faster on a single core....


1056 nodes computing to their near/adjacent nodes???? (Otherwise 1056 to-every-other-node is fairly trivial to divide n ways.)


Geographic positions of nodes which define their adjacency??? Roughly equidistant spacing in 2D or 3D??


Yes -- split into 4 processing sets to do all adjacents within each set, then stitch the borders (strips wider than any adjacency distance) between the 4 separate sections (as laid out below, even THAT can be organized into 4 subsets).


   1111 B 2222
   1111 B 2222
   BBBB M BBBB
   3333 B 4444
   3333 B 4444

   (1-4 = the four independent quadrant sets, B = the border stitch strips, M = the small middle set where the strips cross)

The M (middle) may have to be a final separate pass to stitch in the small center set of nodes.


If you have more than 4 cores then it might be split like a finer pie cut.


Minimize data locks: the map is all read-only, so none are needed there. Writes within the 4 section results are independent, so none needed there either; the 4 B border-stitch edges can be independent (so again none); and the center is done on a single core, again none.
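A minimal sketch of the classification step behind that split (midline position, border half-width, and set numbering are illustrative): each node lands in one of the 4 quadrant work sets unless it sits within the adjacency radius of a dividing line, in which case it goes to the border stitch pass.

```c
#include <assert.h>

#define SET_B 4   /* border stitch set */

/* Assign a node at (x, y) to quadrant set 0..3, or to the border set
   if it lies within radius r of either dividing midline.  Nodes near
   where both midlines cross end up in the middle ("M") region, which
   here also classifies as SET_B and is stitched in a final pass. */
int work_set(double x, double y, double mid_x, double mid_y, double r) {
    double dx = x - mid_x, dy = y - mid_y;
    if (dx > -r && dx < r) return SET_B;    /* near vertical divider   */
    if (dy > -r && dy < r) return SET_B;    /* near horizontal divider */
    return (dx < 0 ? 0 : 1) + (dy < 0 ? 0 : 2);   /* quadrant 0..3 */
}
```

Each of sets 0..3 can then compute its internal edges on its own core with no locks, and the border set is swept afterwards.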





If the nodes are mapped out more like a 3D blob then the borders are basically dividing planes (it works the same, but identifying the sets is a little more complicated -- exact division processing might take more work than just letting one core's workload run a little longer).

#5284258 Is death too intense for kid players (<13 years old)

Posted by wodinoneeye on 30 March 2016 - 08:08 AM

I suppose you could have the 'losers' eject from their destroyed spacecraft, which for a (more) story-based game opens up alternate activities like rescues, injury recovery, and such things.


Death of main characters should be staged only at important story junctures (they generally are a limited resource if the players are to have any attachment to them).


Remember in Top Gun where the main character lost his nerve and simply disengaged from the battle (a dogfight) -- so there can be more than a few alternatives.

#5281476 A moddable MMOG

Posted by wodinoneeye on 16 March 2016 - 07:23 AM

"When players die, I want to set them back a ways, to make death a genuinely bad thing"


Then you want it to be harder to happen, with various kinds of persistent damage well before 'death' and the contingencies/compensations those force players into (and their ability to react and creatively handle their limitations). Of course that requires much more complicated game mechanics affecting the player (many more degrees), and the actions made by opponents (and interactions with the terrain/objects). The NPCs would have to have a symmetrical system, and thus much greater AI than the usual 'see-player/attack-player/die' simplicity.

#5281475 A moddable MMOG

Posted by wodinoneeye on 16 March 2016 - 07:14 AM

To be effective you need:


A common set of tools (at least for the formatting into game assets), basic editors (open source/freeware), and easy-to-use online help to get people productive with the least difficulty (tutorials, advice, etc...)


An online community of varying skills and abilities to share their creations (not everyone can do everything, and what one person starts another can finish and another can polish). The high-end (skilled) programmers do the really hard stuff like game-mechanics/behavioral (programming/scripting) mods. Many with lesser abilities can still make additions to the bulk of the available assets (i.e. textures, or selections of objects to combine into coordinated assemblies of objects).


A clearinghouse for assets to be freely used and modified (and improved)


Many simple assets are creatable/moddable by people with few skills (again, this needs easy-to-use tools)


A community and process to examine and VET your assets/assemblages of assets BEFORE they ever get close to being published


A vetting process to sufficiently TEST  the assets before they go out to the users


Preferably, systematic use of asset object templates to maximize reuse and minimize effort (this includes attributes which tie into the game mechanics). If possible, procedural generation via parameterized templates to auto-generate variations -- certainly finishing effects (i.e. dirt and grime applied that can significantly change appearances AND might apply local thematic variations to the base objects)

#5281474 What AI Problems do YOU want discussed at the AI Summit?

Posted by wodinoneeye on 16 March 2016 - 06:49 AM

I would say: how to get domain knowledge INTO a game AI system -- the building of the logic which the AI runs for a particular task (whatever aspects of the game the AI is run for).


This would be largely offline, though part of such logic might be recognizing the ways a player plays and then adjusting/applying appropriate strategies/tactics against them.


The bulk of logic needed for even simple AI systems is prohibitive to build, so methods of (partially) automating it would improve/increase the amount a game can possess.

#5279909 Manufacturing chain for paper

Posted by wodinoneeye on 06 March 2016 - 10:17 PM

If this is a modern-age production chart then recycling can be a big component -- paper/plastics/steel/asphalt/etc. can have a good bulk of their materials come from recycling.

#5279570 Weapon and damage vs enemies (RPG)?

Posted by wodinoneeye on 04 March 2016 - 05:25 PM

One element for attacks (in your chosen fighting game mechanics) is whether Area Attacks (and sometimes other types) are allowed to do collateral damage against your own side (thus being a limitation upon their use).

#5266488 Preventing players leaving the play area where an obvious geographic obstacle...

Posted by wodinoneeye on 15 December 2015 - 10:34 AM

North is the ice where nothing can survive (btw some deserts ARE on the edges of places that are fricken cold).



East and West are the magic lines that are harder to 'force'   



Really fetid, disease-filled swamps/poisonous snakes/bogs where people just disappear if they try to travel through them???

("Its not on the map because NOBODY goes there ....")

#5266481 Sailing game: should players consider the wind?

Posted by wodinoneeye on 15 December 2015 - 10:25 AM

In a sailing game you might be simulating issuing orders to a sailing master who WON'T destroy his ship (and just won't do things that might), and thus your orders get limited by that reality (the ship won't keel over, but the order won't be executed to do fully what you want).


Sailing was about thinking ahead, to be in the right position/heading either in relation to land or to other ships (or both), to be able to make use of the wind to carry out your goal. For distance travel it was matching the wind to eventually achieve the distance and position you want, often requiring a sequence of moves (i.e. tacking).


Really realistic sailing also includes compensating for uncertainty of the wind conditions and trying to stay in a flexible state when possible.




An old quote I remember about some of the best sailing ships ever built -- the Clippers, whose captains LIKED storms because it meant their ships could sail all the faster (good captains, good ships, and good sailors manning them).

#5265967 AI for NPC

Posted by wodinoneeye on 11 December 2015 - 06:51 PM

You could implement a planner that builds schedules for daily tasks for a set of NPCs in a simplified symbolic environment.


The AI functions decide the fitness of decisions -- like what order to do tasks in, or depletion priority (which thing of most importance will I run out of and need to get first). Time would be the cost there; different sources might have different costs. Have a time limit per daily 'run' as a ceiling limitation, and minimum requirements per day (like food) as a floor limitation. Some resources may have only one source and others more than one (with partial supply possible, requiring you to go to more than one). Some resources might (to complicate things) have only a probability of their supply being available -- introducing uncertainty into the operation.


The goal is to minimize the time and cost spent (on these routine tasks), leaving the remainder for profit (in that day).


Basic pathfinding on a map (to get distances between the points of activity (obtaining/using resources)) would be a 'tool'.


A Planner to decide between the different sets of options (which tasks, in what order, at which locations, for the day's schedule).
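A toy sketch of that daily scheduling idea, with the ceiling/floor limitations above (all names, numbers, and the greedy strategy are illustrative, not from the post): required tasks are placed first, then optional tasks are added by value-per-hour until the day's time budget runs out.

```c
#include <assert.h>

typedef struct {
    const char *name;
    double hours;       /* time cost, travel included (assumed > 0) */
    double value;       /* contribution to the day's 'profit'       */
    int required;       /* floor limitation: must be done today     */
} Task;

/* Fill `chosen` (1 = scheduled) and return total value scheduled
   within the `budget` hours (the ceiling limitation). */
double plan_day(Task *tasks, int n, double budget, int chosen[]) {
    double used = 0, value = 0;
    for (int i = 0; i < n; i++) chosen[i] = 0;
    /* Pass 1: required tasks (minimum requirements per day). */
    for (int i = 0; i < n; i++)
        if (tasks[i].required && used + tasks[i].hours <= budget) {
            chosen[i] = 1; used += tasks[i].hours; value += tasks[i].value;
        }
    /* Pass 2: repeatedly take the best remaining value-per-hour task
       that still fits in the day. */
    for (;;) {
        int best = -1; double best_rate = 0;
        for (int i = 0; i < n; i++) {
            if (chosen[i] || used + tasks[i].hours > budget) continue;
            double rate = tasks[i].value / tasks[i].hours;
            if (rate > best_rate) { best_rate = rate; best = i; }
        }
        if (best < 0) break;
        chosen[best] = 1;
        used += tasks[best].hours;
        value += tasks[best].value;
    }
    return value;
}
```

A real Planner would search orderings and locations too (travel time between tasks changes the costs); this greedy pass is just the shape of the fitness/limitation bookkeeping.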


Display of the results would be via some existing game library (someone mentioned Unreal 4) to symbolically represent objects and positioning.


Learning might come in where the results are statistically recorded day after day (you need a result-metric system for that) and compared, to see which rule tunings of the AI decisions were the most effective.


Swarm logic (a Monte Carlo-style method) could possibly adjust/supply the tuning candidates.


Possibly a changing environment that shifts somewhat after a time (like quarterly, with a daily schedule of tasks) to offer variation in the costs of the different operations (movement and ?), but with a long enough interval for the 'learning'.


The learning is the comparison of actual results from the picked solutions against similar 'runs', trying to extract the most important factors.


Demonstration examples would be done on different 'maps' to show the features as they get more complex (and also to simplify testing as you prove it works): first the pathfinding and planning via distances (time), then resources with different costs (now time and cost), then comparing/learning with an uncertainty factor on supply -- figuring out which sources cause trouble by recording their history to develop an expectation pattern, and then having the AI consider that versatility pays off more regularly than purely optimal sequences.


Then the adaptation feature, where the system rebuilds its knowledge against a changed/new map environment: a discovery phase where you re-evaluate (explore the new map), deciding when you have enough info to enter the second phase of normal operation for the best results.