
wodinoneeye

Member Since 02 Dec 2004

#5293334 What is the top factor for MMO engines limiting world size?

Posted by wodinoneeye on 25 May 2016 - 06:18 AM

I might suggest that the 90% Crap Idea™  is pretty much what we already get from the MMORPG companies.

 

Players are constantly starved for content: they wake up their accounts for a month or two, then stop playing and paying until the next content 'drop' six or more months later.  The big games can continue as they have, but only in those limited genres.

 

The description I've given here (above) lacks a lot of the details of what the full system I propose would have to be.

 

Again, the things produced by some small percentage of the playerbase (those who want to be creators) get selected for use, with stringent functional testing at minimum.  REUSE is a key element, short-circuiting much of the work of further additions.

 

The collaboration model (only mentioned above) allows people to build on what other people have already done, improving it incrementally.  Hardly anyone is good at everything required, so it would take multiple people to produce each of the complete 'Assets' finally used in the game.  One person contributes ideas or planning, another the basic shapes/structures, another refines those (and possibly others do later), another is good at textures and applying them, another at realistic weathering/wear, another can adapt behavior attributes (tweaking or just installing existing templates) and animations/sound effects, and someone else can do any needed specialized behaviors.  A whole lot of people can combine objects into scene assemblages, which become someone else's building blocks for mission scenarios (which in turn add the creative work for dialog, story, plot, pacing, and theater-style scripted interplay -- the real aim of this production).

 

Obviously the community vetting and collaboration are the key to this system, but dedication CAN be found for such.  Advice, commenting for revision, testing, and inspection are all to be done through a well-defined process.

 

Those who have skills and know the tools have much higher efficiency (so it's not so tedious for them to do a lot in their specialty).  BTW, SOME people are good at creating tutorials to TEACH others how to become proficient...

 

The publishing model is to share everything; asset projects are forked and resubmitted (and ANYONE can come along and mod one if they want to try).

A WHOLE lot of the low-level fiddly bits would be done by the company (game mechanics, object attribute systems for standard interactions, etc.).

 

I didn't mention that the detail level of objects is more along the lines of a 'deformable' world definition and play use (much more generically interactive and reactive).  Thus there is more you can use things for IN-GAME (and A LOT of creation ALSO can potentially be done by any player in-game).  There are LOTS of small things to create for a rich world; not everyone has to create A Mech-Tiger-Tank.  Many aren't that hard, with so much basic stuff already pre-done, inherent tweakability, (much more) idiot-proof tools, and integration of processes.

 

The GOOD tools (fundamental to this system) comprehensively cover producing all of these things -- that's why they will be as big a project to build as a AAA game by itself (and some players can be better at tool making/improving than most people in the companies, and THAT is part of this whole thing TOO).

 

Creators get credit for the parts they do and add to, and the things they add are structured for reusability and modification.

 

The company would try to set standards and the community would have to maintain those strong standards (and the company would have the Final Word to enforce adherence).   Obviously there are legal issues like copyright infringement which have to be enforced strictly, and the vetting system would be defined to prevent publishing anything with such issues.

 

I never said it would be easy (and DID say this is next-next-generation stuff), but the way costs are going up and playtimes going down for these games, the Wodinoneeye Law says that within some number of years, with games progressing as they are, each one will cost as much as the US yearly economy and its playtime will last a fraction of a second.  Well before that, most players will stop buying them.

 

 

Consider IF players could create using already-defined 'objects' and use them to build higher-order things for the game.  Guns already work, chairs already work, NPCs already have improved AI.  The TEMPLATES are designed for modding with the least work needed.  Now large numbers of player creators DON'T have to fumble around trying to build everything they envision from scratch (and no longer fail because they couldn't do EVERYTHING so complex and tedious).  Now you (many more players) can build the more interesting aspects of the actual game instead of getting stuck reinventing all the building blocks.

 

-

 

I suppose I could say that Open Source could never work because of this Sturgeon's Law, but what is the reality there?

 

Yep, all a miserable failure ... right?  Nobody in their right mind will do quality work for free ... right?

 

      (now do that in a more organized fashion....)

 

-

 

This would be a largely new paradigm for game production, employed in a more complex/thorough way.  It really has to be done with consistency or it won't work (and it's a daunting project that only a visionary with sufficient cash could attempt, and it will probably take some such pioneer to eventually do it).

 

-

 

"Second Life had almost exactly the vision that you lay out."

 

Vision is one thing, carrying it out is another.  The system I speak of is far larger and would need to be much better designed for expandability ('Templates' as the fundamental design for EVERYTHING involved: parameterized, hierarchical).

 

Second Life had (has) the fundamental element of people $ELLING their in-game productions/creations, which NIXED most of the collaboration.

 

 

-

 

Result: a constant flow of new APPROVED content and improvement of the assets already deployed.  Heavy use of procedurally generated game terrain/scenarios is possible (again via that comprehensive TEMPLATE system the whole thing is based on).  Creation on-the-fly (a lot of it) instead of 'static' level worlds.

 

An interesting aspect of such a system is that micro-genre games can be built upon generic items already produced (and working), tweaking instead of completely rebuilding, so you get to the 'good part' of creating the game that much sooner.




#5293207 What is the top factor for MMO engines limiting world size?

Posted by wodinoneeye on 24 May 2016 - 08:12 AM

The content limitation (as in DETAILED content and INTERESTING content) needed to fill any bigger world.

 

I say that, as big as they currently are, they are ALREADY largely pretty deserts -- mostly devoid of uniqueness, interesting detail, and interactions.

 

 

Some day (maybe in our lifetime) we might have games where players produce a lot of the MMORPG's assets.

 

(I've talked about this before -- tap into the players' abilities/imagination/creativity to build the game worlds.)

 

 

Advantages:

 

A major cost of an MMORPG (the assets) is cut out of the company's expenses.

 

1000X as much imagination and labor is available from the players as from the game company (note: all player production would be done for free).

With a broad spectrum of abilities (from simple assets and assemblies of assets, all the way to behavior AI, and even game mechanics improvements/additions), players would be able to create up to their abilities.

 

What one player creates, 1000 players will play with in the game

 

Assets can be incrementally improved  by expertise in different areas of production (hierarchically template everything to maximize reuse and minimize reinventing the wheel...)

 

Assets can be shared across genres (one system + many games) to maximize what's available from the players' efforts.

 

New content being added constantly  (and for some 'players' the creation will be THEIR game)

 

Leverage the Open Source tools which already exist (there are more than a few).

 

 

Problems:

 

Need A LOT of easy-to-use tools, which rival these games in their cost (though they can be reused across many games) -- idiot-proofing for general player use is a monumental task (as is integrating all the tools into an online production system).

 

Need a really thorough vetting system BEFORE anything is published to the running game worlds (and that, too, would largely be the work of players).

 

The game would need a lot of definitions (to be kept to) of the genre/canon/quality levels of acceptance.

 

A major community effort is needed (it has to be managed, and largely NOT by the company) to facilitate cooperation and collaboration (and especially to NOT waste anyone's time, where possible): comprehensive planning/testing/review/advice/collaboration/publishing processes.

 

To get started, certain popular genres will have to be used so that their interested player groups reach a critical mass (after that, reuse can make many smaller genres workable, building upon the basics).

 

Broad, well-done generic design (not just the bits and mechanisms used for a particular game).  The company would have to build sufficient basic assets to get the games going (possibly reusing/converting assets they already possess from previous games).

 

All kinds of Legal Crap.

 

 

 

Why it won't happen soon:

 

The cost of creating the whole system... the TOOLS (even with one of the game engine companies being the organizer of it).

 

The game companies would lose the profits they make on content (fire all the artists...).

 

Risk-averse companies know the model they use NOW works and want nothing to do with an unproven system (they will wait for SOMEONE ELSE to prove it works).

 

The 'sharing' parts (like asset standards)  might be blocked by company rivalries

 

 

This is next-next-generation type stuff (at the extent I would have it be), but its development utility could also be used for media production and advertisements (and even to facilitate and lower the cost of creating solo games with in-house talent).  THINK of it as something of the magnitude of what Computer Publishing was.

 

And no, this isn't Second Life Plus Plus... that thing is a shadow of a shadow of what I envision this possibility being.




#5292758 how can neural network can be used in videogames

Posted by wodinoneeye on 21 May 2016 - 08:17 AM

Unfortunately, game situations are orders of magnitude more complex than the single problem of telling one handily pictured husky from another (which, if ever needed as a 'tool', takes up a good-sized NN by itself; then we would need the 10,000 other 'tools' (and their training sets) for all the other classification/differentiation/de-obfuscation tasks, and then the processing resources to run them ALL in a timely manner).

 

Maybe if you were using a NN to spot 20-year-old game pixel patterns for game objects in a clutter of on-screen scenery, this would be relevant.  Unfortunately that IS still just a basic sensor-filtering task and does little for the rest of the problem of playing the game.

 

-

 

"They can translate sentences between any two languages, with very little additional machinery."

 

I'd like to see the project that claims THAT.  Particularly with your use of the word 'any', when there are so many world languages to map between, and more than a few that don't have exact translations of certain words/idioms in other languages.  (English: "The spirit is willing but the flesh is weak" --> Russian: "The wine is good but the meat is rotten"...)

 

Text-to-text NN input?  Or are you again claiming some sub-tool NN within a much more complex program and data set (dictionaries/grammar-rule translators), where the NN component actually turns out to be a trivial part of the whole thing?

 

-

 

Temporal cause-and-effect pattern spotting has major difficulties with noise from irrelevant situational factors and a further combinatoric explosion of end cases, some now coming from irregular event timings.  Again, in more complex simulation environments this forces greater human intervention in the training (hand-training the logic, which otherwise could simply be built as conventional logic), and that is the most significant chokepoint of complex NN solutions.

 

-

 

For Go I could see using convolutional neural networks to convert the simple Go grid into higher- and higher-level features and trying to spot the needed decision patterns.  How well can the future assessment be evaluated -- training the NN effectively for that generalization without its having terrible gaps?  But again, that is an example of a game whose 'situation' is utterly flattened/narrowed in detail complexity compared to just about all other 'games'.




#5292547 can c++ be used as a scripting language

Posted by wodinoneeye on 19 May 2016 - 06:23 PM

I've done this, employing C's macro mechanism to simplify the 'script' that has to be written (used for behavioral control of simple intelligent objects).

 

It is also useful for structured scripts (like the start/end/else state constructs of finite state machines), not just individual function calls (several of the AI Game Programming Wisdom books had articles about macros doing that, including hierarchical finite state machines).

 

One advantage is that a routine/repetitive pattern of code (which can be rather bulky) can be reduced to a much simpler 'script form', with the assumption that the script features are used in a systematic way (eliminating nesting/spaghetti-code hell).  The macro script also restricts what variables and calls can be accessed through the 'script'.

 

Another is that specialized code can be created/customized just by adding another 'macro' to your 'language' (and, when needed, you can still insert actual NATIVE code in the (hopefully few) trouble spots to do exactly what you want/need).

 

The C preprocessor then converts your simpler macro script into native code, now subject to the compiler's optimizing abilities, and it can run directly without interpreter overhead (including eliminating subroutine calls).
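As a rough illustration (my own minimal sketch, not the macros from those books; BEGIN_FSM, STATE, ON_EVENT and the rest are invented names), a handful of macros can make a state machine read like a 'script' while expanding to a plain switch statement:

// Minimal sketch of a macro-based "script" for a finite state machine.
// The macro names are hypothetical, invented for illustration; they expand
// to an ordinary switch statement the compiler can optimize directly.
#include <cstdio>

enum State { IDLE, CHASE, FLEE };
enum Event { SEE_ENEMY, LOW_HEALTH, ENEMY_GONE, NONE };

#define BEGIN_FSM(state)   switch (state) {
#define STATE(s)           case s:
#define ON_EVENT(e)        if (event == (e))
#define GOTO_STATE(s)      { state = (s); break; }
#define END_STATE          break;
#define END_FSM            }

// One object's behavior update, written in the "script" style.
void update(State& state, Event event) {
    BEGIN_FSM(state)
        STATE(IDLE)
            ON_EVENT(SEE_ENEMY)  GOTO_STATE(CHASE)
        END_STATE
        STATE(CHASE)
            ON_EVENT(LOW_HEALTH) GOTO_STATE(FLEE)
            ON_EVENT(ENEMY_GONE) GOTO_STATE(IDLE)
        END_STATE
        STATE(FLEE)
            ON_EVENT(ENEMY_GONE) GOTO_STATE(IDLE)
        END_STATE
    END_FSM
}

int main() {
    State s = IDLE;
    update(s, SEE_ENEMY);            // IDLE -> CHASE
    std::printf("state = %d\n", s);  // prints 1 (CHASE)
}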

 

 

Disadvantage: there is some extra difficulty debugging, since the code the 'script' produces has been transformed by macro expansion.

 

-

 

Some people may say 'why bother optimizing', but when you are running thousands of active/reactive objects of this level of complexity EVERY turn, the optimization can make a great difference in the size of the player's game environment.

 

---

 

 

One thing that added a little difficulty: the 'nice' groupings of script code created for each different object type were (in my usage) run in different sections of the program, even though the 'script' has their chunks defined right next to each other (instead of breaking them up into separate files, with the bother/confusion/disorganization that entails and that you seek to eliminate).

 

So it's good to learn how to use #define, #ifdef, etc. to 'modalize' the script text, so that by using multiple #includes of the same file (each employing a different #define 'mode' for a different place in the program) the right chunk lands in the right section.
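Here is a minimal self-contained sketch of that idea.  The trick as described uses multiple #includes of one script file with a different #define 'mode' each time; to keep this runnable as a single file it is condensed into an X-macro list, with OBJECT_SCRIPT and SCRIPT_ENTRY being made-up names:

// Sketch: the same "script" list expands into different program sections
// depending on how SCRIPT_ENTRY is defined at each expansion site.
#include <cstdio>

#define OBJECT_SCRIPT \
    SCRIPT_ENTRY(Wolf,  "wolf scans for prey",   "wolf chases prey")   \
    SCRIPT_ENTRY(Sheep, "sheep scans for grass", "sheep walks to grass")

void run_detect_phase() {
#define SCRIPT_ENTRY(name, detect_text, act_text) std::puts(detect_text);
    OBJECT_SCRIPT            // only the detection chunks expand here
#undef SCRIPT_ENTRY
}

void run_act_phase() {
#define SCRIPT_ENTRY(name, detect_text, act_text) std::puts(act_text);
    OBJECT_SCRIPT            // only the action chunks expand here
#undef SCRIPT_ENTRY
}

int main() {
    run_detect_phase();      // prints both objects' detection chunks
    run_act_phase();         // prints both objects' action chunks
}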

 

 

 

Example (LOCKSTEP behavior processing for active/reactive objects on a grid map, using finite state machines):

 

Situation detection phase: all objects scan and detect a number of stimuli to potentially react to (from their current local situation) and filter/prioritize them according to the object's mode (with potential interrupts for 'tactics' already in progress).

 

Solution classification phase: all objects digest their 'stimulus' set and decide ONE best action/tactic to initiate (avoiding conflicting actions with other objects already busy interacting, which could have changed since the previous phase).

 

Action processing phase: carry out all decided actions (including animations), generate results (resolve conflicts), and adjust the game situation.

 

-

 

The above runs a simulation in a lockstep manner, so the separate phase chunks of code for each object type (even though grouped together in the script file) get placed in the corresponding 'phase' section of the program, whose basic organization was 'the Big Switch' (nested).  (My program used three phases, but the chunk split-up still happens if only two phases are required by a lockstep simulation.)




#5292542 Low level Resources regarding Convolutional Neural Networks

Posted by wodinoneeye on 19 May 2016 - 05:19 PM

You can get the 'theory'  by simply searching for it online.

 

Basically it is a way to preprocess certain types of data (like largish pixel images) by repeatedly running 'small' feature-filter NNs in parallel locally (with overlaps) across the whole image.

 

Each local area of the regular grid (image) is processed to extract/integrate generic patterns/trends (like line/boundary detection or spotting a solid blob) from the basic data.  Further layer processing then (in parallel) integrates those first-order results, detecting larger patterns/trends (like spotting a 'corner').  Later layers then look for the super-patterns which classify the picture.

 

The advantage is that the lower 'detail' filter NNs are fairly small (something like 5x5 local groupings) and can be well formed for their task.  They can be run in a massively parallel manner (you apply that layer's same filter in an array-scanning fashion) and integrate/collapse each next layer's input data until the final classification stage (several layers itself), which detects combinations of the macro patterns.

 

A 'divide and conquer' solution, eliminating/minimizing A LOT of the NxN input weights (in the lower layers) that such large input arrays would require if done monolithically.
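A rough sketch of that 'apply the same small filter in a scanning fashion' step, in plain C++ loops (no NN library; the image size and filter values are invented for illustration):

// Sketch: slide one 3x3 feature filter across a grayscale image, producing a
// feature map. A real convolutional layer runs many such filters per layer,
// plus a nonlinearity and pooling; this shows only the scanning step.
#include <vector>
#include <cstdio>

int main() {
    const int W = 8, H = 8;                    // toy image size (assumption)
    std::vector<float> image(W * H, 0.0f);
    for (int y = 0; y < H; ++y) image[y * W + 4] = 1.0f;   // a vertical line

    // 3x3 vertical-edge filter (hand-picked values for illustration)
    const float k[3][3] = { {-1, 0, 1}, {-1, 0, 1}, {-1, 0, 1} };

    std::vector<float> feature((W - 2) * (H - 2), 0.0f);
    for (int y = 1; y < H - 1; ++y)
        for (int x = 1; x < W - 1; ++x) {
            float sum = 0.0f;
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx)
                    sum += k[dy + 1][dx + 1] * image[(y + dy) * W + (x + dx)];
            feature[(y - 1) * (W - 2) + (x - 1)] = sum;     // edge response
        }

    std::printf("response next to the line: %.1f\n", feature[2]);  // prints 3.0
}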

 

 

40+ years ago, anatomical research showed that the retina of the eye does operations like this (the low-level feature detection).




#5292539 how can neural network can be used in videogames

Posted by wodinoneeye on 19 May 2016 - 04:31 PM

"You sound like someone that has never programmed either a checkers engine or a chess engine."

 

You sound like someone who hasn't programmed anything more complex than a "checkers engine or a chess engine".

 

 

"Yes, NNs are only a tool. I don't see who you are arguing with here"

 

Unfortunately they are the 'hammer' for which some people see all problems as 'nails'.  NNs for basic classification of SIMPLE situational factors are fine, but once the situations are no longer simple (like spotting temporal cause and effect) they just don't work very well.  And even then (as stated above) there is the REST of the logic to be done to actually make a game problem-solver work.  Likewise, there is usually A LOT of complex data massaging that needs to be done FIRST to be able to feed anything into an NN.

 

 

"parts that can be implemented using neural networks"  

 

But are they 'parts' which simpler hand-crafted logic can do more simply and efficiently?  (Are they done as NNs just for the sake of doing them as NNs?)

 

 

 

"not old enough to remember"    then read about it.  google   history of AI

 

 

Go's mechanism is very nice and simple; actually its point is being boiled down into simplicity of mechanism.  The game-situation representation, play process, and actions are likewise quite limited.  So of all games it may be nearly the best to use NNs on.  Too bad so many other games don't have its major advantages that allow NNs to be employed so easily.




#5292445 how can neural network can be used in videogames

Posted by wodinoneeye on 19 May 2016 - 03:30 AM

I'm not that scared by your FUD about how complex things can get. :)

 

EDIT - a simple thing to contemplate what Im talking about is --- try to program Chess via a NN based solution.


I already mentioned I have used a NN as evaluation function in checkers. Using one as evaluation function in chess is not [much] harder: http://arxiv.org/abs/1509.01549

Other uses of NNs for chess are possible: http://erikbern.com/2014/11/29/deep-learning-for-chess/

 

 

 

Checkers as an equivalent to chess?  OK.........

 

'Chess' evaluation function (as in 'tool'?)... but is it the fundamental core of the decision logic?  Which is what I'm talking about being a problematic thing for NN usage.

 

'Possible' -- where AI is concerned, I recall that little situation in the '50s where they thought AI was just around the corner and all kinds of computer AI goodness was just about solved.  Here we are 60 years later.  'Complexity' has proven to be quite perplexing.




#5292263 how can neural network can be used in videogames

Posted by wodinoneeye on 18 May 2016 - 06:43 AM

By "complex" I am NOT talking about some NPC bot navigating around a static map, but one that has to react to friendly/enemy/neutral dynamic objects (possibly including one or more players) -- spatial relations between objects of different classifications.  Now add the typical number of action options and whatever metabolic goals the NN is supposed to 'think' about.  Suddenly there is a plethora of contradicting and irregular situational factors to be 'comprehended' (again, interpreting a situation which isn't just some terrain grid) that need to be processed to generate a 'good enough' current solution for what that 'smart' object is going to try to do.  The training set expands exponentially with complexity, and a divide-and-conquer method can't work, except as tool-level analysis/interpretation which STILL has to be integrated in a complex fashion.  Multiple metrics of 'good/bad' and situational adjustments for priorities (fun: modal factors to add in) mean a big-to-huge NN, or breaking up into specialized NNs (which STILL have to be integrated to decide which applies/overrides), etc.

 

Again, 'tool', because any analysis leading to temporally effective actions takes programming methods like finite state machines to carry out sequential solutions once some decision is made (and then possibly re-evaluated and redirected; even WHEN to re-evaluate and cancel the current activity is a complex logic problem).  It's not just 'do action X or Y or Z and rinse'; it is 'start strategy/tactic A or B or C and carry through/adjust'...

 

We already have plenty of relatively mindless 'ant' objects done in games without needing NNs.  Move the AI up a few notches and suddenly the problem space expands hugely, and the (richer) situational complexity likewise (training-set hell).  That's the environment where NNs fall down REAL fast -- any self-learning mechanism is very difficult, and with an assisted-learning NN (being told what's good and bad in many very specific end cases) it suddenly becomes the human's limitation to get through the bulk of the work required.

 

Carefully targeted analysis is where I might consider using NNs: limited domains, and many small ones if that many different analyses are required.  The primary logic for anything tactically complex in a game is still most efficiently created by hand; you wind up doing most of the work either way, and trying to force an NN to do what you have already worked out the discrete logic for is pointless.

 

EDIT: a simple way to contemplate what I'm talking about: try to program chess via an NN-based solution.




#5292162 how can neural network can be used in videogames

Posted by wodinoneeye on 17 May 2016 - 04:04 PM

The problem with neural nets is that the input (the game situation) has to be fed to them as a bunch of numbers.

That means there is usually a heck of a lot of interpretation pre-processing required to generate this data first.

 

Another problem is that the process of 'training' the neural nets is usually only understood from the outside; the logic is NOT directly accessible to the programmer.  A lot of 'test' game-situation data needs to be built up and maintained, and connected with the CORRECT action (probably chosen by a human) that the neural net is to be forced into producing.  Again, a lot of indirect work.

 

Neural nets also generally don't handle complex situations very well; too many factors interfere with the internal learning patterns/processes, usually requiring multiple simpler neural nets to be built to handle different strategies/tactics/solutions.

 

Usually with games (and their limited AI processing budgets), after you have already done the interpretive preprocessing, it usually takes just simple hand-written logic to use that data -- and that logic CAN be directly tweaked to get the desired results.

 

It might be that practical neural nets are just a 'tool' the main logic can use for certain analyses (and not for many others).

 

 




#5290586 Most important principles of game design?

Posted by wodinoneeye on 07 May 2016 - 04:23 PM

Given that I am going to spend just one unit on the basics of game design in the high school curriculum I am designing, what would you all say would be the most important game design fundamentals and/or principles to teach that would be most useful or necessary for students to use when they begin making video games in Game Maker?

 

 

Games should be 'fun' (understanding also that a game's purpose MIGHT be to serve as a method of interactive demonstration for education).

 

Game interfaces should NOT be frustrating to interact with (in too many games I've had to fight a poor interface more than the opponents).

 

Games should offer sufficient surprises to the player (not be fully deterministic), giving them a reason/incentive to replay.  Creativity in playing to solve unexpected situations...




#5289468 towards a faster A*

Posted by wodinoneeye on 30 April 2016 - 03:29 PM

http://www.codeproject.com/Articles/118015/Fast-A-Star-D-Implementation-for-C

This guy here wrote an extremely fast heap priority queue.

Let's try it.

It runs very fast in my program

 

Looking at the large grid map (in that article) with all those zigzags: who besides a computer scientist would think something real has that detailed a set of terrain information available to employ such detailed pathing?  (A Mars rover, where it's GAME OVER if the thing gets stuck or rolls over -- and even then the info is quite spotty to be so precise.)

 

Anyway, animals have paths they repeat, and repeated exploration to find their 'good enough' paths to the things they routinely do.

Really, coarse pathfinding around blocking rivers and mountain ridges/swamps, done at a fairly coarse scale (for long-range movements), is the closest they do for such distances, with close-range exactness for immediate (next 5 minutes) movement being where any precision exists.  At the large scale, in their tiny brains, it is more an irregular network of interconnected node adjacencies of regions having certain properties (and, more so, the resources in them: the goal info).

 

So really it's an organic mapping with very rough costs (more 'can I get through or not' and 'do I even want to attempt it'), and then relying on stepwise precision for fairly short-range movements.  Solving a maze when you can't even really see the maze doesn't need to be attempted.

 

For the 'out of sight' beasty-AI nature of the game being described, it can be done very roughly.

 

---

 

SO, with fairly short paths still needing efficient pathing (particularly at close range, and likely in a dynamic environment requiring frequent repathings over short intervals): the HeapQ I used for the open list used pointer math, because of the tree relations between the nodes every 'parent' or left/right child was at a fixed mathematical offset within the data structure (so when you add new candidates or pull the top node off, the sift operations are very fast processing).
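Something like the following (a minimal sketch, not the actual HeapQ code) is what that fixed-offset math looks like for an array-backed open list, where parent/child positions are pure index arithmetic:

// Sketch of an array-backed binary min-heap for an A* open list.
// Parent/child relations are pure index math: parent = (i-1)/2,
// children = 2i+1 and 2i+2, so no per-node allocation or pointer chasing.
#include <vector>
#include <utility>
#include <cstdio>

struct OpenNode { float f; int cell; };          // f-cost and map cell index

struct OpenHeap {
    std::vector<OpenNode> a;

    void push(OpenNode n) {                      // add at the end, sift up
        a.push_back(n);
        for (size_t i = a.size() - 1; i > 0; ) {
            size_t parent = (i - 1) / 2;
            if (a[parent].f <= a[i].f) break;
            std::swap(a[parent], a[i]);
            i = parent;
        }
    }

    OpenNode pop() {                             // caller ensures non-empty
        OpenNode best = a.front();
        a.front() = a.back();
        a.pop_back();
        size_t i = 0;
        while (true) {                           // sift the moved node down
            size_t l = 2 * i + 1, r = 2 * i + 2, m = i;
            if (l < a.size() && a[l].f < a[m].f) m = l;
            if (r < a.size() && a[r].f < a[m].f) m = r;
            if (m == i) break;
            std::swap(a[i], a[m]);
            i = m;
        }
        return best;
    }
};

int main() {
    OpenHeap open;
    open.push({4.5f, 10});
    open.push({1.2f, 33});
    open.push({3.0f, 7});
    std::printf("best cell = %d\n", open.pop().cell);   // prints 33
}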

 

 

---

 

I also remember at that time thinking that, for multitudes of pathing objects using similar paths between resources, developing common routes by some Monte Carlo sampling between resource areas would allow reuse of a much simpler individual system of paths.  These are already determined within the local area (via A*) to lead to a particular resource (like a waterhole), so that once an object reached a node on that pre-canned route it was known to be part of an open path to that resource and could be followed with MUCH simpler logic, drastically cutting down on the pathfinding processing.




#5287632 towards a faster A*

Posted by wodinoneeye on 19 April 2016 - 11:31 AM

You are lucky if you can fix your window-overlay grid dimensions as constants.

 

Then you can do pointer arithmetic on a fixed-size map array (i.e. ptr+1, ptr-1, ptr+span, ptr+span+1, ptr-span+1, etc.) -- for example, to access the usual 8 neighbors when finding open-list candidates.  You can do this with single-value offsets (an array of the 8 offsets, walked with a for loop) if you make your map a 1-dimensional array [90000].
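A small sketch of that offset table, assuming a 300-wide window stored as one flat array (it also shows the closed-list flags living on the map itself, as suggested below):

// Sketch: 8-neighbor access on a flat 1-D map using a precomputed offset table.
// SPAN is the row width of the A* window (300x300 = 90000 cells assumed here).
#include <cstdio>

const int SPAN = 300;
const int NEIGHBOR_OFFSET[8] = {
    -SPAN - 1, -SPAN, -SPAN + 1,   // row above
    -1,               +1,          // same row
    +SPAN - 1, +SPAN, +SPAN + 1    // row below
};

unsigned char closed[SPAN * SPAN];   // closed-list flags stored on the map itself

int main() {
    int current = 150 * SPAN + 150;              // some interior cell
    for (int i = 0; i < 8; ++i) {
        int neighbor = current + NEIGHBOR_OFFSET[i];
        if (closed[neighbor]) continue;          // skip already-closed cells
        // ... evaluate neighbor as an open-list candidate here ...
        std::printf("candidate cell %d\n", neighbor);
    }
}

Offsets like these are only safe away from the map edges, which is exactly what the border-padding trick further down takes care of.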

 

Fixed-size HeapQ for the open list (also doing pointer math to walk the nodes of the tree -- a systematic power-of-2 progression for the binary tree).  Max depth shouldn't need to be sized for the worst case, as you can let entries fall off the bottom, and IF they ever become candidates again they can be re-added to the open list.

 

Use flags ON the window A* grid map (overlay) nodes for your closed list.

 

If you are still doing linked lists, then absolutely use node pools instead of costly allocate/deallocate overhead.

 

Crush your map/open-list data as much as possible to maximize cache hits: a 300-coordinate size calls for 16-bit ints; use bitmaps for flags; avoid floats for weight values if you can, and try to use bytes/int16s instead.  All needed A* data is extracted from the real-world map this 300x300 window is overlaid on (yanking from chunks at that time).

 

Eliminate the if-then X/Y edge-of-map tests done when getting open-list candidates by oversizing the map by +2 and pre-marking the boundary lines as CLOSED (flagged); the map edges will now be excluded as part of the normal closed-node test.
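A sketch of that padding trick (the sizes are assumptions, chosen to pair with the flat closed[] layout from the earlier sketch):

// Sketch: pad the working grid by one cell on each side and mark the border
// CLOSED, so neighbor loops never need explicit x/y bounds checks.
#include <cstring>

const int INNER = 300;            // usable window size (assumption)
const int SPAN  = INNER + 2;      // padded width/height
unsigned char closed[SPAN * SPAN];

void reset_window() {
    std::memset(closed, 0, sizeof closed);
    for (int i = 0; i < SPAN; ++i) {
        closed[i] = 1;                          // top border row
        closed[(SPAN - 1) * SPAN + i] = 1;      // bottom border row
        closed[i * SPAN] = 1;                   // left border column
        closed[i * SPAN + (SPAN - 1)] = 1;      // right border column
    }
    // Interior cells now have x,y in [1, INNER]; the usual 8-offset neighbor
    // walk can't step outside without landing on a CLOSED border cell.
}

int main() { reset_window(); }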

 

Finish the current pathfinding task; don't timeslice multiple A* pathfindings.

 

If precalculated edge-cost values can be stored in the real-world map nodes, then do that (memory is cheap) and make them contiguous so they can be copied/extracted in a block into the A* working data.

 

IF additional map data needs to be extracted/examined BUT the number of nodes it is required for is largely culled down, THEN that data doesn't need to be extracted and put into the A* working data up front (less initial copying AND smaller A* working data...).

 

Figure out how to maximize reuse of the same A* working data for multiple objects.

 

---

 

Something rarely included in A* speed calculations is the setup time of the working data you will use (interpreting/extracting from the world map and resetting between A* runs).

 

 

 




#5286686 A* A star vs huge levels

Posted by wodinoneeye on 13 April 2016 - 10:13 AM

OK, so this is a generalized simulation for many of the entities (exact bookkeeping isn't needed).

 

Long ago I looked into this same idea and split entities between :

 
'Dum' types (local flavoring; they don't move much from their area; they wake up/auto-generate when the player is nearby -- non-persistent)

These probably won't be doing long-range pathfinding, and even then only enough medium-range movement to appear to be acting naturally.

You can still have them do ecosystem interactions locally (predators hunting prey, herbivores moving between grazing and water, etc.).

Big-map migrations and such are handled by adjusting patterns of high-level coefficients, which then control the spawned content.

For complex simulation patterns, high-level control entities on the 'big map' could be persistent; they effect/control shifts in the coefficients of areas (cellular automata methods).

 

Pathfinding for these is generally local (it still crosses chunks), but depending on how realistic and detailed you want them, many might run on schedule-driven, repetitive, medium-level paths (compute the set once, then do local A* for the immediate steps, including dynamic blocking by other entities if you have it).

Unfortunately, dense terrain is a favorite environment for many animals BECAUSE it's difficult for their predators OR it's where a lot of the sustenance is located.  But then dense terrain also greatly slows movement, so there is more real time to decide on any longer pathing.

 

The high-level (persistent) entities will navigate around major blocking obstacles (oceans/rivers/mountains/deserts) and will set the direction for long-distance migration of herds of animals.  High-level simulation can be done (a group of predators interacting with groups/herds that wander by, etc.).  These high-level entities can still have migration patterns: precalculated general paths.  When the player comes close, whatever interactive entities are needed get realized (auto-generated) and have general behavior motives set to carry out their appropriate behavior variants.

 

--

 

'Smart' types (persistent entities that are very complex, player-interactive and motive-driven, which will/can follow players around the big map with intent, requiring more exact bookkeeping because players see them over and over and the results of interactions would be important and persistent)

 

These, even though they can be 'generalized' when well away from where the player sees them, have to transition to full detail sufficiently far away to be able to 'act natural' by the time the player sees them (which can include forcing high detail around where THEY are centered, over a sufficient area to cover all the things THEY interact with).  Their long-range pathing has to be more exact/realistic/proper so they won't be where they shouldn't be (and are going about their proper 'business') when the player comes near and they are visible.

 

These entities (probably other humans) have the far more complex AI (and interacting with them is a good, more interesting, part of the game).

You can STILL auto-generate them (e.g. if the game is a life journey where you cross the world and will probably never meet those local 'smart' residents again).  You use templates (hierarchical, parameterized templates) to generate these local occupants (probably with a largely hand-built seed map) and likewise use the high-level persistent entities (tribes?) to keep their overall simulation flowing/balanced across time.  Those templates and the rules that direct them are the major difficulty (the work to make them plausible) for the auto-generation when used to generate secondary characters ('main' characters will probably be mostly hand-crafted).

 

These can do the long-range pathfinding (but will still probably have daily/weekly/seasonal reused paths): a lot more data, but there are many fewer of them compared to the 'Dum' background entities.  You will probably find that their pathfinding is the LEAST part of their AI processing (you might use things like planner-driven AI, which uses pathfinding as a mere 'tool').

 

-

 

It all depends on how detailed the game is to be (and how much mundane stuff is cut out).

 

The funny thing is, in one design (for a very detailed game) I actually had a 'micro' simulation for crafting activities (you kill an animal and now you have to butcher it and convert it for transport, or 'gather' edible/useful plants in a small area, or even just gather firewood).  A game where, if you want to look, "there is something interesting under every rock".  It used the same methods to LOD a small area into very fine detail... That's usually overkill for a 'game', and at that point it was more 'simulation'.

 

Likewise, it is the transitioning -- close-in high detail with a boundary shifting into the lower-detail 'generalized' simulation -- that is much of the programming headache.

 

---

 

Another thing: once you do the auto-generation to fine detail (it can be a lot of processing if you have intricate templates) you don't throw it away when the player walks on (he might suddenly decide to turn around and go back, or constantly move around the same large area).  So I had a system to save the detailed chunks for an extended time (the 'Dums' were local to them): roll them out to disk, then 'roll them in' when the player comes back into range and 'patch' the local details to compensate for any intervening time (and have any persistent 'smart' entities added back to them).  If you are gone long enough, any changes the player made can melt away and you can throw away that saved local chunk.  But the auto-generation would have to be sufficiently deterministic, to much detail, for when the player later comes back again (particularly if your terrain isn't overly 'generic' and specific terrain matters).

 

My chunk data was actually encapsulated (a block of map memory with its own local dynamic-object heap and internal offset pointers) so that a simple memory copy of the whole thing could save/restore it all with minimal processing.  Chunks would constantly get loaded ahead of the player's visibility (partial, low-detail LOD farthest out).  Again, it depends on your game's detail relevancy whether you have to go that far with such a mechanism.
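A minimal sketch of the offset-pointer idea (the struct and field names are invented): because the chunk stores offsets from its own base rather than raw pointers, a byte-for-byte copy of the whole block is a complete save/restore:

// Sketch: a self-contained chunk blob that uses offsets instead of pointers,
// so saving/restoring it is a single block copy (no pointer fix-ups needed).
#include <cstring>
#include <cstdio>

struct Chunk {
    unsigned char data[4096];   // chunk's private heap (terrain, objects, ...)
    int first_object;           // OFFSET into data[] of the first object record
    int heap_top;               // OFFSET of the next free byte in data[]

    unsigned char* object(int offset) { return data + offset; }  // resolve offset
};

int main() {
    Chunk live = {};
    live.first_object = live.heap_top;                 // "allocate" a record
    std::memcpy(live.object(live.first_object), "wolf", 5);
    live.heap_top += 5;

    unsigned char saved[sizeof(Chunk)];                // "roll out to disk"
    std::memcpy(saved, &live, sizeof(Chunk));

    Chunk restored;                                    // "roll back in" later
    std::memcpy(&restored, saved, sizeof(Chunk));
    std::printf("%s\n", (const char*)restored.object(restored.first_object));
}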




#5285667 A* A star vs huge levels

Posted by wodinoneeye on 07 April 2016 - 05:37 PM

"the big concern is when i get about 300 active entities in visual range at once"

 

 

How big a map area is this 'visual range' (of the high-detail interactions)?  It may be simpler in that case to 'window' the immediate area's map data at fine detail (if it's a reasonable size) and use plain A* (with the window enlarged beyond the visual radius enough to last for a while (remain valid) at a typical player movement rate).

 

The usual problem is also entities transitioning between the realized (window-around-the-player's-view) area activity and the generalized (big-map) mode of operation.  Then the problem with the 'window' is: are the other entities' behaviors properly responding to entities that are beyond the window's edge?  Some entities are more important for interactions than others and might call for 'realization' of the area around themselves (making the 'window' blobby rather than one nice regular grid area).




#5285663 A* A star vs huge levels

Posted by wodinoneeye on 07 April 2016 - 05:14 PM

""

which means i'm back to my original chunking algo of:   A* across a chunk to the next chunk, and repeat, for each chunk along the line from original start to ultimate goal.

""

 

No, you originally wrote:

 

"find the closest open edge node/tile/square in the desired direction that is adjacent to an open node on the next chunk edge"

 

The "closest open edge node/tile/square" may NOT be a good choice if what lies immediately past/beyond THAT targeted part of the grid square's edge is largely a 'wall' (and likewise the exit points on that further "node/tile/square" may be poor as well), all of which causes a lot of inside-a-chunk processing which is then thrown out.

 

Assuming a regular grid method: when the true (optimal-path) exit point is NOT near the 'bee line' path on that chunk, the adjacent chunk(s) may have the better path (the chunk-corners end case).

 

SO your fine A* really should include processing the adjacent (side) chunks' subnodes (probably from the start, instead of in a backtracking strategy).

 

 

 

11111112222222
11111112222222
11111112222222
11111112222222           look at the corner where chunks 1, 2, 3, 4 meet: a simplistic chunk logic would always be trying only through 1 and 3
33333334444444
33333334444444
33333334444444
33333334444444

 

-

 

Likewise, a simplistic super-chunk precomputed evaluation may not give a good estimation, depending on the final destination of any particular point-to-point path on the entire map.

 

Creating better estimations means more info pre-stored PER chunk, containing better 'going that-a-way, over there' connectivity info for the high-level A* to use (now for super 'regions of chunks' where the destination lies: a precalculated third-tier A*, with each chunk holding a data list for those super-regions).

 

If the data gets too big for that, an 8/16/32-sector compass 'general direction' best-adjacent-candidate set (per chunk) may be good enough for most cases.
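A tiny sketch of what such a per-chunk 'general direction' table might look like (the names and values are made up):

// Sketch: per-chunk coarse routing table. For each of 8 compass directions,
// store which adjacent chunk is the best one to step into next. The high-level
// pathing consults this instead of searching fine tiles toward a far goal.
#include <cstdio>

enum Dir { N, NE, E, SE, S, SW, W, NW, DIR_COUNT };

struct ChunkRouting {
    // Index of the best neighboring chunk to enter when the final destination
    // lies roughly in each compass direction (-1 = no passable neighbor).
    int best_neighbor[DIR_COUNT];
};

int main() {
    ChunkRouting chunk = {{ 3, 3, 4, 7, 7, 6, 1, 1 }};   // made-up values
    Dir toward_goal = SE;                                // goal lies to the southeast
    std::printf("head into chunk %d\n", chunk.best_neighbor[toward_goal]);
}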





