Look up StarFlight on Wikipedia. It has an article describing that oldie PC game, which had a lot of interesting aspects (exploration providing the randomness for the surprise element, plus problem solving).
wodinoneeye - Member Since 02 Dec 2004
Offline Last Active Jun 27 2016 05:40 PM
Posted by wodinoneeye on 14 June 2016 - 06:59 PM
Just a thought: instead of specific vignettes of these plot-twist 'negatives', you could systematically have 'shit happens' situations across the whole game, where the 'dangerous world' is a constant factor:
Things you HAVE to run away from (or die).
Things that are hazardous and will kill you if you don't respect them.
Unfortunate circumstances that happen randomly despite vigilance.
Since a game that kills the player off a lot is probably not conducive to fun, use partial disasters instead, which the player can then compensate for (and learn to do so), reacting to make things right (or make the best of them) amid all the unfortunate perils of living.
Thus the player also has to be offered a sufficient plurality of options and ways to react and compensate for specific 'negatives'.
Perhaps it depends on how 'sandboxy' your game is versus a more closely controlled story arc.
Edit - anything that deals with human interactions adds immense complexity of proper reactions to interactions (and of indirect actions), if it's to be a plausible simulation of that kind of thing. Human behavior goes beyond fight-or-flight simplicity.
Posted by wodinoneeye on 14 June 2016 - 06:48 PM
These evaluation functions (fuzzy or otherwise) are still just lower-level tools to be employed by a higher-level decision-making (and action-guiding) framework. Solution sets are modal with regard to the situational factors and can be severely different, even inverted, for the particular problem being solved. The process goes: classify the situation, then look for solutions to multiple potential strategies (which make use of analysis specific to each goal type), and then carry out the execution of the strategy (tactical steps which adjust along their progress).
So lots of evaluation functions are needed for the different specific solution proposals. Flexibility to use different approaches - some may have many factors requiring a fuzzy-logic-like approach, while others have very few relevant factors and simpler logic can be employed. Those would themselves be used within option searches (like a targeting scan), which have their own parameters relevant to the goal being pursued (like distance or terrain considerations).
Priorities (part of the decision metrics) shift in non-linear ways depending on condition (i.e., you are unhurt, hurt a little, hurt a lot, or critically hurt - which drastically shifts the importance of certain goals and the ability to attain them). Yes, fuzzy-type logic handles that, but decisions controlling whole goal sets are best controlled at a high level, and that logic sits ABOVE, at the goal-selection level (which controls all the lower-level processes).
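To make the idea concrete, here is a minimal C sketch of condition-dependent goal selection. All the names, states, and weight values below are invented for illustration (not from any particular game): the priority table shifts non-linearly with damage state, and the selection logic sits above whatever per-goal evaluators would run underneath.

```c
/* Hypothetical damage states and goals; weights are illustrative only. */
typedef enum { UNHURT, HURT_LIGHT, HURT_BADLY, CRITICAL } DamageState;
typedef enum { GOAL_ATTACK, GOAL_EXPLORE, GOAL_FLEE, GOAL_HEAL, GOAL_COUNT } Goal;

/* Non-linear priority table: rows = damage state, cols = goal.
   Note the inversion: attack dominates when unhurt, flee/heal when critical. */
static const float priority[4][GOAL_COUNT] = {
    /* attack explore flee   heal */
    {  0.8f,  0.6f,   0.1f,  0.0f },  /* UNHURT     */
    {  0.7f,  0.4f,   0.2f,  0.2f },  /* HURT_LIGHT */
    {  0.3f,  0.1f,   0.7f,  0.6f },  /* HURT_BADLY */
    {  0.0f,  0.0f,   1.0f,  0.9f },  /* CRITICAL   */
};

/* Goal selection sits ABOVE the per-goal evaluation functions:
   pick the highest-priority goal for the current condition. */
Goal select_goal(DamageState s)
{
    Goal best = GOAL_ATTACK;
    for (int g = 1; g < GOAL_COUNT; ++g)
        if (priority[s][g] > priority[s][best])
            best = (Goal)g;
    return best;
}
```

The lower-level evaluators (fuzzy or otherwise) would then only be consulted for the goal this high-level layer selects.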
The point of all this is that for something more complicated than an old-school game object, with its flat, instant decision making, a great deal more layering and complexity is required (plus all the human work to tune it all into a cohesive system).
Posted by wodinoneeye on 02 June 2016 - 04:16 PM
Chokepoints for GPUs are often complex functions which aren't easily done by the simplified instruction sets used by the highly parallel processors. How well do the usual NN sigmoid activation functions work within the GPU instruction sets (and might some table lookup possibly be substituted to get around that)?
Posted by wodinoneeye on 29 May 2016 - 07:43 AM
I would love to see you succeed in that vision, I really do!
Me? I'll let you know when I somehow get about $100 million. It's not exactly something you could crowdfund.
Probably will take one of the big game producers teaming with one of the game engine companies to have the resources needed to break into that model.
The basic templating design scheme might first grow out of improved game engine architecture/toolset used for a couple of AAA games.
Posted by wodinoneeye on 25 May 2016 - 06:18 AM
I might suggest that the 90% Crap Idea is pretty much what we already get from the MMORPG companies.
Players are constantly starved for content; they wake up their accounts for a month or two, then stop playing and paying (until 6+ months later, for the next 'drop'). The big games can continue as they have, but only in those limited genres.
The description I've given here (above) lacks a lot of the details of what the full system I propose would have to be.
Again, the things produced by some small percentage of the playerbase (those who want to be creators) get selected for use (stringent functional testing at minimum). REUSE is a key element, shortcutting further additions.
The collaboration model (only mentioned above) allows people to build on what other people have already done, improving it incrementally. Hardly anyone is good at everything required, so it would take multiple people to produce each of the complete 'Assets' finally used in the game. Someone does good ideas or planning; another, basic shapes/structures; another refines that (and possibly others do later); another is good at textures and applying them; another at realistic weathering/wear; another can adapt behavior attributes (tweaking or just installing existing templates) and animations/sound effects; someone else can do any needed specialized behaviors. A whole lot of people can combine objects into scene assemblages, which become someone else's building blocks for mission scenarios (which add the creativity of dialogs/story plot/pacing/theater-style scripted interplay - the real aim of this production).
Obviously the Player Creation Community's vetting and collaboration is the key to this system, but dedication CAN be found for such. Advice and commenting for revision, testing, and inspection would all be done through a well-defined process.
Those who have skills and know the tools have much higher efficiency (so it's not so tedious for them to do a lot in their speciality). BTW, SOME people are good at creating tutorials to TEACH others how to be proficient...
The publishing model is to share everything; asset projects are forked and resubmitted (and ANYONE can come along and mod them if they want to try).
A WHOLE lot of the low-level fiddly bits would be done by the company (game mechanics, object attribute systems for standard interactions, etc.).
I didn't mention that the detail level of objects is more along the 'deformable' world type of definition and play use (much more generically interactive and reactive). Thus there is more you can use things for IN-GAME (and A LOT of creation can potentially also be done by any player in-game). There are LOTS of small things to create for a rich world - not everyone has to create A Mech-Tiger-Tank. Many aren't that hard, with so much basic stuff already pre-done, inherent tweakability, (much more) idiot-proof tools, and integration of processes.
The GOOD tools (fundamental to this system) comprehensively cover producing all of these things - that's why they will be as big a project as a AAA game by itself to build (and some players can be better at tool making/improving than most people in the companies, and THAT is part of this whole thing TOO).
Creators get credit for the part they do, and add to, and those things they add are structured for reusability and modification.
The company would try to set standards and the community would have to maintain those strong standards (and the company would have the Final Word to enforce adherence). Obviously there are legal issues like copyright infringement which have to be enforced strictly, and the vetting system would be defined to prevent publishing anything with such issues.
I never said it would be easy (and DID say this is Next-Next-generation stuff), but the way costs are going up and playtimes are going down for these games, the Wodinoneeye Law says that within some number of years, with games progressing as they are, each will cost as much as the US yearly economy and playtime will last a fraction of a second. Well before that, most players will stop buying them.
Consider IF players could create using already-defined 'objects' and use them to create higher-order things for the game. Guns already work, chairs already work, NPCs already have improved AI. The TEMPLATES are designed for modding with the least work needed. Now large numbers of player creators DON'T have to fumble around trying to build everything they envision up from scratch (and no longer fail when they can't do EVERYTHING so complex and tedious). Now you (many more players) can build the more interesting aspects of the actual game instead of getting stuck reinventing all the building blocks.
I suppose I could say that Open Source never could work because of this Sturgeon's Law, but what is the reality there?
Yep, all a miserable failure ... right? Nobody in their right mind will do anything quality for free ... right?
(now do that in a more organized fashion....)
This would be a largely new paradigm for game production, employed in a more complex/thorough way. It really has to be done with consistency or it won't work (and it's a daunting project that only a visionary with sufficient cash could attempt; it will probably take some such pioneer to eventually do it).
"Second Life had almost exactly the vision that you lay out."
Vision is one thing; carrying it out is another. The system I speak of is far larger and would need to be much better designed for expandability ('Templates' as the fundamental design for EVERYTHING involved - parameterized, hierarchical).
Second Life had (and has) this fundamental element of people $ELLING their in-game productions/creations, which NIXED most/a lot of collaboration.
Result - a constant flow of new APPROVED content and improvement of the assets already deployed. Heavy use of procedurally generated game terrain/scenarios is possible (again via that comprehensive TEMPLATE system the whole thing is based on). Creation on-the-fly (a lot of it) instead of 'static' level worlds.
An interesting aspect of such a system is that micro-genre games can be built upon generic items already produced (and working) - tweaking instead of completely rebuilding - so you can get to the 'good part' of creating the game much sooner.
Posted by wodinoneeye on 24 May 2016 - 08:12 AM
What about this content limitation (as in DETAILED content and INTERESTING content) to fill any bigger world?
I say that as big as they currently are, they are ALREADY largely empty deserts - mostly devoid of uniqueness and interesting detail and interactions.
Some day (maybe in our lifetime) we might have games where players produce a lot of the MMORPG's assets.
(I've talked about this before - tap into the players' abilities/imagination/creativity to build the game worlds.)
A major cost of an MMORPG - the assets - is cut out of the company expense.
1000X as much imagination and labor is available in the players as in the game company (note all player production would be done for free).
With a broad spectrum of abilities needed (from simple assets and assemblies of assets, all the way to behavior AI, and even game mechanics improvements/additions), players would be able to create up to their abilities.
What one player creates, 1000 players will play with in the game
Assets can be incrementally improved by expertise in different areas of production (hierarchically template everything to maximize reuse and minimize reinventing the wheel...)
Assets can be shared across genres (one system + many games) to maximize what's available from the players' efforts.
New content being added constantly (and for some 'players' the creation will be THEIR game)
Leverage more than a few of the Open Source tools which already exist.
Need A LOT of easy-to-use tools, which will rival these games in their cost (though they can be reused across many games) - idiot-proofing for general player use is a monumental task (as is integrating all the tools into an online production system).
Need a really thorough vetting system BEFORE anything is published to the running game worlds (and that too would largely be the work of players).
The game would need a lot of definitions (to be adhered to) for the genre/canon/quality levels of acceptance.
A major community effort is needed (it has to be managed - and largely NOT by the company) to facilitate cooperation and collaboration (and especially to NOT waste anyone's time, where possible): comprehensive planning/testing/review/advice/collaboration/publishing processes.
To get started, certain popular genres will have to be used to get their interested player groups to critical mass (after that, reuse can make many smaller genres workable, building upon the basics).
Broad, well-done generic design (not just the bits and mechanisms used for a particular game). The company would have to build sufficient basic assets to get the games going (possibly reusing/converting assets they already possess from previous games).
All kinds of Legal Crap.
Why it wont happen soon :
The cost of creating the whole system... TOOLS (even with one of the game engine companies being the organizers of it)
The game companies losing the profits they make from content (fire all the artists ....)
Risk-averse companies who know the model they use NOW works, and want nothing to do with an unproven system (they will wait for SOMEONE ELSE to prove it works)
The 'sharing' parts (like asset standards) might be blocked by company rivalries
This is Next-Next-generation type stuff (at the extent I would have it be), but its development utility could also be used for media production and advertisements (and even to facilitate and lower the cost of creating solo games by in-house talent). THINK of it as something of the magnitude of what desktop publishing was.
And no, this isn't Second Life Plus Plus ... that thing is a shadow of a shadow of what I envision this possibility to be.
Posted by wodinoneeye on 21 May 2016 - 08:17 AM
Unfortunately game situations are magnitudes more complex than the single problem of telling one handily pictured husky from another (which, if ever needed as a 'tool', takes up a good-sized NN by itself). And then we will need the 10,000 other 'tools' (and their training sets) for all the other classification/differentiation/deobfuscation tasks, and then the processing resources to run them ALL in a timely manner.
Maybe if you were using an NN to spot 20-year-old game pixel patterns for game objects in a clutter of on-screen scenery, this would be relevant. Unfortunately that IS still just a basic sensor-filtering task and does little for the rest of the problem of playing the game.
"They can translate sentences between any two languages, with very little additional machinery."
I'd like to see the project that claims THAT. Particularly with your use of the word 'any' - when there are so many world languages to map between, and more than a few that don't have exact translations of certain words/idiom contexts in other languages. (English: "The spirit is willing but the flesh is weak" --> Russian: "The wine is good but the meat is rotten"...)
Text-to-text NN input? Or are you again describing some sub-tool NN of a much more complex program and data set (dictionaries/grammar-rule translators), where the NN component actually turns out to be a trivial part of the whole thing?
Temporal cause-and-effect pattern spotting has major difficulties with noise from non-relevant situational factors, and a further combinatoric explosion of edge cases, some now coming from irregular event timings. Again, in more complex simulation environments this forces greater human intervention in the training (hand-training the logic which otherwise could simply be built as conventional logic), which is the most significant chokepoint of complex NN solutions.
For Go I could see using convolutional neural networks to convert the simple Go grid into higher- and higher-level features and trying to spot the needed decision patterns. How well can the future assessment be evaluated - training the NN effectively for that generalization without it having terrible gaps? But again, that is an example of a game whose 'situation' is utterly flattened out/narrowed in detail complexity compared to just about all other 'games'.
Posted by wodinoneeye on 19 May 2016 - 06:23 PM
I've done this, employing C's macro mechanism to simplify what 'Script' has to be written (used for behavioral control of simple intelligent objects).
It is also useful for structured scripts (like the start/end/else state constructs of finite state machines), not just individual function calls (the AI Game Programming Wisdom books had several articles about macros doing that, including hierarchical finite state machines).
One advantage is that a routine/repetitive pattern of code (which is sometimes rather bulky) can be reduced to a much simpler 'script form', with the assumption that the script features are used in a systematic way (eliminating nesting/spaghetti-code hell). The macro Script also restricts what variables and calls can be accessed through the 'Script'.
Another is that specialized code can be created/customized by just adding another 'macro' to your 'language' (and when needed you can still insert actual NATIVE code in (hopefully few) trouble spots to get exactly what you want/need).
The C preprocessor then converts your simpler macro Script into native code, now subject to the compiler's optimizing abilities, and it can run directly without interpreter overhead (including eliminating subroutine calls).
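A minimal illustration of the idea, with an invented 'script' vocabulary (BEHAVIOR/WHEN/DO/DEFAULT and the Critter type are hypothetical names made up for this sketch, not from any real library):

```c
/* A macro 'Script' language sketch. Each macro hides a repetitive
   code pattern; the preprocessor expands the script into plain C
   with no interpreter overhead. */

typedef struct {
    int health;
    int enemy_dist;
    int action;          /* what the object decided to do this turn */
} Critter;

enum { ACT_IDLE, ACT_FLEE, ACT_ATTACK };

/* The 'script' vocabulary: */
#define BEHAVIOR(name)  void name(Critter *self) {
#define WHEN(cond)      if (cond) {
#define DO(act)         self->action = (act); return; }
#define DEFAULT(act)    self->action = (act); }

/* A behavior written in script form: */
BEHAVIOR(wolf_think)
    WHEN(self->health < 20)     DO(ACT_FLEE)
    WHEN(self->enemy_dist < 5)  DO(ACT_ATTACK)
    DEFAULT(ACT_IDLE)
```

Here `wolf_think` expands into an ordinary C function of nested ifs, which the compiler optimizes like any other code.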
Disadvantage: some extra difficulty debugging, since the 'script'-produced code is transformed.
Some people may say 'why bother optimizing?', but when you are running thousands of active/reactive objects of this level of complexity EVERY turn, the optimization can spell a great difference in the size of the player's game environment.
One thing that added a little difficulty: the 'nice' grouping of Script code created for each different object type was (in my usage) run in different sections of the program, even though the 'Script' has the chunks defined right next to each other (instead of breaking them up into separate files, with the bother/confusion/disorganization that entails, which you seek to eliminate).
So it's good to learn how to use #define, #ifdef, etc. to 'modalize' the script text, so that multiple #includes of the same file can be used (each employing a different #define 'mode' for a different place in the program).
Example (a LOCKSTEP behavior-processing scheme for active/reactive objects on a grid map, using finite state machines):
Situation detection phase - all objects scan and detect a number of stimuli to potentially react to (from their current local situation) and filter/prioritize them according to the object's mode (with potential interrupts for 'tactics' already in progress).
Solution classification phase - all objects digest their 'stimulus' set and decide ONE best action/tactic to initiate (avoiding conflicting actions with other objects already busy interacting - which could have changed since the previous phase).
Action processing phase - carry out all decided actions (including animations), generate results (resolving conflicts), and adjust the game situation.
The above runs a simulation in a lockstep manner, so the separate phase chunks of code for each object type (even though grouped together in the script file) get placed in the corresponding 'phase' section of the program, whose basic organization was 'the Big Switch' (nested). (My program used three phases, but the chunk split-up still happens if a lockstep simulation requires only two phases.)
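A compressed sketch of that multiple-include 'mode' trick. In real use the script body would live in its own file and be #include'd once per phase section of the program; here a macro (WOLF_SCRIPT) stands in for that file, and all the names are invented for illustration:

```c
/* The per-object 'script': its phase chunks are written side by side,
   but each expands only when its phase mode is selected. */
#define WOLF_SCRIPT                                     \
    IF_SENSE(  wolf_stimulus = enemy_near;          )   \
    IF_DECIDE( wolf_plan = wolf_stimulus ? 1 : 0;   )   \
    IF_ACT(    if (wolf_plan) wolf_attacks++;       )

static int enemy_near = 1;
static int wolf_stimulus, wolf_plan, wolf_attacks;

void run_turn(void)
{
    /* --- Situation detection phase: only IF_SENSE chunks expand --- */
#define IF_SENSE(code)  code
#define IF_DECIDE(code)
#define IF_ACT(code)
    WOLF_SCRIPT
#undef IF_SENSE
#undef IF_DECIDE
#undef IF_ACT

    /* --- Solution classification phase: only IF_DECIDE expands --- */
#define IF_SENSE(code)
#define IF_DECIDE(code) code
#define IF_ACT(code)
    WOLF_SCRIPT
#undef IF_SENSE
#undef IF_DECIDE
#undef IF_ACT

    /* --- Action processing phase: only IF_ACT expands --- */
#define IF_SENSE(code)
#define IF_DECIDE(code)
#define IF_ACT(code)    code
    WOLF_SCRIPT
#undef IF_SENSE
#undef IF_DECIDE
#undef IF_ACT
}
```

Each object type's script stays in one place, yet its chunks land in the correct phase section of the generated program.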
Posted by wodinoneeye on 19 May 2016 - 05:19 PM
You can get the 'theory' by simply searching for it online.
Basically it is a way to preprocess certain types of data (like largish pixel images) by repeatedly running 'small' feature-filter NNs in parallel, locally (with overlaps), across the whole image.
Each local area of the regular grid (image) is processed to extract/integrate generic patterns/trends (like line/boundary detection or spotting a solid blob) from the basic data. Further layers then (in parallel) integrate those first-order results, detecting larger patterns/trends (like spotting a 'corner'). Later layers look for the super-patterns which classify the picture.
The advantage is that the lower 'detail' filter NNs are fairly small (some as small as 5x5 local groupings) and can be well formed for their task. They can be run in a massively parallel manner (you apply each layer's same filter in an array-scanning fashion) and integrate/collapse each next layer's input data until the final classification (several layers itself), which detects combinations of the macro patterns.
It is a 'divide and conquer' solution, eliminating/minimizing A LOT of the NxN input weights (in the lower layers) that such large data-input arrays would require if done monolithically.
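A bare-bones sketch of one such shared-weight filter pass in C (the image size, filter weights, and sizes are illustrative only; a real layer would add a bias, a nonlinearity, and many filters per layer). The key point is that every position in the image shares the same 9 weights, which is what avoids the NxN full-connection weight explosion:

```c
#define IN_W  6
#define IN_H  6
#define K     3                    /* filter size */
#define OUT_W (IN_W - K + 1)
#define OUT_H (IN_H - K + 1)

/* A vertical-edge detector (illustrative weights). */
static const float kernel[K][K] = {
    { -1.0f, 0.0f, 1.0f },
    { -1.0f, 0.0f, 1.0f },
    { -1.0f, 0.0f, 1.0f },
};

/* Slide the same small filter across the image (valid convolution). */
void convolve(float in[IN_H][IN_W], float out[OUT_H][OUT_W])
{
    for (int y = 0; y < OUT_H; ++y) {
        for (int x = 0; x < OUT_W; ++x) {
            float sum = 0.0f;
            for (int ky = 0; ky < K; ++ky)
                for (int kx = 0; kx < K; ++kx)
                    sum += kernel[ky][kx] * in[y + ky][x + kx];
            out[y][x] = sum;   /* a real layer adds bias + nonlinearity */
        }
    }
}
```

Since every output position is independent, this inner loop is exactly the kind of work that maps well onto massively parallel hardware.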
40+ years ago, anatomical research showed that the retina of the eye performs operations like this (the low-level feature detection).
Posted by wodinoneeye on 19 May 2016 - 04:31 PM
"You sound like someone that has never programmed either a checkers engine or a chess engine."
You sound like someone who hasn't programmed anything more complex than a "checkers engine or a chess engine".
"Yes, NNs are only a tool. I don't see who you are arguing with here"
Unfortunately they are the 'hammer' that makes some people see all problems as 'nails'. NNs for basic classification of SIMPLE situational factors are fine, but once the situations are no longer simple (like spotting temporal cause and effect) they just don't work too well. And even then (as stated above) there is the REST of the logic to be built to actually make a game problem-solver work. Likewise, there usually is A LOT of complex data massaging that needs to be done FIRST to be able to feed anything into an NN.
"parts that can be implemented using neural networks"
But are they 'parts' which simpler hand-crafted logic can do more simply and efficiently? (Are they done as NNs just for the sake of doing them in NNs?)
"not old enough to remember" then read about it. google history of AI
Go's mechanism is very nice and simple - in fact, its point is being boiled down into simplicity of mechanism. The game-situation representation and the play process and actions are likewise quite limited. So of all games it may be nearly the best to use NNs on. Too bad so many other games don't have its major advantages, which allow NNs to be employed so easily.
Posted by wodinoneeye on 19 May 2016 - 03:30 AM
"I'm not that scared by your FUD about how complex things can get."
"EDIT - a simple thing to contemplate what Im talking about is --- try to program Chess via a NN based solution."
"I already mentioned I have used a NN as evaluation function in checkers. Using one as evaluation function in chess is not [much] harder: http://arxiv.org/abs/1509.01549"
"Other uses of NNs for chess are possible: http://erikbern.com/2014/11/29/deep-learning-for-chess/"
Checkers as an equivalent to Chess? OK .........
A 'Chess' evaluation function (as in a 'tool'?) .... but is it the fundamental core of the decision logic? That is what I'm talking about being a problematic thing for NN usage.
'Possible' - where AI is concerned, I recall that little situation in the '50s where they thought AI was just around the corner and all kinds of computer AI goodness was just about solved. Here we are 60 years later. 'Complexity' has proven to be quite perplexing.
Posted by wodinoneeye on 18 May 2016 - 06:43 AM
By "complex" I am talking NOT about some NPC-bot navigating around a static map, but one that has to react to friendly/enemy/neutral dynamic objects (possibly including one or more players) -- spatial relations between objects of different classifications. Now have the typical number of action options and whatever metabolic goals the NN is supposed to 'think. Suddenly a plethora of contradicting and irregular situation factors to be 'comprehended' (again, interpreting a situation which isn't just some terrain grid) need to be process to generate a 'good enough' current solution for what that 'smart' object is going to try to do. The training set expands exponentially with complexity, and a divide and conquer method cant work -- except as tool analysis interpretation which STILL has to be integrated in a complex fashion. Multiple metrics of 'good/bad' and situational adjustments for priorities (fun - modal factors to add in - big->huge NN, or breaking up into specialized NNs (which now STILL have to be intergated to decide which applies/overrides) etc....
Again 'tool', because any analysis leading to temporally effective actions takes programming methods like finite state machines to carry out sequential solutions once some decision is made (and then possibly re-evaluated and redirected - even WHEN to re-evaluate and cancel the current activity is a complex logic problem). It's not just do action X or Y or Z and rinse; it is start strategy/tactic A or B or C and carry through/adjust...
We already have plenty of relatively mindless 'ant' objects done in games without needing NNs. Move the AI up a few notches and suddenly the problem space expands hugely, and the (richer) situational complexity likewise (training-set hell). That's the environment where NNs fall down REAL fast - very difficult especially for any self-learning mechanism, and with an assisted-learning NN (being told what's good and bad in many very specific edge cases) suddenly it's the human's limitation to get through the bulk of the work required.
Carefully targeted analysis is where I might consider using NNs - limited domain, and many small ones if that many different analyses are required. The primary logic for anything tactically game-complex is still most efficiently hand crafted; you wind up doing most of the work either way, and trying to force an NN to do what you have already worked out the discrete logic for is pointless.
EDIT - a simple thing to contemplate what Im talking about is --- try to program Chess via a NN based solution.
Posted by wodinoneeye on 17 May 2016 - 04:04 PM
The problem with neural nets is that the inputs (the game situation) have to be fed to them as a bunch of numbers.
That means there usually is a heck of a lot of interpretive pre-processing required to generate this data first.
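For instance, the pre-processing step might flatten a game situation into a normalized float vector like this (the situation record, field choices, and normalization ranges here are invented purely for illustration):

```c
#define N_INPUTS 4

/* A hypothetical game-situation record. */
typedef struct {
    float health;        /* 0..100                    */
    float enemy_dist;    /* world units, clamped at 50 */
    int   enemy_count;   /* clamped at 8               */
    int   has_weapon;    /* boolean                    */
} Situation;

static float clampf(float v, float lo, float hi)
{
    return v < lo ? lo : (v > hi ? hi : v);
}

/* Everything must end up as a number, roughly in [0,1],
   before the net can see it. */
void encode(const Situation *s, float out[N_INPUTS])
{
    out[0] = clampf(s->health, 0.0f, 100.0f) / 100.0f;
    out[1] = clampf(s->enemy_dist, 0.0f, 50.0f) / 50.0f;
    out[2] = clampf((float)s->enemy_count, 0.0f, 8.0f) / 8.0f;
    out[3] = s->has_weapon ? 1.0f : 0.0f;
}
```

Even this toy encoder embodies design decisions (which facts matter, what ranges are sane), and those decisions are hand-crafted logic, done before the net contributes anything.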
Another problem is that the process of 'training' the neural net is usually only understood from the outside - the logic is NOT directly accessible to the programmer. A lot of 'test' game situational data needs to be built up and maintained, and connected with a CORRECT action (probably chosen by a human), to force the neural net into producing what is required. Again, a lot of indirect work.
Neural nets also generally don't handle complex situations very well; too many factors interfere with the internal learning patterns/processes, usually requiring multiple simpler neural nets to be built to handle different strategies/tactics/solutions.
Usually with games (and their limited AI processing budgets), after you have already done the interpretive preprocessing, it just takes simple hand-written logic to use that data - and that logic CAN be directly tweaked to get the desired results.
It might be that practical neural nets are just a 'tool' the main logic can use for certain analyses (and not for many others).
Posted by wodinoneeye on 07 May 2016 - 04:23 PM
"Given that I am going to spend just one unit on the basics of game design in the high school curriculum I am designing, what would you all say would be the most important game design fundamentals and/or principles to teach that would be most useful or necessary for students to use when they begin making video games in Game Maker?"
Games should be 'fun' (understanding also that a 'game' purpose MIGHT be a method of interactive demonstration for education)
Game interfaces should NOT be frustrating to interact with (in too many games I've had to fight a poor interface more than the opponents).
Games should offer sufficient surprises (not be fully deterministic) to the player - giving them a reason/incentive to replay. Creativity in playing to solve unexpected situations...