Marc Klein

BMFont: Different results with forcing zero offsets

Recommended Posts

Hello,

I'm using BMFont to create a bitmap font for my text rendering. I noticed some strange behaviour when I use the option "force offsets to zero".

If I use this option my rendering result looks OK, but without it there is some missing space between some characters.

I attached the BMFont configuration files and the font that I used.

In the rendering result with variable offsets you can see that there is missing space right next to the letter "r".

To get the source and destination of my render rectangles I basically do the following:

void getBakedQuad(const fontchar_t* f, int* x_cursor, int* y_cursor, SDL_Rect* src, SDL_Rect* dest) {
    dest->x = *x_cursor + f->xoffset;
    dest->y = *y_cursor + f->yoffset;
    dest->w = f->width;
    dest->h = f->height;

    src->x = f->x;
    src->y = f->y;
    src->w = f->width;
    src->h = f->height;

    *x_cursor += f->xadvance;
}

Has anybody noticed similar behaviour?

orbitron-bold.otf

SIL Open Font License.txt

variable_offset.bmfc

variable_offset.fnt

variable_offset_0.png

variable_offset_render.png

zero_offset.bmfc

zero_offset.fnt

zero_offset_0.png

zero_offset_render.png

I see nothing wrong with the generated font files in either case. For this particular font there shouldn't be any difference between the two. When forcing the offsets to zero, BMFont will embed the offsets within the quads themselves, and inspecting the font files I can see that this was done.

The difference you see might be a case of rounding errors when drawing the text, either with the texture coordinates or with the placement of the quads on the screen.
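If there is any scaling or floating-point math on your side before the SDL_Rect is filled in, truncation instead of rounding can swallow a pixel between two characters. A minimal sketch of what I mean, assuming a hypothetical float cursor and scale factor (your posted getBakedQuad uses plain ints, so this only matters once scaling or sub-pixel positioning enters the picture):

#include <cmath>
#include <SDL.h>

// Hypothetical variant of getBakedQuad (fontchar_t as in your code) with a
// float cursor and a scale factor. Rounding to the nearest pixel, instead of
// truncating, keeps the per-character error from turning into missing or
// doubled pixels between glyphs.
void getBakedQuadScaled(const fontchar_t* f, float* x_cursor, float* y_cursor,
                        float scale, SDL_Rect* src, SDL_Rect* dest)
{
    dest->x = (int)std::lround(*x_cursor + f->xoffset * scale);
    dest->y = (int)std::lround(*y_cursor + f->yoffset * scale);
    dest->w = (int)std::lround(f->width  * scale);
    dest->h = (int)std::lround(f->height * scale);

    src->x = f->x;          // texel coordinates stay untouched
    src->y = f->y;
    src->w = f->width;
    src->h = f->height;

    *x_cursor += f->xadvance * scale;   // advance in float, snap only the quad
}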

For reference, here's the code I use to draw text with:

void CFont::InternalWrite(float x, float y, float z, const char *text, int count, float spacing)
{
    if( render->GetGraphics() == 0 ) 
        return;

    int page = -1;
    render->Begin(RENDER_QUAD_LIST);

    y += scale * float(base);

    for( int n = 0; n < count; )
    {
        int charId = GetTextChar(text, n, &n);
        SCharDescr *ch = GetChar(charId);
        if( ch == 0 ) ch = &defChar;

        // Map the center of the texel to the corners
        // in order to get pixel perfect mapping
        float u = (float(ch->srcX)+0.5f) / scaleW;
        float v = (float(ch->srcY)+0.5f) / scaleH;
        float u2 = u + float(ch->srcW) / scaleW;
        float v2 = v + float(ch->srcH) / scaleH;

        float a = scale * float(ch->xAdv);
        float w = scale * float(ch->srcW);
        float h = scale * float(ch->srcH);
        float ox = scale * float(ch->xOff);
        float oy = scale * float(ch->yOff);

        if( ch->page != page )
        {
            render->End();
            page = ch->page;
            render->GetGraphics()->SetTexture(pages[page]);
            render->Begin(RENDER_QUAD_LIST);
        }

        render->VtxColor(color);
        render->VtxData(ch->chnl);
        render->VtxTexCoord(u, v);
        render->VtxPos(x+ox, y-oy, z);
        render->VtxTexCoord(u2, v);
        render->VtxPos(x+w+ox, y-oy, z);
        render->VtxTexCoord(u2, v2);
        render->VtxPos(x+w+ox, y-h-oy, z);
        render->VtxTexCoord(u, v2);
        render->VtxPos(x+ox, y-h-oy, z);

        x += a;
        if( charId == ' ' )
            x += spacing;

        if( n < count )
            x += AdjustForKerningPairs(charId, GetTextChar(text,n));
    }

    render->End();
}

The complete code can be found here: http://svn.code.sf.net/p/actb/code/trunk/source/gfx/acgfx_font.cpp

 

Regards,
Andreas

 



  • Similar Content

    • By EGDEric
      Have you ever played Starcraft, or games like it? Notice when you tell a group of air units to attack a target, they bunch up and move towards it, but once they're in range, they spread out around the target. They don't all just bunch up together, occupying the same spot, even though the game allows them to pass through each other.
      How would you approach this problem?
      I went with an "occupation grid": it's just a low-resolution 2D boolean array (640x480). Each ship (my game only has ships) has one, and updates it every frame. When attacking, they refer to the grid to figure out where they should move to. It works pretty well: the ships are nicely spread out, and don't all occupy the same space, looking like they merged into one ship.
      The problem is that this approach is pretty inefficient. Just updating every ship's grid sometimes takes 24-31% of the CPU time. Using the Bresenham line-drawing algorithm for every ship is the culprit.
      I'm thinking of having one shared grid for all the ships on a team, and instead of using a simple boolean 2D array, allow each square of the grid to keep track of every ship that is using it, by using a data structure with a linked list of references to the ships using that square. That way, I wouldn't have to update a grid for each and every ship.
      Maybe the solution would be to use a much simpler grid, minus the Bresenham line drawing: just a bunch of squares, with each ship trying to stay in its own square. Maybe allow larger ships to occupy more than one square.
      Another solution might be evading me completely, one that doesn't involve grids at all. Any thoughts?
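      For illustration only, a rough sketch of the shared per-team grid with per-cell occupant lists might look like this (the class, members and cell size are invented, not taken from the post above):

      #include <algorithm>
      #include <vector>

      struct Ship { float x, y; int id; };

      class OccupationGrid {
      public:
          OccupationGrid(int width, int height, float cellSize)
              : w(width), h(height), cell(cellSize), cells(width * height) {}

          void clear() { for (auto& c : cells) c.clear(); }

          // Called once per ship per frame: O(ships) total, no line drawing involved.
          void insert(const Ship& s) {
              cells[cellIndex(s.x, s.y)].push_back(s.id);
          }

          // Ships already claiming the cell around (x, y); a ship picks a nearby
          // free cell around its target instead of stacking on top of the others.
          const std::vector<int>& occupants(float x, float y) const {
              return cells[cellIndex(x, y)];
          }

      private:
          int cellIndex(float x, float y) const {
              int cx = std::min(std::max(int(x / cell), 0), w - 1);
              int cy = std::min(std::max(int(y / cell), 0), h - 1);
              return cy * w + cx;
          }
          int w, h;
          float cell;
          std::vector<std::vector<int>> cells;
      };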
       
       
    • By Gnollrunner
      I wanted to get some thoughts on player character collisions in MMOs for my project. As I recall, the original EverQuest had them, but WoW didn't. I also remember Age of Conan had them, but I'm guessing they were done on the server, which was highly annoying because you could collide with things that were unseen. I'm not sure how EQ did them, but I don't remember having this problem there; it was a long time ago, though.

      My general idea is to do player collisions on the client side. A collision would only affect a given client's player character as seen from that client. On the server, characters would pass right through each other. This means that because of lag, two players might see different things during a collision, or one may think there was a collision while the other doesn't, but I figure that's less annoying than colliding with something you can't see, and everything should resolve at some point.

      There is one more case which I can see being a bit problematic, but I think there might be a solution (actually my friend suggested it). Suppose two characters run to the same spot at the same time. At the time they reached the spot it was unoccupied, but once the server updates each client with the other's position, they both occupy the same space. In this case, after the update, a force vector is applied to each character that tries to push them away from each other. The vector is applied by each client to its own player. So basically player-to-player collisions aren't necessarily absolute.

      I was also thinking you could generalize this and allow players to push each other. When two players collide, their bounding capsule would be slightly smaller than the radius where the force vector comes into play. So if you stood next to another player and pushed, he would move. By the vector rules he is pushing back on you (or rather your own client is pushing), but since your movement vector can overcome the collision force vector, only he moves unless he decides to push back. You could add a mass calculation to this, so larger characters could push around smaller characters more easily.
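      For illustration, a minimal sketch of that push-apart vector could look like the following (names and units are invented; each client would apply the result only to its own character, and mass could scale the strength):

      #include <cmath>

      struct Vec2 { float x, y; };

      // Returns the force pushing "me" away from "other" when the two characters
      // overlap inside pushRadius; zero otherwise.
      Vec2 separationForce(const Vec2& myPos, const Vec2& otherPos,
                           float pushRadius, float strength)
      {
          float dx = myPos.x - otherPos.x;
          float dy = myPos.y - otherPos.y;
          float dist = std::sqrt(dx * dx + dy * dy);
          if (dist >= pushRadius || dist < 1e-4f)
              return {0.0f, 0.0f};                         // no overlap, or exactly stacked

          float overlap = (pushRadius - dist) / pushRadius;  // 0..1, deeper = stronger
          return { dx / dist * overlap * strength,
                   dy / dist * overlap * strength };
      }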

      Of course there are griefing aspects to this, but I was thinking I would handle that with a reputation/faction system. Any thoughts?


       
       

    • By Vik Bogdanov
      As DMarket platform development continues, we would like to share a few case studies regarding the newest functionality on the platform. With these case studies we would like to illuminate our development process, user requirements gathering and analysis, and much more. The first case study we’re going to share is “DMarket Wallet Development”: how, when and why we decided to implement functionality which improved virtual items and DMarket Coins collection and transfer.  
      DMarket cares about every user, no matter how big or small the user group is. And that’s why we recently updated our virtual item purchase rules, bringing a brand new “DMarket Wallet” feature to our users. So let’s take a retrospective look and find out what challenges were brought to the DMarket team within this feature and how these challenges were met. 
      DMarket and Blockchain Virtual Items Trading Rules
      Within the first major release of the DMarket platform, we provided you with a wide range of possibilities and options, assuring Steam account connection within user profile, confirmation of account and device ownership via email for enhanced security, DMarket Coins, and DMarket Tokens exchanging, transactions with intermediaries on blockchain within our very own Blockchain system called “Blockchain Explorer”. 
      And well, regarding Blockchain... While it has totally proved itself as a working solution, we were having some issues with malefactors, as many of you may already know. DMarket specialists conducted an investigation, which resulted in a perfect solution: we found out that a few users created bots to buy our Founder’s Mark, a limited special edition memorabilia to commemorate the launch of the platform, for lower prices and then sell them at higher prices. Sure thing, there was no chance left for regular users. A month ago we fixed the issue, to our great relief. We received real feedback from the community, a real proof-of-concept. The whole DMarket ecosystem turned out to be truly resilient, proving all our detractors wrong. 
      And while we’ve got proof, we also studied how users feel about platform UX since blockchain requires additional efforts when buying or selling an item. With our first release of the Demo platform, we let users sign transactions with a private key from their wallet. In terms of user experience, that practice wasn’t too good. Just think about it: you should enter the private key each time you want to buy or sell something. Every transaction required a lot of actions from the user’s side, which is unacceptable for a great and user-friendly product like ours. That’s why we decided to move from that approach, and create a single unified “wallet” on the DMarket side in order to store all the DMarket Coins and virtual items on our side and let users buy or sell stuff with a few clicks instead of the previous lengthy process. In other words, every user received a public key which serves as a destination address, while private keys were held on the DMarket side in order to avoid transaction signing each time something is traded. This improved usability, and most of our users were satisfied with the update. But not all of them...
      You Can’t Make Everyone Happy….. Can You?
      By removing the transaction signing requirement we made most of our users happy. Of course, within a large number of happy people, we can always find those who are worried about owning a public key wallet. When you don’t own a public key, it may disturb you a little bit. Sure, DMarket is a trusted company, but there are people who can’t trust even themselves sometimes. So what were we gonna do? Ignore them? Roll back to the previous way of buying virtual items and coins? No!
      We decided to go the other way. Within the briefest timeline, the DMarket team decided on providing a completely new feature on Blockchain Explorer — wallet creation functionality. With this functionality, you can create a wallet with 2 clicks, getting both private and public keys and therefore ensuring your items’ and coins’ safety. Basically, we separated wallets on the marketplace and wallets on our Blockchain in order to keep great UX and reassure a small part of users with a needed option to keep everything in a separate wallet. You can go shopping on DMarket with no additional effort of signing every transaction, and at the same time, you are free to transfer all the goods to your very own wallet anytime you feel the need. Isn’t it cool?
      Outcome 
      After implementation of a separate DMarket wallet creation feature, we killed two birds with one stone and made everyone satisfied. Though it wasn’t too easy since we had a very limited amount of time. So if you need it, you can try it. Moreover, the creation of DMarket wallet within Blockchain Explorer will let you manage your wallet even on mobile devices because with downloading private and public keys you also get a 12-word mnemonic phrase to restore your wallet on any mobile device, from smartphone to tablet. Wow, but that’s another story — a story about DMarket Wallet application which has recently become available for Android users in the Google Play.
      Stay tuned for more case studies and don't forget to check out our website and gain firsthand experience with in-game items trading!
    • By Monty Kiani
      This idea comes from the concept of making a game that specifically fills the time of a person in travel, when the person might not enjoy the adrenaline factor that accompanies many games. I couldn't come up with anything, so I thought about the general vibe I wanted and an action people did in general that emulated it. I found that it mostly happened, amongst other places no doubt, when people go through their messages panel on their devices; a plane traveler/businessperson perfectly calm for a minute eliminating messages, that moment extended. It's a kind of process of elimination. I don't know if this idea is common knowledge, but I couldn't find anything and I'd love to see games based on this.
    • By GameDev.net
      Abstract
      Recently in the field of Artificial Intelligence, scientists are wondering which approach best models the human brain — bottom-up or top-down. Both approaches have their advantages and their disadvantages. The top-down approach has the advantage of having all the necessary knowledge already present for the program to use (given that the knowledge was pre-programmed) and thus it can perform relatively high-level tasks such as language processing. The bottom-up approach has the advantage of being able to model lower-level human functions, such as image recognition, motor control and learning capabilities. Each method fails where the other excels — and it is this trait of the two approaches that is the root of the debate.
       
      In order to assess this, this essay deals with two areas of Artificial Intelligence — Natural Language Processing and robotics. Natural Language Processing uses the top-down methodology, whereas robotics uses the bottom-up approach. Jerry A. Fodor, Mentalese, and conceptual representation support the ideas of a top-down approach. The MIT Cog Robot Team fervently supports the bottom-up approach when modelling the human brain. This essay looks at the theory behind conceptual representation and its parallel in philosophy, Mentalese. The theory involved in the bottom-up approaches is skipped due to the background knowledge required in both computer science and neurobiology.
      After looking at the two approaches, I concluded that currently Artificial Intelligence is too segmented to create any universal approach to modelling the brain — the top-down approach has all the necessary traits needed for NLP, and the bottom-up approach has all the necessary traits for fundamental robotics. A universal model will probably arise from the bottom-up approach since Artificial Intelligence is seeking its information from neurobiology instead of philosophy, thus more concrete theories about the functioning of the brain are formed. For the moment, though, either the approaches should be used in their respective fields, or a compromise (such as an object-orientated approach) should be formulated.
       
      Introduction
      Throughout the history of artificial intelligence one question has always been asked when given a problem. Should the solution to a problem be solved via the top-down method or through the bottom-up method? There are many different areas of artificial intelligence where this question arises — but none more so than in the areas of Natural Language Processing (NLP) and robotics.
       
      As we grow up we learn the language of our parents, making mistakes at first, but slowly growing used to using language to communicate with others. Humans definitely learn through a bottom-up approach — we all start with nothing when we are born. It is through our own intellect and learning that we master language. In the field of computing, though, such methods cannot always be utilised.
      The two approaches to the problems are called top-down and bottom-up, according to how they tackle the problems. Top-down takes pre-programmed knowledge (like a large knowledge base) and uses symbolic creation, manipulation, linking and analysis to perform the calculations. The top-down approach is the one most commonly used in the field of classical (and neo-classical) artificial intelligence that utilises serial programming. The bottom-up approach tackles the problem at hand by starting with a relatively simple abstract program that is built to learn by itself — building its own knowledge base and commonsense assertions. This is normally done with parallel processing, or data structures simulating parallel processing, such as neural networks.
      When looking at the similarities of these approaches to the human brain, one can see the similarities in both. The bottom-up approach has the obvious connection that it uses learning mechanisms to gain its knowledge. The top-down approach has most of its connections in the field of natural language, and the philosophical computational models of the brain. Much philosophical theory supports the idea of an inherently computational brain, one that uses symbol manipulation in order to do its ‘calculations’.
      The two approaches differ greatly in their usefulness to the two fields concerned. In NLP, the bottom-up approach would take such a long time before the knowledge system was rich enough to paraphrase text, or infer from newspaper articles, that a large amount of pre-programmed (but reusable) starting information would be a much more practical approach. With robotics, though, the amount of space required for a large pre-programmed knowledge base is huge, and with the space restrictions on a computer chip, large code segments are not an option. More importantly, the top-down, linear approaches are very easily subjected to exceptions that the knowledge base cannot handle, especially given a real world environment in which to operate. Since bottom-up approaches learn to deal with such exceptions and difficulties, the systems adapt, and are incredibly flexible in their execution.
      As stated, the bottom-up approach utilises parallel processing, and advanced data structures such as neural networking, evolutionary computing, and other such methods. The theory behind these ideas is too broad and is, aside from a brief introduction to the subject, beyond the boundaries of this essay. Instead, this essay deals with one of the computational models of the brain, Mentalese, and its parallel in computer science — conceptual representation.
      Despite the applications of the bottom-up and the top-down approach to different fields of NLP and robotics, both fields have a common problem — how to code commonsense into the program or robot, whether to use brute computation, or when to let the program learn for itself.
      Commonsense and General Knowledge
      Commonsense, or consensus reality, is an incredible obstacle that AI faces. Over the course of our childhood, millions of tiny ‘facts’ are gathered by us that are taken for granted later in life. Think about the importance of this for any program that is to exhibit general intelligence. It has been generally accepted by the AI community that any future parsing program will need command of general knowledge. Such is the opinion of the ParseTalk team:
       
      Dreyfus (a well-known sceptic of AI) says that a program can only be classified as intelligent after it has exhibited a general command of commonsense. For instance, a classic program designed to answer questions concerning a restaurant might be able to answer, "What food was ordered?" or, "How much did the person pay for his food?" Yet it could not answer "What part of the body was used to eat the food?" or "How many pairs of chopsticks were required to eat the food?" So much of our life requires us to relate to our commonsense that we never notice it. So, how could commonsense be coded into a computer?
       
      The top-down method relies on vast amounts of pre-programmed information, concepts, and other such symbols for the program to use. The bottom-up method uses techniques such as neural networking, evolutionary computing, and parallel processing to allow the program to adapt and learn from its environment. Classical AI chooses the top-down method for use in programs such as inference engines, expert systems and other such programs where learning knowledge over many years is not an option. The field of robotics often takes the bottom-up methodology, letting the systems get information from the ‘senses’ and allowing them to adapt to the environment.
      When looking at natural languages though, a top-down approach is often needed. This is due to several reasons, the first being that most modern day computers do not have the capabilities to learn through sensory experience. Secondly, a program that is sold with the intent to read and paraphrase large amounts of text will not be acceptable if it requires two years of continual running so it can learn the knowledge required before usage. Therefore, an incredible amount of pre-programmed information is required. The most comprehensive top-down knowledge base so far is the CYC Project.
      The CYC Project: The Top-Down Approach
      The CYC Project is a knowledge base (KB) being created by the Cycorp Corporation in Austin, Texas. CYC aims to turn several million general knowledge assumptions into a computable form. The project has been going on for about 14 years and over a million discrete concepts and commonsense assertions have been turned into CYC entities.
       
      A CYC entity is not necessarily limited to one word; often it represents a group of words, or a concept. Look at this example taken from the CYC KB:
      ;;; #$Food-ReadyToEat  
      (#$isa #$Food-ReadyToEat #$ProductType)
      (#$isa #$Food-ReadyToEat #$ExistingStuffType)
      (#$genls #$Food-ReadyToEat #$FoodAndDrink)
      (#$genls #$Food-ReadyToEat #$OrganicStuff)
      (#$genls #$Food-ReadyToEat #$Food)
      You can see how ‘food that is ready to eat’ is represented in CYC as a group of IS-A relationships and GENLS relationships. The IS-A relationships are an ‘element of’-relationship whereas the GENLS relationships are a ‘subset of’-relationship. This hierarchy creates a large linked concept for a very simple term. For example, the Food-ReadyToEat concept IS-A ExistingStuffType, and in the CYC KB, ExistingStuffType is represented as:
       
      ;;; #$ExistingStuffType
      (#$isa #$ExistingStuffType #$Collection)
      (#$genls #$ExistingStuffType #$TemporalStuffType)
      (#$genls #$ExistingStuffType #$StuffType)

        With the following comment about it: "…A collection of collections. Each element of #$ExistingStuffType is a collection of things (including portions of things) which are temporally and spatially stufflike; they may also be stufflike in other ways, e.g., in some physical property. Division in time or space does not destroy the stufflike quality of the object…" It is apparent how generic many of the concepts get as they rise in the CYC hierarchy.
       
      Evidently, such a huge KB would generate a large concept for a small entity, but such a large concept is necessary. For example, the CYC team created a sample program in 1994 that fetched images given search criteria. Given a request to search for images of seated people, the program retrieved an image with the following caption: "There are some cars. They are on a street. There are some trees on the side of the street. They are shedding their leaves. Some of them are yellow taxicabs. The New York City skyline is in the background. It is sunny." The program had deduced that cars have seats, in which people normally sit, when the car is in motion.
      COG: The Bottom-Up Approach
      After many years of GOFAI (Good Old Fashioned Artificial Intelligence) research, scientists started doubting the classical AI approaches. The MIT Cog Robot team eloquently put their doubts:
       
      The whole robot is designed without any complex systems modelling, or pre-programmed human intelligence. An impressive example of this is Cog’s ability to successfully play with a slinky. The robot’s arms are controlled by a set of self-adaptive oscillators. Each joint in the arm is actuated by an oscillator that uses local information to adjust the frequency and phase of the arm. None of the oscillators are connected, nor do any of them share information. When the arms are unconnected, they are uncoordinated, yet if a slinky is used to connect them, the oscillators adapt to the motion, and coordinated behaviour is achieved.
       
      You can see how simple systems can model quite complex behaviour — the problem with doing this is that it takes a long time for a system to get adjusted to its environment, just like a real human. This can prove impractical. So, in the field of NLP, a top-down approach is required most of the time. An exception would perhaps be a computer program that can learn a new language dynamically.
      With both approaches to common sense and general knowledge, there is one thing in common — the vast amount of knowledge required. A method of learning, storing, and retrieving all this information is also needed. A lot of this is automatically taken care of through the bottom-up approach. With the top-down approach, such luxuries are not present, everything has to be hard coded. For example, all the text written by the CYC project is useless unless a way can be created to conceptualise and link all the entities together. A computer cannot understand the entity #$ExistingStuffType as it stands. A program that parses the KB, and turns it into a data structure that the computer can manipulate is necessary. There is no set way of doing this — but one promising field of Artificial Intelligence exists for this purpose, that of Conceptual Representation.
      Conceptual Representation: Theory
      Conceptual Representation (CR) relies on symbolic data types called conceptual structures. A conceptual structure (now referred to as a C-structure) must represent the meaning of a natural language idiom in an unequivocal way. Roger Schank, one of the pioneers of CR, says the following:
       
      C-structures are created, stored, manipulated and interpreted within a CR program. In a typical CR program there are three parts — the parser and conceptualizer, another module that acts as an inference engine (or something similar), and finally a module to output the necessary information. The parser takes the natural language it is designed to parse and creates the C-structure primitives necessary. Then, the main program uses the concepts to either paraphrase the input text, draw inferences from the text provided or other similar functions. Finally, the output module will convert those structures back into a natural language —this does not necessarily have to be the same language the text was inputted in.
       
      Parsing
      A look at parsing and its two approaches is necessary at this point. Parsers generally take information and convert it into a data structure that the computer can manipulate. With reference to Artificial Intelligence, a parser is generally a program (or module of the program) that takes a natural language sentence and converts it into a group of symbols. There are generally two methods of parsing, bottom-up and top-down. The bottom-up method takes each word separately, matches the word to its syntactic category, does this for the following word, and attempts to find grammar rules that can join these words together. This procedure continues until the whole sentence is parsed, and the computer has represented the sentence in a well-formed structure. The top-down method, on the other hand, starts with the various grammar rules and then tries to find instances of the rules within the sentence.
       
      Here the bottom-up and top-down relationship is slightly different. Nevertheless, a parallel can be drawn if the grammar of a sentence can be seen as the base of language (like commonsense is the base of cognitive intelligence). Both approaches have problems largely due to the large amount of computational time both require. With the bottom-up approach, a lot of time is wasted looking for combinations of syntactic categories that do not exist in the sentence. The same problem appears in the top-down approach, although there it is the search for grammar rules that are not present in the sentence that wastes the time.
      So which method represents the human mind closer? Neither is perfect, because both methods simply involve syntactic analysis. Take these two examples:
      Carries’s box of strawberries was edible.
      Carrie’s love of Kevin was intense. If a program syntactically analyzed these two statements, it would come to the correct conclusion that the strawberries were edible, but the incorrect conclusion that Kevin was intense. Despite the syntactical structure of the two sentences being the exact same, the meaning is different. Nevertheless, even if a syntactical approach is used, it can be used to point the computer to the correct meaning. As you will see with conceptual representation, if prior knowledge is known about the word ‘love’ then the computer can create the correct data structure to represent the sentence. This still does not answer the question of what type of parser the brain is closer to. In Schank’s words, ‘Does a human who is trying to understand look at what he has or does he look to fulfill his expectations?’ The answer seems to be both; a person not only handles the syntax of the sentence, but also does a certain degree of prediction. Take the following incomplete sentence:
       
      John: I’ve played my guitar for over three hours and my fingers feel like ——  
      Looking at the syntax of the sentence it is easy to see that the next word will be a verb (‘dying’) or a noun (‘jelly’). It is easy, therefore, to predict the conceptual structure of a sentence. Problems arise when meaning also has to be predicted too. We have the problem of context, for instance. The fingers could be very worn out; they could be very callused from playing, or they could feel hot from playing for so long.
       
      Prediction is easier when there is more information to go on. For example, if John had said "and my poor fingers," from the context of the sentence we could have gathered that the fingers do not feel so good. This kind of prediction is called conversational prediction. Another type of prediction is based upon the listener’s knowledge. If the listener knows John to be an avid guitar player, then he might expect a positive comment, but if he knows John’s parents force him to play the guitar, the listener could expect a negative remark.
      All these factors are constituents when a human listens to someone talking. With all this taken into account, Schank sums up the answer the following way:
      Types of Concepts
      A concept can be any one of three different types — a nominal, an action, or a modifier. Nominals are concepts that do not need to be explained further. Schank refers to nominals as picture-producers, or PPs, because he says that nominals produce a picture relating to the concept in the mind of the hearer. An action is what a nominal can do, or more specifically, what an animate nominal can perform on some object. Therefore, a verb like ‘hit’ is classified as an action, but ‘like’ is not, since no action is performed. Finally, a modifier is a descriptor of a nominal or an action. In English, modifiers could be given the names adverbs and adjectives, yet since CR is supposedly independent of any language, the non-grammatical terms PA (picture aiders – for modifiers of nominals) and AA (action aiders – for modifiers of actions) are used by Schank.
       
      These three categories can all relate to each other; such relationships are called dependencies. Dependencies are well described by Schank:
      Therefore, nominals and actions are always governors, and the two types of modifiers are dependents. This does not mean, though, that a nominal or an action cannot also be a dependent. For instance, some actions are derived from other actions, take for example the imaginary structure CR-STEAL (conceptual type for stealing). Since stealing is really swapping of possession (with one party not wanting that change of possession), it can be derived from a simpler concept of possession change.
       
      C-Diagrams
      C-Diagrams are the graphical equivalent of the structures that would be created inside a computer, showing the different relationships between the concepts. C-Diagrams can get extremely complicated, with many different associations between the primitives; this essay will cover the basics. Below is an example of a C-Diagram:
       

      The above represents the sentence "John hit his little dog." ‘John’ is a nominal since it does not need anything further to describe it. ‘Hit’ is an action, since it is something that an animate nominal does. The dependency is said to be a ‘two-way dependency’ since both ‘John’ and ‘hit’ are required for the conceptualisation — such a dependency is denoted by a ⇔ in the diagram. ‘Dog’ is also a governor in this conceptualisation, yet it does not make sense within this conceptualization without a dependency to the action ‘hit.’ Such a dependency is called an ‘objective dependency’ — this is denoted by an arrow (the ‘o’ above it denotes the objectivity). Now all the governors are in place, and we have created "John hit a dog" as a concept. We have to further this by adding the dependencies — ‘little’ and ‘his’. Little is called an attribute dependency, since it is a PA for ‘dog’. Attributive dependencies are denoted by an ↑ in the diagram. Finally, the ‘his’ has to be added — since his is just a pronoun, another way of expressing John, you would think it would be dependent of ‘dog.’ It is not this simple, though, since ‘his’ also implies possession of ‘dog’ to ‘John.’ This is called a prepositional dependency, and is denoted by a ⇑, followed by a label indicating the type of prepositional dependency. POSS-BY, for instance, denotes possession.
      With all this in mind, let’s look at a more complicated C-Diagram. Take the sentence, "I gave the man a book." Firstly, the ‘I’ and ‘give’ relationship is a two-way dependency, so a ⇔ is used. The ‘p’ above the arrow is to denote that the event took place in the past. The ‘book’ is objectively dependent on ‘give’, so the arrow is used to denote this. Now, though, we have a change in possession; this is represented in the diagram by two arrows, with the arrow pointing toward the governor (the originator), and the arrow pointing away (the actor). The final diagram would look like:

      You can see, through conceptual representation, how a computer could create, store and manipulate symbols or concepts to represent sentences. How can we be sure that such methods are accurate models of the brain? We cannot be certain, but we can look to philosophy for theories that can support such computational, serial models of the brain.
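      As a rough illustration only (the types and field names below are invented, not Schank's notation), such conceptual structures might be laid out in code along these lines:

      #include <string>
      #include <vector>

      enum class ConceptKind { Nominal, Action, PictureAider, ActionAider };
      enum class DependencyKind { TwoWay, Objective, Attributive, Possessive };

      struct Concept {
          ConceptKind kind;
          std::string label;                       // e.g. "John", "hit", "little"
      };

      struct Dependency {
          DependencyKind kind;
          int from, to;                            // indices into Conceptualization::concepts
      };

      struct Conceptualization {
          std::vector<Concept> concepts;
          std::vector<Dependency> dependencies;
          bool past = false;                       // the 'p' tense marker
      };

      // "John hit his little dog" could then be built roughly like this:
      Conceptualization johnHitDog() {
          Conceptualization c;
          c.concepts = { {ConceptKind::Nominal, "John"},        // 0
                         {ConceptKind::Action,  "hit"},         // 1
                         {ConceptKind::Nominal, "dog"},         // 2
                         {ConceptKind::PictureAider, "little"}  // 3
                       };
          c.dependencies = { {DependencyKind::TwoWay,      0, 1},   // John <=> hit
                             {DependencyKind::Objective,   2, 1},   // dog --o--> hit
                             {DependencyKind::Attributive, 3, 2},   // little ^ dog
                             {DependencyKind::Possessive,  2, 0} }; // dog POSS-BY John
          return c;
      }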
      Computational Models of the Brain
      In order to explain the functioning of the brain, the field of philosophy has come up with many, many models of the brain. The ones that are of most interest to Artificial Intelligence are the computational models. There are quite a few computational models of the brain ranging from GOFAI approaches (like conceptual representation) to connectionism and parallelism, to the neuropsychological and neurocomputational theories and evolutionary computing theories (genetic algorithms).
       
      Connectionism and Parallelism
      Branches of AI started to look at other areas of neurobiology and computer science. The serial methods of computing were put to one side in favour of parallel processing. Connectionism attempts to model the brain by creating large networks of entities simulating the neurones of the brain — these are most commonly called ‘neural networks.’ Connectionism attempts to model lower-level functions of the brain, such as motion detection. Parallelism, otherwise known as PDP (parallel distributed processing), is the branch of computational cognitive psychology that attempts to model the higher-level aspects of the brain, such as face recognition or language learning. This is yet another instance of how the bottom-up approach is limited to the field of robotics.
       
      Introduction to Mentalese
      Despite this movement away from GOFAI by some researchers, the majority of scientists carried on with the classical approach. One such pioneer of the ‘computational model’ field was Jerry A. Fodor. He says the following:
       
      This quote is very reminiscent of conceptual representation and its methodology. Fodor argues that since sciences all have laws governing their phenomena, psychology and the workings of the brain are not an exception.
       
      Language of Thought and Mentalese
      Fodor was an important figure in the idea of a ‘language of thought.’ Fodor theorises that the brain ‘parses’ information and transforms it into a mental language of its own that it can subsequently manipulate and change more efficiently. When a person needs to ‘speak’, the brain converts the language it uses into a natural language. Fodor thought that this mental language went beyond natural language and extended to the senses too:
       
      Fodor called this ‘language of thought’ Mentalese. The theory of Mentalese says that thought is based upon language, that we think in a language not like English, French or Japanese but in the brain’s own universal language. Before studying the idea of Mentalese, the concepts ‘syntax’ and ‘semantics’ should be fully explained.
       
      Syntax and Semantics
      Syntactic features of a language are the words and sentences that relate to the form rather than to the meaning of the sentence. Syntax tells us how a sentence in a particular language is formed, how to form grammatically correct sentences. For example, in English the sentence, ‘Cliff went to the supermarket’ is syntactically correct whereas, ‘To supermarket Cliff the went’ makes no sense whatsoever. Semantics are the features of words that relate to the overall meaning of the sentence/paragraph. The semantics of a word also define the relations of the word to the real world, and also to its contextual place in the sentence.
       
      How are syntax and semantics related to symbols and representations? The syntax of symbols is a huge hierarchy: simple base symbols that represent simple representations, from which more complicated symbols are derived. The semantics of symbols is easy to explain — symbols are inherently semantic. Symbols represent and relate various objects to each other; therefore, when you hear or read a sentence it is broken up into symbols that are all related to one another.
      With these terms defined, we can see how Mentalese a) can be restated as a computational theory and b) supports the idea of conceptual representation. Mentalese is computational because ‘it invokes representations which are manipulated or processed according to formal rules.’ The syntax of Mentalese is just like the hierarchy of CR structures — with different, more complex structures derived from base structures. The theory even has similarities to the architecture of an entire CR program. The brain receives sentences and turns them into Mentalese, just like a CR program would parse a stream of text, and conceptualize it into structures within the computer that do not resemble (and are independent of) the language that the text was in. When the brain needs to retrieve this information, it converts the Mentalese back into the natural language needed, just like a CR program takes the concepts and changes them back into the necessary language.
      Earlier on, the idea of an implementing mechanism was introduced. How can such a mechanism be viewed within the Mentalese paradigm? The basic idea of an implementing mechanism (IM) is that lower-level mechanisms implement higher-level ones. In more generic terms, an IM goes further than the "F causes G" stage, to the "How does F cause G?" stage. Fodor says that an IM specifies ‘a mechanism that the instantiation of F is sufficient to set in motion, the operation of which reliably produces some state of affairs that is sufficient for the instantiation of G.’
      Which Model does the Brain More Closely Follow?
      After looking at all the examples of the different approaches, we are presented with the final question: which one best represents the human brain? In the field of NLP, it seems that the GOFAI, top-down approach is by far the best, whereas the fields of robotics and image recognition benefit from the connectionist, bottom-up approach. Despite this seemingly concrete conclusion, we are still plagued with a few problems.
       
      Consistency
      How many different types of cells are our brains composed of? Essentially, it just uses one type — the neurone. How can the brain both exhibit serial and parallel behaviours with only one type of cell? The obvious answer to this is that it does not. This is a fault in the overall integrity of both approaches.
       
      For example, we have parallel, bottom-up neural networks that can successfully detect pictures, but cannot figure out the algorithm for an XOR bit-modifier. We have serial, top-down CR programs that can read newspaper articles, make inferences, and even learn from these inferences — yet, such programs often make bogus discoveries, due to the lack of background information and general knowledge. We have robots that can play the guitar, walk up stairs, aid in bomb disposal, yet nothing gets close to a program with overall intellect equal to that of a human.
      Integration of the Fields
      What will happen when robotics reaches the stage where it is on the brink of real-time communication, voice-recognition and synthesis? The fields of NLP and robotics will have to integrate, and a common method will have to be devised. The question of what method to follow would simply arise again. Since the current breed of robots use parallel processing, future robots will no doubt use similar data systems to the ones today, and would not easily adapt to a serial, top-down approach to language. Even if a successful bottom-up approach is found for robots, will it be practical?
       
      Will humans tolerate ‘infant’ robots that need to be ‘looked after’ for the first 3 or 4 years of their life while they learn the language of their ‘parents’? For robots that are to immediately know a certain language (for instance, English) from the minute they are turned on, a top-down approach is bound to be necessary. No doubt a compromise system of language processing would have to be developed; perhaps the object-orientated system of ParseTalk could be promising, since OOP offers the advantages of serial programming with some of the advantages of parallel processing too.
      Top-Down Approach
      One of the best ways to support the top-down approach and its similarities to the brain is to look at just how similar Mentalese and conceptual representation are. Mentalese assumes that there is a language of the brain completely independent of the natural language (or languages) of the individual. CR assumes exactly the same thing, creating data structures with more abstract names that are independent of the language, but rely on the parser for their input. This can explain the ease with which programs utilising CR convert between two languages.
       
      Both the ideas of Mentalese and CR must have been formulated from the same direction and perspective of the brain. Both assume a certain independence that the brain has over the rest of the language/communication areas of the brain. Both assume that computational manipulation is performed only after the language has been transformed into this language of the brain. It is about this area that Mentalese and conceptual representation diverge — conceptual representation is limited to language (or has not yet been applied to other areas), whereas Fodor maintains that this mental language applies to cognitive and perceptive areas of the brain too.
      A fault in the Mentalese theory is that Fodor says Mentalese is universal. It seems hard to imagine that as we all grow up, learning how to see, hear, speak, and all the other activities we learn as a new-born, we all adopt the same language. A possible counter to this is that the language is already there — but this creates a lot of new complications. Nevertheless, this fault is not detrimental to its applications in computer processing, since nearly any base for an NLP program will require some prior programming (analogous to the pre-made rules of Mentalese).
      The COG Team also outlined three basic (but very important) faults with a GOFAI approach to AI:
      The team backs up their assertions with various results from psychological tests that have been performed. Such monolithic representations and controlling units underlie the theory of Mentalese and conceptual representation.
       
      Bottom-Up Approach
      The main advantage of the bottom-up approach is its simplicity (somewhat), and its flexibility. Using structures such as neural networking, programs have been created that can do things that would be impossible to do with conventional, serial approaches. For example, Hopfield networks can recognise partial, noisy, shrunken, or obscured images that it has ‘seen’ before relatively quickly. Another program powered by a neural network has been trained to accept present tense English verbs and convert them to past tense.
       
      The strongest argument for the bottom-up approach is that the learning processes are the same that any human undergoes when they grow up. If you present the robot or program to the outside world it will learn things, and adapt itself to them. Like the COG Team asserts, the key to human intelligence is the four traits they spelled out: developmental organisation, social interaction, embodiment, and integration.
      The bottom-up approach also models human behaviour (emotions inclusive) due to the chaotic nature of parallel processing and neural networking:
      The main downfall of the bottom-up approach is its practicality. Building a robot’s knowledge base from the ground up every time one is made might be reasonable for a research robot, but for (once created) commercial robots or even very intelligent programs, such a procedure would be out of the question.
       
      Conclusion
      In conclusion, a definite approach to the brain has not yet been developed, but the two approaches (top-down and bottom-up) describe different aspects of the brain. The top-down approach seems like it can explain how humans use their knowledge in conversation. What the top-down approach does not solve however, is how we get that initial knowledge to begin with — the bottom-up approach does. Through ideas such as neural networking and parallel processing, we can see how the brain could possibly take sensory information and convert it into data that it can remember and store. Nevertheless, these systems have so far only demonstrated an ability to learn, and not sufficient ability to manipulate and change data in the way that the brain and programs utilising a top-down methodology can.
       
      These attributes of the approaches lead to their dispersion within the two fields of AI. Natural Language Processing took up the top-down approach, since that had all the necessary data manipulation required to do the advanced analysis of languages. Yet, the large amount of storage space required for a top-down program and the lack of a good learning mechanism made the top-down approach too cumbersome for robotics. They adopted the bottom-up approach, which proved to be good for things such as face recognition, motor control, sensory analysis and other such ‘primitive’ human attributes. Unfortunately, any degree of higher-level complexity is very hard to achieve with a bottom-up approach.
      Now we have one approach modelling the lower level aspects of the human brain, and another modelling the higher levels — so which models the brain overall the best? Top-down approaches have been in development for as long as AI has been around, but serious approaches to the bottom-up methodologies have only really started in the last twenty years or so. Since bottom-up approaches are looking at what we know from neurobiology and psychology, not so much from philosophy like GOFAI scientists do, there may be a lot more we have yet to discover. These discoveries, though, may be many years in the future. For the meanwhile, a compromise should be reached between the two levels to attempt to model the brain consistently given the current technology. The object-orientated approach might be one answer, research into neural networks trained to create and modify data structures similar to those used in conceptual representation might be another.
      Artificial Intelligence is the science of the unknown — trying to emulate something we cannot understand. GOFAI scientists have always hoped that AI might one day explain the brain, instead of the other way around — connectionist scientists do not, and perhaps this is the key to creating flexible code that can react given any environment, just like the human brain — real Artificial Intelligence.
      Bibliography.
      Altman, Ira. The Concept of Intelligence: A Philosophical Analysis. Maryland: 1997.
      Brooks, R. A., Breazeal (Ferrell), C., Irie, R., Kemp, C. C., Marjanovic, M., Scassellati, B. & Williamson, M. M. (1998), Alternative Essences of Intelligence, in ‘Proceedings of the American Association of Articial Intelligence (AAAI-98)’.
      Brooks, R. A., Breazeal (Ferrell), C., Irie, R., Kemp, C. C., Marjanovic, M., Scassellati, B. & Williamson, M. M. (1998), The Cog Project: Building a Humanoid Robot.
      Churchland, Patricia and Sejnowski, Terrence. The Computational Brain. London: 1996.
      Churchland, Paul. The Engine of Reason, the Seat of the Soul: A Philosophical Journey into the Brain. London: 1996.
      Crane, Tim. The Mechanical Mind: A Philosophical Introduction to Minds, Machines and Mental Representation. London: 1995.
      Fodor, Jerry A. Elm and the Expert: An Introduction to Mentalese and Its Semantics. Cambridge: 1994.
      Hahn, Udo. Schacht, Susanne. Bröker, Norbert. Concurrent, Object-Orientated Natural Language Parsing: The ParseTalk Model. Arbeitsgruppe Linguistische Informatik/Computerlinguistik. Freiburg: 1995.
      Lenat, Douglas B. Artificial Intelligence. Scientific American, September 1995.
      Lenat, Douglas B. The Dimensions of Context-Space. Cycorp Corporation, 1998.
      Penrose, Roger. The Emperor’s New Mind: Concerning Computers, Minds and The Laws of Physics. Oxford: 1989.
      Schank, Roger. The Cognitive Computer: On Language, Learning and Artificial Intelligence. Reading: 1984.
      Schank, Roger and Colby, Kenneth. Computer Models of Thought and Language. San Francisco: 1973.
      Watson, Mark. AI Agents in Virtual Reality Worlds. New York: 1996.


    • By GameDev.net
      Symbolic AI Systems vs Connectionism
      Symbolic AI systems manipulate symbols, instead of numbers. Humans, as a matter of fact, reason symbolically (in the most general sense). Children must learn to speak before they are able to deal with numbers, for example. More specifically, these systems operate under a set of rules, and their actions are determined by these rules. They always operate under task-oriented environments, and are wholly unable to function in any other case. You can think of symbolic AI systems as "specialists". A program that plays 3D tic tac toe will not be able to play Pente (a game where 5 in a row is a win, but the players are allowed to capture two pieces if they are sandwiched by the pieces of an opposing player). Although symbolic AI systems can't draw connections between meanings or definitions and are very limited with respect to types of functionality, they are very convenient to use for tackling task-centred problems (such as solving math problems, diagnosing medical patients, etc.). The more flexible approach to AI involves neural networks, yet NN systems are usually so underdeveloped that we can't expect them to do "complex" things that symbolic AI systems can, such as playing chess. While NN systems can learn more flexibly, and draw links between meanings, our traditional symbolic AI systems will get the job done fast.
      An example of a programming language designed to build symbolic AI systems is LISP. LISP was developed by John McCarthy during the 1950s to deal with symbolic differentiation and integration of algebraic expressions.
       
      Production Systems
      (This model of production systems is based on chapter 5 of Stan Franklin's book, The Artificial Mind, the example of the 8-puzzle was also based on Franklin's example)
      Production systems are applied to problem-solving programs that must perform a wide range of searches. Production systems are symbolic AI systems. The difference between these two terms is only one of semantics. A symbolic AI system may not be restricted to the very definition of production systems, but they can't be much different either.
      Production systems are composed of three parts, a global database, production rules and a control structure.
      The global database is the system's short-term memory. These are collections of facts that are to be analyzed. A part of the global database represents the current state of the system's environment. In a game of chess, the current state could represent all the positions of the pieces for example.
      Production rules (or simply productions) are conditional if-then branches. In a production system, whenever a condition in the system is satisfied, the system is allowed to execute or perform a specific action which may be specified under that rule. If the rule is not fulfilled, it may perform another action. This can be simply paraphrased:
       
      WHEN (condition) IS SATISFIED, PERFORM (action)  
      A Production System Algorithm  
      DATA (bound to the initial global database)
      until DATA satisfies the halting condition do
      begin
          select some rule R that can be applied to DATA
          DATA (bound to the result of applying R to DATA)
      end

      For a scenario where a production system is attempting to solve a puzzle, pattern matching is required to tell whether or not a condition is satisfied. If the current state of a puzzle matches the desired state (the solution to the puzzle), then the puzzle is solved. However, if this is not the case, the system must attempt an action that will contribute to manipulating the global database, under the production rules, in such a way that the puzzle will eventually be solved.
       
      [Figure: the eight puzzle's initial state and final (goal) state]
      In order to take a closer look at control structures, let us look at a problem involving the eight puzzle. The eight puzzle contains eight numbered squares laid in a three-by-three grid, leaving one square empty. Initially it appears in some obfuscated state. The goal of the production system is to reach some final state (the goal). This can be obtained by successively moving squares into the empty position. This system changes with every move of a square; thus, the global database changes with time. The current state of the system is given as the position and enumeration of the squares. This can be represented, for example, as a 9-dimensional vector with components 0, 1, 2, 3, ..., 8, NULL, the NULL object being the empty space.
      In this puzzle, the most general production rule can be simply summed up in one sentence:
      If the empty square isn't next to the left edge of the board, move it to the left 
      However, in order to move the empty square to the left, the system must first make room for it to move. For example, from the initial state (refer to the figure above), the 1-square would be moved down one space, then the 8-square right one space, then the 6-square up one space, before the empty square can be moved left (i.e., a heuristic search). All of these sequences require further production rules. The control structure decides which production rules to use, and in what sequence. To reiterate, in order to move the empty square left, the system must check whether the empty square is toward the top, the middle, or the bottom before it can decide which squares can be moved where. The control structure thus picks the production rule to be used next in the production system algorithm (refer to the algorithm above).
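      For concreteness, here is a small hypothetical C++ sketch of the eight-puzzle state and the "move the empty square left" production described above (the names, and the use of 0 to stand in for NULL, are illustrative assumptions):

      #include <algorithm>
      #include <array>

      // The board as a nine-component vector; 0 stands in for the NULL (empty) square.
      using Board = std::array<int, 9>;

      static int EmptyIndex(const Board& b)
      {
          return static_cast<int>(std::find(b.begin(), b.end(), 0) - b.begin());
      }

      // Condition: the empty square is not against the left edge of the 3x3 grid.
      bool CanMoveEmptyLeft(const Board& b)
      {
          return EmptyIndex(b) % 3 != 0;
      }

      // Action: swap the empty square with the tile immediately to its left.
      void MoveEmptyLeft(Board& b)
      {
          int pos = EmptyIndex(b);
          std::swap(b[pos], b[pos - 1]);
      }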
      Another example of a production system can be found in Ed Kao's 3-dimensional tic-tac-toe program. The rules and conditions for the AI are conveniently listed, just as for the 8 puzzle.
      The questions that face many artificial intelligence researchers are: how capable is a production system? What activities can a production system control? Is a production system really capable of intelligence? What can it compute? The answer lies in Turing machines...
       
      Programs
      The programs below are examples of production systems.
       
      3D Tic Tac Toe - E. Kao
      PenteAI - J. Matthews

    • By GytisDev
      Hello,
      Without going into details, I am looking for articles, blogs, or advice about city-building and RTS games in general. I have tried to search for these on my own, but I would also like to see your input. I want to make a very simple version of a game like Banished or Kingdoms and Castles, where I would be able to place two types of buildings, make farms, and cut trees for resources while controlling a single worker. I have some trouble understanding how these games work in the back end: how data about the map and objects can be stored, how grids work, how to implement a work system (for example, a little cube (human) walks to a tree and cuts it), and so on. I am fairly confident in my programming capabilities for such a game. Sorry if I make any mistakes; English is not my native language.
      Thank you in advance.
    • By Descent
      Wow, what a wild game by GalaXa Games Entertainment Interactive. Play now... it's really fun, but IF you have epilepsy then don't play. It does not feature flashing pictures, but there's a lot of animated stuff that might get ya. Anyway, 4 levels, 2 endings, insane action, BY INFERNAL. Please play it, right nao! Also, nice MIDI music composed by me is in the game.
       
      demons.rar
    • By Armantium
      Killing Floor 2 has the best dismemberment/gore system by far.
      No other game comes even close: how enemies react, body parts that sever depending on where you shoot them, how dead bodies react, and so on.
      Is there an easy-to-set-up system for Unreal Engine 4, even with scaled-down features, that still offers the same visceral physicality?
    • By GameDev.net
      This is an extract from Practical Game AI Programming from Packt. Click here to download the book for free!
      When humans play games – like chess, for example – they play differently every time. For a game developer this would be impossible to replicate, so if writing an almost infinite number of possibilities isn't a viable solution, game developers have needed to think differently. That's where AI comes in. But while AI might seem like a very new phenomenon in the wider public consciousness, it has actually been part of the games industry for decades.
      Enemy AI in the 1970s
      Single-player games with AI enemies started to appear as early as the 1970s. Very quickly, many games were redefining the standards of what constitutes game AI. Some of those examples were released for arcade machines, such as Speed Race from Taito (a racing video game), or Qwak (a duck-hunting game using a light gun) and Pursuit (an aircraft fighter), both from Atari. Other notable examples are the text-based games released for the first personal computers, such as Hunt the Wumpus and Star Trek, which also had AI enemies.
      What made those games so enjoyable was precisely that the AI enemies didn't react like any that had come before them. This was because they had random elements mixed with the traditional stored patterns, creating games that felt unpredictable to play. However, that was only possible due to the incorporation of microprocessors that expanded the capabilities of a programmer at that time. Space Invaders brought the movement patterns and Galaxian improved on them and added more variety, making the AI even more complex. Pac-Man later brought movement patterns to the maze genre – the AI design in Pac-Man was arguably as influential as the game itself.
      After that, Karate Champ introduced the first AI fighting character and Dragon Quest introduced the tactical system for the RPG genre. Over the years, the list of games that have used artificial intelligence to create unique game concepts has expanded. All of that has essentially come from a single question: how can we make a computer capable of beating a human at a game?
      All of the games mentioned used the same method for their AI: the finite-state machine (FSM). Here, the programmer inputs all the behaviors that are necessary for the computer to challenge the player. The programmer defines exactly how the computer should behave on different occasions in order to move, avoid, attack, or perform any other behavior to challenge the player, and that method is used even in the latest big-budget games.
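      As a tiny hypothetical C++ sketch (not taken from the book), an FSM enemy boils down to an enumerated state plus hand-written transition rules:

      // A hand-authored finite-state machine for a simple enemy: every behaviour
      // and every transition between behaviours is written explicitly by the programmer.
      enum class EnemyState { Patrol, Chase, Attack };

      EnemyState NextState(EnemyState current, bool playerVisible, float distanceToPlayer)
      {
          switch (current)
          {
          case EnemyState::Patrol:
              return playerVisible ? EnemyState::Chase : EnemyState::Patrol;
          case EnemyState::Chase:
              if (!playerVisible)           return EnemyState::Patrol;
              if (distanceToPlayer < 2.0f)  return EnemyState::Attack;
              return EnemyState::Chase;
          case EnemyState::Attack:
              return (distanceToPlayer < 2.0f) ? EnemyState::Attack : EnemyState::Chase;
          }
          return current;
      }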
      From simple to smart and human-like AI
      One of the greatest challenges when it comes to building intelligence into games is adapting the AI movement and behavior in relation to what the player is currently doing, or will do. This can become very complex if the programmer wants to extend the possibilities of the AI decisions.
      It's a huge task for the programmer because it's necessary to determine what the player can do and how the AI will react to each of the player's actions, and that takes a lot of CPU power. To overcome that problem, programmers began to mix possibility maps with probabilities and use other techniques that let the AI decide for itself how it should react to the player's actions. These factors are important to consider when developing an AI that elevates a game's quality.
      Games continued to evolve and players became even more demanding. To deliver games that met player expectations, programmers had to write more states for each character, creating new and more engaging in-game enemies.
      Metal Gear Solid and the evolution of game AI
      You can start to see now how technological developments are closely connected to the development of new game genres. A great example is Metal Gear Solid; by implementing stealth elements, it moved beyond the traditional shooting genre. Of course, those elements couldn't be fully explored as Hideo Kojima probably intended because of the hardware limitations at the time. However, jumping forward from the third to the fifth generation of consoles, Konami and Hideo Kojima presented the same title, only with much greater complexity. Once the necessary computing power was there, the stage was set for Metal Gear Solid to redefine modern gaming.
      Visual and audio awareness
      One of the most important but often underrated elements in the development of Metal Gear Solid was the use of visual and audio awareness for the enemy AI. It was ultimately this feature that established the genre we know today as the stealth game. Yes, the game uses pathfinding and an FSM, features already established in the industry, but to create something new the developers took advantage of some of the most cutting-edge technological innovations. Of course, the influence of these features today extends into a range of genres, from sports to racing.
      After that huge step for game design, developers still faced other problems. Or, more specifically, these new possibilities brought even more problems. The AI still didn't react like a real person, and many other elements were required to make the game feel more realistic.
      Sports games
      This is particularly true when we talk about sports games. After all, interaction with the player is not the only thing that we need to care about; most sports involve multiple players, all of whom need to be ‘realistic’ for a sports game to work well.
      With this problem in mind, developers started to improve the individual behaviors of each character, not only for the AI that was playing against the player but also for the AI that was playing alongside them. Once again, Finite State Machines made up a crucial part of Artificial Intelligence, but the decisive element that helped to cultivate greater realism in the sports genre was anticipation and awareness. The computer needed to calculate, for example, what the player was doing, where the ball was going, all while making the ‘team’ work together with some semblance of tactical alignment. By combining the new features used in the stealth games with a vast number of characters on the same screen, it was possible to develop a significant level of realism in sports games. This is a good example of how the same technologies allow for development across very different types of games.
      How AI enables a more immersive gaming experience
      A final useful example of how game realism depends on great AI is F.E.A.R., developed by Monolith Productions. What made this game so special in terms of Artificial Intelligence was the dialog between enemy characters. While this wasn’t strictly a technological improvement, it was something that helped to showcase all of the development work that was built into the characters' AI. This is crucial because if the AI doesn't say it, it didn't happen.
      Ultimately, this is about taking a further step towards enhanced realism. In the case of F.E.A.R., the dialog transforms how you would see in-game characters. When the AI detects the player for the first time, it shouts that it found the player; when the AI loses sight of the player, it expresses just that. When a group of (AI generated) characters are trying to ambush the player, they talk about it. The game, then, almost seems to be plotting against the person playing it. This is essential because it brings a whole new dimension to gaming. Ultimately, it opens up possibilities for much richer storytelling and complex gameplay, which all of us – as gamers – have come to expect today.
       