
## Programming the 5 Senses


13 replies to this topic

### #1 Tutorial Doctor

Posted 04 January 2014 - 03:29 PM

I need some feedback on something I call a Senses System. Someone said that the way I am doing it is wrong, so I need some suggestions on how to do it right. This is how it works:

The senses system is a physics based system. Each sense is made up of basically one sensing object, one function, and two variables. All senses will eventually be part of a Senses Class.

SIGHT
Sight is just a collision test between a cone primitive that represents the eyes and another object. If there is a collision between the cone and an object, a Boolean named seen is set to true. An argument is used so that you can easily make something see whatever object you pass in the parentheses. The transparency of the cone represents the quality of sight, the wideness of the cone represents the field of view, and the height scale of the cone represents nearsightedness or farsightedness. The eyes variable is the cone primitive.

Here is a pseudocode skeleton of sight:

```
function See(object)
{
    if (isCollisionBetween(eyes, object)) {
        seen = true;     // has been seen at least once
        seeing = true;   // currently in view
    } else {
        seeing = false;
    }
}
```

SMELL
Smell is also a collision test, represented by a sphere primitive; but instead of detecting a collision with an object, it detects a collision with a particle system. The number of particles that enter the sphere represents the strength of the smell. The type name of the particle system determines the type of smell. The nose variable is the sphere primitive.

```
function Smell(object)
{
    if (isCollisionBetween(nose, object)) {
        smelled = true;    // has been smelled at least once
        smelling = true;   // currently in range of the particles
    } else {
        smelling = false;
    }
}
```
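As an engine-agnostic sketch of the particle-counting idea in Python (the function names and the threshold are illustrative, not from any engine): smell strength is simply the number of particles currently inside the nose sphere.

```python
SMELL_THRESHOLD = 5  # illustrative: particles needed before "smelled" flips true

def particles_in_sphere(particles, center, radius):
    """Count particle positions that fall inside the nose sphere."""
    r2 = radius * radius
    return sum(
        1 for p in particles
        if sum((pc - cc) ** 2 for pc, cc in zip(p, center)) <= r2
    )

def smell_strength(particles, nose_center, nose_radius):
    """Strength of the smell = number of particles inside the sphere."""
    return particles_in_sphere(particles, nose_center, nose_radius)

def is_smelled(particles, nose_center, nose_radius):
    """The smell registers once enough particles have drifted in."""
    return smell_strength(particles, nose_center, nose_radius) >= SMELL_THRESHOLD
```

Because the test only counts particles, wind support comes for free: if the engine's wind blows the particle system away from the nose, the count (and so the strength) drops.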

TOUCH
Touch is also a collision test. It requires no primitive shapes; the character or object itself detects the collision. It can be used for multiple game objects:

```
function Touch(object1, object2, object3)
{
    // the object itself (toucher) does the collision test; no primitive needed
    if (isCollisionBetween(toucher, object1)) {
        touched = true;    // has been touched at least once
        touching = true;   // currently in contact
    } else {
        touching = false;
    }
}
```

HEARING
I decided to make hearing work a little differently. Hearing is just a volume and distance test: it checks the distance between the hearing object and the sound object, and whether the volume is above or below a certain value.

```
function Hear(object)
{
    // the volume and distance thresholds (30 and 3) are example values
    if (volume >= 30 && distance <= 3) {
        heard = true;      // has been heard at least once
        hearing = true;    // currently audible
    } else {
        hearing = false;
    }
}
```
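A runnable sketch of that volume-and-distance test in Python, using the example thresholds from the pseudocode (30 for volume, 3 units for distance):

```python
import math

def can_hear(listener_pos, sound_pos, volume, min_volume=30, max_distance=3.0):
    """Hearing as a volume-and-distance test, per the description above.

    The thresholds default to the illustrative values from the pseudocode:
    the sound must be loud enough AND close enough to register."""
    distance = math.dist(listener_pos, sound_pos)
    return volume >= min_volume and distance <= max_distance
```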

TASTE
Taste works just like touch, except it has a primitive shape (the tongue) that does the collision with another game object.

```
function Taste(object)
{
    if (isCollisionBetween(tongue, object)) {
        tasted = true;     // has been tasted at least once
        tasting = true;    // currently in contact
    } else {
        tasting = false;
    }
}
```

This system is easy to implement in any language or game engine that has particle systems, collisions, and sounds. I was even able to implement it in the Little Big Planet game using their tag system. When there was a collision, it would display a speech bubble that said "Seen," "Heard," "Touched," etc.

Update:

Okay, so this was so easy to implement. Only took about 3 lines of code:

```lua
if isCollisionBetween(eyes, box) then
    setText(text, "I see a " .. readout)
else
    setText(text, "I see nothing")
end
```


```lua
function See(object)
    if isCollisionBetween(eyes, object) then
        setText(text, "I see a " .. readout)
    else
        setText(text, "I see nothing")
    end
end
```


The above code is how the final function looks. And the way you use it is simple:

```lua
See(box)
```


Edited by Tutorial Doctor, 09 January 2014 - 10:59 PM.

They call me the Tutorial Doctor.

### #2 richardurich

Posted 04 January 2014 - 04:09 PM

Are you accounting for obstructions like a wall preventing you from seeing, smelling, etc. what is on the other side of the wall? I can't tell since you might or might not have it in your collision detection. Other than that, the only issue I see is that your solution won't scale very well. There's nothing wrong with poor scaling since it's always a balancing act between how detailed you want versus how many objects you can support.

You're also not supporting things like the wind that impacts how far you can smell or a sound masking another sound. None of that necessarily matters since you always have to choose what details to ignore.

### #3 ferrous

Posted 05 January 2014 - 12:45 AM

Yeah, the system won't scale well.

I had a system for sight that was also slow, but did account for obstruction. I would basically render to a texture from the viewpoint of the character, but with the objects color-coded. (And in a game with high-poly objects, one could render them as low-poly versions of themselves.) Then I would check that texture for what the character could see. It wasn't too bad as long as the render texture was small and the number of objects was small, but it obviously wouldn't scale well for real-time games. (I could get away with it because I was using it for a turn-based game; I was mimicking the TLOS, true line of sight, systems in various tabletop games like Warhammer 40k.)
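A tiny sketch of that ID-buffer readback in Python rather than engine code (a nested list stands in for the small render texture; the id values are illustrative):

```python
def visible_objects(id_buffer):
    """Scan a low-res render from the character's viewpoint.

    Each pixel holds the id of the color-coded object drawn there
    (0 = background), so any non-zero id that survived the depth test
    appears in the buffer and is therefore visible, occlusion included."""
    return {obj_id for row in id_buffer for obj_id in row if obj_id != 0}

# a 4x4 render target: objects 3 and 7 are visible, anything fully
# hidden behind them never got a pixel and so never shows up
frame = [
    [0, 0, 3, 3],
    [0, 7, 3, 3],
    [0, 7, 3, 3],
    [0, 0, 0, 0],
]
```

The expensive part in the real system is the render itself; the scan over a small texture is cheap.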

### #4 ferrous

Posted 05 January 2014 - 12:47 AM

> You're also not supporting things like the wind that impacts how far you can smell

Actually, if his particle system is affected by the wind, that would work for smell.

### #5 Tutorial Doctor

Posted 05 January 2014 - 07:35 PM

Thanks! I didn't consider obstruction for sight, but that should be easy, since I can check collisions against a wall object, and if a wall is seen, then make the object not seen.
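A 2D sketch of that wall check (walls stored as line segments; every name here is illustrative, not from any engine): the object counts as seen only if the segment from the eyes to it crosses no wall. The intersection test ignores exactly-collinear touches for brevity.

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (2D points)."""
    def cross(o, a, b):
        # z-component of (a - o) x (b - o): which side of line o-a is b on?
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])
    d1 = cross(p3, p4, p1)
    d2 = cross(p3, p4, p2)
    d3 = cross(p1, p2, p3)
    d4 = cross(p1, p2, p4)
    # the segments cross when each straddles the other's supporting line
    return ((d1 > 0) != (d2 > 0)) and ((d3 > 0) != (d4 > 0))

def line_of_sight(eye, target, walls):
    """Target is occluded if the eye-to-target segment hits any wall."""
    return not any(segments_intersect(eye, target, a, b) for a, b in walls)
```

In 3D engine terms this is just a raycast from the eyes to the candidate object; most physics engines expose one directly.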

In the Little Big Planet game, I was able to put tags on every object along with a label (this is sort of how we do it in real life), so it actually looks like the character is seeing, hearing, touching, etc., because the puppet was getting all sorts of collision flags going off.

When he was touching the ground, a "ground" label kept going off.

The cool thing about wind is that it also looks like he is sensing real wind, because the particle count determines how strongly he can smell it. If the number of particles in the sphere reaches a certain threshold, it is "smelled." Each particle system also has a label.

However, I really need to think about this obstruction thing more, because I can see where things won't work. I'll post an update when I think of a way.

What does it mean that it won't "scale" well? I need to work on that too.


### #6 Bacterius

Posted 05 January 2014 - 07:56 PM

Not strictly feedback, but this is very interesting. I've been thinking about something similar recently (though only for vision at the moment), and I'd be really happy to see the results you arrive at. Please do update us on your progress through this thread or a developer journal.

“If I understand the standard right it is legal and safe to do this but the resulting value could be anything.”

### #7 Tutorial Doctor

Posted 05 January 2014 - 08:05 PM

Good idea, Bacterius! I can do a developer journal. I didn't think about that.

I want it to be sort of uncanny. Right now it works surprisingly well, and actually feels more like an AI system. But now that I think of it, all of our intelligence is processed in a labeling sort of way.

I was also thinking about letting the character use the material properties of an object to describe it. So not only can it detect the object, but it can form a description of it based on its material. Hmm. Getting more ideas as I type.


### #8 richardurich

Posted 06 January 2014 - 12:31 AM

> What does it mean that it won't "scale" well? I need to work on that too.

It means you won't be able to use this solution to handle real-time calculations for large numbers of interactions. Like you probably wouldn't be able to have 1000 different people smelling and seeing 1000 different objects and still be generating results quickly enough to get 60+ fps. As I said, that's not really a bad thing. It is just the trade-off you are making.

### #9 Tutorial Doctor

Posted 06 January 2014 - 07:18 AM

Oh okay. Hmm, I need to ask the developer of the engine I'm using about this then. There is a feature in the engine that lets you make objects a "ghost object." As a ghost object, they pass through other objects but can still detect collisions. I have used ghost objects in the past to check collisions.

His engine does run at a smooth 60 fps for most things I have tried so far. I'll see what he says about the scaling. Thanks for the tip.


### #10 Álvaro

Posted 06 January 2014 - 09:47 AM

> > What does it mean that it won't "scale" well? I need to work on that too.
>
> It means you won't be able to use this solution to handle real-time calculations for large numbers of interactions. Like you probably wouldn't be able to have 1000 different people smelling and seeing 1000 different objects and still be generating results quickly enough to get 60+ fps. As I said, that's not really a bad thing. It is just the trade-off you are making.

The sense of smell doesn't have terribly good time resolution, so it's probably OK if you only update an agent's smelling input every couple of seconds.

Here's another idea for smell: Make it closer to reality, by having objects drop chemicals on the scene and agents pick them up. Since smell also doesn't have terribly good spatial resolution, you can divide your scene into chunks (1m x 1m square blocks, say) and keep a small array of densities in each chunk, each density corresponding to a different chemical. Objects can drop chemicals, chemicals can decay over time, they can be carried by wind... Then agents simply look at the densities in the chunk where they stand. This is a classic space-time tradeoff, where the use of the additional data structure allows you to scale linearly with the sum of the number of objects and agents, instead of their product.
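A minimal sketch of that chunked scent field in Python (chunk layout, decay rate, and chemical names are all illustrative):

```python
class SmellGrid:
    """Per-chunk chemical densities: objects deposit, agents sample.

    Cost per frame is O(objects + agents), not O(objects * agents),
    because agents never look at objects directly, only at their chunk."""

    def __init__(self, width, height, chemicals):
        self.cells = [[dict.fromkeys(chemicals, 0.0) for _ in range(width)]
                      for _ in range(height)]

    def deposit(self, x, y, chemical, amount):
        """An object drops some chemical into the chunk it stands in."""
        self.cells[y][x][chemical] += amount

    def decay(self, rate=0.5):
        """Scents fade over time; run this every couple of seconds."""
        for row in self.cells:
            for cell in row:
                for chem in cell:
                    cell[chem] *= rate

    def sample(self, x, y):
        """An agent reads the densities of the chunk it stands in."""
        return dict(self.cells[y][x])
```

Wind would be one more pass over the grid, shifting some fraction of each cell's densities into the downwind neighbor.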

How many chemicals to keep track of would depend a lot on the game. The coolest application of this to a game I can think of is dogs: detection dogs (they detect explosives, drugs and blood), rescue dogs (they can find survivors after a disaster) and tracking dogs (which can track individual people).

### #11 Tutorial Doctor

Posted 09 January 2014 - 11:12 PM

> Yeah, the system won't scale well.

I just updated the post. It seems to work really fast with the "ghost mode" built into the engine. Of course, right now I am only seeing one object. I wonder how I would construct the See() function so that I don't have a big switch statement for all objects that can be seen. I think I could have the cone see everything, but then it would be sensory overload. (I don't think it would be enough to crash the engine, though. It would just be too much to deal with to have it see everything in the game.) I do understand that this is how our eyes work, but it would cause unnecessary memory use in the game.

I am sure this would have to end up being a class, but even then, how would I construct the class in the most efficient way so as to make the cone able to dynamically see things?

Doesn't seem hard, just thinking of the easiest and most efficient way at the moment.

I am using Lua (which uses tables).


### #12 Tutorial Doctor

Posted 10 January 2014 - 05:55 PM

After watching a few lectures on artificial intelligence, it seems that this system can easily become an AI system. I have already thought about some cool interactions that can be done. For instance, I made my character turn to the left when it sees the box, so the character is sort of avoiding an object. In the lecture they said that if something can sense as well as produce a reaction to what it senses, it has artificial intelligence. Of course, adding learning and such will make it even more interesting, and I think I can create a simple system for learning using tags or labels. I also thought of a way to get objects to anticipate a situation and react accordingly. I haven't thought of a way to do inferences, but hopefully I can think of one.

Edit: I was able to get my character to jump on sight of the box!

Edited by Tutorial Doctor, 10 January 2014 - 05:58 PM.


### #13 ferrous

Posted 10 January 2014 - 06:17 PM

> > Yeah, the system won't scale well.
>
> I just updated the post. It seems to work really fast with the "ghost mode" built into the engine. Of course, right now I am only seeing one object. I wonder how I would construct the See() function so that I don't have a big switch statement for all objects that can be seen. I think I could have the cone see everything, but then it would be sensory overload. (I don't think it would be enough to crash the engine, though. It would just be too much to deal with to have it see everything in the game.) I do understand that this is how our eyes work, but it would cause unnecessary memory use in the game.
>
> I am sure this would have to end up being a class, but even then, how would I construct the class in the most efficient way so as to make the cone able to dynamically see things?
>
> Doesn't seem hard, just thinking of the easiest and most efficient way at the moment.
>
> I am using Lua (which uses tables).

To really test it, you'd need to add more and more objects, and more and more agents. Depending on where you eventually want to go, this might not be a big issue. For example, the original Carnage Heart was an AI programming game and only had six agents in a battle at once. With only six vision-cone checks of varying complexity, you could probably keep a decent framerate.

And your system sounds similar to view frustums, so you might look into various culling routines, even a scenegraph or octree.

That way you could narrow down which objects you actually need to do the expensive collision checks on, and which you can trivially reject as being out of sight. You might also want to move to a fuzzy system if you are modelling human sight; for instance, we have peripheral vision, which is very good at sensing movement but terrible at everything else.

Granted, those are steps I would take only when you get to them, i.e., when you actually start having trouble with your framerate.
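As a 2D illustration of the trivial-reject idea (the angle and range thresholds are made up): a cheap distance-plus-angle test that filters candidates before any expensive collision or occlusion work.

```python
import math

def in_view_cone(eye, facing, target, fov_deg=60.0, max_dist=20.0):
    """Cheap trivial reject: is target within range and inside the cone
    of half-angle fov_deg/2 around the facing direction?"""
    vx, vy = target[0] - eye[0], target[1] - eye[1]
    dist = math.hypot(vx, vy)
    if dist > max_dist:
        return False          # too far to see at all
    if dist == 0:
        return True           # standing on the target
    fx, fy = facing
    fmag = math.hypot(fx, fy)
    # compare the angle to the target against the cone half-angle via cosines
    cos_to_target = (vx * fx + vy * fy) / (dist * fmag)
    return cos_to_target >= math.cos(math.radians(fov_deg / 2.0))
```

Only the objects that pass this test (for instance, the candidates an octree query returns) would go on to the full collision or line-of-sight check.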

### #14 Tutorial Doctor

Posted 10 January 2014 - 06:35 PM

Thanks for the tips, ferrous. I can already see possible adaptations of this system to make it more realistic. I thought about peripheral vision; I might do it the same way I do the quality of sight (using the transparency of the cone). I could also scrap the whole primitive thing and do some vector math as well, though I'm trying to stay away from that right now, because primitives are already mathematical shapes; I just need their volume, not their actual faces (which I think the ghost mode in this engine is for).

I have even been thinking a bit about conic sections and such. But I am trying to keep things simple and straightforward right now.

I am going to look up the terms you posted though. Thank you. I have also started a journal. I will put this idea in an entry so that I can update my findings from there.

