task solving

Published March 23, 2022

I was talking about a universal problem-solving algorithm previously. I'm going to explain how solving basic physical tasks/problems should work. It's pretty simple stuff. Basically, a robot needs to figure out what the end result it's chasing looks like; usually that's an object in a certain position. It needs to mark that position, then mark the current position of the object, and finally calculate the intermediary steps between the two marked positions. So if a robot needs to open the door of a car that is currently closed, it needs to 'visualize' the door in the opened position first and calculate the intermediate steps between that position and the position the door is currently in.
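A minimal sketch of this mark-and-interpolate idea (all names here are mine, and plain linear interpolation stands in for a real motion planner that would respect joint limits and collisions):

```python
def plan_steps(current, goal, n_steps):
    """Return the intermediary states between the marked current state
    and the marked goal state. States here are just door angles in
    degrees; a real planner would output full poses."""
    step = (goal - current) / n_steps
    return [current + step * i for i in range(1, n_steps + 1)]

# The door is closed (0 degrees); the robot 'visualizes' it open
# (70 degrees) and computes the steps between the two marked positions.
waypoints = plan_steps(current=0.0, goal=70.0, n_steps=7)
print(waypoints)
```

The same call works in either direction, e.g. `plan_steps(70.0, 0.0, 7)` to close the door again.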

In order to achieve this task the robot needs to know that a car is made of parts; a sensory scan of a car from the outside will reveal a single object. So the robot needs to scan the car, identify it as a car, and match against it the default car components that it has in memory.
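Sketched in Python (the part list and function names are hypothetical): the scan yields one label, and memory supplies the default decomposition:

```python
# Hypothetical memory of default components per recognized object class.
DEFAULT_PARTS = {
    "car": ["body", "hood", "trunk", "wheel",
            "door_front_left", "door_front_right"],
}

def scan_and_decompose(label):
    """A sensory scan sees a single object; matching it against the
    component templates in memory reveals its parts."""
    return DEFAULT_PARTS.get(label, [label])  # unknown objects stay monolithic

print(scan_and_decompose("car"))
```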


Comments

JoeJ

There is a thin line between easy and hard problems here. For example, the task is to pick up a box and place it on a table. That's easy, and I'll be able to make my NPCs do this while looking natural. I think of it like the usual actions we have in a point-and-click adventure, besides the obvious locomotion functionality.

But let's change the example just slightly: there is a whole stack of boxes, and the AI has to figure out a target packing so all the boxes fit on the table. Or, to make it even harder, make the table another, larger box itself, or make the boxes different sizes. That's a kind of Tetris problem then, and true intelligence would be required. Or some specific training/algorithm, if we know about the expected problem in advance, like spatial packing stable under gravity in this case.
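To give a flavor of such a packing algorithm, here is the classic first-fit-decreasing heuristic, reduced to 1D widths (the numbers are illustrative; the real 3D, gravity-stable problem described above is much harder):

```python
def first_fit_decreasing(box_widths, table_width):
    """Pack box widths into rows of at most table_width each.
    Classic bin-packing heuristic: place each box (largest first)
    into the first row where it still fits."""
    rows = []
    for w in sorted(box_widths, reverse=True):
        for row in rows:
            if sum(row) + w <= table_width:
                row.append(w)
                break
        else:  # no existing row has room: start a new one
            rows.append([w])
    return rows

print(first_fit_decreasing([4, 8, 1, 4, 2, 1], table_width=10))
```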

For such scenes, we'll still need static data describing the solution of the problem, similar to animation. It could be manually created data: an ordered list of box target positions. Though, such a solution isn't truly dynamic and interactive. The player could take away one box, so the final arrangement as planned would no longer be stable. The NPC would fail at the task of playing a game of Jenga, probably looking as silly as running against walls or in circles.

No matter how far we get, this always remains a challenging constraint of game design: we must hide our restrictions as well as possible, while also maximizing the new options enabled by new technology. But usually, new options and new restrictions are closely related and sit right beside each other. In this sense, we have a much harder time than static media like books or movies.

March 23, 2022 05:56 PM
Calin

There is a whole stack of boxes, and the AI has to figure out a target packing so all the boxes fit on the table

I think you can split the work (that needs to be done to achieve/complete a complex task) into figuring out the solution and putting the solution into reality.

The video game scenario you're talking about is somewhat different from a real-life scenario, though. What's happening on a computer (a video game) is halfway imaginary; it's not 100% real. If you try to simulate complex/realistic thinking inside a game, you end up with an imaginary space (the mind of the character) simulating another imaginary space (the game environment around the character). In a real-life scenario you have just one imaginary world (the one in the mind of a robot) simulating the real world. The first case (a simulation duplicating/copying another simulation) is not intuitive; you have to figure out how the two simulations communicate (how the character gets to know what's around him). In the second case, duplicating the real world (creating a duplicate of the real world in the mind of the robot) is more intuitive and nothing new; we have been using cameras, sensors, etc. to sense the environment for some time now.

March 23, 2022 09:25 PM
JoeJ

I think you can split the work (that needs to be done to achieve/complete a complex task) into figuring out the solution and putting the solution into reality.

Sure, if you have a hard problem to solve, dividing it into smaller problems is the first thing to do.
But that's not always possible, and the problem still remains hard.
Further, you can assume that all the easy problems were already solved before, so you are left with only the hard stuff.
That's the reality, despite marketing always claiming things will now become easy with this or that new tech.

What's happening on a computer (a video game) is halfway imaginary; it's not 100% real.

But this changes as we replace smoke and mirrors with actual simulation. If the simulation is accurate, there is no longer a difference between reality and game. This is said from my perspective of developing robotic NPCs, which is the same thing as developing control systems for Boston Dynamics robots, assuming I don't want to cheat.
The outcome is still the same: a game with super powers, magic, warp drive, etc. But for the developers working on hard stuff like AI or robotics, the difference between fiction (or imagination) and reality decreases.

The first case (a simulation duplicating/copying another simulation) is not intuitive; you have to figure out how the two simulations communicate (how the character gets to know what's around him). In the second case, duplicating the real world (creating a duplicate of the real world in the mind of the robot) is more intuitive and nothing new; we have been using cameras, sensors, etc. to sense the environment for some time now.

That's the exception: for games, sensing the environment indeed remains easier, because we already have a precise model of the world in our computer.
Though, following the last discussion, maybe we need NPC cameras to simulate natural behavior regardless. And if so, calculating the image adds a cost for us, while real-world robots get it for free.
At least we don't need advanced computer vision algorithms, e.g. to label dogs and cats. That data we can still access directly in any case.

Still, we could of course learn from the robotics guys what they do with their sensor data, like we learn from the offline rendering guys.
But at the moment it's not that robots impress with intelligent behavior. They only impress with locomotion, like keeping their balance, etc. Impressive, but those robots are still dumb as fuck.
The ML hype so far gave us all kinds of gimmicks: generating random faces, turning some brush strokes into a detailed image, optimizing ads, chat bots and AI assistants. Nice stuff.
But no intelligence. We can only fake it; simulating it is not possible yet, because we don't know how it works.

To me it looks like this: we can have Boston Dynamics robots in our games, and we can use this tech also to model natural humans.
But we cannot simulate a mind with the ability to detect and solve general problems. That's still science fiction, no? We won't get there anytime soon. Or do you disagree?
So all we can and should think about are questions like: what can achievable technology add to games? Which restrictions does it lift, which problems does it introduce?

March 23, 2022 10:41 PM
Calin

Sure, if you have a hard problem to solve, dividing it into smaller problems is the first thing to do.

I think you can split the work that needs to be done into simple problems like walking or opening the door of a car, and problems that require strategic thinking. The problems in the first category require physics simulation; the second category is not physics-related, and I will label it strategic simulation.

You can't have physics simulation and strategic simulation at the same time in an artificial environment/computer simulation. Like, you can't simulate a scenario where you have tens/hundreds of people, cars and other objects, all driven by physics and interacting intelligently. Let me make it easier for you: you can't have a Starcraft game with a human-like AI and units with robotic/physics rigs at the same time, because it would mean reproducing the world 100% in an artificial environment.

Within an artificial environment you can either have a person (walking) or two (passing an object from one person to another, karate fighting, etc.) simulated realistically from the physics perspective, or you can have a human-like strategic mind managing toy 'items' (characters that are not driven by physics). Strategic thinking and physics are developed separately and meet/marry only in the body of a Terminator.

March 24, 2022 07:56 AM
Calin

This is a drawing I made, though I must admit there is a touch of speculation in it.

March 24, 2022 09:58 AM
JoeJ

I think you can split the work that needs to be done into simple problems like walking or opening the door of a car, and problems that require strategic thinking. The problems in the first category require physics simulation; the second category is not physics-related, and I will label it strategic simulation.

Such categories make sense from the design perspective, but they do not hold all the way down to implementation details. E.g., to open the door, I need to plan target poses to grab the door knob, predict trajectories from current positions and velocities, intersect trajectories to plan foot steps, etc. That's a lot of strategy on optimization problems, including many factors like external forces, joint limits of the body and the door, dynamic limits of keeping in balance, motion paths, etc. It's much more involved than the AI in an RTS.
On the other hand, if accurate simulation of reality is our goal, a simulation of a war scene also involves a lot of physics: projectile trajectories, environment like mud affecting the mobility of units, weather, etc.

But it's not clear how much realism and intelligence we want. If NPCs are realistic and intelligent, they also become harder to predict. That hinders the player, as all the complexity no longer allows for a simplified, abstract game with clearly defined challenges and options.
That's why many people are not at all excited about reality-simulation ideas for games. I understand this, but I think: RTS and FPS are both dead. We need something new, so let's explore and see what we get.

You can't have physics simulation and strategic simulation at the same time in an artificial environment/computer simulation. Like, you can't simulate a scenario where you have tens/hundreds of people, cars and other objects, all driven by physics and interacting intelligently.

That's a matter of creativity. Personally, I need the physics simulation mostly to prove my controllers are valid. If the ragdoll falls to the floor, I know my balance controller is not right. But after I know it is right, I can disable the physics simulation and use the controller to generate procedural animation instead. That's cheap, and can be used for distant characters, or if we need many of them.
Extending this idea to RTS scales, the challenge would be to use physics simulation only where needed, e.g. explosions, a plane crashing into a group of ragdolls, etc.
We could also have a large-scale, traditional abstract simulation where NPCs are just points for the computer. And if the player zooms in to a scene in more detail, we gradually increase realism and simulation: rigid body capsules → 6-body ragdolls like Rayman → 20-body ragdolls to represent a detailed human skeleton.
The idea of LOD for physics is quite unexplored, but it surely is possible. And the same applies to AI.
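A sketch of what such a physics LOD switch could look like (the distance thresholds and representation names are made up for illustration):

```python
# Hypothetical distance thresholds for each physics representation.
PHYSICS_LODS = [
    (10.0, "20-body ragdoll"),   # detailed human skeleton
    (50.0, "6-body ragdoll"),    # Rayman-style
    (200.0, "rigid capsule"),
]

def physics_lod(camera_distance):
    """Pick a physics representation by camera distance; beyond the
    last threshold the NPC is just an abstract point, as in an RTS."""
    for max_distance, representation in PHYSICS_LODS:
        if camera_distance <= max_distance:
            return representation
    return "abstract point"

print(physics_lod(5.0), "|", physics_lod(500.0))
```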

Now imagine you have a traditional RTS, and as you zoom in it becomes Call Of Duty. We'll get there, and Starcraft will look pretty limited and retro at this point. (Which does not mean people would not like to have some retro classic stuff too.)

Within an artificial environment you can either have a person (walking) or two (passing an object from one person to another, karate fighting, etc.) simulated realistically from the physics perspective, or you can have a human-like strategic mind managing toy 'items' (characters that are not driven by physics). Strategic thinking and physics are developed separately and meet/marry only in the body of a Terminator.

We can have both. There is nothing forcing us to choose.
But we cannot do all of this as a single person. Life is too short. However, this can be solved as usual; see the AAA industry using 500 people to make one game, including multiple AI and physics experts.
If we work alone, we have to choose. Agreed on that.

March 24, 2022 10:37 AM
JoeJ

This is a drawing I made, though I must admit there is a touch of speculation in it.

I don't get your point of why certain combinations should be impossible.
But I think it has to go the other way around, in this order, so each system is based on the former:

  1. physics simulation of the world
  2. strategy, and any other attempt at taking control on top of that. The output (e.g. turning motors on, using muscles, or just moving units) goes to step 1 in the next frame.

The point is that you cannot directly change the world at all. You can do so only indirectly, by applying forces.
And usually those forces must be internal forces. Applying external forces would make it easy to force my ragdoll to stay upright, but it would not be realistic.
Muscle force is internal; gravity or a thruster would be external. To make those external forces valid and legal, you either found them on some valid abstract model (fuel gives the thruster power), or you found them on an observation of nature (we do not model true gravity between all masses; instead we just approximate constant gravity from an imaginary Earth, as all other gravity forces are negligible in effect, but would make the simulation too expensive for no gain).
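The two-step order above can be sketched as a toy frame loop (a 1D "ragdoll" with unit mass; the gains and numbers are arbitrary). The controller may only output an internal, muscle-like force; only the physics step actually changes the world:

```python
GRAVITY = -9.81  # external force per unit mass, founded on observing nature

def controller(height, velocity):
    """Step 2: strategy layer. A spring-damper 'muscle' trying to
    hold the body at height 1.0; it outputs a force, nothing else."""
    return 80.0 * (1.0 - height) - 10.0 * velocity

def physics_step(height, velocity, force, dt=0.01):
    """Step 1: the world. The only place where state changes,
    driven by the sum of internal and external forces."""
    velocity += (GRAVITY + force) * dt
    height += velocity * dt
    return height, velocity

h, v = 0.5, 0.0
for _ in range(1000):
    f = controller(h, v)          # plan: output a force...
    h, v = physics_step(h, v, f)  # ...which feeds the next physics frame

print(round(h, 3))  # settles near 1 - 9.81/80, not exactly at 1.0
```

Note how gravity makes the controller miss its target slightly: the world pushes back, and the strategy layer can only compensate through forces, never by setting the state directly.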

The order given in your image implies you imagine dictating the world from strategic objectives. But it's the other way around: the world defines your options, and you can only choose from those.
Maybe that's just philosophy and differing points of view, though.

March 24, 2022 10:56 AM
Calin

I have no more interesting ideas to share or feedback to give at this time.

March 24, 2022 02:01 PM
Calin

If we talk about RTS, I can think of a meeting ground between a conventional AI (script-driven) and a human-mind-simulation AI within the same RTS game. The conventional AI would serve as a well-defined/controlled scenario against which the human-mind AI could be tested/built.

March 30, 2022 08:33 AM