Robotics talk


I’m looking to get some feedback on the topic of robots and senses. It looks like a robot in a simulated environment does not have the sense of sight; the only sense it does have is “touch”, which comes with the help of the physics engine. A real-life robot like Atlas appears to be sight-enhanced, so the question is: do robots like Atlas have distant terrain feature recognition?

My project's Facebook page is “DreamLand Page”


Calin said:
I’m looking to get some feedback on the topic of robots and senses.

Moved to the Lounge. Off-topic in Math And Physics.

-- Tom Sloper -- sloperama.com

Calin said:
It looks like a robot in a simulated environment does not have the sense of sight; the only sense it does have is “touch”, which comes with the help of the physics engine.

Physics engines give information about contacts for free. Besides that, you can trace rays or even sweep convex shapes to sense the environment, and you can do range queries (find all objects inside a given bounding box).
Current game AI builds on top of this functionality when sensing is needed.
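
To make that concrete, here is a minimal sketch of sensing built on a generic physics API. The PhysicsWorld interface and its raycast / queryBox calls are made-up placeholders, not any particular engine's API; real engines (Bullet, PhysX, Jolt, …) expose equivalent ray tests, shape sweeps, and overlap queries:

```cpp
// Hypothetical physics interface used only to illustrate sensing queries.
// The stub bodies stand in for engine-specific calls.
#include <cmath>
#include <optional>
#include <vector>

struct Vec3 { float x = 0, y = 0, z = 0; };

struct RayHit {
    Vec3  position;   // world-space hit point
    Vec3  normal;     // surface normal at the hit
    int   objectId;   // id of the object that was hit
    float distance;   // distance along the ray
};

class PhysicsWorld {
public:
    // Closest hit along a ray, if any (placeholder body).
    std::optional<RayHit> raycast(const Vec3&, const Vec3&, float) const { return std::nullopt; }
    // Range query: ids of all objects overlapping an axis-aligned box (placeholder body).
    std::vector<int> queryBox(const Vec3&, const Vec3&) const { return {}; }
};

// Line-of-sight test: trace toward the target and check whether the first
// thing hit is the target itself.
bool canSee(const PhysicsWorld& world, const Vec3& eye, const Vec3& targetPos, int targetId)
{
    const Vec3  d   = { targetPos.x - eye.x, targetPos.y - eye.y, targetPos.z - eye.z };
    const float len = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
    const Vec3  dir = { d.x / len, d.y / len, d.z / len };
    const auto  hit = world.raycast(eye, dir, len);
    return !hit || hit->objectId == targetId;
}

// Proximity sense: everything within `radius` of the agent, via a box query.
std::vector<int> senseNearby(const PhysicsWorld& world, const Vec3& pos, float radius)
{
    return world.queryBox({ pos.x - radius, pos.y - radius, pos.z - radius },
                          { pos.x + radius, pos.y + radius, pos.z + radius });
}
```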

Calin said:
the question is: do robots like Atlas have distant terrain feature recognition?

Idk, but i guess they have the ability to build a model of the scene by sensing color and depth. For games, the Kinect was an example of such a sensor.

Personally, i'll surely try the idea of generating small frame buffers for AI. I can reuse my GI data structures to do this efficiently. But if your agents are not that many, you could also generate a small GBuffer using traditional rendering.
The harder problem is interpreting such image data anyway, so i'd focus on that first and worry about performance only after it looks like a promising approach to give us better AI at all.
The good thing is that we can do this very accurately and don't have to worry about the errors of real-world sensors. Nor do we need computer vision algorithms to label objects in the image (e.g. enemy tank, turret, a dog, etc.).
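
A rough sketch of what such a per-agent mini GBuffer could store (all names here are made up, and how it gets filled, by a tiny render pass, ray casts, or a reused GI structure, is left open). The point above shows up directly in the data: every pixel carries the exact object id, so no computer-vision style labeling is needed:

```cpp
// Per-agent low-resolution view where each pixel stores depth plus the exact
// object id. Illustrative sketch only; types and sizes are assumptions.
#include <cstdint>
#include <vector>

struct AgentViewPixel {
    float    depth    = 1e30f;  // distance to the closest surface
    uint32_t objectId = 0;      // exact id of what is seen (0 = nothing)
    uint8_t  material = 0;      // optional: coarse material / terrain class
};

struct AgentView {
    int width  = 32;            // 32x32 is already a lot of information for AI
    int height = 32;
    std::vector<AgentViewPixel> pixels;

    AgentView() : pixels(width * height) {}
    AgentViewPixel&       at(int x, int y)       { return pixels[y * width + x]; }
    const AgentViewPixel& at(int x, int y) const { return pixels[y * width + x]; }
};

// Example query the AI can run directly on exact data: is any pixel showing
// a given object (e.g. an enemy) closer than some distance?
bool seesObjectWithin(const AgentView& view, uint32_t objectId, float maxDist)
{
    for (const AgentViewPixel& p : view.pixels)
        if (p.objectId == objectId && p.depth < maxDist)
            return true;
    return false;
}
```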

But i did not find much research on game AI based on visual input. It doesn't seem popular yet.
I can't wait to work on this and am considering a simplified prototype with a top-down 2D world. An agent's view would then be just a 1D array, e.g. with one pixel per degree, containing the whole environment.
This would be good enough for classic Doom or a top-down RTS, and little work. I just have no time for such fun, sigh. ; )
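
A sketch of that top-down prototype, under the stated one-sample-per-degree assumption; obstacles are reduced to circles just to keep the example self-contained:

```cpp
// Top-down 2D "vision": the agent's whole view is a 1D array with one sample
// per degree, each storing the distance to the closest obstacle in that
// direction and what was hit. Illustrative sketch with made-up types.
#include <array>
#include <cmath>
#include <vector>

struct Vec2   { float x, y; };
struct Circle { Vec2 center; float radius; int id; };

struct ViewSample {
    float distance = 1e30f;  // closest hit distance in this direction
    int   hitId    = -1;     // id of the hit obstacle, -1 = open space
};

using AgentView1D = std::array<ViewSample, 360>;  // one sample per degree

// Ray vs. circle intersection; returns the nearest positive hit distance.
// (Assumes the ray origin is outside the circle.)
static bool rayCircle(Vec2 origin, Vec2 dir, const Circle& c, float& tOut)
{
    const Vec2  oc   = { origin.x - c.center.x, origin.y - c.center.y };
    const float b    = oc.x * dir.x + oc.y * dir.y;
    const float q    = oc.x * oc.x + oc.y * oc.y - c.radius * c.radius;
    const float disc = b * b - q;
    if (disc < 0.0f) return false;
    const float t = -b - std::sqrt(disc);
    if (t < 0.0f) return false;
    tOut = t;
    return true;
}

AgentView1D scanEnvironment(Vec2 agentPos, const std::vector<Circle>& obstacles)
{
    AgentView1D view;
    for (int deg = 0; deg < 360; ++deg) {
        const float a   = deg * 3.14159265f / 180.0f;
        const Vec2  dir = { std::cos(a), std::sin(a) };
        for (const Circle& c : obstacles) {
            float t;
            if (rayCircle(agentPos, dir, c, t) && t < view[deg].distance) {
                view[deg].distance = t;   // keep the closest hit per direction
                view[deg].hitId    = c.id;
            }
        }
    }
    return view;
}
```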

  • Casting rays idea: to me, real-world sensors are pretty much the same invisible beams
  • Vision in games: I don’t think people care about vision when you can “cheat” and access position or other properties directly.

My project's Facebook page is “DreamLand Page”

Calin said:
Casting rays idea: to me, real-world sensors are pretty much the same invisible beams

The difference is that a single ray does not contain information about area or proximity.
How much sense can you make of a picture which has only one single pixel? How many rays do you need to tell whether you can see a partially occluded character or not?
If you look at a wall of rock, can some rays tell you if you might be able to climb it? Could you tell if you had a whole image of the rocks, including depth?

Calin said:
Vision in games: I don’t think people care about vision when you can “cheat” and access position or other properties directly.

It's not about cheating, but about simplicity. If you have a top-down game where you can represent each unit as a single point, you can have some tactics / strategy planning algorithm working on this minimal model.
An example strategy could be: traverse a path around some gate. Find nearby enemies. For each enemy, check visibility with a ray. If it's visible, attack; otherwise ignore the enemy.
That's what you mean by cheating, i guess. We cheat by finding nearby enemies while ignoring visibility, and then filter the result by visibility afterwards to remain fair and authentic if we want.
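
A small sketch of that “cheat, then filter” pattern. The two queries are passed in as callables because their real implementations are engine-specific; the names are assumptions, not an existing API:

```cpp
// Gather nearby enemies from global game state (the "cheat"), then keep only
// the ones that pass a per-enemy line-of-sight test so the result stays fair.
#include <functional>
#include <vector>

struct Unit { int id; float x, y, z; };

std::vector<Unit> visibleEnemies(
    const Unit& self,
    float radius,
    const std::function<std::vector<Unit>(const Unit&, float)>& findEnemiesInRadius,
    const std::function<bool(const Unit&, const Unit&)>& hasLineOfSight)
{
    std::vector<Unit> result;
    for (const Unit& enemy : findEnemiesInRadius(self, radius)) // global knowledge
        if (hasLineOfSight(self, enemy))                        // visibility filter
            result.push_back(enemy);
    return result;
}
```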

AI based on a visual sense can work the other way around: analyze the view; found an enemy? Attack.
The dependency on global and static data structures (for path finding) and algorithms (classes of tactical behavior) can be minimized.
So we get a step closer to emergent behavior, eventually.

I think visual AI becomes more attractive as the complexity of the world and characters increases. For me personally, a major argument is sensing proximity so i can plan step / grab placement for walking and climbing.
I need those things because i no longer work with characters which are actually just a point, traversing paths or coarse polygons, carrying some 3D model and animation with them to generate an illusion of life.
I have individual body parts, like a real robot, so sensing and interacting with the environment becomes much more of a problem and challenge.

There is AI Habitat, a system for training agents in simulated environments. It supports vision, physics, and audio senses.

  • Rays: the beam idea is an old belief of mine; now I know sensors are radio or radar based at their origin
  • Cheating: yeah, that’s what I call cheating

My project's Facebook page is “DreamLand Page”

When it comes to doing things the other way around, vision first, you can still use some of the old approach; basically you end up with a mixed model. In a top-down or isometric game the unit is basically a quad, so you can use its position for whatever operation is required, no need to recognize the unit every frame.

My project's Facebook page is “DreamLand Page”

Aressera said:
There is AI Habitat, a system for training agents with simulated environments. It supports vision, physics, and audio senses.

Oh, my understanding of how the hell Zuck manages to burn so much money on his desperate attempt to do the metaverse has just improved. ;D

Calin said:
When it comes to doing things the other way around, vision first, you can still use some of the old approach; basically you end up with a mixed model. In a top-down or isometric game the unit is basically a quad, so you can use its position for whatever operation is required, no need to recognize the unit every frame.

Yes. In games, sensing is much easier than in real-world robotics, and we also already have richer worlds with more options for interaction. And we can still do abstractions and tricks as before to keep the hard things easier.

Thus, games should be a major player in achieving progress in robotics-related AI. The real world does not even need this, but we do, imo.

Probably the major reason why this isn't happening is that game devs are not scientists, and are too used to faking stuff instead of simulating it.
That's a pity, because the costs are much lower than for e.g. large-scale fluid simulation or photorealistic lighting. No 2000-bucks GPU is needed to achieve progress, and we're talking about real progress enabling new games, not just better gfx.

However, my optimism regarding robotics is just founded on wishful thinking so far. I'm sure i can make it work, but idk what ‘new games’ to expect exactly, and which things that worked well with animation won't work with robotics at all.

JoeJ said:
Thus, games should be a major player in achieving progress in robotics-related AI. The real world does not even need this, but we do, imo. Probably the major reason why this isn't happening is that …

It may not be happening in games, but it's definitely happening in robotics: https://www.robocup.org/leagues/23 and https://www.robocup.org/leagues/11 (look under “leagues” from the main website).

The reason is that it's much, much cheaper than real-world robots, making it much more affordable to a larger group.

Also, you can drop most of the problems of keeping the hardware running reliably, so you can concentrate more on the AI domain.

