Intia Processor

6 comments, last by kvp 17 years, 9 months ago
http://www.aiseek.com/Intia.html Comments please..
The game industry is not ready for such a processor, and this processor is not ready for games. Games today have very different AI needs, and providing a common implementation is not yet possible. The site keeps talking about pathfinding, but AI is much more than that; in fact, pathfinding isn't always even considered AI. If we really wanted an AI processor, we would have to go to a much lower level, where all games could use the functionality provided. The problem is that, given the lack of standard AI today, it would just look like a SIMD processor and most likely wouldn't be worth it because of the bus. An AI processor could offer some improvements, though: for instance, it could have a concept similar to a pixel/vertex shader, just for individual actors (rough sketch of what I mean below). Terrain analysis can't be implemented in the processor yet, though.
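To make that "actor shader" idea concrete, here is a rough sketch of what I have in mind; the structures and the kernel are entirely made up, nothing AIseek has described:

// Rough sketch of an "actor shader": a small per-actor kernel the hardware
// could run in parallel, the way a vertex shader runs per vertex. Names and
// structure here are invented purely to illustrate the idea.
#include <cmath>

struct ActorState   { float x, y, heading, health; };
struct ActorInputs  { float target_x, target_y; bool enemy_visible; };

// One invocation per actor, no shared mutable state between actors,
// so the hardware could run them all in parallel like vertices in a shader.
ActorState actor_shader(const ActorState& in, const ActorInputs& sense)
{
    ActorState out = in;
    if (sense.enemy_visible && in.health < 25.0f)
        out.heading += 3.14159f;                         // about-face and flee
    else
        out.heading = std::atan2(sense.target_y - in.y,  // steer toward target
                                 sense.target_x - in.x);
    return out;
}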
Quote: the Intia’s pathfinding uses no heuristics, thereby guaranteeing that the optimal path will always be found. This optimality also means that the Intia processor avoids the common pitfalls of A*, including failures to find a path when one exists, and the generation of “artifacts” (e.g., weird, unrealistic paths). If a path exists, the Intia processor will always find it


Do they even know what they are talking about? Not only does A* always return a path if one exists, it also returns the optimal path, provided the heuristic is admissible!
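For reference, here is a bare-bones A* on a grid (my own sketch, not anything from their material); with an admissible heuristic such as Manhattan distance it returns an optimal path, and it reports failure only when no path exists:

// Minimal A* on a 4-connected grid (sketch). With an admissible heuristic
// (here Manhattan distance, which never overestimates), the returned path
// length is optimal, and a path is found whenever one exists.
#include <climits>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int x, y, g; };            // g = cost from the start so far
struct ByF  {                            // order the open list by f = g + h
    int gx, gy;
    int h(const Node& n) const { return std::abs(n.x - gx) + std::abs(n.y - gy); }
    bool operator()(const Node& a, const Node& b) const {
        return a.g + h(a) > b.g + h(b);  // min-heap on f
    }
};

// grid[y][x] == 1 means blocked. Returns the optimal path cost, or -1 if
// no path exists (the search exhausts the open list).
int astar(const std::vector<std::vector<int>>& grid,
          int sx, int sy, int gx, int gy)
{
    const int W = (int)grid[0].size(), H = (int)grid.size();
    std::vector<std::vector<int>> best(H, std::vector<int>(W, INT_MAX));
    std::priority_queue<Node, std::vector<Node>, ByF> open(ByF{gx, gy});
    open.push({sx, sy, 0});
    best[sy][sx] = 0;
    const int dx[4] = {1, -1, 0, 0}, dy[4] = {0, 0, 1, -1};
    while (!open.empty()) {
        Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return n.g;   // optimal, by admissibility
        if (n.g > best[n.y][n.x]) continue;       // stale open-list entry
        for (int i = 0; i < 4; ++i) {
            int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= W || ny >= H || grid[ny][nx]) continue;
            if (n.g + 1 < best[ny][nx]) {
                best[ny][nx] = n.g + 1;
                open.push({nx, ny, n.g + 1});
            }
        }
    }
    return -1;                                    // no path exists
}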

Basically they have nebulous pathfinding, nebulous ray-casting, and nebulous terrain analysis, the last of which seems barely different from their pathfinding feature.

Then, their "White Paper" is not a white paper at all but an almost-exact replica of their features web page.

Fast ray-casting itself would be cool, but these three features do not justify buying a special chip, and the whole thing doesn't look very serious to me. Frankly, I don't think AI will ever take the "specialised hardware" path.
I have to agree that the features they talk about are vague at best. It feels more like a marketing statement for a possible upcoming product still in development. It is really hard to tell what they are actually selling with such vague examples. More detail about the hardware architecture and the SDK must be presented before any fair evaluation of its worth can be made. Right now, I see it as nothing more than marketing spin.

As for the stuff they talk about, it almost seems like they're saying, "Look, our processor is so fast that we don't need any heuristic-based algorithm; brute force works just fine." But then my question would be: if brute force is already that fast, wouldn't it be even faster with heuristic methods on top? They also talk about how fast the processor runs and quote actual timings for its checks, but give no details as to how those results were measured. Are those theoretical numbers? What would happen if visibility and dynamic pathfinding were run simultaneously? What's the overhead of doing that? And with visibility checks of 512 against 512 taking 0.02s, wouldn't that mean you can only do that 50 times a second (back-of-the-envelope math below)? Add other things we might want to do, and even if we scale down the number of objects, can we expect to do all of that for every character every frame and still run at 60fps?
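Here is that back-of-the-envelope math, using only the 0.02s figure they quote:

// Rough budget math on their quoted figure: one 512-vs-512 visibility batch
// in 0.02 s. At that rate you get 50 batches per second, i.e. less than one
// batch per frame at 60 fps, before any pathfinding or bus overhead.
#include <cstdio>

int main() {
    const double batch_seconds      = 0.02;                       // their figure
    const double batches_per_second = 1.0 / batch_seconds;        // 50
    const double frame_seconds      = 1.0 / 60.0;                 // ~0.0167 s
    const double batches_per_frame  = frame_seconds / batch_seconds; // ~0.83
    std::printf("%.0f batches/s, %.2f batches per 60 fps frame\n",
                batches_per_second, batches_per_frame);
    return 0;
}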

There are just too many "if"'s right now based on their "description." So, is it interesting? Yes. Should we have any sort of expectation? Probably not.
It seems like they aren't targeting all of AI, just a couple of areas they can make faster. That seems like a good idea: it doesn't cover every aspect of AI, but it does cover the repeatable ones such as pathfinding and line-of-sight.

Quote: From the FAQ: What AI routines are accelerated by AIseek’s Intia processor?
AIseek’s Intia processor accelerates low-level AI tasks. The routines accelerated include movement (in particular, pathfinding and group movement); sensory simulation (in particular, line-of-sight) and terrain analysis.
Sure, we'd just love to outsource AI to another processor. That means lots and lots of delicious latency right in the middle of the code I wanted to go faster...

No, sorry, I can't see this catching on. The physics processor seems to have run into the same problem: the performance just doesn't improve much at all. Sure, a specialized CPU is faster, but having to send data over a bus and then patiently wait for it to come back can kill most of the performance advantage (a sketch of what I mean below).
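To illustrate, here is a purely hypothetical synchronous offload; every aipu_* call below is invented for the sake of the example:

// Hypothetical coprocessor offload, synchronous style. The aipu_* calls are
// invented for illustration only; stubs stand in for real driver calls.
struct PathRequest { float sx, sy, gx, gy; };
struct PathResult  { float cost; /* waypoints, ... */ };

// Stub: the real thing would DMA the request batch out over the bus.
void aipu_submit_paths(const PathRequest* /*reqs*/, int /*count*/) {}
// Stub: the real thing would block until results come back over the bus.
void aipu_wait(PathResult* results, int count)
{ for (int i = 0; i < count; ++i) results[i].cost = 0.0f; }

void update_agents(const PathRequest* reqs, PathResult* results, int count)
{
    aipu_submit_paths(reqs, count);  // data crosses the bus
    aipu_wait(results, count);       // CPU sits idle for the round trip:
                                     // the latency lands exactly in the code
                                     // we were trying to speed up
    // ... act on results ...
}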
The little octa-cores AMD and Intel are cooking up will render such an "AIPU" useless before it's ready anyway.

I've said it before, I'll say it again: don't look for magic solutions in AI.
Quote: Original post by Steadtler
The little octa-cores AMD and Intel are cooking up will render such an "AIPU" useless before it's ready anyway.
I've said it before, I'll say it again: don't look for magic solutions in AI.


This still reminds me of an old MMORPG that rendered the scene graph on the server side for each monster, just to get pixel-correct line of sight. The same game also used the rendering system for physics calculations and AI pathfinding; the scene graph was tagged for materials, physics, and graphics at the same time. The good side was correct AI and physics, the bad side was huge server costs (one server supporting only a single zone of the game, with a player limit of around 32 per zone and several thousand zones).

IMHO they saw the right approach, and did it 10 years ago.

Viktor

