
Archived

This topic is now archived and is closed to further replies.

Matimeo

A quick question about AI


Recommended Posts

I am in an argument with someone about whether or not, on a chip that is pretty much dedicated to AI and physics (like the P3 on Xbox or Gekko in NGC), it would be pointless to include a lot of cache. He says it is, and that all it would do is increase the cache latency; I say it's not, because more cache is more cache and it's far faster to get something from cache than from main memory. What do you guys think, since you guys apparently know a lot about AI programming? Any help is greatly appreciated.

quote:
Original post by Matimeo

I am in an argument with someone about whether or not, on a chip that is pretty much dedicated to AI and physics (like the P3 on Xbox or Gekko in NGC), it would be pointless to include a lot of cache. He says it is, and that all it would do is increase the cache latency; I say it's not, because more cache is more cache and it's far faster to get something from cache than from main memory.
What do you guys think, since you guys apparently know a lot about AI programming? Any help is greatly appreciated.


I don't see what a cache would have to do with it... there's cache on standard CPUs, after all.




Ferretman

ferretman@gameai.com
http://www.gameai.com
From the High Mountains of Colorado

More cache is almost always better. Since it is so much faster than off-chip RAM, the more cache the merrier. Unfortunately, it is also usually very expensive for what you get ($/byte compared to regular RAM).

Of course, once you have a large enough cache to hold all of your instructions and data without going out to regular memory, you have all the cache you need. Something tells me that you aren't likely to have that much cache, though.
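The point above can be made concrete with a toy model: once the working set fits entirely in the cache, the only misses left are the first touches. A minimal sketch of a direct-mapped cache simulator, with all sizes and the access pattern invented for illustration:

```python
# Toy direct-mapped cache model. Sweeping a 4 KB working set repeatedly:
# a 2 KB cache misses on every access, an 8 KB cache only on first touch.
# Line size, cache sizes, and the trace are assumptions for illustration.

LINE_SIZE = 32  # bytes per cache line (assumed)

def miss_count(cache_bytes, addresses):
    """Count misses for an address trace in a direct-mapped cache."""
    num_lines = cache_bytes // LINE_SIZE
    tags = [None] * num_lines      # one stored tag per cache slot
    misses = 0
    for addr in addresses:
        line = addr // LINE_SIZE
        idx = line % num_lines     # direct-mapped: exactly one candidate slot
        if tags[idx] != line:
            misses += 1
            tags[idx] = line       # evict whatever was there
    return misses

# Sweep a 4 KB working set ten times, one word (4 bytes) at a time.
trace = [a for _ in range(10) for a in range(0, 4096, 4)]

small = miss_count(2048, trace)  # working set doesn't fit: re-missed every sweep
big   = miss_count(8192, trace)  # working set fits: compulsory misses only
print(small, big)
```

With the 8 KB cache, all 128 distinct lines stay resident after the first sweep; with the 2 KB cache, each sweep evicts the lines the next sweep needs.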

My first reaction on seeing this was: "ARG!! No! No! No! Wrong! Wrong! Wrong!" However, because saying that would be rude, I'll just say that you guys aren't thinking things through too clearly.

Within a given process, increasing the size of a memory array necessarily increases the data access latency of that memory. There are two reasons that cache memory is fast: one is that it is close to the registers; the second is that it is small. Both are important, and they are to some degree intertwined. It's somewhat trivial to note that as cache size increases, your die size increases and the ability to physically locate the cache near the registers goes down. It's somewhat harder to notice that as cache size goes up, the attendant logic costs associated with selecting cache lines, maintaining associativity, coherence, and all the other wonderful stuff that makes a cache a cache go up as well. Even if this additional complexity scaled linearly with memory size (it doesn't; it's worse), data access would still slow as cache size increased. Consider the AGU as the bit size of the cache offset increases, or the address check logic. (And I don't even want to think about the TLB right now.)

Real-world example time: compare and contrast the P4 L1 cache with the P3 L1 cache. Both processors come from essentially the same engineers and are implemented in the same process (0.18 micron, if I recall correctly). However, the P4 L1 cache has a 2-cycle data latency at an 8KB size, while the P3 L1 cache has a 3-cycle data latency at a 16KB size.
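One way to see the trade-off in numbers is the standard average memory access time (AMAT) model. The latencies and sizes below come from the post; the hit rates and the 100-cycle miss penalty are invented purely for illustration:

```python
# AMAT = hit_time + miss_rate * miss_penalty
# P4: 8KB L1, 2-cycle hit; P3: 16KB L1, 3-cycle hit (from the post).
# Hit rates and miss penalty are assumptions for illustration only.

def amat(hit_time, miss_rate, miss_penalty):
    """Average memory access time in cycles."""
    return hit_time + miss_rate * miss_penalty

MISS_PENALTY = 100  # cycles to fetch from the next level (assumed)

# Suppose doubling the cache improves the miss rate from 5% to 4%:
p4_style = amat(hit_time=2, miss_rate=0.05, miss_penalty=MISS_PENALTY)  # 7.0
p3_style = amat(hit_time=3, miss_rate=0.04, miss_penalty=MISS_PENALTY)  # 7.0
print(p4_style, p3_style)
```

With these (made-up) hit rates, the bigger cache is a wash: the extra cycle of hit latency exactly cancels the miss-rate improvement. Bigger only wins if the miss-rate gain outweighs the added latency.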

Bigger cache is ALWAYS BETTER... commercial guys do tricks to convince you it's not...

The speed of the cache comes mainly from it being STATIC as opposed to DYNAMIC RAM... but yes, slower cache is used at bigger sizes just to save money...

quote:
Original post by bogdanontanu
Bigger cache is ALWAYS BETTER... commercial guys do tricks to convince you it's not...


ARG!! No! No! No! Wrong! Wrong! Wrong!
A bigger cache is not always better! Would you trade a fully associative 8192-byte (8KB) cache with a 1-cycle access time on a hit for a non-associative 8193-byte cache with a 100-cycle access time on a hit? Of course not! That one extra byte isn't going to do you any good with that kind of access time. And losing associativity is probably going to kill your hit rate at the same time.

Cache performance depends on a lot of things, not just size. You need to factor in associativity, access time on a hit, penalty for a miss, line length, patterns in the underlying data, the whole nine yards. In order to measure the performance of a cache you need a solid metric, like average access time, which can be calculated as (hit rate) * (access time on a hit) + (miss rate) * (access time on a miss). Sure, hit rate and miss rate depend on cache size, but they also depend on associativity, line length, and data access patterns. And given an optimally efficient design, increasing cache size necessarily worsens access time. No matter how big your cache, you still can't avoid compulsory misses, so optimizing the miss penalty is always essential. So if you have a data set that rarely reuses addresses within a working set, a huge cache really does you no good, because the data will just never be there.
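Plugging the post's own thought experiment into that metric makes the point stark. The hit times (1 cycle vs 100 cycles) come from the post above; the hit rates and the 200-cycle miss time are assumptions for illustration:

```python
# The post's metric: avg = hit_rate * hit_time + miss_rate * miss_time.
# Hit times (1 vs 100 cycles) are from the post; the 95% hit rate and
# 200-cycle miss time are assumed for illustration.

def avg_access_time(hit_rate, hit_time, miss_time):
    """Average access time in cycles for the post's simple metric."""
    return hit_rate * hit_time + (1 - hit_rate) * miss_time

MISS_TIME = 200  # cycles, assumed

# 8192-byte fully associative cache, 1-cycle hit, 95% hit rate (assumed):
fast_small = avg_access_time(0.95, 1, MISS_TIME)

# 8193-byte cache, 100-cycle hit; even a perfect hit rate can't compete:
slow_big = avg_access_time(1.00, 100, MISS_TIME)

print(fast_small, slow_big)
```

Even granting the bigger cache an impossible 100% hit rate, its 100-cycle hit time leaves it roughly an order of magnitude slower on average than the small 1-cycle cache.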

