Optimizing floating point computations (simulating electric motors)


[quote name='Antheus' timestamp='1335800973' post='4936108']
[quote]it will be around 8MBs for[/quote]
This is huge and unless one is guaranteed locality, it will trash L1, L2 and L3 cache.

He doesn't have to put it all in cache. If that's the first thing to pre-calculate, the cache will keep only the most used values from the sine array.
[/quote]
That's the problem. Unless the memory is already in the cache, accessing it is about the slowest thing you can do, taking hundreds of cycles. Like Antheus said, unless there's some locality in the way he's accessing the look-up table, it will cause thrashing as portions of the table are continuously loaded and evicted from the cache, which will kill performance. Look-up tables only offer a speed increase if you access them in cache-friendly patterns, and Antheus's warning is to make sure the table will be accessed in a sensible pattern before bothering to build one.
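For concreteness, here is a minimal sketch of the kind of sine look-up table being discussed. The class name, table size, and indexing scheme are my own assumptions, not something from this thread; the point is just that a table this small (~32 KB of doubles) fits comfortably in L1/L2, whereas a multi-megabyte table only pays off if successive lookups land on nearby entries.

[code]
// Hypothetical sine look-up table sketch (names and sizes are illustrative only).
// A 4096-entry table of doubles is ~32 KB, small enough to stay resident in cache;
// an 8 MB table would only be fast if consecutive lookups hit nearby entries.
#include <cmath>
#include <cstddef>
#include <vector>

class SineTable {
public:
    explicit SineTable(std::size_t size = 4096)
        : table_(size), twoPi_(6.283185307179586), scale_(size / twoPi_)
    {
        // Pre-compute sin over one full period [0, 2*pi)
        for (std::size_t i = 0; i < size; ++i)
            table_[i] = std::sin(i / scale_);
    }

    // Nearest-entry lookup; the angle is wrapped into [0, 2*pi) first.
    double operator()(double angle) const
    {
        double a = std::fmod(angle, twoPi_);
        if (a < 0.0) a += twoPi_;
        return table_[static_cast<std::size_t>(a * scale_) % table_.size()];
    }

private:
    std::vector<double> table_;
    double twoPi_;
    double scale_;
};
[/code]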
Ok, I agree with that, but it's only about three lines of code, so he could try it and see whether or not there's an improvement.
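If one does want to just try it, a rough way to check is to time both versions of the workload. The snippet below is only an illustrative micro-benchmark sketch (the iteration count, angle stream, and table size are made up); the real answer depends on the access pattern of the actual motor-simulation loop.

[code]
// Rough timing sketch (illustrative numbers only): compare std::sin against a
// small pre-computed table on a synthetic stream of angles. Real conclusions
// should come from timing the actual simulation loop.
#include <chrono>
#include <cmath>
#include <cstdio>
#include <vector>

int main()
{
    const std::size_t kEntries = 4096;
    const double kTwoPi = 6.283185307179586;

    std::vector<double> table(kEntries);
    for (std::size_t i = 0; i < kEntries; ++i)
        table[i] = std::sin(i * kTwoPi / kEntries);

    const int kIterations = 10000000;
    volatile double sink = 0.0;  // prevents the compiler from removing the loops

    auto t0 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIterations; ++i)
        sink = sink + std::sin(i * 1e-4);

    auto t1 = std::chrono::steady_clock::now();
    for (int i = 0; i < kIterations; ++i) {
        double a = std::fmod(i * 1e-4, kTwoPi);
        sink = sink + table[static_cast<std::size_t>(a / kTwoPi * kEntries) % kEntries];
    }
    auto t2 = std::chrono::steady_clock::now();

    auto ms = [](auto d) {
        return std::chrono::duration_cast<std::chrono::milliseconds>(d).count();
    };
    std::printf("std::sin: %lld ms, table lookup: %lld ms\n",
                (long long)ms(t1 - t0), (long long)ms(t2 - t1));
    return 0;
}
[/code]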
