Does 2 sticks of memory vs 1 with dual core make a difference?

10 comments, last by hplus0603 17 years ago
Hey everyone, I got a new comp a few months ago with the following: dual core 2.1 GHz, 1x1GB RAM. I'm wondering if (especially with dual core) having 2x512MB RAM sticks would be faster than 1x1GB? I've been trying to find a straight answer, but with no luck. (Yes, I can get 2x1GB RAM sticks, but just for argument's sake I'm sticking with a total of 1GB of memory.)
Having two sticks will frequently enable dual channel mode, which gives a slight bonus to memory speed. The effect is really small though.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
wow... thanks for the quick response!

I think I'm going to grab 1 more gig of memory anyway... you can never have too much memory :-p.

But this brings another question:

I took a computer architecture course last year in uni. One big thing I noticed was this:

The speed of CPUs over the past few years has grown exponentially, whereas the speed of memory has grown linearly and is now FAR below the CPU's speed. Having a fast CPU is all well and good, but wouldn't it be better to concentrate on making memory faster? I know the caches are pretty much small amounts of super-fast memory. Why can't we purchase either a.) upgrades to your CPU's cache or b.) memory which is made of the same material as the super-fast caches?
Quote:Original post by CrewNick
The speed of CPUs over the past few years has grown exponentially, whereas the speed of memory has grown linearly and is now FAR below the CPU's speed. Having a fast CPU is all well and good, but wouldn't it be better to concentrate on making memory faster? I know the caches are pretty much small amounts of super-fast memory. Why can't we purchase either a.) upgrades to your CPU's cache or b.) memory which is made of the same material as the super-fast caches?

There's no way we could ever upgrade CPU cache, because it's on the die with the CPU; it's manufactured as part of the entire processing chip. I'm not sure if CPU cache memory is made any differently than stick memory (if it is, then it's likely more exotic and so would cost insane amounts of money in larger stick form), but the reason why it's so much faster is that it's paired directly with the CPU, whereas normal memory has to reach the CPU via the motherboard (I think along the front-side bus, or FSB).

Drew Sikora
Executive Producer
GameDev.net

I was half asleep in O&A class the other week, but from what I remember of it:

Level 2 cache memory is relatively cheap and fairly similar to standard RAM, but because it's on the die it still costs more per byte than regular sticks of RAM. Level 1, however, is apparently far, far more costly to produce; being faster (and some other factor I've forgotten) makes it on the order of 10-20 times the cost per byte of Level 2.


So, throw in the business practice of getting us to buy more cheap stuff at a high markup, and you get the current ratio of cache size to main memory.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.
Quote:Original post by Gaiiden
whereas normal memory has to reach the CPU via the motherboard (I think along the front-side bus, or FSB)


For Intel CPUs it's the FSB; AMD uses HyperTransport, which provides nearly twice the performance in synthetic benchmarks (even the slowest AMD64 CPUs have better memory read/write performance than the fastest Core 2 CPUs).
(Though in most normal programs the Core 2 more than makes up for it in other ways.)

[size="1"]I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Quote:Original post by SimonForsman
Quote:Original post by Gaiiden
whereas normal memory has to reach the CPU via the motherboard (I think along the front-side bus, or FSB)


For Intel CPUs it's the FSB; AMD uses HyperTransport, which provides nearly twice the performance in synthetic benchmarks (even the slowest AMD64 CPUs have better memory read/write performance than the fastest Core 2 CPUs).
(Though in most normal programs the Core 2 more than makes up for it in other ways.)

Yup, Intel is shoving obscene amounts of L2 cache (4MB total on just my Intel 6600 CPU!) into its chips to overcome its lack of an on-die memory controller!
But to answer the original question, it's a matter of cost. The larger the cache, the more it costs Intel/AMD to produce their CPUs. Intel must be losing quite a bit on the large caches in their Core 2 CPUs, as I mentioned, but they really need them to keep up with AMD's CPUs. AMD, in fact, cut the amount of L2 cache on its whole X2 line a while back to cut costs, which should also give you an idea that a larger cache doesn't really affect most apps as much as crossing a slow FSB does.
Personally, I think the hard drive manufacturers need to come out with faster drives like the Raptor series, since that's really where the bottleneck is these days.
L3 cache is supposed to come out this year and might make the performance crown go back to AMD?
Look at the first example, the "Deerhound" quad-core CPU based on the K8L core tune-up, which AMD is supposed to ship in volume during the 3rd quarter of 2007. This Opteron-class server CPU has only 2MB of shared L3 cache according to my deep throat - less than the L2 cache on the current Woodcrest! If each core has its own 1MB L2, and there are four of them, what extra use is such a small common L3? Well, two answers - one is to remove the inter-core and cache coherency traffic from the crossbar and keep it within that L3 cache

[Edited by - daviangel on April 10, 2007 11:50:51 PM]
[size="2"]Don't talk about writing games, don't write design docs, don't spend your time on web boards. Sit in your house write 20 games when you complete them you will either want to do it the rest of your life or not * Andre Lamothe
Quote:I'm not sure if CPU cache memory is made any differently than stick memory


CPU cache is based on SRAM, whereas stick memory is based on DRAM. Not only does SRAM use 6 transistors per bit, whereas DRAM uses 1, but SRAM is also built in a geometry optimized for speed, whereas DRAM is built in a geometry optimized for density. They are very different.

Also, cache memory is usually N-way set associative (often up to 8-way for L1, 2-way for L2) and has extra circuitry for LRU replacement, pre-fetch, and cache line combining.
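
To make the set-associative part concrete, here is a small sketch (the cache geometry numbers are made up purely for illustration, not taken from any real CPU) of how an address gets split into tag, set index, and line offset:

    #include <cstdint>
    #include <cstdio>

    int main()
    {
        // Hypothetical geometry, for illustration only:
        // 32 KB cache, 64-byte lines, 8-way set associative -> 64 sets.
        const uint64_t lineSize  = 64;
        const uint64_t ways      = 8;
        const uint64_t cacheSize = 32 * 1024;
        const uint64_t sets      = cacheSize / (lineSize * ways);

        uint64_t addr   = 0x0040A3C8;                 // arbitrary address
        uint64_t offset = addr % lineSize;            // byte within the cache line
        uint64_t set    = (addr / lineSize) % sets;   // which set the line maps to
        uint64_t tag    = addr / (lineSize * sets);   // tag stored alongside the line

        printf("offset=%llu set=%llu tag=%llu\n",
               (unsigned long long)offset, (unsigned long long)set,
               (unsigned long long)tag);
        return 0;
    }

The "N-way" part just means each set can hold up to N different lines at once, and the LRU circuitry decides which of the N gets evicted on a miss.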

For the P4, the L1 latency is about 3 cycles; the L2 latency is about 10 cycles; the DRAM latency is over 200 cycles. I don't know the cache numbers for Core/Core2, but DRAM hasn't gotten any lower latency (in fact, it may now be worse with DDR2). Most of this is hidden by speculative execution and fetching, read/write combining, out-of-order execution, etc, so you have to be careful about constructing a benchmark that measures these things.
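
As a rough illustration of that last point, a pointer-chasing loop like this sketch (my own toy example, with the array and iteration sizes picked arbitrarily) makes every load depend on the previous one, so prefetching and out-of-order execution can't hide the DRAM latency and you actually see the per-access cost:

    #include <algorithm>
    #include <chrono>
    #include <cstdio>
    #include <numeric>
    #include <random>
    #include <vector>

    int main()
    {
        // An array much larger than L2, so most accesses miss the cache.
        const size_t count = 64 * 1024 * 1024 / sizeof(size_t);
        std::vector<size_t> next(count);

        // Build one random cycle through the array so that each load's address
        // depends on the value just loaded (this defeats hardware prefetching).
        std::vector<size_t> order(count);
        std::iota(order.begin(), order.end(), size_t(0));
        std::shuffle(order.begin(), order.end(), std::mt19937(12345));
        for (size_t i = 0; i < count; ++i)
            next[order[i]] = order[(i + 1) % count];

        const size_t hops = 10 * 1000 * 1000;
        size_t p = order[0];
        auto t0 = std::chrono::steady_clock::now();
        for (size_t i = 0; i < hops; ++i)
            p = next[p];                              // serialized, dependent loads
        auto t1 = std::chrono::steady_clock::now();

        double ns = std::chrono::duration<double, std::nano>(t1 - t0).count();
        printf("%.1f ns per dependent load (p=%zu)\n", ns / hops, p);
        return 0;
    }

Swap the random shuffle for a plain sequential walk and the number drops dramatically, which is exactly the prefetcher doing its job.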

And to answer the question in the header: yes, two sticks allow the memory controller to run the memory in dual-channel mode, which will give you up to twice the throughput for applications that depend on streaming (video decoding, for example). This is true for any memory controller/CPU that is capable of dual-channel operation.
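
For the throughput side, a plain streaming read like this sketch (again just a toy micro-benchmark; the 256MB buffer size is an arbitrary choice, just big enough that caching doesn't help) is the kind of workload where dual channel shows up, because it is limited by how fast the controller can pull in whole cache lines:

    #include <chrono>
    #include <cstdint>
    #include <cstdio>
    #include <vector>

    int main()
    {
        const size_t bytes = 256u * 1024 * 1024;               // well beyond any cache
        std::vector<uint64_t> data(bytes / sizeof(uint64_t), 1);

        uint64_t sum = 0;
        auto t0 = std::chrono::steady_clock::now();
        for (uint64_t v : data)                                 // sequential streaming read
            sum += v;
        auto t1 = std::chrono::steady_clock::now();

        double secs = std::chrono::duration<double>(t1 - t0).count();
        printf("%.0f MB/s (sum=%llu)\n",
               (bytes / (1024.0 * 1024.0)) / secs, (unsigned long long)sum);
        return 0;
    }

Run something like that on the same machine in single-channel and then dual-channel configuration and the MB/s figure is where you'd expect to see the difference; for latency-bound code the gain is much smaller.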
enum Bool { True, False, FileNotFound };
I don't remember a lot from my processor design class, but one thing I do remember is this rule of thumb: speed and size have a definite inverse relationship.

I.e. cache is very small, and VERY fast. RAM is medium-sized and somewhat fast. Hard drives are extremely large, and very slow (by comparison). If you had 1GB of cache, chances are it'd just go as fast as a stick of RAM anyway.
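
You can actually watch that hierarchy from a program: the sketch below (the working-set sizes and pass counts are arbitrary values I picked, and the exact numbers will vary a lot between machines) re-reads buffers of increasing size, and the effective MB/s drops each time the working set stops fitting in L1, then L2, and finally has to come from RAM:

    #include <chrono>
    #include <cstdio>
    #include <vector>

    int main()
    {
        // Working-set sizes from 16 KB (fits in L1) up to 64 MB (RAM only).
        for (size_t kb = 16; kb <= 64 * 1024; kb *= 4)
        {
            std::vector<int> buf(kb * 1024 / sizeof(int), 1);
            long long sum = 0;

            const int passes = (int)(256 * 1024 / kb) + 1;   // fewer passes for big sets
            auto t0 = std::chrono::steady_clock::now();
            for (int p = 0; p < passes; ++p)
                for (int v : buf)                            // re-read the working set
                    sum += v;
            auto t1 = std::chrono::steady_clock::now();

            double secs = std::chrono::duration<double>(t1 - t0).count();
            double mb   = passes * (kb / 1024.0);            // total data touched, in MB
            printf("%6zu KB working set: %8.0f MB/s (sum=%lld)\n", kb, mb / secs, sum);
        }
        return 0;
    }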
Deep Blue Wave - Brian's Dev Blog.
Quote:Original post by BTownTKD
If you had 1GB of cache, chances are it'd just go as fast as a stick of RAM anyway.

I don't think so; given the means by which 1GB of cache and 1GB of RAM communicate with the CPU, the cache would definitely still be faster IMO.

Drew Sikora
Executive Producer
GameDev.net

This topic is closed to new replies.
