SSD for programming?

Started by
23 comments, last by Ravyne 8 years, 2 months ago

But you didn't say what languages and I think this can be relevant as different ones work in different ways.

There were only so many options left when OP mentioned "source files/headers".

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

I just buy enough RAM that the operating system can keep everything in the disk cache. Initial startup loads everything from the HDD into the cache, and once that has settled you're effectively working out of RAM, while the OS syncs changes back to the disk automagically.

I don't own an SSD, but I would use it for 'static' data, like operating system binaries, SDKs, compilers, editor binaries, etc.

That way you'd have fast startup times as well.


I just buy sufficient RAM memory, so the operating system can keep everything in disk cache.

Can confirm: you can open up a 10 GB directory of large pictures and flip through full-size previews like Sonic through rings, while 2 TB SSDs are still a pipe dream price-wise. For 150 bucks you can get four sticks of stable 8 GB DDR3-1333, drop them into any four-slot motherboard, and that speeds up access to all your data, not just the system disk, which I don't think is the best use of an SSD anyway.

I tested VS build times between SSD, SSD RAID 0, and HDD RAID 0 and it made no difference. Code builds seem to be heavily CPU bottlenecked.

Depends on how badly your project is structured. A bad C++ project can make the compiler spend more time opening a hundred files and concatenating 100k lines of code than it does compiling the 100-byte object file that results from all that input.

As everyone's been saying, it may or may not speed up your builds depending on language and project structure, but it certainly can't hurt, and there's no reason to avoid keeping your projects there, at least from a drive-wear standpoint. A contemporary SSD (14nm TLC flash) is rated for 1500 to 2000 complete drive writes at a bare minimum, but that's the floor -- most drives will last significantly longer. It's not a precise number; it's a manufacturer projection of the lower bound for the lowest-quality chips that would still pass muster. With a 500GB drive you're really at no risk of reaching it. (You could maybe *approach* a single drive-write per day with a very small SSD, say 64GB, but even then the drive would outlast its 5-year intended lifespan and warranty period.)

If you're like me, you probably cycle through active disks more quickly than 5 years, especially at the rate disk sizes are growing and prices are coming down.

Now, SSDs are pretty good these days about not suddenly going ass over teakettle, but it still sometimes happens, and spinning disks are a more mature technology, or at the very least cheap enough that redundancy is within the grasp of even the most budget-conscious computer user. In my current setup I've got a 256GB Samsung 840 Pro, and two regular 5400RPM 750GB laptop-sized spinners providing redundancy, plus a 10TB NAS (and I really need to adopt an off-site backup solution but haven't yet, aside from GitHub).

If you're on a recent version of Windows, I recommend getting another mechanical drive and setting up Windows' Storage Spaces -- a software RAID that you can over-provision (e.g. you can create multiple volumes, each bigger than the combined capacity of the actual disks backing them) and then add or upgrade disks as needed to match your actual usage. It supports the usual RAID-like configurations (RAID 0, 1, 0+1, 5, 10) and is very flexible; you can even put those drives into another computer and fire it up if some other part of your computer ever dies.

throw table_exception("(╯°□°)╯︵ ┻━┻");

This topic is closed to new replies.
