# RAID - Information and Tutorial


## Recommended Posts

I wrote this article for my site; I decided to do so after recently upgrading my RAID setup from RAID 0 to RAID 10, and figured I could show the concepts and ideas behind the process. Hopefully I have achieved my goal of making the concepts clear and concise. Enjoy!

# Setting Up RAID

Before we go any further, I must give the standard disclaimer: I cannot be held liable for any event that results from the use of this article, nor for any damages or loss of data. If you're unsure of anything below, do not attempt it, or thoroughly back up your data beforehand.

## What Is RAID?

RAID stands for Redundant Array of Independent/Inexpensive Disks. You're basically taking two or more "physical" disks and forming one or more "logical" disks. The intent is either to increase performance by utilizing two or more drives at once, or to provide some level of redundancy to protect data. However, as we will discuss below, some RAID levels offer both performance and redundancy.

## Terms

Stripe Width- Stripe width defines the number of parallel commands that can be executed at once against a striped RAID. Basically, if you have 2 drives, your stripe width is 2: you can send 2 requests that will be carried out immediately by two different drives at once. The more drives you add, the larger your stripe width can be, and the more data you can read/write in parallel. This is why a striped array of 4 30 GB drives will have better transfer performance than one of 2 60 GB drives.

Stripe Size- Stripe size defines the size of the chunks a file is split into when it is striped across drives. If you write a 4 KiB (binary kilobyte) file to a striped array with a 32 KiB stripe size, that file will only be written to the first drive and will only take up 4 KiB. However, if you have 4 drives in a striped array with a 32 KiB stripe size, then writing a 128 KiB file will result in the file being evenly split into 32 KiB chunks across all drives in the array. Choosing the stripe size is an important factor in the performance of a striped array, and the choice varies drastically depending on the use of the array. A smaller stripe size splits files into more chunks/blocks/stripes across more drives. This increases your throughput/transfer performance, but positioning performance (access timing) decreases, as all of the drives will be busy handling a single data request (since the file must be accessed by all drives). A larger stripe size decreases the number of drives each file is split across, but increases positioning performance, as most controllers will then allow an idle drive to fulfill another request while the original request is being completed. There is no general rule of thumb for setting the stripe size, as it depends on many factors, including the array's use and the drives being used.
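To make the stripe-size behavior above concrete, here is a small Python sketch (a hypothetical helper for illustration, not part of any real controller) that maps each stripe-sized chunk of a file to a drive, round-robin:

```python
def stripe_layout(file_size_kib, stripe_size_kib, num_drives):
    """Return (drive_index, chunk_size_kib) pairs for a file written
    to a striped array, assigning chunks to drives round-robin."""
    layout = []
    offset = 0
    drive = 0
    while offset < file_size_kib:
        chunk = min(stripe_size_kib, file_size_kib - offset)
        layout.append((drive, chunk))
        offset += chunk
        drive = (drive + 1) % num_drives
    return layout

# A 4 KiB file with a 32 KiB stripe size lands entirely on drive 0:
print(stripe_layout(4, 32, 4))    # [(0, 4)]

# A 128 KiB file splits evenly into 32 KiB chunks across all 4 drives:
print(stripe_layout(128, 32, 4))  # [(0, 32), (1, 32), (2, 32), (3, 32)]
```

With a smaller stripe size the same 128 KiB file would wrap around the drives more than once, which is the throughput-vs-positioning trade-off described above.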
Parity- Parity data is error-correction data calculated from the actual data being stored. A simple example is the single parity bit added to serial-connection transmissions: 7 bits of actual data are transmitted, with the next bit being the parity bit, followed by a stop bit (or bits). The parity bit is calculated before the data is sent; after the data is transmitted and received, the parity bit is recalculated. If it matches the transmitted parity bit, the data is accepted; if not, the data is resent. The same type of parity information is calculated in RAID setups, just on a larger scale (except for RAID 2, which does bit-level striping and bit-level error correction; RAID 2 is no longer used).

RAID Controller- A RAID controller handles all RAID operations and protocols. There are three types of RAID controller:
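The serial-transmission example above can be sketched in a few lines of Python (an illustrative toy, not a real UART implementation), using even parity:

```python
def parity_bit(data_bits):
    """Even parity: the bit that makes the total count of 1s even."""
    return sum(data_bits) % 2

sent = [1, 0, 1, 1, 0, 0, 1]      # 7 data bits
p = parity_bit(sent)              # parity bit the sender appends

received = sent[:]                # an error-free transmission...
assert parity_bit(received) == p  # ...recomputed parity matches: accept

received[2] ^= 1                  # flip one bit to simulate line noise
assert parity_bit(received) != p  # mismatch: the data would be resent
```

RAID parity works on the same principle, but over whole blocks of data rather than single bits, and it is used to reconstruct lost data rather than merely detect errors.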
Software- All RAID operations, including logic and calculation, are done purely in software. An operating system usually has support for the less intensive RAID levels (0, 1). This generally provides "ok" performance at the cost of higher CPU utilization, since the entire process is done in software. And as you can probably guess, true software RAID does not use any dedicated hardware.
Hardware- All RAID operations, including logic and protocols, are handled by a dedicated processor and BIOS on the hardware controller. The controller may also have an onboard buffer/cache, and (in more expensive controllers) a battery backup to keep data from corrupting when utilizing write-back cache mode. A hardware RAID controller has to interface with the computer in some way: you'll see RAID controllers for PCI, PCI-X, and most recently PCIe, though this list is not exhaustive and there may be controllers for other, more obscure interfaces. It is generally a good idea to research the actual throughput of the interface before choosing a RAID controller.
Pseudo-Hardware Controller- Here you have a hardware controller with its own buffer and BIOS, but without a dedicated processor. All of the protocol and logic of the RAID controller is contained on the controller, but the controller uses your CPU in the absence of its own. This generally leads to lower CPU utilization and better performance than plain software RAID, but both are worse than fully dedicated hardware RAID.
Keep in mind that the cheaper you go, the cheaper the controller itself will be. While this may be obvious, cheaper controllers offer fewer advanced performance-improving features and may reach their bottleneck sooner: depending on how the controller is designed, either the controller or its interface will eventually become the bottleneck, and cheaper controllers tend to get there sooner.

## Drive Interface Choice

For a drive to connect to the computer it needs to use a certain interface, the most common choices being SCSI, SAS (basically SCSI over a serial connection), SATA, and IDE. SCSI/SAS drives tend to be a bit more expensive, but they generally offer the best performance due to the maturity of the interface protocols and wide use within the server and mass-storage industry. Next we have IDE, which was created to be cheap: the industry needed an economical interface to lower the cost of drives and their controllers, and IDE filled that void. Because it was cheaper than other options its popularity grew, even though it was never meant to be a performance interface. The original IDE design used the processor to drive the interface (a mode called PIO); there are now DMA modes that allow for better performance. But there is still the lingering design problem of single-device access per channel: if you have two devices on a single IDE ribbon cable, only one of them can communicate with the computer at a time. This makes IDE unsuited for RAID, as the benefits of RAID come from simultaneous access to multiple devices at once. SATA is the successor to IDE and is a serial-based interface; it offers simultaneous access to multiple devices and better throughput than IDE. SATA is also generally cheaper than SCSI while offering only slightly lower performance. Lesson... use only SCSI, its cousin SAS, or SATA for RAID setups.

## RAID Levels

There are many different RAID levels, including some proprietary ones, that we could discuss. However, for this article I will concentrate on the most common implementations.

### RAID 0

Here you'll find a RAID level that is actually not redundant at all. RAID 0 is the simplest of all the levels: two or more physical drives are combined into one or more logical drives, with the recorded data being "striped" between the drives. The intent here is absolute performance, with no regard to data safety or redundancy.
With RAID 0 you take two or more drives, combine them, and data is then striped between them based on the chosen stripe size. This spreads out the workload of both writing and reading data across the multiple physical drives that make up the logical drive. However, as stated above, how the striping affects performance (i.e., access times or throughput) depends on the stripe size used: striping in RAID 0 helps throughput with smaller stripe sizes, and helps access times with a smart controller if a larger stripe size is used.
RAID 0 is generally not used if you have any data you wish to keep safe from loss. Since all data is simply striped across multiple drives, if a single drive dies in the array, all data is lost. Data cannot be retrieved if it is in pieces across two or more drives, so losing a single drive effectively deletes a chunk out of each and every file that was striped, corrupting the files. However, out of all the RAID levels, RAID 0 has the best overall combination of read and write performance.

### RAID 1

RAID 1 is another simple level, but it is the first that offers redundancy. Here you have two disks "mirroring" each other: bit for bit, all data written to drive A is also written to drive B. This can create a write-speed penalty, since all data must be written to both drives (though controllers that write to both drives in parallel, or buffer the mirrored writes, can hide much of it). When you read back the data, you get an improvement: under most controllers, when you request data it is read from drive A, and while drive A is busy, drive B can service another request that arrives in the meantime. Thus you get the throughput of a single drive but better access-time performance, as the two drives can service two different read requests at once. In a mirror, one drive of the pair can die and the array will still function: drive A could die and drive B would take over, or vice versa. One backs up the other in case of failure, so one drive death can be tolerated before risk of data loss.
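The mirroring behavior can be sketched as a toy Python class (purely illustrative; real controllers do this in firmware): every write goes to both drives, while reads alternate between them so two requests can be in flight at once.

```python
class Raid1:
    """Toy RAID 1 mirror: writes duplicate to both drives,
    reads are distributed round-robin between them."""

    def __init__(self):
        self.drives = [{}, {}]   # two mirrored block maps
        self._next = 0           # which drive serves the next read

    def write(self, block, data):
        for drive in self.drives:    # the write penalty: one write per drive
            drive[block] = data

    def read(self, block):
        drive = self._next           # round-robin read distribution
        self._next = (self._next + 1) % 2
        return self.drives[drive][block]

mirror = Raid1()
mirror.write(0, b"hello")
assert mirror.read(0) == b"hello"            # served by drive 0
assert mirror.read(0) == b"hello"            # served by drive 1
assert mirror.drives[0] == mirror.drives[1]  # both copies identical
```

If either `drives[0]` or `drives[1]` were lost here, the other still holds every block, which is exactly the one-drive fault tolerance described above.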

### RAID 3

RAID 3 offers byte-level striping combined with a dedicated parity drive. Data is striped, similar to RAID 0, but with parity data calculated and stored on a single dedicated drive. This setup requires a minimum of 3 drives. With RAID 3, read performance is quite similar to RAID 0, with a slight hit from byte-level striping. However, write performance suffers considerably, due to the overhead of calculating parity data and the fact that the single dedicated parity drive becomes a bottleneck, as it must be accessed every time new data is written to the array.
RAID 3 can have a single drive die before any data loss is incurred. RAID 3, 4, and 5 have n−1 capacity: the usable space is the combined space of all drives minus one drive's worth (so with 3 drives, you get the capacity of 2).

### RAID 4

RAID 4 improves upon RAID 3 by doing away with byte-level striping in favor of block-level striping and parity calculation. However, writes still suffer from the parity calculation, and RAID 4 still uses a dedicated parity drive, which remains a bottleneck.
RAID 4 can have a single drive die before any data loss.

### RAID 5

RAID 5 improves on RAID 4 by dispersing the parity data throughout all drives, as opposed to using a single dedicated parity drive. This removes the dedicated-drive bottleneck. However, writes still carry the extra overhead of parity having to be calculated and written along with the data.
RAID 5 can have a single drive die without data loss.
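The reason a RAID 5 array survives a single drive failure is that block-level parity is typically computed as the XOR of the data blocks: XOR any surviving blocks with the parity block and the missing block falls out. A minimal Python sketch of the idea (illustrative only; real controllers also rotate which drive holds parity per stripe):

```python
from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte strings together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

# Three data blocks plus one parity block (parity = XOR of the data):
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# The drive holding the second block dies; rebuild it from the
# surviving data blocks plus the parity block:
survivors = [data[0], data[2], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == b"BBBB"
```

This also shows why writes are slower: every write must update the parity block as well as the data block.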

### RAID 6

RAID 6 is the same as RAID 5 with an extra drive added to the minimum requirement; dual parity sets are then calculated and written. This allows a maximum of two drives to die without data loss. But as you may guess, write performance is even worse than RAID 5, as there are now two parity sets to write per write request.
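The capacity rules for the levels covered so far can be summarized in one hypothetical helper (a sketch assuming identical drives and the minimum-drive requirements being met):

```python
def usable_capacity(level, num_drives, drive_size):
    """Usable space for an array of identical drives, per RAID level."""
    if level == 0:
        return num_drives * drive_size         # no redundancy at all
    if level == 1:
        return drive_size                      # everything is mirrored
    if level in (3, 4, 5):
        return (num_drives - 1) * drive_size   # one drive's worth of parity
    if level == 6:
        return (num_drives - 2) * drive_size   # two parity sets
    if level == 10:
        return num_drives * drive_size // 2    # mirrored pairs: half usable
    raise ValueError(f"level {level} not covered here")

print(usable_capacity(5, 3, 150))   # 300 -- the n-1 capacity of 3 drives
print(usable_capacity(10, 4, 150))  # 300 -- half of the combined 600
```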

## Nested RAID

You can also nest different RAID levels, mixing or combining two levels together. The most common are:

### RAID 10 (1+0)

RAID 10 requires a minimum of 4 drives. The four drives are separated into 2 pairs of 2; the drives within each pair mirror each other, forming two logical drives. These logical drives are then combined in a RAID 0, forming one logical drive. You can have two drives die before any data loss, provided a drive dies in each mirror pair. By probability this is more redundant than RAID 01 (0+1).
RAID 10 is excellent in both read and write performance, and is considered one of the better RAID levels as it offers the same level of redundancy as others but without any parity calculations. The only downside is 50% usable capacity: out of all the storage space combined from all the drives, only half is usable. However, with the prices of high-capacity drives getting cheaper, this is almost a non-issue.

### RAID 01 (0+1)

RAID 01 (0+1) requires a minimum of 4 drives. The four drives are separated into 2 pairs of 2; each pair is striped (RAID 0), forming two logical drives. These logical drives are then combined in a RAID 1 (mirroring), forming one logical drive. You can have two drives die before any data loss, but only if both failures land in the same striped pair; once one drive in each pair has failed, both mirrors are broken and the array is lost. As with RAID 10, it halves the usable drive space.
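The "by probability" claim about RAID 10 versus RAID 01 can be checked by exhaustively enumerating every possible pair of drive failures in a 4-drive array (a sketch assuming drives 0-1 and 2-3 form the pairs):

```python
from itertools import combinations

drives = [0, 1, 2, 3]

def raid10_survives(failed):
    # Mirror pairs (0,1) and (2,3), striped together: the array dies
    # only if BOTH drives of the same mirror pair fail.
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    # Striped pairs (0,1) and (2,3), mirrored together: the array dies
    # once EACH striped pair has lost a drive.
    return not (failed & {0, 1} and failed & {2, 3})

pairs = list(combinations(drives, 2))
s10 = sum(raid10_survives(set(p)) for p in pairs)
s01 = sum(raid01_survives(set(p)) for p in pairs)
print(f"RAID 10 survives {s10}/{len(pairs)} two-drive failures")  # 4/6
print(f"RAID 01 survives {s01}/{len(pairs)} two-drive failures")  # 2/6
```

Both layouts tolerate any single failure, but of the six possible two-drive failures, RAID 10 survives four while RAID 01 survives only two.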

## Setting Up a Software RAID - Using Windows

Software RAID might be the solution you're looking for, and you may even be able to do it right now with just your operating system. Windows supports software RAID 0 and RAID 1. To set up a software RAID in Windows XP, follow the steps below:
1. Right-click on "My Computer" and go to "Manage."
2. Under "Storage," click on "Disk Management."
3. Right-click on the small square next to each of your newly installed, unpartitioned drives (unless you're doing RAID 1, in which case only one needs to be unpartitioned) and select "Convert to Dynamic Disk" for each. Keep in mind that converting your disks to dynamic disks may cause compatibility issues with some software, like Acronis. And once a disk is dynamic it is risky to convert it back to basic, so if you do, back up your data first.
4. Right-click the unpartitioned space of any one of the drives and click "New Volume." From here you can choose which RAID type you want; unless you have Windows Server, RAID 1 will be grayed out. This is a limitation Microsoft put into the OS, for reasons unknown. However, if you're not afraid of hex editors, you can attempt to edit the needed files into being the server versions; I have tested this method to work, but because it is a legal gray area I cannot provide the files myself.
5. After you have selected which RAID level you want, click "Next," add the drives you want in the array, click "Next," then follow the menu from there, which lets you choose the standard format options. And you're done!
However, as stated above, you will generally get poorer performance through software RAID, along with extra CPU utilization. And under RAID 1 you will not be able to boot into Windows from the mirrored volume without a boot disk, because Windows' software RAID does not write the needed boot information, such as the MBR, to the second drive (Windows' software mirror only works at the partition level). The upside of software RAID is hardware independence for RAID migration.

## Setting Up Hardware Raid

You should start by researching what kind of RAID level you're looking for; this will depend on your budget and use. After you have decided, see what interface you can use. Does your motherboard have a spare PCI/PCIe/PCI-X slot? Does the interface you have available meet the bandwidth/throughput you predict your RAID setup will need? Once you've figured this out, look online at your choices. I'd recommend shopping at Newegg.com (if in the USA) or Tigerdirect.com, which tend to have good prices. Be careful which controller you get, and avoid the cheaper ones if you're looking for performance; generally $50 and above will give results.

After you select your controller, start looking around for drives. I recommend the Raptor series for their excellent access times. However, the Raptors, and especially the Velociraptor, may be too expensive for some. If you can't afford those, I would look into the Samsung Spinpoint series, whose high platter data density gives excellent transfer performance for a 7,200 rpm drive. However, I would avoid the 1 terabyte models for now, as they're reported as having quality-control issues that make for a lot of DOAs. The only downside is that the access times are those of any 7,200 rpm drive. Since array access times scale closely with the access times of the drives themselves, it is important to watch the access times of the drives you buy. As said earlier, depending on your RAID setup, some access-time issues may be slightly negated by the controller's operation (another reason to go for a quality controller).

For my setup I used an Adaptec 1430SA PCIe 4x RAID controller, which supports RAID 0, 1, and 10 (bought on Newegg for $104). For my drives, I used 4 150 GB, 10,000 rpm Raptor X's. The power supply is an Enermax Liberty 500 watt modular unit, the processor a Q6600 quad core, with 4 GB of DDR2-800 RAM at 5-5-5-12 and an 8800 GTS video card. For this article, all are at stock speeds.

Here you can see that I installed my controller card into a spare PCIe slot I had available. I then plugged the 4 SATA cables into the controller and connected them to my 4 Raptor X's.
As you may notice, I plugged 2 drives into one power cable chain, and the other two into another chain. Spreading the load across two chains helps absorb the initial spike of power all drives draw when they first spin up. If you don't spread the power out, you may end up with a few drives failing to spin up, or dropping out during testing/use.
To set up the RAID with everything plugged in, turn on your computer; after it passes the initial POST it will ask you to press Ctrl+[letter derived from the manufacturer], in my case Ctrl+A (for Adaptec). You'll have to refer to your controller manual for the specifics of the menus, but in general most RAID controllers' BIOSes will let you select which drives are to be part of the array, then ask what array type you want, then the stripe size. After you're done setting up the RAID, exit the BIOS and restart the computer.
After you restart, you will need to install the drivers for the RAID before you can use it. In my case I am installing the RAID under an existing Windows install that is on a separate drive, so when I started up my computer, Windows saw the RAID controller and I simply pointed the install at the drivers on the disc (also downloadable from the manufacturer's site). You should do the same. However, if you want to install Windows on this RAID, you will need to start with a fresh install: create a floppy disk that contains the drivers, back up any data you want saved, pop in the Windows install disk, and press "F6" at the beginning of the Windows install process; from there the installer will load the drivers and you are set.

We are almost there, but before we can use the RAID we need to initialize the disk. If you installed Windows on the RAID array in the step above, you can skip this step; you only need to initialize the disk if you added the array to an existing Windows install. Right-click on the "My Computer" icon and select "Manage," then click on "Disk Management" under "Storage." You may get a prompt from the manager; ignore it and close it, and you should see your new RAID disk, unpartitioned and uninitialized. Right-click on the square to the left of the unpartitioned space and click "Initialize Disk." Now the disk should be active. From here just right-click on the unpartitioned space, click "New Volume," and follow the steps.
Now your hardware RAID should be up and running!

## Testing

I decided to set up my 4 Raptors in a 4-drive RAID 0 for testing, after which I set them up in RAID 10 for more permanent use (i.e., redundancy). I tested these drives in various configurations using different stripe sizes to illustrate the principles discussed above.
Here you can see our base test: the performance of your average 7,200 rpm SATA drive. Its access time by itself is over 13 ms, and its average throughput only 52.2 MB/s. This is about the performance most people get with only 1 drive doing one task; the drive's performance suffers even further if data is being requested from multiple sources.

Here are the drives in RAID 0. This setup has a stripe size of 32 KiB, and I get a very nice and even (too even) transfer rate of 202.5 MB/s average. The access time is 8.1 ms, reported as roughly doubled compared to the Raptor X's single-drive 4.2 ms. Without any further testing we can see that our RAID array is already considerably faster than our base generic 7,200 rpm drive, in both sustained throughput and access time. But can we push performance further?

Here is the RAID 0 with a 16 KiB stripe size. The access time is much the same as before, but look at the maximum throughput: 326.2 MB/s, with an average sustained throughput of 279.6 MB/s! Now that is quite an improvement. As discussed above, lower stripe sizes force the file to be split across more of the drives, giving a large increase in throughput when reading and writing, as all drives are being used collectively.

Next we have our RAID 10 results. Just for fun I decided to test the RAID 10 array starting out with a 64 KiB stripe size. The results are a bit hard to judge, being so sporadic: while I achieved a nice maximum value, the average is a little low, brought down by random dips. This is a good lesson in why you should never go by maximum transfer rate alone; it is very important to look at the overall average. So I changed the stripe size and arrived at my final setup: RAID 10 with a 16 KiB stripe size, as with the RAID 0. Here we see a calmer, more predictable curve, giving a higher average and minimum.

Hopefully, after reading this you can see the benefits of RAID arrays. Just by utilizing this technology, even without spending a large chunk of change, you can achieve considerable performance improvements in hard drives, which is important as the hard drive is one of the largest bottlenecks in a computer and affects much more than just load times. From here there is nothing left to do but go and have fun with your RAID! Soon I'll be writing an article on how best to optimize your hard drive, which will let you get even more out of your array.
Until then, enjoy!
[Edited by - Jarrod1937 on July 11, 2008 11:08:27 PM]

---
Quote:
 This create a write speed penalty for all data being written to the array, since it must be written to both drives

Not necessarily.

First, if you have spare controller bandwidth, because it's writing to two drives at the same time, it can write to both drives in parallel. This is a good reason to put the two drives on different channels.

Second, write-behind RAID 1 controllers will buffer up the additional writes, and then flush them out once there's a lull in activity, thus if you're doing something other than full-on video capture, you may never see the slow-down, even if you put both drives on the same channel.

Personally, I find that RAID 1 is the easiest to set up, and the best trade-off of performance and reliability for me. Hard drives do go bad after a few years, and in the last five years, I've replaced 4 failed drives in 2 different systems, without losing a single bit of data, all because of RAID 1. I do, however, still have a remote back-up that gets taken once weekly, in case the entire computer burns out, is stolen, etc.

And, finally, a spelling nit: it's called "striping" as in "the stars and stripes." "Stripping" is something else, usually found in bars where they've sealed the windows and serve cheap, crappy beer.

---
Quote:
Original post by hplus0603

You're correct actually, though it depends on how intelligent the controller is. I'll make the corrections later.

---
So... my RAID 0 is on the fence and I want to add a RAID 1 to it now that I have money... this can be done, right? I was told that once you hook in the new drives you can rebuild as 01 and it will keep your data intact with the new drives? I'm nervous about it though; I don't want to lose my data if possible.

I have an Nvidia SLI mobo (I think 680i or something like that).

---
Quote:
Original post by AverageJoeSSU

Lol. It's doable on more advanced controllers. On the cheap little built-in software RAID controllers most consumer motherboards come with, you'll probably have issues.

Your best bet is to back up your data or create a ghost image, then configure your new RAID setup, and then dump your image back onto the new array.

---
Quote:
Original post by Washu

Ugh... Sucky... Sounds like a safer bet though. Thanks!
