RAID 0 Stripe Sizes Compared with SSDs: OCZ Vertex Drives Tested

Storage/SSD by jmke @ 2010-01-15

We all know that two is better than one: we have dual-core CPUs, dual-GPU video cards, and if you really want to get the most out of your storage, a set of SSDs in RAID will boost your performance noticeably. We tested 6 different RAID stripe sizes and 3 different RAID configs in 4 different storage benchmarks, some synthetic, others real-world operations. More than 1200 benchmark results summed up in a few charts.


Introduction & Test Setup

Introduction

“All good things come in pairs”: we have two eyes, two ears, two CPU cores, dual-GPU configs. So why not have two storage devices linked up? When RAID was first conceived, it certainly had a business-minded approach: increase redundancy without impacting performance too much. But with more affordable RAID chips we have been playing around with RAID on desktop systems for many years now.

RAID 0 is what it is all about on desktop systems when you want the highest performance; of course, you always run a huge risk of data loss in case one of the members of the RAID 0 array decides to stop working. With ye olde HDDs, which have moving parts and spinning platters, it’s only a matter of time before they stop working. When SSDs were introduced they boasted impressive speeds, but also a very high MTBF (mean time between failures):

[Image: SSD vs HDD MTBF specifications (source)]


SSD: 2 million hours roughly translates into 228 years, whereas the HDD figure works out to about 34 years. Most of us know that 34 years for an HDD is a bit too optimistic: once your HDD is more than 5 years old you can start expecting it to fail; not saying it will, but keep in mind to make a backup copy. If we apply this same 34/5 optimism ratio to the SSD side, it comes to roughly 33 years. So a realistic MTBF of more than 30 years is quite sufficient; you’ll most likely run out of rewrite cycles on the NAND flash chips inside first.
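
For reference, the arithmetic behind these figures is sketched below in a few lines of Python; note that the 300,000-hour HDD MTBF is our own assumption, back-calculated from the ~34-year figure quoted above.

    # Rough MTBF arithmetic (a sketch; the 300,000 h HDD figure is an assumption
    # derived from the ~34-year number on the spec sheet above).
    HOURS_PER_YEAR = 24 * 365  # 8,760 hours

    ssd_mtbf_hours = 2_000_000   # OCZ Vertex spec: 2 million hours
    hdd_mtbf_hours = 300_000     # assumed typical HDD spec (~34 years)

    ssd_years = ssd_mtbf_hours / HOURS_PER_YEAR   # ~228 years
    hdd_years = hdd_mtbf_hours / HOURS_PER_YEAR   # ~34 years

    # HDDs realistically become risky after ~5 years, so scale the SSD figure
    # by the same 34/5 optimism ratio:
    realistic_ssd_years = ssd_years / (hdd_years / 5)   # ~33 years

    print(f"SSD spec MTBF: {ssd_years:.0f} years")
    print(f"HDD spec MTBF: {hdd_years:.0f} years")
    print(f"SSD scaled by the 34/5 ratio: {realistic_ssd_years:.1f} years")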

Why is this all important? Because a RAID-0 array of SSDs carries less risk than one based on HDDs.


Stripe Size - Does size matter?

When we talk about stripe size regarding RAID configurations, we’re referring to the size of the chunks in which your data is divided between the RAID drive members. If you have a 256KB file and a stripe size of 128KB in a RAID 0 config with 2 members, each member will get one 128KB piece of the 256KB file. The stripe size settings and options depend on the kind of RAID controller you will be using.
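
To make the chunking concrete, here is a minimal sketch in Python of how a file would be split over the array members; it is purely illustrative, as a real controller stripes at the block level rather than per file.

    STRIPE_SIZE = 128 * 1024   # 128KB stripe, as in the example above
    MEMBERS = 2                # two-drive RAID 0

    def stripe_layout(file_size):
        """Return (member, offset_on_member, chunk_size) for each stripe of the file."""
        layout = []
        offset = 0
        stripe_index = 0
        while offset < file_size:
            chunk = min(STRIPE_SIZE, file_size - offset)
            member = stripe_index % MEMBERS                    # round-robin over the drives
            member_offset = (stripe_index // MEMBERS) * STRIPE_SIZE
            layout.append((member, member_offset, chunk))
            offset += chunk
            stripe_index += 1
        return layout

    # A 256KB file with a 128KB stripe size: one 128KB chunk lands on each member.
    for member, member_offset, chunk in stripe_layout(256 * 1024):
        print(f"drive {member}: {chunk // 1024}KB at offset {member_offset // 1024}KB")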

Most RAID controllers will allow you to go from a 4KB stripe size up to 64 or 128KB. Our test setup, based around an Intel X58 motherboard with the integrated Intel RAID controller, goes up to 128KB.

We won’t go through the motions of setting up RAID on your system; if you intend to use it, you should set aside a bit of spare time to experiment with the different settings. What we’ve done for you in this article is configure RAID 0 with 2x SSDs using different stripe sizes and see how this impacts performance.


Enabling Hard Disk Write-Back Cache

When setting up a RAID array on an Intel-based controller you should install their Matrix Storage Manager. Inside your Windows OS this tool will allow you to enable write-back cache for your RAID array. In a non-RAID setup you can do this through Windows’ Device Manager, but once you have defined your RAID array you’ll have to use the Matrix Storage Console.

[Screenshot: enabling write-back cache in the Intel Matrix Storage Console]


We’ll do some tests on the next pages to see if and where there are differences when enabling this software option.
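
As a rough OS-level analogy for what write-back caching trades off (this is just a sketch of the general idea, not the controller setting itself), the snippet below times a plain buffered write against one that is explicitly flushed and fsync’ed to force write-through behaviour; the file name and size are arbitrary.

    import os
    import time

    def timed_write(path, data, write_through):
        """Write data to path; optionally force it to stable storage before returning."""
        start = time.perf_counter()
        with open(path, "wb") as f:
            f.write(data)
            if write_through:
                f.flush()
                os.fsync(f.fileno())   # don't return until the OS reports the data as stored
        return time.perf_counter() - start

    payload = os.urandom(64 * 1024 * 1024)   # 64MB of random data
    print("cached write  :", timed_write("wb_test.bin", payload, write_through=False))
    print("fsync'ed write:", timed_write("wb_test.bin", payload, write_through=True))
    os.remove("wb_test.bin")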

Test Setup

After our real-world SSD tests we asked OCZ for a second sample of their Vertex SSD. Armed with two 30GB SSDs, we installed them into a Dell T5500 workstation equipped with an X58 motherboard, a 3GHz Core i7 CPU and 4GB of RAM.

[Photo: the two OCZ Vertex 30GB SSDs installed in the Dell T5500 test system]


We installed the Windows 7 x64 edition and started our tests. The OCZ Vertex drives were flashed with firmware version 1.41, which includes OCZ’s garbage collection.

  • Note: for all RAID-0 tests write-back cache is enabled unless mentioned otherwise

    Partitions were created using the Windows 7 disk manager with the default NTFS file format. This is an important side note, as you can align your partition to match the stripe size you’re using, as well as match the NTFS allocation unit size to the stripe size you set. We did a quick test to see how big the impact on performance would be: negligible. But when you are setting up your final config, it’s recommended to follow these steps nonetheless (see the quick alignment check below).
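
    The alignment check itself is only a bit of modulo arithmetic; a minimal sketch follows (the offset and sizes below are examples, not our exact setup, and on Windows you would read the real partition offset with e.g. "wmic partition get StartingOffset" rather than hard-coding it):

        STRIPE_SIZE = 128 * 1024     # stripe size chosen in the RAID option ROM
        CLUSTER_SIZE = 4 * 1024      # NTFS allocation unit chosen at format time
        PARTITION_OFFSET = 1048576   # example: Windows 7's default 1MB starting offset

        offset_ok = PARTITION_OFFSET % STRIPE_SIZE == 0   # partition starts on a stripe boundary
        cluster_ok = STRIPE_SIZE % CLUSTER_SIZE == 0      # clusters never straddle a stripe

        print("partition offset aligned to stripe:", offset_ok)
        print("stripe is a whole number of NTFS clusters:", cluster_ok)
        print("aligned:", offset_ok and cluster_ok)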
    Comment from Faiakes @ 2010/01/15
    Good article.

    I'll be upgrading soon (or at least after you publish the new Cooler roundup ) and I'm undecided as to an OS drive.

    260 Euro can get you a very fast, single, simple to use SSD.
    Comment from jmke @ 2010/01/15
    Do note I'm not saying that I'm in any way recommending 30Gb drives, or these specific SSDs; it's only an example of what it would cost you, so you can anticipate. For €260 you can get a 80Gb Intel G2 drive (for €219 in fact). Cost wise it's not as interesting for sure.

    But of course this does scale upwards: if you buy 3x <€100 drives (let’s say the value X25 drives, 40GB a piece), for €300 you’ll have a 120GB drive with write speeds up to 120MB/s and read speeds well over 400MB/s, and random IOPS will be crazy
    Comment from spock @ 2010/01/16
    I don't even know where to begin -- there is so much incorrect, misleading, and dangerous information in this article. Even for the things that the author gets right, it is obvious that he doesn't actually know why they are right, or he has miraculously stumbled upon the correct answer from false assumptions and flawed logic. Please, if you are reading this article and you are a storage technology novice, get yourself informed elsewhere. If you are already in-the-know, then you will have already noticed the misinformation and no damage has been done.
    Comment from jmke @ 2010/01/16
    Quote:
    I don't even know where to begin
    most people do recommend 128k stripe size for Intel based controllers when using 2x SSDs, 256k for 4xSSD and scaling upwards. if you have a dedicated RAID controller card you'll have to experiment yourself as they behave differently regarding RAID 0 and stripe size performance.

    never claim to be "an expert", nor is it a "definitive" guide to everything SSD and RAID. But hey, it's easy to make claims about "incorrect, misleading" without actually saying anything...
    Comment from spock @ 2010/01/17
    Quote:
    Originally Posted by jmke View Post
    most people do recommend 128k stripe size for Intel based controllers when using 2x SSDs, 256k for 4xSSD and scaling upwards. if you have a dedicated RAID controller card you'll have to experiment yourself as they behave differently regarding RAID 0 and stripe size performance.

    never claim to be "an expert", nor is it a "definitive" guide to everything SSD and RAID. But hey, it's easy to make claims about "incorrect, misleading" without actually saying anything...
    I'm sorry. I read your article yesterday and my head nearly exploded. Your evaluation of stripe size is actually correct and well done, in fact, I would actually like to see some of this testing repeated with other brands of SSDs to see if the findings are particular to that drive or not. It's the other stuff in the article that worries me.

    Let me give you some examples. MTBF means absolutely nothing and it should not be relied on as a specification. Using your "34/5" ratio isn't valid, and until SSDs have been in use for 5-10 years we can make no assumptions about their long term reliability. Also I happen to know that the operating temperature for SSDs is actually not that good. Keep them around room temperature if you can. Bottom line is, RAID-0 is dangerous regardless of the technology used behind it. Your bolded statement:

    Quote:
    a RAID-0 array of SSDs carries less risk than one based on HDDs
    is untrue and it's dangerous to make people think that.

    Next, some of your results are suspicious. You can't expect to get over 100% increase in any score between a single disk vs RAID-0 without raising eyebrows. Also, you mix up percentage increases and "x" multipliers -- for example a 200% increase is actually 3x not 2x. You hide the actual results behind percentages.

    Quote:
    Overall we can conclude that you can expect a 50~200% boost in disk performance going from single SSD to two of them in RAID 0. Sequential operations will benefit the most, but smaller file operations won’t be slower than a single disk, on average.
    Again, 200% increases without explaining how that is possible by only doubling the hardware.

    Next, your summary for the write-back cache results:

    Quote:
    Summary? Just enable it, it won’t do any harm, worse case scenario you won’t notice a difference, best case scenario you get a nice throughput boost.
    You fail to either qualify your statement with "it won't harm performance", or at least mention the data loss possibility from using write-back caching. I don't recommend disabling the cache either, it's just important to understand the risks.

    Your RAID-1 results are surprising. Possible, I suppose, but surprising. I wouldn't expect it to be so much slower. Again, I'd like to see the raw numbers instead of percentages.

    The graph on the top of page 5 is awesome and really answers the question that your article attempted to. Also I appreciate the raw values instead of percentages.
    Comment from jmke @ 2010/01/17
    Quote:
    RAID-0 is dangerous regardless of the technology used behind it. MTBF means absolutely nothing and it should not be relied on as a specification.
    I agree 100% with you, I would only consider using RAID-0 as a scratch disk or in a system whose only purpose is "benchmarking" (www.hwbot.org).
    MTBF gives you but an idea of what to expect, not what you're going to get; since it is the time between failures "on average", you could start off with a failure immediately and then get the specced MTBF until the next one, but that won't do you any good now

    Quote:
    You can't expect to get over 100% increase in any score between a single disk vs RAID-0 without raising eyebrows.
    the Cache Write-Back on the Intel Storage Controller does boost performance significantly in the sequential read tests , then add in going from single to RAID 0, and you'll have a performance increase more than just 2x for some file operations.

    Quote:
    Also, you mix up percentage increases and "x" multipliers -- for example a 200% increase is actually 3x not 2x.
    thanks for this, will go through the article and correct this!

    Quote:
    You hide the actual results behind percentages.
    never my intention to hide any results; but with the amount of raw data to process, I thought it best to focus on certain aspects with increase/decrease in % rather than sticking with the performance numbers; which will definitely change between SSD models & sizes.

    Quote:
    :your summary for the write-back cache results: You fail to either qualify your statement with "it won't harm performance",
    well, the only area where CWB is "bad" for performance is the random read test of HD Tune, in all other scenarios there's a noticeable increase in performance.

    Quote:
    or at least mention the data loss possibility from using write-back caching.
    "write caching" for HDDs is enabled by default in some OS, and most enable it afterwards too; the increase in performance is worth the possible "data loss" by a sudden power down during a write operation; the only area where we now disable it actively is on removable storage, USB disks etc, to allow easy removal without losing any data we copied to the disk.

    Quote:
    Your RAID-1 results are surprising. Possible, I suppose, but surprising. I wouldn't expect it to be so much slower.
    I was expecting better read results, slower write results in the RAID 1 setup, so wasn't really surprised by the outcome. Write speeds are lower as data has to be written to two drives instead of one; sequential read speeds are higher as the data is gathered from two drives at the same time.

    A long time ago I did some similar tests with HDDs, RAID 1 vs no RAID: http://www.madshrimps.be/?action=get...280&articID=69 , pretty much the same outcome overall

    Quote:
    Also I appreciate the raw values instead of percentages.
    I'll see if I can make the source .xls somewhat presentable
    Comment from Kougar @ 2010/01/18
    Quote:
    Originally Posted by spock View Post
    Also I happen to know that the operating temperature for SSDs is actually not that good. Keep them around room temperature if you can.
    Ya have some fair point, but this one I find hard to believe. What SSD would you be referring to here? I've tested a few models and none even got warm after benchmark runs.
    Comment from spock @ 2010/01/19
    Quote:
    Originally Posted by Kougar View Post
    Ya have some fair point, but this one I find hard to believe. What SSD would you be referring to here? I've tested a few models and none even got warm after benchmark runs.
    I don't mean they need their own cooling. They don't actually self-heat very much, just make sure they are not in a case with hot air from overclocked CPUs/GPUs or your data will float away!
    Comment from spock @ 2010/01/19
    Quote:
    Originally Posted by jmke View Post
    MTBF gives you but an idea of what to expect, not what you're going to get; since it is the time between failures "on average", you could start off with a failure immediately and then get the specced MTBF until the next one, but that won't do you any good now
    No, it actually means NOTHING. Forget it was ever told to you. MTBF is a completely meaningless spec.

    Quote:
    the Cache Write-Back on the Intel Storage Controller does boost performance significantly in the sequential read tests , then add in going from single to RAID 0, and you'll have a performance increase more than just 2x for some file operations.
    So wait, you are changing more than one variable in that test? I had the impression that the comparison was being made between a single disk and RAID-0 without any other modifications to the setup.

    Quote:
    never my intention to hide any results; but with the amount of raw data to process, I thought it best to focus on certain aspects with increase/decrease in % rather than sticking with the performance numbers; which will definitely change between SSD models & sizes.
    Agreed. It's just when results get interesting (or suspicious), it's nice to see the actual values for a little bit of independent verification.

    Quote:
    "write caching" for HDDs is enabled by default in some OS, and most enable it afterwards too; the increase in performance is worth the possible "data loss" by a sudden power down during a write operation; the only area where we now disable it actively is on removable storage, USB disks etc, to allow easy removal without losing any data we copied to the disk.
    Yes, but you didn't say that. You said "Just enable it, it won’t do any harm" except it can cause harm in the form of data loss. Yes it's often the default, yes it's an acceptable risk, etc., but it's still there.

    Quote:
    I was expecting better read results, slower write results in the RAID 1 setup, so wasn't really surprised by the outcome. Write speeds are lower as data has to be written to two drives instead of one; sequential read speeds are higher as the data is gathered from two drives at the same time.
    RAID-1 needs to write to both disks but it can (essentially) be done simultaneously so the typical expectation is RAID-1 has the same write speed and up to 2x read speed as a single disk. Any significant loss in write performance is probably an implementation bug.
    Comment from jmke @ 2010/01/19
    Quote:
    So wait, you are changing more than one variable in that test?
    no. I'm speaking about the overall change between single disk and the optimal RAID 0 config.

    Quote:
    You said "Just enable it, it won’t do any harm" except it can cause harm in the form of data loss.
    it's ON by default for WD HDDs, and the "data loss" might be less than you think

    Quote:
    RAID-1 needs to write to both disks but it can (essentially) be done simultaneously so the typical expectation is RAID-1 has the same write speed and up to 2x read speed as a single disk.
    theoretically yes, but practical experience shows slower write speeds with RAID 1 compared to a single disk setup
    Comment from jmke @ 2010/01/19
    interesting source for SSD articles and information: http://www.storagesearch.com/ , SSD endurance is an interesting read: http://www.storagesearch.com/ssdmyths-endurance.html

    Quote:
    not only can an SSD RAID array offer a multiple of a single SSD's throughput, and IOPs, just as with hard disks but depending on the array configuration the operating life can be multiplied as well - because not all the disks will operate at 100% duty cycle. That means that MTBF and not write endurance will be the limiting factors. And although oem published MTBF data for hard disks has been discredited recently - the MTBF data for flash SSDs has been verified for over a decade in more discriminating applications in high reliability embedded systems.

     
