2MB and 8MB cache sizes on HDDs tested in RAID

Storage/HDD by jmke @ 2003-06-12

How does a 2MB cache hard disk compare to its bigger 8MB cache brother? We test them in different RAID setups, using real-world benchmarks to show you the actual difference! Software RAID 0 / Hardware RAID 1 and 0 with stripe sizes: 1 - 4 - 16 - 64 - 512.
HDTach / SiSoft Sandra / File Copying / UT2003


Introduction

Maxtor DiamondMax Plus 9 RAID Comparison

In this article I will try to find out what the exact advantages of a RAID 0 or RAID 1 setup are. Large storage space has become very affordable these days, but the one element that hasn't made big leaps forward is the average read/write speed of these drives.
Now that motherboards from almost all vendors ship with an integrated RAID controller, it becomes quite attractive to actually put that functionality to use.

For those of you who know all about RAID, skip ahead to the next paragraph; otherwise, read on for an explanation of what exactly RAID (redundant array of independent disks; originally redundant array of inexpensive disks) is.

RAID is a way of storing the same data in different places (thus, redundantly) on multiple hard disks. By placing data on multiple disks, input/output operations can overlap in a balanced way, improving performance. And although using multiple disks lowers the mean time between failures (MTBF) of the array as a whole, storing data redundantly increases fault tolerance.
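
To put some rough numbers on that trade-off, here is a minimal back-of-the-envelope sketch in Python, assuming independent drive failures and a purely hypothetical 3% yearly failure chance per drive:

    # Hypothetical per-drive yearly failure probability (an assumption, not a spec)
    p = 0.03

    # RAID 0 (striping): the array is lost as soon as ANY drive fails
    raid0_loss = 1 - (1 - p) ** 2   # ~5.91% for two drives

    # RAID 1 (mirroring): data is lost only if BOTH drives fail
    raid1_loss = p ** 2             # ~0.09% for two drives

    print("RAID 0 data loss chance: %.2f%%" % (raid0_loss * 100))
    print("RAID 1 data loss chance: %.2f%%" % (raid1_loss * 100))

In other words: striping two drives roughly doubles the chance of losing your data, while mirroring makes it almost negligible.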


The 2 most popular types of RAID for the home and performance user are RAID 0 and RAID 1.

Thanks to this link from the HWFaq I had quick access to these explanatory drawings and text:

RAID 0: Striped Disk Array without Fault Tolerance

[RAID 0 diagram - Madshrimps (c)]
RAID Level 0 requires a minimum of 2 drives to implement


Advantages:
  • RAID 0 implements a striped disk array: the data is broken down into blocks and each block is written to a separate disk drive (see the short sketch below)
  • I/O performance is greatly improved by spreading the I/O load across many channels and drives
  • Best performance is achieved when data is striped across multiple controllers with only one drive per controller
  • No parity calculation overhead is involved
  • Very simple design
  • Easy to implement

Disadvantages:
  • Not a "true" RAID because it is NOT fault-tolerant
  • The failure of just one drive will result in all data in the array being lost
  • Should never be used in mission-critical environments

Recommended Use:
  • Video Production and Editing
  • Image Editing
  • Pre-Press Applications
  • Any application requiring high bandwidth
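
To make the striping idea more concrete, here is a minimal Python sketch of how blocks are spread round-robin over the drives; this is an illustration only, not the controller's actual firmware logic:

    def stripe_write(data, disks, stripe_size):
        # Split data into stripe_size blocks, writing them round-robin
        for i in range(0, len(data), stripe_size):
            block = data[i:i + stripe_size]
            disks[(i // stripe_size) % len(disks)].extend(block)

    disks = [bytearray(), bytearray()]      # two drives, as in our RAID 0 setup
    stripe_write(b"ABCDEFGH", disks, 2)
    print(disks)                            # [bytearray(b'ABEF'), bytearray(b'CDGH')]

Because consecutive blocks land on different drives, a large sequential transfer can pull data from both drives at once.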


RAID 1: Mirroring and Duplexing

[RAID 1 diagram - Madshrimps (c)]
RAID Level 1 requires a minimum of 2 drives to implement


Advantages:
  • One write or two reads possible per mirrored pair (see the sketch at the end of this section)
  • Twice the read transaction rate of single disks, same write transaction rate as single disks
  • 100% redundancy of data means no rebuild is necessary in case of a disk failure, just a copy to the replacement disk
  • Transfer rate per block is equal to that of a single disk
  • Under certain circumstances, RAID 1 can sustain multiple simultaneous drive failures
  • Simplest RAID storage subsystem design

Disadvantages:
  • Highest disk overhead of all RAID types (100%) - inefficient
  • Lower write performance

Recommended Use:
  • Accounting, Payroll, Financial
  • Any application requiring very high availability
  • Dual data storage of important files
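
For contrast with the striping sketch above, here is the same kind of minimal Python illustration for mirroring; again, just a sketch of the concept, not real controller code:

    def mirror_write(data, disks):
        # Every write goes to ALL drives, so each one holds a full copy
        for disk in disks:
            disk.extend(data)

    def mirror_read(disks, offset, length, drive=0):
        # Any single drive can serve the read, which allows load balancing
        return bytes(disks[drive][offset:offset + length])

    disks = [bytearray(), bytearray()]
    mirror_write(b"payroll records", disks)
    print(mirror_read(disks, 0, 7, drive=1))   # b'payroll' - from either drive

This is also why writes are no faster than a single drive (both copies must be written), while reads can in theory come from both drives at once.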



Testing Environment

For this test I used a PCI RAID controller from Q-Tec, the 340R ATA133.
It can operate in four different modes:
  • the drives connected act as individual drives
  • the drives connected form a RAID 1 setup
  • the drives connected form a RAID 0 setup
  • the drives connected form a RAID 0+1 setup (this mode is not discussed in this article; it combines two striped (RAID 0) drive arrays and treats them in a RAID 1 "way". If you wish to know what it is in detail, please click this link)

I had a total of 4 drives at my disposal: 2x the 2MB and 2x the 8MB cache version of the latest DiamondMax Plus 9 series.

Each drive of the pair was hooked up to its own IDE port on the controller.

[photo: the drives hooked up to the Q-Tec controller - Madshrimps (c)]


After I finished testing the 2MB cache versions, I swapped the drives for their more powerful brothers.

I ran the following tests:

  • Individual drive mode
  • RAID 0 - 1k stripe size
  • RAID 0 - 4k stripe size
  • RAID 0 - 16k stripe size
  • RAID 0 - 64k stripe size
  • RAID 0 - 512k stripe size
  • RAID 1
  • Software RAID 0
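
Why test so many stripe sizes? The stripe size decides which drive a given chunk of data ends up on, and therefore whether a small file gets split across both drives or stays on one. The following Python sketch (illustrative only) shows which drives a 100KB file touches at each of the stripe sizes tested:

    def locate(offset_kb, stripe_kb, num_drives=2):
        # Map a logical offset to the drive that holds it
        return (offset_kb // stripe_kb) % num_drives

    for stripe in (1, 4, 16, 64, 512):
        drives = sorted({locate(off, stripe) for off in range(100)})
        print("%3dKB stripe -> drives used: %s" % (stripe, drives))

With a 512KB stripe the whole file sits on one drive (no parallelism, but no splitting overhead either), while with a 1KB stripe every other kilobyte alternates between the two drives.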

[photo - Madshrimps (c)]


The tag on each hard drive tells you a lot about its specifications:

  • The first 2 characters define the Maxtor drive model
    examples:
    4Rxxxxx = DiamondMax 16 series
    6Yxxxxx = DiamondMax Plus 9 series
    6Exxxxx = DiamondMax Plus 8 series
    2Fxxxxx = Fireball III series

  • the next 3 digits define the size of the drive in GB
    examples:
    6E040L0 = 40GB
    6Y120L0 = 120GB

  • The last 2 characters represent different things depending on the drive model. On the latest DiamondMax Plus 9 series, P0 means the drive has an IDE interface with 8MB cache and L0 an IDE interface with 2MB cache, while M0 is used for Maxtor drives with a SATA interface. On the older drives and the DiamondMax 16 models, a "J" in the model number denotes a ball bearing motor and an "L" a Fluid Dynamic Bearing (FDB) motor.
    examples:
    4R120L0 = FDB motor
    4R080J0 = Ball bearing motor
    6Y120L0 = IDE 2MB cache
    6Y120P0 = IDE 8MB cache
    6Y200M0 = SATA
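
Those rules are simple enough to put into a few lines of Python; this little decoder only knows the series and suffix codes listed above, and reports everything else as unknown:

    SERIES = {"4R": "DiamondMax 16", "6Y": "DiamondMax Plus 9",
              "6E": "DiamondMax Plus 8", "2F": "Fireball III"}

    def decode(model):
        series = SERIES.get(model[:2], "unknown series")
        size_gb = int(model[2:5])               # 3 digits = capacity in GB
        suffix = model[5:]
        if model[:2] == "6Y":                   # DiamondMax Plus 9 rules
            variant = {"L0": "IDE, 2MB cache", "P0": "IDE, 8MB cache",
                       "M0": "SATA"}.get(suffix, "unknown variant")
        else:                                   # older series: motor type
            variant = {"J0": "ball bearing motor",
                       "L0": "FDB motor"}.get(suffix, "unknown variant")
        return "%s, %dGB, %s" % (series, size_gb, variant)

    print(decode("6Y120P0"))   # DiamondMax Plus 9, 120GB, IDE, 8MB cache
    print(decode("4R080J0"))   # DiamondMax 16, 80GB, ball bearing motor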


Test Setup and Benchmark line-up

For testing I used the following hardware:
  • Abit KR7A
  • XP1700+
  • 256MB DDR
  • Q-Tec 340R
  • System drive: 1x 20GB DiamondMax Plus 8
  • 2x 120GB DiamondMax Plus 9 2MB cache (6Y120L0)
  • 2x 120GB DiamondMax Plus 9 8MB cache (6Y120P0)

The benchmarks used are:

  • HDTach 2.61, read/write tests
  • SiSoft Sandra 2003
  • Copying of the UT2003 Demo folder TO and FROM the source drive (20GB Maxtor)
  • Load time of a map in UT2003, timing both the first (unbuffered) load and a subsequent buffered load
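
The copy test itself is nothing more exotic than a timed folder copy; a minimal Python sketch of the idea looks like this (the paths are placeholders, not the actual test paths):

    import shutil, time

    src = r"C:\UT2003Demo"     # placeholder: folder on the source drive
    dst = r"E:\UT2003Demo"     # placeholder: folder on the RAID array

    start = time.time()
    shutil.copytree(src, dst)  # copy the whole folder tree
    print("Copy took %.1f seconds" % (time.time() - start))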

[screenshot: detailed view of the folder that was copied TO and FROM the test drives - Madshrimps (c)]



Now that you have all the info on the system and the tests to be run, here are the results. The first batch of individual test results comes without comment, as they pretty much speak for themselves...

