Re: [Dailydave] The Small Company's Guide to Hard Drive Failure and Linux
From: Anthony.zboralski (bcs2005bellua.com)
Date: Thu Nov 18 2004 - 12:51:41 CST
> Frank Berger said:
>> using RAID-1 is most of the time also okay as a software RAID
>> configuration. Normally you do not see much more CPU load doing RAID-1
>> in software...
> CPU overhead has never really been the argument against software RAID.
> Even with RAID-5, where it matters more, it's a forced argument at best.
> The advantage of RAID-1 in hardware is that you are reducing the traffic
> on the bus. In a two-disk hardware RAID-1, you send a single packet
> (block) of data across your system bus and the controller replicates that
> block out to the disks. With software RAID-1, you have two blocks going
> across the bus for every write. Maybe system bus saturation isn't
> important to you, and then the point is moot.
> As was pointed out earlier (but perhaps not with enough forcefulness),
> nearly all hardware RAID controllers for ATA (IDE) are a lie. Unless you
> are buying something high quality, what you are really getting is fake
> RAID-- software RAID on a chip. Who do you trust more to write a better
> implementation of RAID? Neil Brown, who benefits from peer review, or an
> anonymous software engineer at a motherboard manufacturer? In addition to
> this, there is the tendency by these same vendors (many of whom provide
> integrated motherboard support) to ship very badly written drivers that
> hook into the SCSI layer-- making your array crawl.
> On the other hand, SW RAID-1 is fast on Linux, and if one side of your
> mirror dies, you can bring up the remaining disks as standalone drives
> (sans RAID)-- just mount the partitions normally. Presumably you aren't
> worried about system bus saturation in this case, which I suspect most
> people are not.
> Shameless plug (though it's getting a bit outdated):
> Also let me second the endorsement of Pilosoft. They have been very
> helpful through several power supply failures.
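
On the point above about mounting a surviving mirror member directly:
with Linux md, the RAID superblock sits at the end of the partition, so
a RAID-1 member still looks like a plain filesystem from the front. A
minimal sketch (the device name /dev/hdc1 is just a placeholder):

  # one half of the RAID-1 died; mount the survivor read-only
  mount -o ro /dev/hdc1 /mnt/rescue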
Make sure you stay away from "hardware RAID", as most of these cards
don't even support RAID-5 and the performance is really poor: 15
MB/second against 100+ with software RAID. Plus you're stuck with a
vendor with poor support.
There is a benchmark somewhere on Google in which Linux software RAID
comes out well ahead of most other implementations (*BSD, hardware, etc.).
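
You can get a crude number of your own with a sequential write test;
just a sketch (the mount point and file size here are made up):

  # write 1 GB sequentially and let dd report the rate at the end
  dd if=/dev/zero of=/mnt/raid/testfile bs=1M count=1024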
I am using a 350 GB Linux 2.6.9 software RAID-5 array (4x120 GB + 1
spare) and I am really happy with it. The setup was done using IBM's
volume manager, EVMS (http://evms.sourceforge.net); it has a really
nice interface.
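
For the record, the same array can be put together with plain mdadm
instead of EVMS; a sketch assuming five ATA disks (the device names
are hypothetical):

  # 4 active members in RAID-5 plus 1 hot spare
  mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        --spare-devices=1 /dev/hde1 /dev/hdf1 /dev/hdg1 /dev/hdh1 /dev/hdi1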
rootdis:/home/acz# hdparm -t /dev/md0
Timing buffered disk reads: 168 MB in 3.01 seconds = 55.79 MB/sec
rootdis:/home/acz# hdparm -T /dev/md0
Timing cached reads: 1176 MB in 2.00 seconds = 586.62 MB/sec
CPU usage is really minimal on this machine (1.8 GHz AMD 2500+, 1 GB of
RAM); the only time CPU usage climbs is after a crash (my UPS died on me
a few times without warning while the power was up; I guess it is time
to replace it. It's kind of stupid that my UPS is a single point of
failure-- anyone know an easy way to run 2 in parallel?)
RAID-5 or RAID-6 is really the best way to go in terms of safety:
RAID-5 allows 1 drive failure (and will rebuild its state automatically
if you have a spare), and RAID-6 allows 2 drives to fail at the same
time. Using other RAID modes for anything is pure waste, unless you
work with big temporary files, for which the performance boost of a
striping array will come in handy; 1 disk failure on a striping array
and you can say bye to your data.
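
Replacing a dead member under md is only a couple of commands; a sketch
with hypothetical device names (with a hot spare configured, the
rebuild onto the spare starts on its own):

  # check array state and rebuild progress
  cat /proc/mdstat
  # drop the dead disk, then add its replacement
  mdadm /dev/md0 --fail /dev/hdf1 --remove /dev/hdf1
  mdadm /dev/md0 --add /dev/hdj1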
Oh, and by the way: even if you use RAID-5/6 you are not protected from
corruption and human stupidity; you still need to do backups regularly.
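
Even a dumb nightly rsync to another box covers most of the
human-stupidity case; a sketch (host and paths are made up):

  # crontab entry: copy /home to another machine at 3am; no --delete,
  # so files removed by accident survive on the backup side
  0 3 * * * rsync -a /home/ backuphost:/backups/home/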