linux-raid.vger.kernel.org archive mirror
* Which raid card to buy for Sarge
@ 2004-03-31 15:47 me
  2004-03-31 16:15 ` Luca Berra
  2004-04-01  4:24 ` me
  0 siblings, 2 replies; 40+ messages in thread
From: me @ 2004-03-31 15:47 UTC (permalink / raw)
  To: linux-raid

Hi,

I wanted to buy an IDE raid card, to run inside a stock debian Sarge box.  I
was hoping someone could point me to a card that will play nice with Sarge
out of the box.  Seems like all the cards support the usual distros (RH,
Suse...) but none seem to mention debian (and more specifically Sarge).  I'd
like a card that can run raid 1 and 5.

If it doesn't run right out of the box, how about one that isn't too hard to
get working? (I know "too hard" is subjective.)

Any suggestions would be very appreciated

Thanks
Jay


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 15:47 Which raid card to buy for Sarge me
@ 2004-03-31 16:15 ` Luca Berra
  2004-03-31 16:43   ` me
  2004-04-01  4:24 ` me
  1 sibling, 1 reply; 40+ messages in thread
From: Luca Berra @ 2004-03-31 16:15 UTC (permalink / raw)
  To: me; +Cc: linux-raid

me@heyjay.com wrote:

> Hi,
> 
> I wanted to buy an IDE raid card, to run inside a stock debian Sarge box.  I
> was hoping someone could point me to a card that will play nice with Sarge
> out of the box.  Seems like all the cards support the usual distros (RH,
> Suse...) but none seem to mention debian (and more specifically Sarge).  I'd
> like a card that can run raid 1 and 5.
> 
> If it doesn't run right out of the box, how about one that isn't too hard to
> get working? (I know "too hard" is subjective.)
> 
> Any suggestions would be very appreciated
Look for 3ware cards; the driver is in the stock Linux 2.4 kernel,
so they work out of the box.
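
A quick way to check, assuming the 2.4 module name (3w-xxxx) and that the
exported unit shows up as an ordinary SCSI disk:

  # load the 3ware driver if it is not built in, and see what the kernel found
  modprobe 3w-xxxx
  dmesg | grep -i 3ware
  # the array appears as a normal SCSI disk, e.g. /dev/sda
  fdisk -l /dev/sda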

-- 
Luca Berra -- bluca@comedia.it


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 16:15 ` Luca Berra
@ 2004-03-31 16:43   ` me
  2004-03-31 16:58     ` Måns Rullgård
  0 siblings, 1 reply; 40+ messages in thread
From: me @ 2004-03-31 16:43 UTC (permalink / raw)
  To: linux-raid

Seems like 3ware is the card of choice.  I'll probably pick one up at cdw,
unless anyone knows where I can get them on the cheap.

Speaking of cheap, anyone ever have any luck with the Highpoint RocketRaid
cards on Sarge?  They're roughly 80% the price of 3ware (maybe you get what
you pay for)

Thanks
Jay


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 16:43   ` me
@ 2004-03-31 16:58     ` Måns Rullgård
  2004-03-31 18:03       ` Ralph Paßgang
  0 siblings, 1 reply; 40+ messages in thread
From: Måns Rullgård @ 2004-03-31 16:58 UTC (permalink / raw)
  To: linux-raid

<me@heyjay.com> writes:

> Seems like 3ware is the card of choice.  I'll probably pick one up at cdw,
> unless anyone knows where I can get them on the cheap.
>
> Speaking of cheap, anyone ever have any luck with the Highpoint RocketRaid
> cards on Sarge?

I don't use debian, but my rocketraid is doing well with recent kernels.

> They're roughly 80% the price of 3ware (maybe you get what you pay
> for)

80%?  You must mean 20%.  The Highpoint cards are software raid cards,
meaning that they are just regular ATA cards with a "RAID" label
slapped on the box.

-- 
Måns Rullgård
mru@kth.se


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 16:58     ` Måns Rullgård
@ 2004-03-31 18:03       ` Ralph Paßgang
  2004-03-31 19:50         ` Mark Hahn
  0 siblings, 1 reply; 40+ messages in thread
From: Ralph Paßgang @ 2004-03-31 18:03 UTC (permalink / raw)
  To: linux-raid

On Wednesday, 31 March 2004 at 18:58, you wrote:
> <me@heyjay.com> writes:
> > Seems like 3ware is the card of choice.  I'll probably pick one up at
> > cdw, unless anyone knows where I can get them on the cheap.
> >
> > Speaking of cheap, anyone ever have any luck with the Highpoint
> > RocketRaid cards on Sarge?
>
> I don't use debian, but my rocketraid is doing well with recent kernels.
>
> > They're roughly 80% the price of 3ware (maybe you get what you pay
> > for)
>
> 80%?  You must mean 20%.  The Highpoint cards are software raid cards,
> meaning that they are just regular ATA cards with a "RAID" label
> slapped on the box.

If would not use the "raid" feature of the Highpoint cards, because it is only 
software raid and not so performant as a hardware raid. If you don't need a 
high-end system, you can build a software raid with standard Linux tools like 
mdadm. If you need performance, think about a SCSI raid setup and a good 
adapter, maybe Adaptec or 3ware or something like that.

If software raid, then I would prefer the Linux software raid (with mdadm, for 
example) and would not use poorly supported third-party software raid 
"drivers". For mdadm you will find more support, and you can move the disks 
into another Linux box and it will read the raid... With the Highpoint raid 
function you are forced to keep using the Highpoint card.
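
For example, a minimal mdadm sketch (the device names are only an 
illustration):

  # build a 3-disk raid5 out of partitions on plain IDE disks
  mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/hde1 /dev/hdg1 /dev/hdi1
  # later, in another Linux box, the array can be found and started again
  mdadm --examine --scan
  mdadm --assemble /dev/md0 /dev/hde1 /dev/hdg1 /dev/hdi1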

And just for your information, you don't have to look for a card that is 
specially supported by Debian. Debian uses a normal Linux kernel (plus some 
patches, but nothing revolutionary), so it can handle every raid adapter that 
is supported by the Linux kernel... It's the same for SuSE and RedHat: they 
support every adapter supported by the standard Linux kernel (plus maybe some 
patched-in drivers). Even when an adapter is not supported in the vanilla 
kernel you can patch the kernel source yourself... But it is always a good 
decision to only use hardware that is supported by a stock kernel.

So better to look for general Linux support. I think there is not one piece of 
hardware that is declared as "ready for Debian Sarge" :))

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 18:03       ` Ralph Paßgang
@ 2004-03-31 19:50         ` Mark Hahn
  2004-03-31 20:19           ` Richard Scobie
                             ` (2 more replies)
  0 siblings, 3 replies; 40+ messages in thread
From: Mark Hahn @ 2004-03-31 19:50 UTC (permalink / raw)
  To: linux-raid

> If would not use the "raid" feature of the Highpoint cards, because it is only 
> software raid and not so performant as a hardware raid. If you don't need a 

please don't say things like this.  HW raid is *NOT* generally
faster or better than software raid.  yes, if you're building a 
quad-gigabit fileserver out of an old P5/100 you had sitting around,
you're not even going to start looking at sw raid.

but for a normal FS config (dual opteron or xeon, >1GB ram,
2-400 MB/s sustained disk throughput), software raid is The Right Choice.

- speed: it's easy to do hundreds of MB/s with sw raid.  it's surprisingly hard
to break even 100 MB/s using hw raid.

- you don't pay through the nose for a crappy embedded processor
to do your parity calculations

- hw raid *does* reduce the amount of PCI-X traffic you generate,
but do you really care, at 1 GB/s?

- sw raid *does* consume some host CPU cycles, but do you care,
given that this is a fileserver?

- give me mdadm and normal userspace tools over some wheel-reinventing
hw raid configurator.

- you've probably got the hardware to fix an exploded sw-raid server
already in your office (other computers, normal disk controllers, etc).
replacing that hw raid card WILL take more than 30 minutes, obviously will
take money, and will eventually become impossible.
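
a quick sanity check of raw sequential throughput, as a sketch (/dev/md0 and
the sizes are just examples; use a count larger than RAM to dodge the cache):

  # sequential read from the whole array
  dd if=/dev/md0 of=/dev/null bs=1M count=2048
  # per-component baseline for comparison
  dd if=/dev/hde of=/dev/null bs=1M count=2048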


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 19:50         ` Mark Hahn
@ 2004-03-31 20:19           ` Richard Scobie
  2004-04-01  9:07             ` KELEMEN Peter
  2004-03-31 20:42           ` Which raid card to buy for Sarge Ralph Paßgang
  2004-04-01  4:49           ` Brad Campbell
  2 siblings, 1 reply; 40+ messages in thread
From: Richard Scobie @ 2004-03-31 20:19 UTC (permalink / raw)
  To: linux-raid

Mark Hahn wrote:

> please don't say things like this.  HW raid is *NOT* generally
> faster or better than software raid.  yes, if you're building a 
> quad-gigabit fileserver out of an old P5/100 you had sitting around,
> you're not even going to start looking at sw raid.
> 
> but for a normal FS config (dual opteron or xeon, >1GB ram,
> 2-400 MB/s sustained disk throughput), software raid is The Right Choice.
> 
> - speed: it's easy to do hundreds of MB/s with sw raid.  it's surprisingly hard
> to break even 100 MB/s using hw raid.
> 

<snip> further points I agree with.

I have spent some time benchmarking a dual xeon 2.4 with 1 GB and a
3ware 7506LP with 4 x 250 GB 7200RPM on 2.4.25 with XFS.

I would very much prefer to run software RAID 10 on this setup, for a
few reasons including being able to use mdadm for everything (I am using
a software RAID 1 for the OS) and the fact that 3dmd seems to have a
negative impact on disk writes, plus the other negatives you mention.

However, after extensive tests with iozone at the file size (1.2MB) that
this server will be predominantly dealing with, varying stripe sizes and
the myriad of XFS variables, here are my best results:

3ware hardware RAID 10: Reads - 92.8MB/s    Writes - 88.9MB/s

Software RAID 10:  Reads - 53.7MB/s   Writes - 94.9 MB/s

If anyone is running similar hardware and is able to get better software
RAID performance, I would be very interested to hear the parameters.
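
For anyone wanting to try the same thing, the runs were iozone invocations of
roughly this shape (a sketch only; the record size and mount point varied
between runs):

  # single 1.2MB file, sequential write (-i 0) and read (-i 1) tests
  iozone -s 1200k -r 64k -i 0 -i 1 -f /raid/iozone.tmp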

Regards,

Richard Scobie



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 19:50         ` Mark Hahn
  2004-03-31 20:19           ` Richard Scobie
@ 2004-03-31 20:42           ` Ralph Paßgang
  2004-03-31 20:59             ` Jeff Garzik
  2004-03-31 21:39             ` Guy
  2004-04-01  4:49           ` Brad Campbell
  2 siblings, 2 replies; 40+ messages in thread
From: Ralph Paßgang @ 2004-03-31 20:42 UTC (permalink / raw)
  To: linux-raid

On Wednesday, 31 March 2004 at 21:50, you wrote:
> > If would not use the "raid" feature of the Highpoint cards, because it is
> > only software raid and not so performant as a hardware raid. If you don't
> > need a

Oh, my sentence should have started with: "I would not use ..." not with: 
"If.." :)

> please don't say things like this.  HW raid is *NOT* generally
> faster or better than software raid. 

I never said that it is "better" than software raid, because I don't think so. 
I am using mdadm myself and I think it is great software (together with the 
kernel code). But normally a real hardware raid is without a doubt faster than 
a software raid. Most servers/computers don't really need hardware raid, 
though, because they don't produce such huge amounts of data.

With the sentence you quoted from me, I only wanted to say that I wouldn't 
use the Highpoint card's own software to build a software raid, because it is 
closed software (if I remember right) and it is only there to fool people: 
Highpoint doesn't say clearly (on the product box, for example) that it is a 
"software" raid IDE100/133 card (so it should be called an IDE adapter with 
software raid tools). If you are not familiar with this kind of stuff you may 
even believe, years later, that it is a real hardware raid (especially under 
Windows).

I would use the HPT adapter only as a normal IDE adapter and build a software 
raid on it, or, if I needed the performance, I would use a hardware raid, but 
only then. Like most people I don't have too much money, so I don't buy 
useless stuff. (At home I still use a Pentium 60 as a gateway with firewall, 
NTP server and some other small things.)

But you are right, only a few servers really need a hardware raid. Most of 
them are idling most of the time :) But many customers (and I am working for 
an ISP) want a hardware raid, even if they don't have a clue about this stuff 
and don't really need it for their server. The words "hardware raid" are good 
for marketing... It's like IDE and SCSI drives :) And if a customer wants 
something, it is not always a good idea to try to convert him to another 
solution.

> yes, if you're building a 
> quad-gigabit fileserver out of an old P5/100 you had sitting around,
> you're not even going to start looking at sw raid.
>
> but for a normal FS config (dual opteron or xeon, >1GB ram,
> 2-400 MB/s sustained disk throughput), software raid is The Right Choice.

I never said that software raid is slow! I just said that hardware raid is 
faster... That's a difference :) I would never say that an AMD Athlon XP is 
slow, but without a doubt an AMD Athlon 64 is faster :)

> - speed: it's easy to do hundreds of MB/s with sw raid.  it's surprisingly
> hard to break even 100 MB/s using hw raid.
>
> - you don't pay through the nose for a crappy embedded processor
> to do your parity calculations
>
> - hw raid *does* reduce the amount of PCI-X traffic you generate,
> but do you really care, at 1 GB/s?

It's not about me and whether I care... I only tried to help Jay choose the 
right solution for his raid setup... I only said that I think the HPT adapters 
are fake raid adapters, because it's software raid, and that if HE wants 
hardware raid, he should pick another adapter.

> - sw raid *does* consume some host CPU cycles, but do you care,
> given that this is a fileserver?
>
> - give me mdadm and normal userspace tools over some wheel-reinventing
> hw raid configurator.
>
> - you've probably got the hardware to fix an exploded sw-raid server
> already in your office (other computers, normal disk controllers, etc).
> replacing that hw raid card WILL take more than 30 minutes, obviously will
> take money, and will eventually become impossible.

I also said this... I have had some broken disks in hardware raids and in 
software raids over the last 2 years. Both solutions have advantages and 
disadvantages here as well:

- Advantage: it's faster to fix a broken sw raid setup. It's only a shutdown, 
change disk, restart, and rebuild the array in the background (a sketch of the 
mdadm steps is below). With a hardware raid you normally have to rebuild the 
array in the BIOS, so the computer is offline for at least that time.

- Disadvantage: Sometimes Linux crashes when a hard disk in an array breaks. I 
don't know for sure why this happens, but I guess it has something to do with 
the IDE channel being marked as busy and never becoming usable again. A crash 
after a disk broke down never happened to me on a hardware raid.
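
The mdadm steps for such a disk swap, as a rough sketch (device names are only 
an example):

  # mark the dying disk as failed and pull it out of the array
  mdadm /dev/md0 --fail /dev/hdg1
  mdadm /dev/md0 --remove /dev/hdg1
  # shut down, swap the disk, boot and partition the new disk, then re-add it
  mdadm /dev/md0 --add /dev/hdg1
  # the rebuild runs in the background; progress shows up here
  cat /proc/mdstat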

I know that this is a kernel mailing list for the software raid in Linux, but 
hey, I only told Jay what the advantages and disadvantages of the HPT software 
raid, the Linux mdadm software raid and a hardware raid are. I think there was 
nothing wrong with that, and I even think that we have more or less the same 
opinion on sw/hw raids.

Once again: I am using mdadm myself and think that the Linux software raid is 
great, no question... I never meant to say anything else. So don't take this 
the wrong way, but _even_ hardware raid has a right to exist :))

--Ralph

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 20:42           ` Which raid card to buy for Sarge Ralph Paßgang
@ 2004-03-31 20:59             ` Jeff Garzik
  2004-03-31 21:39             ` Guy
  1 sibling, 0 replies; 40+ messages in thread
From: Jeff Garzik @ 2004-03-31 20:59 UTC (permalink / raw)
  To: Ralph Paßgang; +Cc: linux-raid

Ralph Paßgang wrote:
> kernel code). But normally a real hardware raid is without a doubt faster than 
> a software raid.


Well...  Regardless of hardware or software RAID, the kernel I/O limits 
and the drive I/O limits are the main limiting factor.  Hardware RAID 
mainly helps for RAID1 and RAID5 writes, eliminating duplicate copies of 
data going across the PCI bus... at the expense of relinquishing control 
over your data to the hardware RAID's firmware.

A UDMA/133 PATA hardware RAID from a large company I won't mention is 
quite a bit slower than any SATA software RAID that I've tested...

	Jeff




^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: Which raid card to buy for Sarge
  2004-03-31 20:42           ` Which raid card to buy for Sarge Ralph Paßgang
  2004-03-31 20:59             ` Jeff Garzik
@ 2004-03-31 21:39             ` Guy
  1 sibling, 0 replies; 40+ messages in thread
From: Guy @ 2004-03-31 21:39 UTC (permalink / raw)
  To: linux-raid

I have not seen a hardware RAID that is faster than md (software RAID).
I also have not used any hardware RAID within the last 3 years.
On my P3-500 SMP system, the CPU usage of MD is less than 5% during a
re-build.  It re-builds at about 5 MB/s across 14 disks.

So, when you say HW RAID is faster, I disagree.  But I admit some HW RAID
may be faster than some software RAID.  It depends on the hardware!

Also, from what I have seen, a HW RAID system is limited to the SCSI buses
on the RAID card.  You can't RAID drives on other cards/buses.  md can use
any disk on any bus.  You can even mix SCSI, SATA and IDE with md.
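
For example (a sketch, with made-up device names), md will happily mirror a
SCSI partition against an IDE one, and you can watch the rebuild from
userspace:

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/hdc1
  cat /proc/mdstat
  # rebuild speed is tunable if it eats too much bus bandwidth
  cat /proc/sys/dev/raid/speed_limit_max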

Guy


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 15:47 Which raid card to buy for Sarge me
  2004-03-31 16:15 ` Luca Berra
@ 2004-04-01  4:24 ` me
  2004-04-01  4:57   ` Clemens Schwaighofer
  2004-04-01  5:58   ` Jeff Garzik
  1 sibling, 2 replies; 40+ messages in thread
From: me @ 2004-04-01  4:24 UTC (permalink / raw)
  To: linux-raid

I'd like to thank everyone who responded, and I didn't mean to start a
software vs hardware war.  But now that I did I'm sort of happy, because
it's made me question if my direction is proper for my goals.

My goal is reliability, not really performance.  I'd like to build a new box
with a mirror (raid 1), and migrate an existing box onto the new box with
the mirror.  Really the new box is an old box (450 MHz PIII).  I figured I
could get a raid card and be done with it.  But now I'm wondering if I
shouldn't skip getting a raid card and do it with software raid.
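
From what I've read so far, the usual software raid way of doing that
migration looks roughly like this (a sketch with made-up device names, so
please correct me if I have it wrong):

  # build the mirror on the new disk only, with one slot left "missing"
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hdc1 missing
  mke2fs /dev/md0
  # copy the existing system over (cp -a, rsync, ...), then pull the old
  # disk into the mirror and let it sync
  mdadm /dev/md0 --add /dev/hda1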

Being totally new to Raid, I really don't know the criteria I should be
considering that would lead me down one path or the other.

- I'm using a (fairly) old machine, I have to figure processor speed has to
be an issue with software raid
- When the box is being stressed most of the work is computational, the
process I run puts the CPU% at 98% (according to top).
- Most of the disk activity is read (the disks aren't very active in
general)
- I'm going to use WD 7200 rpm drives
- going to do raid 1
- I'm going to want to do backups onto a tape drive

Thanks
Jay


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 19:50         ` Mark Hahn
  2004-03-31 20:19           ` Richard Scobie
  2004-03-31 20:42           ` Which raid card to buy for Sarge Ralph Paßgang
@ 2004-04-01  4:49           ` Brad Campbell
  2004-04-01  4:51             ` seth vidal
                               ` (3 more replies)
  2 siblings, 4 replies; 40+ messages in thread
From: Brad Campbell @ 2004-04-01  4:49 UTC (permalink / raw)
  To: linux-raid

Mark Hahn wrote:
>>If would not use the "raid" feature of the Highpoint cards, because it is only 
>>software raid and not so performant as a hardware raid. If you don't need a 
> 
> 
> please don't say things like this.  HW raid is *NOT* generally
> faster or better than software raid.  yes, if you're building a 
> quad-gigabit fileserver out of an old P5/100 you had sitting around,
> you're not even going to start looking at sw raid.

One point. Hardware raid (and faux hardware raid) provides real hot swap with on-the-fly rebuilds.
Linux software raid can't (yet).
Both the Promise and Highpoint proprietary ide raid drivers can, but no raw kernel or libata drivers can 
as yet, and the interface between the hotswap driver and md driver is nowhere near there.

With the Highpoint driver I get a drive failure and the card starts beeping. I fire up the 
management util, remove the faulty drive, swap it out and insert the new drive in the array. The 
array starts rebuilding - no effect on the uptime and only a slight loss in throughput. Plus it's 
seamless.

I used a pair of Highpoint Rocketraid 1540 cards with the Highpoint driver (as it presented all 8 
drives as 8 units on a scsi chain) with linux md raid-5, and I have now moved onto 3 Promise 
SATA150-TX4 units in an md raid-5.

I did play with the highpoint raid-5 (which can now span controllers) and its management features; 
hotswap and hot-rebuild were quite good.

Not that I'm advocating hardware or faux hardware raid, just noting that linux software raid still 
has a large deficiency.

Regards,
Brad

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:49           ` Brad Campbell
@ 2004-04-01  4:51             ` seth vidal
  2004-04-01  5:01               ` Brad Campbell
  2004-04-01  5:29             ` Guy
                               ` (2 subsequent siblings)
  3 siblings, 1 reply; 40+ messages in thread
From: seth vidal @ 2004-04-01  4:51 UTC (permalink / raw)
  To: Brad Campbell; +Cc: linux-raid

> One point. Hardware raid (and faux hardware raid) provides real hot swap with on-the-fly rebuilds.
> Linux software raid can't (yet).

umm. That's surprising. My software raid arrays running on dell
powervault 221s scsi boxes with adaptec 39160 controllers are
hotswappable and when I put in a new disk and add it to the array it
resyncs on the fly.

Are you just talking about ide-based raid?

-sv



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:24 ` me
@ 2004-04-01  4:57   ` Clemens Schwaighofer
  2004-04-01  5:34     ` Jeff Garzik
  2004-04-01  5:58   ` Jeff Garzik
  1 sibling, 1 reply; 40+ messages in thread
From: Clemens Schwaighofer @ 2004-04-01  4:57 UTC (permalink / raw)
  To: me; +Cc: linux-raid


me@heyjay.com wrote:

| - I'm using a (fairly) old machine, I have to figure processor speed has to
| be an issue with software raid
| - When the box is being stressed most of the work is computational, the
| process I run puts the CPU% at 98% (according to top).
| - Most of the disk activity is read (the disks aren't very active in
| general)
| - I'm going to use WD 7200 rpm drives
| - going to do raid 1
| - I'm going to want to do backups onto a tape drive

The only thing that comes to my mind is hot swapping. If you go with IDE
drives it is quite possible it is not going to work, which means you
don't have data loss if a drive fails, but you do have downtime (whether
this is critical depends on the server).
If you use SCSI drives and software raid you can hot swap, so if you don't
have a special reason for hardware raid, you can go for software raid.

-- 
Clemens Schwaighofer - IT Engineer & System Administration
==========================================================
TEQUILA\Japan, 6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN
Tel: +81-(0)3-3545-7703            Fax: +81-(0)3-3545-7343
http://www.tequila.co.jp
==========================================================

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:51             ` seth vidal
@ 2004-04-01  5:01               ` Brad Campbell
  2004-04-01  5:39                 ` Guy
  0 siblings, 1 reply; 40+ messages in thread
From: Brad Campbell @ 2004-04-01  5:01 UTC (permalink / raw)
  To: seth vidal; +Cc: linux-raid

seth vidal wrote:
>>One point. Hardware raid (and faux hardware raid) provides real hot swap with on-the-fly rebuilds.
>>Linux software raid can't (yet).
> 
> 
> umm. That's surprising. My software raid arrays running on dell
> powervault 221s scsi boxes with adaptec 39160 controllers are
> hotswappable and when I put in a new disk and add it to the array it
> resyncs on the fly.
> 
> Are you just talking about ide-based raid?

Yes. ATA and SATA raid (Which is the majority of low budget linux software raid stuff)
I realise scsi is hotswap. It comes with the turf.

I would find it pretty hard to justify the expense of scsi for my 7 drive 1.4TB low speed array (Low 
speed because I never *need* more than about 2MB/s in any direction but I need heaps of space)

Brad

^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: Which raid card to buy for Sarge
  2004-04-01  4:49           ` Brad Campbell
  2004-04-01  4:51             ` seth vidal
@ 2004-04-01  5:29             ` Guy
  2004-04-01  5:54               ` Jeff Garzik
  2004-04-01  5:44             ` Jeff Garzik
  2004-04-16 14:25             ` Nick Maynard
  3 siblings, 1 reply; 40+ messages in thread
From: Guy @ 2004-04-01  5:29 UTC (permalink / raw)
  To: linux-raid

I have hot swapped software RAID disks, and re-synced without a re-boot.
Another time the system had a disk failure and re-synced to the hot spare.
I was there and using the system and never knew until the next day.
So, very little performance impact.

With software RAID (md) you must invoke some commands to do the hot swap.
It's not auto-magic.  Some of the hardware RAID systems I know of don't need
any user input to re-sync.  Just swap the bad disk for a good one.

It would be nice if md could detect a disk being replaced and do everything
needed without user input.
BUT!!  md has a big difference on that point.
md does not mirror disks!
Read the above line again!

md mirrors partitions.  I think it is a major difference.  You must
partition the disk that is replacing the failed disk.  A hardware RAID
system usually (maybe always) works on the whole disk.  Since md only works
on partitions it makes having a hot spare a real pain.  I have 2 disks that
make up "/boot" and "/".  Each is mirrored using these 2 disks.  I also
have 14 disks in a RAID5 array.  I have 1 spare.  The spare is configured
with 1 partition.  It is a spare for the RAID5 array.  It can't spare for
the other 2 disks since they have 2 partitions.
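
The partitioning step itself is quick, though.  A sketch, assuming the
replacement disk shows up as /dev/sdo and a surviving member is /dev/sdb:

  # clone the partition table from a surviving member onto the new disk
  sfdisk -d /dev/sdb | sfdisk /dev/sdo
  # then add the new partition back into the RAID5 array as a spare
  mdadm /dev/md0 --add /dev/sdo1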

Guy




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:57   ` Clemens Schwaighofer
@ 2004-04-01  5:34     ` Jeff Garzik
  2004-04-01  6:39       ` Clemens Schwaighofer
  0 siblings, 1 reply; 40+ messages in thread
From: Jeff Garzik @ 2004-04-01  5:34 UTC (permalink / raw)
  To: Clemens Schwaighofer; +Cc: me, linux-raid

Clemens Schwaighofer wrote:
> The only thing that comes to my mind is hot swapping. If you go with IDE
> drives it is quite possible it is not going to work, which means you
> don't have data loss if a drive fails, but you do have downtime (whether
> this is critical depends on the server).

It works in SATA just fine.

(for Linux, it will work when I release the code...)

	Jeff




^ permalink raw reply	[flat|nested] 40+ messages in thread

* RE: Which raid card to buy for Sarge
  2004-04-01  5:01               ` Brad Campbell
@ 2004-04-01  5:39                 ` Guy
  2004-04-01  5:51                   ` Brad Campbell
  0 siblings, 1 reply; 40+ messages in thread
From: Guy @ 2004-04-01  5:39 UTC (permalink / raw)
  To: 'seth vidal'; +Cc: linux-raid

Your RAID is 2MB/s?  I don't know how you make it so slow!
Today's IDE disks are much faster than that!
I bet something is configured wrong!

My P3-500 system gives me 30-40MB/s on a 14 disk RAID5.
It's SCSI, but my disks are slow compared to today's IDE disks.

Guy




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:49           ` Brad Campbell
  2004-04-01  4:51             ` seth vidal
  2004-04-01  5:29             ` Guy
@ 2004-04-01  5:44             ` Jeff Garzik
  2004-04-01  5:56               ` Brad Campbell
  2004-04-16 14:25             ` Nick Maynard
  3 siblings, 1 reply; 40+ messages in thread
From: Jeff Garzik @ 2004-04-01  5:44 UTC (permalink / raw)
  To: Brad Campbell; +Cc: linux-raid

Brad Campbell wrote:
> Mark Hahn wrote:
> 
>>> If would not use the "raid" feature of the Highpoint cards, because 
>>> it is only software raid and not so performant as a hardware raid. If 
>>> you don't need a 
>>
>>
>>
>> please don't say things like this.  HW raid is *NOT* generally
>> faster or better than software raid.  yes, if you're building a 
>> quad-gigabit fileserver out of an old P5/100 you had sitting around,
>> you're not even going to start looking at sw raid.
> 
> 
> One point. Hardware raid (and faux hardware raid) provides real hot swap 
> with on-the-fly rebuilds.
> Linux software raid can't (yet).
> Both the Promise and Highpoint proprietary ide raid drivers can, but no raw 
> kernel or libata drivers can as yet, and the interface between the 
> hotswap driver and md driver is nowhere near there.

No real need to have much interfacing.  As long as the low level 
hardware and driver support hotswap, md will notice when the drive stops 
responding, or starts spitting out nothing but errors.  Of course, it is 
the nice thing to do, to use mdadm to hot-remove the drive first :)


> With the Highpoint driver I get a drive failure and the card starts 
> beeping. I fire up the management util, remove the faulty drive, swap it 
> out and insert the new drive in the array. The array starts rebuilding - 
> no effect on the uptime and only a slight loss in throughput. Plus it's 
> seamless.

Yeah, md+mdadm can do all this right now, provided the hardware and 
driver support is there...

	Jeff




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  5:39                 ` Guy
@ 2004-04-01  5:51                   ` Brad Campbell
  0 siblings, 0 replies; 40+ messages in thread
From: Brad Campbell @ 2004-04-01  5:51 UTC (permalink / raw)
  To: Guy; +Cc: linux-raid

Guy wrote:
> Your RAID is 2MB/s?  I don't know how you make it so slow!
> Today's IDE disks are much faster than that!
> I bet something is configured wrong!
> 
> My P3-500 system gives me 30-40MB/s on a 14 disk RAID5.
> It's SCSI, but my disks are slow compared to today's IDE disks.
> 

No, it's about 90MB/s read and 15-20MB/s write.
I said I *need* only 2MB/s.

Brad

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  5:29             ` Guy
@ 2004-04-01  5:54               ` Jeff Garzik
  0 siblings, 0 replies; 40+ messages in thread
From: Jeff Garzik @ 2004-04-01  5:54 UTC (permalink / raw)
  To: Guy; +Cc: linux-raid

Guy wrote:
> With software RAID (md) you must invoke some commands to do the hot swap.
> It's not auto-magic.  Some of the hardware RAID systems I know of don't need
> any user input to re-sync.  Just swap the bad disk for a good one.
> 
> It would be nice if md could detect a disk being replaced and do everything
> needed without user input.

Yes, agreed.  This sort of communication would be the [simple] interface 
alluded to in other messages.



> BUT!!  md has a big difference on that point.
> md does not mirror disks!
> Read the above line again!
> 
> md mirrors partitions.  I think it is a major difference.  You must

No, that's not the major difference.  You are very close, though:

The difference is that md manipulates anonymous block devices.  The 
block devices can be whole disks, partitioned disks, un-partition-able 
media (nbd or ramdisk), whatever.  As long as it's a block device, md 
can handle it.

So, functioning at the Linux block device level as it does, md is much 
more abstract and generic than hardware RAID, or "controller-focused 
software RAID" (i.e. Adaptec host raid, DDF, Promise pdcraid, hptraid, 
Silicon Image Medley RAID, ...)
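
A contrived sketch to illustrate the point (device names invented): the same 
mdadm command line will happily build a three-way mirror out of a whole disk, 
a partition and a loopback file, because each of them is just a block device:

  losetup /dev/loop0 /srv/raid-member.img
  mdadm --create /dev/md1 --level=1 --raid-devices=3 /dev/sdb /dev/hdc1 /dev/loop0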

	Jeff




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  5:44             ` Jeff Garzik
@ 2004-04-01  5:56               ` Brad Campbell
  2004-04-01  7:49                 ` Sandro Dentella
  0 siblings, 1 reply; 40+ messages in thread
From: Brad Campbell @ 2004-04-01  5:56 UTC (permalink / raw)
  To: Jeff Garzik; +Cc: linux-raid

Jeff Garzik wrote:

>> With the Highpoint driver I get a drive failure and the card starts 
>> beeping. I fire up the management util, remove the faulty drive, swap 
>> it out and insert the new drive in the array. The array starts 
>> rebuilding - no effect on the uptime and only a slight loss in 
>> throughput. Plus it's seamless.
> 
> 
> Yeah, md+mdadm can do all this right now, provided the hardware and 
> driver support is there...
> 

Yes, my point however is for low budget stuff with software raid the driver support is not yet there 
in a vanilla kernel.
I can't just whack a sata drive off one of my promise SATA150-TX4 controllers, pop another one in 
and have the kernel rescan the partition table and realise a new drive was present. (YET!)

My entire point was that for the types of controllers that were being discussed (Highpoint was 
particularly mentioned, but you can really interchange any multi-port ATA/SATA controller for this) 
that the kernel support was not there *yet* but hardware raid or faux hardware raid will buy you 
this support *now*.

Regards,
Brad

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:24 ` me
  2004-04-01  4:57   ` Clemens Schwaighofer
@ 2004-04-01  5:58   ` Jeff Garzik
  2004-04-01 12:52     ` me
  1 sibling, 1 reply; 40+ messages in thread
From: Jeff Garzik @ 2004-04-01  5:58 UTC (permalink / raw)
  To: me; +Cc: linux-raid

me@heyjay.com wrote:
> I'd like to thank everyone who responded, and I didn't mean to start a
> software vs hardware war.  But now that I did I'm sort of happy, because
> it's made me question if my direction is proper for my goals.

Not your fault, it's inevitable ;-)


> - I'm using a (fairly) old machine, I have to figure processor speed has to
> be an issue with software raid
> - When the box is being stressed most of the work is computational, the
> process I run puts the CPU% at 98% (according to top).

98% cpu doing just raid 1?  That sounds highly strange, even on an older 
CPU.

Typically RAID1 doesn't stress the cpu as much as PCI bus bandwidth and 
the drives...

	Jeff




^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  5:34     ` Jeff Garzik
@ 2004-04-01  6:39       ` Clemens Schwaighofer
  0 siblings, 0 replies; 40+ messages in thread
From: Clemens Schwaighofer @ 2004-04-01  6:39 UTC (permalink / raw)
  To: Jeff Garzik; +Cc: me, linux-raid


Jeff Garzik wrote:
| Clemens Schwaighofer wrote:
|
|> The only thing that comes to my mind is hot swapping. If you go with IDE
|> drives it is highlu possible it is not going to work. which means, you
|> don't have data loss if a drive fails, but downtime (depends on the
|> server if this is critical).

| It works in SATA just fine.
|
| (for Linux, it will work when I release the code...)

Well, I was not 100% clear, I am sorry. SATA of course can do hotswap
(it depends on the chipset though). I was more referring to the old IDE
(simple ATA drives).

-- 
Clemens Schwaighofer - IT Engineer & System Administration
==========================================================
TEQUILA\Japan, 6-17-2 Ginza Chuo-ku, Tokyo 104-8167, JAPAN
Tel: +81-(0)3-3545-7703            Fax: +81-(0)3-3545-7343
http://www.tequila.co.jp
==========================================================

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  5:56               ` Brad Campbell
@ 2004-04-01  7:49                 ` Sandro Dentella
  2004-04-01  8:03                   ` Brad Campbell
  2004-04-01  8:07                   ` Jeff Garzik
  0 siblings, 2 replies; 40+ messages in thread
From: Sandro Dentella @ 2004-04-01  7:49 UTC (permalink / raw)
  To: linux-raid

On Thu, Apr 01, 2004 at 09:56:40AM +0400, Brad Campbell wrote:
> Jeff Garzik wrote:
> 
> > >it out and insert the new drive in the array. The array starts 
> > >rebuilding - no effect on the uptime and only a slight loss in 
> > >throughput. Plus it's seamless.
> >
> >Yeah, md+mdadm can do all this right now, provided the hardware and 
> >driver support is there...
> >
> 
> Yes, my point however is for low budget stuff with software raid the driver 
> support is not yet there in a vanilla kernel.
> I can't just whack a sata drive off one of my promise SATA150-TX4 
> controllers, pop another one in and have the kernel rescan the partition 
> table and realise a new drive was present. (YET!)

I'm sort of confused... which are the combinations that allow me to hotswap a
disk w/ software raid. I don't mind doing some mdadm operations, I'm just
interested in how I can avoid rebooting. 

I thought I couldn't, now I learn you can "provided the hardware and driver
support is there"... can you detail a little more?

thanks
sandro
*:-)


-- 
Sandro Dentella  *:-)
e-mail: sandro.dentella@tin.it 
http://www.tksql.org                    TkSQL Home page - My GPL work

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  7:49                 ` Sandro Dentella
@ 2004-04-01  8:03                   ` Brad Campbell
  2004-04-01  8:07                   ` Jeff Garzik
  1 sibling, 0 replies; 40+ messages in thread
From: Brad Campbell @ 2004-04-01  8:03 UTC (permalink / raw)
  To: Sandro Dentella; +Cc: linux-raid

Sandro Dentella wrote:
> On Thu, Apr 01, 2004 at 09:56:40AM +0400, Brad Campbell wrote:
> 
>>Jeff Garzik wrote:
>>
>>
>>>>it out and insert the new drive in the array. The array starts 
>>>>rebuilding - no effect on the uptime and only a slight loss in 
>>>>throughput. Plus it's seamless.
>>>
>>>Yeah, md+mdadm can do all this right now, provided the hardware and 
>>>driver support is there...
>>>
>>
>>Yes, my point however is for low budget stuff with software raid the driver 
>>support is not yet there in a vanilla kernel.
>>I can't just whack a sata drive off one of my promise SATA150-TX4 
>>controllers, pop another one in and have the kernel rescan the partition 
>>table and realise a new drive was present. (YET!)
> 
> 
> I'm sort of confused... which are the combinations that allow me to hotswap a
> disk w/ software raid. I don't mind doing some mdadm operations, I'm just
> interested in how I can avoid rebooting. 
> 
> I thought I couldn't, now I learn you can "provided the hardware and driver
> support is there"... can you detail a little more?

Yup. Currently, SCSI has the hardware and driver support.
SATA has the hardware support, but not the driver support (Yet).


Regards,
Brad

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  7:49                 ` Sandro Dentella
  2004-04-01  8:03                   ` Brad Campbell
@ 2004-04-01  8:07                   ` Jeff Garzik
  1 sibling, 0 replies; 40+ messages in thread
From: Jeff Garzik @ 2004-04-01  8:07 UTC (permalink / raw)
  To: Sandro Dentella; +Cc: linux-raid

Sandro Dentella wrote:
> On Thu, Apr 01, 2004 at 09:56:40AM +0400, Brad Campbell wrote:
> 
>>Jeff Garzik wrote:
>>
>>
>>>>it out and insert the new drive in the array. The array starts 
>>>>rebuilding - no effect on the uptime and only a slight loss in 
>>>>throughput. Plus it's seamless.
>>>
>>>Yeah, md+mdadm can do all this right now, provided the hardware and 
>>>driver support is there...
>>>
>>
>>Yes, my point however is for low budget stuff with software raid the driver 
>>support is not yet there in a vanilla kernel.
>>I can't just whack a sata drive off one of my promise SATA150-TX4 
>>controllers, pop another one in and have the kernel rescan the partition 
>>table and realise a new drive was present. (YET!)
> 
> 
> I'm sort of confused... which are the combinations that allow me to hotswap a
> disk w/ software raid. I don't mind doing some mdadm operations, I'm just
> interested in how I can avoid rebooting. 
> 
> I thought I couldn't, now I learn you can "provided the hardware and driver
> support is there"... can you detail a little more?

AFAICS, the steps should be:

1) (optional) be nice, and tell md you are about to yank a drive using 
hot-remove
2) (required on ICH5, optional on others) be nice, and tell SATA you are 
about to yank a drive (not implemented yet in libata)
3) unplug the SATA cable
4) swap out drives
5) plug in the SATA cable (kernel automatically notices the new device; 
not implemented yet in libata)
6) new device is probed by kernel SATA driver
7) kernel executes /sbin/hotplug (normal for any hotplug event)
8a) /sbin/hotplug magic issues the md ioctls to hot-add a new device. 
This requires some knowledge in code, or in a config file, of how to 
associate a new device on a random controller with a specific array.
	or
8b) sysadmin uses mdadm to hot-add the new device, to the specified array.
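
For 8a, the association could be as simple as an mdadm.conf-style file (a
sketch; the UUID placeholder would be whatever "mdadm --examine --scan"
reports for the array):

  DEVICE /dev/sd* /dev/hd*
  ARRAY /dev/md0 UUID=<array-uuid>

and 8b is just the usual manage-mode command, e.g.:

  mdadm /dev/md0 --add /dev/sdc1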



^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-03-31 20:19           ` Richard Scobie
@ 2004-04-01  9:07             ` KELEMEN Peter
  2004-04-04  4:47               ` Richard Scobie
  0 siblings, 1 reply; 40+ messages in thread
From: KELEMEN Peter @ 2004-04-01  9:07 UTC (permalink / raw)
  To: linux-raid

* Richard Scobie (richard@sauce.co.nz) [20040401 08:19]:

> 3ware hardware RAID 10: Reads - 92.8MB/s    Writes - 88.9MB/s
> Software RAID 10:  Reads - 53.7MB/s   Writes - 94.9 MB/s

> If anyone is running similar hardware and is able to get better
> software RAID performance, I would be very interested to hear
> the parameters.

I've tested some configurations with 128 GiB streams (iozone) on a
dual Xeon 2.4GHz 2G RAM machine (running XFS).

3ware HW-RAID1, Linux SW-RAID0:	read 263 MiB/s, write 157 MiB/s
3ware HW-RAID5, Linux SW-RAID0:	read 243 MiB/s, write 135 MiB/s
Linux SW-RAID5:			read 225 MiB/s, write 135 MiB/s
3ware HW-RAID10:		read 119 MiB/s, write  91 MiB/s

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  5:58   ` Jeff Garzik
@ 2004-04-01 12:52     ` me
  2004-04-01 19:31       ` Terrence Martin
  0 siblings, 1 reply; 40+ messages in thread
From: me @ 2004-04-01 12:52 UTC (permalink / raw)
  To: Jeff Garzik; +Cc: linux-raid


Jeff Garzik wrote:


>
> 98% cpu doing just raid 1?  That sounds highly strange, even on an older
> CPU.
>
> Typically RAID1 doesn't stress the cpu as much as PCI bus bandwidth and
> the drives...
>

Sorry, I was unclear.  Currently (without raid) my process maxes out my cpu.
If I move to raid won't I have performance problems?  Maybe all the work
happens at the PCI bus

Jay


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01 12:52     ` me
@ 2004-04-01 19:31       ` Terrence Martin
  2004-04-02  4:46         ` me
  0 siblings, 1 reply; 40+ messages in thread
From: Terrence Martin @ 2004-04-01 19:31 UTC (permalink / raw)
  To: me, linux-raid

me@heyjay.com wrote:

>Sorry, I was unclear.  Currently (without raid) my process maxes out my cpu.
>If I move to raid won't I have performance problems?  Maybe all the work
>happens at the PCI bus
>
>Jay

AFAIK there are no additional calculations on RAID1, so there is little or no 
CPU cost; as long as you make sure your drives are not in an IDE master/slave 
relationship you will get performance equivalent to a single drive 
system, but with redundancy.

This assumes that the RAID array is not recovering at the time of course 
or doing an integrity check (after an unclean shutdown).

In general though I would not expect Linux software RAID1  to have any 
additional CPU cost over a single drive.

As an aside, 99% CPU utilization on its own is perhaps not a good measure 
of your system's load or capacity.  You should also look at how much IO 
your process produces and at the total load on the system (how many 
processes are waiting to execute).  Most processes that do any significant 
IO are bound by that, not by the CPU.  If your process does little or no 
IO, I would not expect any RAID config to have any impact at all, even 
RAID5.

See vmstat(8) and uptime(1).
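
For example, something along these lines gives a rough picture (just a
sketch; the exact column names depend on your procps version, and iostat
is only there if the sysstat package is installed):

  vmstat 5       # watch 'b' (blocked processes) and the cpu idle/wait columns
  uptime         # the load averages count processes running or waiting on IO
  iostat -x 5    # per-disk utilisation figures

If most of the time is spent waiting on IO rather than in user/system CPU,
the process is IO-bound and the RAID CPU overhead question is largely moot.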

Terrence

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01 19:31       ` Terrence Martin
@ 2004-04-02  4:46         ` me
  0 siblings, 0 replies; 40+ messages in thread
From: me @ 2004-04-02  4:46 UTC (permalink / raw)
  To: Terrence Martin, linux-raid

I think I'll read the how-tos and see if I can get the software RAID up
and running.
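
From the how-tos, the basic RAID1 case looks like it boils down to
something like this (just my reading so far, not tested; /dev/hda1 and
/dev/hdc1 stand in for the two partitions, both set to type fd, and
/mnt/data is wherever it ends up mounted):

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
  cat /proc/mdstat                   # watch the initial mirror sync
  mkfs.ext3 /dev/md0
  mount /dev/md0 /mnt/data

The how-tos also describe the older raidtools route (/etc/raidtab and
mkraid), but mdadm looks like the simpler way to go.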

thanks
Jay
----- Original Message ----- 
From: "Terrence Martin" <tmartin@physics.ucsd.edu>
To: <me@heyjay.com>; <linux-raid@vger.kernel.org>
Sent: Thursday, April 01, 2004 1:31 PM
Subject: Re: Which raid card to buy for Sarge


> me@heyjay.com wrote:
>
> >----- Original Message ----- 
> >From: "Jeff Garzik" <jgarzik@pobox.com>
> >To: <me@heyjay.com>
> >Cc: <linux-raid@vger.kernel.org>
> >Sent: Wednesday, March 31, 2004 11:58 PM
> >Subject: Re: Which raid card to buy for Sarge
> >
> >
> >
> >
> >>98% cpu doing just raid 1?  That sounds highly strange, even on an older
> >>CPU.
> >>
> >>Typically RAID1 doesn't stress the cpu as much as PCI bus bandwidth and
> >>the drives...
> >>
> >>
> >>
> >
> >Sorry, I was unclear.  Currently (without raid) my process maxes out my
cpu.
> >If I move to raid won't I have performance problems?  Maybe all the work
> >happens at the PCI bus
> >
> >Jay
> >
> >-
> >To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> >the body of a message to majordomo@vger.kernel.org
> >More majordomo info at  http://vger.kernel.org/majordomo-info.html
> >
> >
> >
>
> AFAIK there is no additional calculations on RAID1 so little or no CPU,
> as long as you make sure your drivers are not in an IDE master/slave
> relationship you will get equivalent performance to a single drive
> system, but with redundancy.
>
> This assumes that the RAID array is not recovering at the time of course
> or doing an integrity check (after an unclean shutdown).
>
> In general though I would not expect Linux software RAID1  to have any
> additional CPU cost over a single drive.
>
> As an aside alone the 99% CPU utilization is perhaps not a good measure
> of your system load or capacity. You should also look at how much IO
> your process produces and also the total load on the system (how much
> processes are waiting to execute).  Most processes if they do any
> significant IO are bound by that, not the CPU. If your process does
> little or no IO I would not expect any RAID config to have any impact at
> all, even RAID5.
>
> see vmstat(8) and uptime(1)
>
> Terrence
>
>


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  9:07             ` KELEMEN Peter
@ 2004-04-04  4:47               ` Richard Scobie
  2004-04-09 11:16                 ` KELEMEN Peter
  0 siblings, 1 reply; 40+ messages in thread
From: Richard Scobie @ 2004-04-04  4:47 UTC (permalink / raw)
  To: linux-raid

KELEMEN Peter wrote:
> * Richard Scobie (richard@sauce.co.nz) [20040401 08:19]:
> 
> 
>>3ware hardware RAID 10: Reads - 92.8MB/s    Writes - 88.9MB/s
>>Software RAID 10:  Reads - 53.7MB/s   Writes - 94.9 MB/s
> 
> 
>>If anyone is running similar hardware and is able to get better
>>software RAID performance, I would be very interested to hear
>>the parameters.
> 
> 
> I've tested some configurations with 128 GiB streams (iozone) on a
> dual Xeon 2.4GHz 2G RAM machine (running XFS).
> 
> 3ware HW-RAID1, Linux SW-RAID0:	read 263 MiB/s, write 157 MiB/s
> 3ware HW-RAID5, Linux SW-RAID0:	read 243 MiB/s, write 135 MiB/s
> Linux SW-RAID5:			read 225 MiB/s, write 135 MiB/s
> 3ware HW-RAID10:		read 119 MiB/s, write  91 MiB/s
> 
> Peter
> 


Thanks Peter,

This array is obviously larger than the 4-disc one I have.  Unfortunately, 
you do not have the one result I would really like - SW-RAID10.

It would be interesting to see how it compares to 3ware HW-RAID10.

Regards,

Richard Scobie


^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-04  4:47               ` Richard Scobie
@ 2004-04-09 11:16                 ` KELEMEN Peter
  2004-04-09 13:07                   ` Which raid card to buy for SargeD Yu Chen
  0 siblings, 1 reply; 40+ messages in thread
From: KELEMEN Peter @ 2004-04-09 11:16 UTC (permalink / raw)
  To: linux-raid

* Richard Scobie (richard@sauce.co.nz) [20040404 16:47]:

> KELEMEN Peter wrote:
> > * Richard Scobie (richard@sauce.co.nz) [20040401 08:19]:
> > >3ware hardware RAID 10: Reads - 92.8MB/s    Writes - 88.9MB/s
> > >Software RAID 10:  Reads - 53.7MB/s   Writes - 94.9 MB/s

> > 3ware HW-RAID1, Linux SW-RAID0:	read 263 MiB/s, write 157 MiB/s
> > 3ware HW-RAID5, Linux SW-RAID0:	read 243 MiB/s, write 135 MiB/s
> > Linux SW-RAID5:			read 225 MiB/s, write 135 MiB/s
> > 3ware HW-RAID10:		read 119 MiB/s, write  91 MiB/s

> This array is obviously larger than the 4 disc one I have. Unfortunately 
> you do not have the one result I would really like - SW-RAID10.
> It would be interesting to see how it compares to 3ware HW-RAID10.

Well, I did those tests as well for the sake of interest.
Linux SW-RAID10:		read 221 MiB/s, write 105 MiB/s

Peter

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for SargeD
  2004-04-09 11:16                 ` KELEMEN Peter
@ 2004-04-09 13:07                   ` Yu Chen
  2004-04-09 21:52                     ` KELEMEN Peter
  0 siblings, 1 reply; 40+ messages in thread
From: Yu Chen @ 2004-04-09 13:07 UTC (permalink / raw)
  To: KELEMEN Peter; +Cc: linux-raid

Sorry, this might be a silly question since I am new to RAID, but how do I
run those tests?  I would like to check out our RAID.

Thanks a lot in advance!

Jeff
On Fri, 9 Apr 2004, KELEMEN Peter wrote:

> * Richard Scobie (richard@sauce.co.nz) [20040404 16:47]:
>
> > KELEMEN Peter wrote:
> > > * Richard Scobie (richard@sauce.co.nz) [20040401 08:19]:
> > > >3ware hardware RAID 10: Reads - 92.8MB/s    Writes - 88.9MB/s
> > > >Software RAID 10:  Reads - 53.7MB/s   Writes - 94.9 MB/s
>
> > > 3ware HW-RAID1, Linux SW-RAID0:	read 263 MiB/s, write 157 MiB/s
> > > 3ware HW-RAID5, Linux SW-RAID0:	read 243 MiB/s, write 135 MiB/s
> > > Linux SW-RAID5:			read 225 MiB/s, write 135 MiB/s
> > > 3ware HW-RAID10:		read 119 MiB/s, write  91 MiB/s
>
> > This array is obviously larger than the 4 disc one I have. Unfortunately
> > you do not have the one result I would really like - SW-RAID10.
> > It would be interesting to see how it compares to 3ware HW-RAID10.
>
> Well, I did those tests as well for the sake of interest.
> Linux SW-RAID10:		read 221 MiB/s, write 105 MiB/s
>
> Peter
>
> --
>     .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
>  Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
> .+'         `+...+'         `+...+'         `+...+'         `+...+'
> -
> To unsubscribe from this list: send the line "unsubscribe linux-raid" in
> the body of a message to majordomo@vger.kernel.org
> More majordomo info at  http://vger.kernel.org/majordomo-info.html
>
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for SargeD
  2004-04-09 13:07                   ` Which raid card to buy for SargeD Yu Chen
@ 2004-04-09 21:52                     ` KELEMEN Peter
  0 siblings, 0 replies; 40+ messages in thread
From: KELEMEN Peter @ 2004-04-09 21:52 UTC (permalink / raw)
  To: linux-raid

* Yu Chen (chen@hhmi.umbc.edu) [20040409 09:07]:

> Sorry, this might be a silly question, since I am new to RAID,
> how do I do those tests, I would like to check out our RAID.

Grab iozone¹ and run a throughput test:
iozone -Mce -t1 -i0 -i1 -s128g -r256k -f /path/to/raid/IOZONE
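
If you just want a rough sanity check before setting up a full iozone run,
plain dd and hdparm give ballpark sequential figures (a quick sketch;
/path/to/raid and /dev/md0 are placeholders, and the test file should be
larger than RAM so the page cache doesn't flatter the numbers):

  time sh -c 'dd if=/dev/zero of=/path/to/raid/ddtest bs=1M count=8192; sync'
  time dd if=/path/to/raid/ddtest of=/dev/null bs=1M
  hdparm -tT /dev/md0

The first line measures a sustained write, the second a sequential read
(best done after a remount so the file is not served from cache), and
hdparm gives the raw device read rate.  The iozone numbers are still the
ones worth comparing; dd only exercises sequential access.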

Peter

¹ http://www.iozone.org/

-- 
    .+'''+.         .+'''+.         .+'''+.         .+'''+.         .+''
 Kelemen Péter     /       \       /       \     Peter.Kelemen@cern.ch
.+'         `+...+'         `+...+'         `+...+'         `+...+'
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-01  4:49           ` Brad Campbell
                               ` (2 preceding siblings ...)
  2004-04-01  5:44             ` Jeff Garzik
@ 2004-04-16 14:25             ` Nick Maynard
  2004-04-16 15:09               ` Måns Rullgård
  3 siblings, 1 reply; 40+ messages in thread
From: Nick Maynard @ 2004-04-16 14:25 UTC (permalink / raw)
  To: linux-raid

> I used a pair of Highpoint Rocketraid 1540 cards with the Highpoint driver
> (as it presented all 8
> drives as 8 units on a scsi chain) with linux md raid-5, and I have now moved
> onto 3 Promise SATA150-TX4 units in an md raid-5.
This is probably slightly off-topic, but has anyone heard of successful use of
the Rocket 1540 (non-RAID, HPT374-based) card on Linux 2.6.x yet?
Highpoint provides binary drivers for Red Hat and SuSE, but nothing for other
distros and no source - does anyone have any idea where to get hold of
functional drivers?

Cheers all,

--

Nick Maynard
nick.maynard@alumni.doc.ic.ac.uk

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-16 14:25             ` Nick Maynard
@ 2004-04-16 15:09               ` Måns Rullgård
  2004-04-16 16:29                 ` Nick Maynard
  0 siblings, 1 reply; 40+ messages in thread
From: Måns Rullgård @ 2004-04-16 15:09 UTC (permalink / raw)
  To: linux-raid

Nick Maynard <nick.maynard@alumni.doc.ic.ac.uk> writes:

>> I used a pair of Highpoint Rocketraid 1540 cards with the Highpoint
>> driver (as it presented all 8 drives as 8 units on a scsi chain)
>> with linux md raid-5, and I have now moved onto 3 Promise
>> SATA150-TX4 units in an md raid-5.
> This is probably slightly off-topic, but has anyone heard of
> successful use of the Rocket 1540 (non-RAID, HPT374 based) card on
> Linux 2.6.x yet?

I've been running four disks off a RocketRAID 1540 SATA card with Linux
software RAID since kernel 2.6.0.  The driver included with the kernel
works just fine.  Any card using the HPT374 chip should work, whatever
the name on the box happens to be.

> Highpoint provides binary drivers for Redhat and SuSE, but no other
> distros and no source - anyone any idea where to get hold of
> functional drivers?

Perfectly good drivers are included with the kernel.

-- 
Måns Rullgård
mru@kth.se

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-16 15:09               ` Måns Rullgård
@ 2004-04-16 16:29                 ` Nick Maynard
  2004-04-16 17:44                   ` Måns Rullgård
  2004-04-16 22:34                   ` jlewis
  0 siblings, 2 replies; 40+ messages in thread
From: Nick Maynard @ 2004-04-16 16:29 UTC (permalink / raw)
  To: Måns Rullgård; +Cc: linux-raid

> I'm running four disks off a RocketRAID 1540 SATA card with Linux
> software RAID since kernel 2.6.0.  The driver included with the kernel
> works just fine.  Any card using the hpt374 chip should work, whatever
> the name on the box happens to be.
Yeah.  Note that I have a Rocket 1540, not a RocketRAID 1540.  There's a
difference, shown in particular by the fact that Highpoint releases open
drivers for the RocketRAID but not for the Rocket.
"Should work" doesn't necessarily mean "does work", unfortunately.  My
reasoning was exactly the same as yours, except it's bitten me where it
hurts...

> Perfectly good drivers are included with the kernel.
And this is where I got bitten.  The standard kernel driver (HPT374 is
supported by hpt366.o) in 2.6.5 (and 2.4.something) locks up on boot, during
the probe stage I think.  Are you using any other module?

Cheers,

--

Nick Maynard
nick@tastycake.net

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-16 16:29                 ` Nick Maynard
@ 2004-04-16 17:44                   ` Måns Rullgård
  2004-04-16 22:34                   ` jlewis
  1 sibling, 0 replies; 40+ messages in thread
From: Måns Rullgård @ 2004-04-16 17:44 UTC (permalink / raw)
  To: Nick Maynard; +Cc: linux-raid

Nick Maynard <nick@tastycake.net> writes:

>> I'm running four disks off a RocketRAID 1540 SATA card with Linux
>> software RAID since kernel 2.6.0.  The driver included with the kernel
>> works just fine.  Any card using the hpt374 chip should work, whatever
>> the name on the box happens to be.
> Yea.  You should note that I have a Rocket 1540, not a RocketRAID
> 1540.  There's a difference

More than the box and the BIOS?

> - shown particularly by the fact that Highpoint release open drivers
> for the RocketRAID and not the Rocket.

I wouldn't use those drivers, having looked briefly at the source code
they do release.  It wasn't pretty.

> Should doesn't necessarily mean does, unfortunately.

Could the onboard bios be messing with you?  Can it be disabled?  I'm
using the card in an Alpha machine that pretty much ignores the bios
extensions.

> My reasoning was exactly the same as yours, except it's bitten me
> where it hurts...
>
>> Perfectly good drivers are included with the kernel.
> And this is where I got bitten.  The standard kernel drivers (HPT374
> is supported by hpt366.o) in 2.6.5 (and 2.4.something) lock up on
> boot, during the probe bit I think.  Are you using any other module?

I'm using the hpt366 driver.
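
If it helps to compare notes, a quick way to check what actually claims
the card (just the obvious checks, assuming the usual tools are around):

  lspci | grep -i highpoint    # the HPT374 shows up as a HighPoint IDE controller
  dmesg | grep -i hpt          # how far the hpt366 probe gets
  lsmod | grep hpt366          # present only if the driver was built as a module

Here the probe completes and the disks appear as ordinary IDE devices.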

-- 
Måns Rullgård
mru@kth.se
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: Which raid card to buy for Sarge
  2004-04-16 16:29                 ` Nick Maynard
  2004-04-16 17:44                   ` Måns Rullgård
@ 2004-04-16 22:34                   ` jlewis
  1 sibling, 0 replies; 40+ messages in thread
From: jlewis @ 2004-04-16 22:34 UTC (permalink / raw)
  To: Nick Maynard; +Cc: linux-raid

On Fri, 16 Apr 2004, Nick Maynard wrote:

> > Perfectly good drivers are included with the kernel.
> And this is where I got bitten.  The standard kernel drivers (HPT374 is
> supported by hpt366.o) in 2.6.5 (and 2.4.something) lock up on boot,
> during the probe bit I think.  Are you using any other module?

Sounds like the same problem I had with a Rocket100 (not RAID), which uses
the HPT370A chip.  I'm hoping the hpt366 driver will eventually be fixed to
support this chip (if it hasn't been already).

----------------------------------------------------------------------
 Jon Lewis *jlewis@lewis.org*|  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread

Thread overview: 40+ messages
2004-03-31 15:47 Which raid card to buy for Sarge me
2004-03-31 16:15 ` Luca Berra
2004-03-31 16:43   ` me
2004-03-31 16:58     ` Måns Rullgård
2004-03-31 18:03       ` Ralph Paßgang
2004-03-31 19:50         ` Mark Hahn
2004-03-31 20:19           ` Richard Scobie
2004-04-01  9:07             ` KELEMEN Peter
2004-04-04  4:47               ` Richard Scobie
2004-04-09 11:16                 ` KELEMEN Peter
2004-04-09 13:07                   ` Which raid card to buy for SargeD Yu Chen
2004-04-09 21:52                     ` KELEMEN Peter
2004-03-31 20:42           ` Which raid card to buy for Sarge Ralph Paßgang
2004-03-31 20:59             ` Jeff Garzik
2004-03-31 21:39             ` Guy
2004-04-01  4:49           ` Brad Campbell
2004-04-01  4:51             ` seth vidal
2004-04-01  5:01               ` Brad Campbell
2004-04-01  5:39                 ` Guy
2004-04-01  5:51                   ` Brad Campbell
2004-04-01  5:29             ` Guy
2004-04-01  5:54               ` Jeff Garzik
2004-04-01  5:44             ` Jeff Garzik
2004-04-01  5:56               ` Brad Campbell
2004-04-01  7:49                 ` Sandro Dentella
2004-04-01  8:03                   ` Brad Campbell
2004-04-01  8:07                   ` Jeff Garzik
2004-04-16 14:25             ` Nick Maynard
2004-04-16 15:09               ` Måns Rullgård
2004-04-16 16:29                 ` Nick Maynard
2004-04-16 17:44                   ` Måns Rullgård
2004-04-16 22:34                   ` jlewis
2004-04-01  4:24 ` me
2004-04-01  4:57   ` Clemens Schwaighofer
2004-04-01  5:34     ` Jeff Garzik
2004-04-01  6:39       ` Clemens Schwaighofer
2004-04-01  5:58   ` Jeff Garzik
2004-04-01 12:52     ` me
2004-04-01 19:31       ` Terrence Martin
2004-04-02  4:46         ` me
