* I'm about ready to do SW-Raid5 - pointers needed
From: berk walker @ 2003-10-27 23:42 UTC
  To: linux-raid

The purpose of my going to raid is to prevent data loss, short of a 
total meltdown/fire, etc.  If my house and business burn, I'm hosed 
anyway.

I am buying 4 Maxtor 40 GB/200 MB Ultra 133 drives, and another Promise 
board, to finally do swraid5 (after reading this list for a few months, 
failure handling seems pretty scary).

is there an advantage to >more< than 1 spare drive? .. more than 3 
drives in mdx?  why not cp old boot/root/whatever drive to mdx after 
booting on floppy?

is there an advantage to having various mdx's allocated to various 
directories? i.e. /home, /var, etc.

looking for meaningful help pls. not flamage.

b-


* Re: I'm about ready to do SW-Raid5 - pointers needed
From: David Anderson @ 2003-10-28  0:37 UTC
  To: linux-raid; +Cc: berk walker

Hi there.

First of all, as a satisfied swraid5 user for some time, I would like to 
point out something: data loss is not just about the redundancy of a raid 
array. If you bought your disks from a faulty batch and they all die, 
you're still hosed. Raid is absolutely no excuse for not backing up data.

Now, on to the questions ;)

> is there an advantage to >more< than 1 spare drive?

If several disks die _in a row_, i.e. one dies, a spare is synced, a 
second dies after the sync. A fairly rare occurrence, I reckon (in my 
experience, multiple failures come in flocks, not in rows). And if it 
does happen, you'll have a little degraded-mode time while you add a 
clean disk to the array. No big deal _if_ you have an efficient backup 
routine.
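
With mdadm, that disk swap is roughly the following (a sketch; device 
names are placeholders, and 2.4-era setups may use raidtools' 
raidhotremove/raidhotadd instead):

  # Mark the dying disk failed (if md hasn't already), pull it out,
  # then add the replacement; the array resyncs onto it:
  mdadm /dev/md0 --fail /dev/hde1 --remove /dev/hde1
  mdadm /dev/md0 --add /dev/hde1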

> .. more than 3 drives in mdx?

Depends on how many you have, and how much failure you want your array 
to tolerate. The more disks you add, the less tolerant to failure your 
array becomes (more active disks means more potential failure points, 
but the threshold of 1 failed disk remains). But you "lose" less space 
to parity.
In short, if you can afford it, stick to 3-disk raid5 with a few spares.
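
For illustration, creating such an array with mdadm might look like this 
(a sketch; the drive names are placeholders):

  # 3-disk RAID5 plus one hot spare (4 partitions in total):
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 \
        /dev/hde1 /dev/hdg1 /dev/hdi1 /dev/hdk1
  cat /proc/mdstat    # watch the initial parity sync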

> why not cp old boot/root/whatever drive to mdx after 
> booting on floppy?

I wasn't aware of anything against that... When I got hold of 
higher-capacity disks I created a new, larger raid5 array and copied the 
old to the new (not a raw copy of course, a copy at the filesystem 
level). But then again, my array was for a special mountpoint 
(/var/data), so I could mount it read-only and copy it. Maybe the 
warnings against copying / were because writes during the copy could 
leave you with an inconsistent copy?
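
Such a filesystem-level copy might look like this (a sketch; mountpoints 
and devices are illustrative, and the source should be idle or read-only 
while you copy):

  mount -o remount,ro /var/data        # freeze the old array's contents
  mount /dev/md1 /mnt/new              # the new, larger array
  rsync -aHx /var/data/ /mnt/new/      # -a preserve attributes, -H hard
                                       # links, -x stay on one filesystem
  mount -o remount,rw /var/data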

> is there an advantage to having various mdx's allocated to various 
> directories? i.e. /home, /var, etc.

I don't see any myself. The only advantage would be less data loss in 
the event of a failure, but since you back up on a regular basis, a 
catastrophic failure shouldn't bring you down for too long anyway.

A final reminder: Have efficient backup routines!! Raid will help you 
prevent disasters, but when a disaster does occur (not if, when), you'll 
need fast recovery with minimal loss.

David Anderson

PS: did I tell you about the importance of backups? ;)



* Re: I'm about ready to do SW-Raid5 - pointers needed
From: maarten van den Berg @ 2003-10-28  1:07 UTC
  To: linux-raid

On Tuesday 28 October 2003 00:42, berk walker wrote:
> The purpose of my going to raid is to prevent data loss, short of a
> total meltdown/fire, etc.  If my house and business burn, I'm hosed
> anyway.
>
> I am buying 4 Maxtor 40 GB/200 MB Ultra 133 drives, and another Promise
> board, to finally do swraid5 (after reading this list for a few months,
> failure handling seems pretty scary).

Having just been there (lots of problems these last few weeks...), but 
also being a longtime user of Linux raid at all levels 0, 1 and 5, I'll 
comment.

> is there an advantage to >more< than 1 spare drive? .. more than 3
> drives in mdx?  why not cp old boot/root/whatever drive to mdx after
> booting on floppy?

I don't know those answers. Let me describe what I've built, and for what 
purposes.  At home I need big storage at low cost; my business needs are 
the opposite.

At home I have a raid5 array of 400GB composed of 7 80GB disks (5+1+1).
Before this week, I had no spare drive. I just suffered a two-disk failure 
and it almost took all my data with it. :(  I now have 3 Promise cards and 
1 spare drive.  Later this week I found out I _still_ had a bad drive 
which hung the whole system when accessed (in a certain area), yet was NOT 
being rejected or marked failed.  It took a lot of searching, and 
eventually running 'badblocks', to find the culprit.  This was really 
rather nasty.
I don't know why the machine locks up instead of the raid layer realising 
what's happening and killing the bad drive off...  But it puzzles and 
irritates me. However, a drive can go dead in so many ways, one never 
knows.
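
For reference, such a hunt boils down to a read-only surface scan of each 
suspect drive (a sketch; the device name is illustrative):

  # -s shows progress, -v is verbose, -o logs bad block numbers to a file:
  badblocks -sv -o /root/hde.bad /dev/hde
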
At home my _system_ is not critical, just the data is. That makes life 
very much simpler; I have an old drive with Linux on it, and the raid 
volume is only mounted on /mnt. So there are no boot dependencies or 
anything.

At work, after experimenting with raid5, I decided raid5 was not worth it 
(for my needs) and I now run raid1 exclusively, mostly over 3 disks, since 
today's hardware is a far cry from what it used to be...  Using raid1 has 
some very nice features which make rollout simpler: for instance, I keep 
one master image in the closet, and when I need a new system I boot from 
that and clone a couple of disks for the system. This would never be 
possible with a raid5 setup.
Also, since cost is not (should not be) a factor, raid1 is just perfect 
here. Needless to say, in this setup the data is not so much important as 
the uptime and/or time-to-recovery. So here, everything is mirrored and on 
separate raid volumes (/, /usr, /var, /home). /boot is not really a raid 
volume but it is mirrored (cloned) to enable a quick recovery.
Also with these setups I experienced nastiness: when a drive fails, it 
very often does not get kicked but remains online. It takes the whole 
machine 'down' (for all intents and purposes) because every read or write 
does numerous dead-slow retries, tying up so many resources that the 
responsiveness of the machine measures in minutes instead of microseconds. 
Maybe this is an IDE issue, maybe it's a RAID issue; I don't know, I'm no 
coder.  I just report what I notice.
In any case, after a reboot and marking the disk failed, all is well 
again.

Maybe the moral of this is: If you have the money, go SCSI.  I'm sure a lot of 
the problems I experienced come from the IDE system. Maybe someone else has 
insights in this regard.

> is there an advantage to having various mdx's allocated to various
> directories? i.e. /home, /var, etc.

Not for md, but for Linux, yes... If you run a multiuser system and you 
don't want your system to _crash_ when someone fills up /home (and, with 
it, /), you should definitely go for separate partitions.

> looking for meaningful help pls. not flamage.

All in all I'm a happy Linux SW raid user since, ehm... back in '99 I 
think (it was around the time glibc came into distros).

I don't know if it suits your needs, but be sure to read the 
Boot+Root+Raid+LILO HOWTO. It might help you. And... good luck!

Maarten

-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: David Anderson @ 2003-10-28  1:21 UTC
  To: linux-raid

maarten van den Berg wrote:
> Not for md, but for Linux, yes... If you run a multiuser system and you 
> don't want your system to _crash_ when someone fills up /home (and, with 
> it, /), you should definitely go for separate partitions.

Or for user quotas...

David Anderson



* Re: I'm about ready to do SW-Raid5 - pointers needed
From: maarten van den Berg @ 2003-10-28  1:35 UTC
  To: linux-raid

On Tuesday 28 October 2003 02:21, David Anderson wrote:
> maarten van den Berg wrote:
> > Not for md, but for Linux, yes... If you run a multiuser system and
> > you don't want your system to _crash_ when someone fills up /home
> > (and, with it, /), you should definitely go for separate partitions.
>
> Or for user quotas...

Uhm, good point...!  :-)  But I wasn't aware user quotas were available 
for many filesystems other than ext2.  Does reiserfs support quotas?

Maarten

> David Anderson

-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: maarten van den Berg @ 2003-10-28  1:55 UTC
  To: linux-raid

On Tuesday 28 October 2003 01:37, David Anderson wrote:
> Hi there.

Hi David,

You made some good remarks but I just wanted to comment on the last paragraph.

> A final reminder: Have efficient backup routines!! Raid will help you
> prevent disasters, but when a disaster does occur (not if, when), you'll
> need fast recovery with minimal loss.

I have a question about this.  With time, it becomes increasingly 
difficult to keep up with the massive amounts of data we all store. Just 
stating 'back up often!' doesn't cut it when disk drives apparently follow 
Moore's Law perfectly but backup solutions do not.  I do not deny the need 
for backups, but how many people have a DLT IV 40/80 at home?  How many of 
you buy those DLT tapes, which are, interestingly enough, MORE expensive 
byte-for-byte than the average hard disk (yes, you read that right)?  I 
have a DDS3 unit at home and a DVD burner. Still, keeping up with my 400GB 
raid array is very much work at best and near impossible at worst. I could 
put it all on 100 DVD+RWs, but keeping track of what has been backed up, 
and where, is just impossible.
The tapes work better, but I do a full tape backup ehm, like, once or 
twice yearly.  (yeah yeah, I know...)
One of the cheapest "backup" media right now is hard disks themselves 
(weird, isn't it?), so it stands to reason some people try to build 
something that is foolproof (at least against hardware failure) using 
just disks.

How do you feel about this?

Note that this has nothing to do with business situations, where the 
necessary DLT drives and/or libraries are simply bought from the budget.

Maarten

> David Anderson
>
> PS: did I tell you about the importance of backups? ;)

Yea. I think you did.   ;-)


-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: David Anderson @ 2003-10-28  1:55 UTC
  Cc: linux-raid

maarten van den Berg wrote:
> Uhm, good point...!  :-)  But I wasn't aware user quotas were available 
> for many filesystems other than ext2.  Does reiserfs support quotas?

Good point also :) I don't know much about reiserfs (waiting for a large 
disk to test it), but I'd reckon that they plan to put quota support in 
a plugin if it is not integrated into the filesystem core. And I can't 
find any reference to reiserfs quotas anywhere, so...

David Anderson



* Re: I'm about ready to do SW-Raid5 - pointers needed
From: David Anderson @ 2003-10-28  1:58 UTC
  To: maarten van den Berg; +Cc: linux-raid

David Anderson wrote:
 > [something silly about reiserfs and quotas]

I stand corrected.
http://www.namesys.com/faq.html#quota

Reiserfs _and_ quotas... Nice :)
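
Enabling them looks roughly like this on a quota-capable kernel (a 
sketch; device, mountpoint and user name are illustrative):

  # /etc/fstab entry turning on user quotas for a reiserfs /home:
  /dev/md2  /home  reiserfs  defaults,usrquota  0  2

  # build the quota files, switch quotas on, then set per-user limits:
  quotacheck -cum /home
  quotaon /home
  edquota -u someuser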

David Anderson



* Re: I'm about ready to do SW-Raid5 - pointers needed
From: rob @ 2003-10-28  3:32 UTC
  To: maarten van den Berg; +Cc: linux-raid

I use rsync and spare, inexpensive computers to back up our servers.

Our business requires 7x24 computer access with minimal downtime when 
the server fails.

I have 3 backup computers:

1st - next to the main computer.
2nd - at the other end of the building.
3rd - at my house.

Hourly, all our data files are rsynced to the 3 backup servers. This 
takes 1-3 min.

Also, twice per hour, our main data files are synced; this takes < 2 min.

Rsync is a great program for backup, as it sends just the parts of each 
file which changed.

In addition we use DDS-4 tape to back up once per day; the tape is taken 
offsite.

So: use rsync over ethernet to back up to a spare computer or two.
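
A sketch of what the hourly job might look like (hostnames and paths are 
made up; assumes ssh access to the backup boxes):

  # /etc/cron.d/backup -- hourly sync of all data to the backup hosts:
  0 * * * *      root  rsync -az --delete /data/ backup1:/backup/data/
  0 * * * *      root  rsync -az --delete /data/ backup2:/backup/data/
  0 * * * *      root  rsync -az --delete /data/ backup3:/backup/data/
  # twice an hour, just the critical files:
  15,45 * * * *  root  rsync -az /data/critical/ backup1:/backup/critical/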

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Luke Rosenthal @ 2003-10-28  5:37 UTC
  To: maarten van den Berg; +Cc: linux-raid

On Tue, 28 Oct 2003, maarten van den Berg wrote:

> One of the cheapest "backup" media right now is hard disks themselves
> (weird, isn't it?), so it stands to reason some people try to build
> something that is foolproof (at least against hardware failure) using
> just disks.

OK, most folks would argue that hard disk failure is a byproduct of them
being run continuously.  Especially in the case of IDE disks, which are
usually cheaper than their SCSI counterparts on a cost-per-MB basis, but
consequently have a much shorter MTBF.

So here's my question: would it be possible to use a very large IDE disk
in a system purely for backups, e.g. one of those new 300GB behemoths,
but with one caveat: leave it "asleep"?  I.e. leave it spun down, and
only wake it once a day at backup time, run the backup, verify it, then
unmount it and put it back to sleep from a cron job.  Feasible?

I know there are appropriate commands to do it, but are they risky?  If
a drive's not mounted, but powered off, it should present no problems and
shouldn't lock up the OS, correct?

> > PS: did I tell you about the importance of backups? ;)
> 
> Yea. I think you did.   ;-)

Everyone's got a horror story.. :)  and most people learn from it :)

Luke.


* Spinning down disks [Re: I'm about ready to do SW-Raid5 - pointers needed]
From: Gordon Henderson @ 2003-10-28  8:20 UTC
  To: linux-raid

On Tue, 28 Oct 2003, Luke Rosenthal wrote:

> I know there are appropriate commands to do it, but are they risky?  If
> a drive's not mounted, but powered off, it should present no problems and
> shouldn't lock up the OS, correct?

Have a look at noflushd. I've been using it for years, with good results, 
in PCs that don't need their disks spinning all the time (e.g. laptops), 
and even recently in my home servers with RAID1 sets.

Gordon

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Hermann Himmelbauer @ 2003-10-28  8:23 UTC
  To: berk walker, linux-raid

On Tuesday 28 October 2003 00:42, berk walker wrote:
> The purpose of my going to raid is to prevent data loss, short of a
> total meltdown/fire, etc.  If my house and business burn, I'm hosed
> anyway.

Maybe you're hosed then, but note that it's not _that_ complicated to 
prevent data loss in case of catastrophic events (fire/theft).

A simple single external (USB/FireWire, now SATA) drive that you store in 
another location can be a huge advantage. You could e.g. buy 2 external 
drives, leave one plugged into your server (copy data onto it every 
night), store the other in a different location, and swap the drives 
every week.

> I am buying 4 Maxtor 40 GB/200 MB Ultra 133 drives, and another Promise
> board, to finally do swraid5 (after reading this list for a few months,
> failure handling seems pretty scary).

I would suggest buying quality drives. Two-disk failures are not very 
common but do occur. Sometimes a single quality drive can be more 
reliable than 4 low-quality drives bundled into a 4-disk RAID5.

I personally like the Western Digital "JB" (Special Edition) series and 
the new, although quite expensive, Raptor series. Moreover, they provide 
excellent performance.

A measure of the quality of a drive may be the warranty: WD gives 3 years 
on the "JB" series and 5 years on the Raptors.

> is there an advantage to >more< than 1 spare drive? .. more than 3
> drives in mdx?  why not cp old boot/root/whatever drive to mdx after
> booting on floppy?

Maybe I don't fully understand your question, but a spare drive normally 
means a "hot spare": if a disk fails, the spare disk gets used 
automatically. So if you have a 3-disk RAID5 plus 2 spares, you can lose 
3 disks (not at the same time!) and still not suffer any data loss.
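
With mdadm, adding a disk to a clean array registers it as exactly such a 
hot spare (a sketch; device names are placeholders):

  # On a healthy array, --add turns the new disk into a hot spare; md
  # rebuilds onto it automatically if a member fails:
  mdadm /dev/md0 --add /dev/hdk1
  mdadm --detail /dev/md0      # lists active, spare and failed devices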

> is there an advantage to having various mdx's allocated to various
> directories? i.e. /home, /var, etc.

It certainly makes sense to create different partitions to prevent data 
loss due to filesystem corruption and the like. I partition my personal 
system like this:

First on "/"
Second on "/var"
Third on "/home"

In some cases you could also add another small partition for "/boot".

Anyway, the more partitions you create, the more space gets lost: let's 
say you want to store a 4GB file, and there is 2GB left on "/", 1GB on 
"/var" and 1.5GB on "/home"...

		Best Regards,
		Hermann

-- 
x1@aon.at
GPG key ID: 299893C7 (on keyservers)
FP: 0124 2584 8809 EF2A DBF9  4902 64B4 D16B 2998 93C7


* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Gordon Henderson @ 2003-10-28  8:26 UTC
  To: berk walker; +Cc: linux-raid

On Mon, 27 Oct 2003, berk walker wrote:

> The purpose of my going to raid is to prevent data loss, short of a
> total meltdown/fire, etc.  If my house and business burn, I'm hosed
> anyway.

Offsite backups...

> I am buying 4 Maxtor 40 GB/200 MB Ultra 133 drives, and another Promise
> board, to finally do swraid5 (after reading this list for a few months,
> failure handling seems pretty scary).

Good luck with your promise board (what type is it?) I've had a lot of
problems with them (kernels 2.4.20-22) They seem to work, but under heavy
load, I see processes getting stuck in "D" state (eg. nfsd or anything
doing lots of disk IO) Most of the time they recover, but I've lost a disk
partition on more than one occasion (saved by raid, and it re-built OK
after a reboot). I've seen this in 2 different servers and tried both
Intel and AMD CPUs. Tonight I try a set of different PCI IDE controllers
in one server to see if that helps it.

It's hard to tell if it's a real hardware problem or a software one (the
Promise driver being fairly new, patched in at 2.4.20, included in 2.4.22)


> is there an advantage to >more< than 1 spare drive? .. more than 3
> drives in mdx?  why not cp old boot/root/whatever drive to mdx after
> booting on floppy?

The more drives you have in the RAID set, the less "wastage" there is.
E.g. with 3 drives you get 2 drives' worth of data storage; with 8 drives
you get 7 drives' worth.

> is there an advantage to having various mdx's allocated to various
> directories? i.e. /home, /var, etc.

Traditionally, yes. I usually build a machine with 4 partitions: root,
swap, /usr and a data partition (maybe /var or /home or something else,
depending on the use of the server). Traditionally this was to minimise
head movement between swap and /usr, and to help keep things separate
should a crash happen or someone fill up /home or /var. These days I'm
not so sure it matters, but since I've been doing it that way for the
past 18 years it kinda sticks...

I don't bother with a /boot partition (IIRC that was only needed in the
bad old >1024-cylinder days); I just allocate about 256M to root.  Even
that's a lot more than needed if you have /var on a separate partition.
So with 3 disks, I'd have identical partitions:

  0 - 256M
  1 - 1024
  2 - 2048
  3 - Rest of disk

Partitions 0 of the first 2 disks (masters on the on-board controllers?)
would be in a RAID1 configuration so you can boot off them, and the others
in RAID5 configurations: partition 1 for swap, 2 for /usr, and 3 for /var
or /home or whatever you need. Your swap partition might need to be a
different size - you'll want it twice the amount of RAM and then a bit
more, or none at all. Disk is cheap these days, but so is memory! With
this setup you'll have a single spare partition of 256M, and in this case
I'd be happy to just ignore it. In a 4-disk system you can combine the 2
spare partitions into a RAID1 and use it for something - if you have a use
for a 256MB partition! (IDE drives are cheap, so generally don't bother,
but I have one server that uses the spare RAID1'd partition for the
journal of an XFS filesystem, which seems to improve things a lot.)
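
Assembled with mdadm, that layout might look roughly like this (a sketch;
device names are placeholders, and Linux numbers partitions from 1, so
"partition 0" above becomes hdX1, and so on):

  # RAID1 of the two small partitions, bootable root:
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/hda1 /dev/hdc1
  # RAID5 across all three disks for swap, /usr and the data partition:
  mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/hda2 /dev/hdc2 /dev/hde2
  mdadm --create /dev/md2 --level=5 --raid-devices=3 /dev/hda3 /dev/hdc3 /dev/hde3
  mdadm --create /dev/md3 --level=5 --raid-devices=3 /dev/hda4 /dev/hdc4 /dev/hde4
  mkswap /dev/md1 && swapon /dev/md1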

One interesting conundrum I had recently was with an 8-disk set I was
recycling (2 SCSI busses, 4 disks on each, if you were wondering): do I
put 2 partitions onto each disk and make 2 RAID sets over all 8 disks, or
use 4 disks in each set to achieve the same result? In the end I went for
2 partitions on each disk, to maximise data capacity (and it turned out to
benchmark slightly faster too). The disadvantage is that if a disk goes
down it will mark both RAID sets as degraded, but I can live with that, as
we have a cold spare ready to slot in should this ever happen. (And in the
past this old array has suffered one failure - it's now nearly 5 years old
and has been in a Linux box with RAID5 for all that time, starting with
2.2.10.)

Gordon

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Hermann Himmelbauer @ 2003-10-28  8:39 UTC
  To: Luke Rosenthal, maarten van den Berg; +Cc: linux-raid

On Tuesday 28 October 2003 06:37, Luke Rosenthal wrote:
> On Tue, 28 Oct 2003, maarten van den Berg wrote:
> > One of the cheapest "backup" media right now is hard disks themselves
> > (weird, isn't it?), so it stands to reason some people try to build
> > something that is foolproof (at least against hardware failure) using
> > just disks.
>
> OK, most folks would argue that hard disk failure is a byproduct of them
> being run continuously.  Especially in the case of IDE disks, which are
> usually cheaper than their SCSI counterparts on a cost-per-MB basis, but
> consequently have a much shorter MTBF.
>
> So here's my question: would it be possible to use a very large IDE disk
> in a system purely for backups, e.g. one of those new 300GB behemoths,
> but with one caveat: leave it "asleep"?  I.e. leave it spun down, and
> only wake it once a day at backup time, run the backup, verify it, then
> unmount it and put it back to sleep from a cron job.  Feasible?

I think that would be OK. The system normally deals well with spun-down 
disks.

> I know there are appropriate commands to do it, but are they risky?  If
> a drive's not mounted, but powered off, it should present no problems and
> shouldn't lock up the OS, correct?

No; many laptop users spin down their root disk, so I don't think there 
are problems.
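
A sketch of such a daily cron job (device and paths are illustrative; 
hdparm -y puts a drive into standby immediately):

  #!/bin/sh
  # Wake the backup disk, copy, then spin it down again:
  mount /dev/hdi1 /mnt/backup          # first access spins the drive up
  rsync -a /home/ /mnt/backup/home/
  umount /mnt/backup
  hdparm -y /dev/hdi                   # back into standby (spun down)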

> Everyone's got a horror story.. :)  and most people learn from it :)

Some more thoughts on the backup problem:
- When a system gets very reliable (with RAID etc.), the chance that a 
user does an "rm *" becomes greater than that of a hardware/software 
failure.

- Most users only have very little data that's worth backing up. Looking 
at myself, there is not much more than 300MB of data, and this is 
compressible to ~150MB. But my home directory is filled up with a *lot* of 
junk that sums to ~2GB. One way to deal with this is for the user to list 
the directories he wants backed up in a text file (or maybe a web 
interface).

		Best Regards,
		Hermann

-- 
x1@aon.at
GPG key ID: 299893C7 (on keyservers)
FP: 0124 2584 8809 EF2A DBF9  4902 64B4 D16B 2998 93C7


* Re: I'm about ready to do SW-Raid5 - pointers needed
From: maarten van den Berg @ 2003-10-28 14:06 UTC
  To: linux-raid

On Tuesday 28 October 2003 09:26, Gordon Henderson wrote:
> On Mon, 27 Oct 2003, berk walker wrote:

> Good luck with your promise board (what type is it?) I've had a lot of
> problems with them (kernels 2.4.20-22) They seem to work, but under heavy
> load, I see processes getting stuck in "D" state (eg. nfsd or anything
> doing lots of disk IO) Most of the time they recover, but I've lost a disk
> partition on more than one occasion (saved by raid, and it re-built OK
> after a reboot). I've seen this in 2 different servers and tried both
> Intel and AMD CPUs. Tonight I try a set of different PCI IDE controllers
> in one server to see if that helps it.

Promise cards do suck somewhat -albeit I use them- but what else is there ? 
The highpoint-equipped cards are even more sucky in many cases.

> It's hard to tell if it's a real hardware problem or a software one (the
> Promise driver being fairly new, patched in at 2.4.20, included in 2.4.22)

Ehm, what???  You're probably talking about the driver for the FastTrak 
cards, which are raid cards in their own right. But if you use a 'plain' 
card like the Ultra, it's just used as an additional IDE channel. That has 
worked since long before 2.4.20, and it is what I use.  No sense in buying 
a pricey FastTrak if you're not going to use its raid but use the md tools 
instead.

Regards,
Maarten

-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Gordon Henderson @ 2003-10-28 14:25 UTC
  To: maarten van den Berg; +Cc: linux-raid

On Tue, 28 Oct 2003, maarten van den Berg wrote:

> On Tuesday 28 October 2003 09:26, Gordon Henderson wrote:
> > On Mon, 27 Oct 2003, berk walker wrote:
>
> > Good luck with your promise board (what type is it?) I've had a lot of
> > problems with them (kernels 2.4.20-22) They seem to work, but under heavy
> > load, I see processes getting stuck in "D" state (eg. nfsd or anything
> > doing lots of disk IO) Most of the time they recover, but I've lost a disk
> > partition on more than one occasion (saved by raid, and it re-built OK
> > after a reboot). I've seen this in 2 different servers and tried both
> > Intel and AMD CPUs. Tonight I try a set of different PCI IDE controllers
> > in one server to see if that helps it.
>
> Promise cards do suck somewhat -albeit I use them- but what else is there ?
> The highpoint-equipped cards are even more sucky in many cases.

Hm. I'm just about to try a pair of HighPoint cards tonight...

> > It's hard to tell if it's a real hardware problem or a software one (the
> > Promise driver being fairly new, patched in at 2.4.20, included in 2.4.22)
>
> Ehm, what???  You're probably talking about the driver for the FastTrak
> cards, which are raid cards in their own right. But if you use a 'plain'
> card like the Ultra, it's just used as an additional IDE channel. That
> has worked since long before 2.4.20, and it is what I use.  No sense in
> buying a pricey FastTrak if you're not going to use its raid but use the
> md tools instead.

The cards I have identify in /proc/pci as:

  Unknown mass storage controller: Promise Technology, Inc. 20269 (#2) (rev 2)

They aren't RAID cards (as far as I'm aware!) and I needed to apply the AC
patches to 2.4.20 to get them to be recognised. (These patches are
integrated into 2.4.22)

Gordon

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: maarten van den Berg @ 2003-10-28 15:05 UTC
  To: linux-raid

On Tuesday 28 October 2003 15:25, you wrote:
> On Tue, 28 Oct 2003, maarten van den Berg wrote:
> > On Tuesday 28 October 2003 09:26, Gordon Henderson wrote:
> > > On Mon, 27 Oct 2003, berk walker wrote:
> > >
> > > Good luck with your promise board (what type is it?) I've had a lot of
> > > problems with them (kernels 2.4.20-22) They seem to work, but under
> > > heavy load, I see processes getting stuck in "D" state (eg. nfsd or
> > > anything doing lots of disk IO) Most of the time they recover, but I've
> > > lost a disk partition on more than one occasion (saved by raid, and it
> > > re-built OK after a reboot). I've seen this in 2 different servers and
> > > tried both Intel and AMD CPUs. Tonight I try a set of different PCI IDE
> > > controllers in one server to see if that helps it.
> >
> > Promise cards do suck somewhat -albeit I use them- but what else is there
> > ? The highpoint-equipped cards are even more sucky in many cases.
>
> Hm. I'm just about to try a pair of HighPoint cards tonight...

Well, you might be lucky. My own experiences with them were with older cards 
and chipsets, from years back.  Stuff changes.  :-)

> > > It's hard to tell if it's a real hardware problem or a software one
> > > (the Promise driver being fairly new, patched in at 2.4.20, included in
> > > 2.4.22)
> >
> > Ehm, what???  You're probably talking about the driver for the
> > FastTrak cards, which are raid cards in their own right. But if you use
> > a 'plain' card like the Ultra, it's just used as an additional IDE
> > channel. That has worked since long before 2.4.20, and it is what I
> > use.  No sense in buying a pricey FastTrak if you're not going to use
> > its raid but use the md tools instead.
>
> The cards I have identify in /proc/pci as:
>
>   Unknown mass storage controller: Promise Technology, Inc. 20269 (#2) (rev
> 2)

That's an Ultra 133 TX, I have the same one.

> They aren't RAID cards (as far as I'm aware!) and I needed to apply the AC
> patches to 2.4.20 to get them to be recognised. (These patches are
> integrated into 2.4.22)

Hmm.  Ok.  Weird...  My antique SuSE linux distro 7.3 recognized them right 
away, and so did the latest SuSE 8.2 I installed over that yesterday.
SuSE has been known to apply a lot of patches to their kernel though...

> Gordon

Maarten

-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Gordon Henderson @ 2003-10-28 15:10 UTC
  To: maarten van den Berg; +Cc: linux-raid

On Tue, 28 Oct 2003, maarten van den Berg wrote:

> > > Promise cards do suck somewhat -albeit I use them- but what else is there
> > > ? The highpoint-equipped cards are even more sucky in many cases.
> >
> > Hm. I'm just about to try a pair of HighPoint cards tonight...
>
> Well, you might be lucky. My own experiences with them were with older cards
> and chipsets, from years back.  Stuff changes.  :-)

I'll let you know how I get on...

> > The cards I have identify in /proc/pci as:
> >
> >   Unknown mass storage controller: Promise Technology, Inc. 20269 (#2) (rev
> > 2)
>
> That's an Ultra 133 TX, I have the same one.
>
> > They aren't RAID cards (as far as I'm aware!) and I needed to apply the AC
> > patches to 2.4.20 to get them to be recognised. (These patches are
> > integrated into 2.4.22)
>
> Hmm.  Ok.  Weird...  My antique SuSE linux distro 7.3 recognized them right
> away, and so did the latest SuSE 8.2 I installed over that yesterday.
> SuSE has been known to apply a lot of patches to their kernel though...

I use Debian and stock kernels, so that might be it. It was interesting 
to bring up - I had to use the on-board IDE controller to get the system 
booted, then compile a kernel with the driver for the PCI controller, 
then move the drive... then build all the RAID stuff and make it boot off 
RAID. One day Debian will have an installer that can install directly 
onto RAID, but not yet!

Gordon

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Norman Schmidt @ 2003-10-28 16:37 UTC
  To: linux-raid

maarten van den Berg wrote:

>>The cards I have identify in /proc/pci as:
>>
>>  Unknown mass storage controller: Promise Technology, Inc. 20269 (#2) (rev
>>2)
> 
> 
> That's an Ultra 133 TX, I have the same one.
> 
> 
>>They aren't RAID cards (as far as I'm aware!) and I needed to apply the AC
>>patches to 2.4.20 to get them to be recognised. (These patches are
>>integrated into 2.4.22)
> 
> 
> Hmm.  Ok.  Weird...  My antique SuSE linux distro 7.3 recognized them right 
> away, and so did the latest SuSE 8.2 I installed over that yesterday.
> SuSE has been known to apply a lot of patches to their kernel though...

I use a plain kernel.org 2.4.22 kernel with two Ultra133 and one 
Ultra100 in one server. It works without problems. Are you sure you 
activated ATA/IDE/MFM/RLL -> IDE, ATA and ATAPI Block devices -> PROMISE 
PDC202(68|69|70|71|75|76|77)?

That should do the trick.

Hope this helps, Norman.
-- 

Norman Schmidt          Institut fuer Physikal. u. Theoret. Chemie
Dipl.-Chem.             Friedrich-Alexander-Universitaet
schmidt@naa.net         Erlangen-Nuernberg


* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Gordon Henderson @ 2003-10-28 16:54 UTC
  To: schmidt; +Cc: linux-raid

On Tue, 28 Oct 2003, Norman Schmidt wrote:

> I use a plain kernel.org 2.4.22 kernel with two Ultra133 and one
> Ultra100 in one server. It works without problems. Are you sure you
> activated ATA/IDE/MFM/RLL -> IDE, ATA and ATAPI Block devices -> PROMISE
> PDC202(68|69|70|71|75|76|77)?

Absolutely. These boards do not work in generic mode. From /var/log/dmesg:

PDC20269: IDE controller at PCI slot 02:05.0
PDC20269: chipset revision 2
PDC20269: not 100% native mode: will probe irqs later
    ide2: BM-DMA at 0x3080-0x3087, BIOS settings: hde:pio, hdf:pio
    ide3: BM-DMA at 0x3088-0x308f, BIOS settings: hdg:pio, hdh:pio
PDC20269: IDE controller at PCI slot 02:07.0
PDC20269: chipset revision 2
PDC20269: not 100% native mode: will probe irqs later
    ide4: BM-DMA at 0x3090-0x3097, BIOS settings: hdi:pio, hdj:pio
    ide5: BM-DMA at 0x3098-0x309f, BIOS settings: hdk:pio, hdl:pio

and from the .config file:

# CONFIG_BLK_DEV_PDC202XX_OLD is not set
# CONFIG_PDC202XX_BURST is not set
CONFIG_BLK_DEV_PDC202XX_NEW=y
# CONFIG_PDC202XX_FORCE is not set
CONFIG_BLK_DEV_PDC202XX=y
# CONFIG_BLK_DEV_ATARAID_PDC is not set

Gordon

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: jlewis @ 2003-10-28 18:28 UTC
  To: maarten van den Berg; +Cc: linux-raid

On Tue, 28 Oct 2003, maarten van den Berg wrote:

> Promise cards do suck somewhat -albeit I use them- but what else is there ? 
> The highpoint-equipped cards are even more sucky in many cases.

What about Adaptec's IDE/ATA versions of their DPT RAID controllers?  
AFAIK, they even use the same dpt driver.

----------------------------------------------------------------------
 Jon Lewis *jlewis@lewis.org*|  I route
 Senior Network Engineer     |  therefore you are
 Atlantic Net                |  
_________ http://www.lewis.org/~jlewis/pgp for PGP public key_________


* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Gordon Henderson @ 2003-10-28 19:14 UTC
  To: linux-raid

On Tue, 28 Oct 2003 jlewis@lewis.org wrote:

> On Tue, 28 Oct 2003, maarten van den Berg wrote:
>
> > Promise cards do suck somewhat -albeit I use them- but what else is there ?
> > The highpoint-equipped cards are even more sucky in many cases.
>
> What about Adaptec's IDE/ATA versions of their DPT RAID controllers?
> AFAIK, they even use the same dpt driver.

Who knows. I've just tried to swap the 2 promise cards for 2 highpoint
cards - with fairly disastrous results )-: It booted, recognised the 4
disks (hde,g,i,k), but then started to whinge about DMA errors and general
badness, so I put the promise cards back in and it's now happily
rebuilding the raid arrays.

Maybe it's having 2 promise (or highpoint) cards? I had one highpoint card
in another PC for a few days and it was performing well...

It's very annoying, whatever it is, and unfortunately I don't have the
resources or time to try to really get to the bottom of this )-:

Gordon

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: maarten van den Berg @ 2003-10-28 19:51 UTC
  To: linux-raid

On Tuesday 28 October 2003 19:28, jlewis@lewis.org wrote:
> On Tue, 28 Oct 2003, maarten van den Berg wrote:
> > Promise cards do suck somewhat -albeit I use them- but what else is there
> > ? The highpoint-equipped cards are even more sucky in many cases.
>
> What about Adaptec's IDE/ATA versions of their DPT RAID controllers?
> AFAIK, they even use the same dpt driver.

Can't comment on the Adaptec.  Adaptec usually makes a good product, so...

My experience with HPT chipsets was that they _seem_ to work at first, but 
then when you turn on DMA all hell breaks loose. Either the chipset or the 
Linux driver just can't cope with DMA (and we all know how fast PIO is...).
This behaviour was observed with an onboard HPT366, and (much) later again 
with a standalone HPT370. From then on, I refused to touch 
HighPoint-chipset-based cards with a 10' pole.  (YMMV)

Maarten

-- 
Yes of course I'm sure it's the red cable. I guarante[^%!/+)F#0c|'NO CARRIER

* Re: I'm about ready to do SW-Raid5 - pointers needed
From: Maurice Hilarius @ 2003-10-29  4:33 UTC
  To: Gordon Henderson; +Cc: linux-raid

With regards to your message at 12:14 PM 10/28/03, Gordon Henderson, 
where you stated:
>Who knows. I've just tried to swap the 2 promise cards for 2 highpoint
>cards - with fairly disastrous results )-: It booted, recognised the 4
>disks (hde,g,i,k), but then started to whinge about DMA errors and general
>badness, so I put the promise cards back in and it's now happily
>rebuilding the raid arrays.
>
>Maybe it's having 2 promise (or highpoint) cards? I had one highpoint card
>in another PC for a few days and it was performing well...
>
>It's very annoying, whatever it is, and unfortunately I don't have the
>resources or time to try to really get to the bottom of this )-:
>
>Gordon


3Ware.
They work.
Reliable.
Decent open-source drivers.
Decent support.
Fast.
They either do real hardware RAID (not the Promise so-called version), or
make lovely 2, 4, 8, or 12 port devices for MD RAID.
12 channels, 12 disks, ummm, tasty.



With our best regards,

Maurice W. Hilarius       Telephone: 01-780-456-9771
Hard Data Ltd.               FAX:       01-780-456-9772
11060 - 166 Avenue        mailto:maurice@harddata.com
Edmonton, AB, Canada      http://www.harddata.com/
    T5X 1Y3

