linux-raid.vger.kernel.org archive mirror
* (unknown)
@ 2006-01-11 14:47 bhess
  2006-01-11 17:44 ` your mail Ross Vandegrift
  2006-01-12 11:16 ` David Greaves
  0 siblings, 2 replies; 8+ messages in thread
From: bhess @ 2006-01-11 14:47 UTC (permalink / raw)
  To: linux-raid


I originally sent this to Neil Brown, who suggested I send it to you.

Any help would be appreciated.

Has anyone put an effort into building a RAID 1 based on USB-connected
drives under Red Hat 3/4, not as the root/boot drive?  A year ago I don't
think this made any sense, but with the price of drives now far less than
the equivalent tape media, and with the simple USB-to-IDE smart cables
available, I am evaluating an expandable USB disk farm for two uses.  The
first is a reasonably robust place to store data until I can select what
I want to put on tape.  The second is secondary storage for all of the
family video tapes that I am capturing in preparation for editing to DVD.
The system does not have to be fast, just large, robust, expandable and
cheap.

I currently run a Red Hat sandbox with a hardware RAID 5 and four 120 GB
SATA drives.  I have added USB drives and have them mount with the
LABEL=/PNAME option in fstab; in this manner they end up in the right
place after reboot.  I do not know enough about the Linux drive interface
to know whether USB-attached devices will be properly assembled into the
RAID at reboot, or after changes or additions of drives on the USB side.
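For what it's worth, the label scheme I am using amounts to something
like the following (the device name and label below are only examples,
not my real ones):

  # give the filesystem a label once...
  e2label /dev/sda1 /PNAME

  # ...then mount by label in /etc/fstab, so the /dev/sdX name the
  # kernel happens to assign no longer matters
  LABEL=/PNAME    /PNAME    ext3    defaults    0 2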



I am a retired Bell Labs Research supervisor.  I was in Murray Hill when
UNIX was born, and I still use Intel-based UNIX in the current form of
SCO UnixWare, both professionally and personally.  UnixWare is no longer
a viable product: I see no future in it, and Oracle is not supported.  I
know way too much about how the guts of UnixWare work, thanks to a friend
who was one of SCO's kernel and storage designers.  I know way too little
about how Linux works to get a USB-based RAID up without a lot of
research and tinkering.  I don't mind research and tinkering, but I don't
like reinventing the wheel.

I have read the Software-RAID HOWTO by Jakob Østergaard and Emilio Bueso
and downloaded mdadm.  I have not tried it yet.

The system I have in mind uses an Intel server motherboard, a hardware
RAID 1 SATA root/boot/swap drive, a SCSI tape drive and a 4-port USB
card, in a 2U chassis.  A second 2U chassis will contain a power supply,
up to 14 drives and lots of fans.  I have everything except the drives.
The sole use of this system will be a disk farm with an NFS and Samba
server.  It will run under Red Hat 3 or 4.  I am leaning toward Red Hat
4, since I understand SCSI tape support is more stable under 4.  Any
comment in this area would also be appreciated.

Can you point me in the direction of newer articles that cover Linux RAID
using USB-connected drives, or do you have any suggestions on the
configuration of such a system?  My main concern is how to get the USB
drives correctly put back into the RAID after a boot and/or a USB change,
since I do not know how they are assigned to /dev/sdxy in the first place
or how USB hubs interact with the assignments.  I realize I should have
other concerns and just don't know enough.  Ignorance is bliss, up to an
init 6.
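From what I have read in the HOWTO so far, mdadm can assemble an array by
the UUID stored in each member's superblock rather than by device name,
which would sidestep the /dev/sdxy question.  Something along these lines
is what I have in mind (the UUID here is made up):

  # /etc/mdadm.conf
  DEVICE /dev/sd*
  ARRAY /dev/md0 UUID=3aaa0791:72321458:9d6b9e4f:0a72c51e

  # at boot, or after the USB drives are plugged back in:
  mdadm --assemble --scan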



Thank you for your time.

Bill Hess
bhess@patmedia.net


^ permalink raw reply	[flat|nested] 8+ messages in thread
* (unknown), 
@ 2009-04-02  4:16 Lelsie Rhorer
  2009-04-02  6:56 ` your mail Luca Berra
  0 siblings, 1 reply; 8+ messages in thread
From: Lelsie Rhorer @ 2009-04-02  4:16 UTC (permalink / raw)
  To: linux-raid

I'm having a severe problem whose root cause I cannot determine.  I have a
RAID 6 array managed by mdadm running on Debian "Lenny" with a 3.2 GHz AMD
Athlon 64 X2 processor and 8 GB of RAM.  There are ten 1 TB SATA drives,
unpartitioned and fully allocated to the /dev/md0 device.  The drives are
served by three Silicon Image SATA port multipliers and a Silicon Image
4-port eSATA controller.  The /dev/md0 device is also unpartitioned, and
all 8 TB of active space is formatted as a single ReiserFS file system.
The entire volume is mounted at /RAID.  Various directories on the volume
are shared using both NFS and Samba.
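For reference, the array was created with mdadm in the usual way, roughly
as follows; the member device names below are examples only:

  mdadm --create /dev/md0 --level=6 --raid-devices=10 /dev/sd[b-k]
  mkfs.reiserfs /dev/md0
  mount /dev/md0 /RAID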

Performance of the RAID system is very good.  The array can read and write
at over 450 Mbps, and I don't know if the limit is the array itself or the
network, but since the performance is more than adequate I really am not
concerned which is the case.

The issue is that the entire array will occasionally pause completely for
about 40 seconds when a file is created.  This does not always happen, but
the situation is easily reproducible.  The frequency at which the symptom
occurs seems to be related to the transfer load on the array.  If no other
transfers are in progress, the failure is somewhat more rare, perhaps
accompanying fewer than 1 file creation in 10.  During heavy file transfer
activity, the system sometimes halts with every other file creation.
Although I have observed many dozens of these events, I have never once
observed one except when a file creation occurs.  Reading and writing
existing files never triggers the event, although any read or write
occurring during the event is halted for the duration.  (There is one cron
job which runs every half-hour and creates a tiny file; this is the most
common failure vector.)  There are other drives formatted with other file
systems on the machine, but the issue has never been seen on any of them.
When the array runs its regularly scheduled health check, the problem is
much worse.  Not only does it lock up with almost every single file
creation, but the lock-up time is much longer, sometimes in excess of
2 minutes.

Transfers via Linux-based utilities (ftp, NFS, cp, mv, rsync, etc.) all
recover after the event, but Samba-based transfers frequently fail, both
reads and writes.

How can I troubleshoot and more importantly resolve this issue?
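So far the only checks I have thought of are along these lines, run while
a stall is in progress (a rough list, nothing systematic):

  cat /proc/mdstat                    # is a check/resync running?
  dmesg | tail -n 50                  # SATA link resets behind the port multipliers?
  iostat -x 2                         # is one member disk pegged while the rest idle?
  cat /sys/block/md0/md/stripe_cache_size            # RAID 5/6 stripe cache size
  sysctl vm.dirty_ratio vm.dirty_background_ratio    # writeback thresholds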


^ permalink raw reply	[flat|nested] 8+ messages in thread
* (unknown), 
@ 2007-10-09  9:56 Frédéric Mantegazza
  2007-10-09 10:46 ` your mail Justin Piszcz
  0 siblings, 1 reply; 8+ messages in thread
From: Frédéric Mantegazza @ 2007-10-09  9:56 UTC (permalink / raw)
  To: linux-raid

subscribe linux-raid
-- 
   Frédéric

-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html

^ permalink raw reply	[flat|nested] 8+ messages in thread
* (unknown)
@ 2002-10-30  1:26 Michael Robinton
  2002-10-30  9:59 ` your mail Massimiliano Masserelli
  0 siblings, 1 reply; 8+ messages in thread
From: Michael Robinton @ 2002-10-30  1:26 UTC (permalink / raw)
  To: linux-raid

>I've taken a look at the ML archives, and found an old thread (06/2002)
>on this subject, but found no solution.
>
>I've a working setup with a two disks RAID1 root, which boots
>flawlessly. Troubles arise when simulating hw failure. RAID setup is as
>follows:
>
>raiddev                 /dev/md0
>raid-level              1
>nr-raid-disks           2
>nr-spare-disks          0
>chunk-size              4
>
>device                  /dev/hda1
>raid-disk               0
>
>device                  /dev/hdc1
>raid-disk               1

>If I disconnect /dev/hda before booting, the kernel tries to initialize
>the array, can't access /dev/hda1 (no wonder), marks it as faulty, then
>refuses to initialize the array, dying with a kernel panic, unable to
>mount root.
>
>If I disconnect /dev/hdc before booting, the array gets started in
>degraded mode, and the startup goes on without a glitch.
>
>If I disconnect /dev/hda and move /dev/hdc to its place (so it's now
>/dev/hda), the array gets started in degraded mode and the startup goes
>on.
>
>Actually, this is already a workable solution (if the first disk dies, I
>just "promote" the second to hda and go looking for a replacement of the
>broken disk), but I think this is not _elegant_. 8)
>
>Could anyone help me shed some light on the subject?
>
>Tnx in advance.
>--
>Massimiliano Masserelli

There is no standard for the behavior of the motherboard BIOS when the
first device, 0x80, is not available at boot time.  Some motherboards will
automatically move 0x81 -> 0x80, some can do it via a BIOS setting, and
with some you're stuck.

Most SCSI controllers will do this, and a few IDE controllers will as
well.

Generally, for best flexibility, use an independent lilo config file for
each hard disk and set the boot disk pointer individually for each drive
to 0x80 or 0x81 as needed for your environment, rather than using the
"raid" feature of lilo.

See the Boot-Root-Raid-LILO HOWTO for examples.  This doc is a bit out of
date, but the examples and setups are all applicable to the 2.4 series
kernels.
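A minimal sketch of the idea, assuming the /dev/hda and /dev/hdc layout
from your raidtab (file names, kernel path and label are only
placeholders), is two config files installed with "lilo -C <file>":

  # /etc/lilo.hda -- install with: lilo -C /etc/lilo.hda
  boot=/dev/hda
  disk=/dev/hda
      bios=0x80
  image=/boot/vmlinuz
      label=linux
      root=/dev/md0
      read-only

  # /etc/lilo.hdc -- install with: lilo -C /etc/lilo.hdc
  # bios=0x80 tells lilo this disk will be seen as the first BIOS disk
  # once hda is gone
  boot=/dev/hdc
  disk=/dev/hdc
      bios=0x80
  image=/boot/vmlinuz
      label=linux
      root=/dev/md0
      read-only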

Michael


^ permalink raw reply	[flat|nested] 8+ messages in thread

end of thread, other threads:[~2009-04-02  6:56 UTC | newest]

Thread overview: 8+ messages
2006-01-11 14:47 (unknown) bhess
2006-01-11 17:44 ` your mail Ross Vandegrift
2006-01-12 11:16 ` David Greaves
2006-01-12 17:20   ` Re: Ross Vandegrift
2006-01-17 12:12     ` Re: David Greaves
  -- strict thread matches above, loose matches on Subject: below --
2009-04-02  4:16 (unknown), Lelsie Rhorer
2009-04-02  6:56 ` your mail Luca Berra
2007-10-09  9:56 (unknown), Frédéric Mantegazza
2007-10-09 10:46 ` your mail Justin Piszcz
2002-10-30  1:26 (unknown) Michael Robinton
2002-10-30  9:59 ` your mail Massimiliano Masserelli
