linux-raid.vger.kernel.org archive mirror
* Multiple raids on one machine?
@ 2006-06-25 22:37 Chris Allen
  2006-06-25 22:44 ` Jim Buttafuoco
                   ` (4 more replies)
  0 siblings, 5 replies; 9+ messages in thread
From: Chris Allen @ 2006-06-25 22:37 UTC (permalink / raw)
  To: linux-raid

Back to my 12 terabyte fileserver, I have decided to split the storage
into four partitions each of 3TB. This way I can choose between XFS and
EXT3 later on.

So now, my options are between the following:

1. Single 12TB /dev/md0, partitioned into four 3TB partitions. But how
do I do this? fdisk won't handle it. Can GNU Parted handle partitions
this big? (A parted sketch follows below.)

2. Partition the raw disks into four partitions and make
/dev/md0,md1,md2,md3. But am I heading for problems here? Is there going
to be a big performance hit with four raid5 arrays on the same machine?
Am I likely to have dataloss problems if my machine crashes?
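
For option 1, I imagine something like this with GNU Parted and a GPT
label (assuming a parted recent enough to write GPT; the device name
and boundaries are illustrative):

  parted /dev/md0 mklabel gpt
  parted /dev/md0 mkpart primary 0GB 3072GB
  parted /dev/md0 mkpart primary 3072GB 6144GB
  parted /dev/md0 mkpart primary 6144GB 9216GB
  parted /dev/md0 mkpart primary 9216GB 12288GB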

Thanks for listening...




* Re: Multiple raids on one machine?
  2006-06-25 22:37 Multiple raids on one machine? Chris Allen
@ 2006-06-25 22:44 ` Jim Buttafuoco
  2006-06-25 22:46 ` H. Peter Anvin
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: Jim Buttafuoco @ 2006-06-25 22:44 UTC (permalink / raw)
  To: Chris Allen, linux-raid

Use LVM; that way you can resize the volumes.
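
Something along these lines, as a sketch only (the volume-group and
volume names here are made up):

  pvcreate /dev/md0
  vgcreate bigvg /dev/md0
  lvcreate -L 3072G -n vol1 bigvg     # one 3TB volume; repeat for vol2..vol4
  mkfs.ext3 /dev/bigvg/vol1           # or mkfs.xfs
  lvextend -L +500G /dev/bigvg/vol1   # grow the volume later...
  resize2fs /dev/bigvg/vol1           # ...then the filesystem (xfs_growfs for XFS)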





* Re: Multiple raids on one machine?
  2006-06-25 22:37 Multiple raids on one machine? Chris Allen
  2006-06-25 22:44 ` Jim Buttafuoco
@ 2006-06-25 22:46 ` H. Peter Anvin
  2006-06-26  0:16 ` Bill Davidsen
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 9+ messages in thread
From: H. Peter Anvin @ 2006-06-25 22:46 UTC (permalink / raw)
  To: Chris Allen; +Cc: linux-raid

Chris Allen wrote:
> 
> 2. Partition the raw disks into four partitions and make
> /dev/md0,md1,md2,md3. But am I heading for problems here? Is there
> going to be a big performance hit with four raid5 arrays on the same
> machine? Am I likely to have dataloss problems if my machine crashes?
> 

2 should work just fine.

	-hpa


* Re: Multiple raids on one machine?
  2006-06-25 22:37 Multiple raids on one machine? Chris Allen
  2006-06-25 22:44 ` Jim Buttafuoco
  2006-06-25 22:46 ` H. Peter Anvin
@ 2006-06-26  0:16 ` Bill Davidsen
  2006-06-26  8:05 ` Gordon Henderson
  2006-06-27  9:56 ` Nix
  4 siblings, 0 replies; 9+ messages in thread
From: Bill Davidsen @ 2006-06-26  0:16 UTC (permalink / raw)
  To: Chris Allen; +Cc: linux-raid

Chris Allen wrote:

> Back to my 12 terabyte fileserver, I have decided to split the storage
> into four partitions each of 3TB. This way I can choose between XFS
> and EXT3 later on.
>
> So now, my options are between the following:
>
> 1. Single 12TB /dev/md0, partitioned into four 3TB partitions. But how
> do I do this? fdisk won't handle it. Can GNU Parted handle partitions
> this big?
>
> 2. Partition the raw disks into four partitions and make
> /dev/md0,md1,md2,md3. But am I heading for problems here? Is there
> going to be a big performance hit with four raid5 arrays on the same
> machine? Am I likely to have dataloss problems if my machine crashes?

No. I regularly run a mix of RAID types over the same drives for 
performance vs. reliability reasons. And in the past I ran not only 
RAID-0 and RAID-5 using various partitions, but the stripe size was 
different as well. Survived a number of unscheduled restarts.

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: Multiple raids on one machine?
  2006-06-25 22:37 Multiple raids on one machine? Chris Allen
                   ` (2 preceding siblings ...)
  2006-06-26  0:16 ` Bill Davidsen
@ 2006-06-26  8:05 ` Gordon Henderson
  2006-06-26  8:47   ` Chris Allen
  2006-06-27  9:56 ` Nix
  4 siblings, 1 reply; 9+ messages in thread
From: Gordon Henderson @ 2006-06-26  8:05 UTC (permalink / raw)
  To: Chris Allen; +Cc: linux-raid

On Sun, 25 Jun 2006, Chris Allen wrote:

> Back to my 12 terabyte fileserver, I have decided to split the storage
> into four partitions each of 3TB. This way I can choose between XFS
> and EXT3 later on.
>
> So now, my options are between the following:
>
> 1. Single 12TB /dev/md0, partitioned into four 3TB partitions. But how
> do I do this? fdisk won't handle it. Can GNU Parted handle partitions
> this big?
>
> 2. Partition the raw disks into four partitions and make
> /dev/md0,md1,md2,md3. But am I heading for problems here? Is there
> going to be a big performance hit with four raid5 arrays on the same
> machine? Am I likely to have dataloss problems if my machine crashes?

I use option 2 (above) all the time, and I've never noticed any
performance issues (nor issues with recovery after a power failure). I'd
like to think that on a modern processor the CPU can handle the parity
calculations several orders of magnitude faster than the hardware can
chug data to & from the drives, so all it's really adding is a tiny bit
of latency...

Someone on this list posted a hint a while back that I've been using,
and you might find it handy too: name the md devices after the partition
number, if possible. So md1 would be made up from /dev/sda1, /dev/sdb1,
/dev/sdc1, etc., and md2 from /dev/sda2, /dev/sdb2, etc. It might just
save some confusion when hot adding/removing drives.
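
In mdadm terms the convention might look like this (a sketch only,
assuming six drives sda..sdf carved into matching partitions):

  mdadm --create /dev/md1 --level=5 --raid-devices=6 /dev/sd[a-f]1
  mdadm --create /dev/md2 --level=5 --raid-devices=6 /dev/sd[a-f]2
  mdadm --create /dev/md3 --level=5 --raid-devices=6 /dev/sd[a-f]3
  mdadm --create /dev/md4 --level=5 --raid-devices=6 /dev/sd[a-f]4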

The down-side is that if you do have to remove a drive, you have to
manually 'fail' every other md device+partition on that drive, then
manually remove them, before you can hot-remove the physical drive (or
cold-remove it, whatever).

So if you have /dev/md{1,2,3,4}, where /dev/md3 is made from /dev/sda3,
/dev/sdb3 and /dev/sdc3, and one of md3's members (e.g. /dev/sdc3) has
failed, then you need to:

  mdadm --fail /dev/md1 /dev/sdc1
  mdadm --fail /dev/md2 /dev/sdc2
# mdadm --fail /dev/md3 /dev/sdc3	# already failed
  mdadm --fail /dev/md4 /dev/sdc4

Then repeat with s/fail/remove/, after which you can echo the right
runes into /proc/scsi/scsi, hot-remove /dev/sdc, and plug a new one in.
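
Roughly like this (the numbers in the 'runes' assume the dead drive is
SCSI host 0, channel 0, id 2, lun 0; check dmesg or /proc/scsi/scsi for
the real values):

  mdadm --remove /dev/md1 /dev/sdc1
  mdadm --remove /dev/md2 /dev/sdc2
  mdadm --remove /dev/md3 /dev/sdc3
  mdadm --remove /dev/md4 /dev/sdc4
  echo "scsi remove-single-device 0 0 2 0" > /proc/scsi/scsi
  # after swapping in the new drive:
  echo "scsi add-single-device 0 0 2 0" > /proc/scsi/scsi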

At least, that's what I do when I've done it 'hot'. Doing it cold
doesn't really matter, as the server will boot with a blank partition
table in the replaced disk and just kick it out of the arrays - you can
then re-partition and mdadm --add the partitions back into each array.
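
For example (assuming /dev/sda survived and /dev/sdc is the fresh disk):

  sfdisk -d /dev/sda | sfdisk /dev/sdc   # clone the partition table
  mdadm --add /dev/md1 /dev/sdc1
  mdadm --add /dev/md2 /dev/sdc2
  mdadm --add /dev/md3 /dev/sdc3
  mdadm --add /dev/md4 /dev/sdc4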

I like to keep each partition identical if I can - here's an example:

Personalities : [raid1] [raid6]
md1 : active raid1 sdf1[5] sde1[4] sdd1[3] sdc1[2] sdb1[1] sda1[0]
      248896 blocks [6/6] [UUUUUU]

md2 : active raid6 sdf2[5] sde2[4] sdd2[3] sdc2[2] sdb2[1] sda2[0]
      1991680 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md3 : active raid6 sdf3[5] sde3[4] sdd3[3] sdc3[2] sdb3[1] sda3[0]
      1991680 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md5 : active raid6 sdf5[5] sde5[4] sdd5[3] sdc5[2] sdb5[1] sda5[0]
      3983616 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md6 : active raid6 sdf6[5] sde6[4] sdd6[3] sdc6[2] sdb6[1] sda6[0]
      277345536 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

md7 : active raid6 sdf7[5] sde7[4] sdd7[3] sdc7[2] sdb7[1] sda7[0]
      287177472 blocks level 6, 64k chunk, algorithm 2 [6/6] [UUUUUU]

Each drive is partitioned like:

Disk /dev/sda: 255 heads, 63 sectors, 17849 cylinders
Units = cylinders of 16065 * 512 bytes

   Device Boot    Start       End    Blocks   Id  System
/dev/sda1   *         1        31    248976   fd  Linux raid autodetect
/dev/sda2            32        93    498015   fd  Linux raid autodetect
/dev/sda3            94       155    498015   fd  Linux raid autodetect
/dev/sda4           156     17849 142127055    5  Extended
/dev/sda5           156       279    995998+  fd  Linux raid autodetect
/dev/sda6           280      8911  69336508+  fd  Linux raid autodetect
/dev/sda7          8912     17849  71794453+  fd  Linux raid autodetect


(Yes, md1 is a RAID-1 mirrored across 6 drives! Write performance might
be "sub-optimal", but it's only the root partition and hardly ever
written to. And yes, swap on this box - /dev/md2 - is under RAID-6.)

Gordon


* Re: Multiple raids on one machine?
  2006-06-26  8:05 ` Gordon Henderson
@ 2006-06-26  8:47   ` Chris Allen
  0 siblings, 0 replies; 9+ messages in thread
From: Chris Allen @ 2006-06-26  8:47 UTC (permalink / raw)
  To: Gordon Henderson; +Cc: linux-raid



Gordon Henderson wrote:
> I use option 2 (above) all the time, and I've never noticed any
> performance issues. (not issues with recovery after a power failure) I'd
> like to think that on a modern processor the CPU can handle the parity,
> etc. calculations several orders of magnitude faster than the hardware can
> chug data to & from the drives, so all it's really adding is a tiny bit of
> latency...
>
>   

Thanks for this, and for the very informed responses from other
subscribers. We have gone from 100GB of storage in May 2000 (which
lasted us over a year) to 50TB today (which will last us three months),
with a forecast of 200TB by this time next year! I'm learning fast :-)




* Re: Multiple raids on one machine?
  2006-06-25 22:37 Multiple raids on one machine? Chris Allen
                   ` (3 preceding siblings ...)
  2006-06-26  8:05 ` Gordon Henderson
@ 2006-06-27  9:56 ` Nix
  2006-06-27 11:33   ` Chris Allen
  4 siblings, 1 reply; 9+ messages in thread
From: Nix @ 2006-06-27  9:56 UTC (permalink / raw)
  To: Chris Allen; +Cc: linux-raid

On 25 Jun 2006, Chris Allen uttered the following:
> Back to my 12 terabyte fileserver, I have decided to split the storage
> into four partitions each of 3TB. This way I can choose between XFS
> and EXT3 later on.
> 
> So now, my options are between the following:
> 
> 1. Single 12TB /dev/md0, partitioned into four 3TB partitions. But how do
> I do this? fdisk won't handle it. Can GNU Parted handle partitions this big?
> 
> 2. Partition the raw disks into four partitions and make /dev/md0,md1,md2,md3.
> But am I heading for problems here? Is there going to be a big performance hit
> with four raid5 arrays on the same machine? Am I likely to have dataloss problems
> if my machine crashes?

There is a third alternative which can be useful if you have a mess of
drives of widely-differing capacities: make several RAID arrays so as
to tessellate space across all the drives, and then pile LVM on top of
them all to fuse them back into one again.

The result should give you the reliability of RAID-5 and the
resizability of LVM :)

e.g. the config on my home server, which for reasons of
disks-bought-at-different-times has disks varying in size from 10GB
through 40GB to 72GB. Discounting the tiny RAID-1 array used for booting
(LILO won't boot from RAID-5), it looks like this:

Two RAID arrays, positioned so as to fill up as much space as possible
on the various physical disks:

     Raid Level : raid5
     Array Size : 76807296 (73.25 GiB 78.65 GB)
    Device Size : 38403648 (36.62 GiB 39.33 GB)
   Raid Devices : 3
[...]
    Number   Major   Minor   RaidDevice State
       0       8        6        0      active sync   /dev/sda6
       1       8       22        1      active sync   /dev/sdb6
       3      22        5        2      active sync   /dev/hdc5

     Raid Level : raid5
     Array Size : 19631104 (18.72 GiB 20.10 GB)
    Device Size : 9815552 (9.36 GiB 10.05 GB)
   Raid Devices : 3
[...]
    Number   Major   Minor   RaidDevice State
       0       8       23        0      active sync   /dev/sdb7
       1       8        7        1      active sync   /dev/sda7
       3       3        5        2      active sync   /dev/hda5

(Note that the arrays share the largest disks: each array also lays
claim to almost the whole of one of the smaller disks.)

Then atop that we have two LVM volume groups: one filling up any
remaining non-RAIDed space, used for non-critical stuff which can be
regenerated on demand (if a disk dies the whole VG will vanish; if we
wanted to avoid that we could make that space into a RAID-1 array, but I
have a lot of easily-regenerated data and so didn't bother), and one
filling *both* RAID arrays:

  VG    #PV #LV #SN Attr   VSize  VFree  Devices
  disks   3   7   0 wz--n- 43.95G 21.80G /dev/sda8(0)
  disks   3   7   0 wz--n- 43.95G 21.80G /dev/sdb8(0)
  disks   3   7   0 wz--n- 43.95G 21.80G /dev/hdc6(0)
  raid    2   9   0 wz--n- 91.96G 49.77G /dev/md1(0)
  raid    2   9   0 wz--n- 91.96G 49.77G /dev/md2(0)
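
The 'raid' VG above could have been built with something like this (a
sketch; the PV names come from the listing, the LV name and size are
invented):

  pvcreate /dev/md1 /dev/md2
  vgcreate raid /dev/md1 /dev/md2
  lvcreate -L 20G -n home raid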

The result can survive any single disk failure, just like a single
RAID-5 array: the worst case is that one of the shared disks dies and
both arrays go degraded at once, but nothing else bad would happen to
the RAIDed storage.

Try doing *that* with hardware RAID. :)))

-- 
`NB: Anyone suggesting that we should say "Tibibytes" instead of
 Terabytes there will be hunted down and brutally slain.
 That is all.' --- Matthew Wilcox


* Re: Multiple raids on one machine?
  2006-06-27  9:56 ` Nix
@ 2006-06-27 11:33   ` Chris Allen
  2006-06-27 16:08     ` Nix
  0 siblings, 1 reply; 9+ messages in thread
From: Chris Allen @ 2006-06-27 11:33 UTC (permalink / raw)
  To: Nix; +Cc: linux-raid



Nix wrote:
> On 25 Jun 2006, Chris Allen uttered the following:
>> Back to my 12 terabyte fileserver, I have decided to split the storage
>> into four partitions each of 3TB. This way I can choose between XFS
>> and EXT3 later on.
>>
>> So now, my options are between the following:
>>
>> 1. Single 12TB /dev/md0, partitioned into four 3TB partitions. But how do
>> I do this? fdisk won't handle it. Can GNU Parted handle partitions this big?
>>
>> 2. Partition the raw disks into four partitions and make /dev/md0,md1,md2,md3.
>> But am I heading for problems here? Is there going to be a big performance hit
>> with four raid5 arrays on the same machine? Am I likely to have dataloss problems
>> if my machine crashes?
> 
> There is a third alternative which can be useful if you have a mess of
> drives of widely-differing capacities: make several RAID arrays so as
> to tessellate space across all the drives, and then pile LVM on top of
> them all to fuse them back into one again.
> 

But won't I be stuck with the same problem? i.e. I'll have a single
12TB LVM volume, and won't be able to use EXT3 on it?


* Re: Multiple raids on one machine?
  2006-06-27 11:33   ` Chris Allen
@ 2006-06-27 16:08     ` Nix
  0 siblings, 0 replies; 9+ messages in thread
From: Nix @ 2006-06-27 16:08 UTC (permalink / raw)
  To: Chris Allen; +Cc: linux-raid

On Tue, 27 Jun 2006, Chris Allen wondered:
> Nix wrote:
>> There is a third alternative which can be useful if you have a mess
>> of drives of widely-differing capacities: make several RAID arrays so
>> as to tessellate space across all the drives, and then pile LVM on
>> top of them all to fuse them back into one again.
> 
> But won't I be stuck with the same problem? i.e. I'll have a single
> 12TB LVM volume, and won't be able to use EXT3 on it?

Not without ext3 patches (until the very-large-ext3 patches now pending
on l-k go in), sure. But because it's LVMed you could cut it into a
couple of ext3 filesystems easily. (I find it hard to imagine a single
*directory* whose children contain 12TB of files in a form that you
can't cut into pieces with suitable use of bind mounts, but still,
perhaps such exists.)
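
Sketched with invented names and sizes, the carving is just:

  lvcreate -L 3072G -n part1 bigvg
  mkfs.ext3 /dev/bigvg/part1
  mount /dev/bigvg/part1 /srv/part1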

-- 
`NB: Anyone suggesting that we should say "Tibibytes" instead of
 Terabytes there will be hunted down and brutally slain.
 That is all.' --- Matthew Wilcox


Thread overview: 9+ messages
2006-06-25 22:37 Multiple raids on one machine? Chris Allen
2006-06-25 22:44 ` Jim Buttafuoco
2006-06-25 22:46 ` H. Peter Anvin
2006-06-26  0:16 ` Bill Davidsen
2006-06-26  8:05 ` Gordon Henderson
2006-06-26  8:47   ` Chris Allen
2006-06-27  9:56 ` Nix
2006-06-27 11:33   ` Chris Allen
2006-06-27 16:08     ` Nix
