* what happens to raid when more disks are added?
@ 2003-05-07 15:44 Herta Van den Eynde
From: Herta Van den Eynde @ 2003-05-07 15:44 UTC (permalink / raw)
  To: linux-raid

I currently have two systems using "standard" software RAID, and two more to 
set up (hopefully using mdadm).
Their data disks are located in a split-bus PowerVault and mirrored across 
SCSI adapters.  Adapter 1 connects to the disks in slots 0, 1, and 2 of the 
PowerVault; adapter 2 connects to the disks in slots 9, 10, and 11.

To linux, they are known as devices sd[c-h], which have been configured 
as raid 0+1:

# cat /proc/mdstat
Personalities : [raid0] [raid1]
read_ahead 1024 sectors
md2 : active raid1 md1[1] md0[0]
      106679168 blocks [2/2] [UU]

md0 : active raid0 sde1[2] sdd1[1] sdc1[0]
      106679232 blocks 8k chunks

md1 : active raid0 sdh1[2] sdg1[1] sdf1[0]
      106679232 blocks 8k chunks

unused devices: <none>
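
For the two mdadm-based systems, I assume the equivalent creation commands
would look roughly like this (untested; the chunk size and device names are
simply the ones from the output above):

# mdadm --create /dev/md0 --level=0 --chunk=8 --raid-devices=3 \
        /dev/sdc1 /dev/sdd1 /dev/sde1
# mdadm --create /dev/md1 --level=0 --chunk=8 --raid-devices=3 \
        /dev/sdf1 /dev/sdg1 /dev/sdh1
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/md0 /dev/md1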

When I need to add extra disks, e.g. in slots 3 and 12, I assume that 
the disk in slot 3 will get device name /dev/sdf, and the disks in slots 
9 through 12 will subsequently be known as /dev/sd[g-j].  How will that 
affect the raid 0+1 config?

Kind regards,

Herta



* Re: what happens to raid when more disks are added?
@ 2003-05-07 16:37 ` Paul Clements
From: Paul Clements @ 2003-05-07 16:37 UTC (permalink / raw)
  To: Herta Van den Eynde; +Cc: linux-raid

Herta Van den Eynde wrote:
 
> To linux, they are known as devices sd[c-h], which have been configured
> as raid 0+1:
> 
> # cat /proc/mdstat
> Personalities : [raid0] [raid1]
> read_ahead 1024 sectors
> md2 : active raid1 md1[1] md0[0]
>       106679168 blocks [2/2] [UU]
> 
> md0 : active raid0 sde1[2] sdd1[1] sdc1[0]
>       106679232 blocks 8k chunks
> 
> md1 : active raid0 sdh1[2] sdg1[1] sdf1[0]
>       106679232 blocks 8k chunks
> 
> unused devices: <none>

It's generally thought to be better to set this up as a RAID 1+0 (three
raid1 devices striped together) but maybe there's a reason why you've
opted for the RAID 0+1? (there's one less md device, I guess)...
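
For example, a 1+0 layout over the same six disks might look something like
this (just an untested sketch, and assuming you'd mirror each adapter-1 disk
against the corresponding adapter-2 disk):

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdf1
# mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdd1 /dev/sdg1
# mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdh1
# mdadm --create /dev/md3 --level=0 --chunk=8 --raid-devices=3 \
        /dev/md0 /dev/md1 /dev/md2

The main win is that a single disk failure only degrades one of the small
mirrors; with 0+1 it takes out a whole stripe and leaves the mirror with no
redundancy until it's rebuilt.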

 
> When I need to add extra disks, e.g. in slots 3 and 12, I assume that
> the disk in slot 3 will get device name /dev/sdf, and the disks in slots
> 9 through 12 will subsequently be known as /dev/sd[g-j].  How will that
> affect the raid 0+1 config?

You may want to think about using the autodetection feature of md (or
mdadm's UUID capability)... these would allow you to avoid messing
things up if your drive letters shift on you...
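
For the UUID route, something along these lines should do it (a rough
sketch; the config file may live at /etc/mdadm/mdadm.conf on some
distributions):

# echo 'DEVICE /dev/sd?1 /dev/md0 /dev/md1' > /etc/mdadm.conf
# mdadm --examine --scan >> /etc/mdadm.conf
# mdadm --assemble --scan

--examine --scan records ARRAY lines keyed on each array's UUID, so
--assemble --scan will find the right members no matter which sd letters
the disks come up as.  Autodetection instead relies on marking each member
partition as type fd (Linux raid autodetect) so the kernel assembles them
at boot; as far as I know it only looks at real partitions, so the raid1
built on top of md0 and md1 would still need the config file.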

--
Paul


* Re: what happens to raid when more disks are added?
@ 2003-05-08  8:10   ` Herta Van den Eynde
From: Herta Van den Eynde @ 2003-05-08  8:10 UTC (permalink / raw)
  To: Paul Clements; +Cc: linux-raid



Paul Clements wrote:

>Herta Van den Eynde wrote:
>
>>To linux, they are known as devices sd[c-h], which have been configured
>>as raid 0+1:
>>
>># cat /proc/mdstat
>>Personalities : [raid0] [raid1]
>>read_ahead 1024 sectors
>>md2 : active raid1 md1[1] md0[0]
>>      106679168 blocks [2/2] [UU]
>>
>>md0 : active raid0 sde1[2] sdd1[1] sdc1[0]
>>      106679232 blocks 8k chunks
>>
>>md1 : active raid0 sdh1[2] sdg1[1] sdf1[0]
>>      106679232 blocks 8k chunks
>>
>>unused devices: <none>
>
>It's generally thought to be better to set this up as a RAID 1+0 (three
>raid1 devices striped together) but maybe there's a reason why you've
>opted for the RAID 0+1? (there's one less md device, I guess)...
>
>>When I need to add extra disks, e.g. in slots 3 and 12, I assume that
>>the disk in slot 3 will get device name /dev/sdf, and the disks in slots
>>9 through 12 will subsequently be known as /dev/sd[g-j].  How will that
>>affect the raid 0+1 config?
>
>You may want to think about using the autodetection feature of md (or
>mdadm's UUID capability)... these would allow you to avoid messing
>things up if your drive letters shift on you...
>
>--
>Paul

Thanks, Paul.

Raid 1+0 definitely was and is on my wish list, but when I set up the 
initial systems, I had no time to play around and check things out.  I 
was nervous about the potential impact of devices changing names, and 
figured that it'd be easier to manage with a raid 0+1, as it'd be easy 
to break the mirror and take it from there.
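
(What I have in mind by breaking the mirror is something along the lines of
the following, if I'm reading the mdadm manpage correctly:

# mdadm /dev/md2 --fail /dev/md1
# mdadm /dev/md2 --remove /dev/md1
# mdadm --stop /dev/md1

after which md1's three disks could be reused for a new layout while md2
keeps running degraded on md0.)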

I've got a bit more time now, so I'll check out autodetection and uuid.

Kind regards,

Herta


-- 
******************************************************

Herta Van den Eynde
Toledo system management

K.U.Leuven - Ludit
W.de Croylaan 52A
B-3001 Heverlee
Belgium
tel: +32 (0)16 322 166
fax: +32 (0)16 322 999

******************************************************

"For something fulfilled this hour, loved or endured." 
(W.H. Auden)

******************************************************





