linux-raid.vger.kernel.org archive mirror
* Swap initialised as an md?
@ 2006-11-10 10:29 David
  2006-11-10 11:55 ` Mogens Kjaer
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: David @ 2006-11-10 10:29 UTC (permalink / raw)
  To: linux-raid

I have two devices mirrored which are partitioned like this:

    Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *          63    30716279    15358108+  fd  Linux raid autodetect
/dev/sda2        30716280    71682029    20482875   fd  Linux raid autodetect
/dev/sda3        71682030   112647779    20482875   fd  Linux raid autodetect
/dev/sda4       112647780   156248189    21800205    5  Extended
/dev/sda5       112647843   122881184     5116671   82  Linux swap / Solaris
/dev/sda6       122881248   156248189    16683471   fd  Linux raid autodetect

My aim was to have both swap partitions mounted with no RAID (I  
didn't see any benefit to RAID there, but if I'm wrong then I'd  
appreciate being told!).  However, sda5 is recognised as an md  
anyway at boot, so swapon does not work correctly.  When I  
initialise the partitions with mkswap, the RAID array is confused  
and refuses to boot until the superblocks are fixed.

At boot, the kernel says:

[17179589.184000] md: md3 stopped.
[17179589.184000] md: bind<sdb5>
[17179589.188000] md: bind<sda5>
[17179589.188000] raid1: raid set md3 active with 2 out of 2 mirrors

Then /proc/mdstat says:

md3 : active raid1 sda5[0] sdb5[1]
       5116544 blocks [2/2] [UU]

The following is present in /etc/mdadm/mdadm.conf; it was created  
by the installer and lists only four arrays.  In actual fact sdx6 is  
recognised as a fifth array, md4.

DEVICE partitions
ARRAY /dev/md3 level=raid1 num-devices=2 UUID=75575384:5fbe10ed:a5a46544:209740b3
ARRAY /dev/md2 level=raid1 num-devices=2 UUID=5d133655:1d034197:c1c19528:56cc420a
ARRAY /dev/md1 level=raid1 num-devices=2 UUID=2cda8230:b2fde7b4:97082351:880c918a
ARRAY /dev/md0 level=raid1 num-devices=2 UUID=7f9abf32:c86071fd:3df4db9d:26ddd001

As /etc is on md0, I doubt this configuration file has anything to do  
with the kernel recognising the arrays and setting them active.  
However, is there any reason that the swap partitions (which have the  
correct partition type) are initialised as an md?  Can I stop it  
somehow, or is the correct method to keep them as an md and initialise  
the md as swap?
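For what it's worth, here is a hypothetical sequence that should stop the autodetection, assuming the leftover md superblocks on sda5/sdb5 are what the boot-time scan is finding.  Device names are taken from the listing above; these commands are destructive and are only a sketch, not a tested recipe:

```shell
# Stop the unwanted array, then wipe the stale md superblocks that
# the boot-time scan keeps finding on the two swap partitions.
mdadm --stop /dev/md3
mdadm --zero-superblock /dev/sda5
mdadm --zero-superblock /dev/sdb5

# Re-initialise both partitions as plain swap and enable them.
mkswap /dev/sda5
mkswap /dev/sdb5
swapon -a
```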

Brief details are the same as in my previous mails last week: kernel  
2.6.15, mdadm 1.12.0 (on md0, so I can't see how it could be at fault).

Thanks,

David

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: Swap initialised as an md?
  2006-11-10 10:29 Swap initialised as an md? David
@ 2006-11-10 11:55 ` Mogens Kjaer
  2006-11-12 14:03   ` Gabor Gombas
  2007-03-23 14:56 ` Grow a RAID-6 ? Gordon Henderson
  2007-03-23 20:22 ` Swap initialised as an md? Bill Davidsen
  2 siblings, 1 reply; 10+ messages in thread
From: Mogens Kjaer @ 2006-11-10 11:55 UTC (permalink / raw)
  To: linux-raid

David wrote:
...
> My aim was to have the two swap partitions both mounted, no RAID (as I 
> didn't see any benefit to that, but if I'm wrong then I'd appreciate 
> being told!). 

If one of your disks fails and you have pages in the swap area
on the failing disk, your machine will crash when those pages are
needed again.

Mogens

-- 
Mogens Kjaer, Carlsberg A/S, Computer Department
Gamle Carlsberg Vej 10, DK-2500 Valby, Denmark
Phone: +45 33 27 53 25, Fax: +45 33 27 47 08
Email: mk@crc.dk Homepage: http://www.crc.dk


* Re: Swap initialised as an md?
  2006-11-10 11:55 ` Mogens Kjaer
@ 2006-11-12 14:03   ` Gabor Gombas
  0 siblings, 0 replies; 10+ messages in thread
From: Gabor Gombas @ 2006-11-12 14:03 UTC (permalink / raw)
  To: Mogens Kjaer; +Cc: linux-raid

On Fri, Nov 10, 2006 at 12:55:57PM +0100, Mogens Kjaer wrote:

> If one of your disks fails, and you have pages in the swapfile
> on the failing disk, your machine will crash when the pages are
> needed again.

IMHO the machine will not crash; just the application that the page
belongs to will be killed. Of course, if that application happens to be
init or your mission-critical daemon, then the effect is not much
different from a crash...

Gabor

-- 
     ---------------------------------------------------------
     MTA SZTAKI Computer and Automation Research Institute
                Hungarian Academy of Sciences
     ---------------------------------------------------------


* Grow a RAID-6 ?
  2006-11-10 10:29 Swap initialised as an md? David
  2006-11-10 11:55 ` Mogens Kjaer
@ 2007-03-23 14:56 ` Gordon Henderson
  2007-03-23 15:31   ` Mattias Wadenstein
  2007-03-23 20:22 ` Swap initialised as an md? Bill Davidsen
  2 siblings, 1 reply; 10+ messages in thread
From: Gordon Henderson @ 2007-03-23 14:56 UTC (permalink / raw)
  To: linux-raid


Are there any plans in the near future to enable growing RAID-6 arrays by 
adding more disks into them?

I have a drive unit with 15x500GB disks and I need to add another 15 
drives to it... Hindsight tells me that maybe I should have put LVM on 
top of the RAID-6; however, the usable 6TB it yields should have been 
enough for anyone...

Cheers,

Gordon


* Re: Grow a RAID-6 ?
  2007-03-23 14:56 ` Grow a RAID-6 ? Gordon Henderson
@ 2007-03-23 15:31   ` Mattias Wadenstein
  2007-03-23 16:34     ` Gordon Henderson
  0 siblings, 1 reply; 10+ messages in thread
From: Mattias Wadenstein @ 2007-03-23 15:31 UTC (permalink / raw)
  To: Gordon Henderson; +Cc: linux-raid

On Fri, 23 Mar 2007, Gordon Henderson wrote:

>
> Are there any plans in the near future to enable growing RAID-6 arrays by 
> adding more disks into them?
>
> I have a 15x500GB - drive unit and I need to add another 15 drives into it... 
> Hindsight is telling me that maybe I should have put LVM on top of the 
> RAID-6, however, the usable 6TB it yields should have been enough for 
> anyone...

Well, if you are doubling the space, you could take this opportunity to 
put lvm on the new disks, move all the data, then put in the old disks as 
a pv, extending the lvm space.
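In concrete terms, the migration might look roughly like this (a hypothetical sketch: the array names /dev/md0 and /dev/md1, the VG/LV names, and the ext3 filesystem are all made up for illustration):

```shell
# New RAID-6 on the 15 new disks becomes the first (and only) PV.
pvcreate /dev/md1
vgcreate vg_backup /dev/md1
lvcreate -l 100%FREE -n backup vg_backup
mkfs.ext3 /dev/vg_backup/backup

# ... copy the data over from the old array, then recycle it as a
# second PV and grow the LV and filesystem into the added space:
pvcreate /dev/md0
vgextend vg_backup /dev/md0
lvextend -l +100%FREE /dev/vg_backup/backup
resize2fs /dev/vg_backup/backup
```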

I really wouldn't recommend having a 30-disk raid6, imagine the rebuild 
time after a failed disk..

/Mattias Wadenstein


* Re: Grow a RAID-6 ?
  2007-03-23 15:31   ` Mattias Wadenstein
@ 2007-03-23 16:34     ` Gordon Henderson
  2007-03-24  2:20       ` Daniel Korstad
  0 siblings, 1 reply; 10+ messages in thread
From: Gordon Henderson @ 2007-03-23 16:34 UTC (permalink / raw)
  To: Mattias Wadenstein; +Cc: linux-raid

On Fri, 23 Mar 2007, Mattias Wadenstein wrote:

> On Fri, 23 Mar 2007, Gordon Henderson wrote:
>
>> Are there any plans in the near future to enable growing RAID-6 arrays by 
>> adding more disks into them?
>> 
>> I have a 15x500GB - drive unit and I need to add another 15 drives into 
>> it... Hindsight is telling me that maybe I should have put LVM on top of 
>> the RAID-6, however, the usable 6TB it yields should have been enough for 
>> anyone...
>
> Well, if you are doubling the space, you could take this opportunity to put 
> lvm on the new disks, move all the data, then put in the old disks as a pv, 
> extending the lvm space.

Now why didn't I think of that. *thud*

> I really wouldn't recommend having a 30-disk raid6, imagine the rebuild time 
> after a failed disk..

There is that - it would give me 2 disks (ie. 1TB) more space though...

This isn't a performance limited server though, it's an off-site backup 
box, so it just has to be reasonably reliable.

Thanks!

Gordon


* Re: Swap initialised as an md?
  2006-11-10 10:29 Swap initialised as an md? David
  2006-11-10 11:55 ` Mogens Kjaer
  2007-03-23 14:56 ` Grow a RAID-6 ? Gordon Henderson
@ 2007-03-23 20:22 ` Bill Davidsen
  2007-03-23 20:35   ` Michael Tokarev
  2 siblings, 1 reply; 10+ messages in thread
From: Bill Davidsen @ 2007-03-23 20:22 UTC (permalink / raw)
  To: David; +Cc: linux-raid

David wrote:
> I have two devices mirrored which are partitioned like this:
>
>    Device Boot      Start         End      Blocks   Id  System
> /dev/sda1   *          63    30716279    15358108+  fd  Linux raid 
> autodetect
> /dev/sda2        30716280    71682029    20482875   fd  Linux raid 
> autodetect
> /dev/sda3        71682030   112647779    20482875   fd  Linux raid 
> autodetect
> /dev/sda4       112647780   156248189    21800205    5  Extended
> /dev/sda5       112647843   122881184     5116671   82  Linux swap / 
> Solaris
> /dev/sda6       122881248   156248189    16683471   fd  Linux raid 
> autodetect
>
> My aim was to have both swap partitions mounted with no RAID (I 
> didn't see any benefit to RAID there, but if I'm wrong then I'd 
> appreciate being told!).  However, sda5 is recognised as an md 
> anyway at boot, so swapon does not work correctly.  When I 
> initialise the partitions with mkswap, the RAID array is confused 
> and refuses to boot until the superblocks are fixed.
If you use RAID0 the array will (usually) be faster than plain 
partitions, but any process with swapped pages will crash if you lose 
either drive. With RAID1, operation will be more reliable but no faster. 
With RAID10 the array will be faster and more reliable, but most 
recovery CDs don't know about RAID10 swap. Any reliable swap will also 
make the array size smaller than the sum of the partitions (you knew that).

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979



* Re: Swap initialised as an md?
  2007-03-23 20:22 ` Swap initialised as an md? Bill Davidsen
@ 2007-03-23 20:35   ` Michael Tokarev
  2007-03-26  3:45     ` Bill Davidsen
  0 siblings, 1 reply; 10+ messages in thread
From: Michael Tokarev @ 2007-03-23 20:35 UTC (permalink / raw)
  To: Bill Davidsen; +Cc: David, linux-raid

Bill Davidsen wrote:
[]
> If you use RAID0 on an array it will be faster (usually) than just
> partitions, but any process with swapped pages will crash if you lose
> either drive. With RAID1 operation will be more reliable but no faster.
> If you use RAID10 the array will be faster and more reliable, but most
> recovery CDs don't know about RAID10 swap. Any reliable swap will also
> have the array size smaller than the sum of the partitions (you knew that).

You seem to have forgotten to mention two more things:

 o swap isn't usually needed for recovery CDs

 o the kernel VM subsystem can already do the equivalent of RAID0 for swap
   internally, by allocating several block devices for swap space with the
   same priority.

If reliability (of swapped processes) is important, one can create several
RAID1 arrays and "raid0 them" using regular vm techniques.  The result will
be RAID10 for swap.
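For illustration, a hypothetical /etc/fstab fragment (the array names are assumed) that hands the kernel two RAID1 arrays at equal priority, so the VM stripes swap across them:

```
# two RAID1 arrays, same pri= value: the VM round-robins pages
# across both, giving the "RAID10 for swap" effect described above
/dev/md3   none   swap   sw,pri=1   0   0
/dev/md4   none   swap   sw,pri=1   0   0
```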

/mjt


* RE: Grow a RAID-6 ?
  2007-03-23 16:34     ` Gordon Henderson
@ 2007-03-24  2:20       ` Daniel Korstad
  0 siblings, 0 replies; 10+ messages in thread
From: Daniel Korstad @ 2007-03-24  2:20 UTC (permalink / raw)
  To: gordon, maswan; +Cc: linux-raid

As I understand it, reshape for RAID6 is coming now.  It is in the 2.6.21 kernel, though still at rc4 as of today.  I am looking forward to it, and plan to give it a test run when it is released.

I have used LVs with my past RAID sets and it is very nice; it adds some flexibility.  

I have also done a RAID5 reshape inside an LV.  Unfortunately, at the time I had an older LVM version that did not support pvresize, so I was stuck with a larger RAID set whose extra space my LV could not take advantage of.

I think I needed LVM version 2.02.06 to solve that and get the pvresize feature.  If you are running a relatively new distro that won't be an issue any more.  I think I had FC3 or 4.
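For the archives, the post-reshape sequence would look roughly like this (a hypothetical sketch: device, VG, and LV names are invented, and pvresize needs LVM2 >= 2.02.06 as noted above):

```shell
# Grow the array by one disk, let the reshape finish, then propagate
# the new size up through LVM and the filesystem.
mdadm --add /dev/md0 /dev/sdq1
mdadm --grow /dev/md0 --raid-devices=16
pvresize /dev/md0
lvextend -l +100%FREE /dev/vg0/data
resize2fs /dev/vg0/data
```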

-----Original Message-----
From: Gordon Henderson [mailto:gordon@drogon.net] 
Sent: Friday, March 23, 2007 11:35 AM
To: Mattias Wadenstein
Cc: linux-raid@vger.kernel.org
Subject: Re: Grow a RAID-6 ?

On Fri, 23 Mar 2007, Mattias Wadenstein wrote:

> On Fri, 23 Mar 2007, Gordon Henderson wrote:
>
>> Are there any plans in the near future to enable growing RAID-6 arrays by 
>> adding more disks into them?
>> 
>> I have a 15x500GB - drive unit and I need to add another 15 drives into 
>> it... Hindsight is telling me that maybe I should have put LVM on top of 
>> the RAID-6, however, the usable 6TB it yields should have been enough for 
>> anyone...
>
> Well, if you are doubling the space, you could take this opportunity to put 
> lvm on the new disks, move all the data, then put in the old disks as a pv, 
> extending the lvm space.

Now why didn't I think of that. *thud*

> I really wouldn't recommend having a 30-disk raid6, imagine the rebuild time 
> after a failed disk..

There is that - it would give me 2 disks (ie. 1TB) more space though...

This isn't a performance limited server though, it's an off-site backup 
box, so it just has to be reasonably reliable.

Thanks!

Gordon

* Re: Swap initialised as an md?
  2007-03-23 20:35   ` Michael Tokarev
@ 2007-03-26  3:45     ` Bill Davidsen
  0 siblings, 0 replies; 10+ messages in thread
From: Bill Davidsen @ 2007-03-26  3:45 UTC (permalink / raw)
  To: Michael Tokarev; +Cc: David, linux-raid

Michael Tokarev wrote:
> Bill Davidsen wrote:
> []
>   
>> If you use RAID0 on an array it will be faster (usually) than just
>> partitions, but any process with swapped pages will crash if you lose
>> either drive. With RAID1 operation will be more reliable but no faster.
>> If you use RAID10 the array will be faster and more reliable, but most
>> recovery CDs don't know about RAID10 swap. Any reliable swap will also
>> have the array size smaller than the sum of the partitions (you knew that).
>>     
>
> You seem to have forgotten to mention two more things:
>
>  o swap isn't usually needed for recovery CDs
>   
That's system dependent, but at least two recovery CDs report problems 
with swap configured as RAID10. Confusing error messages are not a plus 
when you get to the stage of using a recovery CD. Whether swap is 
needed at all depends on configuration.
>  o kernel vm subsystem already can do equivalent of raid0 for swap internally,
>    by means of allocating several block devices for swap space with the
>    same priority.
>
> If reliability (of swapped processes) is important, one can create several
> RAID1 arrays and "raid0 them" using regular vm techniques.  The result will
> be RAID10 for swap.

Sorry, no. It will be RAID0+1, not the same thing. See RAID10 description.

-- 
bill davidsen <davidsen@tmr.com>
  CTO TMR Associates, Inc
  Doing interesting things with small computers since 1979


