* what superblock to use
From: Farkas Levente @ 2009-04-20 13:39 UTC
To: linux-raid
hi,
what's the currently recommended superblock format for a newly created
raid5 or raid6 array built from six 1TB disks? by default mdadm uses
0.90. is it worth changing to one of the 1.x formats?
also, is there any advantage to raid6 over raid5 + 1 spare disk? afaik
raid5 will be faster and use less cpu, and in both cases two disks can fail.
thanks in advance.
--
Levente "Si vis pacem para bellum!"
* Re: what superblock to use
From: Andrew Burgess @ 2009-04-20 14:10 UTC
To: Farkas Levente; +Cc: linux-raid
On Mon, 2009-04-20 at 15:39 +0200, Farkas Levente wrote:
> also, is there any advantage to raid6 over raid5 + 1 spare disk?
The raid6 owner sleeps soundly after a single disk failure.
* Re: what superblock to use
From: Luca Berra @ 2009-04-20 14:17 UTC
To: linux-raid
On Mon, Apr 20, 2009 at 03:39:46PM +0200, Farkas Levente wrote:
>hi,
>what's the currently recommended superblock format for a newly created
>raid5 or raid6 array built from six 1TB disks? by default mdadm uses
>0.90. is it worth changing to one of the 1.x formats?
0.90 has some limitations that 1.x does not, though none apply in your case:
  max components = 28
  max component size = 2TB
1.x lifts those limitations.
my stance would be to go with 1.2.
1.1 puts the superblock at the very start of the device, which will keep
the kernel from finding a partition table or fs superblock on the first
component device.
1.2 puts the superblock 4k from the start, thus preventing anything that
tries to touch the mbr from damaging it.
if you put the sb at the start of the device, lilo and grub won't be able
to boot from it, but then they won't be able to boot off a raid5/6
anyway.
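a minimal sketch of what that looks like, with /dev/md0 and the sd[b-g]
device names as placeholders for your six disks:

  mdadm --create /dev/md0 --metadata=1.2 --level=6 --raid-devices=6 \
        /dev/sd[b-g]
  # verify which superblock version the array actually got
  mdadm --detail /dev/md0 | grep -i version

(-e 1.2 is shorthand for --metadata=1.2.)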
>also, is there any advantage to raid6 over raid5 + 1 spare disk? afaik
>raid5 will be faster and use less cpu, and in both cases two disks can fail.
>thanks in advance.
raid6 avoids two possible risks present in raid5+spare.
first:
when using a spare, you are never certain it is in perfect working
order; smart may help, or it might not.
second:
when a device fails, a rebuild onto the spare drive starts; if a second
device fails or hits any error during that rebuild, you risk losing
data.
the first problem could be solved by using raid5e (which spreads the
spare capacity across all drives, so the 'spare' is exercised during
normal operation), but md does not support it yet.
the second can be minimized by 'scrubbing' via
/sys/block/md*/md/sync_action
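a minimal example of kicking off such a scrub, assuming the array is
/dev/md0:

  # request a read-check of every stripe ('repair' would also rewrite
  # any mismatches found)
  echo check > /sys/block/md0/md/sync_action
  # watch progress
  cat /proc/mdstat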
regards,
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: what superblock to use
From: Mario 'BitKoenig' Holbe @ 2009-04-20 14:17 UTC
To: linux-raid
Farkas Levente <lfarkas@lfarkas.org> wrote:
> also, is there any advantage to raid6 over raid5 + 1 spare disk? afaik
> raid5 will be faster and use less cpu, and in both cases two disks can fail.
No, raid6 survives the simultaneous failure of two disks.
Raid5 survives the simultaneous failure of one disk only. Even with a
hot-spare, after this failure you have a time frame where the spare is
synching and your array has no redundancy left. Thus, nearly every other
disk failure (except a failure on the synching spare) within this time
frame kills your array.
regards
Mario
--
There are two major products that come from Berkeley: LSD and UNIX.
We don't believe this to be a coincidence. -- Jeremy S. Anderson
* RE: what superblock to use
From: Leslie Rhorer @ 2009-04-20 19:39 UTC
To: 'Linux RAID'
> >what's the currently recommended superblock format for a newly created
> >raid5 or raid6 array built from six 1TB disks? by default mdadm uses
> >0.90. is it worth changing to one of the 1.x formats?
> 0.90 has some limitations that 1.x does not, though none apply in your case:
>   max components = 28
>   max component size = 2TB
Oops! I did not realize this when I built my array. I'm going to need to
grow the array to 3T components after the 3T drives are released. Can the
superblock be changed on an existing array, or does it have to be rebuilt
from scratch? If the latter, I'm glad I learned this now, because I may
have to rebuild the array, anyway, to try to alleviate the halt issue I am
experiencing.
* Re: what superblock to use
From: Luca Berra @ 2009-04-20 21:24 UTC
To: 'Linux RAID'
On Mon, Apr 20, 2009 at 02:39:05PM -0500, Leslie Rhorer wrote:
>
>> >what's the currently recommended superblock format for a newly created
>> >raid5 or raid6 array built from six 1TB disks? by default mdadm uses
>> >0.90. is it worth changing to one of the 1.x formats?
>> 0.90 has some limitations that 1.x does not, though none apply in your case:
>>   max components = 28
>>   max component size = 2TB
>
>Oops! I did not realize this when I built my array. I'm going to need to
>grow the array to 3T components after the 3T drives are released. Can the
>superblock be changed on an existing array, or does it have to be rebuilt
>from scratch? If the latter, I'm glad I learned this now, because I may
>have to rebuild the array, anyway, to try to alleviate the halt issue I am
>experiencing.
afaik it cannot be changed after creation.
if you have free space you could play tricks: shrink the fs, then
recreate the array with the exact same parameters except the metadata
version, adding --assume-clean (you are limited to 1.0 in this case,
since 1.0, like 0.90, keeps the superblock at the end of the device and
leaves the data offset unchanged).
anyway, do a backup before trying this.
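purely as a sketch of the shape of that trick (the device names, level
and chunk size below are hypothetical and must match whatever the array
was originally created with):

  mdadm --stop /dev/md0
  # same level, device count, device order and chunk size as before;
  # only the metadata version changes, and --assume-clean skips the
  # initial resync so the existing data is left untouched
  mdadm --create /dev/md0 --metadata=1.0 --assume-clean \
        --level=5 --raid-devices=6 --chunk=64 /dev/sd[b-g]

and again: back up first.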
L.
--
Luca Berra -- bluca@comedia.it
Communication Media & Services S.r.l.
/"\
\ / ASCII RIBBON CAMPAIGN
X AGAINST HTML MAIL
/ \
* Re: what superblock to use
From: Bill Davidsen @ 2009-04-21 14:20 UTC
To: Farkas Levente; +Cc: linux-raid
Farkas Levente wrote:
> hi,
> what's the currently recommended superblock format for a newly created
> raid5 or raid6 array built from six 1TB disks? by default mdadm uses
> 0.90. is it worth changing to one of the 1.x formats?
> also, is there any advantage to raid6 over raid5 + 1 spare disk? afaik
> raid5 will be faster and use less cpu, and in both cases two disks can fail.
> thanks in advance.
>
>
Let me be Devil's Advocate. The advantage of raid6 is that it will
survive the failure of two drives at the same time, while with a spare
the array must first be rebuilt (a good argument for fast rebuild, and
let response times go to blazes).
The advantage of raid5+S is that after a single drive failure you run
your I/O in degraded mode, which is slow, only until the rebuild onto
the spare completes; after that, raid5+S is as fast as ever.
I have been trying some things with raid5e, and as soon as I find a good
primer on using events to kick the recovery off I will be able to report
some success with this. My POC uses a script, and works fine if I poll to
detect the disk failure.
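One event-driven alternative to polling, sketched here with a made-up
handler path, is mdadm's monitor mode, which hands every event it sees
(Fail, DegradedArray, SpareActive, and so on) to a program:

  # run the monitor as a daemon and pass each event to a script
  mdadm --monitor --scan --daemonise --program=/usr/local/sbin/md-event
  # md-event is invoked as: md-event <event> <md-device> [<component-device>]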
--
bill davidsen <davidsen@tmr.com>
CTO TMR Associates, Inc
"You are disgraced professional losers. And by the way, give us our money back."
- Representative Earl Pomeroy, Democrat of North Dakota
on the A.I.G. executives who were paid bonuses after a federal bailout.
* Re: what superblock to use
From: H. Peter Anvin @ 2009-04-21 17:31 UTC
To: Mario 'BitKoenig' Holbe; +Cc: linux-raid
Mario 'BitKoenig' Holbe wrote:
> Farkas Levente <lfarkas@lfarkas.org> wrote:
>> also, is there any advantage to raid6 over raid5 + 1 spare disk? afaik
>> raid5 will be faster and use less cpu, and in both cases two disks can fail.
>
> No, raid6 survives the simultaneous failure of two disks.
> Raid5 survives the simultaneous failure of one disk only. Even with a
> hot-spare, after this failure you have a time frame where the spare is
> synching and your array has no redundancy left. Thus, nearly every other
> disk failure (except a failure on the synching spare) within this time
> frame kills your array.
>
It's worth noting that a failure mode being reported increasingly often
is the failure of a drive *during sync*. I suspect the cause is that
synchronization puts different stresses on the drives than normal
operation.
-hpa
--
H. Peter Anvin, Intel Open Source Technology Center
I work for Intel. I don't speak on their behalf.
* RE: what superblock to use
From: Leslie Rhorer @ 2009-04-22 16:26 UTC
To: 'Linux RAID'
> > No, raid6 survives the simultaneous failure of two disks.
> > Raid5 survives the simultaneous failure of one disk only. Even with a
> > hot-spare, after this failure you have a time frame where the spare is
> > synching and your array has no redundancy left. Thus, nearly every other
> > disk failure (except a failure on the synching spare) within this time
> > frame kills your array.
> >
>
> It's worth noting that a failure mode being reported increasingly often
> is the failure of a drive *during sync*. I suspect the cause is that
> synchronization puts different stresses on the drives than normal
> operation.
Yes, that, and with both drives and RAID arrays growing larger, it's taking
longer for arrays to re-sync, which means any given failure is more likely
to fall during a sync. When I first installed a RAID system on my server,
it only took a few hours to sync the 320GB array. Now it can take up to 3
days to sync the 8TB array.
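As a rough sanity check (assuming a 2009-era 1TB drive sustains roughly
80 MB/s): a full pass over one component is about 1,000,000 MB / 80 MB/s,
or around 3.5 hours, so a multi-day resync usually means md is being
throttled while it competes with normal I/O. The system-wide limits live
in /proc:

  # values are in KB/s; raising the minimum makes a resync more aggressive
  # at the expense of foreground I/O
  cat /proc/sys/dev/raid/speed_limit_min /proc/sys/dev/raid/speed_limit_max
  echo 50000 > /proc/sys/dev/raid/speed_limit_min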