From: "Keld Jørn Simonsen" <keld@keldix.com>
To: Roberto Spadim <roberto@spadim.com.br>
Cc: Denis <denismpa@gmail.com>, Linux-RAID <linux-raid@vger.kernel.org>
Subject: Re: What's the typical RAID10 setup?
Date: Mon, 31 Jan 2011 20:28:58 +0100
Message-ID: <20110131192858.GD27952@www2.open-std.org>
In-Reply-To: <AANLkTim_uQ7AQs_ZSyeYPJPYSLCJd6ChzzTBwBNk2qA9@mail.gmail.com>
Top-posting...
How does RAID10 fare on the raid0+1 problem of only 33% survival when 2 disks fail?
I know the RAID10,f2 implementation in Linux MD is bad here:
it gives only 33% survival, while with a probably minor fix it could be 66%.
But what about RAID10,n2 and RAID10,o2?
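
To illustrate the combinatorics, here is a minimal sketch (my own code,
not MD's; it assumes f2 mirrors each disk onto its ring neighbour, while
a near-style layout keeps fixed disk pairs):

from itertools import combinations

def survival(n, shares_data):
    # Fraction of two-disk failures that keep all data available.
    # shares_data(a, b) is True when disks a and b hold copies of the
    # same blocks, i.e. losing both loses data.
    ok = total = 0
    for first, second in combinations(range(n), 2):
        total += 1
        if not shares_data(first, second):
            ok += 1
    return ok / total

n = 4
# f2-style: copies offset by one device, so each disk shares data
# with both of its ring neighbours (my reading of the far layout).
ring = lambda a, b: (b - a) % n in (1, n - 1)
# n2-style: copies stay on the fixed pairs (0,1) and (2,3).
pair = lambda a, b: a // 2 == b // 2

print("f2-style ring :", survival(n, ring))   # 0.33...
print("n2-style pairs:", survival(n, pair))   # 0.66...
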
best regards
keld
On Mon, Jan 31, 2011 at 05:15:29PM -0200, Roberto Spadim wrote:
> ok, but the loss of a disk = a hardware problem = big problems = mirror failed
> think about a 'disaster recovery' system:
> you can't lose the main data (you MUST have one 'primary' data source)
>
> raid1 doesn't have ECC or any other 'paged' data recovery solution (it
> only has full mirror resync)
>
> let's go down a level (inside the hard disk):
> if your hard disk has 2 heads, you have a raid0 inside your disk (got
> the point?)
> by your math, you should also consider head failures (since the head
> does the actual reading of the information)
>
> but at the raid (1/0) software (firmware) level, you have devices (with
> or without heads; they can be memory or any other type of addressable
> information source). a RAID0 = a DEVICE to the raid software/firmware;
> all you have is A DEVICE
>
> for raid 1 you have mirrors (each a copy of one primary device)
> if the software finds 1 bit of error inside a mirror (device), you lose
> the full mirror; 1 bit of failure = mirror failure! it is no longer in
> sync with the main (primary) data source!
>
> got the problem? a mirror needs a resync if any disk in it fails
> (check which failures make your mirror fail, but i think a linux raid1
> mirror fails on any disk failure)
>
> if you have 4 mirrors you can lose 4 disks (1 disk failure = a mirror
> failed, 2 disk failures = mirrors failed, 3 disk failures = mirrors
> failed; any device failing inside a raid1 member makes that mirror
> fail, got it? you can have good and bad disks in a raid0, but the
> mirror counts as failed once >=1 disk fails inside your raid0)
>
> got the point?
> what's the probability of your mirror failing?
> if you use a raid0 as a mirror,
> any disk of the raid0 failing = mirror failed, got it?
> you can lose a whole raid0 and still have just 1 failed mirror!
>
>
> could i be more explicit? you can't compute the probability per bit;
> you must compute the probability per mirror, since that is your level
> of data consistency
> =] got it?
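
To put numbers on that per-mirror view, a quick sketch (my own
illustration; the per-disk failure probability is a made-up example):

# In raid0+1, one failed disk anywhere in a raid0 set fails that whole
# mirror, so the failure probability has to be computed per mirror.
p = 0.05   # assumed per-disk failure probability (example value)
k = 4      # disks per raid0 set (one mirror)
m = 2      # mirrors in the raid1 layer

p_mirror = 1 - (1 - p) ** k   # any of the k disks failing fails the mirror
p_array = p_mirror ** m       # data is lost only when every mirror is lost

print("P(one mirror fails) =", p_mirror)   # ~0.185
print("P(array fails)      =", p_array)    # ~0.034
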
>
>
> 2011/1/31 Denis <denismpa@gmail.com>:
> > 2011/1/31 Roberto Spadim <roberto@spadim.com.br>:
> >> i think that a partial failure (a raid0 failure) of a mirror is a
> >> full failure (since the whole mirror gets repaired and resynced)
> >> the guarantee is: you still have a device until you lose all mirrors
> >> so your 'security' is the number of mirrors, not the number of disks,
> >> ssds, or any other type of device...
> >> how many mirrors do you have here:
> >> raid0 = 1,2 (a) and 3,4 (b)
> >> raid1 = a,b
> >> 1 mirror (a or b)
> >>
> >> and here:
> >> raid1 = 1,2 (a) and 3,4 (b)
> >> raid0 = a,b
> >> 1 mirror (a or b)
> >>
> >> let's think about a hard disk...
> >> your hard disk has 2 platters inside?
> >> why not make two partitions: the first partition on platter 1, the
> >> second partition on platter 2, and mirror them
> >> what's your security? 1 mirror
> >> is that secure? normally when a hard disk crashes all the platters
> >> inside it crash, but you are safe if only one internal platter fails...
> >>
> >> that's the point: how many mirrors?
> >> the point is:
> >> with raid1+0 (raid10) we know that disks are fragments (raid1)
> >> with raid0+1 we know that disks form one big disk (raid0)
> >> the point is, we can't allow the information to stop; we need mirrors
> >> to be safe (1 is good, 2 better, 3 really better, 4 5 6 7...)
> >> you can't afford a broken mirror; to survive a broken mirror, have a
> >> second one (raid0 doesn't help here! just raid1)
> >>
> >> with raid10 you will repair a small piece of information (raid1), so
> >> the resync will cost less time
> >> with raid01 you will repair a big piece of information (raid0), so
> >> the resync will cost more time
> >
> > Roberto, to really understand how much better raid 10 is than raid
> > 01 you need to take it down to the mathematical level:
> >
> > once I had the same doubt:
> >
> > "The difference is that the chance of system failure with two drive
> > failures in a RAID 0+1 system with two sets of drives is (n/2)/(n - 1)
> > where n is the total number of drives in the system. The chance of
> > system failure in a RAID 1+0 system with two drives per mirror is 1/(n
> > - 1). So, for example, using an 8 drive system, the chance that losing
> > a second drive would bring down the RAID system is 4/7 with a RAID 0+1
> > system and 1/7 with a RAID 1+0 system."
> >
> >
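Those figures are easy to check by brute force with a minimal sketch (my
own code; it assumes the layouts from the quote, raid0+1 as two raid0
sets mirrored and raid1+0 as n/2 two-disk mirrors):

from itertools import combinations

def p_second_failure_fatal(n, fatal):
    # Fatal unordered pairs out of C(n,2); this equals the chance that
    # a second random failure is fatal given one disk already failed.
    bad = sum(1 for a, b in combinations(range(n), 2) if fatal(a, b))
    return bad / (n * (n - 1) / 2)

n = 8
# raid0+1: fatal when the two failures hit different raid0 sets.
raid01 = lambda a, b: (a < n // 2) != (b < n // 2)
# raid1+0: fatal when the two failures hit the same two-disk mirror.
raid10 = lambda a, b: a // 2 == b // 2

print("raid0+1:", p_second_failure_fatal(n, raid01))   # 4/7 ~ 0.571
print("raid1+0:", p_second_failure_fatal(n, raid10))   # 1/7 ~ 0.143
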
> > Another problem is that in the case of a failure of one disk (in a
> > two-set case), in a raid01 you will lose redundancy for ALL your
> > data, while in a raid10 you will lose redundancy for only 1/(n/2) =
> > 2/n of your data; in the same case, 1/4 of your data set.
> >
> > And also, in a raid 10 you will have to re-mirror just one disk in
> > the case of a disk failure; in raid 01 you will have to re-mirror the
> > whole failed set.
> >
> > --
> > Denis Anjos,
> > www.versatushpc.com.br
>
>
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial