From: Roberto Spadim <roberto@spadim.com.br>
To: "Keld Jørn Simonsen" <keld@keldix.com>
Cc: Drew <drew.kay@gmail.com>, Linux-RAID <linux-raid@vger.kernel.org>
Subject: Re: What's the typical RAID10 setup?
Date: Thu, 3 Feb 2011 13:54:54 -0200 [thread overview]
Message-ID: <AANLkTimznEe8Vm2HoYJyAPpjHLbp3AWMqmd68csSgQjB@mail.gmail.com> (raw)
In-Reply-To: <AANLkTinKHG0dE0BeXX3v1z9my4+jDLebc96vMHJpjHDB@mail.gmail.com>
Sorry, a correction to the paths:
/sys/block/md0/distance_rate -> /sys/block/md0/md/sda1_distance_rate
/sys/block/md0/byte_read_rate -> /sys/block/md0/md/sda1_byte_read_rate
2011/2/3 Roberto Spadim <roberto@spadim.com.br>:
> Hmmm, nice.
> Keld (or anyone), do you know someone (with a little time, I think the
> total is only about 2 hours) who could try to develop a modification to
> the raid1 read_balance function?
> What modification? Today read_balance uses the seek distance
> (current_head - next_head). Instead, multiply that distance by a number
> from /sys/block/md0/distance_rate and add read_size * byte_rate (with
> byte_rate taken from /sys/block/md0/byte_read_rate). With this the
> algorithm minimizes estimated time, not distance, and I can get better
> read balancing (for SSDs).
> Later we could also account for the device queue's time to empty (I
> think about one day of work to get it working with all device
> schedulers), but that is not for now.
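A minimal sketch of that cost idea, assuming hypothetical per-device tunables
distance_rate and byte_read_rate (the sysfs files named above do not exist in
mainline md); the struct layout and function name below are illustrative only,
not the driver's actual read_balance() code:

    /* Sketch only: rank candidate mirrors by estimated service time
     * instead of raw seek distance.  The per-device tunables mirror the
     * proposed distance_rate and byte_read_rate entries; everything else
     * is illustrative.
     */
    struct disk_cost_info {
            long long head_position;  /* last sector serviced on this device   */
            long long distance_rate;  /* time cost per sector of seek distance */
            long long byte_read_rate; /* time cost per sector transferred      */
    };

    static int pick_lowest_cost(const struct disk_cost_info *disks, int ndisks,
                                long long target_sector, long long read_sectors)
    {
            long long best_cost = 0;
            int best = -1;

            for (int i = 0; i < ndisks; i++) {
                    long long dist = disks[i].head_position - target_sector;

                    if (dist < 0)
                            dist = -dist;

                    /* estimated time = seek term + transfer term */
                    long long cost = dist * disks[i].distance_rate +
                                     read_sectors * disks[i].byte_read_rate;

                    if (best < 0 || cost < best_cost) {
                            best = i;
                            best_cost = cost;
                    }
            }
            return best;
    }

The only change from a pure distance comparison is that each candidate is
ranked by an estimated service time (seek term plus transfer term), so a
device configured with distance_rate = 0, such as an SSD, wins most reads
regardless of head position.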
>
>
> 2011/2/3 Keld Jørn Simonsen <keld@keldix.com>:
>> On Thu, Feb 03, 2011 at 12:35:52PM -0200, Roberto Spadim wrote:
>>> =] I think we can end the discussion and conclude that the context (test
>>> vs. production) determines whether or not we can rely on luck in the
>>> probabilities. What is luck? For production, luck means a bad disk; in
>>> production we do not allow failed disks, we use SMART to predict
>>> failures, and when a disk fails we replace several disks to prevent
>>> another disk from failing.
>>>
>>> Could we update our RAID wiki with some information from this discussion?
>>
>> I would like to, but it is a bit complicated.
>> Anyway I think there already is something there on the wiki.
>> And then, for one of the most important raid types in Linux MD,
>> namely raid10, I am not sure what to write. It could be raid1+0-like or
>> raid0+1-like, and as far as I know, it is raid0+1-like for f2 :-(
>> but I don't know about n2 and o2.
>>
>> The German Wikipedia article on RAID, http://de.wikipedia.org/wiki/RAID,
>> has a lot of information on probability, but it is wrong in a number of
>> places. I have tried to correct it, but the German version is moderated,
>> and they don't know what they are writing about.
>>
>> Best regards
>> Keld
>>
>>> 2011/2/3 Drew <drew.kay@gmail.com>:
>>> >> For test, raid1 with raid0 on top (1+0) has a better probability of
>>> >> not stopping the raid10, but it is only a probability... don't count
>>> >> on luck. Since it is just for test, not production, it doesn't matter...
>>> >>
>>> >> What would I implement for production? In any case, if a disk fails,
>>> >> the whole array should be replaced (or, if money is short, replace
>>> >> the disks with the least remaining life).
>>> >
>>> > A lot of this discussion about failure rates and probabilities is
>>> > academic. There are assumptions about each disk having its own
>>> > independent failure probability, which, if it cannot be predicted,
>>> > must be assumed to be 50%. At the end of the day I agree that when
>>> > the first disk fails the RAID is degraded and one *must* take steps to
>>> > remedy that. This discussion is more about why RAID 10 (1+0) is better
>>> > than 0+1.
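A rough worked example of that point (a sketch under the usual independence
assumption; the 4-disk case and the exact numbers are illustrative, not taken
from this thread):

    /* Illustrative only: odds that a second random disk failure kills the
     * array, given n identical, independent disks arranged either as n/2
     * two-disk mirrors striped together (RAID 1+0) or as two mirrored
     * n/2-disk stripe sets (RAID 0+1), after one disk has already failed.
     */
    #include <stdio.h>

    int main(void)
    {
            int n = 4; /* total disks; any even n >= 4 */

            /* RAID 1+0: only the failed disk's mirror partner is fatal. */
            double p_fatal_10 = 1.0 / (n - 1);

            /* RAID 0+1: the stripe set containing the failed disk is already
             * lost, so any of the n/2 disks in the other set is fatal.      */
            double p_fatal_01 = (n / 2.0) / (n - 1);

            printf("n=%d  P(second failure fatal): RAID1+0=%.2f  RAID0+1=%.2f\n",
                   n, p_fatal_10, p_fatal_01);
            return 0;
    }

For n = 4 this gives 1/3 for RAID 1+0 against 2/3 for RAID 0+1: in 1+0 only
the failed disk's mirror partner is fatal, while in 0+1 the whole surviving
stripe set is exposed.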
>>> >
>>> > On our production systems we work with our vendor to ensure the
>>> > individual drives we get aren't from the same batch/production run,
>>> > thereby mitigating some issues around flaws in specific batches. We
>>> > keep spare drives on hand for all three RAID arrays, so as to minimize
>>> > the time we're operating in a degraded state. All data on RAID arrays
>>> > is backed up nightly to storage which is then mirrored off-site.
>>> >
>>> > At the end of the day our decision about which RAID type (10/5/6) to
>>> > use was based more on a balance between performance, safety, & capacity
>>> > than on specific failure criteria. RAID 10 backs the iSCSI LUN that
>>> > our VMware cluster uses for the individual OSes, and the data
>>> > partition for the accounting database server. RAID 5 backs the
>>> > partitions where we store user data. And RAID 6 backs the NASes we use
>>> > for our backup system.
>>> >
>>> > RAID 10 was chosen for performance reasons. It doesn't have to
>>> > calculate parity on every write, so for the OS & database, which do a
>>> > lot of small reads & writes, it's faster. For user disks we went with
>>> > RAID 5 because we get more space in the array at a small performance
>>> > penalty, which is fine as the users have to access the file server
>>> > over the LAN and the bottleneck is the pipe between the switch & the
>>> > VM, not between the iSCSI SAN & the server. For backups we went with
>>> > RAID 6 because the performance & storage penalties for the array were
>>> > outweighed by the need for maximum safety.
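A back-of-the-envelope illustration of that trade-off (a sketch; the I/O
counts assume the classic read-modify-write path for a small parity-RAID
write and are not measurements from this setup):

    /* Rough device-I/O cost of one small (sub-stripe) write, illustrative only. */
    #include <stdio.h>

    int main(void)
    {
            /* RAID 10: the block is simply written to both mirror halves. */
            int raid10_ios = 2;

            /* RAID 5 read-modify-write: read old data + read old parity,
             * then write new data + write new parity.                     */
            int raid5_ios = 2 /* reads */ + 2 /* writes */;

            /* RAID 6 adds a second parity block to read and write. */
            int raid6_ios = 3 /* reads */ + 3 /* writes */;

            printf("device I/Os per small write: RAID10=%d RAID5=%d RAID6=%d\n",
                   raid10_ios, raid5_ios, raid6_ios);
            return 0;
    }

That small-write penalty is why parity RAID tends to lag on random-write
workloads such as OS images and databases, while it matters much less for
bulk or LAN-bound traffic.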
>>> >
>>> >
>>> >
>>> > --
>>> > Drew
>>> >
>>> > "Nothing in life is to be feared. It is only to be understood."
>>> > --Marie Curie
>>> >
>>> >
>>>
>>>
>>>
>>> --
>>> Roberto Spadim
>>> Spadim Technology / SPAEmpresarial
>>
>
>
>
> --
> Roberto Spadim
> Spadim Technology / SPAEmpresarial
>
--
Roberto Spadim
Spadim Technology / SPAEmpresarial
Thread overview: 127+ messages
2011-01-31 9:41 What's the typical RAID10 setup? Mathias Burén
2011-01-31 10:14 ` Robin Hill
2011-01-31 10:22 ` Mathias Burén
2011-01-31 10:36 ` CoolCold
2011-01-31 15:00 ` Roberto Spadim
2011-01-31 15:21 ` Robin Hill
2011-01-31 15:27 ` Roberto Spadim
2011-01-31 15:28 ` Roberto Spadim
2011-01-31 15:32 ` Roberto Spadim
2011-01-31 15:34 ` Roberto Spadim
2011-01-31 15:37 ` Roberto Spadim
2011-01-31 15:45 ` Robin Hill
2011-01-31 16:55 ` Denis
2011-01-31 17:31 ` Roberto Spadim
2011-01-31 18:35 ` Denis
2011-01-31 19:15 ` Roberto Spadim
2011-01-31 19:28 ` Keld Jørn Simonsen
2011-01-31 19:35 ` Roberto Spadim
2011-01-31 19:37 ` Roberto Spadim
2011-01-31 20:22 ` Keld Jørn Simonsen
2011-01-31 20:17 ` Stan Hoeppner
2011-01-31 20:37 ` Keld Jørn Simonsen
2011-01-31 21:20 ` Roberto Spadim
2011-01-31 21:24 ` Mathias Burén
2011-01-31 21:27 ` Jon Nelson
2011-01-31 21:47 ` Roberto Spadim
2011-01-31 21:51 ` Roberto Spadim
2011-01-31 22:50 ` NeilBrown
2011-01-31 22:53 ` Roberto Spadim
2011-01-31 23:10 ` NeilBrown
2011-01-31 23:14 ` Roberto Spadim
2011-01-31 22:52 ` Keld Jørn Simonsen
2011-01-31 23:00 ` Roberto Spadim
2011-02-01 10:01 ` David Brown
2011-02-01 13:50 ` Jon Nelson
2011-02-01 14:25 ` Roberto Spadim
2011-02-01 14:48 ` David Brown
2011-02-01 15:41 ` Roberto Spadim
2011-02-03 3:36 ` Drew
2011-02-03 8:18 ` Stan Hoeppner
[not found] ` <AANLkTikerSZfhMbkEvGBVyLB=wHDSHLWszoEz5As5Hi4@mail.gmail.com>
[not found] ` <AANLkTikLyR206x4aMy+veNkWPV67uF9r5dZKGqXJUEqN@mail.gmail.com>
2011-02-03 14:35 ` Roberto Spadim
2011-02-03 15:43 ` Keld Jørn Simonsen
2011-02-03 15:50 ` Roberto Spadim
2011-02-03 15:54 ` Roberto Spadim [this message]
2011-02-03 16:02 ` Keld Jørn Simonsen
2011-02-03 16:07 ` Roberto Spadim
2011-02-03 16:16 ` Roberto Spadim
2011-02-01 22:05 ` Stan Hoeppner
2011-02-01 23:12 ` Roberto Spadim
2011-02-02 9:25 ` Robin Hill
2011-02-02 16:00 ` Roberto Spadim
2011-02-02 16:06 ` Roberto Spadim
2011-02-02 16:07 ` Roberto Spadim
2011-02-02 16:10 ` Roberto Spadim
2011-02-02 16:13 ` Roberto Spadim
2011-02-02 19:44 ` Keld Jørn Simonsen
2011-02-02 20:28 ` Roberto Spadim
2011-02-02 21:31 ` Roberto Spadim
2011-02-02 22:13 ` Keld Jørn Simonsen
2011-02-02 22:26 ` Roberto Spadim
2011-02-03 1:57 ` Roberto Spadim
2011-02-03 3:05 ` Stan Hoeppner
2011-02-03 3:13 ` Roberto Spadim
2011-02-03 3:17 ` Roberto Spadim
2011-02-01 23:35 ` Keld Jørn Simonsen
2011-02-01 16:02 ` Keld Jørn Simonsen
2011-02-01 16:24 ` Roberto Spadim
2011-02-01 17:56 ` Keld Jørn Simonsen
2011-02-01 18:09 ` Roberto Spadim
2011-02-01 20:16 ` Keld Jørn Simonsen
2011-02-01 20:32 ` Keld Jørn Simonsen
2011-02-01 20:58 ` Roberto Spadim
2011-02-01 21:04 ` Roberto Spadim
2011-02-01 21:18 ` David Brown
2011-02-01 0:58 ` Stan Hoeppner
2011-02-01 12:50 ` Roman Mamedov
2011-02-03 11:04 ` Keld Jørn Simonsen
2011-02-03 14:17 ` Roberto Spadim
2011-02-03 15:54 ` Keld Jørn Simonsen
2011-02-03 18:39 ` Keld Jørn Simonsen
2011-02-03 18:41 ` Roberto Spadim
2011-02-03 23:43 ` Stan Hoeppner
2011-02-04 3:49 ` hansbkk
2011-02-04 7:06 ` Keld Jørn Simonsen
2011-02-04 8:27 ` Stan Hoeppner
2011-02-04 9:06 ` Keld Jørn Simonsen
2011-02-04 10:04 ` Stan Hoeppner
2011-02-04 11:15 ` hansbkk
2011-02-04 13:33 ` Keld Jørn Simonsen
2011-02-04 20:35 ` Keld Jørn Simonsen
2011-02-04 20:42 ` Keld Jørn Simonsen
2011-02-04 21:15 ` Stan Hoeppner
2011-02-04 22:05 ` Keld Jørn Simonsen
2011-02-04 23:03 ` Stan Hoeppner
2011-02-06 3:59 ` Drew
2011-02-06 4:27 ` Stan Hoeppner
2011-02-04 11:34 ` David Brown
2011-02-04 13:53 ` Keld Jørn Simonsen
2011-02-04 14:17 ` David Brown
2011-02-04 14:21 ` hansbkk
2011-02-06 4:02 ` Drew
2011-02-06 7:58 ` Keld Jørn Simonsen
2011-02-06 12:03 ` Roman Mamedov
2011-02-06 14:30 ` Roberto Spadim
2011-02-01 8:46 ` hansbkk
2011-01-31 19:37 ` Phillip Susi
2011-01-31 19:41 ` Roberto Spadim
2011-01-31 19:46 ` Phillip Susi
2011-01-31 19:53 ` Roberto Spadim
2011-01-31 22:10 ` Phillip Susi
2011-01-31 22:14 ` Denis
2011-01-31 22:33 ` Roberto Spadim
2011-01-31 22:36 ` Roberto Spadim
2011-01-31 20:23 ` Stan Hoeppner
2011-01-31 21:59 ` Phillip Susi
2011-01-31 22:08 ` Jon Nelson
2011-01-31 22:38 ` Phillip Susi
2011-02-01 10:05 ` David Brown
2011-02-01 9:20 ` Robin Hill
2011-02-04 16:03 ` Phillip Susi
2011-02-04 16:22 ` Robin Hill
2011-02-04 20:35 ` [OT] " Phil Turmel
2011-02-04 20:35 ` Phillip Susi
2011-02-04 21:05 ` Stan Hoeppner
2011-02-04 21:13 ` Roberto Spadim
2011-01-31 15:30 ` Robin Hill
2011-01-31 20:07 ` Stan Hoeppner