From: Stan Hoeppner <stan@hardwarefreak.com>
To: Marcus Sorensen <shadowsor@gmail.com>
Cc: "daobang wang" <wangdb1981@gmail.com>,
	"Mathias Burén" <mathias.buren@gmail.com>,
	linux-raid <linux-raid@vger.kernel.org>
Subject: Re: RAID5 created by 8 disks works with xfs
Date: Sun, 01 Apr 2012 22:47:32 -0500
Message-ID: <4F792154.5020006@hardwarefreak.com>
In-Reply-To: <CALFpzo5aKM2y_stDR14PNYMHEcq5CEptuFiXrvcCpXTzSQmAxw@mail.gmail.com>

On 4/1/2012 2:08 AM, Marcus Sorensen wrote:
> Streaming workloads don't benefit much from writeback cache.
> Writeback can absorb spikes, but if you have a constant load that goes
> beyond what your disks can handle, you'll have good performance
> exactly to the point where your writeback is full. Once you hit
> dirty_bytes, dirty_ratio, or the timeout, your system will be crushed
> with I/O beyond recovery. It's best to limit your writeback cache to a
> relatively small number with such a constant IO load.

My comments WRT battery- or flash-backed write cache, whether write back
or write through, were strictly related to running with XFS barriers
disabled.  The only scenario where you can safely disable XFS barriers is
when you have a properly functioning BBWC RAID controller, whether an
HBA or a host-independent external array such as a SAN controller.
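
For reference, that is the only case where mounting XFS with barriers
off is appropriate.  A minimal sketch, assuming the filesystem sits on
/dev/md0 and mounts at /data (both names are mine, not from this thread):

  # Safe only when the controller's write cache is battery/flash backed:
  mount -o nobarrier /dev/md0 /data

With bare drives or a non-BBWC HBA, leave barriers at their default.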

Of course, I agree 100% that write cache yields little benefit with
high-throughput workloads, especially those generating high seek rates to
boot.  The workload described is many parallel streaming writes of 0.25
MB/s each.  If we assume 96 streams, that's "only" 24 MB/s aggregate.  But
as each of the 16 drives will likely be hitting its seek ceiling of
~150 seeks/s using XFS on striped RAID, the aggregate throughput of the 15
RAID5 data spindles will probably be less than 10 MB/s.
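
Back-of-envelope arithmetic behind those numbers (the 0.25 MB/s per
stream, 96 streams, and ~150 seeks/s figures come from this thread; the
rest is my assumption):

  echo "96 * 0.25" | bc   # aggregate write demand: 24 MB/s
  echo "15 * 150"  | bc   # seek budget across 15 data spindles: 2250 seeks/s

Once each seek moves only a few KB of some stream's data, that seek
budget, not platter bandwidth, becomes the limit.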

Using the linear array with XFS instead of RAID5 will eliminate much of
the head seeking, increasing throughput.  The increase may not be huge,
but it will be enough to handle many more parallel write streams than
RAID5 before the drives hit their seek ceiling.
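
A rough sketch of what that could look like (device names, member count,
and the agcount choice are my assumptions, not commands from this thread):

  # Concatenate the drives instead of striping them:
  mdadm --create /dev/md0 --level=linear --raid-devices=16 /dev/sd[b-q]
  # One allocation group per member disk, so XFS can place parallel
  # write streams on different spindles:
  mkfs.xfs -d agcount=16 /dev/md0

Each writer then tends to stay within its own allocation group, i.e. on
one disk, so the heads aren't dragged across a stripe for every write.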

-- 
Stan
