From: Hendrik Siedelmann <hendrik.siedelmann@googlemail.com>
To: Chris Murphy <lists@colorremedies.com>
Cc: linux-btrfs@vger.kernel.org
Subject: Re: Btrfs raid allocator
Date: Wed, 07 May 2014 00:45:27 +0200 [thread overview]
Message-ID: <53696607.4090103@googlemail.com> (raw)
In-Reply-To: <97F9F845-5EF6-4AC8-AD58-AE0E183742FF@colorremedies.com>
On 06.05.2014 23:49, Chris Murphy wrote:
>
> On May 6, 2014, at 4:41 AM, Hendrik Siedelmann
> <hendrik.siedelmann@googlemail.com> wrote:
>
>> Hello all!
>>
>> I would like to use btrfs (or anything else, actually) to maximize
>> raid0 performance. Basically I have a relatively constant stream of
>> data that simply has to be written out to disk.
>
> I think the only way to know what works best for your workload is to
> test configurations with the actual workload. For optimization of
> multiple device file systems, it's hard to beat XFS on raid0 or even
> linear/concat due to its parallelization, if you have more than one
> stream (or a stream that produces a lot of files that XFS can
> allocate into separate allocation groups). Also mdadm supports user-
> specified strip/chunk sizes, whereas currently on Btrfs this is fixed
> to 64KiB. Depending on the file size for your workload, it's possible
> a much larger strip will yield better performance.
Thanks, that's quite a few knobs I can try out. I have a lot of data
arriving at rates up to 450 MB/s that I need to write out in time,
preferably without relying on overly expensive hardware.
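For reference, experimenting with the chunk size on an md raid0 and
aligning XFS to it might look like the following. This is only a sketch:
the device names, array size, and 1 MiB chunk are placeholders to vary
during testing, not recommendations.

```shell
# Hypothetical devices: a 4-disk raid0 with a 1 MiB chunk
# (mdadm's --chunk takes KiB, so 1024 = 1 MiB).
mdadm --create /dev/md0 --level=0 --raid-devices=4 --chunk=1024 \
    /dev/sd[bcde]

# Align XFS stripe unit (su) and stripe width (sw = number of data
# disks) to the array geometry. mkfs.xfs usually detects md geometry
# automatically; passing it explicitly just makes the test repeatable.
mkfs.xfs -d su=1m,sw=4 /dev/md0
```

Repeating this with different --chunk values against the real workload
is the kind of benchmarking Chris describes below.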
> Another optimization is hardware RAID with a battery backed write
> cache (the drives' write cache are disabled) and using nobarrier
> mount option. If your workload supports linear/concat then it's fine
> to use md linear for this. What I'm not sure of is if it's an OK
> practice to disable barriers if the system is on a UPS (rather than a
> battery backed hardware RAID cache). You should post the workload and
> hardware details on the XFS list to get suggestions about such
> things. They'll also likely recommend the deadline scheduler over
> cfq.
Actually, data integrity does not matter for this workload. If
everything is successful, the result will be backed up; before that,
full filesystem corruption is an acceptable failure mode.
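Given that, the barrier and scheduler tweaks mentioned above would be
applied roughly like this. Again a sketch with placeholder device and
mount-point names; nobarrier is the XFS mount option current at the
time of this thread, and the scheduler switch needs root.

```shell
# Mount XFS without write barriers (safe here only because corruption
# before backup is an accepted failure mode) and skip atime updates.
mount -o nobarrier,noatime /dev/md0 /mnt/capture

# Switch each member disk from cfq to the deadline elevator.
for d in sdb sdc sdd sde; do
    echo deadline > /sys/block/$d/queue/scheduler
done
```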
> Unless you have a workload really familiar to the responder, they'll
> tell you any benchmarking you do needs to approximate the actual
> workflow. A mismatched benchmark to the workload will lead you to the
> wrong conclusions. Typically when you optimize for a particular
> workload, other workloads suffer.
>
> Chris Murphy
>
Thanks again for all the info! I'll report back if everything works
fine - or if it doesn't ;-)
Cheers
Hendrik
Thread overview: 10+ messages
2014-05-06 10:41 Btrfs raid allocator Hendrik Siedelmann
2014-05-06 10:59 ` Hugo Mills
2014-05-06 11:14 ` Hendrik Siedelmann
2014-05-06 11:19 ` Hugo Mills
2014-05-06 11:26 ` Hendrik Siedelmann
2014-05-06 11:46 ` Hugo Mills
2014-05-06 12:16 ` Hendrik Siedelmann
2014-05-06 20:59 ` Duncan
2014-05-06 21:49 ` Chris Murphy
2014-05-06 22:45 ` Hendrik Siedelmann [this message]