From: Ric Wheeler <rwheeler@redhat.com>
To: Tobias Oetiker <tobi@oetiker.ch>
Cc: Daniel J Blueman <daniel.blueman@gmail.com>,
Florian Weimer <fweimer@bfk.de>,
linux-btrfs@vger.kernel.org
Subject: Re: Benchmarking btrfs on HW Raid ... BAD
Date: Wed, 30 Sep 2009 10:35:23 -0400
Message-ID: <4AC36CAB.5000208@redhat.com>
In-Reply-To: <alpine.DEB.2.00.0909281124410.11292@sebohet.brgvxre.pu>
On 09/28/2009 05:39 AM, Tobias Oetiker wrote:
> Hi Daniel,
>
> Today Daniel J Blueman wrote:
>
>
>> On Mon, Sep 28, 2009 at 9:17 AM, Florian Weimer<fweimer@bfk.de> wrote:
>>
>>> * Tobias Oetiker:
>>>
>>>
>>>> Running this on a single disk, I get quite acceptable results.
>>>> When running on top of an Areca HW RAID6 (LVM-partitioned),
>>>> both read and write performance drop by at least two orders
>>>> of magnitude.
>>>>
>>> Does the HW RAID use write caching (preferably battery-backed)?
>>>
>> I believe Areca controllers have an option for write-back or
>> write-through caching, so it's worth checking this, and that you're
>> running the current firmware in case of errata. Ironically, disabling
>> write-back gives the OS tighter control over request latency, but
>> throughput may drop a lot. I still can't help thinking that this is
>> down to the behaviour of the controller, given that the single-disk
>> case works well.
>>
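A quick way to see what the drive-level cache is doing in the single-disk
case (a rough sketch only; /dev/sdb is a placeholder for the actual disk,
and the Areca volume's own cache mode has to be checked in the controller's
firmware setup or CLI rather than from the host):

  # query whether the drive's volatile write cache is enabled (1 = on)
  hdparm -W /dev/sdb

  # the same information via the SCSI layer, if sdparm is installed
  sdparm --get=WCE /dev/sdb

  # temporarily disable the drive write cache to compare write-through behaviour
  hdparm -W0 /dev/sdb
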
> it certainly is down to the behaviour of the controller, or the
> results would be the same as with a single SATA disk :-) It would
> be interesting to see what results others get on HW RAID
> controllers ...
>
>
>> One way would be to configure the array as 6 or 7 devices, and allow
>> BTRFS/DM to manage the array, then see if performance under write load
>> is better, with and without write-back caching...
>>
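A rough sketch of that layout, assuming the controller can export the six
disks as pass-through devices /dev/sdb through /dev/sdg (btrfs has no
RAID6 profile of its own, so raid10 stands in for the comparison):

  # one btrfs filesystem across the six raw disks, striping and mirroring
  # both data and metadata
  mkfs.btrfs -d raid10 -m raid10 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

  # let the kernel find all member devices (btrfsctl -a with older
  # btrfs-progs), then mount via any one of them
  btrfs device scan
  mount /dev/sdb /mnt/test
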
> I can imagine that this would help, but since btrfs aims to be
> multi-purpose, it does not really help all that much, since it
> fundamentally alters the 'conditions': at the moment the RAID
> contains different filesystems and is partitioned using LVM ...
>
> cheers
> tobi
>
> the results for ext3 fs look like this ...
>
>
I would be more suspicious of the barriers/flushes being issued. If your
write cache is non-volatile (battery-backed), we really do not want to
send cache flushes down to that type of device; flushing such a cache can
be very, very expensive and slow.

Try "mount -o nobarrier" (with the write cache still enabled on the
controller) and see if your performance is back to the expected level.
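For reference, a minimal way to test that (a sketch only; the device name
/dev/vg0/bench and the mount point /mnt/bench are placeholders for your
actual LVM volume and mount point):

  # remount btrfs without barrier/flush requests; the controller's
  # write cache stays enabled
  umount /mnt/bench
  mount -o nobarrier /dev/vg0/bench /mnt/bench

  # quick sequential-write check to compare against the barrier-enabled run
  dd if=/dev/zero of=/mnt/bench/testfile bs=1M count=2048 conv=fdatasync
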
Ric
Thread overview: 7+ messages
2009-09-28 8:06 Benchmarking btrfs on HW Raid ... BAD Tobias Oetiker
2009-09-28 8:17 ` Florian Weimer
2009-09-28 9:19 ` Tobias Oetiker
2009-09-28 9:19 ` Daniel J Blueman
2009-09-28 9:39 ` Tobias Oetiker
2009-09-30 14:35 ` Ric Wheeler [this message]
2009-09-30 18:44 ` Tobias Oetiker