From: Christoph Hellwig <hch@lst.de>
To: Neil Brown <neilb@suse.de>
Cc: Christoph Hellwig <hch@lst.de>, Shaohua Li <shli@fb.com>,
"linux-raid@vger.kernel.org" <linux-raid@vger.kernel.org>,
Kernel Team <Kernel-team@fb.com>,
"dan.j.williams@intel.com" <dan.j.williams@intel.com>
Subject: Re: raid5-cache I/O path improvements V2
Date: Wed, 30 Sep 2015 17:00:15 +0200
Message-ID: <20150930150015.GA26216@lst.de>
In-Reply-To: <87vbaslb2v.fsf@notabene.neil.brown.name>
On Wed, Sep 30, 2015 at 03:39:52PM +1000, Neil Brown wrote:
> Christoph Hellwig <hch@lst.de> writes:
>
> > So the summary is that for now you want me to resend with a patch
> > to opt into using FUA?
>
> I'd like to avoid "opt in" if at all possible.
> Shaohua measured that using "FUA" for all writes to the journal
> hurt performance on at least one device. Do you have a different device
> where it demonstrably helps?
> Is there any chance of automatically detecting which is which?
I have a high-end SAS SSD where it helps, but the real use case where
it makes a major difference is battery-backed DIMMs (NV-DIMMs) or
other devices where we don't even need the FUA bit because they don't
have a volatile cache at all.  The important part in that case is to
avoid batching writes up for the non-existent flush.
So I could definitely default the code to on only for those devices,
but not even allowing a tunable for devices that do have the FUA bit
seems like an odd restriction.
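To make the auto-detection concrete, something like the sketch below
is what I have in mind.  It assumes the 4.3-era block API, where
blk_queue_flush() records REQ_FLUSH/REQ_FUA support in
q->flush_flags; the write_policy field and the R5L_WRITE_* names are
made up for illustration and are not part of this series:

enum r5l_write_policy {		/* hypothetical names */
	R5L_WRITE_FLUSH,	/* batch writes behind explicit cache flushes */
	R5L_WRITE_FUA,		/* mark each log write REQ_FUA */
	R5L_WRITE_PLAIN,	/* no volatile cache: plain writes are durable */
};

static void r5l_pick_write_policy(struct r5l_log *log)
{
	struct request_queue *q = bdev_get_queue(log->rdev->bdev);

	if (!(q->flush_flags & REQ_FLUSH)) {
		/*
		 * The device advertises no volatile write cache
		 * (e.g. an NV-DIMM), so completed writes are already
		 * durable and we can skip both FUA and flush batching.
		 */
		log->write_policy = R5L_WRITE_PLAIN;
	} else if (q->flush_flags & REQ_FUA) {
		/*
		 * The device supports FUA, so each log write can be
		 * made durable on its own instead of waiting to be
		 * batched up behind a cache flush.
		 */
		log->write_policy = R5L_WRITE_FUA;
	} else {
		/* Fall back to the existing flush-based batching. */
		log->write_policy = R5L_WRITE_FLUSH;
	}
}

A tunable could then just override the detected default instead of
being the only way to get FUA writes at all.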
Thread overview: 24+ messages
2015-09-12 6:17 raid5-cache I/O path improvements V2 Christoph Hellwig
2015-09-12 6:17 ` [PATCH 01/12] raid5-cache: port to 4.3-rc Christoph Hellwig
2015-09-12 6:17 ` [PATCH 02/12] raid5-cache: free I/O units earlier Christoph Hellwig
2015-09-15 7:00 ` Neil Brown
2015-09-17 1:50 ` Christoph Hellwig
2015-09-15 8:07 ` Neil Brown
2015-09-17 1:48 ` Christoph Hellwig
2015-09-12 6:17 ` [PATCH 03/12] raid5-cache: rename flushed_ios to finished_ios Christoph Hellwig
2015-09-12 6:17 ` [PATCH 04/12] raid5-cache: factor out a helper to run all stripes for an I/O unit Christoph Hellwig
2015-09-12 6:17 ` [PATCH 05/12] raid5-cache: use FUA writes for the log Christoph Hellwig
2015-09-12 6:17 ` [PATCH 06/12] raid5-cache: clean up r5l_get_meta Christoph Hellwig
2015-09-12 6:17 ` [PATCH 07/12] raid5-cache: refactor bio allocation Christoph Hellwig
2015-09-12 6:17 ` [PATCH 08/12] raid5-cache: take rdev->data_offset into account early on Christoph Hellwig
2015-09-12 6:17 ` [PATCH 09/12] raid5-cache: inline r5l_alloc_io_unit into r5l_new_meta Christoph Hellwig
2015-09-12 6:17 ` [PATCH 10/12] raid5-cache: new helper: r5_reserve_log_entry Christoph Hellwig
2015-09-12 6:17 ` [PATCH 11/12] raid5-cache: small log->seq cleanup Christoph Hellwig
2015-09-12 6:17 ` [PATCH 12/12] raid5-cache: use bio chaining Christoph Hellwig
2015-09-14 19:11 ` raid5-cache I/O path improvements V2 Shaohua Li
2015-09-15 7:23 ` Neil Brown
2015-09-15 21:54 ` Shaohua Li
2015-09-17 1:53 ` Christoph Hellwig
2015-09-28 14:01 ` Christoph Hellwig
2015-09-30 5:39 ` Neil Brown
2015-09-30 15:00 ` Christoph Hellwig [this message]