From: David Sterba <dsterba@suse.cz>
To: Bart Van Assche <Bart.VanAssche@sandisk.com>
Cc: "linux-btrfs@vger.kernel.org" <linux-btrfs@vger.kernel.org>,
"linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
"anand.jain@oracle.com" <anand.jain@oracle.com>
Subject: Re: [RFC PATCH 0/2] Introduce blkdev_issue_flush_no_wait()
Date: Wed, 17 May 2017 19:14:00 +0200 [thread overview]
Message-ID: <20170517171400.GI4065@suse.cz> (raw)
In-Reply-To: <1494943642.2935.1.camel@sandisk.com>
On Tue, May 16, 2017 at 02:07:23PM +0000, Bart Van Assche wrote:
> On Tue, 2017-05-16 at 17:39 +0800, Anand Jain wrote:
> > BTRFS wanted a block device flush function which does not wait for
> > its completion, so that the flush for the next device can be called
> > in the same thread.
> >
> > Here is a RFC patch to provide the function
> > 'blkdev_issue_flush_no_wait()', which is based on the current device
> > flush function 'blkdev_issue_flush()', however it uses submit_bio()
> > instead of submit_bio_wait().
> >
> > This patch is for review comments, will send out a final patch based
> > on the comments received.
>
> Since the block layer can reorder requests, I think using
> blkdev_issue_flush_no_wait() will only yield the intended result if
> the caller waits until the requests that have to be flushed have completed.
> Is that how you intend to use this function?
Yes, that's the intended usage. Regarding the two patches, I don't think
we need them; a more detailed explanation follows below.
The function blkdev_issue_flush_no_wait would be used in multi-device
btrfs to submit the barriers in parallel.
fs/btrfs/disk-io.c:barrier_all_devices
pseudocode:

    foreach device
        write_dev_flush(wait=0)
            submit_bio(device->bio)
            (would newly use blkdev_issue_flush_no_wait)

    foreach device
        write_dev_flush(wait=1)
            wait_for_completion(device->bio)
The submission path of write_dev_flush mimics the structure of
blkdev_issue_flush, so Anand presumably wants to move that into the block
layer API. I personally don't think this is necessary and am fine with
open-coding it; btrfs would likely be the only user of the new function
anyway.
Another reason is that we want to preallocate the bio used for flushing,
so we can avoid ENOMEM when submitting the disk barriers. That would not
be possible with a helper that allocates the bio internally. In summary,
I think we can address all the problems inside btrfs without extending
the block layer for now.
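The preallocation idea can be sketched as follows: allocate once at setup,
reuse on every flush, so the commit path itself can never hit ENOMEM. The
`fake_dev`, `dev_init` and `dev_flush` names are illustrative assumptions,
not the actual btrfs implementation.

```c
/*
 * Sketch of the "preallocate the flush bio" idea. Allocation failure
 * surfaces at setup (mount) time instead of mid-commit.
 */
#include <stdlib.h>

struct fake_dev {
	void *flush_buf;	/* stands in for the preallocated flush bio */
};

/* Allocate once, when the device is opened. */
static int dev_init(struct fake_dev *dev)
{
	dev->flush_buf = malloc(64);
	return dev->flush_buf ? 0 : -1;
}

/*
 * The flush path only reuses the buffer; no allocation happens at the
 * point where the barrier is submitted, so no ENOMEM there.
 */
static void *dev_flush(struct fake_dev *dev)
{
	return dev->flush_buf;
}
```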
Thread overview: 12+ messages
2017-05-16 9:39 [RFC PATCH 0/2] Introduce blkdev_issue_flush_no_wait() Anand Jain
2017-05-16 9:39 ` [PATCH 1/2] block: " Anand Jain
2017-05-16 11:56 ` Christoph Hellwig
2017-05-18 9:31 ` Anand Jain
2017-05-21 7:09 ` Christoph Hellwig
2017-05-24 8:42 ` Anand Jain
2017-05-16 9:39 ` [PATCH 2/2] btrfs: Use blkdev_issue_flush_no_wait() Anand Jain
2017-05-16 12:00 ` Christoph Hellwig
2017-05-16 14:07 ` [RFC PATCH 0/2] Introduce blkdev_issue_flush_no_wait() Bart Van Assche
2017-05-17 17:14 ` David Sterba [this message]
2017-05-18 9:31 ` Anand Jain
2017-05-18 9:27 ` Anand Jain