From: Jens Axboe <axboe@kernel.dk>
To: Kent Overstreet <kent.overstreet@gmail.com>,
Christoph Hellwig <hch@infradead.org>
Cc: Ming Lei <ming.lei@redhat.com>,
linux-block@vger.kernel.org, Coly Li <colyli@suse.de>,
Keith Busch <kbusch@kernel.org>,
linux-bcache@vger.kernel.org
Subject: Re: [PATCH V4] block: optimize for small block size IO
Date: Mon, 4 Nov 2019 11:23:42 -0700
Message-ID: <f7fab4e0-58e4-76e4-a503-bb535b2a3da6@kernel.dk>
In-Reply-To: <20191104181742.GC8984@kmo-pixel>
On 11/4/19 11:17 AM, Kent Overstreet wrote:
> On Mon, Nov 04, 2019 at 10:15:41AM -0800, Christoph Hellwig wrote:
>> On Mon, Nov 04, 2019 at 01:14:03PM -0500, Kent Overstreet wrote:
>>> On Sat, Nov 02, 2019 at 03:29:11PM +0800, Ming Lei wrote:
>>>> __blk_queue_split() may be a bit heavy for small block size (such as
>>>> 512B or 4KB) IO, so introduce a flag that records whether this bio
>>>> spans multiple pages, and only try to split the bio when that flag
>>>> is set.
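
[A minimal, self-contained user-space sketch of the fast-path idea
described above. The struct, the BIO_MULTI_PAGE flag name, and the
helpers are hypothetical stand-ins, not the actual struct bio or
__blk_queue_split() from the patch.]

#include <stdio.h>

#define BIO_MULTI_PAGE	(1u << 0)	/* hypothetical flag: bio spans >1 page */

struct bio {
	unsigned int flags;
	unsigned int nr_pages;
};

static void bio_add_page(struct bio *bio)
{
	/* Set the flag once the bio grows past a single page. */
	if (++bio->nr_pages > 1)
		bio->flags |= BIO_MULTI_PAGE;
}

static void blk_queue_split(struct bio *bio)
{
	/* Fast path: a single-page bio can never need splitting. */
	if (!(bio->flags & BIO_MULTI_PAGE))
		return;

	/* ... expensive segment counting / splitting would go here ... */
	printf("splitting multi-page bio (%u pages)\n", bio->nr_pages);
}

int main(void)
{
	struct bio small = { 0 }, big = { 0 };
	int i;

	bio_add_page(&small);			/* 512B/4KB IO: one page */
	for (i = 0; i < 4; i++)
		bio_add_page(&big);		/* larger IO: four pages */

	blk_queue_split(&small);	/* returns immediately */
	blk_queue_split(&big);		/* takes the split path */
	return 0;
}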
>>>
>>> So, back in the day I had an alternative approach in mind: get rid of
>>> blk_queue_split entirely, by pushing splitting down to the request layer - when
>>> we map the bio/request to an sgl, just have it map as much as will fit in
>>> the sgl, and if it doesn't entirely fit, bump bi_remaining and leave it on
>>> the request queue.
>>>
>>> This would mean there'd be no need for counting segments at all, and would cut a
>>> fair amount of code out of the io path.
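
[A user-space sketch of the "split at mapping time" idea in the quote
above, not kernel code. SG_ENTRIES, map_bio_to_sgl(), and the
`remaining` field are hypothetical stand-ins; `remaining` mirrors the
bi_remaining bookkeeping mentioned above.]

#include <stdio.h>

#define SG_ENTRIES 4	/* hypothetical hardware limit on sg entries */

struct bio {
	unsigned int nr_segs;		/* segments still to be mapped */
	unsigned int remaining;	/* >0 means the bio stays on the queue */
};

/* Map as many segments as fit into the sg table; report how many were taken. */
static unsigned int map_bio_to_sgl(struct bio *bio)
{
	unsigned int mapped = bio->nr_segs < SG_ENTRIES ? bio->nr_segs : SG_ENTRIES;

	bio->nr_segs -= mapped;
	bio->remaining = bio->nr_segs;	/* leftover segments stay queued */
	return mapped;
}

int main(void)
{
	struct bio bio = { .nr_segs = 10 };

	/* No up-front split: keep issuing partial mappings until done. */
	while (bio.nr_segs) {
		unsigned int mapped = map_bio_to_sgl(&bio);
		printf("mapped %u segs, %u left on queue\n", mapped, bio.remaining);
	}
	return 0;
}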
>>
>> I thought about that too, but it will take a lot more effort, mostly
>> because md/dm heavily rely on splitting as well. I still think it is
>> worthwhile; it will just take a significant amount of time, and we
>> should have the quick improvement now.
>
> We can do it one driver at a time - the driver sets a flag to disable
> blk_queue_split(). The obvious one to do first would be nvme, since that's
> where it shows up the most.
>
> And md/dm do splitting internally, but I'm not so sure they need
> blk_queue_split().
I'm a big proponent of doing something like that instead, but it is a
lot of work. I absolutely hate the splitting we're doing now, even
though the original "let's work as hard as we can at add-page time to
get things right" approach was pretty abysmal as well.
--
Jens Axboe