From: Christoph Hellwig <hch@infradead.org>
To: Ming Lei <ming.lei@canonical.com>
Cc: Jens Axboe <axboe@fb.com>,
linux-kernel@vger.kernel.org, linux-block@vger.kernel.org,
Christoph Hellwig <hch@infradead.org>,
Kent Overstreet <kent.overstreet@gmail.com>,
Eric Wheeler <bcache@lists.ewheeler.net>,
Sebastian Roesner <sroesner-kernelorg@roesner-online.de>,
"4.3+" <stable@vger.kernel.org>, Shaohua Li <shli@fb.com>,
Jens Axboe <axboe@kernel.dk>
Subject: Re: [PATCH v3] block: make sure big bio is splitted into at most 256 bvecs
Date: Mon, 15 Aug 2016 11:23:28 -0700 [thread overview]
Message-ID: <20160815182328.GA13886@infradead.org> (raw)
In-Reply-To: <1471273882-3938-1-git-send-email-ming.lei@canonical.com>
On Mon, Aug 15, 2016 at 11:11:22PM +0800, Ming Lei wrote:
> After arbitrary bio sizes were allowed, the incoming bio may
> be very big. We have to split the bio into smaller bios so that
> each holds at most BIO_MAX_PAGES bvecs, for the sake of consumers
> such as bio_clone().
I still think working around a rough driver submitting too-large
I/O is a bad idea until we've done a full audit of all consumers of
bios going through ->make_request, and until we've enabled arbitrary
bio sizes for the common path as well.
> bool do_split = true;
> struct bio *new = NULL;
> const unsigned max_sectors = get_max_io_size(q, bio);
> + unsigned bvecs = 0;
> +
> + *no_merge = true;
>
> bio_for_each_segment(bv, bio, iter) {
> /*
> + * With arbitrary bio size, the incoming bio may be very
> + * big. We have to split the bio into small bios so that
> + * each holds at most BIO_MAX_PAGES bvecs because
> + * bio_clone() can fail to allocate big bvecs.
> + *
> + * It would be better to apply this limit per request
> + * queue, only where bio_clone() is involved, instead
> + * of globally. The biggest blocker is the bio_clone()
> + * in bio bounce.
> + *
> + * If a bio is split for this reason, we should still
> + * allow the resulting bios to be merged.
> + *
> + * TODO: deal with bio bounce's bio_clone() gracefully
> + * and convert the global limit into per-queue limit.
> + */
> + if (bvecs++ >= BIO_MAX_PAGES) {
> + *no_merge = false;
> + goto split;
> + }
That being said, the if check here is simple enough that it's
probably fine. But I see no need to uglify the whole code path
with that no_merge flag. Please drop it for now; if we start
caring about this path in common code, we should just move the
REQ_NOMERGE setting into the actual blk_bio_*_split helpers.
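
Roughly, the idea would be something like the below (a rough sketch,
not a finished patch; local names follow the hunk above, the do_split
bookkeeping is omitted, and the flags field is written as the 4.8-era
bio->bi_opf, bi_rw in older trees):

	bio_for_each_segment(bv, bio, iter) {
		/*
		 * Cap each split at BIO_MAX_PAGES bvecs so a later
		 * bio_clone() (e.g. in the bounce code) never needs
		 * a bigger bvec allocation than that.
		 */
		if (bvecs++ >= BIO_MAX_PAGES)
			goto split;
		...
	}
	...
split:
	new = bio_split(bio, sectors, GFP_NOIO, bs);
	/*
	 * Set REQ_NOMERGE here, inside the split helper, instead of
	 * passing a no_merge flag back to blk_queue_split(); a split
	 * done only because of the bvec limit could skip this and
	 * stay mergeable.
	 */
	if (new)
		new->bi_opf |= REQ_NOMERGE;
	return new;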
Thread overview: 7+ messages
2016-08-15 15:11 [PATCH v3] block: make sure big bio is splitted into at most 256 bvecs Ming Lei
2016-08-15 18:23 ` Christoph Hellwig [this message]
2016-08-15 19:12 ` Kent Overstreet
2016-08-19 0:41 ` Eric Wheeler
2016-08-21 9:31 ` Ming Lei
2016-08-21 17:58 ` Kent Overstreet
2016-08-22 8:57 ` Ming Lei