From: Ming Lei <ming.lei@redhat.com>
To: Mike Snitzer <snitzer@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, Christoph Hellwig <hch@lst.de>,
Jeffle Xu <jefflexu@linux.alibaba.com>,
dm-devel@redhat.com
Subject: Re: [RFC PATCH V2 09/13] block: use per-task poll context to implement bio based io poll
Date: Fri, 19 Mar 2021 08:30:47 +0800
Message-ID: <YFPwt6+sNa6SD+m/@T590>
In-Reply-To: <20210318172622.GA3871@redhat.com>

On Thu, Mar 18, 2021 at 01:26:22PM -0400, Mike Snitzer wrote:
> On Thu, Mar 18 2021 at 12:48pm -0400,
> Ming Lei <ming.lei@redhat.com> wrote:
>
> > Currently bio based IO poll has to poll all hw queues blindly, which is
> > very inefficient. The main reason is that we can't pass the bio
> > submission result to the io poll task.
> >
> > In the IO submission context, track the associated underlying bios in a
> > per-task submission queue, save the 'cookie' poll data in
> > bio->bi_iter.bi_private_data, and return current->pid to the caller of
> > submit_bio() for any bio based driver's IO submitted from the FS.
> >
> > In the IO poll context, the passed cookie tells us the PID of the
> > submission context, so we can find the bios from that submission
> > context. Move the bios from the submission queue to the poll queue of
> > the poll context, and keep polling until these bios are ended; a bio is
> > removed from the poll queue once it is ended. BIO_DONE and
> > BIO_END_BY_POLL are added for this purpose.
> >
> > In the previous version, a kfifo was used to implement the submission
> > queue, and Jeffle Xu found that kfifo can't scale well at high queue
> > depth. Since a bio is already close to two cachelines in size, adding a
> > new field just to track bios via a linked list may not be acceptable,
> > so switch to a bio group list for tracking bios. The idea is to reuse
> > .bi_end_io for linking all bios that share the same .bi_end_io into one
> > list (call it a bio group); the original .bi_end_io is recovered before
> > the bio is really ended, and BIO_END_BY_POLL is added to make that
> > safe. Usually .bi_end_io is the same for all bios in the same layer, so
> > it is enough to provide a very limited number of groups, such as 32,
> > for fixing the scalability issue.
> >
> > Usually submission shares its task context with io poll. The per-task
> > poll context is just like a stack variable, and it is cheap to move
> > data between the two per-task queues.
> >
> > Signed-off-by: Ming Lei <ming.lei@redhat.com>
> > ---
> >  block/bio.c               |   5 ++
> >  block/blk-core.c          | 149 +++++++++++++++++++++++++++++++-
> >  block/blk-mq.c            | 173 +++++++++++++++++++++++++++++++++++++-
> >  block/blk.h               |   9 ++
> >  include/linux/blk_types.h |  16 +++-
> >  5 files changed, 348 insertions(+), 4 deletions(-)
> >
> > diff --git a/block/bio.c b/block/bio.c
> > index 26b7f721cda8..04c043dc60fc 100644
> > --- a/block/bio.c
> > +++ b/block/bio.c
> > @@ -1402,6 +1402,11 @@ static inline bool bio_remaining_done(struct bio *bio)
> >   **/
> >  void bio_endio(struct bio *bio)
> >  {
> > +	/* BIO_END_BY_POLL has to be set before calling submit_bio */
> > +	if (bio_flagged(bio, BIO_END_BY_POLL)) {
> > +		bio_set_flag(bio, BIO_DONE);
> > +		return;
> > +	}
> >  again:
> >  	if (!bio_remaining_done(bio))
> >  		return;
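
A note for readers following only the quoted hunks: the reaping half of
this change lives in the blk-mq.c part of the patch, which isn't quoted in
this reply. A minimal sketch of that flow, assuming the caller passes in
the ->bi_end_io that was saved as the bio group key (the function name is
made up for illustration):

	static void bio_poll_reap_bio(struct bio *bio, bio_end_io_t *end_io)
	{
		if (!bio_flagged(bio, BIO_DONE))
			return;			/* still in flight, keep polling */

		bio->bi_end_io = end_io;	/* recover the group's ->bi_end_io */
		bio_clear_flag(bio, BIO_END_BY_POLL);
		bio_endio(bio);			/* now takes the normal completion path */
	}

So the early return added to bio_endio() above only defers completion
until the poll context gets around to it; nothing is lost.
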
> > diff --git a/block/blk-core.c b/block/blk-core.c
> > index efc7a61a84b4..778d25a7e76c 100644
> > --- a/block/blk-core.c
> > +++ b/block/blk-core.c
> > @@ -805,6 +805,77 @@ static inline unsigned int bio_grp_list_size(unsigned int nr_grps)
> >  		sizeof(struct bio_grp_list_data);
> >  }
> >
> > +static inline void *bio_grp_data(struct bio *bio)
> > +{
> > +	return bio->bi_poll;
> > +}
> > +
> > +/* add bio into bio group list, return true if it is added */
> > +static bool bio_grp_list_add(struct bio_grp_list *list, struct bio *bio)
> > +{
> > +	int i;
> > +	struct bio_grp_list_data *grp;
> > +
> > +	for (i = 0; i < list->nr_grps; i++) {
> > +		grp = &list->head[i];
> > +		if (grp->grp_data == bio_grp_data(bio)) {
> > +			__bio_grp_list_add(&grp->list, bio);
> > +			return true;
> > +		}
> > +	}
> > +
> > +	if (i == list->max_nr_grps)
> > +		return false;
> > +
> > +	/* create a new group */
> > +	grp = &list->head[i];
> > +	bio_list_init(&grp->list);
> > +	grp->grp_data = bio_grp_data(bio);
> > +	__bio_grp_list_add(&grp->list, bio);
> > +	list->nr_grps++;
> > +
> > +	return true;
> > +}
> > +
> > +static int bio_grp_list_find_grp(struct bio_grp_list *list, void *grp_data)
> > +{
> > +	int i;
> > +	struct bio_grp_list_data *grp;
> > +
> > +	for (i = 0; i < list->max_nr_grps; i++) {
> > +		grp = &list->head[i];
> > +		if (grp->grp_data == grp_data)
> > +			return i;
> > +	}
> > +	for (i = 0; i < list->max_nr_grps; i++) {
> > +		grp = &list->head[i];
> > +		if (bio_grp_list_grp_empty(grp))
> > +			return i;
> > +	}
> > +	return -1;
> > +}
> > +
> > +/* Move as many groups as possible from 'src' to 'dst' */
> > +void bio_grp_list_move(struct bio_grp_list *dst, struct bio_grp_list *src)
> > +{
> > +	int i, j, cnt = 0;
> > +	struct bio_grp_list_data *grp;
> > +
> > +	for (i = src->nr_grps - 1; i >= 0; i--) {
> > +		grp = &src->head[i];
> > +		j = bio_grp_list_find_grp(dst, grp->grp_data);
> > +		if (j < 0)
> > +			break;
> > +		if (bio_grp_list_grp_empty(&dst->head[j]))
> > +			dst->head[j].grp_data = grp->grp_data;
> > +		__bio_grp_list_merge(&dst->head[j].list, &grp->list);
> > +		bio_list_init(&grp->list);
> > +		cnt++;
> > +	}
> > +
> > +	src->nr_grps -= cnt;
> > +}
> > +
> >  static void bio_poll_ctx_init(struct blk_bio_poll_ctx *pc)
> >  {
> >  	pc->sq = (void *)pc + sizeof(*pc);
> > @@ -866,6 +937,46 @@ static inline void blk_bio_poll_preprocess(struct request_queue *q,
> >  		bio->bi_opf |= REQ_TAG;
> >  }
> >
> > +static bool blk_bio_poll_prep_submit(struct io_context *ioc, struct bio *bio)
> > +{
> > +	struct blk_bio_poll_ctx *pc = ioc->data;
> > +	unsigned int queued;
> > +
> > +	/*
> > +	 * We rely on .bi_end_io being immutable between blk-mq bio submission
> > +	 * and completion. However, bio crypt may update .bi_end_io during
> > +	 * submission, so simply don't support bio based polling for this
> > +	 * setting.
> > +	 */
> > +	if (likely(!bio_has_crypt_ctx(bio))) {
> > +		/* track this bio via bio group list */
> > +		spin_lock(&pc->sq_lock);
> > +		queued = bio_grp_list_add(pc->sq, bio);
> > +		spin_unlock(&pc->sq_lock);
> > +	} else {
> > +		queued = false;
> > +	}
> > +
> > +	/*
> > +	 * Now the bio is added to the per-task fifo, mark it as END_BY_POLL,
> > +	 * and the bio is always completed from the paired poll context.
> > +	 *
> > +	 * One invariant is that if the bio isn't completed, blk_poll() will
> > +	 * be called with the cookie returned from submitting this bio.
> > +	 */
> > +	if (!queued)
> > +		bio->bi_opf &= ~(REQ_HIPRI | REQ_TAG);
> > +	else
> > +		bio_set_flag(bio, BIO_END_BY_POLL);
> > +
> > +	return queued;
> > +}
> > +
> > +static void blk_bio_poll_post_submit(struct bio *bio, blk_qc_t cookie)
> > +{
> > +	bio->bi_iter.bi_private_data = cookie;
> > +}
> > +
> >  static noinline_for_stack bool submit_bio_checks(struct bio *bio)
> >  {
> >  	struct block_device *bdev = bio->bi_bdev;
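
The reverse lookup lives in the blk-mq.c hunk, which isn't quoted here: the
cookie that submit_bio() hands back to the filesystem is current->pid (see
the changelog above), so the poll side has to map that pid back to the
submitter's poll context. A rough sketch of such a lookup, where
blk_bio_poll_find_ctx() is a made-up name and the locking/lifetime details
are left out, could look like:

	static struct blk_bio_poll_ctx *blk_bio_poll_find_ctx(blk_qc_t cookie)
	{
		struct blk_bio_poll_ctx *pc = NULL;
		struct task_struct *task;
		struct pid *pid = find_get_pid((pid_t)cookie);

		if (!pid)
			return NULL;

		task = get_pid_task(pid, PIDTYPE_PID);
		put_pid(pid);
		if (!task)
			return NULL;

		/* ->data is the per-task poll context set up earlier in this series */
		if (task->io_context)
			pc = task->io_context->data;
		put_task_struct(task);

		/* real code must guarantee the context can't go away here */
		return pc;
	}

Once the context is found, its submission queue is drained into the poll
queue with bio_grp_list_move() and the tracked bios are polled until they
are marked BIO_DONE.
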
> > @@ -1020,7 +1131,7 @@ static blk_qc_t __submit_bio(struct bio *bio)
> >   * bio_list_on_stack[1] contains bios that were submitted before the current
> >   *	->submit_bio_bio, but that haven't been processed yet.
> >   */
> > -static blk_qc_t __submit_bio_noacct(struct bio *bio)
> > +static blk_qc_t __submit_bio_noacct_int(struct bio *bio, struct io_context *ioc)
> >  {
> >  	struct bio_list bio_list_on_stack[2];
> >  	blk_qc_t ret = BLK_QC_T_NONE;
> > @@ -1043,7 +1154,16 @@ static blk_qc_t __submit_bio_noacct(struct bio *bio)
> >  		bio_list_on_stack[1] = bio_list_on_stack[0];
> >  		bio_list_init(&bio_list_on_stack[0]);
> >
> > -		ret = __submit_bio(bio);
> > +		if (ioc && queue_is_mq(q) &&
> > +				(bio->bi_opf & (REQ_HIPRI | REQ_TAG))) {
> > +			bool queued = blk_bio_poll_prep_submit(ioc, bio);
> > +
> > +			ret = __submit_bio(bio);
> > +			if (queued)
> > +				blk_bio_poll_post_submit(bio, ret);
> > +		} else {
> > +			ret = __submit_bio(bio);
> > +		}
>
> So you're only supporting bio-based polling if the bio-based device is
> stacked _directly_ on top of blk-mq? Severely limits the utility of
> bio-based IO polling support if such shallow stacking is required.
No, it doesn't have to sit directly on top of blk-mq; the blk-mq device can
be any descendant of the bio based device. So far only blk-mq can provide
direct polling support, see blk_poll():

	ret = q->mq_ops->poll(hctx);

If no descendant blk-mq device is involved in the bio based device, we
can't support polling so far.
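
To make that concrete, here is a minimal sketch (not the patch's actual
blk-mq.c code) of why a blk-mq descendant is required: for each tracked
bio, the cookie that blk_bio_poll_post_submit() stashed in
bio->bi_iter.bi_private_data identifies a hw queue of the underlying
blk-mq device, and that hw queue is what actually gets polled. The helper
name below is made up for illustration.

	static int bio_poll_underlying_queue(struct bio *bio)
	{
		blk_qc_t cookie = bio->bi_iter.bi_private_data;
		/* after remapping, ->bi_bdev points at the underlying device */
		struct request_queue *q = bio->bi_bdev->bd_disk->queue;
		struct blk_mq_hw_ctx *hctx;

		if (!blk_qc_t_valid(cookie) || !queue_is_mq(q))
			return 0;	/* no blk-mq descendant: nothing to poll */

		hctx = q->queue_hw_ctx[blk_qc_t_to_queue_num(cookie)];
		return q->mq_ops->poll(hctx);	/* the same call blk_poll() makes */
	}

A bio based driver such as dm only needs some blk-mq device underneath the
bios it tracks; when there is none, polling simply isn't supported, as
noted above.
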
Thanks,
Ming