From: Ming Lei <ming.lei@redhat.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org,
Lai Jiangshan <jiangshanlai@gmail.com>,
"Paul E . McKenney" <paulmck@kernel.org>,
Josh Triplett <josh@joshtriplett.org>,
Bart Van Assche <bvanassche@acm.org>,
Johannes Thumshirn <Johannes.Thumshirn@wdc.com>,
Chao Leng <lengchao@huawei.com>, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH] blk-mq: implement queue quiesce via percpu_ref for BLK_MQ_F_BLOCKING
Date: Tue, 25 Aug 2020 10:32:12 +0800 [thread overview]
Message-ID: <20200825023212.GA3233087@T590> (raw)
In-Reply-To: <44160549-0273-b8e6-1599-d54ce84eb47f@grimberg.me>
On Mon, Aug 24, 2020 at 02:34:04PM -0700, Sagi Grimberg wrote:
>
> > > I'd think it'd be an improvement, yes.
> >
> > Please see the reason why it is put back of hctx in
> > 073196787727("blk-mq: Reduce blk_mq_hw_ctx size").
>
> I know why it is there, just was saying that having an additional
> pointer is fine. But the discussion is moot.
>
> > > > .q_usage_counter should have been put in the 1st cacheline of
> > > > request queue. If it is moved to the 1st cacheline of request queue,
> > > > we shouldn't put 'dispatch_counter' there, because it may hurt other
> > > > non-blocking drivers.
> > >
> > > q_usage_counter currently there, and the two will always be taken
> > > together, and there are several other stuff that we can remove from
> > > that cacheline without hurting performance for anything.
> > >
> > > And when q_usage_counter is moved to the first cacheline, then I'd
> > > expect that the dispatch_counter also moves to the front (maybe not
> > > the first if it is on the expense of other hot members, but definitely
> > > it should be treated as a hot member).
> > >
> > > Anyways, I think that for now we should place them together.
> >
> > Then it may hurt non-blocking.
> >
> > Each hctx has only one run-work, if the hctx is blocked, no other request
> > may be queued to hctx any more. That is basically sync run queue, so I
> > am not sure good enough perf can be expected on blocking.
>
> I don't think that you should assume that a blocking driver will block
> normally, it will only rarely block (very rarely).
If nvme-tcp only blocks rarely, I am wondering why not switch it to
non-blocking, which can be done simply with one driver-specific workqueue
work item? Then nvme-tcp would be aligned with the other nvme drivers.
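To illustrate the workqueue idea: a hedged, kernel-style sketch only, not the
actual nvme-tcp code. The names drv_wq, drv_queue, drv_queue_rq and
drv_io_work are all made up for illustration. The point is that .queue_rq
merely lists the request and kicks a driver-owned workqueue, so the blk-mq
run-queue path itself never sleeps and BLK_MQ_F_BLOCKING is not needed:

```c
/* Illustrative sketch; allocated with alloc_workqueue() at probe time. */
static struct workqueue_struct *drv_wq;

struct drv_queue {
	spinlock_t lock;
	struct list_head req_list;
	struct work_struct io_work;	/* runs on the driver-specific wq */
};

static blk_status_t drv_queue_rq(struct blk_mq_hw_ctx *hctx,
				 const struct blk_mq_queue_data *bd)
{
	struct drv_queue *q = hctx->driver_data;

	blk_mq_start_request(bd->rq);

	spin_lock(&q->lock);
	list_add_tail(&bd->rq->queuelist, &q->req_list);
	spin_unlock(&q->lock);

	/* Never block here: defer the possibly-sleeping network send. */
	queue_work(drv_wq, &q->io_work);
	return BLK_STS_OK;
}

static void drv_io_work(struct work_struct *w)
{
	struct drv_queue *q = container_of(w, struct drv_queue, io_work);

	/* Work context may sleep: pop requests off q->req_list and
	 * perform the actual blocking I/O here. */
}
```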
>
> > So it may not be worth of putting the added .dispatch_counter together
> > with .q_usage_counter.
>
> I happen to think it would. Not sure why you resist so much given how
> request_queue is arranged currently.
The reason is the same as in commit 073196787727 ("blk-mq: Reduce
blk_mq_hw_ctx size"): non-blocking is the preferred style for a blk-mq
driver, so we can focus performance work on the non-blocking path. As I
mentioned, blocking has the big problem of effectively synchronous run
queue.
Improving both may be contradictory: for example, if the added
.dispatch_counter is placed together with .q_usage_counter, it will be
fetched into L1 unnecessarily, which is definitely not good for
non-blocking drivers.
>
> > > > > Also maybe a better name is needed here since it's just
> > > > > for blocking hctxs.
> > > > >
> > > > > > + wait_queue_head_t mq_quiesce_wq;
> > > > > > +
> > > > > > struct dentry *debugfs_dir;
> > > > > > #ifdef CONFIG_BLK_DEBUG_FS
> > > > > >
> > > > >
> > > > > What I think is needed here is at a minimum test quiesce/unquiesce loops
> > > > > during I/O. code auditing is not enough, there may be driver assumptions
> > > > > broken with this change (although I hope there shouldn't be).
> > > >
> > > > We have elevator switch / updating nr_request stress test, and both relies
> > > > on quiesce/unquiesce, and I did run such test with this patch.
> > >
> > > You have a blktest for this? If not, I strongly suggest that one is
> > > added to validate the change also moving forward.
> >
> > There are lots of blktest tests doing that, such as block/005,
> > block/016, block/021, ...
>
> Good, but I'd also won't want to get this without making sure the async
> quiesce works well on large number of namespaces (the reason why this
> is proposed in the first place). Not sure who is planning to do that...
That can be added when async quiesce is done.
thanks,
Ming