From: Sagi Grimberg <sagi@grimberg.me>
To: Ming Lei <ming.lei@redhat.com>
Cc: Jens Axboe <axboe@kernel.dk>,
linux-block@vger.kernel.org, linux-nvme@lists.infradead.org,
Christoph Hellwig <hch@lst.de>,
Keith Busch <keith.busch@intel.com>
Subject: Re: [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues
Date: Tue, 4 Jul 2017 11:59:43 +0300
Message-ID: <43b6e445-d83e-dda0-51a5-a680a2b2623d@grimberg.me>
In-Reply-To: <20170704081541.GC29053@ming.t460p>
>> @@ -791,7 +791,8 @@ static void nvme_rdma_error_recovery_work(struct work_struct *work)
>> * queues are not alive anymore, so restart the queues to fail fast
>> * new IO
>> */
>> - blk_mq_start_stopped_hw_queues(ctrl->ctrl.admin_q, true);
>> + blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
>> + blk_mq_kick_requeue_list(ctrl->ctrl.admin_q);
>
> Now the queue won't be stopped via blk_mq_quiesce_queue(), so why do
> you add blk_mq_kick_requeue_list() here?
I think you're right.
We now quiesce the queue and fast-fail inflight I/O; in
nvme_complete_rq we call blk_mq_requeue_request() with
!blk_mq_queue_stopped(req->q), which is now true.
So the requeue_work is triggered and requeues the request,
and when we unquiesce we simply run the hw queues again.
If we were to call it with !blk_queue_quiesced(req->q),
I think it would be needed though...
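
For reference, here is a paraphrased sketch of the requeue path being
discussed (the names follow the 4.12-era nvme core, but the surrounding
retry logic is trimmed, so treat it as illustrative rather than the
exact source):

#include <linux/blkdev.h>
#include <linux/blk-mq.h>

void nvme_requeue_req(struct request *req)
{
	/*
	 * The second argument asks blk-mq to kick the requeue list.
	 * A quiesced queue is not "stopped", so blk_mq_queue_stopped()
	 * returns false, the argument evaluates to true, and the
	 * requeue work is kicked for us -- which is why the explicit
	 * blk_mq_kick_requeue_list() in the hunk above is redundant.
	 */
	blk_mq_requeue_request(req, !blk_mq_queue_stopped(req->q));
}

That is, the kick is only skipped automatically when the queue is
merely quiesced; if the condition were !blk_queue_quiesced(req->q)
instead, the explicit kick would indeed be needed.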
Thread overview: 29+ messages
2017-07-04 7:55 [PATCH 0/8] correct quiescing in several block drivers Sagi Grimberg
2017-07-04 7:55 ` [PATCH 1/8] nvme-rdma: quiesce/unquiesce admin_q instead of start/stop its hw queues Sagi Grimberg
2017-07-04 8:15 ` Ming Lei
2017-07-04 8:59 ` Sagi Grimberg [this message]
2017-07-04 9:07 ` Sagi Grimberg
2017-07-04 12:41 ` Ming Lei
2017-07-04 15:35 ` Sagi Grimberg
2017-07-04 7:55 ` [PATCH 2/8] nvme-fc: " Sagi Grimberg
2017-07-04 8:18 ` Ming Lei
2017-07-04 7:55 ` [PATCH 3/8] nvme-loop: quiesce admin_q instead of stopping " Sagi Grimberg
2017-07-04 8:23 ` Ming Lei
2017-07-04 9:24 ` Sagi Grimberg
2017-07-04 10:38 ` Ming Lei
2017-07-04 7:55 ` [PATCH 4/8] nvme-pci: quiesce/unquiesce admin_q instead of start/stop its " Sagi Grimberg
2017-07-04 8:26 ` Ming Lei
2017-07-04 7:55 ` [PATCH 5/8] nbd: quiesce request queues to make sure no submissions are inflight Sagi Grimberg
2017-07-04 8:28 ` Ming Lei
2017-07-04 7:55 ` [PATCH 6/8] mtip32xx: " Sagi Grimberg
2017-07-04 22:32 ` Ming Lei
2017-07-05 6:34 ` Sagi Grimberg
2017-07-04 7:55 ` [PATCH 7/8] virtio_blk: quiesce/unquiesce live IO when entering PM states Sagi Grimberg
2017-07-04 8:41 ` Ming Lei
2017-07-04 21:39 ` Michael S. Tsirkin
2017-07-04 7:55 ` [PATCH 8/8] xen-blockfront: quiesce IO before device removal Sagi Grimberg
2017-07-04 22:19 ` Ming Lei
2017-07-05 6:29 ` Sagi Grimberg
2017-07-05 22:56 ` Christoph Hellwig
2017-07-06 6:52 ` Sagi Grimberg
2017-07-04 8:12 ` [PATCH 0/8] correct quiescing in several block drivers Ming Lei