From: Max Gurtovoy <maxg@mellanox.com>
To: Ming Lei <ming.lei@redhat.com>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
Jens Axboe <axboe@fb.com>, <linux-nvme@lists.infradead.org>
Subject: Re: how can one drain MQ request queue ?
Date: Thu, 22 Feb 2018 12:56:05 +0200 [thread overview]
Message-ID: <a18346ff-e323-dd38-bcd9-77b3dfebf7ef@mellanox.com> (raw)
In-Reply-To: <20180222025943.GB1150@ming.t460p>
On 2/22/2018 4:59 AM, Ming Lei wrote:
> Hi Max,
Hi Ming,
>
> On Tue, Feb 20, 2018 at 11:56:07AM +0200, Max Gurtovoy wrote:
>> hi all,
>> is there a way to drain a blk-mq based request queue (similar to
>> blk_drain_queue for non-MQ)?
>
> Generally speaking, blk_mq_freeze_queue() should be fine to drain blk-mq
> based request queue, but it may not work well when the hardware is broken.
I tried that, but the path failover takes ~cmd_timeout seconds and this
is not good enough...
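To make the freeze semantics concrete, here is a toy userspace model of what blk_mq_freeze_queue() guarantees (illustrative only, not kernel code; all names here are made up, and the real implementation uses a percpu refcount and sleeps until the drain completes):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of queue freezing: freezing blocks new submissions and the
 * queue counts as drained only once every in-flight request completed.
 */
struct mq_queue {
	int inflight;	/* models q_usage_counter */
	bool frozen;	/* models a nonzero freeze depth */
};

/* returns true if the request entered the queue */
bool mq_submit(struct mq_queue *q)
{
	if (q->frozen)
		return false;	/* submitters are held off while frozen */
	q->inflight++;
	return true;
}

void mq_complete(struct mq_queue *q)
{
	q->inflight--;
}

/* returns true once the queue is drained (real code sleeps here) */
bool mq_freeze(struct mq_queue *q)
{
	q->frozen = true;
	return q->inflight == 0;
}

void mq_unfreeze(struct mq_queue *q)
{
	q->frozen = false;
}
```

The "not good enough" part above is exactly the sleep this model elides: with a broken path, the in-flight requests only complete when their timeout fires.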
>
>>
>> I try to fix the following situation:
>> Running DM-multipath over NVMEoF/RDMA block devices, toggling the switch
>> ports during traffic using fio and making sure the traffic never fails.
>>
>> when the switch port goes down, the initiator driver starts an error recovery
>
> What is the code you are referring to?
from nvme_rdma driver:
static void nvme_rdma_error_recovery_work(struct work_struct *work)
{
	struct nvme_rdma_ctrl *ctrl = container_of(work,
			struct nvme_rdma_ctrl, err_work);

	nvme_stop_keep_alive(&ctrl->ctrl);

	if (ctrl->ctrl.queue_count > 1) {
		nvme_stop_queues(&ctrl->ctrl);
		blk_mq_tagset_busy_iter(&ctrl->tag_set,
					nvme_cancel_request, &ctrl->ctrl);
		nvme_rdma_destroy_io_queues(ctrl, false);
	}

	blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
	blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
				nvme_cancel_request, &ctrl->ctrl);
	nvme_rdma_destroy_admin_queue(ctrl, false);

	/*
	 * queues are not alive anymore, so restart the queues to fail fast
	 * new IO
	 */
	blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
	nvme_start_queues(&ctrl->ctrl);

	if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
		/* state change failure should never happen */
		WARN_ON_ONCE(1);
		return;
	}

	nvme_rdma_reconnect_or_remove(ctrl);
}
>
>> process
>> - blk_mq_quiesce_queue for each namespace request queue
>
> blk_mq_quiesce_queue() only guarantees that no requests can be dispatched to
> low level driver, and new requests still can be allocated, but can't be
> dispatched until the queue becomes unquiesced.
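That distinction (allocation still succeeds, dispatch is blocked) can be sketched with a toy userspace model (illustrative only, not kernel code; the names are made up):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Toy model of quiescing: requests can still be allocated and inserted
 * while quiesced, but none of them may reach the low-level driver until
 * the queue is unquiesced.
 */
struct quiesce_q {
	int queued;	/* allocated/inserted, not yet dispatched */
	int dispatched;	/* handed to the low-level driver */
	bool quiesced;
};

bool qq_alloc_and_insert(struct quiesce_q *q)
{
	q->queued++;	/* allocation is NOT blocked by quiescing */
	return true;
}

/* run the dispatch loop; returns how many requests were dispatched */
int qq_run_queue(struct quiesce_q *q)
{
	int n = 0;

	if (q->quiesced)
		return 0;	/* nothing may reach the driver */
	while (q->queued > 0) {
		q->queued--;
		q->dispatched++;
		n++;
	}
	return n;
}
```

Requests inserted during quiescing are exactly the ones that show up later, after unquiesce, which matches the symptom described further down.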
>
>> - cancel all requests of the tagset using blk_mq_tagset_busy_iter
>
> Generally blk_mq_tagset_busy_iter() is used to cancel all in-flight
> requests, and it depends on implementation of the busy_tag_iter_fn, and
> timed-out request can't be covered by blk_mq_tagset_busy_iter().
How can we deal with timed-out commands?
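For reference, the shape of the tagset iteration Ming describes looks roughly like this toy userspace model (illustrative only, not kernel code; the real iterator walks the tag bitmaps, and the cancel policy lives entirely in the callback, as nvme_cancel_request does by completing the request with a path error):

```c
#include <assert.h>

/*
 * Toy model of blk_mq_tagset_busy_iter(): walk every tag, invoke the
 * caller's callback only on in-flight (busy) requests; free tags are
 * skipped, and requests already in timeout handling are not covered.
 */
enum rq_state { RQ_FREE, RQ_INFLIGHT, RQ_CANCELLED };

struct tagset {
	enum rq_state rq[8];
	int nr_tags;
};

typedef void (*busy_tag_iter_fn)(struct tagset *ts, int tag);

void tagset_busy_iter(struct tagset *ts, busy_tag_iter_fn fn)
{
	int tag;

	for (tag = 0; tag < ts->nr_tags; tag++)
		if (ts->rq[tag] == RQ_INFLIGHT)
			fn(ts, tag);
}

/* models the nvme_cancel_request role: complete with an error */
void cancel_request(struct tagset *ts, int tag)
{
	ts->rq[tag] = RQ_CANCELLED;
}
```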
>
> So blk_mq_tagset_busy_iter() is often used in error recovery path, such
> as nvme_dev_disable(), which is usually used in resetting PCIe NVMe controller.
>
>> - destroy the QPs/RDMA connections and MR pools
>> - blk_mq_unquiesce_queue for each namespace request queue
>> - reconnect to the target (after creating RDMA resources again)
>>
>> During the QP destruction, I see a warning that not all the memory regions
>> were returned to the mr_pool. For (almost) every request we get from the
>> block layer, we get an MR from the MR pool.
>> So what I see is that, depending on the timing, some requests are
>> dispatched/completed after we blk_mq_unquiesce_queue and after we destroy
>> the QP and the MR pool. Probably these requests were inserted during
>> quiescing,
>
> Yes.
maybe we need to update nvmf_check_init_req to check that the ctrl is in
the NVME_CTRL_LIVE state (otherwise return IOERR), but I need to think
about it and test it.
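The idea can be sketched with a toy model (illustrative only; this is not the actual nvmf_check_init_req, and the names are made up): queue_rq consults the controller state first and fails fast with an IO error instead of touching per-queue resources such as the MR pool.

```c
#include <assert.h>

/* toy controller states and block status codes */
enum ctrl_state { CTRL_LIVE, CTRL_CONNECTING, CTRL_DELETING };
enum blk_status { STS_OK, STS_IOERR };

struct fake_ctrl {
	enum ctrl_state state;
	int mrs_taken;	/* MRs currently checked out of the pool */
};

/* fail fast on a non-live controller, before grabbing any MR */
enum blk_status fake_queue_rq(struct fake_ctrl *ctrl)
{
	if (ctrl->state != CTRL_LIVE)
		return STS_IOERR;	/* no per-queue resource touched */
	ctrl->mrs_taken++;		/* normal path grabs an MR */
	return STS_OK;
}
```

With such a check, requests inserted during quiescing would error out after unquiesce without ever taking an MR from a pool that is about to be destroyed.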
>
>> and I want to flush/drain them before I destroy the QP.
>
> As mentioned above, you can't do that by blk_mq_quiesce_queue() &
> blk_mq_tagset_busy_iter().
>
> The PCIe NVMe driver takes two steps for the error recovery: nvme_dev_disable() &
> nvme_reset_work(), and you may consider the similar approach, but the in-flight
> requests won't be drained in this case because they can be requeued.
>
> Could you explain a bit what your exact problem is?
The problem is that I assign an MR from the QP's mr_pool for each call to
nvme_rdma_queue_rq. During the error recovery I destroy the QP and the
mr_pool, *but* some MRs are missing and were not returned to the pool.
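The accounting behind that warning reduces to a simple invariant, sketched here as a toy userspace model (illustrative only; this is not the RDMA core MR pool API): every get must be paired with a put before the pool is destroyed.

```c
#include <assert.h>

/*
 * Toy model of MR pool accounting: queue_rq takes an MR, completion
 * returns it; destroying the pool while MRs are still checked out is
 * exactly the leak the warning reports.
 */
struct mr_pool {
	int size;	/* total MRs the pool was created with */
	int avail;	/* MRs currently sitting in the pool */
};

int mr_pool_get(struct mr_pool *p)
{
	if (p->avail == 0)
		return -1;	/* pool exhausted */
	p->avail--;
	return 0;
}

void mr_pool_put(struct mr_pool *p)
{
	p->avail++;
}

/* returns the number of leaked MRs; nonzero triggers the warning */
int mr_pool_destroy(struct mr_pool *p)
{
	return p->size - p->avail;
}
```

Requests that complete after the pool is gone can never execute their put, which is why the drain has to happen before the QP/pool teardown.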
>
> Thanks,
> Ming
>
Thanks,
Max.
Thread overview: 6+ messages
2018-02-20 9:56 how can one drain MQ request queue ? Max Gurtovoy
2018-02-20 10:13 ` Johannes Thumshirn
2018-02-22 2:59 ` Ming Lei
2018-02-22 10:56 ` Max Gurtovoy [this message]
2018-02-22 13:10 ` Ming Lei
2018-02-22 13:39 ` Ming Lei