public inbox for linux-block@vger.kernel.org
From: Ming Lei <ming.lei@redhat.com>
To: Max Gurtovoy <maxg@mellanox.com>
Cc: "linux-block@vger.kernel.org" <linux-block@vger.kernel.org>,
	Jens Axboe <axboe@fb.com>,
	linux-nvme@lists.infradead.org
Subject: Re: how can one drain MQ request queue ?
Date: Thu, 22 Feb 2018 21:39:36 +0800	[thread overview]
Message-ID: <20180222133935.GC15254@ming.t460p>
In-Reply-To: <20180222131026.GB15254@ming.t460p>

On Thu, Feb 22, 2018 at 09:10:26PM +0800, Ming Lei wrote:
> On Thu, Feb 22, 2018 at 12:56:05PM +0200, Max Gurtovoy wrote:
> > 
> > 
> > On 2/22/2018 4:59 AM, Ming Lei wrote:
> > > Hi Max,
> > 
> > Hi Ming,
> > 
> > > 
> > > On Tue, Feb 20, 2018 at 11:56:07AM +0200, Max Gurtovoy wrote:
> > > > hi all,
> > > > is there a way to drain a blk-mq based request queue (similar to
> > > > blk_drain_queue for non MQ) ?
> > > 
> > > Generally speaking, blk_mq_freeze_queue() should be fine for draining a
> > > blk-mq based request queue, but it may not work well when the hardware is
> > > broken.
> > 
> > I tried that, but the path failover takes ~cmd_timeout seconds and this is
> > not good enough...
> 
> Yeah, I agree it isn't good for handling timeout.
> 
> > 
> > > 
> > > > 
> > > > I try to fix the following situation:
> > > > Running DM-multipath over NVMEoF/RDMA block devices, toggling the switch
> > > > ports during traffic using fio and making sure the traffic never fails.
> > > > 
> > > > when the switch port goes down, the initiator driver starts an error recovery
> > > 
> > > What is the code you are referring to?
> > 
> > from nvme_rdma driver:
> > 
> > static void nvme_rdma_error_recovery_work(struct work_struct *work)
> > {
> >         struct nvme_rdma_ctrl *ctrl = container_of(work,
> >                         struct nvme_rdma_ctrl, err_work);
> > 
> >         nvme_stop_keep_alive(&ctrl->ctrl);
> > 
> >         if (ctrl->ctrl.queue_count > 1) {
> >                 nvme_stop_queues(&ctrl->ctrl);
> >                 blk_mq_tagset_busy_iter(&ctrl->tag_set,
> >                                         nvme_cancel_request, &ctrl->ctrl);
> >                 nvme_rdma_destroy_io_queues(ctrl, false);
> >         }
> > 
> >         blk_mq_quiesce_queue(ctrl->ctrl.admin_q);
> >         blk_mq_tagset_busy_iter(&ctrl->admin_tag_set,
> >                                 nvme_cancel_request, &ctrl->ctrl);
> >         nvme_rdma_destroy_admin_queue(ctrl, false);
> 
> I am not sure it is a good idea to destroy the admin queue here, since
> nvme_rdma_configure_admin_queue() needs to use the admin queue, and I saw
> a report of 'nvme nvme0: Identify namespace failed' in a Red Hat
> BZ.
> 
> > 
> >         /*
> >          * queues are not alive anymore, so restart the queues to fail fast
> >          * new IO
> >          */
> >         blk_mq_unquiesce_queue(ctrl->ctrl.admin_q);
> >         nvme_start_queues(&ctrl->ctrl);
> > 
> >         if (!nvme_change_ctrl_state(&ctrl->ctrl, NVME_CTRL_CONNECTING)) {
> >                 /* state change failure should never happen */
> >                 WARN_ON_ONCE(1);
> >                 return;
> >         }
> > 
> >         nvme_rdma_reconnect_or_remove(ctrl);
> > }
> > 
> > 
> > > 
> > > > process
> > > > - blk_mq_quiesce_queue for each namespace request queue
> > > 
> > > blk_mq_quiesce_queue() only guarantees that no requests can be dispatched to
> > > the low-level driver; new requests can still be allocated, but can't be
> > > dispatched until the queue becomes unquiesced.
> > > 
> > > > - cancel all requests of the tagset using blk_mq_tagset_busy_iter
> > > 
> > > Generally blk_mq_tagset_busy_iter() is used to cancel all in-flight
> > > requests; the effect depends on the implementation of the busy_tag_iter_fn,
> > > and timed-out requests can't be covered by blk_mq_tagset_busy_iter().
> > 
> > How can we deal with timed-out commands ?
> 
> For PCI NVMe, they are handled by requeuing, just like all canceled
> in-flight commands, and all these commands will be dispatched to the driver
> again after the reset completes successfully.
> 
> > 
> > 
> > > 
> > > So blk_mq_tagset_busy_iter() is often used in error recovery path, such
> > > as nvme_dev_disable(), which is usually used in resetting PCIe NVMe controller.
> > > 
> > > > - destroy the QPs/RDMA connections and MR pools
> > > > - blk_mq_unquiesce_queue for each namespace request queue
> > > > - reconnect to the target (after creating RDMA resources again)
> > > > 
> > > > During the QP destruction, I see a warning that not all the memory regions
> > > > were back to the mr_pool. For every request we get from the block layer
> > > > (well, almost every request) we get a MR from the MR pool.
> > > > So what I see is that, depending on the timing, some requests are
> > > > dispatched/completed after we blk_mq_unquiesce_queue and after we destroy
> > > > the QP and the MR pool. Probably these requests were inserted during
> > > > quiescing,
> > > 
> > > Yes.
> > 
> > Maybe we need to update nvmf_check_init_req() to check that the ctrl is in
> > the NVME_CTRL_LIVE state (and otherwise return an I/O error), but I need to
> > think about it and test it.
> > 
> > > 
> > > > and I want to flush/drain them before I destroy the QP.
> > > 
> > > As mentioned above, you can't do that by blk_mq_quiesce_queue() &
> > > blk_mq_tagset_busy_iter().
> > > 
> > > The PCIe NVMe driver takes two steps for the error recovery: nvme_dev_disable() &
> > > nvme_reset_work(), and you may consider a similar approach, but the in-flight
> > > requests won't be drained in this case because they can be requeued.
> > > 
> > > Could you explain a bit what your exact problem is?
> > 
> > The problem is that I assign an MR from the QP's mr_pool for each call to
> > nvme_rdma_queue_rq(). During the error recovery I destroy the QP and the
> > mr_pool, *but* some MRs are missing and not returned to the pool.
> 
> OK, it looks like you think all in-flight requests can be completed during
> error recovery. That can't be correct, since all in-flight requests have to
> be retried after error recovery is done to avoid data loss.

Looks like there is one issue w.r.t. timed-out requests:

	nvme_rdma_destroy_io_queues() may be called before the timed-out
	request is completed.

And that is very likely, since the timed-out request is completed by
__blk_mq_complete_request() in blk_mq_rq_timed_out() only after
nvme_rdma_timeout() returns.

We discussed a similar issue for PCI NVMe too; it seems RDMA needs to
sync between the error recovery and the timeout handler as well.

	https://www.spinics.net/lists/stable/msg211856.html

-- 
Ming
