From: Ming Lei <ming.lei@redhat.com>
To: Sagi Grimberg <sagi@grimberg.me>
Cc: Yi Zhang <yi.zhang@redhat.com>,
	"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>,
	linux-block <linux-block@vger.kernel.org>,
	Bart Van Assche <bvanassche@acm.org>
Subject: Re: [bug report] nvme/rdma: nvme connect failed after offline one cpu on host side
Date: Tue, 5 Jul 2022 08:49:33 +0800	[thread overview]
Message-ID: <YsOKnb7MWLCeJxBE@T590> (raw)
In-Reply-To: <2c42c70a-8eb4-a095-1d2b-139614ebd903@grimberg.me>

On Tue, Jul 05, 2022 at 02:04:53AM +0300, Sagi Grimberg wrote:
> 
> > update the subject to better describe the issue:
> > 
> > So I tried this issue on one nvme/rdma environment, and it was also
> > reproducible, here are the steps:
> > 
> > # echo 0 >/sys/devices/system/cpu/cpu0/online
> > # dmesg | tail -10
> > [  781.577235] smpboot: CPU 0 is now offline
> > # nvme connect -t rdma -a 172.31.45.202 -s 4420 -n testnqn
> > Failed to write to /dev/nvme-fabrics: Invalid cross-device link
> > no controller found: failed to write to nvme-fabrics device
> > 
> > # dmesg
> > [  781.577235] smpboot: CPU 0 is now offline
> > [  799.471627] nvme nvme0: creating 39 I/O queues.
> > [  801.053782] nvme nvme0: mapped 39/0/0 default/read/poll queues.
> > [  801.064149] nvme nvme0: Connect command failed, error wo/DNR bit: -16402
> > [  801.073059] nvme nvme0: failed to connect queue: 1 ret=-18
> 
> This is because of blk_mq_alloc_request_hctx() and was raised before.
> 
> IIRC there was reluctance to make it allocate a request for an hctx even
> if its associated mapped cpu is offline.
> 
> The latest attempt was from Ming:
> [PATCH V7 0/3] blk-mq: fix blk_mq_alloc_request_hctx
> 
> Don't know where that went tho...

The attempt relies on the queue used for connecting io queues having a
non-managed irq; unfortunately that can't be true for all drivers, so
that approach can't go forward.
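
BTW, the -18 in the log above is -EXDEV ("Invalid cross-device link"),
which is what blk_mq_alloc_request_hctx() returns once every CPU mapped
to the hctx is offline. A trimmed sketch of the check (based on the
~v5.19 block/blk-mq.c; the hctx lookup differs across kernel versions):

	/* blk_mq_alloc_request_hctx(), abridged */
	ret = -EXDEV;
	data.hctx = xa_load(&q->hctx_table, hctx_idx);
	if (!blk_mq_hw_queue_mapped(data.hctx))
		goto out_queue_exit;
	cpu = cpumask_first_and(data.hctx->cpumask, cpu_online_mask);
	if (cpu >= nr_cpu_ids)
		goto out_queue_exit;	/* every mapped CPU is offline */
	data.ctx = __blk_mq_get_ctx(q, cpu);

With cpu0 offline and a 1:1 mapping, the hctx behind io queue 1 has no
online CPU left, so the connect request can't even be allocated.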

For now, I'd suggest fixing nvme_*_connect_io_queues() to ignore a
failed io queue, so the nvme host can still be set up with fewer io
queues.

Otherwise nvme_*_connect_io_queues() can fail easily, especially with
a 1:1 CPU-to-queue mapping.
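
Not an actual patch, but a rough sketch of that idea for nvme-rdma
(skipping queues whose connect fails; real code would also have to
make sure the skipped queues never see io):

	static int nvme_rdma_start_io_queues(struct nvme_rdma_ctrl *ctrl)
	{
		int i, nr_connected = 0;

		for (i = 1; i < ctrl->ctrl.queue_count; i++) {
			/*
			 * Tolerate a queue that fails to connect, e.g.
			 * because every CPU of its hctx is offline.
			 */
			if (nvme_rdma_start_queue(ctrl, i))
				continue;
			nr_connected++;
		}

		return nr_connected ? 0 : -ENODEV;
	}

With cpu0 offline that would leave 38 usable io queues in the log above
instead of failing the whole connect.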


Thanks,
Ming



Thread overview: 10+ messages
2022-06-30  6:02 [bug report] blktests nvme/004 failed after offline cpu Yi Zhang
2022-07-04  5:42 ` [bug report] nvme/rdma: nvme connect failed after offline one cpu on host side Yi Zhang
2022-07-04 23:04   ` Sagi Grimberg
2022-07-05  0:49     ` Ming Lei [this message]
2022-07-06 15:30       ` Sagi Grimberg
2022-07-07  1:46         ` Ming Lei
2022-07-07  7:28           ` Sagi Grimberg
2022-07-07  8:07             ` Ming Lei
2022-07-26  2:05             ` Ming Lei
2022-07-26  8:56               ` Sagi Grimberg
