Linux-NVME Archive on lore.kernel.org
From: Chao Leng <lengchao@huawei.com>
To: Keith Busch <kbusch@kernel.org>
Cc: Hannes Reinecke <hare@suse.de>, <linux-nvme@lists.infradead.org>
Subject: Re: [PATCH] nvme-rdma: fix crash for no IO queues
Date: Wed, 3 Mar 2021 10:27:01 +0800	[thread overview]
Message-ID: <399fd796-64c2-7d25-b0d8-0f445c4bb2dd@huawei.com> (raw)
In-Reply-To: <20210302182418.GA22346@redsun51.ssa.fujisawa.hgst.com>

On 2021/3/3 2:24, Keith Busch wrote:
> On Tue, Mar 02, 2021 at 05:49:05PM +0800, Chao Leng wrote:
>>
>>
>> On 2021/3/2 15:48, Hannes Reinecke wrote:
>>> On 2/27/21 10:30 AM, Chao Leng wrote:
>>>>
>>>>
>>>> On 2021/2/27 17:12, Hannes Reinecke wrote:
>>>>> On 2/24/21 6:59 AM, Chao Leng wrote:
>>>>>>
>>>>>>
>>>>>> On 2021/2/24 7:21, Keith Busch wrote:
>>>>>>> On Tue, Feb 23, 2021 at 03:26:02PM +0800, Chao Leng wrote:
>>>>>>>> A crash happens when a Set Features (NVME_FEAT_NUM_QUEUES) command
>>>>>>>> times out during NVMe over RDMA (RoCE) reconnection; the cause is
>>>>>>>> the use of a queue that was never allocated.
>>>>>>>>
>>>>>>>> If the controller is not a discovery controller and has no I/O
>>>>>>>> queues, the connection should fail.
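[A minimal sketch of the shape of the proposed check; the helper name, the
discovery-controller test, and the errno are placeholders here, not
necessarily what the patch itself does:]

/*
 * Sketch: during (re)connect, refuse to proceed when a non-discovery
 * controller ends up with zero I/O queues. Otherwise blk-mq can later
 * dispatch I/O onto queues that were never allocated -- the crash
 * described above.
 */
static int nvme_rdma_check_io_queues(struct nvme_ctrl *ctrl,
		unsigned int nr_io_queues)
{
	if (nr_io_queues)
		return 0;

	/* Discovery controllers legitimately run with the admin queue only. */
	if (!strcmp(ctrl->subsys->subnqn, NVME_DISC_SUBSYS_NAME))
		return 0;

	dev_err(ctrl->device, "no usable I/O queues, failing connection\n");
	return -EOPNOTSUPP;	/* assumption: errno choice */
}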
>>>>>>>
>>>>>>> If you're getting a timeout, we need to quit initialization. Hannes
>>>>>>> attempted making that status visible for fabrics here:
>>>>>>>
>>>>>>> http://lists.infradead.org/pipermail/linux-nvme/2021-January/022353.html
>>>>>>>
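[For context, nvme_set_queue_count() in the core is where a Set Features
failure currently turns into "zero I/O queues, carry on". A reconstructed
sketch of the direction of that patch, not the patch itself; propagating
-EIO for fabrics is an assumption:]

static int nvme_set_queue_count_sketch(struct nvme_ctrl *ctrl, int *count)
{
	u32 q_count = (*count - 1) | ((*count - 1) << 16);
	u32 result;
	int status, nr_io_queues;

	status = nvme_set_features(ctrl, NVME_FEAT_NUM_QUEUES, q_count,
				   NULL, 0, &result);
	if (status < 0)
		return status;

	if (status > 0) {
		/*
		 * NVMe-level failure. The current core logs it and reports
		 * zero queues so a degraded PCIe controller stays reachable
		 * for diagnostics; this sketch surfaces it for fabrics.
		 */
		if (ctrl->ops->flags & NVME_F_FABRICS)
			return -EIO;	/* assumption: errno choice */
		dev_err(ctrl->device, "Could not set queue count (%d)\n",
			status);
		*count = 0;
		return 0;
	}

	nr_io_queues = min(result & 0xffff, result >> 16) + 1;
	*count = min(*count, nr_io_queues);
	return 0;
}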
>>>>>> I know the patch. It cannot cover this scenario: the target may be
>>>>>> an attacker, or the target's behavior may simply be incorrect.
>>>>>> If the target returns 0 I/O queues, or returns some other error
>>>>>> code, the crash will still happen. We should not allow that.
>>>>> I'm fully with you that we shouldn't crash, but at the same time a
>>>>> value of '0' for the number of I/O queues is considered valid.
>>>>> So we should fix the code to handle this scenario rather than
>>>>> disallowing zero I/O queues.
>>>> '0' I/O queues doesn't make any sense for NVMe over Fabrics; this is
>>>> different from NVMe over PCIe. If there is a bug in the target, we
>>>> can debug it on the target side instead of relying on the admin queue
>>>> on the host. The target may be an attacker, or its behavior may be
>>>> incorrect, so we must avoid the crash. Another option: prohibit
>>>> request delivery when the I/O queues have not been created.
>>>> I think failing the connection when there are '0' I/O queues is the
>>>> better choice.
>>>
>>> Might be, but that's not for me to decide.
>>> I tried that initially, but that patch got rejected as _technically_ the
>>> controller is reachable via its admin queue.
>> I know about your patch. That patch failed the connection for all
>> transports. That is not good for the PCIe transport, where the
>> controller can still accept admin commands to gather diagnostics
>> (perhaps an error log page); those were Keith's thoughts.
> 
> We can continue to administrate a controller that didn't create IO
> queues, but the controller must provide a response to all commands. If
> it doesn't, the controller will either be reset or abandoned. This
> should be the same behavior for any transport, though; there's nothing
> special about PCIe for that.
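[A simplified sketch of how "reset or abandoned" plays out in a fabrics
driver's timeout handler; modeled loosely on nvme-rdma's handler with the
completion details elided, not quoted from it:]

static enum blk_eh_timer_return nvme_rdma_timeout_sketch(struct request *rq)
{
	struct nvme_rdma_queue *queue = rq->mq_hctx->driver_data;
	struct nvme_ctrl *ctrl = &queue->ctrl->ctrl;

	if (ctrl->state != NVME_CTRL_LIVE) {
		/*
		 * Timed out while connecting or resetting: tear the attempt
		 * down and fail the command, i.e. the controller is
		 * abandoned.
		 */
		nvme_rdma_error_recovery(queue->ctrl);
		return BLK_EH_DONE;
	}

	/* A live controller that stops responding gets reset. */
	nvme_reset_ctrl(ctrl);
	return BLK_EH_DONE;
}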
Though I don't see any useful scenario for this in NVMe over Fabrics
today, keeping the admin queue reachable as a reserved future possibility
may be the better choice. I will send another patch: prohibit request
delivery if the queue is not live.
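[That gate already has an established shape in the fabrics drivers; a
minimal sketch of a queue_rq entry check, with the actual command mapping
and posting elided. nvmf_check_ready() and nvmf_fail_nonready_command()
are the existing fabrics helpers; the rest is scaffolding:]

static blk_status_t nvme_rdma_queue_rq_sketch(struct blk_mq_hw_ctx *hctx,
		const struct blk_mq_queue_data *bd)
{
	struct nvme_rdma_queue *queue = hctx->driver_data;
	struct request *rq = bd->rq;
	/* A queue that never finished connecting never becomes LIVE. */
	bool queue_ready = test_bit(NVME_RDMA_Q_LIVE, &queue->flags);

	/*
	 * Fail or requeue the request while the queue is not live instead
	 * of dispatching onto a queue that was never allocated.
	 */
	if (!nvmf_check_ready(&queue->ctrl->ctrl, rq, queue_ready))
		return nvmf_fail_nonready_command(&queue->ctrl->ctrl, rq);

	/* ... normal request mapping and RDMA posting ... */
	return BLK_STS_OK;
}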




Thread overview: 15+ messages
2021-02-23  7:26 [PATCH] nvme-rdma: fix crash for no IO queues Chao Leng
2021-02-23 22:03 ` Chaitanya Kulkarni
2021-02-24  5:52   ` Chao Leng
2021-02-23 23:21 ` Keith Busch
2021-02-24  5:59   ` Chao Leng
2021-02-27  9:12     ` Hannes Reinecke
2021-02-27  9:30       ` Chao Leng
2021-03-02  7:48         ` Hannes Reinecke
2021-03-02  9:49           ` Chao Leng
2021-03-02 18:24             ` Keith Busch
2021-03-03  2:27               ` Chao Leng [this message]
2021-03-03  3:14                 ` Keith Busch
2021-03-03  3:39                   ` Chao Leng
2021-03-03  7:41                     ` Hannes Reinecke
2021-03-03 15:08                       ` Keith Busch
