From: Max Gurtovoy <mgurtovoy@nvidia.com>
To: Yi Zhang <yi.zhang@redhat.com>
Cc: Sagi Grimberg <sagi@grimberg.me>,
Max Gurtovoy <maxg@mellanox.com>,
"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>,
RDMA mailing list <linux-rdma@vger.kernel.org>
Subject: Re: [bug report] NVMe/IB: reset_controller need more than 1min
Date: Thu, 16 Dec 2021 15:21:39 +0200 [thread overview]
Message-ID: <fcad6dac-6d40-98d7-93e2-bfa307e8e101@nvidia.com> (raw)
In-Reply-To: <CAHj4cs9QuANriWeLrhDvkahnNzHp-4WNFoxtWi2qGH9V0L3+Rw@mail.gmail.com>
On 12/16/2021 4:18 AM, Yi Zhang wrote:
> On Wed, Dec 15, 2021 at 8:10 PM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
>>
>> On 12/15/2021 3:15 AM, Yi Zhang wrote:
>>> On Tue, Dec 14, 2021 at 8:01 PM Max Gurtovoy <mgurtovoy@nvidia.com> wrote:
>>>> On 12/14/2021 12:39 PM, Sagi Grimberg wrote:
>>>>>>>> Hi Sagi
>>>>>>>> It is still reproducible with the change; here is the log:
>>>>>>>>
>>>>>>>> # time nvme reset /dev/nvme0
>>>>>>>>
>>>>>>>> real 0m12.973s
>>>>>>>> user 0m0.000s
>>>>>>>> sys 0m0.006s
>>>>>>>> # time nvme reset /dev/nvme0
>>>>>>>>
>>>>>>>> real 1m15.606s
>>>>>>>> user 0m0.000s
>>>>>>>> sys 0m0.007s
>>>>>>> Does it speed up if you use less queues? (i.e. connect with -i 4) ?
>>>>>> Yes, with -i 4, it takes a stable 1.3s
>>>>>> # time nvme reset /dev/nvme0
>>>>> So it appears that destroying a qp takes a long time on
>>>>> IB for some reason...
>>>>>
>>>>>> real 0m1.225s
>>>>>> user 0m0.000s
>>>>>> sys 0m0.007s
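For reference, limiting the I/O queue count at connect time uses nvme-cli's `-i`/`--nr-io-queues` option. A minimal sketch of the `-i 4` experiment, wrapped as a hypothetical helper (the address argument is a placeholder, and `NVME` is overridable so the command can be dry-run without hardware):

```shell
# Hypothetical helper: connect to the RDMA target with a limited number of
# I/O queues. The NQN matches the target config used in this thread; the
# address is whatever the target's traddr is. NVME defaults to nvme-cli.
connect_limited() {
    addr=$1
    queues=${2:-4}
    ${NVME:-nvme} connect -t rdma -a "$addr" -s 4420 \
        -n testsubsystem_0 -i "$queues"
}
```

With real nvme-cli, `connect_limited <target ip> 4` creates the controller with 4 I/O queues instead of one per CPU.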
>>>>>>
>>>>>>>> # dmesg | grep nvme
>>>>>>>> [ 900.634877] nvme nvme0: resetting controller
>>>>>>>> [ 909.026958] nvme nvme0: creating 40 I/O queues.
>>>>>>>> [ 913.604297] nvme nvme0: mapped 40/0/0 default/read/poll queues.
>>>>>>>> [ 917.600993] nvme nvme0: resetting controller
>>>>>>>> [ 988.562230] nvme nvme0: I/O 2 QID 0 timeout
>>>>>>>> [ 988.567607] nvme nvme0: Property Set error: 881, offset 0x14
>>>>>>>> [ 988.608181] nvme nvme0: creating 40 I/O queues.
>>>>>>>> [ 993.203495] nvme nvme0: mapped 40/0/0 default/read/poll queues.
>>>>>>>>
>>>>>>>> BTW, this issue cannot be reproduced on my NVMe/RoCE environment.
>>>>>>> Then I think that we need the rdma folks to help here...
>>>>> Max?
>>>> It took me 12s to reset a controller with 63 I/O queues on 5.16-rc3+.
>>>>
>>>> Can you try to reproduce this with the latest kernel, please?
>>>>
>>>> Or share the exact scenario?
>>> Yeah, both target and client are using Mellanox Technologies MT27700
>>> Family [ConnectX-4] adapters. Could you stress "nvme reset /dev/nvme0"?
>>> The first reset takes 12s, and the issue can always be reproduced on
>>> the second reset operation.
>> I created a target with 1 namespace backed by null_blk and connected to
>> it from the same server in loopback rdma connection using the ConnectX-4
>> adapter.
> Could you share your loop.json file so I can try it on my environment?
{
  "hosts": [],
  "ports": [
    {
      "addr": {
        "adrfam": "ipv4",
        "traddr": "<ip>",
        "treq": "not specified",
        "trsvcid": "4420",
        "trtype": "rdma"
      },
      "portid": 1,
      "referrals": [],
      "subsystems": [
        "testsubsystem_0"
      ]
    }
  ],
  "subsystems": [
    {
      "allowed_hosts": [],
      "attr": {
        "allow_any_host": "1",
        "cntlid_max": "65519",
        "cntlid_min": "1",
        "model": "Linux",
        "serial": "3d83c78b76623f1d",
        "version": "1.3"
      },
      "namespaces": [
        {
          "device": {
            "nguid": "5b722b05-e9b6-542d-ba80-62010b57775d",
            "path": "/dev/nullb0",
            "uuid": "26ffc8ce-73b4-321d-9685-7d7a9872c460"
          },
          "enable": 1,
          "nsid": 1
        }
      ],
      "nqn": "testsubsystem_0"
    }
  ]
}
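For reference, the same target can be built directly through nvmet's configfs interface, which is what nvmetcli does under the hood when restoring a JSON file like the one above. A minimal sketch, assuming nvmet-rdma and null_blk are already loaded; the configfs root and target address are passed in as arguments so the steps can be dry-run against a scratch directory:

```shell
# Sketch of the nvmet configfs steps behind the JSON config above.
# $1 is the configfs root (normally /sys/kernel/config/nvmet);
# $2 is the port's traddr (the "<ip>" placeholder in the JSON).
setup_target() {
    cfg=$1
    subsys=$cfg/subsystems/testsubsystem_0

    # Create the subsystem and a namespace backed by null_blk.
    mkdir -p "$subsys/namespaces/1"
    echo 1 > "$subsys/attr_allow_any_host"
    echo /dev/nullb0 > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    # Create an RDMA port on 4420 and point it at the target address.
    mkdir -p "$cfg/ports/1/subsystems"
    echo rdma > "$cfg/ports/1/addr_trtype"
    echo ipv4 > "$cfg/ports/1/addr_adrfam"
    echo 4420 > "$cfg/ports/1/addr_trsvcid"
    echo "$2" > "$cfg/ports/1/addr_traddr"

    # Exposing the subsystem on the port is a symlink in configfs.
    ln -s "$subsys" "$cfg/ports/1/subsystems/testsubsystem_0"
}
```

Usage on a real target host would be `setup_target /sys/kernel/config/nvmet <target ip>`.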
>
> And can you try it with two servers that both have CX-4? That should
> make it easier to reproduce.
I did this experiment. My setup has only 12 cores, so I created 12 nvmf
queues.
The test did 100 loops of "nvme reset"; each reset took 4 seconds.
I saw that you also complained about the disconnect flow, so I assume the
root cause is the same. My disconnect took 2 seconds.
My FW version is 12.28.2006.
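A reset stress loop of this shape can be sketched as follows (a hypothetical helper, not the exact script used in the thread; `NVME` is overridable so the loop can be dry-run without hardware):

```shell
# Repeatedly reset an NVMe controller and print per-iteration wall time.
# Usage: reset_stress /dev/nvme0 100
reset_stress() {
    dev=$1
    iters=${2:-100}
    i=1
    while [ "$i" -le "$iters" ]; do
        start=$(date +%s)
        ${NVME:-nvme} reset "$dev" || return 1
        echo "reset $i: $(( $(date +%s) - start ))s"
        i=$((i + 1))
    done
}
```

A pathological iteration, like the second reset hitting the keep-alive/Property Set timeout earlier in the thread, shows up immediately as a jump in the printed times.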
>> I ran a loop with the "nvme reset" command, and each reset took 4-5
>> seconds.
>>
>>
>
Thread overview: 38+ messages
2021-05-21 16:12 [bug report] NVMe/IB: reset_controller need more than 1min Yi Zhang
2021-05-21 18:00 ` Sagi Grimberg
2021-05-22 4:27 ` Yi Zhang
2021-06-23 10:01 ` Yi Zhang
2021-06-23 21:32 ` Sagi Grimberg
2021-06-24 16:14 ` Yi Zhang
2021-12-11 3:01 ` Yi Zhang
2021-12-12 9:45 ` Sagi Grimberg
2021-12-13 6:12 ` Yi Zhang
2021-12-13 9:04 ` Sagi Grimberg
2021-12-13 17:05 ` Yi Zhang
2021-12-14 10:39 ` Sagi Grimberg
2021-12-14 12:00 ` Max Gurtovoy
2021-12-15 1:15 ` Yi Zhang
2021-12-15 12:10 ` Max Gurtovoy
2021-12-16 2:18 ` Yi Zhang
2021-12-16 13:21 ` Max Gurtovoy [this message]
2021-12-16 16:32 ` Yi Zhang
2021-12-16 17:33 ` Haakon Bugge
2021-12-17 7:03 ` Yi Zhang
2021-12-17 11:19 ` Haakon Bugge
2022-02-14 9:47 ` Yi Zhang
2022-02-14 11:00 ` Chaitanya Kulkarni
2022-02-14 11:32 ` Sagi Grimberg
2022-02-14 12:11 ` Max Gurtovoy
2022-02-15 13:52 ` Yi Zhang
2022-02-15 14:30 ` Max Gurtovoy
2022-02-21 10:00 ` Yi Zhang
2022-02-23 10:04 ` Max Gurtovoy
2022-02-23 10:30 ` Sagi Grimberg
2022-02-23 11:20 ` Max Gurtovoy
2022-03-01 0:06 ` Yi Zhang
2022-03-16 15:16 ` Sagi Grimberg
2022-03-19 7:29 ` Yi Zhang
2022-03-20 10:50 ` Max Gurtovoy
2022-03-20 13:03 ` Sagi Grimberg
2022-03-20 15:11 ` Max Gurtovoy
2022-03-21 9:28 ` Sagi Grimberg