From: Jirong Feng <jirong.feng@easystack.cn>
To: Sagi Grimberg <sagi@grimberg.me>, Christoph Hellwig <hch@lst.de>,
Keith Busch <kbusch@kernel.org>
Cc: Jens Axboe <axboe@fb.com>,
linux-nvme@lists.infradead.org, peng.xiao@easystack.cn
Subject: Re: Should NVME_SC_INVALID_NS be translated to BLK_STS_IOERR instead of BLK_STS_NOTSUPP so that multipath(both native and dm) can failover on the failure?
Date: Wed, 3 Jan 2024 18:24:09 +0800 [thread overview]
Message-ID: <89b542d3-dedb-4d5c-ad7a-279467d28e51@easystack.cn> (raw)
In-Reply-To: <a688834a-99b1-4dd2-b698-c9e54070f78f@grimberg.me>
> OK, can you please check nvme native mpath as well?
I switched to nvme native mpath:
[root@fjr-vm1 ~]# nvme list-subsys
nvme-subsys0 - NQN=nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06
\
+- nvme0 tcp traddr=192.168.111.99 trsvcid=4420 live
+- nvme1 tcp traddr=192.168.111.111 trsvcid=4420 live
[root@fjr-vm1 ~]# multipath -ll
uuid.cf4bb93c-949f-4532-a5c1-b8bd267a4e06 [nvme]:nvme0n1 NVMe,Linux,6.6.0-my
size=209715200 features='n/a' hwhandler='ANA' wp=rw
|-+- policy='n/a' prio=50 status=optimized
| `- 0:0:1 nvme0c0n1 0:0 n/a optimized live
`-+- policy='n/a' prio=50 status=optimized
`- 0:1:1 nvme0c1n1 0:0 n/a optimized live
fio still keeps running without any error, at least for this run (see below).
host dmesg:
[Wed Jan 3 07:42:55 2024] nvme nvme0: reschedule traffic based keep-alive timer
[Wed Jan 3 07:42:55 2024] nvme nvme1: reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:00 2024] nvme nvme0: reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:00 2024] nvme nvme1: reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 0
[Wed Jan 3 07:43:05 2024] nvme nvme0: ANA group 1: optimized.
[Wed Jan 3 07:43:05 2024] nvme nvme0: creating 4 I/O queues.
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 1
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 2
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 3
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 4
[Wed Jan 3 07:43:05 2024] nvme nvme0: rescanning namespaces.
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 0
[Wed Jan 3 07:43:05 2024] nvme nvme0: ANA group 1: optimized.
[Wed Jan 3 07:43:05 2024] nvme nvme0: creating 4 I/O queues.
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 1
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 2
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 3
[Wed Jan 3 07:43:05 2024] nvme nvme0: connecting queue 4
[Wed Jan 3 07:43:05 2024] nvme nvme1: reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:10 2024] nvme nvme0: reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:10 2024] nvme nvme1: reschedule traffic based keep-alive timer
target dmesg:
[Wed Jan 3 07:41:23 2024] nvmet: ctrl 1 update keep-alive timer for 15 secs
[Wed Jan 3 07:41:33 2024] nvmet: ctrl 1 update keep-alive timer for 15 secs
[Wed Jan 3 07:41:43 2024] nvmet: ctrl 1 update keep-alive timer for 15 secs
[Wed Jan 3 07:41:58 2024] nvmet: ctrl 1 reschedule traffic based keep-alive timer
[Wed Jan 3 07:42:14 2024] nvmet: ctrl 1 reschedule traffic based keep-alive timer
[Wed Jan 3 07:42:29 2024] nvmet: ctrl 1 reschedule traffic based keep-alive timer
[Wed Jan 3 07:42:44 2024] nvmet: ctrl 1 reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:00 2024] nvmet: ctrl 1 reschedule traffic based keep-alive timer
[Wed Jan 3 07:43:04 2024] nvmet: fjr add: returning NVME_ANA_PERSISTENT_LOSS
[Wed Jan 3 07:43:04 2024] nvmet_tcp: failed cmd 0000000034dfe760 id 14 opcode 1, data_len: 4096
[Wed Jan 3 07:43:04 2024] nvmet: got cmd 12 while CC.EN == 0 on qid = 0
[Wed Jan 3 07:43:04 2024] nvmet_tcp: failed cmd 00000000228b330a id 31 opcode 12, data_len: 0
[Wed Jan 3 07:43:04 2024] nvmet: ctrl 2 start keep-alive timer for 15 secs
[Wed Jan 3 07:43:04 2024] nvmet: ctrl 1 stop keep-alive
[Wed Jan 3 07:43:04 2024] nvmet: creating nvm controller 2 for subsystem nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06 for NQN nqn.2014-08.org.nvmexpress:uuid:1d8f7c82-9deb-4bc8-8292-5ff32ee3a2be.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 1 to ctrl 2.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 2 to ctrl 2.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 3 to ctrl 2.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 4 to ctrl 2.
[Wed Jan 3 07:43:04 2024] nvmet: fjr add: returning NVME_ANA_PERSISTENT_LOSS
[Wed Jan 3 07:43:04 2024] nvmet_tcp: failed cmd 00000000d9d3dba9 id 100 opcode 1, data_len: 4096
[Wed Jan 3 07:43:04 2024] nvmet: ctrl 1 start keep-alive timer for 15 secs
[Wed Jan 3 07:43:04 2024] nvmet: ctrl 2 stop keep-alive
[Wed Jan 3 07:43:04 2024] nvmet: creating nvm controller 1 for subsystem nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06 for NQN nqn.2014-08.org.nvmexpress:uuid:1d8f7c82-9deb-4bc8-8292-5ff32ee3a2be.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 1 to ctrl 1.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 2 to ctrl 1.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 3 to ctrl 1.
[Wed Jan 3 07:43:04 2024] nvmet: adding queue 4 to ctrl 1.
[Wed Jan 3 07:43:14 2024] nvmet: ctrl 1 update keep-alive timer for 15 secs
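For anyone following along, my understanding of why native mpath keeps the I/O running in this case: the status my modified target returns here (NVME_ANA_PERSISTENT_LOSS) has the path-related Status Code Type (3h), and the host only fails a request over to another path when nvme_is_path_error() says the status is a path error. Below is a minimal standalone sketch of that check, mirroring nvme_is_path_error() from drivers/nvme/host/nvme.h; the numeric status values are copied from include/linux/nvme.h and are worth double-checking against your tree, and the program is only an illustration, not kernel code.

/*
 * Standalone sketch (not kernel code) of the check native NVMe
 * multipath uses to decide whether a failed command may be failed
 * over to another path.  nvme_is_path_error() is mirrored from
 * drivers/nvme/host/nvme.h; status values copied from
 * include/linux/nvme.h (please double-check against your tree).
 */
#include <stdio.h>
#include <stdint.h>

#define NVME_SC_INVALID_NS		0x00b	/* generic command status (SCT 0h) */
#define NVME_SC_ANA_PERSISTENT_LOSS	0x301	/* path related status (SCT 3h) */
#define NVME_SC_CTRL_PATH_ERROR		0x360	/* path related status (SCT 3h) */

/* A status is a "path error" when its Status Code Type field is 3h. */
static int nvme_is_path_error(uint16_t status)
{
	return (status & 0x700) == 0x300;
}

int main(void)
{
	const struct { const char *name; uint16_t sc; } codes[] = {
		{ "NVME_SC_INVALID_NS",          NVME_SC_INVALID_NS },
		{ "NVME_SC_ANA_PERSISTENT_LOSS", NVME_SC_ANA_PERSISTENT_LOSS },
		{ "NVME_SC_CTRL_PATH_ERROR",     NVME_SC_CTRL_PATH_ERROR },
	};

	for (size_t i = 0; i < sizeof(codes) / sizeof(codes[0]); i++)
		printf("%-28s -> %s\n", codes[i].name,
		       nvme_is_path_error(codes[i].sc) ?
		       "path error, native mpath can fail over" :
		       "not a path error, no failover");
	return 0;
}

As far as I can tell, the full decision in nvme_decide_disposition() (drivers/nvme/host/core.c) additionally completes the request with the error, without failover or retry, when the DNR bit is set or the retry budget is exhausted, which may be relevant to the occasional failures further down.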
>
> Can you try returning NVME_SC_CTRL_PATH_ERROR instead of
> NVME_SC_ANA_PERSISTENT_LOSS ?
I repeated the enable/disable cycle many times and found that fio keeps running most of the time, but occasionally (roughly 10% of the runs or less) it fails and stops with an error:
fio: io_u error on file /dev/nvme0n1: Input/output error: write offset=100662296576, buflen=4096
fio: pid=1485, err=5/file:io_u.c:1747, func=io_u error, error=Input/output error
fio_iops: (groupid=0, jobs=1): err= 5 (file:io_u.c:1747, func=io_u error, error=Input/output error): pid=1485: Wed Jan 3 08:44:09 2024
host dmesg:
[Wed Jan 3 08:44:06 2024] nvme nvme1: reschedule traffic based keep-alive timer
[Wed Jan 3 08:44:07 2024] nvme nvme0: reschedule traffic based keep-alive timer
[Wed Jan 3 08:44:09 2024] nvme nvme0: connecting queue 0
[Wed Jan 3 08:44:09 2024] nvme nvme0: ANA group 1: optimized.
[Wed Jan 3 08:44:09 2024] nvme nvme0: creating 4 I/O queues.
[Wed Jan 3 08:44:09 2024] nvme nvme0: connecting queue 1
[Wed Jan 3 08:44:09 2024] nvme nvme0: connecting queue 2
[Wed Jan 3 08:44:09 2024] nvme nvme0: connecting queue 3
[Wed Jan 3 08:44:09 2024] nvme nvme0: connecting queue 4
[Wed Jan 3 08:44:09 2024] nvme nvme0: rescanning namespaces.
[Wed Jan 3 08:44:09 2024] Buffer I/O error on dev nvme0n1, logical block 0, async page read
[Wed Jan 3 08:44:09 2024] nvme0n1: unable to read partition table
[Wed Jan 3 08:44:09 2024] Buffer I/O error on dev nvme0n1, logical block 6, async page read
[Wed Jan 3 08:44:11 2024] nvme nvme1: reschedule traffic based keep-alive timer
[Wed Jan 3 08:44:14 2024] nvme nvme0: reschedule traffic based keep-alive timer
target dmesg:
[Wed Jan 3 08:44:08 2024] nvmet: fjr add: returning NVME_SC_CTRL_PATH_ERROR
[Wed Jan 3 08:44:08 2024] nvmet_tcp: failed cmd 00000000c11e0ae7 id 53 opcode 1, data_len: 4096
[Wed Jan 3 08:44:08 2024] nvmet: fjr add: returning NVME_SC_CTRL_PATH_ERROR
[Wed Jan 3 08:44:08 2024] nvmet_tcp: failed cmd 00000000e0d12c37 id 54 opcode 1, data_len: 4096
[Wed Jan 3 08:44:08 2024] nvmet: ctrl 2 start keep-alive timer for 15 secs
[Wed Jan 3 08:44:08 2024] nvmet: ctrl 1 stop keep-alive
[Wed Jan 3 08:44:08 2024] nvmet: creating nvm controller 2 for subsystem nqn.2014-08.org.nvmexpress:NVMf:uuid:cf4bb93c-949f-4532-a5c1-b8bd267a4e06 for NQN nqn.2014-08.org.nvmexpress:uuid:1d8f7c82-9deb-4bc8-8292-5ff32ee3a2be.
[Wed Jan 3 08:44:08 2024] nvmet: adding queue 1 to ctrl 2.
[Wed Jan 3 08:44:08 2024] nvmet: adding queue 2 to ctrl 2.
[Wed Jan 3 08:44:08 2024] nvmet: adding queue 3 to ctrl 2.
[Wed Jan 3 08:44:08 2024] nvmet: adding queue 4 to ctrl 2.
[Wed Jan 3 08:44:18 2024] nvmet: ctrl 2 update keep-alive timer for 15 secs
[Wed Jan 3 08:44:28 2024] nvmet: ctrl 2 update keep-alive timer for 15 secs
Then I went back to returning NVME_ANA_PERSISTENT_LOSS; fio occasionally fails there as well, and the log output is essentially the same.
Then I switched back to dm multipath; over roughly 50 enable/disable cycles, fio never failed.
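For context on the Subject line, the host-side translation in question lives in nvme_error_status() (drivers/nvme/host/core.c): NVME_SC_INVALID_NS is currently grouped with the "not supported" codes and becomes BLK_STS_NOTSUPP, which blk_path_error() (and therefore dm-multipath) treats as a non-path error, so no failover happens. Here is a minimal standalone sketch of just that mapping; it only mirrors the cases relevant to this thread, and the BLK_STS_* values are stand-ins so the example builds in userspace, not the kernel's encodings.

/*
 * Standalone sketch of the translation the Subject asks about,
 * mirroring a small subset of nvme_error_status() in
 * drivers/nvme/host/core.c (around v6.6).  BLK_STS_* here are
 * stand-in values so this builds in userspace.
 */
#include <stdio.h>
#include <stdint.h>

enum blk_status { BLK_STS_OK, BLK_STS_NOTSUPP, BLK_STS_IOERR };

#define NVME_SC_INVALID_NS	0x00b

static enum blk_status nvme_error_status(uint16_t status)
{
	switch (status & 0x7ff) {
	case 0:
		return BLK_STS_OK;
	case NVME_SC_INVALID_NS:
		/*
		 * Today this is grouped with other "not supported"
		 * codes, so dm-multipath does not fail the path over.
		 */
		return BLK_STS_NOTSUPP;
	default:
		/* Most other errors end up as BLK_STS_IOERR. */
		return BLK_STS_IOERR;
	}
}

int main(void)
{
	printf("NVME_SC_INVALID_NS -> %s\n",
	       nvme_error_status(NVME_SC_INVALID_NS) == BLK_STS_NOTSUPP ?
	       "BLK_STS_NOTSUPP (no dm-multipath failover)" : "other");
	return 0;
}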
Thread overview: 28+ messages
2023-12-04 7:58 Should NVME_SC_INVALID_NS be translated to BLK_STS_IOERR instead of BLK_STS_NOTSUPP so that multipath(both native and dm) can failover on the failure? Jirong Feng
2023-12-04 8:47 ` Sagi Grimberg
2023-12-05 3:54 ` Jirong Feng
2023-12-05 4:37 ` Keith Busch
2023-12-05 4:40 ` Christoph Hellwig
2023-12-05 5:18 ` Keith Busch
2023-12-05 7:06 ` Jirong Feng
2023-12-05 8:50 ` Sagi Grimberg
2023-12-25 11:25 ` Jirong Feng
2023-12-25 11:40 ` Sagi Grimberg
2023-12-25 12:14 ` Jirong Feng
2023-12-26 13:27 ` Jirong Feng
2024-01-01 9:51 ` Sagi Grimberg
2024-01-02 10:33 ` Jirong Feng
2024-01-02 12:46 ` Sagi Grimberg
2024-01-03 10:24 ` Jirong Feng [this message]
2024-01-04 11:56 ` Sagi Grimberg
2024-01-30 9:36 ` Jirong Feng
2024-01-30 11:29 ` Sagi Grimberg
2024-01-31 6:25 ` Christoph Hellwig
2024-03-20 3:17 ` Jirong Feng
2024-03-20 8:51 ` Sagi Grimberg
2024-03-21 3:06 ` Jirong Feng
2024-04-07 22:28 ` Sagi Grimberg
2024-04-12 7:52 ` Jirong Feng
2024-04-12 8:57 ` Sagi Grimberg
2024-04-22 9:47 ` Sagi Grimberg
2024-04-23 3:15 ` Jirong Feng