Linux-NVME Archive on lore.kernel.org
From: Johannes Thumshirn <Johannes.Thumshirn@wdc.com>
To: Chaitanya Kulkarni <chaitanyak@nvidia.com>
Cc: Yi Zhang <yi.zhang@redhat.com>,
	linux-block <linux-block@vger.kernel.org>,
	"open list:NVM EXPRESS DRIVER" <linux-nvme@lists.infradead.org>
Subject: Re: [bug report] RIP: 0010:blk_flush_complete_seq+0x450/0x1060 observed during blktests nvme/tcp nvme/012
Date: Tue, 30 Apr 2024 06:16:42 +0000	[thread overview]
Message-ID: <25fd1c08-fe6a-48dc-874e-464b2b0e12e5@wdc.com> (raw)
In-Reply-To: <aded9da3-347a-4268-8190-6f39692ea8ee@nvidia.com>

On 30.04.24 00:18, Chaitanya Kulkarni wrote:
> On 4/29/24 07:35, Johannes Thumshirn wrote:
>> On 23.04.24 15:18, Yi Zhang wrote:
>>> Hi
>>> I found this issue on the latest linux-block/for-next by blktests
>>> nvme/tcp nvme/012, please help check it and let me know if you need
>>> any info/testing for it, thanks.
>>>
>>> [ 1873.394323] run blktests nvme/012 at 2024-04-23 04:13:47
>>> [ 1873.761900] loop0: detected capacity change from 0 to 2097152
>>> [ 1873.846926] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
>>> [ 1873.987806] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
>>> [ 1874.208883] nvmet: creating nvm controller 1 for subsystem
>>> blktests-subsystem-1 for NQN
>>> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
>>> [ 1874.243423] nvme nvme0: creating 48 I/O queues.
>>> [ 1874.362383] nvme nvme0: mapped 48/0/0 default/read/poll queues.
>>> [ 1874.517677] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
>>> 127.0.0.1:4420, hostnqn:
>>> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
> 
> [...]
> 
>>>
>>> [  326.827260] run blktests nvme/012 at 2024-04-29 16:28:31
>>> [  327.475957] loop0: detected capacity change from 0 to 2097152
>>> [  327.538987] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
>>>
>>> [  327.603405] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
>>> [  327.872343] nvmet: creating nvm controller 1 for subsystem
>>> blktests-subsystem-1 for NQN
>>> nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
>>>
>>> [  327.877120] nvme nvme0: Please enable CONFIG_NVME_MULTIPATH for full
>>> support of multi-port devices.
> 
> seems like you don't have multipath enabled; that is one difference
> I can see between the log above posted by Yi and your log.


Yup, but even with multipath enabled I can't get the bug to trigger :(
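One quick way to compare the two setups on this point is to check the build
config for CONFIG_NVME_MULTIPATH. A minimal sketch (the helper name and the
sample config file are illustrative, not taken from either machine):

```shell
# Check whether CONFIG_NVME_MULTIPATH is enabled in a kernel .config.
has_multipath() {
    # grep -q: exit 0 iff the option is built in
    grep -q '^CONFIG_NVME_MULTIPATH=y$' "$1"
}

# Illustrative sample config, standing in for a real build tree's .config
cfg=$(mktemp)
printf 'CONFIG_NVME_CORE=y\nCONFIG_NVME_MULTIPATH=y\n' > "$cfg"

if has_multipath "$cfg"; then
    echo "multipath enabled"   # prints "multipath enabled"
fi
rm -f "$cfg"
```

On a running system the same information should also be visible at
/sys/module/nvme_core/parameters/multipath (Y/N), assuming nvme-core is loaded.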

nvme/012 (run mkfs and data verification fio job on NVMeOF block
device-backed ns)

[  279.642826] run blktests nvme/012 at 2024-04-29 18:52:26
[  280.296493] loop0: detected capacity change from 0 to 2097152
[  280.360139] nvmet: adding nsid 1 to subsystem blktests-subsystem-1
[  280.426171] nvmet_tcp: enabling port 0 (127.0.0.1:4420)
[  280.712262] nvmet: creating nvm controller 1 for subsystem
blktests-subsystem-1 for NQN
nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349.
[  280.718259] nvme nvme0: creating 4 I/O queues.
[  280.722258] nvme nvme0: mapped 4/0/0 default/read/poll queues.
[  280.726088] nvme nvme0: new ctrl: NQN "blktests-subsystem-1", addr
127.0.0.1:4420, hostnqn:
nqn.2014-08.org.nvmexpress:uuid:0f01fb42-9f7f-4856-b0b3-51e60b8de349
[  281.343044] XFS (nvme0n1): Mounting V5 Filesystem
513881ee-db18-48c7-a2b0-3e4e3e41f38c
[  281.381925] XFS (nvme0n1): Ending clean mount
[  281.390154] xfs filesystem being mounted at /mnt/blktests supports
timestamps until 2038-01-19 (0x7fffffff)
[  309.958309] perf: interrupt took too long (2593 > 2500), lowering
kernel.perf_event_max_sample_rate to 77000
[  377.847337] perf: interrupt took too long (3278 > 3241), lowering
kernel.perf_event_max_sample_rate to 61000
[  471.964099] XFS (nvme0n1): Unmounting Filesystem
513881ee-db18-48c7-a2b0-3e4e3e41f38c

nvme/012 (run mkfs and data verification fio job on NVMeOF block
device-backed ns) [passed]

     runtime    ...  192.747s

Can you see if you can reproduce it on your side?
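For reference, the invocation used for this reproduction attempt can be
sketched as below. The helper function is hypothetical; `nvme_trtype=tcp
./check nvme/012` is the usual blktests way to run this case over the TCP
transport, assuming a blktests checkout as the working directory.

```shell
# Build the blktests command line for a given NVMe transport and test case.
blktests_cmd() {
    # $1 = transport (loop|tcp|rdma|fc), $2 = test case, e.g. nvme/012
    printf 'nvme_trtype=%s ./check %s\n' "$1" "$2"
}

blktests_cmd tcp nvme/012   # prints "nvme_trtype=tcp ./check nvme/012"
```

Running the printed command from a blktests checkout (as root) should
reproduce the test run shown in the log above.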

Thanks,
	Johannes


Thread overview: 11+ messages
2024-04-23 13:17 [bug report] RIP: 0010:blk_flush_complete_seq+0x450/0x1060 observed during blktests nvme/tcp nvme/012 Yi Zhang
2024-04-26  8:30 ` [bug report][bisected] " Yi Zhang
2024-04-29 14:35 ` [bug report] " Johannes Thumshirn
2024-04-29 22:18   ` Chaitanya Kulkarni
2024-04-30  6:16     ` Johannes Thumshirn [this message]
2024-04-30 14:17       ` Yi Zhang
2024-05-03  7:59         ` Sagi Grimberg
2024-05-03 10:32           ` Johannes Thumshirn
2024-05-03 11:01             ` Sagi Grimberg
2024-05-03 21:14               ` Chaitanya Kulkarni
2024-05-09  6:15                 ` Yi Zhang
