public inbox for linux-nvme@lists.infradead.org
From: Jonathan Derrick <jonathan.derrick@linux.dev>
To: Keith Busch <kbusch@kernel.org>
Cc: linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
	Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>,
	Chaitanya Kulkarni <chaitanyak@nvidia.com>,
	Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH v2] tests/nvme: Add admin-passthru+reset race test
Date: Mon, 21 Nov 2022 15:49:45 -0700	[thread overview]
Message-ID: <7dcb9e3c-aa3e-b7b9-fc30-59281d581fd0@linux.dev> (raw)
In-Reply-To: <e99fef7c-1b48-61e2-b503-a2363968d5fc@linux.dev>



On 11/21/2022 3:34 PM, Jonathan Derrick wrote:
> 
> 
> On 11/21/2022 1:55 PM, Keith Busch wrote:
>> On Thu, Nov 17, 2022 at 02:22:10PM -0700, Jonathan Derrick wrote:
>>> I seem to have isolated the error mechanism for older kernels, but 6.2.0-rc2
>>> reliably segfaults my QEMU instance (something else to look into) and I don't
>>> have any 'real' hardware to test this on at the moment. It looks like several
>>> passthru commands are able to enqueue prior/during/after resetting/connecting.
>>
>> I'm not seeing any problem with the latest nvme-qemu after several dozen
>> iterations of this test case. In that environment, the formats and
>> resets complete practically synchronously with the call, so everything
>> proceeds quickly. Is there anything special I need to change?
>>  
> I can still repro this with the nvme-fixes tag, so I'll have to dig into it myself.
Here's a backtrace:

Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
[Switching to Thread 0x7ffff7554400 (LWP 531154)]
0x000055555597a9d5 in nvme_ctrl (req=0x7fffec892780) at ../hw/nvme/nvme.h:539
539         return sq->ctrl;
(gdb) backtrace
#0  0x000055555597a9d5 in nvme_ctrl (req=0x7fffec892780) at ../hw/nvme/nvme.h:539
#1  0x0000555555994360 in nvme_format_bh (opaque=0x5555579dd000) at ../hw/nvme/ctrl.c:5852
#2  0x0000555555f4db15 in aio_bh_call (bh=0x7fffec279910) at ../util/async.c:150
#3  0x0000555555f4dc24 in aio_bh_poll (ctx=0x55555688fa00) at ../util/async.c:178
#4  0x0000555555f34df0 in aio_dispatch (ctx=0x55555688fa00) at ../util/aio-posix.c:421
#5  0x0000555555f4e083 in aio_ctx_dispatch (source=0x55555688fa00, callback=0x0, user_data=0x0) at ../util/async.c:320
#6  0x00007ffff7bd717d in g_main_context_dispatch () at /lib/x86_64-linux-gnu/libglib-2.0.so.0
#7  0x0000555555f600c2 in glib_pollfds_poll () at ../util/main-loop.c:297
#8  0x0000555555f60140 in os_host_main_loop_wait (timeout=0) at ../util/main-loop.c:320
#9  0x0000555555f60251 in main_loop_wait (nonblocking=0) at ../util/main-loop.c:596
#10 0x0000555555a8f27c in qemu_main_loop () at ../softmmu/runstate.c:739
#11 0x000055555582b77a in qemu_default_main () at ../softmmu/main.c:37
#12 0x000055555582b7b4 in main (argc=53, argv=0x7fffffffdf88) at ../softmmu/main.c:48



> Does the tighter loop in the test comment header produce results?
> 
> 
>>> The issue seems to be very heavily timing related, so the loop in the header is
>>> a lot more forceful in this approach.
>>>
>>> As far as the loop goes, I've noticed it will typically repro immediately or
>>> pass the whole test.
>>
>> I can only get a possible repro in scenarios that have multi-second,
>> serialized format times. Even then, it still appears that everything
>> fixes itself after waiting. Are you observing the same, or is it stuck
>> forever in your observations?
> In 5.19, it gets stuck forever with lots of formats outstanding and
> controller stuck in resetting. I'll keep digging. Thanks Keith
> 
>>
>>> +remove_and_rescan() {
>>> +	local pdev=$1
>>> +	echo 1 > /sys/bus/pci/devices/"$pdev"/remove
>>> +	echo 1 > /sys/bus/pci/rescan
>>> +}
>>
>> This function isn't called anywhere.


Thread overview: 9+ messages
2022-11-17 21:22 [PATCH v2] tests/nvme: Add admin-passthru+reset race test Jonathan Derrick
2022-11-21 20:55 ` Keith Busch
2022-11-21 22:34   ` Jonathan Derrick
2022-11-21 22:47     ` Keith Busch
2022-11-21 22:49     ` Jonathan Derrick [this message]
2022-11-21 23:04       ` Keith Busch
2022-11-22  8:26         ` Klaus Jensen
2022-11-22 20:30           ` Jonathan Derrick
2022-11-23  8:15             ` Klaus Jensen
