From: Klaus Jensen <its@irrelevant.dk>
To: Jonathan Derrick <jonathan.derrick@linux.dev>
Cc: Keith Busch <kbusch@kernel.org>,
linux-nvme@lists.infradead.org, linux-block@vger.kernel.org,
Shin'ichiro Kawasaki <shinichiro.kawasaki@wdc.com>,
Chaitanya Kulkarni <chaitanyak@nvidia.com>,
Christoph Hellwig <hch@lst.de>, Sagi Grimberg <sagi@grimberg.me>
Subject: Re: [PATCH v2] tests/nvme: Add admin-passthru+reset race test
Date: Wed, 23 Nov 2022 09:15:05 +0100
Message-ID: <Y33WiTAdIpEc6M6V@cormorant.local>
In-Reply-To: <4f3c32a5-54cf-dcba-afe2-1f08b3f48b16@linux.dev>
On Nov 22 13:30, Jonathan Derrick wrote:
>
>
> On 11/22/2022 1:26 AM, Klaus Jensen wrote:
> > On Nov 21 16:04, Keith Busch wrote:
> >> [cc'ing Klaus]
> >>
> >> On Mon, Nov 21, 2022 at 03:49:45PM -0700, Jonathan Derrick wrote:
> >>> On 11/21/2022 3:34 PM, Jonathan Derrick wrote:
> >>>> On 11/21/2022 1:55 PM, Keith Busch wrote:
> >>>>> On Thu, Nov 17, 2022 at 02:22:10PM -0700, Jonathan Derrick wrote:
> >>>>>> I seem to have isolated the error mechanism for older kernels, but 6.2.0-rc2
> >>>>>> reliably segfaults my QEMU instance (something else to look into) and I don't
> >>>>>> have any 'real' hardware to test this on at the moment. It looks like several
> >>>>>> passthru commands are able to enqueue prior/during/after resetting/connecting.
> >>>>>
> >>>>> I'm not seeing any problem with the latest nvme-qemu after several dozen
> >>>>> iterations of this test case. In that environment, the formats and
> >>>>> resets complete practically synchronously with the call, so everything
> >>>>> proceeds quickly. Is there anything special I need to change?
> >>>>>
> >>>> I can still repro this with nvme-fixes tag, so I'll have to dig into it myself
> >>> Here's a backtrace:
> >>>
> >>> Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
> >>> [Switching to Thread 0x7ffff7554400 (LWP 531154)]
> >>> 0x000055555597a9d5 in nvme_ctrl (req=0x7fffec892780) at ../hw/nvme/nvme.h:539
> >>> 540         return sq->ctrl;
> >>> (gdb) backtrace
> >>> #0 0x000055555597a9d5 in nvme_ctrl (req=0x7fffec892780) at ../hw/nvme/nvme.h:539
> >>> #1 0x0000555555994360 in nvme_format_bh (opaque=0x5555579dd000) at ../hw/nvme/ctrl.c:5852
> >>
> >> Thanks, looks like a race between the admin queue format's bottom half,
> >> and the controller reset tearing down that queue. I'll work with Klaus
> >> on that qemu side (looks like a well placed qemu_bh_cancel() should do
> >> it).
> >>
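To sketch the idea for the archives: the point of the well placed qemu_bh_cancel() is that a controller reset cancels the pending bottom half, so its callback can never run against queues the reset has already torn down. Below is a toy model of that pattern; qemu_bh_cancel() is the real QEMU API name, but every type and function here is simplified for illustration and is not the actual hw/nvme code.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy stand-in for a QEMU bottom half: a callback deferred to the
 * main loop. Simplified for illustration only. */
typedef struct ToyBH {
    void (*cb)(void *opaque);
    void *opaque;
    bool scheduled;
} ToyBH;

static void toy_bh_schedule(ToyBH *bh) { bh->scheduled = true; }
static void toy_bh_cancel(ToyBH *bh)   { bh->scheduled = false; }

/* One main-loop iteration: run the BH if it is still scheduled. */
static void toy_bh_poll(ToyBH *bh)
{
    if (bh->scheduled) {
        bh->scheduled = false;
        bh->cb(bh->opaque);
    }
}

/* Toy controller state: the format BH dereferences the admin queue. */
typedef struct ToyCtrl {
    ToyBH format_bh;
    int *admin_sq;          /* torn down (NULLed) by reset */
    int completions;
} ToyCtrl;

static void format_bh_cb(void *opaque)
{
    ToyCtrl *n = opaque;
    /* This is the dereference that crashed: without cancellation the
     * BH can run after reset has freed the queue. */
    assert(n->admin_sq != NULL);
    n->completions++;
}

/* Reset tears down the queues; cancelling the pending BH first is
 * the "well placed qemu_bh_cancel()" from above. */
static void toy_ctrl_reset(ToyCtrl *n)
{
    toy_bh_cancel(&n->format_bh);
    n->admin_sq = NULL;
}
```

With the cancel in place, a reset that lands between scheduling and running the bottom half simply drops the callback instead of dereferencing freed state.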
> >
> > Yuck. Bug located and squelched, I think.
> >
> > Jonathan, please try
> >
> > https://lore.kernel.org/qemu-devel/20221122081348.49963-2-its@irrelevant.dk/
> >
> > This fixes the qemu crash, but I still see a "nvme still not live after
> > 42 seconds!" resulting from the test. I'm seeing A LOT of invalid
> > submission queue doorbell writes:
> >
> > pci_nvme_ub_db_wr_invalid_sq in nvme_process_db: submission queue doorbell write for nonexistent queue, sqid=0, ignoring
> >
> > Tested on a 6.1-rc4.
>
> Good change, just defers it a bit for me:
>
> Thread 1 "qemu-system-x86" received signal SIGSEGV, Segmentation fault.
> [Switching to Thread 0x7ffff7554400 (LWP 559269)]
> 0x000055555598922e in nvme_enqueue_req_completion (cq=0x0, req=0x7fffec141310) at ../hw/nvme/ctrl.c:1390
> 1390 assert(cq->cqid == req->sq->cqid);
> (gdb) backtrace
> #0 0x000055555598922e in nvme_enqueue_req_completion (cq=0x0, req=0x7fffec141310) at ../hw/nvme/ctrl.c:1390
> #1 0x000055555598a7a7 in nvme_misc_cb (opaque=0x7fffec141310, ret=0) at ../hw/nvme/ctrl.c:2002
> #2 0x000055555599448a in nvme_do_format (iocb=0x55555770ccd0) at ../hw/nvme/ctrl.c:5891
> #3 0x00005555559942a9 in nvme_format_ns_cb (opaque=0x55555770ccd0, ret=0) at ../hw/nvme/ctrl.c:5828
> #4 0x0000555555dda018 in blk_aio_complete (acb=0x7fffec1fccd0) at ../block/block-backend.c:1501
> #5 0x0000555555dda2fc in blk_aio_write_entry (opaque=0x7fffec1fccd0) at ../block/block-backend.c:1568
> #6 0x0000555555f506b9 in coroutine_trampoline (i0=-331119632, i1=32767) at ../util/coroutine-ucontext.c:177
> #7 0x00007ffff77c84e0 in __start_context () at ../sysdeps/unix/sysv/linux/x86_64/__start_context.S:91
> #8 0x00007ffff4ff2bd0 in ()
> #9 0x0000000000000000 in ()
>
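What the trace says is that nvme_misc_cb() reached nvme_enqueue_req_completion() with cq == NULL, i.e. the format completion ran after reset had already torn the queues down. One shape a guard could take (purely illustrative types and names below, not the actual ctrl.c code) is to bail out of the completion path when the queues are gone:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the queue/request structures in the
 * trace; the field layout is made up for this sketch. */
typedef struct ToyCQ  { int cqid; } ToyCQ;
typedef struct ToySQ  { int cqid; } ToySQ;
typedef struct ToyReq { ToySQ *sq; int enqueued; } ToyReq;

/* Guarded completion: if reset already destroyed the queues (cq or
 * req->sq is NULL), drop the completion instead of hitting
 * assert(cq->cqid == req->sq->cqid) with cq == 0x0. */
static void toy_enqueue_req_completion(ToyCQ *cq, ToyReq *req)
{
    if (cq == NULL || req->sq == NULL) {
        return;                 /* queues torn down; nothing to post */
    }
    assert(cq->cqid == req->sq->cqid);
    req->enqueued = 1;
}
```

Whether the proper fix is a guard like this or cancelling the in-flight format AIO earlier is still an open question.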
Bummer.
I'll keep digging.
Thread overview: 9+ messages
2022-11-17 21:22 [PATCH v2] tests/nvme: Add admin-passthru+reset race test Jonathan Derrick
2022-11-21 20:55 ` Keith Busch
2022-11-21 22:34 ` Jonathan Derrick
2022-11-21 22:47 ` Keith Busch
2022-11-21 22:49 ` Jonathan Derrick
2022-11-21 23:04 ` Keith Busch
2022-11-22 8:26 ` Klaus Jensen
2022-11-22 20:30 ` Jonathan Derrick
2022-11-23 8:15 ` Klaus Jensen [this message]