From: Jonathan Cameron <Jonathan.Cameron@Huawei.com>
To: "Xingtao Yao (Fujitsu)" <yaoxt.fnst@fujitsu.com>
Cc: "dan.j.williams@intel.com" <dan.j.williams@intel.com>,
	"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
	"linux-cxl@vger.kernel.org" <linux-cxl@vger.kernel.org>,
	"nvdimm@lists.linux.dev" <nvdimm@lists.linux.dev>
Subject: Re: [BUG REPORT] cxl process in infinite loop
Date: Tue, 2 Jul 2024 09:46:42 +0100	[thread overview]
Message-ID: <20240702094642.00000fd8@Huawei.com> (raw)
In-Reply-To: <OSZPR01MB6453BC61D2FF4035F18084EF8DDC2@OSZPR01MB6453.jpnprd01.prod.outlook.com>

On Tue, 2 Jul 2024 00:30:06 +0000
"Xingtao Yao (Fujitsu)" <yaoxt.fnst@fujitsu.com> wrote:

> Hi, all
> 
> When I was doing CXL memory hot-plug testing on QEMU, I accidentally
> connected two memdevs through downstream ports that share the same port
> number; the command line is like the one below:
> 
> > -object memory-backend-ram,size=262144k,share=on,id=vmem0 \
> > -object memory-backend-ram,size=262144k,share=on,id=vmem1 \
> > -device pxb-cxl,bus_nr=12,bus=pcie.0,id=cxl.1 \
> > -device cxl-rp,port=0,bus=cxl.1,id=root_port0,chassis=0,slot=0 \
> > -device cxl-upstream,bus=root_port0,id=us0 \
> > -device cxl-downstream,port=0,bus=us0,id=swport00,chassis=0,slot=5 \
> > -device cxl-downstream,port=0,bus=us0,id=swport01,chassis=0,slot=7 \  
> same downstream port number, but a different slot!
> 
> > -device cxl-type3,bus=swport00,volatile-memdev=vmem0,id=cxl-vmem0 \
> > -device cxl-type3,bus=swport01,volatile-memdev=vmem1,id=cxl-vmem1 \
> > -M cxl-fmw.0.targets.0=cxl.1,cxl-fmw.0.size=64G,cxl-fmw.0.interleave-granularity=4k \  
> 
> No error occurred when the VM started, but when I executed the "cxl list"
> command to view the CXL object info, the process could not terminate properly.

I'd be happy to look at a patch preventing this on the QEMU side if you
send one, but in general there are lots of ways to shoot yourself in the
foot with CXL and PCI device emulation in QEMU, so I'm not going
to rush to solve this specific one.
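
For what it's worth, the immediate workaround on your side should just be
giving each downstream port a distinct port number, e.g. (untested sketch,
adapted from your command line):

  -device cxl-downstream,port=0,bus=us0,id=swport00,chassis=0,slot=5 \
  -device cxl-downstream,port=1,bus=us0,id=swport01,chassis=0,slot=7 \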

Likewise, some hardening in the kernel / userspace probably makes sense, but
this is a non-compliant switch, so the priority of a fix is probably fairly low.
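
In case it helps, here is the rough shape such a check could take at
realize time. This is a sketch only, not actual QEMU code: TYPE_CXL_DSP
and the function names are assumed for illustration, though
pci_for_each_device(), PCIE_PORT(), pci_get_bus() and error_setg() are
real QEMU interfaces:

    /*
     * Sketch only, not actual QEMU code: refuse to realize a CXL
     * downstream switch port whose port number is already taken by
     * a sibling on the same upstream bus.  TYPE_CXL_DSP and the
     * function names here are illustrative.
     */
    typedef struct DSPPortCheck {
        PCIDevice *self;
        uint8_t port;
        bool duplicate;
    } DSPPortCheck;

    static void dsp_check_sibling(PCIBus *bus, PCIDevice *d, void *opaque)
    {
        DSPPortCheck *c = opaque;

        /* Flag any other CXL downstream port claiming the same number. */
        if (d != c->self &&
            object_dynamic_cast(OBJECT(d), TYPE_CXL_DSP) &&
            PCIE_PORT(d)->port == c->port) {
            c->duplicate = true;
        }
    }

    static bool cxl_dsp_port_number_unique(PCIDevice *dev, Error **errp)
    {
        PCIBus *bus = pci_get_bus(dev);
        DSPPortCheck c = { .self = dev, .port = PCIE_PORT(dev)->port };

        pci_for_each_device(bus, pci_bus_num(bus), dsp_check_sibling, &c);
        if (c.duplicate) {
            error_setg(errp, "port %d already in use on this upstream port",
                       c.port);
            return false;
        }
        return true;
    }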

Jonathan

> 
> Then I used strace to trace the process, and found that it is stuck in an infinite loop:
> # strace cxl list
> ......
> clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=1000000}, NULL) = 0
> openat(AT_FDCWD, "/sys/bus/cxl/flush", O_WRONLY|O_CLOEXEC) = 3
> write(3, "1\n\0", 3)                    = 3
> close(3)                                = 0
> access("/run/udev/queue", F_OK)         = 0
> clock_nanosleep(CLOCK_REALTIME, 0, {tv_sec=0, tv_nsec=1000000}, NULL) = 0
> openat(AT_FDCWD, "/sys/bus/cxl/flush", O_WRONLY|O_CLOEXEC) = 3
> write(3, "1\n\0", 3)                    = 3
> close(3)                                = 0
> access("/run/udev/queue", F_OK)         = 0
> ...... (the same five syscalls repeat indefinitely)
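
That trace corresponds to a udev settle loop of roughly the following
shape (reconstructed from the syscalls above -- this is not ndctl's
actual source): each iteration sleeps 1 ms, pokes /sys/bus/cxl/flush,
and checks whether /run/udev/queue has disappeared. With this topology
the queue apparently never drains, so the loop never exits; a bounded
iteration count would at least turn the hang into a timeout error:

    /*
     * Reconstruction of the observed loop, for illustration only --
     * this is not ndctl code.
     */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static int settle(unsigned max_iter)
    {
        for (unsigned i = 0; i < max_iter; i++) {
            struct timespec ts = { .tv_sec = 0, .tv_nsec = 1000000 };
            int fd;

            clock_nanosleep(CLOCK_REALTIME, 0, &ts, NULL);
            fd = open("/sys/bus/cxl/flush", O_WRONLY | O_CLOEXEC);
            if (fd >= 0) {
                write(fd, "1\n", 3);   /* matches write(3, "1\n\0", 3) */
                close(fd);
            }
            /* /run/udev/queue disappears once udev has settled. */
            if (access("/run/udev/queue", F_OK) != 0) {
                return 0;
            }
        }
        fprintf(stderr, "udev queue did not settle\n");
        return -1;
    }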
> 
> [Environment]:
> Linux: v6.10-rc3
> QEMU: v9.0.0
> ndctl: v79
> 
> I know this is caused by incorrect use of the QEMU command line, but I think
> this error should be caught on at least one of the QEMU, OS, or ndctl sides.
> 
> Thanks
> Xingtao


