From: Val Packett <val@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>,
Stefano Stabellini <sstabellini@kernel.org>,
Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Cc: Val Packett <val@invisiblethingslab.com>,
xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: [RFC PATCH] xen: privcmd: fix ioeventfd/ioreq crashing PV domain
Date: Wed, 15 Oct 2025 16:57:03 -0300
Message-ID: <20251015195713.6500-1-val@invisiblethingslab.com>

Starting a virtio backend in a PV domain would panic the kernel in
alloc_ioreq(), which dereferences vma->vm_private_data as a pointer to a
pages array; in a PV domain, that field is left holding the
PRIV_VMA_LOCKED sentinel instead.

Fix this by allocating a pages array in mmap_resource in the PV case,
filling it with struct page pointers converted from the pfn array. This
allows ioreq to work with a backend provided by a PV dom0.

Signed-off-by: Val Packett <val@invisiblethingslab.com>
---
I've been porting xen-vhost-frontend[1] to Qubes OS, which runs on amd64
and (still) uses PV for dom0. The x86 part didn't give me much trouble,
but the first thing I hit was this crash, caused by hosting the backend
in a PV domain: alloc_ioreq was dereferencing the PRIV_VMA_LOCKED
sentinel (the constant '1') and panicking the dom0 kernel.

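For context, the crash site looks roughly like this (paraphrased, not
the exact upstream source; PRIV_VMA_LOCKED is the sentinel privcmd
stores for PV mappings):

    /* privcmd.c: on PV, the mmap paths mark the VMA with a sentinel
     * instead of storing a pages array */
    #define PRIV_VMA_LOCKED ((void *)1)

    /* alloc_ioreq() assumes vm_private_data is a struct page ** */
    pages = vma->vm_private_data;            /* == (void *)1 on PV */
    kioreq->ioreq = page_to_virt(pages[0]);  /* boom: dereferences 0x1 */
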
I figured out that, in the PV case, I can build a pages array in the
expected format from the pfn array that the actual memory mapping is
done from, and with that fix the ioreq part works: the vhost frontend
replies to the probing sequence and the guest recognizes which virtio
device is being provided.

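One note on the indexing: pfns[] holds one entry per Xen page (4 KiB),
while the pages array that alloc_ioreq expects holds one entry per
kernel page, so the conversion steps through pfns[] with a stride of
XEN_PFN_PER_PAGE (= PAGE_SIZE / XEN_PAGE_SIZE), taking the first Xen
PFN of each kernel page:

    /* one struct page per kernel page; pfns[] has XEN_PFN_PER_PAGE
     * entries per kernel page, so consult the first of each group */
    pages[i] = xen_pfn_to_page(pfns[i * XEN_PFN_PER_PAGE]);

On x86 (the only architecture with PV domains) both page sizes are
4 KiB, so the stride is 1 and this is a one-to-one conversion in
practice.
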
I still have another thing to debug: MMIO accesses from the inner
driver (e.g. virtio_rng) don't get through to the vhost provider (the
ioeventfd never gets notified), and manually kicking the eventfd from
the frontend seems to crash... Xen itself?? (no Linux panic on the
console, just a freeze and a quick reboot - I'll try to set up a serial
console next)

But I figured I'd post this as an RFC already, since the other bug may be
unrelated and the ioreq area itself does work now. I'd like to hear some
feedback on this from people who actually know Xen :)

[1]: https://github.com/vireshk/xen-vhost-frontend

Thanks,
~val

---
drivers/xen/privcmd.c | 34 ++++++++++++++++++++++++++--------
1 file changed, 26 insertions(+), 8 deletions(-)
diff --git a/drivers/xen/privcmd.c b/drivers/xen/privcmd.c
index f52a457b302d..c9b4dae7e520 100644
--- a/drivers/xen/privcmd.c
+++ b/drivers/xen/privcmd.c
@@ -834,8 +834,23 @@ static long privcmd_ioctl_mmap_resource(struct file *file,
 				if (rc < 0)
 					break;
 			}
-		} else
+		} else {
+			unsigned int i;
+			unsigned int numpgs = kdata.num / XEN_PFN_PER_PAGE;
+			struct page **pages;
 			rc = 0;
+
+			pages = kvcalloc(numpgs, sizeof(pages[0]), GFP_KERNEL);
+			if (pages == NULL) {
+				rc = -ENOMEM;
+				goto out;
+			}
+
+			for (i = 0; i < numpgs; i++) {
+				pages[i] = xen_pfn_to_page(pfns[i * XEN_PFN_PER_PAGE]);
+			}
+			vma->vm_private_data = pages;
+		}
 	}
 
 out:
@@ -1589,15 +1604,18 @@ static void privcmd_close(struct vm_area_struct *vma)
 	int numgfns = (vma->vm_end - vma->vm_start) >> XEN_PAGE_SHIFT;
 	int rc;
 
-	if (xen_pv_domain() || !numpgs || !pages)
+	if (!numpgs || !pages)
 		return;
 
-	rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
-	if (rc == 0)
-		xen_free_unpopulated_pages(numpgs, pages);
-	else
-		pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
-			numpgs, rc);
+	if (!xen_pv_domain()) {
+		rc = xen_unmap_domain_gfn_range(vma, numgfns, pages);
+		if (rc == 0)
+			xen_free_unpopulated_pages(numpgs, pages);
+		else
+			pr_crit("unable to unmap MFN range: leaking %d pages. rc=%d\n",
+				numpgs, rc);
+	}
+
 	kvfree(pages);
 }
 
--
2.51.0